Mirror of https://github.com/vacp2p/rfc-index.git (synced 2026-01-11 08:38:19 -05:00)
Compare: 21 commits

| SHA1 |
|---|
| 60f8b66139 |
| d1e274075e |
| 2293f7f6d0 |
| 7bc488417a |
| 55a504eea3 |
| 6f800e0481 |
| 089f3b867f |
| 55faa270cd |
| b50fa37f3f |
| 4375c21635 |
| 49b205702a |
| c48c1a57bb |
| 418bfd0183 |
| b6489768ba |
| 26c531eb91 |
| 4e7259d5b7 |
| fa0ccea0ed |
| 51d5ce6eb1 |
| e1aa274475 |
| 481dba51a9 |
| 54917cd0fb |

`.github/workflows/.gitignore` (vendored, 1 line changed)

```diff
@@ -1 +0,0 @@
-.DS_Store
```

`.github/workflows/.markdownlint.json` (vendored, 11 lines changed)

```diff
@@ -1,4 +1,7 @@
-{
-    "MD013": false,
-    "MD024": false
-}
+{
+    "default": true,
+    "MD013": {
+        "tables": false,
+        "code_blocks": false
+    }
+}
```

`.github/workflows/markdown-lint.yml` (vendored, 46 lines changed, `@@ -1,23 +1,23 @@`); the old and new versions render identically:

```yaml
name: markdown-linting

on:
  push:
    branches:
      - '**'
  pull_request:
    branches:
      - '**'

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Markdown Linter
        uses: DavidAnson/markdownlint-cli2-action@v15
        with:
          config: .github/workflows/.markdownlint.json
          globs: '**/*.md'
```

`README.md` (94 lines changed, `@@ -1,47 +1,47 @@`)

# Vac Request For Comments (RFC)

*NOTE*: This repo is WIP. We are currently restructuring the RFC process.

This repository contains specifications from the [Waku](https://waku.org/), [Nomos](https://nomos.tech/),
[Codex](https://codex.storage/), and
[Status](https://status.app/) projects that are part of the [IFT portfolio](https://free.technology/).
[Vac](https://vac.dev) is an
[IFT service](https://free.technology/services) that will manage the RFC
([Request for Comments](https://en.wikipedia.org/wiki/Request_for_Comments))
process within this repository.

## New RFC Process

This repository replaces the previous `rfc.vac.dev` resource.
Each project will maintain initial specifications in separate repositories,
which may be considered **raw** specifications.
All [Vac](https://vac.dev) **raw** specifications and
discussions will live in the Vac subdirectory.
When a specification living in a project's repository has reached
a sufficient level of maturity,
the process of updating its status to **draft** may begin in this repository.
Specifications will adhere to
[1/COSS](./vac/1/coss.md) before obtaining **draft** status.

Implementations should follow specifications as described,
and all contributions will be discussed before the **stable** status is obtained.
The goal of this RFC process is to engage all interested parties and
reach a rough consensus on technical specifications.

## Contributing

Please see [1/COSS](./vac/1/coss.md) for general guidelines and the specification lifecycle.

Feel free to join the [Vac discord](https://discord.gg/Vy54fEWuqC).

Here's the project board used by core contributors and maintainers: [Projects](https://github.com/orgs/vacp2p/projects/5)

## IFT Projects' Raw Specifications

The repository for each project's **raw** specifications:

- [Vac Raw Specifications](./vac/raw)
- [Status Raw Specifications](./status/raw)
- [Waku Raw Specifications](https://github.com/waku-org/specs/tree/master)
- [Codex Raw Specifications](none)
- [Nomos Raw Specifications](https://github.com/logos-co/nomos-specs)

@@ -1,5 +1,5 @@

# Codex RFCs

Specifications related to the Codex decentralised data storage platform.
Visit [Codex specs](https://github.com/codex-storage/codex-spec)
to view the new Codex specifications currently under discussion.

@@ -1,485 +0,0 @@

---
title: CODEX-BLOCK-EXCHANGE
name: Codex Block Exchange Protocol
status: raw
category: Standards Track
tags: codex, block-exchange, p2p, data-distribution
editor: Codex Team
contributors:
---

## Abstract

The Block Exchange (BE) is a core Codex component responsible for
peer-to-peer content distribution across the network.
It manages the sending and receiving of data blocks between nodes,
enabling efficient data sharing and retrieval.
This specification defines both an internal service interface and a
network protocol for referring to and providing data blocks.
Blocks are uniquely identifiable by means of an address and represent
fixed-length chunks of arbitrary data.

## Semantics

The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in
[RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).

### Definitions

| Term | Description |
|------|-------------|
| **Block** | Fixed-length chunk of arbitrary data, uniquely identifiable |
| **Standalone Block** | Self-contained block addressed by its SHA256 hash (CID) |
| **Dataset Block** | Block in an ordered set, addressed by dataset CID + index |
| **Block Address** | Unique identifier for standalone/dataset addressing |
| **WantList** | List of block requests sent by a peer |
| **Block Delivery** | Transmission of block data from one peer to another |
| **Block Presence** | Indicator of whether a peer has a requested block |
| **Merkle Proof** | Proof verifying dataset block position correctness |
| **CID** | Content Identifier - a hash-based identifier for content |
| **Multicodec** | Self-describing format identifier for data encoding |
| **Multihash** | Self-describing hash format |

## Motivation

The Block Exchange module serves as the fundamental layer for content
distribution in the Codex network.
It provides primitives for requesting and delivering blocks of data
between peers, supporting both standalone blocks and blocks that are
part of larger datasets.
The protocol is designed to work over libp2p streams and integrates
with Codex's discovery, storage, and payment systems.

When a peer wishes to obtain a block, it registers the block's unique
address with the Block Exchange, and the Block Exchange is then in
charge of procuring it: finding a peer that has the block, if any, and
then downloading it.
The Block Exchange will also accept requests from peers that
want blocks the node has, and provide them.

**Discovery Separation:** Throughout this specification we assume that
if a peer wants a block, then the peer has the means to locate and
connect to peers which either: (1) have the block; or (2) are
reasonably expected to obtain the block in the future.
In practical implementations, the Block Exchange will typically require
the support of an underlying discovery service, e.g., the Codex DHT,
to look up such peers, but this is beyond the scope of this document.

The protocol supports two distinct block types to accommodate different
use cases: standalone blocks for independent data chunks and dataset
blocks for ordered collections of data that form larger structures.

## Block Format

The Block Exchange protocol supports two types of blocks:

### Standalone Blocks

Standalone blocks are self-contained pieces of data addressed by their
SHA256 content identifier (CID).
These blocks are independent and do not reference any larger structure.

**Properties:**

- Addressed by content hash (SHA256)
- Default size: 64 KiB
- Self-contained and independently verifiable

### Dataset Blocks

Dataset blocks are part of ordered sets and are addressed by a
`(datasetCID, index)` tuple.
The datasetCID refers to the Merkle tree root of the entire dataset,
and the index indicates the block's position within that dataset.

Formally, we can define a block as a tuple consisting of raw data and
its content identifier: `(data: seq[byte], cid: Cid)`, where standalone
blocks are addressed by `cid`, and dataset blocks can be addressed
either by `cid` or by a `(datasetCID, index)` tuple.

**Properties:**

- Addressed by a `(treeCID, index)` tuple
- Part of a Merkle tree structure
- Require a Merkle proof for verification
- Must be uniformly sized within a dataset
- Final blocks MUST be zero-padded if incomplete

### Block Specifications

All blocks in the Codex Block Exchange protocol adhere to the
following specifications:

| Property | Value | Description |
|----------|-------|-------------|
| Default Block Size | 64 KiB | Standard size for data blocks |
| Multicodec | `codex-block` (0xCD02) | Format identifier |
| Multihash | `sha2-256` (0x12) | Hash algorithm for addressing |
| Padding Requirement | Zero-padding | Incomplete final blocks are padded |

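The fixed block size and zero-padding rule above can be illustrated with a short sketch; the helper names (`split_dataset`, `block_digest`) are illustrative, not part of the protocol:

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # default block size: 64 KiB

def split_dataset(data: bytes) -> list[bytes]:
    """Split a dataset into fixed-size blocks, zero-padding the final block."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    if blocks and len(blocks[-1]) < BLOCK_SIZE:
        # Final blocks MUST be zero-padded if incomplete.
        blocks[-1] = blocks[-1].ljust(BLOCK_SIZE, b"\x00")
    return blocks

def block_digest(block: bytes) -> bytes:
    """SHA256 digest used for content addressing (the CID wraps this multihash)."""
    return hashlib.sha256(block).digest()

blocks = split_dataset(b"\x01" * (BLOCK_SIZE + 100))
assert len(blocks) == 2
assert all(len(b) == BLOCK_SIZE for b in blocks)
```

Uniform block sizes are what make the per-dataset Merkle tree construction straightforward: every leaf hashes exactly `BLOCK_SIZE` bytes.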
## Service Interface

The Block Exchange module exposes two core primitives for
block management:

### `requestBlock`

```python
async def requestBlock(address: BlockAddress) -> Block
```

Registers a block address for retrieval and returns the block data
when available.
This function can be awaited by the caller until the block is retrieved
from the network or local storage.

**Parameters:**

- `address`: BlockAddress - The unique address identifying the block
  to retrieve

**Returns:**

- `Block` - The retrieved block data

### `cancelRequest`

```python
async def cancelRequest(address: BlockAddress) -> bool
```

Cancels a previously registered block request.

**Parameters:**

- `address`: BlockAddress - The address of the block request to cancel

**Returns:**

- `bool` - True if the cancellation was successful, False otherwise

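A hedged usage sketch of the two primitives, assuming an `asyncio`-style implementation; `bx` and `fetch_with_timeout` are illustrative names, not part of the specified interface:

```python
import asyncio

async def fetch_with_timeout(bx, address, timeout: float = 30.0):
    """Request a block, cancelling the registration if it doesn't arrive in time.

    `bx` is assumed to expose the requestBlock/cancelRequest primitives
    defined above.
    """
    try:
        return await asyncio.wait_for(bx.requestBlock(address), timeout)
    except asyncio.TimeoutError:
        # Withdraw the want so peers stop seeing it in our WantList.
        await bx.cancelRequest(address)
        return None
```

Pairing every abandoned `requestBlock` with a `cancelRequest` keeps the node's advertised WantList consistent with what it still wants.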
## Dependencies

The Block Exchange module depends on and interacts with several other
Codex components:

| Component | Purpose |
|-----------|---------|
| **Discovery Module** | DHT-based peer discovery for locating nodes |
| **Local Store (Repo)** | Persistent block storage for local blocks |
| **Advertiser** | Announces block availability to the network |
| **Network Layer** | libp2p connections and stream management |

## Protocol Specification

### Protocol Identifier

The Block Exchange protocol uses the following libp2p protocol
identifier:

```text
/codex/blockexc/1.0.0
```

### Connection Model

The protocol operates over libp2p streams.
When a node wants to communicate with a peer:

1. The initiating node dials the peer using the protocol identifier
2. A bidirectional stream is established
3. Both sides can send and receive messages on this stream
4. Messages are encoded using Protocol Buffers
5. The stream remains open for the duration of the exchange session
6. Peers track active connections in a peer context store

The protocol handles peer lifecycle events:

- **Peer Joined**: When a peer connects, it is added to the active
  peer set
- **Peer Departed**: When a peer disconnects gracefully, its context
  is cleaned up
- **Peer Dropped**: When a peer connection fails, it is removed from
  the active set

### Message Format

All messages use Protocol Buffers encoding for serialization.
The main message structure supports multiple operation types in a
single message.

#### Main Message Structure

```protobuf
message Message {
  Wantlist wantlist = 1;
  repeated BlockDelivery payload = 3;
  repeated BlockPresence blockPresences = 4;
  int32 pendingBytes = 5;
  AccountMessage account = 6;
  StateChannelUpdate payment = 7;
}
```

**Fields:**

- `wantlist`: Block requests from the sender
- `payload`: Block deliveries (actual block data)
- `blockPresences`: Availability indicators for requested blocks
- `pendingBytes`: Number of bytes pending delivery
- `account`: Account information for micropayments
- `payment`: State channel update for payment processing

#### Block Address

The BlockAddress structure supports both standalone and dataset
block addressing:

```protobuf
message BlockAddress {
  bool leaf = 1;
  bytes treeCid = 2;  // Present when leaf = true
  uint64 index = 3;   // Present when leaf = true
  bytes cid = 4;      // Present when leaf = false
}
```

**Fields:**

- `leaf`: Indicates whether this is a dataset block (true) or a
  standalone block (false)
- `treeCid`: Merkle tree root CID (present when `leaf = true`)
- `index`: Position of the block within the dataset (present when
  `leaf = true`)
- `cid`: Content identifier of the block (present when `leaf = false`)

**Addressing Modes:**

- **Standalone Block** (`leaf = false`): Direct CID reference to a
  standalone content block
- **Dataset Block** (`leaf = true`): Reference to a block within an
  ordered set, identified by a Merkle tree root and an index.
  The Merkle root may refer to either a regular dataset or a dataset
  that has undergone erasure coding

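The two addressing modes can be modeled in the host language; this is an illustrative in-memory sketch, not the wire format (which is the protobuf above):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class BlockAddress:
    """Address of a block: standalone (cid) or dataset (treeCid, index)."""
    leaf: bool
    cid: Optional[bytes] = None       # standalone: leaf = False
    treeCid: Optional[bytes] = None   # dataset: leaf = True
    index: Optional[int] = None      # dataset: leaf = True

    @staticmethod
    def standalone(cid: bytes) -> "BlockAddress":
        return BlockAddress(leaf=False, cid=cid)

    @staticmethod
    def dataset(treeCid: bytes, index: int) -> "BlockAddress":
        return BlockAddress(leaf=True, treeCid=treeCid, index=index)

a = BlockAddress.standalone(b"\x12\x20" + b"\x00" * 32)
b = BlockAddress.dataset(b"\x12\x20" + b"\x11" * 32, 7)
assert not a.leaf and b.leaf and b.index == 7
```

A frozen dataclass makes addresses hashable, so they can key the local want-bookkeeping maps described later.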
#### WantList

The WantList communicates which blocks a peer desires to receive:

```protobuf
message Wantlist {
  enum WantType {
    wantBlock = 0;
    wantHave = 1;
  }

  message Entry {
    BlockAddress address = 1;
    int32 priority = 2;
    bool cancel = 3;
    WantType wantType = 4;
    bool sendDontHave = 5;
  }

  repeated Entry entries = 1;
  bool full = 2;
}
```

**WantType Values:**

- `wantBlock (0)`: Request full block delivery
- `wantHave (1)`: Request availability information only (presence check)

**Entry Fields:**

- `address`: The block being requested
- `priority`: Request priority (currently always 0)
- `cancel`: If true, cancels a previous want for this block
- `wantType`: Specifies whether the full block or only presence is desired
  - `wantHave (1)`: Only check if the peer has the block
  - `wantBlock (0)`: Request full block data
- `sendDontHave`: If true, the peer should respond even if it doesn't have
  the block

**WantList Fields:**

- `entries`: List of block requests
- `full`: If true, replaces all previous entries; if false, a delta update

**Delta Updates:**

WantLists support delta updates for efficiency.
When `full = false`, entries represent additions or modifications to
the existing WantList rather than a complete replacement.

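The semantics of the `full` flag can be sketched as follows; this is illustrative per-peer bookkeeping on the receiving side, not part of the wire protocol:

```python
def apply_wantlist(current: dict, entries: list[dict], full: bool) -> dict:
    """Apply a received Wantlist to the per-peer want bookkeeping.

    `current` maps a block address to its entry. With full=True the
    previous state is replaced outright; with full=False entries are
    merged as a delta, where cancel=True removes a previous want.
    """
    wants = {} if full else dict(current)
    for e in entries:
        if e.get("cancel"):
            wants.pop(e["address"], None)   # withdraw a previous want
        else:
            wants[e["address"]] = e          # add or update a want
    return wants

state = apply_wantlist({}, [{"address": "a", "wantType": 0}], full=True)
state = apply_wantlist(state, [{"address": "a", "cancel": True},
                               {"address": "b", "wantType": 1}], full=False)
assert list(state) == ["b"]
```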
#### Block Delivery

Block deliveries contain the actual block data along with verification
information:

```protobuf
message BlockDelivery {
  bytes cid = 1;
  bytes data = 2;
  BlockAddress address = 3;
  bytes proof = 4;
}
```

**Fields:**

- `cid`: Content identifier of the block
- `data`: Raw block data (up to 100 MiB)
- `address`: The BlockAddress identifying this block
- `proof`: Merkle proof (CodexProof) verifying block correctness
  (required for dataset blocks)

**Merkle Proof Verification:**

When delivering dataset blocks (`address.leaf = true`):

- The delivery MUST include a Merkle proof (CodexProof)
- The proof verifies that the block at the given index is correctly
  part of the Merkle tree identified by the tree CID
- This applies to all datasets, irrespective of whether they have been
  erasure-coded or not
- Recipients MUST verify the proof before accepting the block
- Invalid proofs result in block rejection

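The verification steps above can be sketched generically. The actual CodexProof encoding is not specified here, so this sketch assumes a plain binary SHA256 Merkle tree whose proof is the list of sibling hashes from the leaf up to the root:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf: bytes, index: int,
                        proof: list[bytes], root: bytes) -> bool:
    """Check that `leaf` sits at `index` in the tree with the given `root`.

    `proof` lists sibling hashes from the leaf level up to the root.
    """
    node = sha256(leaf)
    for sibling in proof:
        if index % 2 == 0:
            node = sha256(node + sibling)   # we are the left child
        else:
            node = sha256(sibling + node)   # we are the right child
        index //= 2
    return node == root

# Two-leaf tree: root = H(H(a) + H(b))
a, b = b"block-a", b"block-b"
root = sha256(sha256(a) + sha256(b))
assert verify_merkle_proof(a, 0, [sha256(b)], root)
assert not verify_merkle_proof(b"tampered", 0, [sha256(b)], root)
```

A recipient that runs this check before storing the delivery satisfies the "verify before accepting" requirement: a tampered block or a wrong index fails to reproduce the tree CID's root.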
#### Block Presence

Block presence messages indicate whether a peer has or does not have a
requested block:

```protobuf
enum BlockPresenceType {
  presenceHave = 0;
  presenceDontHave = 1;
}

message BlockPresence {
  BlockAddress address = 1;
  BlockPresenceType type = 2;
  bytes price = 3;
}
```

**Fields:**

- `address`: The block address being referenced
- `type`: Whether the peer has the block or not
- `price`: Price of the block (UInt256 format)

#### Payment Messages

Payment-related messages support micropayments using Nitro state channels.

**Account Message:**

```protobuf
message AccountMessage {
  bytes address = 1;  // Ethereum address to which payments should be made
}
```

**Fields:**

- `address`: Ethereum address for receiving payments

**State Channel Update:**

```protobuf
message StateChannelUpdate {
  bytes update = 1;  // Signed Nitro state, serialized as JSON
}
```

**Fields:**

- `update`: Nitro state channel update containing payment information

## Security Considerations

### Block Verification

- All dataset blocks MUST include and verify Merkle proofs before acceptance
- Standalone blocks MUST verify that the CID matches the SHA256 hash of the data
- Peers SHOULD reject blocks that fail verification immediately

### DoS Protection

- Implementations SHOULD limit the number of concurrent block requests per peer
- Implementations SHOULD implement rate limiting for WantList updates
- Large WantLists MAY be rejected to prevent resource exhaustion

### Data Integrity

- All blocks MUST be validated before being stored or forwarded
- Zero-padding in dataset blocks MUST be verified to prevent data corruption
- Block sizes MUST be validated against protocol limits

### Privacy Considerations

- Block requests reveal information about what data a peer is seeking
- Implementations MAY implement request obfuscation strategies
- Presence information can leak storage capacity details

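The standalone-block check above reduces to comparing the SHA256 digest of the data against the digest carried in the CID. This sketch checks only the sha2-256 multihash portion (code 0x12, length 0x20, then the digest); a real implementation would decode the full CID:

```python
import hashlib

def verify_standalone_block(data: bytes, multihash: bytes) -> bool:
    """Verify a standalone block against a sha2-256 multihash.

    Expected layout: 0x12 (sha2-256), 0x20 (32-byte digest), digest.
    """
    if len(multihash) != 34 or multihash[0] != 0x12 or multihash[1] != 0x20:
        return False  # not a sha2-256 multihash
    return hashlib.sha256(data).digest() == multihash[2:]

data = b"some block payload"
mh = b"\x12\x20" + hashlib.sha256(data).digest()
assert verify_standalone_block(data, mh)
assert not verify_standalone_block(b"other", mh)
```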
## Rationale

### Design Decisions

**Two-Tier Block Addressing:**
The protocol supports both standalone and dataset blocks to accommodate
different use cases.
Standalone blocks are simpler and don't require Merkle proofs, while
dataset blocks enable efficient verification of large datasets without
requiring the entire dataset.

**WantList Delta Updates:**
Supporting delta updates reduces bandwidth consumption when peers only
need to modify a small portion of their wants, which is common in
long-lived connections.

**Separate Presence Messages:**
Decoupling presence information from block delivery allows peers to
quickly assess availability without waiting for full block transfers.

**Fixed Block Size:**
The 64 KiB default block size balances efficient network transmission
with manageable memory overhead.

**Zero-Padding Requirement:**
Requiring zero-padding for incomplete dataset blocks ensures uniform
block sizes within datasets, simplifying Merkle tree construction and
verification.

**Protocol Buffers:**
Using Protocol Buffers provides efficient serialization, forward
compatibility, and wide language support.

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

### Normative

- [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt) - Key words for use
  in RFCs to Indicate Requirement Levels
- **libp2p**: <https://libp2p.io>
- **Protocol Buffers**: <https://protobuf.dev>
- **Multihash**: <https://multiformats.io/multihash/>
- **Multicodec**: <https://github.com/multiformats/multicodec>

### Informative

- **Codex Documentation**: <https://docs.codex.storage>
- **Codex Block Exchange Module Spec**:
  <https://github.com/codex-storage/codex-docs-obsidian/blob/main/10%20Notes/Specs/Block%20Exchange%20Module%20Spec.md>
- **Merkle Trees**: <https://en.wikipedia.org/wiki/Merkle_tree>
- **Content Addressing**:
  <https://en.wikipedia.org/wiki/Content-addressable_storage>

@@ -1,802 +0,0 @@

---
slug: codex-marketplace
title: CODEX-MARKETPLACE
name: Codex Storage Marketplace
status: raw
category: Standards Track
tags: codex, storage, marketplace, smart-contract
editor: Codex Team and Dmitriy Ryajov <dryajov@status.im>
contributors:
- Mark Spanbroek <mark@codex.storage>
- Adam Uhlíř <adam@codex.storage>
- Eric Mastro <eric@codex.storage>
- Jimmy Debe <jimmy@status.im>
- Filip Dimitrijevic <filip@status.im>
---

## Abstract

The Codex Marketplace and its interactions are defined by a smart contract deployed on an EVM-compatible blockchain. This specification describes these interactions for the various roles within the network.

The document is intended for implementors of Codex nodes.

## Semantics

The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).

### Definitions

| Terminology | Description |
|---|---|
| Storage Provider (SP) | A node in the Codex network that provides storage services to the marketplace. |
| Validator | A node that assists in identifying missing storage proofs. |
| Client | A node that interacts with other nodes in the Codex network to store, locate, and retrieve data. |
| Storage Request or Request | A request created by a client node to persist data on the Codex network. |
| Slot or Storage Slot | A space allocated by the storage request to store a piece of the request's dataset. |
| Smart Contract | A smart contract implementing the marketplace functionality. |
| Token | The ERC20-based token used within the Codex network. |

## Motivation

The Codex network aims to create a peer-to-peer storage engine with robust data durability, data persistence guarantees, and a comprehensive incentive structure.

The marketplace is a critical component of the Codex network, serving as a platform where all involved parties interact to ensure data persistence. It provides mechanisms to enforce agreements and facilitate data repair when SPs fail to fulfill their duties.

Implemented as a smart contract on an EVM-compatible blockchain, the marketplace enables various scenarios where nodes assume one or more roles to maintain a reliable persistence layer for users. This specification details these interactions.

The marketplace contract manages storage requests, maintains the state of allocated storage slots, and orchestrates SP rewards, collaterals, and storage proofs.

A node that wishes to participate in the Codex persistence layer MUST implement one or more roles described in this document.

### Roles

A node can assume one or more of three main roles in the network: client, SP, and validator.

A client is a potentially short-lived node in the network with the purpose of persisting its data in the Codex persistence layer.

An SP is a long-lived node providing storage for clients in exchange for profit. To ensure a reliable, robust service for clients, SPs are required to periodically provide proofs that they are persisting the data.

A validator ensures that SPs have submitted valid proofs in each period where the smart contract required a proof to be submitted for slots filled by the SP.

---

## Part I: Protocol Specification

This part defines the **normative requirements** for the Codex Marketplace protocol. All implementations MUST comply with these requirements to participate in the Codex network. The protocol is defined by smart contract interactions on an EVM-compatible blockchain.

## Storage Request Lifecycle

The diagram below depicts the lifecycle of a storage request:

```text
                  ┌───────────┐
                  │ Cancelled │
                  └───────────┘
                        ▲
                        │ Not all
                        │ Slots filled
                        │
┌───────────┐    ┌──────┴─────────────┐           ┌─────────┐
│ Submitted ├───►│ Slots Being Filled ├──────────►│ Started │
└───────────┘    └────────────────────┘ All Slots └────┬────┘
                                        Filled         │
                                                       │
                                 ┌─────────────────────┘
                         Proving ▼
┌────────────────────────────────────────────────────────────┐
│                                                            │
│     Proof submitted                                        │
│   ┌─────────────────────────► All good                     │
│   │                                                        │
│ Proof required                                             │
│   │                                                        │
│   │ Proof missed                                           │
│   └─────────────────────────► After some time slashed      │
│                               eventually Slot freed        │
│                                                            │
└────────────┬─┬─────────────────────────────────────────────┘
             │ │                                    ▲
             │ │                                    │
             │ │ SP kicked out and Slot freed  ┌────┴───────────┐
    All good │ ├──────────────────────────────►│ Repair process │
Time ran out │ │                               └────────────────┘
             │ │
             │ │ Too many Slots freed          ┌────────┐
             │ └──────────────────────────────►│ Failed │
             ▼                                 └────────┘
       ┌──────────┐
       │ Finished │
       └──────────┘
```

## Client Role

A node implementing the client role mediates the persistence of data within the Codex network.

A client has two primary responsibilities:

- Requesting storage from the network by sending a storage request to the smart contract.
- Withdrawing funds from the storage requests previously created by the client.

### Creating Storage Requests

When a user prompts the client node to create a storage request, the client node SHOULD receive the input parameters for the storage request from the user.

To create a request to persist a dataset on the Codex network, client nodes MUST split the dataset into data chunks, $(c_1, c_2, c_3, \ldots, c_{n})$. Using the erasure coding method and the provided input parameters, the data chunks are encoded and distributed over a number of slots. The applied erasure coding method MUST use the [Reed-Solomon algorithm](https://hackmd.io/FB58eZQoTNm-dnhu0Y1XnA). The final slot roots and other metadata MUST be placed into a `Manifest` (TODO: Manifest RFC). The CID for the `Manifest` MUST then be used as the `cid` for the stored dataset.

After the dataset is prepared, a client node MUST call the smart contract function `requestStorage(request)`, providing the desired request parameters in the `request` parameter. The `request` parameter is of type `Request`:
|
||||
|
||||
```solidity
|
||||
struct Request {
|
||||
address client;
|
||||
Ask ask;
|
||||
Content content;
|
||||
uint64 expiry;
|
||||
bytes32 nonce;
|
||||
}
|
||||
|
||||
struct Ask {
|
||||
uint256 proofProbability;
|
||||
uint256 pricePerBytePerSecond;
|
||||
uint256 collateralPerByte;
|
||||
uint64 slots;
|
||||
uint64 slotSize;
|
||||
uint64 duration;
|
||||
uint64 maxSlotLoss;
|
||||
}
|
||||
|
||||
struct Content {
|
||||
bytes cid;
|
||||
bytes32 merkleRoot;
|
||||
}
|
||||
```
|
||||
|
||||
The table below describes the attributes of `Request` and its associated types:

| attribute | type | description |
|-----------|------|-------------|
| `client` | `address` | The Codex node requesting storage. |
| `ask` | `Ask` | Parameters of the request. |
| `content` | `Content` | The dataset that will be hosted with the storage request. |
| `expiry` | `uint64` | Timeout in seconds within which all slots have to be filled; otherwise the request is cancelled. The final deadline timestamp is calculated at the moment the transaction is mined. |
| `nonce` | `bytes32` | Random value to differentiate this request from other requests with the same parameters. It SHOULD be a random byte array. |
| `pricePerBytePerSecond` | `uint256` | Amount of tokens awarded to SPs for finishing the storage request. It MUST be an amount of tokens offered per slot per second per byte. The Ethereum address that submits the `requestStorage()` transaction MUST have [approval](https://docs.openzeppelin.com/contracts/2.x/api/token/erc20#IERC20-approve-address-uint256-) for the transfer of at least the full reward (`pricePerBytePerSecond * duration * slots * slotSize`) in tokens. |
| `collateralPerByte` | `uint256` | The amount of tokens per byte of a slot's size that SPs submit when they fill slots. Collateral is then slashed or forfeited if SPs fail to provide the service requested by the storage request (see the [Slashing](#slashing) section). |
| `proofProbability` | `uint256` | Determines the average frequency with which a proof is required within a period: $\frac{1}{proofProbability}$. SPs are required to provide proofs of storage to the marketplace contract when challenged. To prevent hosts from only coming online when proofs are required, the frequency at which proofs are requested from SPs is stochastic and is influenced by the `proofProbability` parameter. |
| `duration` | `uint64` | Total duration of the storage request in seconds. It MUST NOT exceed the limit specified in the configuration `config.requestDurationLimit`. |
| `slots` | `uint64` | The number of requested slots. All slots have the same size. |
| `slotSize` | `uint64` | Amount of storage per slot in bytes. |
| `maxSlotLoss` | `uint64` | Maximum number of slots that can be lost before the data is considered lost. |
| `cid` | `bytes` | An identifier used to locate the Manifest representing the dataset. It MUST be a [CIDv1](https://github.com/multiformats/cid#cidv1) with a SHA-256 [multihash](https://github.com/multiformats/multihash), and the data it represents SHOULD be discoverable in the network; otherwise the request will eventually be cancelled. |
| `merkleRoot` | `bytes32` | Merkle root of the dataset, used to verify storage proofs. |

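The token amounts a client must approve before calling `requestStorage()` follow directly from the table above. The Python sketch below (illustrative only; the function names are not part of the contract API) shows the arithmetic for the full reward and the per-slot collateral:

```python
def total_reward(price_per_byte_per_second: int, duration: int,
                 slots: int, slot_size: int) -> int:
    """Token approval a client needs before requestStorage():
    pricePerBytePerSecond * duration * slots * slotSize."""
    return price_per_byte_per_second * duration * slots * slot_size

def slot_collateral(collateral_per_byte: int, slot_size: int) -> int:
    """Collateral an SP locks when filling one slot."""
    return collateral_per_byte * slot_size

# Example: 2 slots of 1 GiB each, hosted for 1 hour,
# at 1 token per byte per second
reward = total_reward(1, 3600, 2, 2**30)
collateral = slot_collateral(2, 2**30)
```
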
#### Renewal of Storage Requests

Note that the marketplace does not support extending requests. If a user wants to extend the duration of a request, a new request with the same CID MUST be [created](#creating-storage-requests) **before the original request completes**.

This ensures that the data continues to persist in the network at the time when the new (or existing) SPs need to retrieve the complete dataset to fill the slots of the new request.

### Monitoring and State Management

Client nodes MUST implement the following smart contract interactions for monitoring and state management:

- **getRequest(requestId)**: Retrieve the full `StorageRequest` data from the marketplace. This function is used for recovery and state verification after restarts or failures.
- **requestState(requestId)**: Query the current state of a storage request. Used for monitoring request progress and determining the appropriate client actions.
- **requestExpiresAt(requestId)**: Query when the request will expire if not fulfilled.
- **getRequestEnd(requestId)**: Query when a fulfilled request will end (used to determine when to call `freeSlot` or `withdrawFunds`).

Client nodes MUST subscribe to the following marketplace events:

- **RequestFulfilled(requestId)**: Emitted when a storage request has enough filled slots to start. Clients monitor this event to determine when their request becomes active and transitions from the submission phase to the active phase.
- **RequestFailed(requestId)**: Emitted when a storage request fails due to proof failures or other reasons. Clients observe this event to detect failed requests and initiate fund withdrawal.

### Withdrawing Funds

The client node MUST monitor the status of the requests it created. When a storage request enters the `Cancelled`, `Failed`, or `Finished` state, the client node MUST initiate the withdrawal of the remaining or refunded funds from the smart contract using the `withdrawFunds(requestId)` function.

Request states are determined as follows:

- The request is considered `Cancelled` if no `RequestFulfilled(requestId)` event is observed before the deadline returned by the `requestExpiresAt(requestId)` function.
- The request is considered `Failed` when the `RequestFailed(requestId)` event is observed.
- The request is considered `Finished` after the time returned by the `getRequestEnd(requestId)` function has elapsed.

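The state determination above can be sketched as a pure function over the events the client has observed and the timestamps returned by `requestExpiresAt` and `getRequestEnd`. This is an illustrative helper, not a contract call, and the intermediate state names (`Submitted`, `Started`) are assumptions for the sketch:

```python
def request_state(now: int, expires_at: int, request_end: int,
                  fulfilled_seen: bool, failed_seen: bool) -> str:
    """Derive a request's state from observed events and deadlines."""
    if failed_seen:
        return "Failed"
    if not fulfilled_seen:
        # No RequestFulfilled before the expiry deadline => cancelled
        return "Cancelled" if now >= expires_at else "Submitted"
    # Fulfilled: finished once the request end time has elapsed
    return "Finished" if now >= request_end else "Started"
```
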
## Storage Provider Role

A Codex node acting as an SP persists data across the network by hosting slots requested by clients in their storage requests.

The following tasks need to be considered when hosting a slot:

- Filling a slot
- Proving
- Repairing a slot
- Collecting the request reward and collateral

### Filling Slots

When a new request is created, the `StorageRequested(requestId, ask, expiry)` event is emitted with the following properties:

- `requestId` - the ID of the request.
- `ask` - the specification of the request parameters. For details, see the definition of the `Request` type in the [Creating Storage Requests](#creating-storage-requests) section above.
- `expiry` - a Unix timestamp specifying when the request will be cancelled if all slots are not filled by then.

It is then up to the SP node to decide, based on the emitted parameters and the node operator's configuration, whether it wants to participate in the request and attempt to fill its slot(s) (note that one SP can fill more than one slot). If the SP node decides to ignore the request, no further action is required. However, if the SP decides to fill a slot, it MUST follow the remaining steps described below.

The node acting as an SP MUST decide which slot, specified by the slot index, it wants to fill. The SP MAY attempt to fill more than one slot. To fill a slot, the SP MUST first reserve the slot in the smart contract using `reserveSlot(requestId, slotIndex)`. If reservations for this slot are full, or if the SP has already reserved the slot, the transaction will revert. If the reservation was unsuccessful, the SP is not allowed to fill the slot. If the reservation was successful, the node MUST then download the slot data using the CID of the manifest (**TODO: Manifest RFC**) and the slot index. The CID is specified in `request.content.cid`, which can be retrieved from the smart contract using `getRequest(requestId)`. Then, the node MUST generate a proof over the downloaded data (**TODO: Proving RFC**).

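The reserve → download → prove → fill sequence can be sketched as follows. The `market`, `download_slot`, and `generate_proof` objects are hypothetical stand-ins for the contract binding and the node's storage and proving subsystems; they are not defined by this specification:

```python
def try_fill_slot(market, request_id, slot_index,
                  download_slot, generate_proof) -> bool:
    """Attempt to fill one slot; returns True on success (sketch)."""
    # 1. Reserve the slot; the real contract call reverts if
    #    reservations are full or already held by this SP.
    if not market.reserve_slot(request_id, slot_index):
        return False
    # 2. Download the slot data using the manifest CID from the request
    request = market.get_request(request_id)
    data = download_slot(request["content"]["cid"], slot_index)
    # 3. Generate a proof over the downloaded data
    proof = generate_proof(data)
    # 4. Fill the slot on-chain (requires prior collateral approval)
    return market.fill_slot(request_id, slot_index, proof)
```
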
When the proof is ready, the SP MUST call `fillSlot()` on the smart contract with the following REQUIRED parameters:

- `requestId` - the ID of the request.
- `slotIndex` - the slot index that the node wants to fill.
- `proof` - the `Groth16Proof` proof structure, generated over the slot data.

The Ethereum address of the SP node from which the transaction originates MUST have [approval](https://docs.openzeppelin.com/contracts/2.x/api/token/erc20#IERC20-approve-address-uint256-) for the transfer of at least the amount of tokens required as collateral for the slot (`collateralPerByte * slotSize`).

If the proof delivered by the SP is invalid or the slot was already filled by another SP, the transaction will revert. Otherwise, a `SlotFilled(requestId, slotIndex)` event is emitted. If the transaction is successful, the SP SHOULD transition into the **proving** state, where it will need to submit proof of data possession when challenged by the smart contract.

Note that if the SP node observes a `SlotFilled` event for the slot it is currently downloading the dataset for or generating the proof for, the slot has been filled by another node in the meantime. In response, the SP SHOULD stop its current operation and attempt to fill a different, unfilled slot.

### Proving

Once an SP fills a slot, it MUST submit proofs to the marketplace contract when a challenge is issued by the contract. SPs SHOULD detect that a proof is required for the current period using the `isProofRequired(slotId)` function, or that it will be required using the `willProofBeRequired(slotId)` function in the case that the [proving clock pointer is in downtime](https://github.com/codex-storage/codex-research/blob/41c4b4409d2092d0a5475aca0f28995034e58d14/design/storage-proof-timing.md).

Once an SP knows it has to provide a proof, it MUST get the proof challenge using `getChallenge(slotId)`, which MUST then be incorporated into the proof generation as described in the Proving RFC (**TODO: Proving RFC**).

When the proof is generated, it MUST be submitted by calling the `submitProof(slotId, proof)` smart contract function.

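One proving-period iteration can be sketched as below. The `market` object and `generate_proof` callback are hypothetical stand-ins for the contract binding and the proving subsystem; only the contract function names (`isProofRequired`, `getChallenge`, `submitProof`) come from the protocol:

```python
def proving_tick(market, slot_id, generate_proof):
    """One per-period proving check for a filled slot (sketch)."""
    if not market.is_proof_required(slot_id):
        return None  # no challenge this period
    # Fetch the challenge and incorporate it into proof generation
    challenge = market.get_challenge(slot_id)
    proof = generate_proof(challenge)
    # Submit the proof on-chain
    market.submit_proof(slot_id, proof)
    return proof
```
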
#### Slashing

A slashing scheme orchestrated by the smart contract incentivizes correct behavior and proper proof submissions by SPs. This scheme is configured at the smart contract level and applies uniformly to all participants in the network. The configuration of the slashing scheme can be obtained via the `configuration()` contract call.

Slashing works as follows:

- When an SP misses a proof and a validator triggers detection of this event using the `markProofAsMissing()` call, the SP is slashed by `config.collateral.slashPercentage` **of the originally required collateral** (hence the slashing amount is always the same for a given request).
- If the number of slashes exceeds `config.collateral.maxNumberOfSlashes`, the slot is freed, the remaining collateral is burned, and the slot is offered to other nodes for repair. The smart contract also emits the `SlotFreed(requestId, slotIndex)` event.

If, at any time, the number of freed slots exceeds the value specified by the `request.ask.maxSlotLoss` parameter, the dataset is considered lost, and the request is deemed _failed_. The collateral of all SPs that hosted the slots associated with the storage request is burned, and the `RequestFailed(requestId)` event is emitted.

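The slashing and failure thresholds above reduce to simple arithmetic, sketched here in Python (function names are illustrative; integer percentage division is an assumption about how the contract rounds):

```python
def slash_amount(original_collateral: int, slash_percentage: int) -> int:
    """Each miss slashes a fixed fraction of the ORIGINAL collateral,
    so the amount is identical for every miss within a request."""
    return original_collateral * slash_percentage // 100

def slot_freed(num_slashes: int, max_number_of_slashes: int) -> bool:
    """The slot is freed once the slash count exceeds the maximum."""
    return num_slashes > max_number_of_slashes

def request_failed(freed_slots: int, max_slot_loss: int) -> bool:
    """The request fails once freed slots exceed maxSlotLoss."""
    return freed_slots > max_slot_loss
```
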
### Repair

When a slot is freed due to too many missed proofs, which SHOULD be detected by listening to the `SlotFreed(requestId, slotIndex)` event, an SP node can decide whether to participate in repairing the slot. Similar to filling a slot, the node SHOULD consider the operator's configuration when making this decision. The SP that originally hosted the slot but failed to comply with the proving requirements MAY also participate in the repair. However, by refilling the slot, the SP **will not** recover its original collateral and must submit new collateral using the `fillSlot()` call.

The repair process is similar to filling slots. If the original slot dataset is no longer present in the network, the SP MAY use erasure coding to reconstruct the dataset. Reconstructing the original slot dataset requires retrieving other pieces of the dataset stored in other slots belonging to the request. For this reason, the node that successfully repairs a slot is entitled to an additional reward. (**TODO: Implementation**)

The repair process proceeds as follows:

1. The SP observes the `SlotFreed` event and decides to repair the slot.
2. The SP MUST reserve the slot with the `reserveSlot(requestId, slotIndex)` call. For more information, see the [Filling Slots](#filling-slots) section.
3. The SP MUST download the chunks of data required to reconstruct the freed slot's data. The node MUST use the [Reed-Solomon algorithm](https://hackmd.io/FB58eZQoTNm-dnhu0Y1XnA) to reconstruct the missing data.
4. The SP MUST generate a proof over the reconstructed data.
5. The SP MUST call the `fillSlot()` smart contract function with the same parameters and collateral allowance as described in the [Filling Slots](#filling-slots) section.

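The protocol mandates Reed-Solomon for reconstruction. As a toy illustration of why a lost slot can be rebuilt from the surviving ones, the sketch below uses a single XOR parity slot, which is a trivial one-erasure code and emphatically **not** the real algorithm:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(slots: list[bytes]) -> bytes:
    """Parity slot: XOR of all data slots (tolerates one erasure)."""
    parity = bytes(len(slots[0]))
    for s in slots:
        parity = xor_bytes(parity, s)
    return parity

def reconstruct(remaining: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing slot from the survivors + parity."""
    missing = parity
    for s in remaining:
        missing = xor_bytes(missing, s)
    return missing
```

Reed-Solomon generalizes this idea: with `k` data slots and `m` parity slots, any `k` surviving slots suffice to rebuild the rest.
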
### Collecting Funds

An SP node SHOULD monitor the requests and the associated slots it hosts.

When a storage request enters the `Cancelled`, `Finished`, or `Failed` state, the SP node SHOULD call the `freeSlot(slotId)` smart contract function.

The aforementioned storage request states (`Cancelled`, `Finished`, and `Failed`) can be detected as follows:

- A storage request is considered `Cancelled` if no `RequestFulfilled(requestId)` event is observed within the time indicated by the `expiry` request parameter. Note that a `RequestCancelled` event may also be emitted, but the node SHOULD NOT rely on this event to assert the request expiration, as the `RequestCancelled` event is not guaranteed to be emitted at the time of expiry.
- A storage request is considered `Finished` when the time indicated by the value returned from the `getRequestEnd(requestId)` function has elapsed.
- A node concludes that a storage request has `Failed` upon observing the `RequestFailed(requestId)` event.

For each of the states listed above, funds are handled as follows:

- In the `Cancelled` state, the collateral is returned along with a proportional payout based on the time the node actually hosted the dataset before the expiry was reached.
- In the `Finished` state, the full reward for hosting the slot, along with the collateral, is collected.
- In the `Failed` state, no funds are collected. The reward is returned to the client, and the collateral is burned. The slot is removed from the list of slots and is no longer included in the list of slots returned by the `mySlots()` function.

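The payout rules per terminal state can be sketched as follows. This is illustrative arithmetic, not a contract function; the pro-rata formula for the `Cancelled` case (rate × seconds hosted) is an assumption consistent with "proportional payout based on the time the node actually hosted the dataset":

```python
def sp_payout(state: str, price_per_byte_per_second: int, slot_size: int,
              duration: int, hosted_seconds: int, collateral: int) -> int:
    """Funds an SP collects via freeSlot(), by request state (sketch)."""
    rate = price_per_byte_per_second * slot_size  # tokens per second
    if state == "Cancelled":
        # Collateral back + pro-rata payout for the time actually hosted
        return collateral + rate * hosted_seconds
    if state == "Finished":
        # Full reward for the whole duration + collateral
        return collateral + rate * duration
    return 0  # Failed: reward refunded to the client, collateral burned
```
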
## Validator Role

In a blockchain, a contract cannot change its state without a transaction and gas initiating the state change. Therefore, the marketplace smart contract requires an external trigger to periodically check and confirm that a storage proof has been delivered by the SP. This is where the validator role is essential.

The validator role is fulfilled by nodes that help verify that SPs have submitted the required storage proofs.

It is the smart contract that checks whether the proof requested from an SP has been delivered; the validator only triggers the decision-making function in the smart contract. To incentivize validators, each time they correctly mark a proof as missing they receive a reward corresponding to the percentage of the slashed collateral defined by `config.collateral.validatorRewardPercentage`.

Each time a validator observes the `SlotFilled` event, it SHOULD add the slot reported in the event to its list of watched slots. Then, after the end of each period, the validator has up to `config.proofs.timeout` seconds (a configuration parameter retrievable with `configuration()`) to validate all the slots. If a slot lacks the required proof, the validator SHOULD call the `markProofAsMissing(slotId, period)` function on the smart contract. This function validates the correctness of the claim and, if the claim is correct, sends a reward to the validator.

If validating all the slots observed by the validator is not feasible within the specified `timeout`, the validator MAY choose to validate only a subset of the observed slots.

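An end-of-period validator pass can be sketched as below. The `market` object and its `is_proof_missing` check are hypothetical stand-ins (in practice the contract itself decides whether a proof was missed when `markProofAsMissing` is called); `max_checks` models validating only a subset when the timeout is short:

```python
def validate_period(market, watched_slots, period, max_checks=None):
    """One end-of-period validator pass: mark missing proofs (sketch)."""
    marked = []
    # Optionally check only a subset of the watched slots
    for slot_id in list(watched_slots)[:max_checks]:
        if market.is_proof_missing(slot_id, period):
            # Triggers slashing and earns the validator a reward
            market.mark_proof_as_missing(slot_id, period)
            marked.append(slot_id)
    return marked
```
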
---

## Part II: Implementation Suggestions

> **IMPORTANT**: The sections above (Abstract through Validator Role) define the normative Codex Marketplace protocol requirements. All implementations MUST comply with those protocol requirements to participate in the Codex network.
>
> **The sections below are non-normative**. They document implementation approaches used in the nim-codex reference implementation. These are suggestions to guide implementors but are NOT required by the protocol. Alternative implementations MAY use different approaches as long as they satisfy the protocol requirements defined in Part I.

## Implementation Suggestions

This section describes implementation approaches used in reference implementations. These are **suggestions and not normative requirements**. Implementations are free to use different internal architectures, state machines, and data structures as long as they correctly implement the protocol requirements defined above.

### Storage Provider Implementation

The nim-codex reference implementation provides a complete Storage Provider implementation with state machine management, slot queueing, and resource management. This section documents the nim-codex approach.

#### State Machine

The Sales module implements a deterministic state machine for each slot, progressing through the following states:

1. **SalePreparing** - Find a matching availability and create a reservation
2. **SaleSlotReserving** - Reserve the slot on the marketplace
3. **SaleDownloading** - Stream and persist the slot's data
4. **SaleInitialProving** - Wait for a stable challenge and generate the initial proof
5. **SaleFilling** - Compute collateral and fill the slot
6. **SaleFilled** - Post-filling operations and expiry updates
7. **SaleProving** - Generate and submit proofs periodically
8. **SalePayout** - Free the slot and calculate collateral
9. **SaleFinished** - Terminal success state
10. **SaleFailed** - Free the slot on the market and transition to error
11. **SaleCancelled** - Cancellation path
12. **SaleIgnored** - Sale ignored (no matching availability or other conditions)
13. **SaleErrored** - Terminal error state
14. **SaleUnknown** - Recovery state for crash recovery
15. **SaleProvingSimulated** - Proving with injected failures for testing

All states move to `SaleErrored` if an error is raised.

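The success path through these states can be captured as a simple transition map. The sketch below encodes only the happy-path transitions listed above; error, cancel, and ignore edges are omitted for brevity:

```python
# Success transitions only (error/cancel edges omitted)
HAPPY_PATH = {
    "SalePreparing": "SaleSlotReserving",
    "SaleSlotReserving": "SaleDownloading",
    "SaleDownloading": "SaleInitialProving",
    "SaleInitialProving": "SaleFilling",
    "SaleFilling": "SaleFilled",
    "SaleFilled": "SaleProving",
    "SaleProving": "SalePayout",
    "SalePayout": "SaleFinished",
}

def run_happy_path(start: str = "SalePreparing") -> list[str]:
    """Walk the success transitions to the terminal state."""
    path, state = [start], start
    while state in HAPPY_PATH:
        state = HAPPY_PATH[state]
        path.append(state)
    return path
```
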
##### SalePreparing

- Find a matching availability based on the following criteria: `freeSize`, `duration`, `collateralPerByte`, `minPricePerBytePerSecond`, and `until`
- Create a reservation
- Move to `SaleSlotReserving` if successful
- Move to `SaleIgnored` if no availability is found or if `BytesOutOfBoundsError` is raised because no space is available
- Move to `SaleFailed` on a `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer (set to the storage contract expiry) elapses

##### SaleSlotReserving

- Check if the slot can be reserved
- Move to `SaleDownloading` if successful
- Move to `SaleIgnored` if `SlotReservationNotAllowedError` is raised or the slot cannot be reserved. The collateral is returned.
- Move to `SaleFailed` on a `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer (set to the storage contract expiry) elapses

##### SaleDownloading

- Select the correct data expiry:
  - When the request has started, the request end date is used
  - Otherwise, the expiry date is used
- Stream and persist data via `onStore`
- For each written batch, release bytes from the reservation
- Move to `SaleInitialProving` if successful
- Move to `SaleFailed` on a `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer (set to the storage contract expiry) elapses
- Move to `SaleFilled` on a `SlotFilled` event from the `marketplace`

##### SaleInitialProving

- Wait for a stable initial challenge
- Produce the initial proof via `onProve`
- Move to `SaleFilling` if successful
- Move to `SaleFailed` on a `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer (set to the storage contract expiry) elapses

##### SaleFilling

- Get the slot collateral
- Fill the slot
- Move to `SaleFilled` if successful
- Move to `SaleIgnored` on `SlotStateMismatchError`. The collateral is returned.
- Move to `SaleFailed` on a `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer (set to the storage contract expiry) elapses

##### SaleFilled

- Ensure that the current host has filled the slot by checking the signer address
- Notify by calling the `onFilled` hook
- Call `onExpiryUpdate` to change the data expiry from the expiry date to the request end date
- Move to `SaleProving` (or `SaleProvingSimulated` in simulated mode)
- Move to `SaleFailed` on a `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer (set to the storage contract expiry) elapses

##### SaleProving

- For each period: fetch the challenge, call `onProve`, and submit the proof
- Move to `SalePayout` when the slot request ends
- Re-raise `SlotFreedError` when the slot is freed
- Raise `SlotNotFilledError` when the slot is not filled
- Move to `SaleFailed` on a `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer (set to the storage contract expiry) elapses

##### SaleProvingSimulated

- Submit invalid proofs every `N` periods (`failEveryNProofs` in the configuration) to test failure scenarios

##### SalePayout

- Get the current collateral and try to free the slot to ensure that the slot is freed after payout
- Forward the returned collateral to cleanup
- Move to `SaleFinished` if successful
- Move to `SaleFailed` on a `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer (set to the storage contract expiry) elapses

##### SaleFinished

- Call the `onClear` hook
- Call the `onCleanUp` hook

##### SaleFailed

- Free the slot
- Move to `SaleErrored` with the failure message

##### SaleCancelled

- Ensure that the node hosting the slot frees the slot
- Call the `onClear` hook
- Call the `onCleanUp` hook with the current collateral

##### SaleIgnored

- Call the `onCleanUp` hook with the current collateral

##### SaleErrored

- Call the `onClear` hook
- Call the `onCleanUp` hook

##### SaleUnknown

- Recovery entry: get the on-chain state and jump to the appropriate state

#### Slot Queue

The slot queue schedules slot work and instantiates one `SalesAgent` per item with bounded concurrency.

- Accepts `(requestId, slotIndex, …)` items and orders them by priority
- Spawns one `SalesAgent` for each dequeued item (one item per agent)
- Caps concurrent agents at `maxWorkers`
- Supports pause/resume
- Allows controlled requeueing via `reprocessSlot` when an agent finishes

##### Slot Ordering

The criteria are applied in the following order:

1) **Unseen before seen** - Items that have not been seen are dequeued first.
2) **More profitable first** - Higher `profitability` wins. `profitability` is `duration * pricePerSlotPerSecond`.
3) **Less collateral first** - The item with the smaller `collateral` wins.
4) **Later expiry first** - If both items carry an `expiry`, the one with the greater timestamp wins.

Within a single request, per-slot items are shuffled before enqueuing so the default slot-index order does not influence priority.

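The four ordering rules map naturally onto a single sort key; a sketch, with field names assumed from the list above:

```python
from dataclasses import dataclass

@dataclass
class SlotQueueItem:
    seen: bool
    profitability: int  # duration * pricePerSlotPerSecond
    collateral: int
    expiry: int

def priority_key(item: SlotQueueItem):
    """Ascending sort: unseen first, then higher profitability,
    then lower collateral, then later expiry."""
    return (item.seen, -item.profitability, item.collateral, -item.expiry)

items = [
    SlotQueueItem(seen=True,  profitability=100, collateral=5, expiry=50),
    SlotQueueItem(seen=False, profitability=80,  collateral=5, expiry=50),
    SlotQueueItem(seen=False, profitability=100, collateral=3, expiry=50),
    SlotQueueItem(seen=False, profitability=100, collateral=5, expiry=60),
]
queue = sorted(items, key=priority_key)
```

Negating `profitability` and `expiry` turns "greater wins" into an ascending sort, while `seen` (False < True) and `collateral` sort ascending as-is.
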
##### Pause / Resume

When the slot queue processes an item with `seen = true`, the item was already evaluated against the current availabilities and did not match.
To avoid draining the queue with untenable requests (due to insufficient availability), the queue pauses itself.

The queue resumes when:

- `OnAvailabilitySaved` fires after an availability update that increases one of: `freeSize`, `duration`, `minPricePerBytePerSecond`, or `totalRemainingCollateral`.
- A new unseen item (`seen = false`) is pushed.
- `unpause()` is called explicitly.

##### Reprocess

Availability matching occurs in `SalePreparing`.
If no availability fits at that time, the sale is ignored with `reprocessSlot` set to true, meaning that the slot is added back to the queue with the `seen` flag set to true.

##### Startup

On `SlotQueue.start()`, the sales module first deletes reservations associated with inactive storage requests, then starts a new `SalesAgent` for each active storage request:

- Fetch the active on-chain slots.
- Delete the local reservations for slots that are not in the active list.
- Create a new agent for each slot and assign the `onCleanUp` callback.
- Start the agent in the `SaleUnknown` state.

#### Main Behaviour

When a new slot request is received, the sales module extracts the pair `(requestId, slotIndex, …)` from the request.
A `SlotQueueItem` is then created with metadata such as `profitability`, `collateral`, `expiry`, and the `seen` flag set to `false`.
This item is pushed into the `SlotQueue`, where it is prioritised according to the ordering rules.

#### SalesAgent

A `SalesAgent` is the instance that executes the state machine for a single slot.

- Executes the sale state machine across the slot lifecycle
- Holds a `SalesContext` with dependencies and host hooks
- Supports crash recovery via the `SaleUnknown` state
- Handles errors by entering `SaleErrored`, which runs cleanup routines

#### SalesContext

A `SalesContext` is a container for dependencies used by all sales.

- Provides external interfaces: `Market` (marketplace) and `Clock`
- Provides access to `Reservations`
- Provides host hooks: `onStore`, `onProve`, `onExpiryUpdate`, `onClear`, `onSale`
- Shares the `SlotQueue` handle for scheduling work
- Provides configuration such as `simulateProofFailures`
- Passed to each `SalesAgent`

#### Marketplace Subscriptions

The sales module subscribes to on-chain events to keep the queue and agents consistent.

##### StorageRequested

When the marketplace signals a new request, the sales module:

- Computes collateral for free slots.
- Creates per-slot `SlotQueueItem` entries (one per `slotIndex`) with `seen = false`.
- Pushes the items into the `SlotQueue`.

##### SlotFreed

When the marketplace signals a freed slot (needs repair), the sales module:

- Retrieves the request data for the `requestId`.
- Computes collateral for repair.
- Creates a `SlotQueueItem`.
- Pushes the item into the `SlotQueue`.

##### RequestCancelled

When a request is cancelled, the sales module removes all queue items for that `requestId`.

##### RequestFulfilled

When a request is fulfilled, the sales module removes all queue items for that `requestId` and notifies active agents bound to the request.

##### RequestFailed

When a request fails, the sales module removes all queue items for that `requestId` and notifies active agents bound to the request.

##### SlotFilled

When a slot is filled, the sales module removes the queue item for that specific `(requestId, slotIndex)` and notifies the active agent for that slot.

##### SlotReservationsFull

When the marketplace signals that reservations are full, the sales module removes the queue item for that specific `(requestId, slotIndex)`.

#### Reservations

The Reservations module manages both Availabilities and Reservations.
When an Availability is created, it reserves bytes in the storage module so no other module can use those bytes.
Before a dataset for a slot is downloaded, a Reservation is created, and the `freeSize` of the Availability is reduced.
When bytes are downloaded, the reservation of those bytes in the storage module is released.
The accounting of both reserved bytes in the storage module and `freeSize` in the Availability is cleaned up upon completion of the state machine.

```mermaid
graph TD
    A[Availability] -->|creates| R[Reservation]
    A -->|reserves bytes in| SM[Storage Module]
    R -->|reduces| AF[Availability.freeSize]
    R -->|downloads data| D[Dataset]
    D -->|releases bytes to| SM
    TC[Terminal State] -->|triggers cleanup| C[Cleanup]
    C -->|returns bytes to| AF
    C -->|deletes| R
    C -->|returns collateral to| A
```

#### Hooks
|
||||
|
||||
- **onStore**: streams data into the node's storage
|
||||
- **onProve**: produces proofs for initial and periodic proving
|
||||
- **onExpiryUpdate**: notifies the client node of a change in the expiry data
|
||||
- **onSale**: notifies that the host is now responsible for the slot
|
||||
- **onClear**: notification emitted once the state machine has concluded; used to reconcile Availability bytes and reserved bytes in the storage module
|
||||
- **onCleanUp**: cleanup hook called in terminal states to release resources, delete reservations, and return collateral to availabilities
|
||||
|
||||
#### Error Handling
|
||||
|
||||
- Always catch `CancelledError` from `nim-chronos` and log a trace, exiting gracefully
|
||||
- Catch `CatchableError`, log it, and route to `SaleErrored`
|
||||
|
||||
#### Cleanup

Cleanup releases the resources held by a sales agent and optionally requeues the slot:

- Return reserved bytes to the availability if a reservation exists
- Delete the reservation and return any remaining collateral
- If `reprocessSlot` is true, push the slot back into the queue, marked as seen
- Remove the agent from the sales set and track the removal future

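
The cleanup steps above can be sketched as one function. This is an illustrative Python sketch under assumed data shapes (the dictionary keys and parameter names are hypothetical, not the nim-codex API):

```python
def cleanup(agent, reprocess_slot, queue, sales_agents, tracked_futures):
    """Release a sales agent's resources and optionally requeue its slot."""
    reservation = agent.get("reservation")
    if reservation is not None:
        # Return reserved bytes and remaining collateral to the availability,
        # then delete the reservation record.
        availability = reservation["availability"]
        availability["free_size"] += reservation["remaining_bytes"]
        availability["collateral"] += reservation["collateral"]
        agent["reservation"] = None
    if reprocess_slot:
        # Requeue the slot, marked as seen so it is not immediately re-picked.
        queue.append({"request_id": agent["request_id"],
                      "slot_index": agent["slot_index"],
                      "seen": True})
    # Remove the agent from the sales set and track the removal for shutdown.
    sales_agents.discard(agent["id"])
    tracked_futures.append(agent["id"])
    return agent
```
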
#### Resource Management Approach

The nim-codex implementation uses Availabilities and Reservations to manage local storage resources:

##### Reservation Management

- Maintain `Availability` and `Reservation` records locally
- Match incoming slot requests to available capacity using prioritisation rules
- Lock capacity and collateral when creating a reservation
- Release reserved bytes progressively during download, and free all remaining resources in terminal states

**Note:** Availabilities and Reservations are entirely local to the Storage Provider implementation and are not visible at the protocol level. They are one approach to managing storage capacity; other implementations may use different resource management strategies.

---

> **Protocol Compliance Note**: The Storage Provider implementation described above is specific to nim-codex. The only normative requirements for Storage Providers are defined in the [Storage Provider Role](#storage-provider-role) section of Part I. Implementations must satisfy those protocol requirements but may use completely different internal designs.

### Client Implementation

The nim-codex reference implementation provides a complete Client implementation that manages the storage request lifecycle with a state machine. This section documents the nim-codex approach.

The state machine pattern provides deterministic state transitions, explicit terminal states, and recovery support. The state machine definitions (state identifiers, transitions, state descriptions, requirements, data models, and interfaces) are documented in the subsections below.

> **Note**: The Purchase module terminology and state machine design are specific to the nim-codex implementation. The protocol only requires that clients interact with the marketplace smart contract as specified in the Client Role section.

#### State Identifiers

- PurchasePending: `pending`
- PurchaseSubmitted: `submitted`
- PurchaseStarted: `started`
- PurchaseFinished: `finished`
- PurchaseErrored: `errored`
- PurchaseCancelled: `cancelled`
- PurchaseFailed: `failed`
- PurchaseUnknown: `unknown`

#### General Rules for All States

- If a `CancelledError` is raised, the state machine logs the cancellation message and takes no further action.
- If a `CatchableError` is raised, the state machine moves to `errored` with the error message.

#### State Transitions

```text
new purchase:       pending --> submitted
request fulfilled:  submitted --> started
request ended:      started --> finished                 (terminal)
expiry reached:     submitted --> cancelled              (terminal)
request failed:     started --> failed --> errored       (terminal)
recovery:           unknown --> submitted | started | finished | failed | cancelled
any state + error:  * --> errored                        (terminal)
```

**Note:**

Any state can transition to `errored` upon a `CatchableError`.
`failed` is an intermediate state before `errored`.
`finished`, `cancelled`, and `errored` are terminal states.

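
The transition set and terminal-state rules above can be captured as a small lookup table. This is an illustrative Python sketch derived from the state descriptions, not the nim-codex implementation:

```python
# Allowed next states, as read from the state descriptions in this section.
TRANSITIONS = {
    "pending":   {"submitted", "errored"},
    "submitted": {"started", "cancelled", "errored"},
    "started":   {"finished", "failed", "errored"},
    "failed":    {"errored"},
    # Recovery: `unknown` resolves to whichever state matches the on-chain RequestState.
    "unknown":   {"submitted", "started", "finished", "failed", "cancelled", "errored"},
}
TERMINAL = {"finished", "cancelled", "errored"}


def can_transition(src: str, dst: str) -> bool:
    # Any non-terminal state may move to `errored` on a CatchableError;
    # terminal states never move again.
    if src in TERMINAL:
        return False
    return dst == "errored" or dst in TRANSITIONS.get(src, set())
```
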
#### State Descriptions

**Pending State (`pending`)**

A storage request is being created by making an on-chain call. If the storage request creation fails, the state machine moves to the `errored` state with the corresponding error.

**Submitted State (`submitted`)**

The storage request has been created and the purchase waits for the request to start. When it starts, the on-chain `RequestFulfilled` event is emitted, triggering the subscription callback, and the state machine moves to the `started` state. If the expiry is reached before the callback is called, the state machine moves to the `cancelled` state.

**Started State (`started`)**

The purchase is active and waits until the end of the request, defined by the storage request parameters, before moving to the `finished` state. A subscription is made to the marketplace to be notified of request failure. If a request failure is notified, the state machine moves to `failed`.

Marketplace subscription signature:

```nim
method subscribeRequestFailed*(market: Market, requestId: RequestId, callback: OnRequestFailed): Future[Subscription] {.base, async.}
```

**Finished State (`finished`)**

The purchase is considered successful and cleanup routines are called. The purchase module calls `marketplace.withdrawFunds` to release the funds locked by the marketplace:

```nim
method withdrawFunds*(market: Market, requestId: RequestId) {.base, async: (raises: [CancelledError, MarketError]).}
```

After that, the purchase is done; no further states run and the state machine stops successfully.

**Failed State (`failed`)**

If the marketplace emits a `RequestFailed` event, the state machine moves to the `failed` state and the purchase module calls `marketplace.withdrawFunds` (same signature as above) to release the funds locked by the marketplace. After that, the state machine moves to `errored`.

**Cancelled State (`cancelled`)**

The purchase is cancelled and the purchase module calls `marketplace.withdrawFunds` (same signature as above) to release the funds locked by the marketplace. After that, the purchase is terminated; no further states run and the state machine stops with the reason for the cancellation as the error.

**Errored State (`errored`)**

The purchase is terminated; no further states run and the state machine stops with the reason for the failure as the error.

**Unknown State (`unknown`)**

The purchase is in recovery mode, meaning that its state has to be determined. The purchase module calls the marketplace to get the request data (`getRequest`) and the request state (`requestState`):

```nim
method getRequest*(market: Market, id: RequestId): Future[?StorageRequest] {.base, async: (raises: [CancelledError]).}

method requestState*(market: Market, requestId: RequestId): Future[?RequestState] {.base, async.}
```

Based on this information, the state machine moves to the corresponding next state.

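
The recovery step can be sketched as a simple mapping. This is an illustrative Python sketch; the `RequestState` value names used here are assumptions for illustration, not values confirmed by this document:

```python
# Hypothetical on-chain RequestState names mapped to the purchase state
# that the `unknown` state resolves to.
RECOVERY_NEXT_STATE = {
    "New":       "submitted",  # request created but not yet started
    "Started":   "started",
    "Finished":  "finished",
    "Failed":    "failed",
    "Cancelled": "cancelled",
}


def recover(request, request_state):
    """Resolve the `unknown` state from the on-chain request data and state."""
    if request is None:
        # No such request on chain: nothing to recover, treat as errored.
        return "errored"
    return RECOVERY_NEXT_STATE.get(request_state, "errored")
```
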
> **Note**: Functional and non-functional requirements for the client role are summarized in the [Codex Marketplace Specification](https://github.com/codex-storage/codex-spec/blob/master/specs/marketplace.md). The requirements listed below are specific to the nim-codex Purchase module implementation.

#### Functional Requirements

##### Purchase Definition

- Every purchase MUST represent exactly one `StorageRequest`
- The purchase MUST have a unique, deterministic identifier `PurchaseId` derived from `requestId`
- It MUST be possible to restore any purchase from its `requestId` after a restart
- A purchase is considered expired when the expiry timestamp in its `StorageRequest` is reached before the request starts, i.e., before the `RequestFulfilled` event is emitted by the marketplace

##### State Machine Progression

- New purchases MUST start in the `pending` state (submission flow)
- Recovered purchases MUST start in the `unknown` state (recovery flow)
- The state machine MUST progress step by step until a deterministic terminal state is reached
- The choice of terminal state MUST be based on the `RequestState` returned by the marketplace

##### Failure Handling

- On marketplace failure events, the purchase MUST immediately transition to `errored` without retries
- If a `CancelledError` is raised, the state machine MUST log the cancellation and stop further processing
- If a `CatchableError` is raised, the state machine MUST transition to `errored` and record the error

#### Non-Functional Requirements

##### Execution Model

A purchase MUST be handled by a single thread; only one worker SHOULD process a given purchase instance at a time.

##### Reliability

`load` supports recovery after process restarts.

##### Performance

State transitions should be non-blocking; all I/O is asynchronous.

##### Logging

All state transitions and errors should be clearly logged for traceability.

##### Safety

- Avoid side effects during `new` other than initialising internal fields; on-chain interactions are delegated to states via the `marketplace` dependency.
- Apply a retry policy for external calls.

##### Testing

- Unit tests check that each state handles success and error paths properly.
- Integration tests check that a full purchase flows correctly through its states.

---

> **Protocol Compliance Note**: The Client implementation described above is specific to nim-codex. The only normative requirements for Clients are defined in the [Client Role](#client-role) section of Part I. Implementations must satisfy those protocol requirements but may use completely different internal designs.

---

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

### Normative

- [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt) - Key words for use in RFCs to Indicate Requirement Levels
- [Reed-Solomon algorithm](https://hackmd.io/FB58eZQoTNm-dnhu0Y1XnA) - Erasure coding algorithm used for data encoding
- [CIDv1](https://github.com/multiformats/cid#cidv1) - Content Identifier specification
- [multihash](https://github.com/multiformats/multihash) - Self-describing hashes
- [Proof-of-Data-Possession](https://hackmd.io/2uRBltuIT7yX0CyczJevYg?view) - Zero-knowledge proof system for storage verification
- [Original Codex Marketplace Spec](https://github.com/codex-storage/codex-spec/blob/master/specs/marketplace.md) - Source specification for this document

### Informative

- [Codex Implementation](https://github.com/codex-storage/nim-codex) - Reference implementation in Nim
- [Codex market implementation](https://github.com/codex-storage/nim-codex/blob/master/codex/market.nim) - Marketplace module implementation
- [Codex Sales Component Spec](https://github.com/codex-storage/codex-docs-obsidian/blob/main/10%20Notes/Specs/Component%20Specification%20-%20Sales.md) - Storage Provider implementation details
- [Codex Purchase Component Spec](https://github.com/codex-storage/codex-docs-obsidian/blob/main/10%20Notes/Specs/Component%20Specification%20-%20Purchase.md) - Client implementation details
- [Nim Chronos](https://github.com/status-im/nim-chronos) - Async/await framework for Nim
- [Storage proof timing design](https://github.com/codex-storage/codex-research/blob/41c4b4409d2092d0a5475aca0f28995034e58d14/design/storage-proof-timing.md) - Proof timing mechanism

@@ -1,377 +0,0 @@

---
title: CODEX-COMMUNITY-HISTORY
name: Codex Community History
status: raw
tags: codex
editor:
contributors:
- Jimmy Debe <jimmy@status.im>
---

## Abstract

This document describes how nodes in Status Communities archive historical message data of their communities.
Using the [BitTorrent protocol](https://www.bittorrent.org/beps/bep_0003.html),
nodes are not required to follow the time range limit imposed by [13/WAKU2-STORE](../../waku/standards/core/13/store.md).
It also describes how the archives are distributed to community members via the [Status network](https://status.network/),
so they can fetch them and
gain access to a complete message history.

## Background

Messages are stored by [13/WAKU2-STORE](../../waku/standards/core/13/store.md) nodes for a configurable time range,
which is limited by the overall storage provided by a [13/WAKU2-STORE](../../waku/standards/core/13/store.md) node.
Messages older than that period are no longer provided by [13/WAKU2-STORE](../../waku/standards/core/13/store.md) nodes,
making it impossible for other nodes to request historical messages that go beyond that time range.
This raises issues in the case of Status communities,
where recently joined members of a community are not able to request complete message histories of the community channels.

### Terminology

| Name | Description |
| ---- | ----------- |
| Waku node | A [10/WAKU2](../../waku/standards/core/10/waku.md) node that implements [11/WAKU2-RELAY](../../waku/standards/core/11/relay.md) |
| Store node | A [10/WAKU2](../../waku/standards/core/10/waku.md) node that implements [13/WAKU2-STORE](../../waku/standards/core/13/store.md) |
| Waku network | A group of [10/WAKU2](../../waku/standards/core/10/waku.md) nodes forming a graph, connected via [11/WAKU2-RELAY](../../waku/standards/core/11/relay.md) |
| Status user | A Status account that is used in a Status consumer product, such as Status Mobile or Status Desktop |
| Status node | A Status client run by a Status application |
| Control node | A Status node that owns the private key for a Status community |
| Community member | A Status user that is part of a Status community, not owning the private key of the community |
| Community member node | A Status node with message archive capabilities enabled, run by a community member |
| Live messages | [14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md) received through the Waku network |
| BitTorrent client | A program implementing the BitTorrent protocol |
| Torrent/Torrent file | A file containing metadata about data to be downloaded by BitTorrent clients |
| Magnet link | A link encoding the metadata provided by a torrent file (Magnet URI scheme) |

## Specification

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).

### Message History Archive

Message history archives are represented as `WakuMessageArchive` and
created from [14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md)s exported from the local database.
The following protocol buffer describes the `WakuMessageArchive`:

``` protobuf
syntax = "proto3";

message WakuMessageArchiveMetadata {
  // proto3 has no uint8 scalar type; uint32 is the smallest unsigned scalar
  uint32 version = 1;
  uint64 from = 2;
  uint64 to = 3;
  repeated string content_Topic = 4;
}

message WakuMessageArchive {
  uint32 version = 1;
  WakuMessageArchiveMetadata metadata = 2;
  repeated WakuMessage messages = 3; // `WakuMessage` is provided by 14/WAKU2-MESSAGE
  bytes padding = 4;
}
```

The `from` field SHOULD contain a `timestamp` of the time range's lower bound.
This type parallels the `timestamp` of a `WakuMessage`.
The `to` field SHOULD contain a `timestamp` of the time range's upper bound.
The `content_Topic` field MUST contain a list of all community channel `contentTopic`s.
The `messages` field MUST contain all messages that belong in the archive, given its `from`,
`to`, and `content_Topic` fields.

The `padding` field MUST contain the amount of zero bytes needed so that the overall byte size of the protobuf encoded `WakuMessageArchive`
is a multiple of the `pieceLength` used to divide the data into pieces.
This is needed for seamless encoding and
decoding of archival data when interacting with BitTorrent,
as explained in [creating message archive torrents](#creating-message-archive-torrents).

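
The size arithmetic behind the padding requirement can be sketched as follows. This is an illustrative Python sketch; in the actual protocol the zero bytes go into the `padding` field and the archive is re-encoded, whereas here they are simply appended to show the arithmetic:

```python
def pad_archive(encoded: bytes, piece_length: int) -> bytes:
    """Append zero bytes so the encoded archive is a multiple of piece_length."""
    remainder = len(encoded) % piece_length
    if remainder == 0:
        return encoded  # already aligned, no padding needed
    return encoded + b"\x00" * (piece_length - remainder)
```
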
#### Message History Archive Index

Control nodes MUST provide message archives for the entire community history.
The entire history consists of a set of `WakuMessageArchive`s,
where each archive contains a subset of the historical `WakuMessage`s for a time range of seven days.
All the `WakuMessageArchive`s are concatenated into a single file as a byte string,
see [Ensuring reproducible data pieces](#ensuring-reproducible-data-pieces).

Control nodes MUST create a message history archive index,
a `WakuMessageArchiveIndex` with metadata
that allows receiving nodes to fetch only the message history archives they are interested in.

##### WakuMessageArchiveIndex

``` protobuf
syntax = "proto3";

message WakuMessageArchiveIndexMetadata {
  // proto3 has no uint8 scalar type; uint32 is the smallest unsigned scalar
  uint32 version = 1;
  WakuMessageArchiveMetadata metadata = 2;
  uint64 offset = 3;
  uint64 num_pieces = 4;
}

message WakuMessageArchiveIndex {
  map<string, WakuMessageArchiveIndexMetadata> archives = 1;
}
```


A `WakuMessageArchiveIndex` is a map in which each key is the KECCAK-256 hash of the `WakuMessageArchiveIndexMetadata`
derived from a 7-day archive,
and the value is the `WakuMessageArchiveIndexMetadata` instance corresponding to that archive.

The `offset` field MUST contain the position at which the message history archive starts in the byte string
of the total message archive data.
This MUST be the sum of the lengths in bytes of all previously created message archives,
see [creating message archive torrents](#creating-message-archive-torrents).

The control node MUST update the `WakuMessageArchiveIndex` every time it creates one or
more `WakuMessageArchive`s and bundle it into a new torrent.
For every created `WakuMessageArchive`,
there MUST be a `WakuMessageArchiveIndexMetadata` entry in the `archives` field of the `WakuMessageArchiveIndex`.

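
The offset rule can be sketched by building an index from a list of padded archives. This is an illustrative Python sketch; `hashlib.sha3_256` of the encoded archive stands in for the KECCAK-256 hash of the metadata (KECCAK-256 is not in the Python standard library, and the real protocol hashes the metadata, not the archive bytes):

```python
import hashlib


def build_index(encoded_archives, piece_length):
    """Map a stand-in archive hash to its offset and piece count in the data file."""
    index = {}
    offset = 0
    for encoded in encoded_archives:
        assert len(encoded) % piece_length == 0, "archives must be padded first"
        key = hashlib.sha3_256(encoded).hexdigest()  # stand-in for KECCAK-256
        index[key] = {"offset": offset, "num_pieces": len(encoded) // piece_length}
        # The offset is the sum of the lengths of all previously created archives.
        offset += len(encoded)
    return index
```
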
### Creating Message Archive Torrents

Control nodes MUST create a .torrent file containing metadata for all message history archives.
To create a .torrent file, and
later serve the message archive data on the BitTorrent network,
control nodes MUST store the necessary data in dedicated files on the file system.

A torrent's source folder MUST contain the following two files:

- `data`: Contains all protobuf encoded `WakuMessageArchive`s (as byte strings)
concatenated in ascending order based on their time range
- `index`: Contains the protobuf encoded `WakuMessageArchiveIndex`

Control nodes SHOULD store these files in a dedicated folder that is identifiable via a community identifier.

### Ensuring Reproducible Data Pieces

The control node MUST ensure that the byte string of the newly generated data file is equal to the byte string of the previously generated message archive torrent's data,
followed by the latest seven days' worth of messages encoded as a `WakuMessageArchive`.
Therefore, the size of the data grows every seven days, as it is append-only.

Control nodes MUST ensure that the byte size
of every individual protobuf encoded `WakuMessageArchive`
is a multiple of `pieceLength`, using the `padding` field.
If the `WakuMessageArchive` is not a multiple of `pieceLength`,
its `padding` field MUST be filled with zero bytes and
the `WakuMessageArchive` MUST be re-encoded until its size becomes a multiple of `pieceLength`.

This is necessary because the content of the data file will be split into pieces of `pieceLength` when the torrent file is created.
The SHA1 hash of every piece is stored in the torrent file and
later used by other nodes to request the data for each individual piece.

By fitting message archives into a multiple of `pieceLength` and
filling the possible remaining space with zero bytes,
control nodes prevent the next message archive from occupying the remaining space of the last piece,
which would result in a different SHA1 hash for that piece.

Example: Without padding
Let `WakuMessageArchive` "A1" be of size 20 bytes:

``` text
 0 11 22 33 44 55 66 77 88 99
10 11 12 13 14 15 16 17 18 19
```

With a `pieceLength` of 10 bytes, A1 will fit into 20 / 10 = 2 pieces:

```text
 0 11 22 33 44 55 66 77 88 99 // piece[0] SHA1: 0x123
10 11 12 13 14 15 16 17 18 19 // piece[1] SHA1: 0x456
```

Example: With padding
Let `WakuMessageArchive` "A2" be of size 21 bytes:

```text
 0 11 22 33 44 55 66 77 88 99
10 11 12 13 14 15 16 17 18 19
20
```

With a `pieceLength` of 10 bytes,
A2 will fit into 21 / 10 = 2 pieces.

The remainder will introduce a third piece:

```text
 0 11 22 33 44 55 66 77 88 99 // piece[0] SHA1: 0x123
10 11 12 13 14 15 16 17 18 19 // piece[1] SHA1: 0x456
20                            // piece[2] SHA1: 0x789
```

The next `WakuMessageArchive` "A3" will be appended ("#3") to the existing data and
occupy the remaining space of the third data piece.

The piece at index 2 will now produce a different SHA1 hash:

```text
 0 11 22 33 44 55 66 77 88 99 // piece[0] SHA1: 0x123
10 11 12 13 14 15 16 17 18 19 // piece[1] SHA1: 0x456
20 #3 #3 #3 #3 #3 #3 #3 #3 #3 // piece[2] SHA1: 0xeef
#3 #3 #3 #3 #3 #3 #3 #3 #3 #3 // piece[3]
```

By filling up the remaining space of the third piece of A2 with zero bytes, using its `padding` field,
it is guaranteed that its SHA1 hash will stay the same:

```text
 0 11 22 33 44 55 66 77 88 99 // piece[0] SHA1: 0x123
10 11 12 13 14 15 16 17 18 19 // piece[1] SHA1: 0x456
20  0  0  0  0  0  0  0  0  0 // piece[2] SHA1: 0x999
#3 #3 #3 #3 #3 #3 #3 #3 #3 #3 // piece[3]
#3 #3 #3 #3 #3 #3 #3 #3 #3 #3 // piece[4]
```

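
The effect shown in the examples above can be verified directly. This is an illustrative Python sketch using arbitrary stand-in byte values for the archives:

```python
import hashlib

PIECE = 10  # pieceLength in bytes


def piece_hashes(data: bytes):
    """SHA1 hash of each pieceLength-sized chunk, as stored in a torrent file."""
    return [hashlib.sha1(data[i:i + PIECE]).hexdigest()
            for i in range(0, len(data), PIECE)]


a2_unpadded = bytes(range(21))         # 21-byte archive "A2"
a2_padded = a2_unpadded + b"\x00" * 9  # padded to 30 bytes (3 full pieces)
a3 = bytes([0x33]) * 20                # next archive "A3"

# Without padding, appending A3 changes the hash of A2's last piece.
changed = piece_hashes(a2_unpadded)[2] != piece_hashes(a2_unpadded + a3)[2]

# With padding, A2's pieces keep their hashes when A3 is appended.
stable = piece_hashes(a2_padded) == piece_hashes(a2_padded + a3)[:3]
```
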
### Seeding Message History Archives

The control node MUST seed the generated torrent until a new `WakuMessageArchive` is created.

The control node SHOULD NOT seed torrents for older message history archives.
Only one torrent at a time SHOULD be seeded.

### Creating Magnet Links

Once a torrent file for all message archives is created,
the control node MUST derive a magnet link from it,
following the Magnet URI scheme using the underlying [BitTorrent protocol](https://www.bittorrent.org/beps/bep_0003.html) client.

#### Message Archive Distribution

Message archives are available via the BitTorrent network as they are being seeded by the control node.
Other community member nodes download the message archives from the BitTorrent network
after receiving a magnet link that contains a message archive index.

The control node MUST send magnet links for message archives and
the message archive index to a special community channel.
The `content_Topic` of that special channel follows this format:

``` text
/{application-name}/{version-of-the-application}/{content-topic-name}/{encoding}
```

All messages sent with this special channel's `content_Topic` MUST be instances of `ApplicationMetadataMessage`,
with a [62/STATUS-PAYLOADS](../../status/62/payloads.md) of `CommunityMessageArchiveIndex`.

Only the control node MAY post to the special channel.
Messages from any other sender on this channel MUST be ignored by clients.
Community members MUST NOT have permission to send messages to the special channel.
However, community member nodes MUST subscribe to the special channel
to receive a [14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md) containing magnet links for message archives.

#### Canonical Message Histories

Only control nodes are allowed to distribute messages with magnet links
via the special channel for magnet link exchange.
Status nodes MUST ignore all messages in the special channel that aren't signed by a control node.
Since the magnet links are created from the control node's database
(and previously distributed archives),
the message history provided by the control node becomes the canonical message history and
single source of truth for the community.

Community member nodes MUST replace messages in their local database with the messages extracted from archives
within the same time range.
Messages that the control node didn't receive MUST be removed and
are no longer part of the message history of interest,
even if they already existed in a community member node's database.

### Fetching Message History Archives

The process of fetching message history archives:

1. Receive a message archive index magnet link, as described in [Message archive distribution](#message-archive-distribution)
2. Download the index file from the torrent, then determine which message archives to download
3. Download the individual archives

Community member nodes subscribe to the special channel on which control nodes publish magnet links for message history archives.
There are two RECOMMENDED scenarios in which community member nodes can receive such a magnet link message from the special channel:

1. The member node receives it via live messages, by listening to the special channel.
2. The member node requests messages for a time range of up to 30 days from store nodes
(this is the case when a new community member joins a community).

When community member nodes receive a message with a `CommunityMessageHistoryArchive` [62/STATUS-PAYLOADS](../../status/62/payloads.md),
they MUST extract the `magnet_uri` and
SHOULD pass it to their underlying BitTorrent client to fetch the latest message history archive index,
which is the index file of the torrent, see [Creating message archive torrents](#creating-message-archive-torrents).

Due to the nature of distributed systems,
there's no guarantee that a received message is the "last" message.
This is especially true when community member nodes request historical messages from store nodes.
Therefore, community member nodes MUST wait for 20 seconds after receiving the last `CommunityMessageArchive`
before they start extracting the magnet link to fetch the latest archive index.

Once a message history archive index is downloaded and
parsed back into a `WakuMessageArchiveIndex`,
community member nodes use a local lookup table to determine which of the listed archives are missing,
using the KECCAK-256 hashes stored in the index.

For this lookup to work,
member nodes MUST store the KECCAK-256 hashes
of the `WakuMessageArchiveIndexMetadata` provided by the index file,
for all of the message history archives that have been downloaded, in their local database.

Given a `WakuMessageArchiveIndex`, member nodes can access individual `WakuMessageArchiveIndexMetadata` entries to download individual archives.

Community member nodes MUST choose one of the following options:

1. Download all archives: Request and download all data pieces for the data provided by the torrent
(this is the case for new community member nodes that haven't downloaded any archives yet).
2. Download only the latest archive: Request and
download all pieces starting at the offset of the latest `WakuMessageArchiveIndexMetadata`
(this is the case for any member node that has already downloaded all previous history and
is now interested in only the latest archive).
3. Download specific archives: Look into the `from` and
`to` fields of every `WakuMessageArchiveIndexMetadata` and
determine the pieces for archives of a specific time range
(this can be the case for member nodes that have recently joined the network and
are only interested in a subset of the complete history).

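
The lookup and the piece selection for the options above can be sketched as follows. This is an illustrative Python sketch operating on an index shaped like the `WakuMessageArchiveIndex` (a map from archive hash to metadata with `offset` and `num_pieces`):

```python
def missing_archives(index, downloaded_hashes):
    """Determine which archives listed in the index are absent locally."""
    return [key for key in index if key not in downloaded_hashes]


def pieces_for_latest(index, piece_length):
    """Option 2: piece indices covering only the latest archive (largest offset)."""
    meta = max(index.values(), key=lambda m: m["offset"])
    # Archives are padded to multiples of piece_length, so offsets fall on
    # piece boundaries and the division below is exact.
    first = meta["offset"] // piece_length
    return range(first, first + meta["num_pieces"])
```
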
#### Storing Historical Messages

When message archives are fetched,
community member nodes MUST unwrap the resulting `WakuMessage` instances into `ApplicationMetadataMessage` instances
and store them in their local database.
Community member nodes SHOULD NOT store the wrapped `WakuMessage` messages.

All messages within the same time range MUST be replaced with the messages provided by the message history archive.

Community member nodes MUST ignore the expiration state of each archive message.

### Security Considerations

#### Multiple Community Owners

It is possible for control nodes to export the private key of their owned community and
pass it to other users so that they become control nodes as well.
This means it's possible for multiple control nodes to exist for one community.

This might conflict with the assumption that the control node serves as a single source of truth.
Multiple control nodes can have different message histories.
Not only will multiple control nodes multiply the amount of archive index messages being distributed to the network,
but the indices might also contain different sets of magnet links and their corresponding hashes.
Even if just a single message is missing in one of the histories,
the hashes presented in the archive indices will look completely different,
resulting in the community member node downloading the corresponding archive,
which might be identical to an archive that was already downloaded,
except for that one message.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [13/WAKU2-STORE](../../waku/standards/core/13/store.md)
- [BitTorrent protocol](https://www.bittorrent.org/beps/bep_0003.html)
- [Status network](https://status.network/)
- [10/WAKU2](../../waku/standards/core/10/waku.md)
- [11/WAKU2-RELAY](../../waku/standards/core/11/relay.md)
- [14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md)
- [62/STATUS-PAYLOADS](../../status/62/payloads.md)

File diff suppressed because it is too large
@@ -1,6 +1,6 @@

# Nomos RFCs

Nomos is building a secure, flexible, and
scalable infrastructure for developers creating applications for the network state.
To learn more about Nomos protocols currently under discussion,
head over to [Nomos Specs](https://github.com/logos-co/nomos-specs).

@@ -1,170 +0,0 @@
---
title: NOMOSDA-ENCODING
name: NomosDA Encoding Protocol
status: raw
category:
tags: data-availability
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Daniel Kashepava <danielkashepava@status.im>
- Álvaro Castro-Castilla <alvaro@status.im>
- Filip Dimitrijevic <filip@status.im>
- Thomas Lavaur <thomaslavaur@status.im>
- Mehmet Gonen <mehmet@status.im>
---

## Introduction

This document describes the encoding and verification processes of NomosDA, the data availability (DA) solution used by the Nomos blockchain. NomosDA provides an assurance that all data from Nomos blobs is accessible and verifiable by every network participant.

This document presents an implementation specification describing how:

- Encoders encode blobs they want to upload to the data availability layer.
- Other nodes implement the verification of blobs that were already uploaded to DA.

## Definitions

- **Encoder**: Any actor who performs the encoding process described in this document. This involves committing to the data, generating proofs, and submitting the result to the DA layer.

  In the Nomos architecture, the rollup sequencer typically acts as the encoder, but the role is not exclusive: any actor in the DA layer can also act as an encoder.
- **Verifier**: Verifies its portion of the distributed blob data as per the verification protocol. In the Nomos architecture, the DA nodes act as the verifiers.

## Overview

In the encoding stage, the encoder takes the DA parameters and the padded blob data and creates an initial matrix of data chunks. This matrix is expanded using Reed-Solomon coding, and various commitments and proofs are created for the data.

When a verifier receives a sample, it verifies the data it receives from the encoder and broadcasts the information if the data is verified. Finally, the verifier stores the sample data for the required length of time.

## Construction

The encoder and verifier use the [NomosDA cryptographic protocol](https://www.notion.so/NomosDA-Cryptographic-Protocol-1fd261aa09df816fa97ac81304732e77?pvs=21) to carry out their respective functions. These functions are implemented as abstracted and configurable software entities that allow the original data to be encoded and verified via high-level operations.

### Glossary

| Name | Description | Representation |
| --- | --- | --- |
| `Commitment` | Commitment as per the [NomosDA Cryptographic Protocol](https://www.notion.so/NomosDA-Cryptographic-Protocol-1fd261aa09df816fa97ac81304732e77?pvs=21) | `bytes` |
| `Proof` | Proof as per the [NomosDA Cryptographic Protocol](https://www.notion.so/NomosDA-Cryptographic-Protocol-1fd261aa09df816fa97ac81304732e77?pvs=21) | `bytes` |
| `ChunksMatrix` | Matrix of chunked data. Each chunk is **31 bytes**. Row and column sizes depend on the encoding requirements. | `List[List[bytes]]` |

### Encoder

An encoder takes a set of parameters and the blob data, and creates a matrix of chunks that it uses to compute the necessary cryptographic data. It produces the set of Reed-Solomon (RS) encoded data, the commitments, and the proofs that are needed prior to [dispersal](https://www.notion.so/NomosDA-Dispersal-1fd261aa09df815288c9caf45ed72c95?pvs=21).

```mermaid
flowchart LR
    A[DaEncoderParams] -->|Input| B(Encoder)
    I[31bytes-padded-input] -->|Input| B
    B -->|Creates| D[Chunks matrix]
    D -->|Input| C[NomosDA encoding]
    C --> E{Encoded data📄}
```

#### Encoding Process

The encoder executes the encoding process as follows:

1. The encoder takes the following input parameters:

   ```python
   class DAEncoderParams:
       column_count: usize
       bytes_per_field_element: usize
   ```

   | Name | Description | Representation |
   | --- | --- | --- |
   | `column_count` | The number of subnets available for dispersal in the system | `usize`, `int` in Python |
   | `bytes_per_field_element` | The number of bytes per data chunk. This is set to 31 bytes. Each chunk has 31 bytes rather than 32 to ensure that the chunk value does not exceed the maximum value on the [BLS12-381 elliptic curve](https://electriccoin.co/blog/new-snark-curve/). | `usize`, `int` in Python |

2. The encoder also takes the blob data to be encoded, which must be a multiple of `bytes_per_field_element` bytes in size. Clients are responsible for padding the data so it fits this constraint.
3. The encoder splits the data into `bytes_per_field_element`-sized chunks and arranges these chunks into rows and columns, creating a matrix.
   a. The number of columns of the matrix must match the `column_count` parameter, taking into account the `rs_expansion_factor` (currently fixed at 2).
      i. This means that the size of each row in this matrix is `(bytes_per_field_element * column_count) / rs_expansion_factor` bytes.
   b. The number of rows depends on the size of the data.
4. The data is encoded as per [the cryptographic details](https://www.notion.so/NomosDA-Cryptographic-Protocol-1fd261aa09df816fa97ac81304732e77?pvs=21).
5. The encoder provides the encoded data set:

   | Name | Description | Representation |
   | --- | --- | --- |
   | `data` | Original data | `bytes` |
   | `chunked_data` | Matrix before RS expansion | `ChunksMatrix` |
   | `extended_matrix` | Matrix after RS expansion | `ChunksMatrix` |
   | `row_commitments` | Commitments for each matrix row | `List[Commitment]` |
   | `combined_column_proofs` | Proofs for each matrix column | `List[Proof]` |

   ```python
   class EncodedData:
       data: bytes
       chunked_data: ChunksMatrix
       extended_matrix: ChunksMatrix
       row_commitments: List[Commitment]
       combined_column_proofs: List[Proof]
   ```
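Steps 2 and 3 above can be sketched in Python. This is a minimal illustration of the chunking and row-size arithmetic only, not the reference encoder; `chunkify` and the constant names are hypothetical.

```python
from typing import List

CHUNK_SIZE = 31          # bytes_per_field_element, per the spec
RS_EXPANSION_FACTOR = 2  # currently fixed, per step 3a

def chunkify(data: bytes, column_count: int) -> List[List[bytes]]:
    # Each pre-expansion row holds column_count / RS_EXPANSION_FACTOR chunks,
    # i.e. (bytes_per_field_element * column_count) / rs_expansion_factor bytes,
    # so that RS expansion widens it to column_count chunks.
    assert len(data) % CHUNK_SIZE == 0, "clients must pad the blob first"
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    row_len = column_count // RS_EXPANSION_FACTOR
    return [chunks[i:i + row_len] for i in range(0, len(chunks), row_len)]
```

For example, a 124-byte padded blob with `column_count = 4` yields a 2×2 matrix of 31-byte chunks, which RS expansion would then widen to 2×4.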
#### Encoder Limits

NomosDA does not impose a fixed limit on blob size at the encoding level. However, protocols that involve resource-intensive operations must include upper bounds to prevent abuse. In the case of NomosDA, blob size limits are expected to be enforced as part of the protocol's broader responsibility for resource management and fairness.

Larger blobs naturally result in higher computational and bandwidth costs, particularly for the encoder, who must compute a proof for each column. Without size limits, malicious clients could exploit the system by attempting to stream unbounded data to DA nodes. Since payment is provided before blob dispersal, DA nodes are protected from performing unnecessary work. This enables the protocol to safely accept very large blobs, as the primary computational cost falls on the encoder. The protocol can accommodate generous blob sizes in practice, while rejecting only absurdly large blobs, such as those exceeding 1 GB, to prevent denial-of-service attacks and ensure network stability.

To mitigate abuse, the protocol defines acceptable blob size limits, and DA implementations enforce local mitigation strategies, such as flagging or blacklisting clients that violate these constraints.

### Verifier

A verifier checks the proper encoding of the data blobs it receives. A verifier executes the verification process as follows:

1. The verifier receives a `DAShare` with the required verification data:

   | Name | Description | Representation |
   | --- | --- | --- |
   | `column` | Column chunks (31 bytes each) from the encoded matrix | `List[bytes]` |
   | `column_idx` | Column id (`0..2047`). It is directly related to the `subnetworks` in the [network specification](https://www.notion.so/NomosDA-Network-Specification-1fd261aa09df81188e76cb083791252d?pvs=21). | `u16`, an unsigned 16-bit int. `int` in Python |
   | `combined_column_proof` | Proof of the random linear combination of the column elements | `Proof` |
   | `row_commitments` | Commitments for each matrix row | `List[Commitment]` |
   | `blob_id` | Computed as the hash (**blake2b**) of `row_commitments` | `bytes` |

2. Upon receiving the above data, the verifier checks the column data as per the [cryptographic details](https://www.notion.so/NomosDA-Cryptographic-Protocol-1fd261aa09df816fa97ac81304732e77?pvs=21). If the verification is successful, the node triggers the [replication protocol](https://www.notion.so/NomosDA-Subnetwork-Replication-1fd261aa09df811d93f8c6280136bfbb?pvs=21) and stores the blob.

```python
from hashlib import blake2b

class DAShare:
    column: Column
    column_idx: u16
    combined_column_proof: Proof
    row_commitments: List[Commitment]

    def blob_id(self) -> BlobId:
        # blob_id is the 32-byte blake2b digest of the row commitments.
        hasher = blake2b(digest_size=32)
        for c in self.row_commitments:
            hasher.update(bytes(c))
        return hasher.digest()
```
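As a concrete illustration of the `blob_id` derivation, the identifier reduces to hashing the concatenated row commitments with blake2b and a 32-byte digest. The commitment byte strings below are placeholders, not real BLS12-381 commitments:

```python
from hashlib import blake2b

# Placeholder commitments standing in for real row commitments.
row_commitments = [b"\x01" * 48, b"\x02" * 48]

hasher = blake2b(digest_size=32)
for c in row_commitments:
    hasher.update(bytes(c))
blob_id = hasher.digest()

assert len(blob_id) == 32
```

Because the digest is deterministic, any node holding the same `row_commitments` derives the same `blob_id`.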
### Verification Logic

```mermaid
sequenceDiagram
    participant N as Node
    participant S as Subnetwork Column N
    loop For each incoming blob column
        N-->>N: If blob is valid
        N-->>S: Replication
        N->>N: Stores blob
    end
```

## Details

The encoder and verifier processes described above make use of a variety of cryptographic functions to facilitate the correct verification of column data by verifiers. These functions rely on primitives such as polynomial commitments and Reed-Solomon erasure codes, the details of which are outside the scope of this document. These details, as well as introductions to the cryptographic primitives being used, can be found in the NomosDA Cryptographic Protocol:

[NomosDA Cryptographic Protocol](https://www.notion.so/NomosDA-Cryptographic-Protocol-1fd261aa09df816fa97ac81304732e77?pvs=21)

## References

- Encoder Specification: [GitHub/encoder.py](https://github.com/logos-co/nomos-specs/blob/master/da/encoder.py)
- Verifier Specification: [GitHub/verifier.py](https://github.com/logos-co/nomos-specs/blob/master/da/verifier.py)
- Cryptographic protocol: [NomosDA Cryptographic Protocol](https://www.notion.so/NomosDA-Cryptographic-Protocol-1fd261aa09df816fa97ac81304732e77?pvs=21)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
@@ -1,255 +0,0 @@
---
title: NOMOS-DA-NETWORK
name: NomosDA Network
status: raw
category:
tags: network, data-availability, da-nodes, executors, sampling
editor: Daniel Sanchez Quiros <danielsq@status.im>
contributors:
- Álvaro Castro-Castilla <alvaro@status.im>
- Daniel Kashepava <danielkashepava@status.im>
- Gusto Bacvinka <augustinas@status.im>
- Filip Dimitrijevic <filip@status.im>
---

## Introduction

NomosDA is the scalability solution protocol for data availability within the Nomos network.
This document delineates the protocol's structure at the network level,
identifies participants,
and describes the interactions among its components.
Please note that this document does not delve into the cryptographic aspects of the design.
A comprehensive specification of the cryptographic operations is a work in progress.

## Objectives

NomosDA was created to ensure that data from Nomos zones is distributed, verifiable, immutable, and accessible.
At the same time, it is optimised for the following properties:

- **Decentralization**: NomosDA’s data availability guarantees must be achieved with minimal trust assumptions
and without centralised actors. Therefore,
permissioned DA schemes involving a Data Availability Committee (DAC) had to be avoided in the design.
Schemes that require some nodes to download the entire blob data were also ruled out
due to the disproportionate role played by these “supernodes”.

- **Scalability**: NomosDA is intended to be a bandwidth-scalable protocol, ensuring that its functions are maintained as the Nomos network grows. Therefore, NomosDA was designed to minimise the amount of data sent to participants, reducing the communication bottleneck and allowing more parties to participate in the DA process.

To achieve the above properties, NomosDA splits up zone data and
distributes it among network participants,
with cryptographic properties used to verify the data’s integrity.
A major feature of this design is that parties who wish to receive an assurance of data availability
can do so very quickly and with minimal hardware requirements.
However, this comes at the cost of additional complexity and resources required by more integral participants.

## Requirements

In order to ensure that the above objectives are met,
the NomosDA network requires a group of participants
that undertake a greater burden in terms of active involvement in the protocol.
Recognising that not all node operators can do so,
NomosDA assigns different roles to different kinds of participants,
depending on their ability and willingness to contribute more computing power
and bandwidth to the protocol.
It was therefore necessary for NomosDA to be implemented as an opt-in Service Network.

Because the NomosDA network has an arbitrary number of participants,
and the data is split into a fixed number of portions (see the [Encoding & Verification Specification](https://www.notion.so/NomosDA-Encoding-Verification-4d8ca269e96d4fdcb05abc70426c5e7c)),
it was necessary to define exactly how each portion is assigned to a participant who will receive and verify it.
This assignment algorithm must also be flexible enough to ensure smooth operation in a variety of scenarios,
including where there are more or fewer participants than the number of portions.

## Overview

### Network Participants

The NomosDA network includes three categories of participants:

- **Executors**: Tasked with the encoding and dispersal of data blobs.
- **DA Nodes**: Receive and verify the encoded data,
subsequently temporarily storing it for further network validation through sampling.
- **Light Nodes**: Employ sampling to ascertain data availability.

### Network Distribution

The NomosDA network is segmented into `num_subnets` subnetworks.
These subnetworks represent subsets of peers from the overarching network,
each responsible for a distinct portion of the distributed encoded data.
Peers in the network may engage in one or multiple subnetworks,
contingent upon network size and participant count.

### Sub-protocols

The NomosDA protocol consists of the following sub-protocols:

- **Dispersal**: Describes how executors distribute encoded data blobs to subnetworks.
  [NomosDA Dispersal](https://www.notion.so/NomosDA-Dispersal-1818f96fb65c805ca257cb14798f24d4?pvs=21)
- **Replication**: Defines how DA nodes distribute encoded data blobs within subnetworks.
  [NomosDA Subnetwork Replication](https://www.notion.so/NomosDA-Subnetwork-Replication-1818f96fb65c80119fa0e958a087cc2b?pvs=21)
- **Sampling**: Used by sampling clients (e.g., light clients) to verify the availability of previously dispersed
and replicated data.
  [NomosDA Sampling](https://www.notion.so/NomosDA-Sampling-1538f96fb65c8031a44cf7305d271779?pvs=21)
- **Reconstruction**: Describes gathering and decoding dispersed data back into its original form.
  [NomosDA Reconstruction](https://www.notion.so/NomosDA-Reconstruction-1828f96fb65c80b2bbb9f4c5a0cf26a5?pvs=21)
- **Indexing**: Tracks and exposes blob metadata on-chain.
  [NomosDA Indexing](https://www.notion.so/NomosDA-Indexing-1bb8f96fb65c8044b635da9df20c2411?pvs=21)

## Construction

### NomosDA Network Registration

Entities wishing to participate in NomosDA must declare their role via the [SDP](https://www.notion.so/Final-Draft-Validator-Role-Protocol-17b8f96fb65c80c69c2ef55e22e29506) (Service Declaration Protocol).
Once declared, they are accounted for in the subnetwork construction.

This enables participation in:

- Dispersal (as executor)
- Replication & sampling (as DA node)
- Sampling (as light node)

### Subnetwork Assignment

The NomosDA network comprises `num_subnets` subnetworks,
which are virtual in nature.
A subnetwork is a subset of peers grouped together so that nodes know whom to connect with;
each subnetwork is a grouping of peers tasked with executing the dispersal and replication sub-protocols.
In each subnetwork, participants establish a fully connected overlay,
ensuring all nodes maintain permanent connections with peers in the same subnetwork
for the lifetime of the SDP set.
Nodes refer to the Data Availability SDP set to ascertain their connectivity requirements across subnetworks.

#### Assignment Algorithm

The concrete distribution algorithm is described in the following specification:
[DA Subnetwork Assignation](https://www.notion.so/DA-Subnetwork-Assignation-217261aa09df80fc8bb9cf46092741ce)
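The concrete algorithm lives in the external specification above. Purely to illustrate the shape of such an assignment (this is a toy, not the real algorithm), a deterministic round-robin distribution of declared nodes across `num_subnets` subnetworks could look like:

```python
from typing import List, Sequence

def round_robin_assign(node_ids: Sequence[str], num_subnets: int) -> List[List[str]]:
    # Toy illustration only: cycle declared nodes through the subnetworks
    # so every subnetwork gets roughly the same number of members.
    subnets: List[List[str]] = [[] for _ in range(num_subnets)]
    for i, node in enumerate(node_ids):
        subnets[i % num_subnets].append(node)
    return subnets
```

Any real assignment must additionally handle the cases noted in the Requirements section, where there are more or fewer participants than subnetworks.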
## Executor Connections

Each executor maintains a connection with one peer per subnetwork,
necessitating at least `num_subnets` stable and healthy connections.
Executors are expected to allocate adequate resources to sustain these connections.
An example algorithm for peer selection would be:

```python
from typing import Sequence, Set

def select_peers(
    subnetworks: Sequence[Set[PeerId]],
    filtered_subnetworks: Set[int],
    filtered_peers: Set[PeerId]
) -> Set[PeerId]:
    # Pick one available peer from every subnetwork that is not filtered out.
    result = set()
    for i, subnetwork in enumerate(subnetworks):
        available_peers = subnetwork - filtered_peers
        if i not in filtered_subnetworks and available_peers:
            result.add(next(iter(available_peers)))
    return result
```
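As a self-contained usage sketch of the selection above (the peer IDs are illustrative strings; the real `PeerId` type comes from the networking layer):

```python
from typing import Sequence, Set

PeerId = str  # illustrative stand-in for the real peer identifier type

def select_peers(
    subnetworks: Sequence[Set[PeerId]],
    filtered_subnetworks: Set[int],
    filtered_peers: Set[PeerId],
) -> Set[PeerId]:
    # One available peer per subnetwork, skipping filtered subnetworks/peers.
    result = set()
    for i, subnetwork in enumerate(subnetworks):
        available_peers = subnetwork - filtered_peers
        if i not in filtered_subnetworks and available_peers:
            result.add(next(iter(available_peers)))
    return result

subnetworks = [{"peer-a", "peer-b"}, {"peer-c"}, {"peer-d"}]
selected = select_peers(subnetworks, filtered_subnetworks={2}, filtered_peers={"peer-c"})
# Subnetwork 1 holds only a filtered peer and subnetwork 2 is itself filtered,
# so exactly one peer is chosen, from subnetwork 0.
```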
## NomosDA Protocol Steps

### Dispersal

1. The NomosDA protocol is initiated by executors,
who perform data encoding as outlined in the [Encoding Specification](https://www.notion.so/NomosDA-Encoding-Verification-4d8ca269e96d4fdcb05abc70426c5e7c).
2. Executors prepare and distribute each encoded data portion
to its designated subnetwork (from `0` to `num_subnets - 1`).
3. Executors might opt to perform sampling to confirm successful dispersal.
4. Post-dispersal, executors publish the dispersed `blob_id` and metadata to the mempool. <!-- TODO: add link to dispersal document-->

### Replication

DA nodes receive columns from dispersal or replication
and validate the data encoding.
Upon successful validation,
they replicate the validated column to connected peers within their subnetwork.
Replication occurs once per blob; subsequent validations of the same blob are discarded.

### Sampling

1. Sampling is [invoked based on the node's current role](https://www.notion.so/1538f96fb65c8031a44cf7305d271779?pvs=25#15e8f96fb65c8006b9d7f12ffdd9a159).
2. The node selects `sample_size` random subnetworks
and queries each for the availability of the corresponding column for the sampled blob. Sampling is deemed successful only if all queried subnetworks respond affirmatively.

- If `num_subnets` is 2048, `sample_size` is [20 as per the sampling research](https://www.notion.so/1708f96fb65c80a08c97d728cb8476c3?pvs=25#1708f96fb65c80bab6f9c6a946940078).
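The sampling rule above, pick `sample_size` distinct subnetworks at random and succeed only if every query is answered affirmatively, can be sketched as follows. The `query_subnet` callback is a hypothetical stand-in for the actual network request:

```python
import random
from typing import Callable

def sample_blob(
    blob_id: bytes,
    num_subnets: int,
    sample_size: int,
    query_subnet: Callable[[int, bytes], bool],
) -> bool:
    # Choose sample_size distinct subnetworks uniformly at random.
    chosen = random.sample(range(num_subnets), sample_size)
    # Sampling succeeds only if every queried subnetwork holds the column.
    return all(query_subnet(subnet, blob_id) for subnet in chosen)
```

With `num_subnets = 2048` and `sample_size = 20`, a single unavailable column among the sampled subnetworks is enough to make the sampling fail.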
```mermaid
sequenceDiagram
    SamplingClient ->> DANode_1: Request
    DANode_1 -->> SamplingClient: Response
    SamplingClient ->> DANode_2: Request
    DANode_2 -->> SamplingClient: Response
    SamplingClient ->> DANode_n: Request
    DANode_n -->> SamplingClient: Response
```

### Network Schematics

The overall network and protocol interactions are represented by the following diagram:

```mermaid
flowchart TD
    subgraph Replication
        subgraph Subnetwork_N
            N10 -->|Replicate| N20
            N20 -->|Replicate| N30
            N30 -->|Replicate| N10
        end
        subgraph ...
        end
        subgraph Subnetwork_0
            N1 -->|Replicate| N2
            N2 -->|Replicate| N3
            N3 -->|Replicate| N1
        end
    end
    subgraph Sampling
        N9 -->|Sample 0| N2
        N9 -->|Sample S| N20
    end
    subgraph Dispersal
        Executor -->|Disperse| N1
        Executor -->|Disperse| N10
    end
```

## Details

### Network specifics

The NomosDA network is engineered for connection efficiency.
Executors manage numerous open connections,
utilizing their resource capabilities.
DA nodes, with their resource constraints,
are designed to maximize connection reuse.

NomosDA uses [multiplexed](https://docs.libp2p.io/concepts/transports/quic/#quic-native-multiplexing) streams over [QUIC](https://docs.libp2p.io/concepts/transports/quic/) connections.
For each sub-protocol, a stream protocol ID is defined to negotiate the protocol,
triggering the specific protocol once established:

- Dispersal: `/nomos/da/{version}/dispersal`
- Replication: `/nomos/da/{version}/replication`
- Sampling: `/nomos/da/{version}/sampling`
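These protocol IDs can be derived mechanically. A small sketch follows; the concrete `version` formatting is an assumption, since the spec only fixes the `/nomos/da/{version}/<sub-protocol>` shape:

```python
DA_SUB_PROTOCOLS = ("dispersal", "replication", "sampling")

def stream_protocol_id(version: str, sub_protocol: str) -> str:
    # Build the stream protocol ID used to negotiate a NomosDA sub-protocol
    # over a multiplexed QUIC connection.
    assert sub_protocol in DA_SUB_PROTOCOLS
    return f"/nomos/da/{version}/{sub_protocol}"
```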
Through these multiplexed streams,
DA nodes can utilize the same connection for all sub-protocols.
This, combined with virtual subnetworks (membership sets),
ensures the overlay node distribution is scalable for networks of any size.

## References

- [Encoding Specification](https://www.notion.so/NomosDA-Encoding-Verification-4d8ca269e96d4fdcb05abc70426c5e7c)
- [Encoding & Verification Specification](https://www.notion.so/NomosDA-Encoding-Verification-4d8ca269e96d4fdcb05abc70426c5e7c)
- [NomosDA Dispersal](https://www.notion.so/NomosDA-Dispersal-1818f96fb65c805ca257cb14798f24d4?pvs=21)
- [NomosDA Subnetwork Replication](https://www.notion.so/NomosDA-Subnetwork-Replication-1818f96fb65c80119fa0e958a087cc2b?pvs=21)
- [DA Subnetwork Assignation](https://www.notion.so/DA-Subnetwork-Assignation-217261aa09df80fc8bb9cf46092741ce)
- [NomosDA Sampling](https://www.notion.so/NomosDA-Sampling-1538f96fb65c8031a44cf7305d271779?pvs=21)
- [NomosDA Reconstruction](https://www.notion.so/NomosDA-Reconstruction-1828f96fb65c80b2bbb9f4c5a0cf26a5?pvs=21)
- [NomosDA Indexing](https://www.notion.so/NomosDA-Indexing-1bb8f96fb65c8044b635da9df20c2411?pvs=21)
- [SDP](https://www.notion.so/Final-Draft-Validator-Role-Protocol-17b8f96fb65c80c69c2ef55e22e29506)
- [invoked based on the node's current role](https://www.notion.so/1538f96fb65c8031a44cf7305d271779?pvs=25#15e8f96fb65c8006b9d7f12ffdd9a159)
- [20 as per the sampling research](https://www.notion.so/1708f96fb65c80a08c97d728cb8476c3?pvs=25#1708f96fb65c80bab6f9c6a946940078)
- [multiplexed](https://docs.libp2p.io/concepts/transports/quic/#quic-native-multiplexing)
- [QUIC](https://docs.libp2p.io/concepts/transports/quic/)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
@@ -1,236 +0,0 @@
---
title: P2P-HARDWARE-REQUIREMENTS
name: Nomos p2p Network Hardware Requirements Specification
status: raw
category: infrastructure
tags: [hardware, requirements, nodes, validators, services]
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Filip Dimitrijevic <filip@status.im>
---

## Abstract

This specification defines the hardware requirements for running various types of Nomos blockchain nodes. Hardware needs vary significantly based on the node's role, from lightweight verification nodes to high-performance Zone Executors. The requirements are designed to support diverse participation levels while ensuring network security and performance.

## Motivation

The Nomos network is designed to be inclusive and accessible across a wide range of hardware configurations. By defining clear hardware requirements for different node types, we enable:

1. **Inclusive Participation**: Allow users with limited resources to participate as Light Nodes
2. **Scalable Infrastructure**: Support varying levels of network participation based on available resources
3. **Performance Optimization**: Ensure adequate resources for computationally intensive operations
4. **Network Security**: Maintain network integrity through properly resourced validator nodes
5. **Service Quality**: Define requirements for optional services that enhance network functionality

**Important Notice**: These hardware requirements are preliminary and subject to revision based on implementation testing and real-world network performance data.

## Specification

### Node Types Overview

Hardware requirements vary based on the node's role and services:

- **Light Node**: Minimal verification with minimal resources
- **Basic Bedrock Node**: Standard validation participation
- **Service Nodes**: Enhanced capabilities for optional network services

### Light Node

Light Nodes provide network verification with minimal resource requirements, suitable for resource-constrained environments.

**Target Use Cases:**

- Mobile devices and smartphones
- Single-board computers (Raspberry Pi, etc.)
- IoT devices with network connectivity
- Users with limited hardware resources

**Hardware Requirements:**

| Component | Specification |
|-----------|---------------|
| **CPU** | Low-power processor (smartphone/SBC capable) |
| **Memory (RAM)** | 512 MB |
| **Storage** | Minimal (a few GB) |
| **Network** | Reliable connection, 1 Mbps free bandwidth |

### Basic Bedrock Node (Validator)

Basic validators participate in Bedrock consensus using typical consumer hardware.

**Target Use Cases:**

- Individual validators on consumer hardware
- Small-scale validation operations
- Entry-level network participation

**Hardware Requirements:**

| Component | Specification |
|-----------|---------------|
| **CPU** | 2 cores, 2 GHz modern multi-core processor |
| **Memory (RAM)** | 1 GB minimum |
| **Storage** | SSD with 100+ GB free space, expandable |
| **Network** | Reliable connection, 1 Mbps free bandwidth |

### Service-Specific Requirements

Nodes can optionally run additional Bedrock Services that require enhanced resources beyond basic validation.

#### Data Availability (DA) Service

DA Service nodes store and serve data shares for the network's data availability layer.

**Service Role:**

- Store blockchain data and blob data long-term
- Serve data shares to requesting nodes
- Maintain high availability for data retrieval

**Additional Requirements:**

| Component | Specification | Rationale |
|-----------|---------------|-----------|
| **CPU** | Same as Basic Bedrock Node | Standard processing needs |
| **Memory (RAM)** | Same as Basic Bedrock Node | Standard memory needs |
| **Storage** | **Fast SSD, 500+ GB free** | Long-term chain and blob storage |
| **Network** | **High bandwidth (10+ Mbps)** | Concurrent data serving |
| **Connectivity** | **Stable, accessible external IP** | Direct peer connections |

**Network Requirements:**

- Capacity to handle multiple concurrent connections
- Stable external IP address for direct peer access
- Low latency for efficient data serving
#### Blend Protocol Service
|
||||
|
||||
Blend Protocol nodes provide anonymous message routing capabilities.
|
||||
|
||||
**Service Role:**
|
||||
|
||||
- Route messages anonymously through the network
|
||||
- Provide timing obfuscation for privacy
|
- Maintain multiple concurrent connections

**Additional Requirements:**

| Component | Specification | Rationale |
|-----------|---------------|-----------|
| **CPU** | Same as Basic Bedrock Node | Standard processing needs |
| **Memory (RAM)** | Same as Basic Bedrock Node | Standard memory needs |
| **Storage** | Same as Basic Bedrock Node | Standard storage needs |
| **Network** | **Stable connection (10+ Mbps)** | Multiple concurrent connections |
| **Connectivity** | **Stable, accessible external IP** | Direct peer connections |

**Network Requirements:**

- Low-latency connection for effective message blending
- Stable connection for timing obfuscation
- Capability to handle multiple simultaneous connections

#### Executor Network Service

Zone Executors perform the most computationally intensive work in the network.

**Service Role:**

- Execute Zone state transitions
- Generate zero-knowledge proofs
- Process complex computational workloads

**Critical Performance Note**: Zone Executors perform the heaviest computational work in the network. High-performance hardware is crucial for effective participation and may provide competitive advantages in execution markets.

**Hardware Requirements:**

| Component | Specification | Rationale |
|-----------|---------------|-----------|
| **CPU** | **Very high-performance multi-core processor** | Zone logic execution and ZK proving |
| **Memory (RAM)** | **32+ GB strongly recommended** | Complex Zone execution requirements |
| **Storage** | Same as Basic Bedrock Node | Standard storage needs |
| **GPU** | **Highly recommended/often necessary** | Efficient ZK proof generation |
| **Network** | **High bandwidth (10+ Mbps)** | Data dispersal and high connection load |

**GPU Requirements:**

- **NVIDIA**: CUDA-enabled GPU (RTX 3090 or equivalent recommended)
- **Apple**: Metal-compatible Apple Silicon
- **Performance Impact**: A strong GPU significantly reduces proving time

**Network Requirements:**

- Support for **2048+ direct UDP connections** to DA Nodes (for blob publishing)
- High bandwidth for data dispersal operations
- Stable connection for continuous operation

*Note: DA Nodes utilizing [libp2p](https://docs.libp2p.io/) connections need sufficient capacity to receive and serve data shares over many connections.*
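For nodes expected to hold thousands of simultaneous connections, a preflight check of the process file-descriptor limit on Unix-like systems is a cheap safeguard. This is an illustrative heuristic, not part of the specification: QUIC multiplexes many connections over few UDP sockets, so descriptor pressure comes mostly from auxiliary resources (files, metrics, TCP fallbacks), and the check below is deliberately conservative.

```python
import resource

REQUIRED_CONNECTIONS = 2048  # minimum direct UDP connections to DA Nodes
HEADROOM = 512               # illustrative extra descriptors for storage, logs, RPC

def has_descriptor_headroom(required=REQUIRED_CONNECTIONS, headroom=HEADROOM):
    """Return True if the soft file-descriptor limit can cover the
    required connection count plus operational headroom."""
    soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if soft == resource.RLIM_INFINITY:
        return True
    return soft >= required + headroom
```

An operator script could run this at startup and warn before raising the limit with `ulimit -n` or the service manager's equivalent.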

## Implementation Requirements

### Minimum Requirements

All Nomos nodes MUST meet:

1. **Basic connectivity** to the Nomos network via [libp2p](https://docs.libp2p.io/)
2. **Adequate storage** for their designated role
3. **Sufficient processing power** for their service level
4. **Reliable network connection** with appropriate bandwidth for [QUIC](https://docs.libp2p.io/concepts/transports/quic/) transport
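A node launcher could encode per-role minimums as data and refuse to start an under-provisioned configuration. The figures below are illustrative placeholders only; the normative values are the requirement tables above and the Basic Bedrock Node profile defined elsewhere in this specification.

```python
# Illustrative per-role minimums (placeholders, not normative values).
ROLE_MINIMUMS = {
    "light":     {"ram_gb": 1,  "bandwidth_mbps": 1},
    "validator": {"ram_gb": 8,  "bandwidth_mbps": 10},
    "executor":  {"ram_gb": 32, "bandwidth_mbps": 10},
}

def meets_minimums(role, ram_gb, bandwidth_mbps):
    """Check a host's resources against the minimums for its role."""
    req = ROLE_MINIMUMS[role]
    return ram_gb >= req["ram_gb"] and bandwidth_mbps >= req["bandwidth_mbps"]
```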

### Optional Enhancements

Node operators MAY implement:

- Hardware redundancy for critical services
- Enhanced cooling for high-performance configurations
- Dedicated network connections for service nodes utilizing [libp2p](https://docs.libp2p.io/) protocols
- Backup power systems for continuous operation

### Resource Scaling

Requirements may vary based on:

- **Network Load**: Higher network activity increases resource demands
- **Zone Complexity**: More complex Zones require additional computational resources
- **Service Combinations**: Running multiple services simultaneously increases requirements
- **Geographic Location**: Network latency affects optimal performance requirements

## Security Considerations

### Hardware Security

1. **Secure Storage**: Use encrypted storage for sensitive node data
2. **Network Security**: Implement proper firewall configurations
3. **Physical Security**: Secure physical access to node hardware
4. **Backup Strategies**: Maintain secure backups of critical data

### Performance Security

1. **Resource Monitoring**: Monitor resource usage to detect anomalies
2. **Redundancy**: Plan for hardware failures in critical services
3. **Isolation**: Consider containerization or virtualization for service isolation
4. **Update Management**: Maintain secure update procedures for hardware drivers

## Performance Characteristics

### Scalability

- **Light Nodes**: Minimal resource footprint, high scalability
- **Validators**: Moderate resource usage, network-dependent scaling
- **Service Nodes**: High resource usage, specialized scaling requirements

### Resource Efficiency

- **CPU Usage**: Optimized algorithms for different hardware tiers
- **Memory Usage**: Efficient data structures for constrained environments
- **Storage Usage**: Configurable retention policies and compression
- **Network Usage**: Adaptive bandwidth utilization based on [libp2p](https://docs.libp2p.io/) capacity and [QUIC](https://docs.libp2p.io/concepts/transports/quic/) connection efficiency

## References

1. [libp2p protocol](https://docs.libp2p.io/)
2. [QUIC protocol](https://docs.libp2p.io/concepts/transports/quic/)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

---
title: P2P-NAT-SOLUTION
name: Nomos P2P Network NAT Solution Specification
status: raw
category: networking
tags: [nat, traversal, autonat, upnp, pcp, nat-pmp]
editor: Antonio Antonino <antonio@status.im>
contributors:
- Álvaro Castro-Castilla <alvaro@status.im>
- Daniel Sanchez-Quiros <danielsq@status.im>
- Petar Radovic <petar@status.im>
- Gusto Bacvinka <augustinas@status.im>
- Youngjoon Lee <youngjoon@status.im>
- Filip Dimitrijevic <filip@status.im>
---

## Abstract

This specification defines a comprehensive NAT (Network Address Translation) traversal solution for the Nomos P2P network. The solution enables nodes to automatically determine their NAT status and establish both outbound and inbound connections regardless of network configuration. The strategy combines [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md), dynamic port mapping protocols, and continuous verification to maximize public reachability while maintaining decentralized operation.

## Motivation

Network Address Translation presents a critical challenge for Nomos participants, particularly those operating on consumer hardware without technical expertise. The Nomos network requires a NAT traversal solution that:

1. **Automatic Operation**: Works out-of-the-box without user configuration
2. **Inclusive Participation**: Enables nodes on consumer hardware to participate effectively
3. **Decentralized Approach**: Leverages the existing Nomos P2P network rather than centralized services
4. **Progressive Fallback**: Escalates through increasingly complex protocols as needed
5. **Dynamic Adaptation**: Handles changing network environments and configurations

The solution must ensure that nodes can both establish outbound connections and accept inbound connections from other peers, maintaining network connectivity across diverse NAT configurations.

## Specification

### Terminology

- **Public Node**: A node that is publicly reachable via a public IP address or valid port mapping
- **Private Node**: A node that is not publicly reachable due to NAT/firewall restrictions
- **Dialing**: The process of establishing a connection using the [libp2p protocol](https://docs.libp2p.io/) stack
- **NAT Status**: Whether a node is publicly reachable or hidden behind NAT

### Key Design Principles

#### Optional Configuration

The NAT traversal strategy must work out-of-the-box whenever possible. Users who do not want to engage in configuration should only need to install the node software package. However, users requiring full control must be able to configure every aspect of the strategy.

#### Decentralized Operation

The solution leverages the existing Nomos P2P network for coordination rather than relying on centralized third-party services. This maintains the decentralized nature of the network while providing necessary NAT traversal capabilities.

#### Progressive Fallback

The protocol begins with lightweight checks and escalates through more complex and resource-intensive protocols. Failure at any step moves the protocol to the next stage in the strategy, ensuring maximum compatibility across network configurations.

#### Dynamic Network Environment

Unless explicitly configured for static addresses, each node's public or private status is assumed to be dynamic. A once publicly-reachable node can become unreachable and vice versa, requiring continuous monitoring and adaptation.

### Node Discovery Considerations

The Nomos public network encourages participation from a large number of nodes, many deployed through simple installation procedures. Some nodes will not achieve Public status, but the discovery protocol must track these peers and allow other nodes to discover them. This prevents network partitioning and ensures Private nodes remain accessible to other participants.

### NAT Traversal Protocol

#### Protocol Requirements

**Each node MUST:**

- Run an [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) client, except for nodes statically configured as Public
- Use the [Identify protocol](https://github.com/libp2p/specs/blob/master/identify/README.md) to advertise support for:
  - `/nomos/autonat/2/dial-request` for main network
  - `/nomos-testnet/autonat/2/dial-request` for public testnet
  - `/nomos/autonat/2/dial-back` and `/nomos-testnet/autonat/2/dial-back` respectively

#### NAT State Machine

The NAT traversal process follows a multi-phase state machine:

```mermaid
graph TD
    Start@{shape: circle, label: "Start"} -->|Preconfigured public IP or port mapping| StaticPublic[Statically configured as<br/>**Public**]
    subgraph Phase0 [Phase 0]
        Start -->|Default configuration| Boot
    end
    subgraph Phase1 [Phase 1]
        Boot[Bootstrap and discover AutoNAT servers] --> Inspect
        Inspect[Inspect own IP addresses] -->|At least 1 IP address in the public range| ConfirmPublic[AutoNAT]
    end
    subgraph Phase2 [Phase 2]
        Inspect -->|No IP addresses in the public range| MapPorts[Port Mapping Client<br/>UPnP/NAT-PMP/PCP]
        MapPorts -->|Successful port map| ConfirmMapPorts[AutoNAT]
    end
    ConfirmPublic -->|Node's IP address reachable by AutoNAT server| Public[**Public** Node]
    ConfirmPublic -->|Node's IP address not reachable by AutoNAT server or Timeout| MapPorts
    ConfirmMapPorts -->|Mapped IP address and port reachable by AutoNAT server| Public
    ConfirmMapPorts -->|Mapped IP address and port not reachable by AutoNAT server or Timeout| Private
    MapPorts -->|Failure or Timeout| Private[**Private** Node]
    subgraph Phase3 [Phase 3]
        Public --> Monitor
        Private --> Monitor
    end
    Monitor[Network Monitoring] -->|Restart| Inspect
```
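The transitions in the diagram can be sketched as a small table-driven state machine. The state and event names below are illustrative, chosen to mirror the diagram's edges rather than taken from any implementation.

```python
from enum import Enum, auto

class NatState(Enum):
    BOOT = auto()            # Phase 0: bootstrap, discover AutoNAT servers
    INSPECT = auto()         # Phase 1: inspect own IP addresses
    CONFIRM_PUBLIC = auto()  # AutoNAT check of an own public address
    MAP_PORTS = auto()       # Phase 2: UPnP/NAT-PMP/PCP port mapping
    CONFIRM_MAPPED = auto()  # AutoNAT check of the mapped address
    PUBLIC = auto()          # Phase 3: monitoring as a Public node
    PRIVATE = auto()         # Phase 3: monitoring as a Private node

TRANSITIONS = {
    (NatState.BOOT, "bootstrapped"): NatState.INSPECT,
    (NatState.INSPECT, "public_ip_found"): NatState.CONFIRM_PUBLIC,
    (NatState.INSPECT, "no_public_ip"): NatState.MAP_PORTS,
    (NatState.CONFIRM_PUBLIC, "reachable"): NatState.PUBLIC,
    (NatState.CONFIRM_PUBLIC, "unreachable"): NatState.MAP_PORTS,
    (NatState.MAP_PORTS, "mapped"): NatState.CONFIRM_MAPPED,
    (NatState.MAP_PORTS, "mapping_failed"): NatState.PRIVATE,
    (NatState.CONFIRM_MAPPED, "reachable"): NatState.PUBLIC,
    (NatState.CONFIRM_MAPPED, "unreachable"): NatState.PRIVATE,
    (NatState.PUBLIC, "network_changed"): NatState.INSPECT,
    (NatState.PRIVATE, "network_changed"): NatState.INSPECT,
}

def next_state(state, event):
    """Look up the next state; unknown (state, event) pairs keep the state."""
    return TRANSITIONS.get((state, event), state)
```

Keeping the transitions as data makes it easy to verify that every edge in the diagram is covered and that both monitoring states restart at Phase 1.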

### Phase Implementation

#### Phase 0: Bootstrapping and Identifying Public Nodes

If the node is statically configured by the operator to be Public, the procedure stops here.

The node utilizes bootstrapping and discovery mechanisms to find other Public nodes. The [Identify protocol](https://github.com/libp2p/specs/blob/master/identify/README.md) confirms which detected Public nodes support [AutoNAT v2](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md).

#### Phase 1: NAT Detection

The node starts an [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) client and inspects its own addresses. For each public IP address, the node verifies public reachability via [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md). If any public IP addresses are confirmed, the node assumes Public status and moves to Phase 3. Otherwise, it continues to Phase 2.

#### Phase 2: Automated Port Mapping

The node attempts to secure a port mapping on the default gateway using:

- **[PCP](https://datatracker.ietf.org/doc/html/rfc6887)** (Port Control Protocol) - Most reliable
- **[NAT-PMP](https://datatracker.ietf.org/doc/html/rfc6886)** (NAT Port Mapping Protocol) - Second most reliable
- **[UPnP-IGD](https://datatracker.ietf.org/doc/html/rfc6970)** (Universal Plug and Play Internet Gateway Device) - Most widely deployed

**Port Mapping Algorithm:**

```python
def try_port_mapping():
    # Step 1: Get the local IPv4 address
    local_ip = get_local_ipv4_address()

    # Step 2: Get the default gateway IPv4 address
    gateway_ip = get_default_gateway_address()

    # Step 3: Abort if the local or gateway IP could not be determined
    if not local_ip or not gateway_ip:
        return "Mapping failed: Unable to get local or gateway IPv4"

    # Step 4: Probe the gateway for protocol support
    supports_pcp = probe_pcp(gateway_ip)
    supports_nat_pmp = probe_nat_pmp(gateway_ip)
    supports_upnp = probe_upnp(gateway_ip)  # Optional, for logging only

    # Steps 5-9: Try protocols in order of reliability
    # (PCP -> NAT-PMP -> UPnP), then retry the protocols whose probe
    # failed as a last resort, since probes can produce false negatives.
    protocols = [
        (supports_pcp, try_pcp_mapping),
        (supports_nat_pmp, try_nat_pmp_mapping),
        (True, try_upnp_mapping),                    # Always try UPnP
        (not supports_pcp, try_pcp_mapping),         # Fallback
        (not supports_nat_pmp, try_nat_pmp_mapping)  # Last resort
    ]

    for supported, mapping_func in protocols:
        if supported:
            mapping = mapping_func(local_ip, gateway_ip)
            if mapping:
                return mapping

    return "Mapping failed: No protocol succeeded"
```

If mapping succeeds, the node uses [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) to confirm public reachability. Upon confirmation, the node assumes Public status. Otherwise, it assumes Private status.

**Port Mapping Sequence:**

```mermaid
sequenceDiagram
    box Node
        participant AutoNAT Client
        participant NAT State Machine
        participant Port Mapping Client
    end
    participant Router

    alt Mapping is successful
        Note left of AutoNAT Client: Phase 2
        Port Mapping Client ->> Router: Requests new mapping
        Router ->> Port Mapping Client: Confirms new mapping
        Port Mapping Client ->> NAT State Machine: Mapping secured
        NAT State Machine ->> AutoNAT Client: Requests confirmation<br/>that mapped address<br/>is publicly reachable

        alt Node asserts Public status
            AutoNAT Client ->> NAT State Machine: Mapped address<br/>is publicly reachable
            Note left of AutoNAT Client: Phase 3<br/>Network Monitoring
        else Node asserts Private status
            AutoNAT Client ->> NAT State Machine: Mapped address<br/>is not publicly reachable
            Note left of AutoNAT Client: Phase 3<br/>Network Monitoring
        end
    else Mapping fails, node asserts Private status
        Note left of AutoNAT Client: Phase 2
        Port Mapping Client ->> Router: Requests new mapping
        Router ->> Port Mapping Client: Refuses new mapping or Timeout
        Port Mapping Client ->> NAT State Machine: Mapping failed
        Note left of AutoNAT Client: Phase 3<br/>Network Monitoring
    end
```

#### Phase 3: Network Monitoring

Unless explicitly configured, nodes must monitor their network status and restart from Phase 1 when changes are detected.

**Public Node Monitoring:**

A Public node must restart when:

- The [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) client no longer confirms public reachability
- A previously successful port mapping is lost or its refresh fails

**Private Node Monitoring:**

A Private node must restart when:

- It gains a new public IP address
- Port mapping is likely to succeed again (e.g., the gateway changed or sufficient time has passed)
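The restart conditions above reduce to a simple predicate over the node's monitored status. A minimal sketch, with illustrative flag names:

```python
def should_restart(is_public, *, reachable=True, mapping_alive=True,
                   new_public_ip=False, gateway_changed=False):
    """Decide whether NAT detection should restart from Phase 1,
    following the Public/Private monitoring rules above."""
    if is_public:
        # Public nodes restart on lost reachability or a lost/failed mapping.
        return (not reachable) or (not mapping_alive)
    # Private nodes restart when a new attempt is likely to succeed.
    return new_public_ip or gateway_changed
```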

**Network Monitoring Sequence:**

```mermaid
sequenceDiagram
    participant AutoNAT Server
    box Node
        participant AutoNAT Client
        participant NAT State Machine
        participant Port Mapping Client
    end
    participant Router

    Note left of AutoNAT Server: Phase 3<br/>Network Monitoring
    par Refresh mapping and monitor changes
        loop periodically refreshes mapping
            Port Mapping Client ->> Router: Requests refresh
            Router ->> Port Mapping Client: Confirms mapping refresh
        end
        break Mapping is lost, the node loses Public status
            Router ->> Port Mapping Client: Refresh failed or mapping dropped
            Port Mapping Client ->> NAT State Machine: Mapping lost
            NAT State Machine ->> NAT State Machine: Restart
        end
    and Monitor public reachability of mapped addresses
        loop periodically checks public reachability
            AutoNAT Client ->> AutoNAT Server: Requests dialback
            AutoNAT Server ->> AutoNAT Client: Dialback successful
        end
        break Public reachability is lost
            AutoNAT Server ->> AutoNAT Client: Dialback failed or Timeout
            AutoNAT Client ->> NAT State Machine: Public reachability lost
            NAT State Machine ->> NAT State Machine: Restart
        end
    end
    Note left of AutoNAT Server: Phase 1
```

### Public Node Responsibilities

**A Public node MUST:**

- Run an [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) server
- Listen on and advertise via the [Identify protocol](https://github.com/libp2p/specs/blob/master/identify/README.md) its publicly reachable [multiaddresses](https://github.com/libp2p/specs/blob/master/addressing/README.md):

  `/{public_peer_ip}/udp/{port}/quic-v1/p2p/{public_peer_id}`

- Periodically renew port mappings according to protocol recommendations
- Maintain high availability for [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) services

### Peer Dialing

Other peers can always dial a Public peer using its publicly reachable [multiaddresses](https://github.com/libp2p/specs/blob/master/addressing/README.md):

`/{public_peer_ip}/udp/{port}/quic-v1/p2p/{public_peer_id}`
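The template above can be filled in with simple string formatting. Note that a full multiaddress spells out the address family, so the sketch below assumes IPv4 and prepends the `/ip4` component (an IPv6 peer would use `/ip6` instead):

```python
def public_multiaddr(public_peer_ip, port, public_peer_id):
    """Build the publicly reachable QUIC multiaddress a Public node
    advertises via Identify. Assumes an IPv4 address."""
    return f"/ip4/{public_peer_ip}/udp/{port}/quic-v1/p2p/{public_peer_id}"
```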

## Implementation Requirements

### Mandatory Components

All Nomos nodes MUST implement:

1. **[AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) client** for NAT status detection
2. **Port mapping clients** for [PCP](https://datatracker.ietf.org/doc/html/rfc6887), [NAT-PMP](https://datatracker.ietf.org/doc/html/rfc6886), and [UPnP-IGD](https://datatracker.ietf.org/doc/html/rfc6970)
3. **[Identify protocol](https://github.com/libp2p/specs/blob/master/identify/README.md)** for capability advertisement
4. **Network monitoring** for status change detection

### Optional Enhancements

Nodes MAY implement:

- Custom port mapping retry strategies
- Enhanced network change detection
- Advanced [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) server load balancing
- Backup connectivity mechanisms

### Configuration Parameters

#### [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) Configuration

```yaml
autonat:
  client:
    dial_timeout: 15s
    max_peer_addresses: 16
    throttle_global_limit: 30
    throttle_peer_limit: 3
  server:
    dial_timeout: 30s
    max_peer_addresses: 16
    throttle_global_limit: 30
    throttle_peer_limit: 3
```

#### Port Mapping Configuration

```yaml
port_mapping:
  pcp:
    timeout: 30s
    lifetime: 7200s  # 2 hours
    retry_interval: 300s
  nat_pmp:
    timeout: 30s
    lifetime: 7200s
    retry_interval: 300s
  upnp:
    timeout: 30s
    lease_duration: 7200s
    retry_interval: 300s
```
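Mappings should be renewed well before the granted lifetime or lease expires, so that there is time to retry a failed refresh. Refreshing at half the lifetime is a common rule of thumb; it is an assumption here, not something the configuration above mandates.

```python
def refresh_interval(lifetime_s, fraction=0.5):
    """Seconds to wait before renewing a port mapping.

    Renewing at a fraction of the granted lifetime (half, by default)
    leaves time for retries before the mapping actually expires.
    """
    if lifetime_s <= 0:
        raise ValueError("lifetime must be positive")
    return max(int(lifetime_s * fraction), 1)
```

With the 7200s lifetime configured above, this schedules a renewal every 3600 seconds.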

## Security Considerations

### NAT Traversal Security

1. **Port Mapping Validation**: Verify that requested port mappings are actually created
2. **[AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) Server Trust**: Implement peer reputation for [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) servers
3. **Gateway Communication**: Secure communication with NAT devices
4. **Address Validation**: Validate public addresses before advertisement

### Privacy Considerations

1. **IP Address Exposure**: Public nodes necessarily expose their IP addresses
2. **Traffic Analysis**: Monitor for patterns that could reveal node behavior
3. **Gateway Information**: Minimize exposure of internal network topology

### Denial of Service Protection

1. **[AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) Rate Limiting**: Implement request throttling for [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) services
2. **Port Mapping Abuse**: Prevent excessive port mapping requests
3. **Resource Exhaustion**: Limit concurrent NAT traversal attempts

## Performance Characteristics

### Scalability

- **[AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) Server Load**: Distributed across Public nodes
- **Port Mapping Overhead**: Minimal ongoing resource usage
- **Network Monitoring**: Efficient periodic checks

### Reliability

- **Fallback Mechanisms**: Multiple protocols ensure high success rates
- **Continuous Monitoring**: Automatic recovery from connectivity loss
- **Protocol Redundancy**: Multiple port mapping protocols increase reliability

## References

1. [Multiaddress spec](https://github.com/libp2p/specs/blob/master/addressing/README.md)
2. [Identify protocol spec](https://github.com/libp2p/specs/blob/master/identify/README.md)
3. [AutoNAT v2 protocol spec](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md)
4. [Circuit Relay v2 protocol spec](https://github.com/libp2p/specs/blob/master/relay/circuit-v2.md)
5. [PCP - RFC 6887](https://datatracker.ietf.org/doc/html/rfc6887)
6. [NAT-PMP - RFC 6886](https://datatracker.ietf.org/doc/html/rfc6886)
7. [UPnP IGD - RFC 6970](https://datatracker.ietf.org/doc/html/rfc6970)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

---
title: P2P-NETWORK-BOOTSTRAPPING
name: Nomos P2P Network Bootstrapping Specification
status: raw
category: networking
tags: [p2p, networking, bootstrapping, peer-discovery, libp2p]
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Álvaro Castro-Castilla <alvaro@status.im>
- Petar Radovic <petar@status.im>
- Gusto Bacvinka <augustinas@status.im>
- Antonio Antonino <antonio@status.im>
- Youngjoon Lee <youngjoon@status.im>
- Filip Dimitrijevic <filip@status.im>
---

## Introduction

Nomos network bootstrapping is the process by which a new node discovers peers and synchronizes with the existing decentralized network. It ensures that a node can:

1. **Discover Peers** – Find other active nodes in the network.
2. **Establish Connections** – Securely connect to trusted peers.
3. **Negotiate (libp2p) Protocols** – Ensure that peers support the protocols the node requires.

## Overview

The Nomos P2P network bootstrapping strategy relies on a designated subset of **bootstrap nodes** to facilitate secure and efficient node onboarding. These nodes serve as the initial entry points for new network participants.

### Key Design Principles

#### Trusted Bootstrap Nodes

A curated set of publicly announced and highly available nodes ensures reliability during initial peer discovery. These nodes are configured with elevated connection limits to handle a high volume of incoming bootstrapping requests from new participants.

#### Node Configuration & Onboarding

New node operators must explicitly configure their instances with the addresses of bootstrap nodes. This configuration may be preloaded or dynamically fetched from a trusted source to minimize manual setup.

#### Network Integration

Upon initialization, the node establishes connections with the bootstrap nodes and begins participating in Nomos networking protocols. Through these connections, the node discovers additional peers, synchronizes with the network state, and engages in protocol-specific communication (e.g., consensus, block propagation).

### Security & Decentralization Considerations

**Trust Minimization**: While bootstrap nodes provide initial connectivity, the network rapidly transitions to decentralized peer discovery to prevent over-reliance on any single entity.

**Authenticated Announcements**: The identities and addresses of bootstrap nodes are publicly verifiable to mitigate impersonation attacks. From the [libp2p documentation](https://docs.libp2p.io/concepts/transports/quic/#quic-in-libp2p):

> To authenticate each others' peer IDs, peers encode their peer ID into a self-signed certificate, which they sign using their host's private key.

**Dynamic Peer Management**: After bootstrapping, nodes continuously refine their peer lists to maintain a resilient and distributed network topology.

This approach ensures **rapid, secure, and scalable** network participation while preserving the decentralized ethos of the Nomos protocol.

## Protocol

### Protocol Overview

The bootstrapping protocol follows libp2p conventions for peer discovery and connection establishment. Implementation details are handled by the underlying libp2p stack with Nomos-specific configuration parameters.

### Bootstrapping Process

#### Step-by-Step bootstrapping process

1. **Node Initial Configuration**: The new node loads pre-configured bootstrap node addresses. Addresses may be `IP`- or `DNS`-based, embedded in a compatible [libp2p PeerId multiaddress](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-ids-in-multiaddrs). Node operators may choose to advertise more than one address; this is out of the scope of this protocol. For example:

   `/ip4/198.51.100.0/udp/4242/p2p/QmYyQSo1c1Ym7orWxLYvCrM2EmxFTANf8wXmmE7DWjhx5N` or

   `/dns/foo.bar.net/udp/4242/p2p/QmYyQSo1c1Ym7orWxLYvCrM2EmxFTANf8wXmmE7DWjhx5N`

2. **Secure Connection**: The node establishes connections to the bootstrap nodes' announced addresses and verifies network identity and protocol compatibility.

3. **Peer Discovery**: The node requests and receives validated peer lists from bootstrap nodes. Each entry includes connectivity details as provided by the peer discovery protocol that engages after the initial connection.

4. **Network Integration**: The node iteratively connects to discovered peers, gradually building its set of peer connections.

5. **Protocol Engagement**: The node establishes the required protocol channels (gossip/consensus/sync) and begins participating in network operations.

6. **Ongoing Maintenance**: The node continuously evaluates and refreshes its peer connections, and ideally drops the connection to the bootstrap node itself. Bootstrap nodes may also choose to close the connection on their side to keep high availability for other joining nodes.

```mermaid
sequenceDiagram
    participant Nomos Network
    participant Node
    participant Bootstrap Node

    Node->>Node: Fetches bootstrapping addresses

    loop Interacts with bootstrap node
        Node->>+Bootstrap Node: Connects
        Bootstrap Node->>-Node: Sends discovered peers information
    end

    loop Connects to Network participants
        Node->>Nomos Network: Engages in connections
        Node->>Nomos Network: Negotiates protocols
    end

    loop Ongoing maintenance
        Node-->>Nomos Network: Evaluates peer connections
        alt Bootstrap connection no longer needed
            Node-->>Bootstrap Node: Disconnects
        else Bootstrap enforces disconnection
            Bootstrap Node-->>Node: Disconnects
        end
    end
```
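Before dialing, the configured addresses from step 1 must be separated into their transport component and the peer ID used for identity verification. A minimal sketch with naive string handling (a real implementation would use a proper multiaddr parser from the libp2p stack):

```python
def split_bootstrap_addr(multiaddr):
    """Split a bootstrap multiaddress into its transport part and peer ID.
    Expects the '/p2p/<peer-id>' suffix shown in step 1."""
    transport, _, peer_id = multiaddr.rpartition("/p2p/")
    if not transport or not peer_id:
        raise ValueError(f"missing /p2p/ peer ID in {multiaddr!r}")
    return transport, peer_id
```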

## Implementation Details

The bootstrapping process for the Nomos p2p network uses the **QUIC** transport as specified in the Nomos network specification.

Bootstrapping is separate from the network's peer discovery protocol; it assumes that a discovery protocol engages as soon as the connection with the bootstrap node is established. Since the Nomos p2p network currently uses `kademlia` for peer discovery, this assumption holds.

### Bootstrap Node Requirements

Bootstrap nodes MUST fulfill the following requirements:

- **High Availability**: Maintain uptime of 99.5% or higher
- **Connection Capacity**: Support a minimum of 1000 concurrent connections
- **Geographic Distribution**: Deploy across multiple regions
- **Protocol Compatibility**: Support all required Nomos network protocols
- **Security**: Implement proper authentication and rate limiting

### Network Configuration

Bootstrap node addresses are distributed through:

- **Hardcoded addresses** in node software releases
- **DNS seeds** for dynamic address resolution
- **Community-maintained lists** with cryptographic verification

## Security Considerations

### Trust Model

Bootstrap nodes operate under a **minimal trust model**:

- Nodes verify peer identities through cryptographic authentication
- Bootstrap connections are temporary and replaced by organic peer discovery
- No single bootstrap node can control network participation

### Attack Mitigation

**Sybil Attack Protection**: Bootstrap nodes implement connection limits and peer verification to prevent malicious flooding.

**Eclipse Attack Prevention**: Nodes connect to multiple bootstrap nodes and rapidly diversify their peer connections.

**Denial of Service Resistance**: Rate limiting and connection throttling protect bootstrap nodes from resource exhaustion attacks.

## Performance Characteristics

### Bootstrapping Metrics

- **Initial Connection Time**: Target < 30 seconds to the first bootstrap node
- **Peer Discovery Duration**: Discover a minimum viable peer set within 2 minutes
- **Network Integration**: Full protocol engagement within 5 minutes

### Resource Requirements

#### Bootstrap Nodes

- Memory: Minimum 4 GB RAM
- Bandwidth: 100 Mbps sustained
- Storage: 50 GB available space

#### Regular Nodes

- Memory: 512 MB for the bootstrapping process
- Bandwidth: 10 Mbps during initial sync
- Storage: Minimal requirements

## References

- P2P Network Specification (internal document)
- [libp2p QUIC Transport](https://docs.libp2p.io/concepts/transports/quic/)
- [libp2p Peer IDs and Addressing](https://docs.libp2p.io/concepts/fundamentals/peers/)
- [Ethereum bootnodes](https://ethereum.org/en/developers/docs/nodes-and-clients/bootnodes/)
- [Bitcoin peer discovery](https://developer.bitcoin.org/devguide/p2p_network.html#peer-discovery)
- [Cardano node connectivity](https://docs.cardano.org/stake-pool-operators/node-connectivity)
- [Cardano peer sharing](https://www.coincashew.com/coins/overview-ada/guide-how-to-build-a-haskell-stakepool-node/part-v-tips/implementing-peer-sharing)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
@@ -1,307 +0,0 @@
---
title: NOMOS-P2P-NETWORK
name: Nomos P2P Network Specification
status: draft
category: networking
tags: [p2p, networking, libp2p, kademlia, gossipsub, quic]
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Filip Dimitrijevic <filip@status.im>
---

## Abstract

This specification defines the peer-to-peer (P2P) network layer for Nomos blockchain nodes. The network serves as the communication infrastructure for transaction dissemination through the mempool and for block propagation. The specification leverages established libp2p protocols to ensure robust, scalable performance with low bandwidth requirements and minimal latency, while remaining accessible to diverse hardware configurations and network environments.

## Motivation

The Nomos blockchain requires a reliable, scalable P2P network that can:

1. **Support diverse hardware**: From laptops to dedicated servers across various operating systems and geographic locations
2. **Enable inclusive participation**: Allow non-technical users to operate nodes with minimal configuration
3. **Maintain connectivity**: Ensure nodes remain reachable even with limited connectivity or behind NAT/routers
4. **Scale efficiently**: Support large-scale networks (10k+ nodes) with eventual consistency
5. **Provide low-latency communication**: Enable efficient transaction and block propagation

## Specification

### Network Architecture Overview

The Nomos P2P network addresses three critical challenges:

- **Peer Connectivity**: Mechanisms for peers to join and connect to the network
- **Peer Discovery**: Enabling peers to locate and identify network participants
- **Message Transmission**: Facilitating efficient message exchange across the network

### Transport Protocol

#### QUIC Protocol Transport

The Nomos network employs the **[QUIC protocol](https://docs.libp2p.io/concepts/transports/quic/)** as the primary transport protocol, leveraging the [libp2p](https://docs.libp2p.io/) implementation.

**Rationale for [QUIC](https://docs.libp2p.io/concepts/transports/quic/):**

- Rapid connection establishment
- Enhanced NAT traversal capabilities (UDP-based)
- Built-in multiplexing simplifies configuration
- Production-tested reliability

### Peer Discovery

#### Kademlia DHT

The network utilizes libp2p's Kademlia Distributed Hash Table (DHT) for peer discovery.

**Protocol Identifiers:**

- **Mainnet**: `/nomos/kad/1.0.0`
- **Testnet**: `/nomos-testnet/kad/1.0.0`

**Features:**

- Proximity-based peer discovery heuristics
- Distributed peer routing table
- Resilient to network partitions
- Automatic peer replacement

#### Identify Protocol

The Identify protocol complements Kademlia by enabling peer information exchange.

**Protocol Identifiers:**

- **Mainnet**: `/nomos/identify/1.0.0`
- **Testnet**: `/nomos-testnet/identify/1.0.0`

**Capabilities:**

- Protocol support advertisement
- Peer capability negotiation
- Network interoperability enhancement

#### Future Considerations

The current Kademlia implementation is acknowledged as interim. Future improvements target:

- Lightweight design without full DHT overhead
- Highly scalable eventual consistency
- Support for 10k+ nodes with minimal resource usage

### NAT Traversal

The network implements comprehensive NAT traversal solutions to ensure connectivity across diverse network configurations.

**Objectives:**

- Configuration-free peer connections
- Support for users with varying technical expertise
- Enable nodes on standard consumer hardware

**Implementation:**

- Tailored solutions based on user network configuration
- Automatic NAT type detection and adaptation
- Fallback mechanisms for challenging network environments

*Note: Detailed NAT traversal specifications are maintained in a separate document.*

### Message Dissemination

#### Gossipsub Protocol

Nomos employs **gossipsub** for reliable message propagation across the network.

**Integration:**

- Seamless integration with Kademlia peer discovery
- Automatic peer list updates
- Efficient message routing and delivery

#### Topic Configuration

**Mempool Dissemination:**

- **Mainnet**: `/nomos/mempool/0.1.0`
- **Testnet**: `/nomos-testnet/mempool/0.1.0`

**Block Propagation:**

- **Mainnet**: `/nomos/cryptarchia/0.1.0`
- **Testnet**: `/nomos-testnet/cryptarchia/0.1.0`

#### Network Parameters

**Peering Degree:**

- **Minimum recommended**: 8 peers
- **Rationale**: Ensures redundancy and efficient propagation
- **Configurable**: Nodes may adjust based on resources and requirements

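The network-dependent topic strings and the minimum peering degree above can be captured in a small lookup helper. This is an illustrative sketch only; the `TOPICS` table, `MIN_PEERING_DEGREE` constant, and `gossipsub_topic` function are assumed names, while the topic strings and the minimum degree of 8 come from this specification.

```python
# Illustrative helper; only the topic strings and the minimum degree
# of 8 are fixed by the spec -- the names here are hypothetical.
MIN_PEERING_DEGREE = 8

TOPICS = {
    "mainnet": {
        "mempool": "/nomos/mempool/0.1.0",
        "cryptarchia": "/nomos/cryptarchia/0.1.0",
    },
    "testnet": {
        "mempool": "/nomos-testnet/mempool/0.1.0",
        "cryptarchia": "/nomos-testnet/cryptarchia/0.1.0",
    },
}

def gossipsub_topic(network: str, protocol: str) -> str:
    """Return the gossipsub topic for a network ("mainnet"/"testnet")."""
    return TOPICS[network][protocol]
```

Keeping the network name as a single switch avoids accidentally mixing mainnet and testnet topics in one node.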
### Bootstrapping

#### Initial Network Entry

New nodes connect to the network through designated bootstrap nodes.

**Process:**

1. Connect to known bootstrap nodes
2. Obtain initial peer list through Kademlia
3. Establish gossipsub connections
4. Begin participating in network protocols

**Bootstrap Node Requirements:**

- High availability and reliability
- Geographic distribution
- Version compatibility maintenance

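Step 1 of the entry process can be sketched as a loop over bootstrap nodes bounded by the 30-second initial-connection target from the performance section. This is a minimal sketch under assumptions: `bootstrap`, its `connect` callable, and the return convention are hypothetical, not part of the spec.

```python
import time

def bootstrap(bootstrap_nodes, connect, deadline_s=30.0):
    """Try bootstrap nodes in order until one connection succeeds.

    `connect` is an injected callable returning True on success
    (hypothetical); the 30-second deadline mirrors the
    initial-connection target stated in this spec.
    """
    start = time.monotonic()
    for node in bootstrap_nodes:
        if time.monotonic() - start > deadline_s:
            break
        if connect(node):
            return node  # first reachable bootstrap node
    raise TimeoutError("no bootstrap node reachable within deadline")
```

A real node would continue with Kademlia peer discovery and gossipsub mesh formation after the first successful connection.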
### Message Encoding

All network messages follow the Nomos Wire Format specification for consistent encoding and decoding across implementations.

**Key Properties:**

- Deterministic serialization
- Efficient binary encoding
- Forward/backward compatibility support
- Cross-platform consistency

*Note: Detailed wire format specifications are maintained in a separate document.*

## Implementation Requirements

### Mandatory Protocols

All Nomos nodes MUST implement:

1. **Kademlia DHT** for peer discovery
2. **Identify protocol** for peer information exchange
3. **Gossipsub** for message dissemination

### Optional Enhancements

Nodes MAY implement:

- Advanced NAT traversal techniques
- Custom peering strategies
- Enhanced message routing optimizations

### Network Versioning

Protocol versions follow semantic versioning:

- **Major version**: Breaking protocol changes
- **Minor version**: Backward-compatible enhancements
- **Patch version**: Bug fixes and optimizations

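Under this versioning scheme, two peers are wire-compatible exactly when their major versions match, since minor and patch changes are backward-compatible by definition. A minimal sketch (the function name is illustrative, not from the spec):

```python
def is_compatible(local: str, remote: str) -> bool:
    """Peers are protocol-compatible when their major versions match;
    minor and patch differences are backward-compatible by definition."""
    return local.split(".")[0] == remote.split(".")[0]
```
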
## Configuration Parameters

### Implementation Note

**Current Status**: The Nomos P2P network implementation uses hardcoded libp2p protocol parameters for optimal performance and reliability. While the node configuration file (`config.yaml`) contains network-related settings, the core libp2p protocol parameters (Kademlia DHT, Identify, and Gossipsub) are embedded in the source code.

### Node Configuration

The following network parameters are configurable via `config.yaml`:

#### Network Backend Settings

```yaml
network:
  backend:
    host: 0.0.0.0
    port: 3000
    node_key: <node_private_key>
    initial_peers: []
```

#### Protocol-Specific Topics

**Mempool Dissemination:**

- **Mainnet**: `/nomos/mempool/0.1.0`
- **Testnet**: `/nomos-testnet/mempool/0.1.0`

**Block Propagation:**

- **Mainnet**: `/nomos/cryptarchia/0.1.0`
- **Testnet**: `/nomos-testnet/cryptarchia/0.1.0`

### Hardcoded Protocol Parameters

The following libp2p protocol parameters are currently hardcoded in the implementation:

#### Peer Discovery Parameters

- **Protocol identifiers** for Kademlia DHT and Identify protocols
- **DHT routing table** configuration and query timeouts
- **Peer discovery intervals** and connection management

#### Message Dissemination Parameters

- **Gossipsub mesh parameters** (peer degree, heartbeat intervals)
- **Message validation** and caching settings
- **Topic subscription** and fanout management

#### Rationale for Hardcoded Parameters

1. **Network Stability**: Prevents misconfigurations that could fragment the network
2. **Performance Optimization**: Parameters are tuned for the target network size and latency requirements
3. **Security**: Reduces the attack surface by limiting configurable network parameters
4. **Simplicity**: Eliminates the need for operators to understand complex P2P tuning

## Security Considerations

### Network-Level Security

1. **Peer Authentication**: Utilize libp2p's built-in peer identity verification
2. **Message Validation**: Implement application-layer message validation
3. **Rate Limiting**: Protect against spam and DoS attacks
4. **Blacklisting**: Mechanism for excluding malicious peers

### Privacy Considerations

1. **Traffic Analysis**: Gossipsub provides some resistance to traffic analysis
2. **Metadata Leakage**: Minimize identifiable information in protocol messages
3. **Connection Patterns**: Randomize connection timing and patterns

### Denial of Service Protection

1. **Resource Limits**: Impose limits on connections and message rates
2. **Peer Scoring**: Implement reputation-based peer management
3. **Circuit Breakers**: Automatic protection against resource exhaustion

### Node Configuration Example

See [Nomos Node Configuration](https://github.com/logos-co/nomos/blob/master/nodes/nomos-node/config.yaml) for an example node configuration.

## Performance Characteristics

### Scalability

- **Target Network Size**: 10,000+ nodes
- **Message Latency**: Sub-second for critical messages
- **Bandwidth Efficiency**: Optimized for limited bandwidth environments

### Resource Requirements

- **Memory Usage**: Minimal DHT routing table overhead
- **CPU Usage**: Efficient cryptographic operations
- **Network Bandwidth**: Adaptive based on node role and capacity

## References

Original working document, from Nomos Notion: [P2P Network Specification](https://nomos-tech.notion.site/P2P-Network-Specification-206261aa09df81db8100d5f410e39d75).

1. [libp2p Specifications](https://docs.libp2p.io/)
2. [QUIC Protocol Specification](https://docs.libp2p.io/concepts/transports/quic/)
3. [Kademlia DHT](https://docs.libp2p.io/concepts/discovery-routing/kaddht/)
4. [Gossipsub Protocol](https://github.com/libp2p/specs/tree/master/pubsub/gossipsub)
5. [Identify Protocol](https://github.com/libp2p/specs/blob/master/identify/README.md)
6. [Nomos Implementation](https://github.com/logos-co/nomos) - Reference implementation and source code
7. [Nomos Node Configuration](https://github.com/logos-co/nomos/blob/master/nodes/nomos-node/config.yaml) - Example node configuration

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

345
nomos/raw/sdp.md
@@ -1,345 +0,0 @@
---
title: NOMOS-SDP
name: Nomos Service Declaration Protocol Specification
status: raw
category:
tags: participation, validators, declarations
editor: Marcin Pawlowski <marcin@status.im>
contributors:
- Mehmet <mehmet@status.im>
- Daniel Sanchez Quiros <danielsq@status.im>
- Álvaro Castro-Castilla <alvaro@status.im>
- Thomas Lavaur <thomaslavaur@status.im>
- Filip Dimitrijevic <filip@status.im>
- Gusto Bacvinka <augustinas@status.im>
- David Rusu <davidrusu@status.im>
---

## Introduction

This document defines a mechanism enabling validators to declare their participation in specific protocols that require a known and agreed-upon list of participants, such as Data Availability and the Blend Network. We create a single repository of identifiers which is used to establish secure communication between validators and provide services. Before being admitted to the repository, a validator proves that it has locked at least the minimum stake.

## Requirements

The requirements for the protocol are defined as follows:

- A declaration must be backed by a confirmation that the sender of the declaration owns a certain value of stake.
- A declaration is valid until it is withdrawn or until it has not been used for a service-specific amount of time.

## Overview

The SDP enables nodes to declare their eligibility to serve a specific service in the system, and to withdraw their declarations.

### Protocol Actions

The protocol defines the following actions:

- **Declare**: A node sends a declaration that confirms its willingness to provide a specific service, backed by locking a threshold of stake.
- **Active**: A node marks its participation in the protocol as active according to the service-specific activity logic. This action enables the protocol to monitor the node's activity, serving as a non-intrusive differentiator of node activity. It is crucial to exclude inactive nodes from the set of active nodes, as it enhances the stability of services.
- **Withdraw**: A node withdraws its declaration and stops providing a service.

The logic of the protocol is straightforward:

1. A node sends a declaration message for a specific service and proves it has the minimum stake.
2. The declaration is registered on the ledger, and the node can commence its service according to the service-specific logic.
3. After a service-specific service-providing period, the node confirms its activity.
4. The node must confirm its activity with a service-specific minimum frequency; otherwise, its declaration becomes inactive.
5. After the service-specific locking period, the node can send a withdrawal message, and its declaration is removed from the ledger, meaning the node no longer provides the service.

💡 The protocol messages are subject to finality: messages become part of the immutable ledger only after a delay defined by the consensus.

## Construction

In this section, we present the main constructions of the protocol. First, we start with data definitions. Second, we describe the protocol actions. Finally, we present the part of the Bedrock Mantle design responsible for storing and processing SDP-related messages and data.

### Data

In this section, we discuss and define data types, messages, and their storage.

#### Service Types

We define the following services which can be used for service declaration:

- `BN`: for the Blend Network service.
- `DA`: for the Data Availability service.

```python
from enum import Enum

class ServiceType(Enum):
    BN = "BN"  # Blend Network
    DA = "DA"  # Data Availability
```

A declaration can be generated for any of the services above. Any declaration that is not for one of the above must be rejected. The number of services might grow in the future.

#### Minimum Stake

The minimum stake is a global value that defines the minimum stake a node must have to perform any service.

`MinStake` is a structure that holds the value of the stake, `stake_threshold`, and the block number it was set at, `timestamp`.

```python
class MinStake:
    stake_threshold: StakeThreshold
    timestamp: BlockNumber
```

`stake_thresholds` is a structure aggregating all defined `MinStake` values.

```python
stake_thresholds: list[MinStake]
```

For more information on how the minimum stake is calculated, please refer to the Nomos documentation.

#### Service Parameters

The service parameters structure defines the parameter set necessary for correctly handling interaction between the protocol and services. Each of the service types defined above must be mapped to a set of the following parameters:

- `session_length` defines the session length expressed as a number of blocks; sessions are counted from block `timestamp`.
- `lock_period` defines the minimum time (as a number of sessions) during which the declaration cannot be withdrawn; this time must include the period necessary for finalizing the declaration (which might be implicit) and for providing the service for at least a single session. It can be expressed as a number of blocks by multiplying its value by `session_length`.
- `inactivity_period` defines the maximum time (as a number of sessions) within which an activation message must be sent; otherwise, the declaration is considered inactive. It can be expressed as a number of blocks by multiplying its value by `session_length`.
- `retention_period` defines the time (as a number of sessions) after which the declaration can be safely deleted by the Garbage Collection mechanism. It can be expressed as a number of blocks by multiplying its value by `session_length`.
- `timestamp` defines the block number at which the parameters were set.

```python
class ServiceParameters:
    session_length: NumberOfBlocks
    lock_period: NumberOfSessions
    inactivity_period: NumberOfSessions
    retention_period: NumberOfSessions
    timestamp: BlockNumber
```

`parameters` is a structure aggregating all defined `ServiceParameters` values.

```python
parameters: list[ServiceParameters]
```

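The session-to-block conversions described above can be made concrete. The following sketch restates `ServiceParameters` with plain `int` fields (an assumption; the spec uses opaque `NumberOfBlocks`/`NumberOfSessions` types) and derives the block-denominated periods by multiplying by `session_length`:

```python
from dataclasses import dataclass

@dataclass
class ServiceParameters:
    session_length: int      # blocks per session
    lock_period: int         # sessions
    inactivity_period: int   # sessions
    retention_period: int    # sessions
    timestamp: int           # block number the parameters were set at

def lock_period_blocks(p: ServiceParameters) -> int:
    """Lock period expressed in blocks, per the conversion rule above."""
    return p.lock_period * p.session_length

def inactivity_period_blocks(p: ServiceParameters) -> int:
    """Inactivity period expressed in blocks."""
    return p.inactivity_period * p.session_length
```
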
#### Identifiers

We define the following set of identifiers, which are used for service-specific cryptographic operations:

- `provider_id`: used to sign the SDP messages and to establish secure links between validators; it is an `Ed25519PublicKey`.
- `zk_id`: used for zero-knowledge operations by the validator, including rewarding ([Zero Knowledge Signature Scheme (ZkSignature)](https://www.notion.so/Zero-Knowledge-Signature-Scheme-ZkSignature-21c261aa09df8119bfb2dc74a3430df6?pvs=21)).

#### Locators

A `Locator` is the address of a validator, used to establish secure communication between validators. It follows the [multiaddr addressing scheme from libp2p](https://docs.libp2p.io/concepts/fundamentals/addressing/), but it must contain only the location part and must not contain the node identity (`peer_id`).

The `provider_id` must be used as the node identity. Therefore, the `Locator` must be completed by appending the `provider_id` to it, which makes the `Locator` usable in the context of libp2p.

The length of a `Locator` is restricted to 329 characters.

The syntax of every `Locator` entry must be validated.

**A common formatting of every** `Locator` **must be applied to keep it unambiguous, so that deterministic ID generation works consistently.** A `Locator` must contain only lowercase letters, and every part of the address must be explicit (no implicit defaults).

#### Declaration Message

The construction of the declaration message is as follows:

```python
class DeclarationMessage:
    service_type: ServiceType
    locators: list[Locator]
    provider_id: Ed25519PublicKey
    zk_id: ZkPublicKey
```

The length of the `locators` list must be limited to reduce the potential for abuse; it must not contain more than 8 entries.

The message must be signed by the `provider_id` key to prove ownership of the key that is used for network-level authentication of the validator. The message is also signed by the `zk_id` key (by default, all Mantle transactions are signed with the `zk_id` key).

#### Declaration Storage

Only valid declaration messages can be stored on the ledger. We define the `DeclarationInfo` as follows:

```python
class DeclarationInfo:
    service: ServiceType
    provider_id: Ed25519PublicKey
    zk_id: ZkPublicKey
    locators: list[Locator]
    created: BlockNumber
    active: BlockNumber
    withdrawn: BlockNumber
    nonce: Nonce
```

Where:

- `service` defines the service type of the declaration;
- `provider_id` is an `Ed25519PublicKey` used by the validator to sign the message;
- `zk_id` is used for zero-knowledge operations by the validator, including rewarding ([Zero Knowledge Signature Scheme (ZkSignature)](https://www.notion.so/Zero-Knowledge-Signature-Scheme-ZkSignature-21c261aa09df8119bfb2dc74a3430df6?pvs=21));
- `locators` is a copy of the `locators` from the `DeclarationMessage`;
- `created` refers to the block number of the block that contained the declaration;
- `active` refers to the latest block number for which an active message was sent (it is set to `created` by default);
- `withdrawn` refers to the block number at which the service declaration was withdrawn (it is set to 0 by default);
- `nonce` must be set to 0 for the declaration message and must increase monotonically with every message sent for the `declaration_id`.

We also define the `declaration_id` (of a `DeclarationId` type), the unique identifier of a `DeclarationInfo`, calculated as a hash of the concatenation of `service`, `provider_id`, `zk_id`, and `locators`. The hash function is `blake2b`, using 256 bits of output.

```python
declaration_id = Hash(service||provider_id||zk_id||locators)
```

The `declaration_id` is not stored as part of the `DeclarationInfo`, but it is used to index it.

All `DeclarationInfo` references are stored in `declarations` and are indexed by `declaration_id`.

```python
declarations: list[declaration_id]
```

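The `declaration_id` derivation above can be sketched with the standard-library `hashlib`. This assumes the fields are already serialized to bytes and that locators are concatenated in list order; the exact byte-level serialization is not fixed by this section, so treat this as illustrative:

```python
import hashlib

def declaration_id(service: bytes, provider_id: bytes,
                   zk_id: bytes, locators: list[bytes]) -> bytes:
    """Hash(service || provider_id || zk_id || locators) using
    blake2b with a 256-bit (32-byte) output, as specified above."""
    h = hashlib.blake2b(digest_size=32)
    h.update(service)
    h.update(provider_id)
    h.update(zk_id)
    for loc in locators:
        h.update(loc)
    return h.digest()
```

Because the input includes the locators, any change to a locator produces a new `declaration_id`; this is why locator formatting must be deterministic.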
#### Active Message

The construction of the active message is as follows:

```python
class ActiveMessage:
    declaration_id: DeclarationId
    nonce: Nonce
    metadata: Metadata
```

where `metadata` is service-specific node activeness metadata.

The message must be signed by the `zk_id` key associated with the `declaration_id`.

The `nonce` must increase monotonically with every message sent for the `declaration_id`.

#### Withdraw Message

The construction of the withdraw message is as follows:

```python
class WithdrawMessage:
    declaration_id: DeclarationId
    nonce: Nonce
```

The message must be signed by the `zk_id` key associated with the `declaration_id`.

The `nonce` must increase monotonically with every message sent for the `declaration_id`.

#### Indexing

Every event must be correctly indexed to enable lighter synchronization of changes. Therefore, we index every `declaration_id` according to `EventType`, `ServiceType`, and `Timestamp`, where `EventType = { "created", "active", "withdrawn" }` follows the type of the message.

```python
events = {
    event_type: {
        service_type: {
            timestamp: {
                declarations: list[declaration_id]
            }
        }
    }
}
```

### Protocol

#### Declare

The Declare action associates a validator with a service it wants to provide. It requires sending a valid `DeclarationMessage` (as defined in Declaration Message), which is then processed (as defined below) and stored (as defined in Declaration Storage).

The declaration message is considered valid when all of the following are met:

- The sender meets the stake requirements.
- The `declaration_id` is unique.
- The sender knows the secret behind the `provider_id` identifier.
- The `locators` list contains no more than 8 entries.
- The `nonce` is increasing monotonically.

If all of the above conditions are fulfilled, the message is stored on the ledger; otherwise, the message is discarded.

#### Active

The Active action enables marking a provider as actively providing a service. It requires sending a valid `ActiveMessage` (as defined in Active Message), which is relayed to the service-specific node activity logic (as indicated by the service type in Common SDP Structures).

The Active action updates the `active` value of the `DeclarationInfo`, which means that it also reactivates inactive (but not expired) providers.

The SDP active action logic is:

1. A node sends an `ActiveMessage` transaction.
2. The `ActiveMessage` is verified by the SDP logic:
   a. The `declaration_id` returns an existing `DeclarationInfo`.
   b. The transaction containing the `ActiveMessage` is signed by the `zk_id`.
   c. The `withdrawn` field of the `DeclarationInfo` is set to zero.
   d. The `nonce` is increasing monotonically.
3. If any of these conditions fail, discard the message and stop processing.
4. The message is processed by the service-specific activity logic alongside the `active` value, which indicates the period since the last active message was sent. The `active` value comes from the `DeclarationInfo`.
5. If the service-specific activity logic approves the node's active message, the `active` field of the `DeclarationInfo` is set to the current block height.

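Checks 2a, 2c, and 2d above amount to a guard over the stored `DeclarationInfo`. A sketch with illustrative names (the `zk_id` signature check of 2b and the service-specific activity logic are elided, assumed to happen elsewhere):

```python
def verify_active(message, declarations):
    """Return the DeclarationInfo to update, or None if the message
    must be discarded (conditions 2a, 2c, 2d above)."""
    info = declarations.get(message.declaration_id)  # 2a: must exist
    if info is None:
        return None
    if info.withdrawn != 0:                          # 2c: not withdrawn
        return None
    if message.nonce <= info.nonce:                  # 2d: monotone nonce
        return None
    # 2b (zk_id signature verification) is assumed to be checked
    # at the transaction layer before this point.
    return info
```
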
#### Withdraw

The Withdraw action enables the withdrawal of a service declaration. It requires sending a valid `WithdrawMessage` (as defined in Withdraw Message). The withdrawal cannot happen before the end of the locking period, which is defined as a number of blocks counted since `created`. This lock period is stored as `lock_period` in the Service Parameters.

The logic of the withdraw action is:

1. A node sends a `WithdrawMessage` transaction.
2. The `WithdrawMessage` is verified by the SDP logic:
   a. The `declaration_id` returns an existing `DeclarationInfo`.
   b. The transaction containing the `WithdrawMessage` is signed by the `zk_id`.
   c. The `withdrawn` field of the `DeclarationInfo` is set to zero.
   d. The `nonce` is increasing monotonically.
3. If any of the above checks fail, discard the message and stop.
4. Set the `withdrawn` field of the `DeclarationInfo` to the current block height.
5. Unlock the stake.

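The lock-period constraint means a withdrawal only becomes valid once `lock_period` sessions, converted to blocks, have elapsed since `created`. A minimal sketch of that single check (the function name is illustrative):

```python
def can_withdraw(created: int, lock_period: int,
                 session_length: int, current_height: int) -> bool:
    """A declaration is withdrawable once lock_period sessions,
    counted in blocks since the `created` block, have elapsed."""
    return current_height >= created + lock_period * session_length
```
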
#### Garbage Collection

The protocol requires a garbage collection mechanism that periodically removes unused `DeclarationInfo` entries.

The logic of garbage collection is:

For every `DeclarationInfo` in the `declarations` set, remove the entry if either:

1. The entry is past the retention period: `withdrawn + retention_period < current_block_height`.
2. The entry is inactive beyond the inactivity and retention periods: `active + inactivity_period + retention_period < current_block_height`.

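The two removal conditions can be expressed as a single predicate. This sketch makes two assumptions beyond the text: the session-denominated periods are first converted to blocks via `session_length`, and condition 1 only applies when `withdrawn` is non-zero (since 0 is the "not withdrawn" default):

```python
def is_garbage(info, params, current_height: int) -> bool:
    """True if a DeclarationInfo entry may be removed by GC."""
    retention = params.retention_period * params.session_length
    inactivity = params.inactivity_period * params.session_length
    # Condition 1: past the retention period after withdrawal.
    expired_withdrawal = (info.withdrawn != 0 and
                          info.withdrawn + retention < current_height)
    # Condition 2: inactive beyond inactivity + retention periods.
    expired_inactive = info.active + inactivity + retention < current_height
    return expired_withdrawal or expired_inactive
```
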
#### Query

The protocol must enable querying the ledger in at least the following manner:

- `GetAllProviderId(timestamp)`: returns all `provider_id`s associated with the `timestamp`.
- `GetAllProviderIdSince(timestamp)`: returns all `provider_id`s since the `timestamp`.
- `GetAllDeclarationInfo(timestamp)`: returns all `DeclarationInfo` entries associated with the `timestamp`.
- `GetAllDeclarationInfoSince(timestamp)`: returns all `DeclarationInfo` entries since the `timestamp`.
- `GetDeclarationInfo(provider_id)`: returns the `DeclarationInfo` entry identified by the `provider_id`.
- `GetDeclarationInfo(declaration_id)`: returns the `DeclarationInfo` entry identified by the `declaration_id`.
- `GetAllServiceParameters(timestamp)`: returns all entries of the `ServiceParameters` store for the requested `timestamp`.
- `GetAllServiceParametersSince(timestamp)`: returns all entries of the `ServiceParameters` store since the requested `timestamp`.
- `GetServiceParameters(service_type, timestamp)`: returns the service parameter entry from the `ServiceParameters` store of a `service_type` for a specified `timestamp`.
- `GetMinStake(timestamp)`: returns the `MinStake` structure at the requested `timestamp`.
- `GetMinStakeSince(timestamp)`: returns a set of `MinStake` structures since the requested `timestamp`.

A query must return an error if the retention period for the declaration has passed and the requested information is no longer available.

The list of queries may be extended.

Every query must return information for the finalized state only.

### Mantle and ZK Proof

For more information about the Mantle and ZK proofs, please refer to the [Mantle Specification](https://www.notion.so/Mantle-Specification-21c261aa09df810c8820fab1d78b53d9?pvs=21).

## Appendix

### Future Improvements

Refer to the [Mantle Specification](https://www.notion.so/Mantle-Specification-21c261aa09df810c8820fab1d78b53d9?pvs=21) for a list of potential improvements to the protocol.

## References

- Mantle and ZK Proof: [Mantle Specification](https://www.notion.so/Mantle-Specification-21c261aa09df810c8820fab1d78b53d9?pvs=21)
- Ed25519 Digital Signatures: [RFC 8032](https://datatracker.ietf.org/doc/html/rfc8032)
- BLAKE2b Cryptographic Hash: [RFC 7693](https://datatracker.ietf.org/doc/html/rfc7693)
- libp2p Multiaddr: [Addressing Specification](https://docs.libp2p.io/concepts/fundamentals/addressing/)
- Zero Knowledge Signatures: [ZkSignature Scheme](https://www.notion.so/Zero-Knowledge-Signature-Scheme-ZkSignature-21c261aa09df8119bfb2dc74a3430df6?pvs=21)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

@@ -1,108 +1,108 @@
|
||||
---
slug: 24
title: 24/STATUS-CURATION
name: Status Community Directory Curation Voting using Waku v2
status: draft
tags: waku-application
description: A voting protocol for SNT holders to submit votes to a smart contract. Voting is immutable, which helps avoid sabotage from malicious peers.
editor: Szymon Szlachtowicz <szymon.s@ethworks.io>
---
## Abstract

This specification describes a voting protocol for peers to submit votes to a smart contract.
Voting is immutable,
which helps avoid sabotage from malicious peers.

## Motivation

In an open p2p protocol there is an issue with off-chain voting,
as there is much room for malicious peers to only include votes that support
their case when submitting votes to the chain.

The proposed solution is to aggregate votes over Waku and
allow users to submit to the smart contract any votes that haven't already been submitted.
### Smart contract

Voting should be finalized on chain so that the finished vote is immutable.
Because of that, a smart contract needs to be deployed.
When votes are submitted,
the smart contract has to verify that votes are properly signed and
that each sender has the correct amount of SNT.
When a vote is verified,
the amount of SNT voted on a specific topic by a specific sender is saved on chain.

### Double voting

The smart contract should also keep a list of all signatures so
that no one can send the same vote twice.
Another possibility is to allow each sender to only vote once.
### Initializing Vote

To initialize a vote,
a user has to send a transaction to the smart contract that will create a new voting session.
When initializing, the user has to specify the type of vote (Addition, Deletion),
the amount of their initial SNT to submit, and the public key of the community under vote.
The smart contract will return an ID which identifies the voting session.
There will also be a function on the smart contract that,
given a community public key, returns the voting session ID, or
undefined if the community isn't under vote.
## Voting

### Sending votes

Sending votes is simple: every peer is able to send a message to a Waku topic
specific to the given application:

```json
/status-community-directory-curation-vote/1/{voting-session-id}/json
```

The vote object that is sent over Waku should contain the following information:

```ts
type Vote = {
  sender: string // address of the sender
  vote: string // vote sent, e.g. 'yes' or 'no'
  sntAmount: BigNumber // amount of SNT cast on the vote
  sign: string // cryptographic signature of a transaction (signed fields: sender, vote, sntAmount, nonce, sessionID)
  nonce: number // number of votes cast from this address on the current vote
  // (only if we allow multiple votes from the same sender)
  sessionID: number // ID of the voting session
}
```
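A minimal sketch of assembling such a signed vote, assuming a `signMessage` helper supplied by the user's wallet (a hypothetical name) and a comma-joined message layout (also an assumption; only the signed field list above is normative):

```typescript
// Sketch only: the field order must match the signed fields listed above
// (sender, vote, sntAmount, nonce, sessionID). `signMessage` is a
// hypothetical wallet helper, not part of this specification.
type BigNumberish = bigint

function votePayload(
  sender: string,
  vote: string,
  sntAmount: BigNumberish,
  nonce: number,
  sessionID: number
): string {
  return [sender, vote, sntAmount.toString(), nonce, sessionID].join(',')
}

async function buildVote(
  sender: string,
  vote: string,
  sntAmount: BigNumberish,
  nonce: number,
  sessionID: number,
  signMessage: (msg: string) => Promise<string>
) {
  const sign = await signMessage(votePayload(sender, vote, sntAmount, nonce, sessionID))
  return { sender, vote, sntAmount, nonce, sessionID, sign }
}
```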
### Aggregating votes

Every peer that has opened a specific voting session
will listen to votes sent over the p2p network, and
aggregate them for a single transaction to the chain.
### Submitting to chain

Every peer that has aggregated at least one vote
will be able to send them to the smart contract.
When someone votes, they will aggregate their own vote and
will be able to send it immediately.

A peer doesn't need to vote to be able to submit the votes to the chain.

The smart contract needs to verify that all votes are valid
(e.g. all senders had enough SNT, all votes are correctly signed) and
that votes aren't duplicated on the smart contract.
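As a rough illustration of the aggregation step, the sketch below drops votes whose signatures have already been submitted; the types and the stand-in for the contract's stored signature list are simplified assumptions:

```typescript
// Sketch: keep only votes not yet submitted on chain, keyed by signature.
// `submittedSigns` stands in for the signature list the contract stores.
type AggregatedVote = { sender: string; sign: string }

function votesToSubmit(
  aggregated: AggregatedVote[],
  submittedSigns: Set<string>
): AggregatedVote[] {
  const fresh: AggregatedVote[] = []
  const seen = new Set(submittedSigns)
  for (const v of aggregated) {
    if (!seen.has(v.sign)) {
      seen.add(v.sign) // also drops duplicates within the batch
      fresh.push(v)
    }
  }
  return fresh
}
```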
### Finalizing

Once the vote deadline has expired, the smart contract will not accept votes anymore.
The directory will also be updated according to the vote results
(community added to the directory, removed, etc.).
## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).
---
slug: 28
title: 28/STATUS-FEATURING
name: Status community featuring using waku v2
status: draft
tags: waku-application
description: To gain new members, current SNT holders can vote to feature an active Status community to the larger Status audience.
editor: Szymon Szlachtowicz <szymon.s@ethworks.io>
---
## Abstract

This specification describes a voting method to feature different active Status Communities.
## Overview

When there is an active community that is seeking new members,
current users of the community should be able to feature their community so
that it will be accessible to a larger audience.
The Status community curation DApp should provide such a tool.

Rules of featuring:

- A given community can't be featured twice in a row.
- Only one vote per user per community (a single user can vote on multiple communities).
- Voting will be done off-chain.
- If a community hasn't been featured,
votes for the given community are still valid for the next 4 weeks.

Since voting for featuring is similar to polling,
the solutions proposed in this spec could also be used for different applications.
### Voting

Voting for featuring will be done through Waku v2.

The payload of the Waku message will be:

```ts
type FeatureVote = {
  voter: string // address of the voter
  sntAmount: BigNumber // amount of SNT voted on featuring
  communityPK: string // public key of the community
  timestamp: number // timestamp of the message, must match the timestamp of the wakuMessage
  sign: string // cryptographic signature of a transaction (signed fields: voterAddress, sntAmount, communityPK, timestamp)
}
```

The timestamp is necessary so that votes can't be reused after the 4-week period.
### Counting Votes

Votes will be counted by the DApp itself.
The DApp will aggregate all the votes from the last 4 weeks and
calculate which communities should be displayed in the Featured tab of the DApp.

Rules of counting:

- When multiple votes from the same address on the same community are encountered,
only the vote with the highest timestamp is considered valid.
- If a community has been featured in the previous week,
it can't be featured in the current week.
- In the current week, the top 5 (or 10) communities with the highest amount of SNT votes
up to the previous Sunday 23:59:59 UTC are considered featured.
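The counting rules above can be sketched as follows; the vote type, the window bound, and the top-N cutoff are illustrative assumptions of one possible DApp-side implementation:

```typescript
// Sketch of the counting rules: the latest vote per (voter, community)
// wins, votes older than 4 weeks are dropped, previously featured
// communities are excluded, and the top N by SNT are featured.
type FeatureVoteLite = {
  voter: string
  communityPK: string
  sntAmount: bigint
  timestamp: number
}

const FOUR_WEEKS_MS = 4 * 7 * 24 * 60 * 60 * 1000

function featuredCommunities(
  votes: FeatureVoteLite[],
  now: number,
  previouslyFeatured: Set<string>,
  topN: number
): string[] {
  const latest = new Map<string, FeatureVoteLite>()
  for (const v of votes) {
    if (v.timestamp < now - FOUR_WEEKS_MS) continue // outside the 4-week window
    const key = `${v.voter}|${v.communityPK}`
    const prev = latest.get(key)
    if (!prev || v.timestamp > prev.timestamp) latest.set(key, v) // highest timestamp wins
  }
  const totals = new Map<string, bigint>()
  for (const v of latest.values()) {
    if (previouslyFeatured.has(v.communityPK)) continue // not twice in a row
    totals.set(v.communityPK, (totals.get(v.communityPK) ?? 0n) + v.sntAmount)
  }
  return [...totals.entries()]
    .sort((a, b) => (b[1] > a[1] ? 1 : b[1] < a[1] ? -1 : 0))
    .slice(0, topN)
    .map(([pk]) => pk)
}
```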
## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).
---
slug: 55
title: 55/STATUS-1TO1-CHAT
name: Status 1-to-1 Chat
status: draft
category: Standards Track
tags: waku-application
description: A chat protocol to send public and private messages to a single recipient by the Status app.
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:

- Andrea Piana <andreap@status.im>
- Pedro Pombeiro <pedro@status.im>
- Corey Petty <corey@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Dean Eigenmann <dean@status.im>

---
## Abstract

This specification describes how the Status 1-to-1 chat protocol is implemented
on top of the Waku v2 protocol.
This protocol can be used to send messages to a single recipient.

## Terminology

- **Participant**: A participant is a user that is able to send and receive messages.
- **1-to-1 chat**: A chat between two participants.
- **Public chat**: A chat where any participant can join and read messages.
- **Private chat**: A chat where only invited participants can join and read messages.
- **Group chat**: A chat where multiple select participants can join and read messages.
- **Group admin**: A participant that is able to
add/remove participants from a group chat.
## Background

This document describes how two peers communicate with each other
to send messages in a 1-to-1 chat, with privacy and authenticity guarantees.

## Specification

### Overview

This protocol MAY use any key-exchange mechanism previously discussed:

1. [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md)
2. [WAKU2-NOISE](https://github.com/waku-org/specs/blob/master/standards/application/noise.md)

This protocol can provide end-to-end encryption
to give peers a strong degree of privacy and security.
Public chat messages are publicly readable by anyone, since
there's no permission model for who is participating in a public chat.
## Chat Flow

### Negotiation of a 1:1 chat

There are two phases in the initial negotiation of a 1:1 chat:

1. **Identity verification**
(e.g., face-to-face contact exchange through QR code, Identicon matching).
A QR code serves two purposes simultaneously:
identity verification and initial key material retrieval;
2. **Asynchronous initial key exchange**

For more information on account generation and trust establishment, see [65/ACCOUNT-ADDRESS](../65/account-address.md)

### Post Negotiation

After the peers have shared their public key material,
a 1:1 chat can be established using the methods described in the
key-exchange protocols mentioned above.
### Session management

The 1:1 chat is made robust by having sessions between peers.
Session management is handled by the key-exchange protocol used. For example,

1. for [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md),
session management is described in [54/WAKU2-X3DH-SESSIONS](../../waku/standards/application/54/x3dh-sessions.md)
2. for [WAKU2-NOISE](https://github.com/waku-org/specs/blob/master/standards/application/noise.md),
session management is described in [WAKU2-NOISE-SESSIONS](https://github.com/waku-org/specs/blob/master/standards/application/noise-sessions.md)
## Negotiation of a 1:1 chat amongst multiple participants (group chat)

A small, private group chat can be constructed by having multiple participants
negotiate a 1:1 chat amongst each other.
Each participant MUST
maintain a session with all other participants in the group chat.
This allows for a group chat to be created with a small number of participants.

However, this method does not scale as the number of participants increases,
for the following reasons:

1. The number of messages sent over the network increases with the number of participants.
2. Handling the X3DH key exchange for each participant is computationally expensive.

The above issues are addressed in [56/STATUS-COMMUNITIES](../56/communities.md),
with other trade-offs.
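To make the scaling argument concrete, a quick sketch of the pairwise-session cost; the helper names are illustrative, not part of the protocol:

```typescript
// Each of the n participants keeps a session with the other n - 1,
// so the group needs n * (n - 1) / 2 distinct pairwise sessions,
// and one group message fans out into n - 1 individual sends.
function pairwiseSessions(n: number): number {
  return (n * (n - 1)) / 2
}

function sendsPerGroupMessage(n: number): number {
  return n - 1
}
```

For 5 participants this is 10 sessions; for 50 it is already 1225, which is one motivation for the community-based approach.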
### Flow

The following flow describes how a group chat is created and maintained.

#### Membership Update Flow

Membership updates have the following wire format:

```protobuf
message MembershipUpdateMessage {
  // The chat id of the private group chat
  // derived in the following way:
  // chat_id = hex(chat_creator_public_key) + "-" + random_uuid
  // This chat_id MUST be validated by all participants
  string chat_id = 1;
  // A list of events for this group chat; the first 65 bytes of each entry
  // are the signature, followed by a protobuf-encoded MembershipUpdateEvent
  repeated bytes events = 2;
  oneof chat_entity {
    // An optional chat message
    ChatMessage message = 3;
    // An optional reaction to a message
    EmojiReaction emoji_reaction = 4;
  }
}
```
Note that in each element of `events`, the first 65 bytes are the signature, and
the remaining bytes are an encoded `MembershipUpdateEvent`,

where `MembershipUpdateEvent` is defined as follows:
```protobuf
message MembershipUpdateEvent {
  // Lamport timestamp of the event
  uint64 clock = 1;
  // Optional list of public keys of the targets of the action
  repeated string members = 2;
  // Name of the chat for the CHAT_CREATED/NAME_CHANGED event types
  string name = 3;
  // The type of the event
  EventType type = 4;
  // Color of the chat for the CHAT_CREATED/COLOR_CHANGED event types
  string color = 5;
  // Chat image
  bytes image = 6;

  enum EventType {
    UNKNOWN = 0;
    CHAT_CREATED = 1; // See [CHAT_CREATED](#chat-created)
    NAME_CHANGED = 2; // See [NAME_CHANGED](#name-changed)
    MEMBERS_ADDED = 3; // See [MEMBERS_ADDED](#members-added)
    MEMBER_JOINED = 4; // See [MEMBER_JOINED](#member-joined)
    MEMBER_REMOVED = 5; // See [MEMBER_REMOVED](#member-removed)
    ADMINS_ADDED = 6; // See [ADMINS_ADDED](#admins-added)
    ADMIN_REMOVED = 7; // See [ADMIN_REMOVED](#admin-removed)
    COLOR_CHANGED = 8; // See [COLOR_CHANGED](#color-changed)
    IMAGE_CHANGED = 9; // See [IMAGE_CHANGED](#image-changed)
  }
}
```
<!-- Note:
I don't like defining wire formats which are out of the scope of the rfc this way.
Should explore alternatives -->
Note that the definitions for `ChatMessage` and
`EmojiReaction` can be found in
[chat_message.proto](https://github.com/status-im/status-go/blob/5fd9e93e9c298ed087e6716d857a3951dbfb3c1e/protocol/protobuf/chat_message.proto#L1)
and [emoji_reaction.proto](https://github.com/status-im/status-go/blob/5fd9e93e9c298ed087e6716d857a3951dbfb3c1e/protocol/protobuf/emoji_reaction.proto).
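As a rough sketch of how a receiver might split an `events` entry into its signature prefix and event payload under the 65-byte layout described above (the function name and error handling are assumptions, not part of the wire format):

```typescript
// Sketch: split a raw `events` entry into its 65-byte signature prefix
// and the protobuf-encoded MembershipUpdateEvent payload that follows.
const SIGNATURE_LENGTH = 65

function splitEvent(raw: Uint8Array): { signature: Uint8Array; payload: Uint8Array } {
  if (raw.length <= SIGNATURE_LENGTH) {
    throw new Error('event too short to contain a signature and payload')
  }
  return {
    signature: raw.slice(0, SIGNATURE_LENGTH),
    payload: raw.slice(SIGNATURE_LENGTH),
  }
}
```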
##### Chat Created

When creating a group chat, this is the first event that MUST be sent.
Any event with a clock value lower than this MUST be discarded.
Upon receiving this event a client MUST validate the `chat_id`
provided with the update and
create a chat identified by `chat_id`.

By default, the creator of the group chat is the only group admin.
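A minimal sketch of the `chat_id` validation implied by the derivation rule `chat_id = hex(chat_creator_public_key) + "-" + random_uuid`; the exact checks an implementation performs are an assumption here:

```typescript
// Sketch: check that a chat_id is the creator's hex-encoded public key,
// a dash, and a UUID, per chat_id = hex(creator_pk) + "-" + random_uuid.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i

function validChatId(chatId: string, creatorPublicKeyHex: string): boolean {
  if (!chatId.startsWith(creatorPublicKeyHex + '-')) return false
  const uuid = chatId.slice(creatorPublicKeyHex.length + 1)
  return UUID_RE.test(uuid)
}
```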
##### Name Changed

To change the name of the group chat, group admins MUST use a `NAME_CHANGED` event.
Upon receiving this event,
a client MUST validate the `chat_id` provided with the updates and
MUST ensure the author of the event is an admin of the chat,
otherwise the event MUST be ignored.
If the event is valid, the chat name SHOULD be changed according to the provided message.
##### Members Added

To add members to the chat, group admins MUST use a `MEMBERS_ADDED` event.
Upon receiving this event,
a participant MUST validate the `chat_id` provided with the updates and
MUST ensure the author of the event is an admin of the chat,
otherwise the event MUST be ignored.
If the event is valid,
a participant MUST update the list of members of the chat who have not joined,
adding the members received.
##### Member Joined

To signal the intent to start receiving messages from a given chat,
new participants MUST use a `MEMBER_JOINED` event.
Upon receiving this event,
a participant MUST validate the `chat_id` provided with the updates.
If the event is valid,
a participant MUST add the new participant to the list of participants stored locally.
Any message sent to the group chat MUST now include the new participant.
##### Member Removed

There are two ways in which a member MAY be removed from a group chat:

- A member MAY leave the chat by sending a `MEMBER_REMOVED` event,
with the `members` field containing their own public key.
- An admin MAY remove a member by sending a `MEMBER_REMOVED` event,
with the `members` field containing the public key of the member to be removed.

Each participant MUST validate the `chat_id` provided with the updates and
MUST ensure the author of the event is an admin of the chat,
otherwise the event MUST be ignored.
If the event is valid, a participant MUST update the local list of members accordingly.
##### Admins Added

To promote participants to group admin, group admins MUST use an `ADMINS_ADDED` event.
Upon receiving this event,
a participant MUST validate the `chat_id` provided with the updates and
MUST ensure the author of the event is an admin of the chat,
otherwise the event MUST be ignored.
If the event is valid,
a participant MUST update the list of admins of the chat accordingly.
##### Admin Removed

Group admins MUST NOT be able to remove other group admins.
An admin MAY remove themselves by sending an `ADMIN_REMOVED` event,
with the `members` field containing their own public key.
Each participant MUST validate the `chat_id` provided with the updates and
MUST ensure the author of the event is an admin of the chat,
otherwise the event MUST be ignored.
If the event is valid, a participant MUST update the list of admins of the chat accordingly.
##### Color Changed

To change the text color of the group chat name,
group admins MUST use a `COLOR_CHANGED` event.

##### Image Changed

To change the display image of the group chat,
group admins MUST use an `IMAGE_CHANGED` event.
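The admin-only rule shared by several of the events above can be sketched as a single guard; the event-type names mirror the `EventType` enum, while the state shape and the exact partition of admin-only events (e.g. treating self-removal separately) are assumptions of this sketch:

```typescript
// Sketch: events that change chat metadata or membership on behalf of
// others MUST come from an admin; MEMBER_JOINED (and a member removing
// themselves) are not admin-only under the rules above.
const ADMIN_ONLY = new Set([
  'NAME_CHANGED',
  'MEMBERS_ADDED',
  'ADMINS_ADDED',
  'COLOR_CHANGED',
  'IMAGE_CHANGED',
])

function eventAllowed(type: string, author: string, admins: Set<string>): boolean {
  if (!ADMIN_ONLY.has(type)) return true
  return admins.has(author) // otherwise the event MUST be ignored
}
```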
## Security Considerations

1. Inherits the security considerations of the key-exchange mechanism used,
e.g., [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md) or [WAKU2-NOISE](https://github.com/waku-org/specs/blob/master/standards/application/noise.md)
## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md)
2. [WAKU2-NOISE](https://github.com/waku-org/specs/blob/master/standards/application/noise.md)
3. [65/STATUS-ACCOUNT](../65/account-address.md)
4. [54/WAKU2-X3DH-SESSIONS](../../waku/standards/application/54/x3dh-sessions.md)
5. [WAKU2-NOISE-SESSIONS](https://github.com/waku-org/specs/blob/master/standards/application/noise-sessions.md)
6. [56/STATUS-COMMUNITIES](../56/communities.md)
7. [chat_message.proto](https://github.com/status-im/status-go/blob/5fd9e93e9c298ed087e6716d857a3951dbfb3c1e/protocol/protobuf/chat_message.proto#L1)
8. [emoji_reaction.proto](https://github.com/status-im/status-go/blob/5fd9e93e9c298ed087e6716d857a3951dbfb3c1e/protocol/protobuf/emoji_reaction.proto)
|
||||
---
|
||||
slug: 55
|
||||
title: 55/STATUS-1TO1-CHAT
|
||||
name: Status 1-to-1 Chat
|
||||
status: draft
|
||||
category: Standards Track
|
||||
tags: waku-application
|
||||
description: A chat protocol to send public and private messages to a single recipient by the Status app.
|
||||
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
|
||||
contributors:
|
||||
- Andrea Piana <andreap@status.im>
|
||||
- Pedro Pombeiro <pedro@status.im>
|
||||
- Corey Petty <corey@status.im>
|
||||
- Oskar Thorén <oskarth@titanproxy.com>
|
||||
- Dean Eigenmann <dean@status.im>
|
||||
---
|
||||
|
||||
## Abstract
|
||||
|
||||
This specification describes how the Status 1-to-1 chat protocol is implemented
|
||||
on top of the Waku v2 protocol.
|
||||
This protocol can be used to send messages to a single recipient.
|
||||
|
||||
## Terminology
|
||||
|
||||
- **Participant**: A participant is a user that is able to send and receive messages.
|
||||
- **1-to-1 chat**: A chat between two participants.
|
||||
- **Public chat**: A chat where any participant can join and read messages.
|
||||
- **Private chat**: A chat where only invited participants can join and read messages.
|
||||
- **Group chat**: A chat where multiple select participants can join and read messages.
|
||||
- **Group admin**: A participant that is able to
|
||||
add/remove participants from a group chat.
|
||||
|
||||
## Background
|
||||
|
||||
This document describes how 2 peers communicate with each other
|
||||
to send messages in a 1-to-1 chat, with privacy and authenticity guarantees.
|
||||
|
||||
## Specification
|
||||
|
||||
### Overview
|
||||
|
||||
This protocol MAY use any key-exchange mechanism previously discussed -
|
||||
|
||||
1. [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md)
|
||||
2. [WAKU2-NOISE](https://github.com/waku-org/specs/blob/master/standards/application/noise.md)
|
||||
|
||||
This protocol can provide end-to-end encryption
|
||||
to give peers a strong degree of privacy and security.
|
||||
Public chat messages are publicly readable by anyone since
|
||||
there's no permission model for who is participating in a public chat.
|
||||
|
||||
## Chat Flow
|
||||
|
||||
### Negotiation of a 1:1 chat
|
||||
|
||||
There are two phases in the initial negotiation of a 1:1 chat:
|
||||
|
||||
1. **Identity verification**
|
||||
(e.g., face-to-face contact exchange through QR code, Identicon matching).
|
||||
A QR code serves two purposes simultaneously -
|
||||
identity verification and initial key material retrieval;
|
||||
1. **Asynchronous initial key exchange**
|
||||
|
||||
For more information on account generation and trust establishment, see [65/ACCOUNT-ADDRESS](../65/account-address.md)
|
||||
|
||||
### Post Negotiation
|
||||
|
||||
After the peers have shared their public key material,
|
||||
a 1:1 chat can be established using the methods described in the
|
||||
key-exchange protocols mentioned above.
|
||||
|
||||
### Session management
|
||||
|
||||
The 1:1 chat is made robust by having sessions between peers.
|
||||
It is handled by the key-exchange protocol used. For example,
|
||||
|
||||
1. [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md),
|
||||
the session management is described in [54/WAKU2-X3DH-SESSIONS](../../waku/standards/application/54/x3dh-sessions.md)
|
||||
|
||||
2. [WAKU2-NOISE](https://github.com/waku-org/specs/blob/master/standards/application/noise.md),
|
||||
the session management is described in [WAKU2-NOISE-SESSIONS](https://github.com/waku-org/specs/blob/master/standards/application/noise-sessions.md)
|
||||
|
||||
## Negotiation of a 1:1 chat amongst multiple participants (group chat)
|
||||
|
||||
A small, private group chat can be constructed by having multiple participants
|
||||
negotiate a 1:1 chat amongst each other.
|
||||
Each participant MUST
|
||||
maintain a session with all other participants in the group chat.
|
||||
This allows for a group chat to be created with a small number of participants.
|
||||
|
||||
However, this method does not scale as the number of participants increases,
|
||||
for the following reasons -
|
||||
|
||||
1. The number of messages sent over the network increases with the number of participants.
|
||||
2. Handling the X3DH key exchange for each participant is computationally expensive.
|
||||
|
||||
The above issues are addressed in [56/STATUS-COMMUNITIES](../56/communities.md),
|
||||
with other trade-offs.
|
||||
|
||||
### Flow
|
||||
|
||||
The following flow describes how a group chat is created and maintained.
|
||||
|
||||
#### Membership Update Flow
|
||||
|
||||
Membership updates have the following wire format:
|
||||
|
||||
```protobuf
|
||||
message MembershipUpdateMessage {
|
||||
// The chat id of the private group chat
|
||||
// derived in the following way:
|
||||
// chat_id = hex(chat_creator_public_key) + "-" + random_uuid
|
||||
// This chat_id MUST be validated by all participants
|
||||
string chat_id = 1;
|
||||
// A list of events for this group chat, first 65 bytes are the signature,
|
||||
then is a
|
||||
// protobuf encoded MembershipUpdateEvent
|
||||
repeated bytes events = 2;
|
||||
oneof chat_entity {
|
||||
// An optional chat message
|
||||
ChatMessage message = 3;
|
||||
// An optional reaction to a message
|
||||
EmojiReaction emoji_reaction = 4;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Note that each element of `events` consists of a 65-byte signature
followed by an encoded `MembershipUpdateEvent`,

where `MembershipUpdateEvent` is defined as follows:
```protobuf
message MembershipUpdateEvent {
  // Lamport timestamp of the event
  uint64 clock = 1;
  // Optional list of public keys of the targets of the action
  repeated string members = 2;
  // Name of the chat for the CHAT_CREATED/NAME_CHANGED event types
  string name = 3;
  // The type of the event
  EventType type = 4;
  // Color of the chat for the CHAT_CREATED/COLOR_CHANGED event types
  string color = 5;
  // Chat image
  bytes image = 6;

  enum EventType {
    UNKNOWN = 0;
    CHAT_CREATED = 1; // See [CHAT_CREATED](#chat-created)
    NAME_CHANGED = 2; // See [NAME_CHANGED](#name-changed)
    MEMBERS_ADDED = 3; // See [MEMBERS_ADDED](#members-added)
    MEMBER_JOINED = 4; // See [MEMBER_JOINED](#member-joined)
    MEMBER_REMOVED = 5; // See [MEMBER_REMOVED](#member-removed)
    ADMINS_ADDED = 6; // See [ADMINS_ADDED](#admins-added)
    ADMIN_REMOVED = 7; // See [ADMIN_REMOVED](#admin-removed)
    COLOR_CHANGED = 8; // See [COLOR_CHANGED](#color-changed)
    IMAGE_CHANGED = 9; // See [IMAGE_CHANGED](#image-changed)
  }
}
```
<!-- Note:
I don't like defining wire formats which are out of the scope of the rfc this way.
Should explore alternatives -->
Note that the definitions for `ChatMessage` and
`EmojiReaction` can be found in
[chat_message.proto](https://github.com/status-im/status-go/blob/5fd9e93e9c298ed087e6716d857a3951dbfb3c1e/protocol/protobuf/chat_message.proto#L1)
and [emoji_reaction.proto](https://github.com/status-im/status-go/blob/5fd9e93e9c298ed087e6716d857a3951dbfb3c1e/protocol/protobuf/emoji_reaction.proto).
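The per-entry layout of `events` can be sketched in a few lines of Python. This is an illustrative helper, not part of the spec; the 65-byte length corresponds to a recoverable secp256k1 signature (r, s, recovery id), and the helper names are assumptions:

```python
SIGNATURE_LENGTH = 65  # r (32 bytes) + s (32 bytes) + recovery id (1 byte)

def split_event_entry(entry: bytes) -> tuple[bytes, bytes]:
    """Split one `events` entry into (signature, encoded MembershipUpdateEvent)."""
    if len(entry) <= SIGNATURE_LENGTH:
        raise ValueError("entry too short to contain a signature and an event")
    return entry[:SIGNATURE_LENGTH], entry[SIGNATURE_LENGTH:]

# Example with dummy bytes standing in for a real signed event:
entry = bytes(SIGNATURE_LENGTH) + b"\x08\x01"
signature, event_payload = split_event_entry(entry)
assert len(signature) == 65 and event_payload == b"\x08\x01"
```

Decoding `event_payload` would then be an ordinary protobuf decode of `MembershipUpdateEvent`.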
##### Chat Created

When creating a group chat, this is the first event that MUST be sent.
Any event with a clock value lower than this MUST be discarded.
Upon receiving this event a client MUST validate the `chat_id`
provided with the update and
create a chat identified by `chat_id`.

By default, the creator of the group chat is the only group admin.
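The `chat_id` derivation and receiver-side validation can be sketched as follows. This is a hedged sketch, not normative: whether the hex-encoded creator key carries a `0x` prefix is an implementation detail not fixed by this spec, and the function names are illustrative:

```python
import uuid

def make_chat_id(creator_public_key_hex: str) -> str:
    # chat_id = hex(chat_creator_public_key) + "-" + random_uuid
    return creator_public_key_hex + "-" + str(uuid.uuid4())

def validate_chat_id(chat_id: str, creator_public_key_hex: str) -> bool:
    """Receiver-side check: the chat_id must embed the creator's public key
    and end in a well-formed UUID."""
    expected_prefix = creator_public_key_hex + "-"
    if not chat_id.startswith(expected_prefix):
        return False
    try:
        uuid.UUID(chat_id[len(expected_prefix):])
    except ValueError:
        return False
    return True
```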
##### Name Changed

To change the name of the group chat, group admins MUST use a `NAME_CHANGED` event.
Upon receiving this event,
a client MUST validate the `chat_id` provided with the updates and
MUST ensure the author of the event is an admin of the chat,
otherwise the event MUST be ignored.
If the event is valid, the chat name SHOULD be changed according to the provided message.
##### Members Added

To add members to the chat, group admins MUST use a `MEMBERS_ADDED` event.
Upon receiving this event,
a participant MUST validate the `chat_id` provided with the updates and
MUST ensure the author of the event is an admin of the chat,
otherwise the event MUST be ignored.
If the event is valid,
a participant MUST update the list of members of the chat who have not joined,
adding the members received.
##### Member Joined

To signal the intent to start receiving messages from a given chat,
new participants MUST use a `MEMBER_JOINED` event.
Upon receiving this event,
a participant MUST validate the `chat_id` provided with the updates.
If the event is valid,
a participant MUST add the new participant to the list of participants stored locally.
Any message sent to the group chat MUST now include the new participant.
##### Member Removed

There are two ways in which a member MAY be removed from a group chat:

- A member MAY leave the chat by sending a `MEMBER_REMOVED` event,
with the `members` field containing their own public key.
- An admin MAY remove a member by sending a `MEMBER_REMOVED` event,
with the `members` field containing the public key of the member to be removed.

Each participant MUST validate the `chat_id` provided with the updates and
MUST ensure the author of the event is an admin of the chat,
otherwise the event MUST be ignored.
If the event is valid, a participant MUST update the local list of members accordingly.
##### Admins Added

To promote participants to group admin, group admins MUST use an `ADMINS_ADDED` event.
Upon receiving this event,
a participant MUST validate the `chat_id` provided with the updates and
MUST ensure the author of the event is an admin of the chat,
otherwise the event MUST be ignored.
If the event is valid,
a participant MUST update the list of admins of the chat accordingly.
##### Admin Removed

Group admins MUST NOT be able to remove other group admins.
An admin MAY remove themselves by sending an `ADMIN_REMOVED` event,
with the `members` field containing their own public key.
Each participant MUST validate the `chat_id` provided with the updates and
MUST ensure the author of the event is an admin of the chat,
otherwise the event MUST be ignored.
If the event is valid, a participant MUST update the list of admins of the chat accordingly.
##### Color Changed

To change the text color of the group chat name,
group admins MUST use a `COLOR_CHANGED` event.
##### Image Changed

To change the display image of the group chat,
group admins MUST use an `IMAGE_CHANGED` event.
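To illustrate how a client might enforce the rules in the event subsections, here is a hedged Python sketch covering a subset of the event types (`CHAT_CREATED`, `MEMBERS_ADDED`, `MEMBER_JOINED`). The state layout and function names are illustrative, not normative:

```python
from dataclasses import dataclass, field

# Illustrative numeric stand-ins for a subset of EventType values.
CHAT_CREATED, MEMBERS_ADDED, MEMBER_JOINED = 1, 3, 4

@dataclass
class GroupChat:
    chat_id: str
    created_clock: int = 0
    admins: set = field(default_factory=set)
    members: set = field(default_factory=set)

def apply_event(chat, author, event_type, clock, targets=()):
    """Apply one membership event; returns False when the event MUST be ignored."""
    if event_type == CHAT_CREATED:
        chat.created_clock = clock
        chat.admins.add(author)       # the creator is the only admin by default
        chat.members.add(author)
        return True
    if clock < chat.created_clock:    # events older than CHAT_CREATED are discarded
        return False
    if event_type == MEMBERS_ADDED:
        if author not in chat.admins: # only admins may add members
            return False
        chat.members.update(targets)
        return True
    if event_type == MEMBER_JOINED:
        chat.members.add(author)
        return True
    return False

chat = GroupChat("some-chat-id")
apply_event(chat, "alice", CHAT_CREATED, 1)
assert apply_event(chat, "bob", MEMBERS_ADDED, 2, ["carol"]) is False
assert apply_event(chat, "alice", MEMBERS_ADDED, 2, ["bob"]) is True
```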
## Security Considerations

1. Inherits the security considerations of the key-exchange mechanism used,
e.g., [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md) or
[WAKU2-NOISE](https://github.com/waku-org/specs/blob/master/standards/application/noise.md).

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md)
2. [WAKU2-NOISE](https://github.com/waku-org/specs/blob/master/standards/application/noise.md)
3. [65/STATUS-ACCOUNT](../65/account-address.md)
4. [54/WAKU2-X3DH-SESSIONS](../../waku/standards/application/54/x3dh-sessions.md)
5. [WAKU2-NOISE-SESSIONS](https://github.com/waku-org/specs/blob/master/standards/application/noise-sessions.md)
6. [56/STATUS-COMMUNITIES](../56/communities.md)
7. [chat_message.proto](https://github.com/status-im/status-go/blob/5fd9e93e9c298ed087e6716d857a3951dbfb3c1e/protocol/protobuf/chat_message.proto#L1)
8. [emoji_reaction.proto](https://github.com/status-im/status-go/blob/5fd9e93e9c298ed087e6716d857a3951dbfb3c1e/protocol/protobuf/emoji_reaction.proto)
---
slug: 63
title: 63/STATUS-Keycard-Usage
name: Status Keycard Usage
status: draft
category: Standards Track
description: Describes how an application can use the Status Keycard to create, store and transact with different account addresses.
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Jimmy Debe <jimmy@status.im>
---

## Terminology

- **Account**: A valid
[BIP-32](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki)
compliant key.
- **Multiaccount**: An account from which multiple Accounts can be derived.

## Abstract

This specification describes how an application can use the Status Keycard to:

1. Create Multiaccounts
2. Store Multiaccounts
3. Use Multiaccounts for transaction or message signing
4. Derive Accounts from Multiaccounts

More documentation on the Status Keycard can be found [here](https://keycard.tech/docs/).

## Motivation

The Status Keycard is a hardware wallet that can be used to store and
sign transactions.
For the purpose of the Status App,
this specification describes how the Keycard SHOULD be used to store and
sign transactions.
## Usage

### Endpoints

### 1. Initialize Keycard (`/init-keycard`)

To initialize the keycard for use with the application.
The keycard is locked with a 6 digit pin.

#### Request wire format

```json
{
  "pin": 6_digit_pin
}
```

#### Response wire format

```json
{
  "password": password_to_unlock_keycard,
  "puk": 12_digit_recovery_code,
  "pin": provided_pin
}
```

The keycard MUST be initialized before it can be used with the application.
The application SHOULD provide a way to recover the keycard in case the pin is forgotten.
### 2. Get Application Info (`/get-application-info`)

To check whether the keycard is ready to be used by the application.

#### Request wire format

The requester MAY add a `pairing` field to filter through the generated keys.

```json
{
  "pairing": <shared_secret>/<pairing_index>/<256_bit_salt> OR null
}
```

#### Response wire format

##### If the keycard is not initialized yet

```json
{
  "initialized?": false
}
```

##### If the keycard is initialized

```json
{
  "free-pairing-slots": number,
  "app-version": major_version.minor_version,
  "secure-channel-pub-key": valid_bip32_key,
  "key-uid": unique_id_of_the_default_key,
  "instance-uid": unique_instance_id,
  "paired?": bool,
  "has-master-key?": bool,
  "initialized?": true
}
```
### 3. Pairing the Keycard to the Client device (`/pair`)

To establish a secure communication channel described [here](https://keycard.tech/docs/apdu/opensecurechannel.html),
the keycard and the client device need to be paired.

#### Request wire format

```json
{
  "password": password_to_unlock_keycard
}
```

#### Response wire format

```json
"<shared_secret>/<pairing_index>/<256_bit_salt>"
```
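A client needs to carry the returned pairing string through subsequent requests. A minimal, hedged sketch of splitting it into its three components (the field encodings, hex here, are an assumption of this sketch, not fixed by the endpoint description):

```python
def parse_pairing(pairing: str) -> dict:
    """Split "<shared_secret>/<pairing_index>/<256_bit_salt>" into its parts."""
    shared_secret, pairing_index, salt = pairing.split("/")
    return {
        "shared_secret": shared_secret,
        "pairing_index": int(pairing_index),
        "salt": salt,
    }

info = parse_pairing("73686172656464736563726574/1/" + "ab" * 32)
assert info["pairing_index"] == 1
assert len(info["salt"]) == 64  # 256 bits, hex encoded
```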
### 4. Generate a new set of keys (`/generate-and-load-keys`)

To generate a new set of keys and load them onto the keycard.

#### Request wire format

```json
{
  "mnemonic": 12_word_mnemonic,
  "pairing": <shared_secret>/<pairing_index>/<256_bit_salt>,
  "pin": 6_digit_pin
}
```

#### Response wire format

```json
{
  "whisper-address": 20_byte_whisper_compatible_address,
  "whisper-private-key": whisper_private_key,
  "wallet-root-public-key": 256_bit_wallet_root_public_key,
  "encryption-public-key": 256_bit_encryption_public_key,
  "wallet-root-address": 20_byte_wallet_root_address,
  "whisper-public-key": 256_bit_whisper_public_key,
  "address": 20_byte_address,
  "wallet-address": 20_byte_wallet_address,
  "key-uid": 64_byte_unique_key_id,
  "wallet-public-key": 256_bit_wallet_public_key,
  "public-key": 256_bit_public_key,
  "instance-uid": 32_byte_unique_instance_id
}
```
### 5. Get a set of generated keys (`/get-keys`)

To fetch the keys that are currently loaded on the keycard.

#### Request wire format

```json
{
  "pairing": <shared_secret>/<pairing_index>/<256_bit_salt>,
  "pin": 6_digit_pin
}
```

#### Response wire format

```json
{
  "whisper-address": 20_byte_whisper_compatible_address,
  "whisper-private-key": whisper_private_key,
  "wallet-root-public-key": 256_bit_wallet_root_public_key,
  "encryption-public-key": 256_bit_encryption_public_key,
  "wallet-root-address": 20_byte_wallet_root_address,
  "whisper-public-key": 256_bit_whisper_public_key,
  "address": 20_byte_address,
  "wallet-address": 20_byte_wallet_address,
  "key-uid": 64_byte_unique_key_id,
  "wallet-public-key": 256_bit_wallet_public_key,
  "public-key": 256_bit_public_key,
  "instance-uid": 32_byte_unique_instance_id
}
```
### 6. Sign a transaction (`/sign`)

To sign a transaction using the keycard, passing in the pairing information and
the transaction to be signed.

#### Request wire format

```json
{
  "hash": 64_byte_hash_of_the_transaction,
  "pairing": <shared_secret>/<pairing_index>/<256_bit_salt>,
  "pin": 6_digit_pin,
  "path": bip32_path_to_the_key
}
```

#### Response wire format

```json
<256_bit_signature>
```
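Assembling the `/sign` request body can be sketched as below. This is a hedged sketch: the choice of hash function is chain-specific (Ethereum transactions use Keccak-256), so SHA-256 is used here purely as a stand-in, and the helper name is illustrative:

```python
import hashlib
import json

def build_sign_request(tx_bytes: bytes, pairing: str, pin: str, path: str) -> str:
    """Assemble the JSON body for the /sign endpoint described above."""
    return json.dumps({
        # stand-in hash; a real client would use the chain's transaction hash
        "hash": hashlib.sha256(tx_bytes).hexdigest(),
        "pairing": pairing,
        "pin": pin,
        "path": path,
    })

body = build_sign_request(b"raw tx", "secret/1/salt", "123456", "m/44'/60'/0'/0/0")
assert len(json.loads(body)["hash"]) == 64  # 32-byte digest, hex encoded
```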
### 7. Export a key (`/export-key`)

To export a key from the keycard, passing in the pairing information and
the path to the key to be exported.

#### Request wire format

```json
{
  "pairing": <shared_secret>/<pairing_index>/<256_bit_salt>,
  "pin": 6_digit_pin,
  "path": bip32_path_to_the_key
}
```

#### Response wire format

```json
<256_bit_public_key>
```
### 8. Verify a pin (`/verify-pin`)

To verify the pin of the keycard.

#### Request wire format

```json
{
  "pin": 6_digit_pin
}
```

#### Response wire format

```json
1_digit_status_code
```

Status code reference:

- 3: PIN is valid
<!--TODO: what are the other status codes?-->
### 9. Change the pin (`/change-pin`)

To change the pin of the keycard.

#### Request wire format

```json
{
  "new-pin": 6_digit_new_pin,
  "current-pin": 6_digit_current_pin,
  "pairing": <shared_secret>/<pairing_index>/<256_bit_salt>
}
```

#### Response wire format

##### If the operation was successful

```json
true
```

##### If the operation was unsuccessful

```json
false
```
### 10. Unblock the keycard (`/unblock-pin`)

If the Keycard is blocked due to too many incorrect pin attempts,
it can be unblocked using the PUK.

#### Request wire format

```json
{
  "puk": 12_digit_recovery_code,
  "new-pin": 6_digit_new_pin,
  "pairing": <shared_secret>/<pairing_index>/<256_bit_salt>
}
```

#### Response wire format

##### If the operation was successful

```json
true
```

##### If the operation was unsuccessful

```json
false
```
## Flows

Any application that uses the Status Keycard
MAY implement the following flows according to the actions listed above.

### 1. A new user wants to use the Keycard with the application

1. The user initializes the Keycard using the `/init-keycard` endpoint.
2. The user pairs the Keycard with the client device using the `/pair` endpoint.
3. The user generates a new set of keys using the `/generate-and-load-keys` endpoint.
4. The user can now use the Keycard to sign transactions using the `/sign` endpoint.

### 2. An existing user wants to use the Keycard with the application

1. The user pairs the Keycard with the client device using the `/pair` endpoint.
2. The user can now use the Keycard to sign transactions using the `/sign` endpoint.

### 3. An existing user wants to use the Keycard with a new client device

1. The user pairs the Keycard with the new client device using the `/pair` endpoint.
2. The user can now use the Keycard to sign transactions using the `/sign` endpoint.

### 4. An existing user wishes to verify the pin of the Keycard

1. The user verifies the pin of the Keycard using the `/verify-pin` endpoint.

### 5. An existing user wishes to change the pin of the Keycard

1. The user changes the pin of the Keycard using the `/change-pin` endpoint.

### 6. An existing user wishes to unblock the Keycard

1. The user unblocks the Keycard using the `/unblock-pin` endpoint.
## Security Considerations

Inherits the security considerations of [Status Keycard](https://keycard.tech/docs/).

## Privacy Considerations

Inherits the privacy considerations of [Status Keycard](https://keycard.tech/docs/).

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [BIP-32 specification](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki)
2. [Keycard documentation](https://keycard.tech/docs/)
3. [16/Keycard-Usage](https://specs.status.im/draft/16)
---
slug: 65
title: 65/STATUS-ACCOUNT-ADDRESS
name: Status Account Address
status: draft
category: Standards Track
description: Details of what a Status account address is and how account addresses are created and used.
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Corey Petty <corey@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Samuel Hawksby-Robinson <samuel@status.im>
---

## Abstract

This specification details what a Status account address is and
how account addresses are created and used.

## Background

The core concept of an account in Status is a set of cryptographic keypairs.
Namely, the combination of the following:

1. a Waku chat identity keypair
1. a set of cryptocurrency wallet keypairs

The Status node verifies or
derives everything else associated with the contact from the above items, including:

- Ethereum address (future verification, currently the same base keypair)
- identicon
- message signatures
## Initial Key Generation

### Public/Private Keypairs

- An ECDSA (secp256k1 curve) public/private keypair MUST be generated via a
[BIP43](https://github.com/bitcoin/bips/blob/master/bip-0043.mediawiki)
derived path from a
[BIP39](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki)
mnemonic seed phrase.

- The default paths are defined as such:
  - Waku Chat Key (`IK`): `m/43'/60'/1581'/0'/0` (post Multiaccount integration)
    - following [EIP1581](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1581.md)
  - Status Wallet paths: `m/44'/60'/0'/0/i` starting at `i=0`
    - following [BIP44](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)
    - NOTE: this (`i=0`) is also the current (and only)
    path for Waku key before Multiaccount integration
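The default derivation paths above can be expressed as simple constants and a helper; a minimal sketch (constant and helper names are illustrative):

```python
# Waku chat key (IK), per EIP-1581.
EIP1581_CHAT_KEY_PATH = "m/43'/60'/1581'/0'/0"

def wallet_path(i: int) -> str:
    """BIP44 Status wallet path for wallet account index i (default i=0)."""
    return f"m/44'/60'/0'/0/{i}"

assert wallet_path(0) == "m/44'/60'/0'/0/0"  # also the pre-Multiaccount Waku key path
assert wallet_path(2) == "m/44'/60'/0'/0/2"
```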
## Account Broadcasting

- A user is responsible for broadcasting certain information publicly so
that others may contact them.

### X3DH Prekey bundles

- Refer to [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md)
for details on the X3DH prekey bundle broadcasting, as well as regeneration.

## Optional Account additions

### ENS Username

- A user MAY register a public username on the Ethereum Name System (ENS).
This username is a user-chosen subdomain of the `stateofus.eth`
ENS registration that maps to their Waku identity key (`IK`).

### User Profile Picture

- An account MAY edit the `IK` generated identicon with a chosen picture.
This picture will become part of the publicly broadcasted profile of the account.

<!-- TODO: Elaborate on wallet account and multiaccount -->
## Wire Format

Below is the wire format for the account information that is broadcasted publicly.
An Account is referred to as a Multiaccount in the wire format.

```proto
message MultiAccount {
  string name = 1; // name of the account
  int64 timestamp = 2; // timestamp of the message
  string identicon = 3; // base64 encoded identicon
  repeated ColorHash color_hash = 4; // color hash of the identicon
  int64 color_id = 5; // color id of the identicon
  string keycard_pairing = 6; // keycard pairing code
  string key_uid = 7; // unique identifier of the account
  repeated IdentityImage images = 8; // images associated with the account
  string customization_color = 9; // color of the identicon
  uint64 customization_color_clock = 10; // clock of the identicon color, to track updates

  message ColorHash {
    repeated int64 index = 1;
  }

  message IdentityImage {
    string key_uid = 1; // unique identifier of the image
    string name = 2; // name of the image
    bytes payload = 3; // payload of the image
    int64 width = 4; // width of the image
    int64 height = 5; // height of the image
    int64 filesize = 6; // filesize of the image
    int64 resize_target = 7; // resize target of the image
    uint64 clock = 8; // clock of the image, to track updates
  }
}
```

The above payload is broadcasted when two devices
that belong to a user need to be paired.
## Security Considerations
|
||||
|
||||
- This specification inherits security considerations of
|
||||
[53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md) and
|
||||
[54/WAKU2-X3DH-SESSIONS](../../waku/standards/application/54/x3dh-sessions.md).
|
||||
|
||||
## Copyright
|
||||
|
||||
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
|
||||
|
||||
## References
|
||||
|
||||
### normative
|
||||
|
||||
- [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md)
|
||||
- [54/WAKU2-X3DH-SESSIONS](../../waku/standards/application/54/x3dh-sessions.md)
|
||||
- [55/STATUS-1TO1-CHAT](../55/1to1-chat.md)
|
||||
|
||||
## informative
|
||||
|
||||
- [BIP43](https://github.com/bitcoin/bips/blob/master/bip-0043.mediawiki)
|
||||
- [BIP39](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki)
|
||||
- [EIP1581](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1581.md)
|
||||
- [BIP44](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)
|
||||
- [Ethereum Name System](https://ens.domains/)
|
||||
- [Status Multiaccount](../63/account-address.md)
|
||||
---
slug: 65
title: 65/STATUS-ACCOUNT-ADDRESS
name: Status Account Address
status: draft
category: Standards Track
description: Details of what a Status account address is and how account addresses are created and used.
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Corey Petty <corey@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Samuel Hawksby-Robinson <samuel@status.im>
---

## Abstract

This specification details what a Status account address is and
how account addresses are created and used.

## Background

The core concept of an account in Status is a set of cryptographic keypairs.
Namely, the combination of the following:

1. a Waku chat identity keypair
1. a set of cryptocurrency wallet keypairs

The Status node verifies or
derives everything else associated with the contact from the above items, including:

- Ethereum address (future verification, currently the same base keypair)
- identicon
- message signatures

## Initial Key Generation

### Public/Private Keypairs

- An ECDSA (secp256k1 curve) public/private keypair MUST be generated via a
[BIP43](https://github.com/bitcoin/bips/blob/master/bip-0043.mediawiki)
derived path from a
[BIP39](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki)
mnemonic seed phrase.

- The default paths are defined as such:
  - Waku Chat Key (`IK`): `m/43'/60'/1581'/0'/0` (post Multiaccount integration)
    - following [EIP1581](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1581.md)
  - Status Wallet paths: `m/44'/60'/0'/0/i` starting at `i=0`
    - following [BIP44](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)
    - NOTE: this (`i=0`) is also the current (and only)
    path for the Waku key before Multiaccount integration

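The derivation paths above can be parsed mechanically. The sketch below (the function name is illustrative, not from status-go) splits a BIP32-style path into its child indices, setting the hardened bit `0x80000000` for components marked with `'`:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseDerivationPath splits a BIP32-style path such as "m/44'/60'/0'/0/0"
// into child indices; hardened components (marked ') get the 0x80000000 bit.
func parseDerivationPath(path string) ([]uint32, error) {
	parts := strings.Split(path, "/")
	if len(parts) < 2 || parts[0] != "m" {
		return nil, fmt.Errorf("path must start with m/")
	}
	indices := make([]uint32, 0, len(parts)-1)
	for _, p := range parts[1:] {
		hardened := strings.HasSuffix(p, "'")
		// bitSize 31 guarantees the index fits below the hardened bit.
		n, err := strconv.ParseUint(strings.TrimSuffix(p, "'"), 10, 31)
		if err != nil {
			return nil, err
		}
		idx := uint32(n)
		if hardened {
			idx |= 0x80000000
		}
		indices = append(indices, idx)
	}
	return indices, nil
}

func main() {
	chat, _ := parseDerivationPath("m/43'/60'/1581'/0'/0")
	wallet, _ := parseDerivationPath("m/44'/60'/0'/0/0")
	fmt.Println(chat)
	fmt.Println(wallet)
}
```
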
## Account Broadcasting

- A user is responsible for broadcasting certain information publicly so
that others may contact them.

### X3DH Prekey bundles

- Refer to [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md)
for details on X3DH prekey bundle broadcasting, as well as regeneration.

## Optional Account additions

### ENS Username

- A user MAY register a public username on the Ethereum Name System (ENS).
This username is a user-chosen subdomain of the `stateofus.eth`
ENS registration that maps to their Waku identity key (`IK`).

### User Profile Picture

- An account MAY edit the `IK` generated identicon with a chosen picture.
This picture becomes part of the publicly broadcast profile of the account.

<!-- TODO: Elaborate on wallet account and multiaccount -->

## Wire Format

Below is the wire format for the account information that is broadcast publicly.
An account is referred to as a `MultiAccount` in the wire format.

```proto
message MultiAccount {
  string name = 1;                        // name of the account
  int64 timestamp = 2;                    // timestamp of the message
  string identicon = 3;                   // base64 encoded identicon
  repeated ColorHash color_hash = 4;      // color hash of the identicon
  int64 color_id = 5;                     // color id of the identicon
  string keycard_pairing = 6;             // keycard pairing code
  string key_uid = 7;                     // unique identifier of the account
  repeated IdentityImage images = 8;      // images associated with the account
  string customization_color = 9;         // color of the identicon
  uint64 customization_color_clock = 10;  // clock of the identicon color, to track updates

  message ColorHash {
    repeated int64 index = 1;
  }

  message IdentityImage {
    string key_uid = 1;       // unique identifier of the image
    string name = 2;          // name of the image
    bytes payload = 3;        // payload of the image
    int64 width = 4;          // width of the image
    int64 height = 5;         // height of the image
    int64 filesize = 6;       // filesize of the image
    int64 resize_target = 7;  // resize target of the image
    uint64 clock = 8;         // clock of the image, to track updates
  }
}
```

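Fields such as `customization_color_clock` exist so paired devices can order concurrent profile updates. A minimal last-writer-wins sketch of that pattern (the helper is illustrative, not part of status-go):

```go
package main

import "fmt"

// applyColorUpdate accepts an incoming customization_color only when its
// clock is strictly newer than the locally stored one, so replayed or
// out-of-date broadcasts cannot roll the profile back.
func applyColorUpdate(localColor string, localClock uint64, newColor string, newClock uint64) (string, uint64) {
	if newClock > localClock {
		return newColor, newClock
	}
	return localColor, localClock
}

func main() {
	color, clock := applyColorUpdate("blue", 4, "red", 7) // newer update wins
	fmt.Println(color, clock)
	color, clock = applyColorUpdate(color, clock, "green", 5) // stale update ignored
	fmt.Println(color, clock)
}
```
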
The above payload is broadcast when two devices
belonging to the same user need to be paired.

## Security Considerations

- This specification inherits the security considerations of
[53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md) and
[54/WAKU2-X3DH-SESSIONS](../../waku/standards/application/54/x3dh-sessions.md).

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

### Normative

- [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md)
- [54/WAKU2-X3DH-SESSIONS](../../waku/standards/application/54/x3dh-sessions.md)
- [55/STATUS-1TO1-CHAT](../55/1to1-chat.md)

### Informative

- [BIP43](https://github.com/bitcoin/bips/blob/master/bip-0043.mediawiki)
- [BIP39](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki)
- [EIP1581](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1581.md)
- [BIP44](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)
- [Ethereum Name System](https://ens.domains/)
- [Status Multiaccount](../63/account-address.md)

@@ -1,4 +1,4 @@

# Status RFCs

Status is a communication tool providing privacy features for the user.
Specifications can also be viewed at [Status](https://status.app/specs).

@@ -1,127 +0,0 @@

---
title: 3RD-PARTY
name: 3rd party
status: deprecated
description: This specification discusses 3rd party APIs that Status relies on.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Volodymyr Kozieiev <volodymyr@status.im>
---

## Abstract

This specification discusses 3rd party APIs that Status relies on.
These APIs provide various capabilities, including:

- communicating with the Ethereum network,
- allowing users to view address and transaction details on external websites,
- retrieving fiat/crypto exchange rates,
- obtaining information about collectibles,
- hosting the privacy policy.

## Definitions

| Term | Description |
|-------------------|-------------------------------------------------------------------------------------------------------|
| Fiat money | Currency established as money, often by government regulation, but without intrinsic value. |
| Full node | A computer, connected to the Ethereum network, that enforces all Ethereum consensus rules. |
| Crypto-collectible| A unique, non-fungible digital asset, distinct from cryptocurrencies where tokens are identical. |

## Why 3rd Party APIs Can Be a Problem

Relying on 3rd party APIs conflicts with Status’s censorship-resistance principle.
Since Status aims to avoid suppression of information,
it is important to minimize reliance on 3rd parties that are critical to app functionality.

## 3rd Party APIs Used by the Current Status App

### Infura

**What is it?**
Infura hosts a collection of Ethereum full nodes and provides an API
to access the Ethereum and IPFS networks without requiring a full node.

**How Status Uses It**
Since Status operates on mobile devices,
it cannot rely on a local node.
Therefore, all Ethereum network communication happens via Infura.

**Concerns**
Making an HTTP request can reveal user metadata,
which could be exploited in attacks if Infura is compromised.
Infura uses centralized hosting providers;
if these providers fail or cut off service,
Ethereum-dependent features in Status would be affected.

### Etherscan

**What is it?**
Etherscan is a service that allows users to explore the Ethereum blockchain
for transactions, addresses, tokens, prices,
and other blockchain activities.

**How Status Uses It**
The Status Wallet allows users to view address and transaction details on Etherscan.

**Concerns**
If Etherscan becomes unavailable,
users won’t be able to view address or transaction details through Etherscan.
However, in-app information will still be accessible.

### CryptoCompare

**What is it?**
CryptoCompare provides live crypto prices, charts, and analysis from major exchanges.

**How Status Uses It**
Status regularly fetches crypto prices from CryptoCompare,
using this information to calculate fiat values
for transactions or wallet assets.

**Concerns**
HTTP requests can reveal metadata,
which could be exploited if CryptoCompare is compromised.
If CryptoCompare becomes unavailable,
Status won’t be able to show fiat equivalents for crypto in the wallet.

### Collectibles

Various services provide information on collectibles:

- [Service 1](https://api.pixura.io/graphql)
- [Service 2](https://www.etheremon.com/api)
- [Service 3](https://us-central1-cryptostrikers-prod.cloudfunctions.net/cards/)
- [Service 4](https://api.cryptokitties.co/)

**Concerns**
HTTP requests can reveal metadata,
which could be exploited if these services are compromised.

### Iubenda

**What is it?**
Iubenda helps create compliance documents for websites and apps across jurisdictions.

**How Status Uses It**
Status’s privacy policy is hosted on Iubenda.

**Concerns**
If Iubenda becomes unavailable,
users will be unable to view the app's privacy policy.

## Changelog

| Version | Comment |
|---------|-----------------|
| 0.1.0 | Initial release |

## Copyright

Copyright and related rights waived via CC0.

## References

- [GraphQL](https://api.pixura.io/graphql)
- [Etheremon](https://www.etheremon.com/api)
- [Cryptostrikers](https://us-central1-cryptostrikers-prod.cloudfunctions.net/cards/)
- [Cryptokitties](https://api.cryptokitties.co/)

@@ -1,138 +0,0 @@

---
title: IPFS-gateway-for-Sticker-Pack
name: IPFS gateway for Sticker Pack
status: deprecated
description: This specification describes how Status uses the IPFS gateway to store stickers.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Gheorghe Pinzaru <gheorghe@status.im>
---

## Abstract

This specification describes how Status uses the IPFS gateway
to store stickers.
The specification explores the image format,
how a user uploads stickers,
and how an end user can see them inside the Status app.

## Definition

| Term | Description |
|------------------|----------------------------------------------------------------------------------------|
| **Stickers** | A set of images which can be used to express emotions |
| **Sticker Pack** | ERC721 token which includes the set of stickers |
| **IPFS** | P2P network used to store and share data, in this case, the images for the sticker pack |

## Specification

### Image format

Accepted image file types are `PNG`, `JPG/JPEG` and `GIF`,
with a maximum allowed size of 300kb.
The minimum sticker image resolution is 512x512,
and its background SHOULD be transparent.

### Distribution

The node implements sticker packs as [ERC721 tokens](https://eips.ethereum.org/EIPS/eip-721)
which contain a set of stickers.
The node stores these stickers inside the sticker pack as a set of hyperlinks pointing to IPFS storage.
These hyperlinks are publicly available and can be accessed by any user inside the Status chat.
Stickers can be sent in chat only by accounts that own the sticker pack.

### IPFS gateway

At the time of writing, the current main Status app uses the [Infura](https://infura.io/) gateway.
However, clients could choose a different gateway or run their own IPFS node.
The Infura gateway is an HTTPS gateway which,
given an HTTP GET request with the multihash block address, returns the stored content at that address.

The node requires the use of a gateway to enable easy access to resources over HTTP.
The node stores each image of a sticker inside IPFS using a unique address that is
derived from the hash of the file.
This ensures that a file can't be overridden,
and an end user of IPFS will receive the same file at a given address.

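Fetching through a gateway reduces to an HTTP GET against the standard `/ipfs/<multihash>` gateway path. A sketch of the URL construction (the host and multihash shown are placeholders, not Status configuration):

```go
package main

import (
	"fmt"
	"strings"
)

// ipfsGatewayURL builds the HTTP URL a client GETs to fetch a block through
// an IPFS gateway; public gateways expose content under /ipfs/<multihash>.
// The host is configurable, so clients are not tied to one provider.
func ipfsGatewayURL(gatewayHost, multihash string) string {
	return fmt.Sprintf("https://%s/ipfs/%s", strings.TrimSuffix(gatewayHost, "/"), multihash)
}

func main() {
	fmt.Println(ipfsGatewayURL("ipfs.example.org", "QmExampleMultihash"))
}
```
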
### Security

The IPFS gateway acts as an end user of IPFS
and allows users of the gateway to access IPFS without connecting to the P2P network.
Usage of a gateway introduces a potential risk for the users of that gateway provider.
If the security of the provider is compromised, meta information such as the IP address,
User-Agent and other details of its users can be leaked.
If the provider's servers are unavailable, the node loses access through the gateway to the IPFS network.

### Status sticker usage

When the app shows a sticker, the Status app makes an HTTP GET request to the IPFS gateway using the hyperlink.

To send a sticker in chat, a user of Status should buy or install a sticker pack.

To be available for installation, a sticker pack should be submitted to the Sticker Market by an author.

#### Submit a sticker

To submit a sticker pack, the author should upload all assets to IPFS.
Then generate a payload including the name, author, thumbnail,
preview and a list of stickers in the [EDN format](https://github.com/edn-format/edn), following this structure:

```text
{meta {:name "Sticker pack name"
       :author "Author Name"
       :thumbnail "e30101701220602163b4f56c747333f43775fdcbe4e62d6a3e147b22aaf6097ce0143a6b2373"
       :preview "e30101701220ef54a5354b78ef82e542bd468f58804de71c8ec268da7968a1422909357f2456"
       :stickers [{:hash "e301017012207737b75367b8068e5bdd027d7b71a25138c83e155d1f0c9bc5c48ff158724495"}
                  {:hash "e301017012201a9cdea03f27cda1aede7315f79579e160c7b2b6a2eb51a66e47a96f47fe5284"}]}}
```

All asset fields are contenthash fields as per [EIP 1577](https://eips.ethereum.org/EIPS/eip-1577).
The node also uploads this payload to IPFS, and uses the resulting IPFS address in the content field of the Sticker Market contract.
See the [Sticker Market spec](https://github.com/status-im/sticker-market/blob/651e88e5f38c690e57ecaad47f46b9641b8b1e27/docs/specification.md) for a detailed description of the contract.


#### Install a sticker pack

To install a sticker pack, the node fetches all sticker packs available in the Sticker Market.
The node needs the following steps to fetch all sticker packs:

#### 1. Get total number of sticker packs

Calling `packCount()` on the Sticker Market contract returns the number of registered sticker packs as a `uint256`.

#### 2. Get sticker pack by id

IDs are represented as `uint256` and are incremental from `0` to the total number of sticker packs in the contract,
received in the previous step.
To get a sticker pack, call `getPackData(sticker-pack-id)`; the return type is `["bytes4[]" "address" "bool" "uint256" "uint256" "bytes"]`,
which represents the following fields: `[category owner mintable timestamp price contenthash]`.
Price is the SNT value in wei set by the sticker pack owner.
The contenthash is the IPFS address described in the [submit description](#submit-a-sticker) above.
The specification of the other fields can be found in the [Sticker Market spec](https://github.com/status-im/sticker-market/blob/651e88e5f38c690e57ecaad47f46b9641b8b1e27/docs/specification.md).

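Steps 1 and 2 amount to iterating pack ids from `0` to `packCount() - 1` and reading each pack's data. A sketch with the contract calls mocked behind an interface (the Go binding shown is hypothetical; a real client would back it with an Ethereum RPC binding):

```go
package main

import "fmt"

// StickerMarket captures just the two contract calls used when listing
// packs; a real client would implement this with an Ethereum RPC binding.
type StickerMarket interface {
	PackCount() uint64
	GetPackData(id uint64) (owner string, price uint64, contenthash string)
}

// fetchAllPacks walks ids 0..packCount-1 and collects each contenthash.
func fetchAllPacks(m StickerMarket) []string {
	var hashes []string
	for id := uint64(0); id < m.PackCount(); id++ {
		_, _, ch := m.GetPackData(id)
		hashes = append(hashes, ch)
	}
	return hashes
}

// fakeMarket is a test double standing in for the deployed contract.
type fakeMarket struct{ hashes []string }

func (f fakeMarket) PackCount() uint64 { return uint64(len(f.hashes)) }
func (f fakeMarket) GetPackData(id uint64) (string, uint64, string) {
	return "0x0", 0, f.hashes[id]
}

func main() {
	m := fakeMarket{hashes: []string{"hash-a", "hash-b"}}
	fmt.Println(fetchAllPacks(m))
}
```
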

#### 3. Get owned sticker packs

The current Status app fetches owned sticker packs when opening any sticker view
(a screen which shows a sticker pack, or the list of sticker packs).
To get owned packs, get all owned tokens for the current account address
by calling `balanceOf(address)`, where address is the address of the current account.
This method returns a `uint256` representing the count of available tokens.
The `tokenOfOwnerByIndex(address,uint256)` method,
called with the address of the user and an ID in the form of a `uint256`
incremented from 0 to the total number of tokens, gives the token id.
To get the sticker pack id from a token, call `tokenPackId(uint256)`, where `uint256` is the token id.
This method will return a `uint256` which is the id of the owned sticker pack.

#### 4. Buy a sticker pack

To buy a sticker pack, call `approveAndCall(address,uint256,bytes)`,
where `address` is the address of the buyer, `uint256` is the price, and the third parameter `bytes` is the callback called if approved.
In the callback, call `buyToken(uint256,address,uint256)`; the first parameter is the sticker pack id, the second the buyer's address, and the last the price.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [ERC721 Token Standard](https://eips.ethereum.org/EIPS/eip-721)
- [Infura](https://infura.io/)
- [EDN Format](https://github.com/edn-format/edn)
- [EIP 1577](https://eips.ethereum.org/EIPS/eip-1577)
- [Sticker Market Specification](https://github.com/status-im/sticker-market/blob/651e88e5f38c690e57ecaad47f46b9641b8b1e27/docs/specification.md)

@@ -1,461 +0,0 @@

---
title: ACCOUNT
name: Account
status: deprecated
description: This specification explains what a Status account is, and how a node establishes trust.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Corey Petty <corey@status.im>
- Oskar Thorén <oskar@status.im>
- Samuel Hawksby-Robinson <samuel@status.im>
---

## Abstract

This specification explains what a Status account is,
and how a node establishes trust.

## Introduction

The core concept of an account in Status is a set of cryptographic keypairs.
Namely, the combination of the following:

1. a Whisper/Waku chat identity keypair
1. a set of cryptocurrency wallet keypairs

The node verifies or derives everything else associated with the contact from the above items, including:

- Ethereum address (future verification, currently the same base keypair)
- 3 word mnemonic name
- identicon
- message signatures

## Initial Key Generation

### Public/Private Keypairs

- An ECDSA (secp256k1 curve) public/private keypair MUST be generated via a [BIP43](https://github.com/bitcoin/bips/blob/master/bip-0043.mediawiki) derived path from a [BIP39](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki) mnemonic seed phrase.
- The default paths are defined as such:
  - Whisper/Waku Chat Key (`IK`): `m/43'/60'/1581'/0'/0` (post Multiaccount integration)
    - following [EIP1581](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1581.md)
  <!-- WE CURRENTLY DO NOT IMPLEMENT ENCRYPTION KEY, FOR FUTURE - C.P. -->
  <!-- - DB encryption Key (`DBK`): `m/43'/60'/1581'/1'/0` (post Multiaccount integration) -->
  <!-- - following [EIP1581](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1581.md) -->
  - Status Wallet paths: `m/44'/60'/0'/0/i` starting at `i=0`
    - following [BIP44](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)
    - NOTE: this (`i=0`) is also the current (and only) path for the Whisper/Waku key before Multiaccount integration

### X3DH Prekey bundle creation

- Status follows the X3DH prekey bundle scheme that [Open Whisper Systems](https://en.wikipedia.org/wiki/Signal_Messenger#2013%E2%80%932018:_Open_Whisper_Systems) (not to be confused with the Whisper sub-protocol) outlines [in their documentation](https://signal.org/docs/specifications/x3dh/#the-x3dh-protocol), with the following exceptions:
  - Status does not publish one-time keys `OPK` or perform DH including them, because there are no central servers in the Status implementation.
- A client MUST create X3DH prekey bundles, each defined by the following items:
  - Identity Key: `IK`
  - Signed prekey: `SPK`
  - Prekey signature: `Sig(IK, Encode(SPK))`
  - Timestamp
- These bundles are made available in a variety of ways, as defined in section 2.1.

## Account Broadcasting

- A user is responsible for broadcasting certain information publicly so that others may contact them.

### X3DH Prekey bundles

- A client SHOULD regenerate a new X3DH prekey bundle every 24 hours. This MAY be done in a lazy way, such that a client that does not come online past this time period does not regenerate or broadcast bundles.
- The current bundle SHOULD be broadcast on a Whisper/Waku topic specific to the user's Identity Key, `{IK}-contact-code`, intermittently. This MAY be done every 6 hours.
- A bundle SHOULD accompany every message sent.
- TODO: retrieval of long-time offline users' bundles via `{IK}-contact-code`

## Optional Account additions

### ENS Username

- A user MAY register a public username on the Ethereum Name System (ENS). This username is a user-chosen subdomain of the `stateofus.eth` ENS registration that maps to their Whisper/Waku identity key (`IK`).

<!-- ### User Profile Picture
- An account MAY edit the `IK` generated identicon with a chosen picture. This picture will become part of the publicly broadcast profile of the account. -->

<!-- TODO: Elaborate on wallet account and multiaccount -->
<!-- TODO: Elaborate on security implications -->

## Trust establishment

**Trust establishment deals with users verifying they are communicating with who they think they are.**

### Terms Glossary

| term | description |
| ------------------------- | ----------- |
| privkey | ECDSA secp256k1 private key |
| pubkey | ECDSA secp256k1 public key |
| Whisper/Waku key | pubkey for chat with HD derivation path m/43'/60'/1581'/0'/0 |

### Contact Discovery

#### Public channels

- Public group channels in Status are a broadcast/subscription system. All public messages are encrypted with a symmetric key derived from the channel name, `K_{pub,sym}`, which is publicly known.
- The creation of a public group channel's symmetric key MUST follow the [web3 API](https://web3js.readthedocs.io/en/1.0/web3-shh.html#generatesymkeyfrompassword)'s `web3.shh.generateSymKeyFromPassword` function.
- In order to post to a public group channel, a client MUST have a valid account created.
- In order to listen to a public group channel, a client must subscribe to the channel name.
The sender of a message is derived from the message's signature.
- Discovery of channel names is not currently part of the protocol, and is typically done out of band.
If a channel name that has not been used before is specified, the channel will be created.
- A client MUST sign the message, otherwise it will be discarded by the recipients.
- channel name specification:
  - matches `[a-z0-9\-]`
  - is not a public key


#### Private 1:1 messages

This can be done in the following ways:

1. scanning a user generated QR code
1. discovery through the Status app
1. asynchronous X3DH key exchange
1. public key via public channel listening
   - `status-mobile/src/status_im/contact_code/core.cljs`
1. contact codes
1. decentralized storage (not implemented)
1. Whisper/Waku

### Initial Key Exchange

#### Bundles

- An X3DH prekey bundle is defined as ([code](https://github.com/status-im/status-go/messaging/chat/protobuf/encryption.pb.go)):

```golang
Identity      // Identity key
SignedPreKeys // a map of installation id to array of signed prekeys by that installation id
Signature     // Prekey signature
Timestamp     // When the bundle was last created locally
```

- include BundleContainer
- a new bundle SHOULD be created at least every 12 hours
- a node only generates a bundle when it is used
- a bundle SHOULD be distributed on the contact code channel. This is the Whisper and Waku topic `{IK}-contact-code`,
where `IK` is the hex encoded public key of the user, prefixed with `0x`.
The node encrypts the channel in the same way it encrypts public chats.


### Contact Verification

To verify that contact key information is as it should be, use the following.

#### Identicon

A low-poly identicon is deterministically generated from the Whisper/Waku chat public key.
This can be compared out of band to ensure the receiver's public key is the one stored locally.

#### 3 word pseudonym / Whisper/Waku key fingerprint

Status generates a deterministic 3-word random pseudonym from the Whisper/Waku chat public key.
This pseudonym acts as a human readable fingerprint of the Whisper/Waku chat public key.
This name also shows when viewing a contact's public profile and in the chat UI.

- implementation: [gfycat](https://github.com/status-im/status-mobile/tree/develop/src/status_im/utils/gfycat)

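One way to sketch such a deterministic mapping: hash the public key and index a word list with bytes of the digest. The tiny word list and the indexing scheme below are stand-ins for illustration, not Status's actual gfycat derivation:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// threeWordPseudonym derives a human-readable fingerprint from a public key:
// the digest is deterministic, so the same key always maps to the same words.
// Word list and indexing are illustrative stand-ins.
func threeWordPseudonym(pubKey []byte, words []string) string {
	sum := sha256.Sum256(pubKey)
	n := len(words)
	return fmt.Sprintf("%s %s %s",
		words[int(sum[0])%n], words[int(sum[1])%n], words[int(sum[2])%n])
}

func main() {
	words := []string{"amber", "brisk", "cedar", "dapper", "ember", "frosty"}
	fmt.Println(threeWordPseudonym([]byte{0x04, 0xab, 0xcd}, words))
}
```
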
|
||||
#### ENS name
|
||||
|
||||
Status offers the ability to register a mapping of a human-readable subdomain of `stateofus.eth` to a user's Whisper/Waku chat public key.
The user purchases this registration (currently by staking 10 SNT)
and the node stores it on the Ethereum mainnet blockchain for public lookup.

<!-- TODO: Elaborate on security implications -->

<!-- TODO: Incorporate or cut below into proper spec

### Possible Connection Breakdown

possible connections

- client - client (not really ever, this is facilitated through all other connections)
  - personal chat
    - ratcheted with X3DH
  - private group chat
    - pairwise ratcheted with X3DH
  - public chat
- client - mailserver (statusd + ???)
  - a mailserver identifies itself by an [enode address](https://github.com/ethereum/wiki/wiki/enode-url-format)
- client - Whisper/Waku node (statusd)
  - a node identifies itself by an enode address
- client - bootnode (go-ethereum)
  - a bootnode identifies itself by
    - an enode address
    - `NOTE: rendezvous information here`
- client - ENS registry (ethereum blockchain -> default to infura)
- client - Ethereum RPC (custom go-ethereum RPC API -> default to infura API)
- client - IPFS (Status hosted IPFS gateway -> defaults to ???)
  - we have a status hosted IPFS gateway for pinning but it currently isn't used much.

### Notes

A user in the system is a public-private key pair using the Elliptic-Curve Cryptography secp256k1 that Ethereum uses.

- A 3-word random name is derived from the public key using the following package
  - `NOTE: need to find package`
  - This provides an associated human-readable fingerprint to the user's public key
- A user can optionally add additional layers on top of this keypair
  - Chosen username
  - ENS username

All messages sent are encrypted with the public key of the destination and signed by the private key of the given user using the following scheme:

- private chat
  - X3DH is used to define shared secrets which are then double ratcheted
- private group chat
  - considered pairwise private chats
- public group chat
  - the message is encrypted with a symmetric key derived from the chat name

-->
## Public Key Serialization

Colloquially known as "public key compression" and "public key decompression".

The node SHOULD provide functionality for the serialization and deserialization of public / chat keys.

For maximum flexibility, when implementing this functionality, the node MUST support public keys encoded in a range of encoding formats, detailed below.

### Basic Serialization Example

A typical hexadecimal encoded elliptic curve (EC) public key (such as a secp256k1 pk) looks like this:

```text
0x04261c55675e55ff25edb50b345cfb3a3f35f60712d251cbaaab97bd50054c6ebc3cd4e22200c68daf7493e1f8da6a190a68a671e2d3977809612424c7c3888bc6
```

A minor modification for compatibility and flexibility makes the key self-identifiable and easily parsable:

```text
fe70104261c55675e55ff25edb50b345cfb3a3f35f60712d251cbaaab97bd50054c6ebc3cd4e22200c68daf7493e1f8da6a190a68a671e2d3977809612424c7c3888bc6
```

EC serialization and compact encoding produce a much smaller string representation of the original key:

```text
zQ3shPyZJnxZK4Bwyx9QsaksNKDYTPmpwPvGSjMYVHoXHeEgB
```
### Public Key "Compression" Rationale

Serialized and compactly encoded ("compressed") public keys have a number of UI / UX advantages
over non-serialized, less densely encoded public keys.

Compressed public keys are smaller, and users may perceive them as less intimidating and less unnecessarily large.
Compare the "uncompressed" and "compressed" versions of the same public key from the example above:

- `0xe70104261c55675e55ff25edb50b345cfb3a3f35f60712d251cbaaab97bd50054c6ebc3cd4e22200c68daf7493e1f8da6a190a68a671e2d3977809612424c7c3888bc6`
- `zQ3shPyZJnxZK4Bwyx9QsaksNKDYTPmpwPvGSjMYVHoXHeEgB`

The user can transmit and share the same data at roughly one third of the original size:
136 characters uncompressed vs 49 characters compressed, a character length reduction of 64%.

The user client app MAY use the compressed public keys throughout the user interface.
For example, in the `status-mobile` implementation of the user interface
the following places could take advantage of a significantly smaller public key:

- `Onboarding` > `Choose a chat name`
- `Profile` > `Header`
- `Profile` > `Share icon` > `QR code popover`
- `Invite friends` url from `Invite friends` button and `+ -button` > `Invite friends`
- Other user `Profile details`
- `Profile details` > `Share icon` > `QR code popover`

In the case of QR codes, a compressed public key can reduce the complexity of the derived codes:

| Uncompressed |
| --- |
| *(QR code image)* |

| Compressed |
| --- |
| *(QR code image)* |
### Key Encoding

When implementing the pk de/serialization functionality, the node MUST use the [multiformats/multibase](https://github.com/multiformats/multibase)
encoding protocol to interpret incoming key data and to return key data in a desired encoding.

The node SHOULD support the following `multibase` encoding formats.

```csv
encoding, code, description, status
identity, 0x00, 8-bit binary (encoder and decoder keeps data unmodified), default
base2, 0, binary (01010101), candidate
base8, 7, octal, draft
base10, 9, decimal, draft
base16, f, hexadecimal, default
base16upper, F, hexadecimal, default
base32hex, v, rfc4648 case-insensitive - no padding - highest char, candidate
base32hexupper, V, rfc4648 case-insensitive - no padding - highest char, candidate
base32hexpad, t, rfc4648 case-insensitive - with padding, candidate
base32hexpadupper, T, rfc4648 case-insensitive - with padding, candidate
base32, b, rfc4648 case-insensitive - no padding, default
base32upper, B, rfc4648 case-insensitive - no padding, default
base32pad, c, rfc4648 case-insensitive - with padding, candidate
base32padupper, C, rfc4648 case-insensitive - with padding, candidate
base32z, h, z-base-32 (used by Tahoe-LAFS), draft
base36, k, base36 [0-9a-z] case-insensitive - no padding, draft
base36upper, K, base36 [0-9A-Z] case-insensitive - no padding, draft
base58btc, z, base58 bitcoin, default
base58flickr, Z, base58 flickr, candidate
base64, m, rfc4648 no padding, default
base64pad, M, rfc4648 with padding - MIME encoding, candidate
base64url, u, rfc4648 no padding, default
base64urlpad, U, rfc4648 with padding, default
```

**Note:** this specification RECOMMENDS that implementations extend the standard `multibase` protocol
to parse strings prepended with `0x` as `f` hexadecimal encoded bytes.

Implementing this recommendation will allow the node to correctly interpret traditionally identified hexadecimal strings (e.g. `0x1337c0de`).

*Example:*

`0xe70102261c55675e55ff25edb50b345cfb3a3f35f60712d251cbaaab97bd50054c6ebc`

SHOULD be interpreted as

`fe70102261c55675e55ff25edb50b345cfb3a3f35f60712d251cbaaab97bd50054c6ebc`

This specification RECOMMENDS that the consuming service of the node use a compact encoding type,
such as base64 or base58, to allow for as short a representation of the key as possible.
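The `0x`-to-`f` extension described above amounts to a one-line normalization step. A minimal sketch (the function name is illustrative, not taken from `status-go`):

```python
def normalize_multibase_hex(s: str) -> str:
    """Treat a conventional '0x' hex prefix as the multibase 'f' (base16) code."""
    return "f" + s[2:] if s.startswith("0x") else s

# normalize_multibase_hex("0x1337c0de") == "f1337c0de"
```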
### Public Key Types

When implementing the pk de/serialization functionality, the node MUST support the [multiformats/multicodec](https://github.com/multiformats/multicodec) key type identifiers for the following public key type.

| Name            | Tag | Code   | Description          |
| --------------- | --- | ------ | -------------------- |
| `secp256k1-pub` | key | `0xe7` | Secp256k1 public key |

For a public key to be identifiable to the node, the public key data MUST be prepended with the relevant [multiformats/unsigned-varint](https://github.com/multiformats/unsigned-varint) formatted code.

*Example:*

Below is a representation of a deserialized secp256k1 public key.

```text
04
26 | 1c | 55 | 67 | 5e | 55 | ff | 25
ed | b5 | 0b | 34 | 5c | fb | 3a | 3f
35 | f6 | 07 | 12 | d2 | 51 | cb | aa
ab | 97 | bd | 50 | 05 | 4c | 6e | bc
3c | d4 | e2 | 22 | 00 | c6 | 8d | af
74 | 93 | e1 | f8 | da | 6a | 19 | 0a
68 | a6 | 71 | e2 | d3 | 97 | 78 | 09
61 | 24 | 24 | c7 | c3 | 88 | 8b | c6
```

The `multicodec` code for a secp256k1 public key is `0xe7`.

Encoding the code `0xe7` as a `multiformats/unsigned-varint` gives the bytes `0xe7 0x01`; prepending these to the public key results in the representation below.

```text
e7 | 01 | 04
26 | 1c | 55 | 67 | 5e | 55 | ff | 25
ed | b5 | 0b | 34 | 5c | fb | 3a | 3f
35 | f6 | 07 | 12 | d2 | 51 | cb | aa
ab | 97 | bd | 50 | 05 | 4c | 6e | bc
3c | d4 | e2 | 22 | 00 | c6 | 8d | af
74 | 93 | e1 | f8 | da | 6a | 19 | 0a
68 | a6 | 71 | e2 | d3 | 97 | 78 | 09
61 | 24 | 24 | c7 | c3 | 88 | 8b | c6
```
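The varint-prefixing step above can be sketched as follows. This is an illustrative Python sketch of multiformats unsigned-varint (little-endian base-128) encoding, not the `status-go` implementation; `uvarint` and `tag_public_key` are hypothetical names:

```python
SECP256K1_PUB = 0xE7  # multicodec code for secp256k1-pub

def uvarint(n: int) -> bytes:
    """Encode n as a multiformats unsigned-varint (7 bits per byte, LSB first)."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def tag_public_key(code: int, key: bytes) -> bytes:
    """Prepend the uvarint-encoded multicodec code to the raw key bytes."""
    return uvarint(code) + key

# uvarint(0xE7) == b"\xe7\x01", so a tagged uncompressed key starts with e7 01 04.
```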
### De/Serialization Process Flow

When implementing the pk de/serialization functionality, the node MUST be passed a `multicodec` identified public key,
of the above supported types, encoded with a valid `multibase` identifier.

This specification RECOMMENDS that the node also accept an encoding type parameter to encode the output data.
This provides for the case where the user requires the de/serialized key to be in a different encoding to that of the given key.

#### Serialization Example

A hexadecimal encoded secp256k1 public chat key is typically represented as below:

```text
0x04261c55675e55ff25edb50b345cfb3a3f35f60712d251cbaaab97bd50054c6ebc3cd4e22200c68daf7493e1f8da6a190a68a671e2d3977809612424c7c3888bc6
```

To be properly interpreted by the node for serialization, the public key MUST be prepended with the `multicodec` `uvarint` code `0xe7 0x01`
and encoded with a valid `multibase` encoding, giving the following:

```text
fe70104261c55675e55ff25edb50b345cfb3a3f35f60712d251cbaaab97bd50054c6ebc3cd4e22200c68daf7493e1f8da6a190a68a671e2d3977809612424c7c3888bc6
```

If adhering to the specification recommendation to provide the user with an output encoding parameter,
the above string would be passed to the node with the following `multibase` encoding identifier.

In this example the output encoding is defined as `base58 bitcoin`.

```text
z
```

The return value in this case would be

```text
zQ3shPyZJnxZK4Bwyx9QsaksNKDYTPmpwPvGSjMYVHoXHeEgB
```
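The `z` (base58btc) output step can be sketched as below. This is an illustrative multibase encoder using the standard Bitcoin base58 alphabet, not the `status-go` code; `multibase_base58btc` is a hypothetical name:

```python
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def multibase_base58btc(data: bytes) -> str:
    """Encode bytes as base58btc and prepend the multibase 'z' identifier."""
    n = int.from_bytes(data, "big")
    digits = ""
    while n:
        n, rem = divmod(n, 58)
        digits = B58_ALPHABET[rem] + digits
    # Each leading zero byte maps to the zero digit '1'.
    leading_zeros = len(data) - len(data.lstrip(b"\x00"))
    return "z" + "1" * leading_zeros + digits

# The serialized (multicodec-tagged, compressed) key from this section:
serialized = bytes.fromhex(
    "e70102261c55675e55ff25edb50b345cfb3a3f35f60712d251cbaaab97bd50054c6ebc"
)
# multibase_base58btc(serialized) yields the 49-character "zQ3sh..." form.
```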
Which after `multibase` decoding can be represented in bytes as below:

```text
e7 | 01 | 02
26 | 1c | 55 | 67 | 5e | 55 | ff | 25
ed | b5 | 0b | 34 | 5c | fb | 3a | 3f
35 | f6 | 07 | 12 | d2 | 51 | cb | aa
ab | 97 | bd | 50 | 05 | 4c | 6e | bc
```
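Note that the leading byte of the key changed from `04` (uncompressed SEC1 point) to `02` (compressed, even Y coordinate). A hedged sketch of this standard EC point compression step, using the example key from this section (illustrative, not the `status-go` code):

```python
def compress_point(uncompressed: bytes) -> bytes:
    """SEC1 compression: keep X, replace the 04 prefix with 02/03 by Y parity."""
    assert uncompressed[0] == 0x04 and len(uncompressed) == 65
    x, y = uncompressed[1:33], uncompressed[33:65]
    prefix = 0x02 if y[-1] % 2 == 0 else 0x03  # 02 = even Y, 03 = odd Y
    return bytes([prefix]) + x

pk = bytes.fromhex(
    "04261c55675e55ff25edb50b345cfb3a3f35f60712d251cbaaab97bd50054c6ebc"
    "3cd4e22200c68daf7493e1f8da6a190a68a671e2d3977809612424c7c3888bc6"
)
# compress_point(pk).hex() starts with "02261c55..." as in the byte layout above.
```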
#### Deserialization Example

For the user, the deserialization process is exactly the same as serialization, with the exception
that the user MUST provide a serialized public key for deserialization; otherwise the deserialization algorithm will fail.
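At the heart of deserialization is standard secp256k1 point decompression: Y is recovered by solving the curve equation y² = x³ + 7 over the field prime. A hedged sketch of that math (illustrative, not the `status-go` implementation):

```python
P = 2**256 - 2**32 - 977  # secp256k1 field prime

def decompress_point(compressed: bytes) -> bytes:
    """Recover the full 04-prefixed key from a 02/03-prefixed compressed key."""
    assert compressed[0] in (0x02, 0x03) and len(compressed) == 33
    x = int.from_bytes(compressed[1:], "big")
    # Modular square root via exponentiation, valid because P % 4 == 3.
    y = pow((pow(x, 3, P) + 7) % P, (P + 1) // 4, P)
    if y % 2 != compressed[0] - 2:  # prefix 02 -> even Y, 03 -> odd Y
        y = P - y
    return b"\x04" + compressed[1:] + y.to_bytes(32, "big")
```

Round-tripping the compressed example key from this section should reproduce the original uncompressed key.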
For further guidance on the implementation of public key de/serialization, consult the [`status-go` implementation and tests](https://github.com/status-im/status-go/blob/c9772325f2dca76b3504191c53313663ca2efbe5/api/utils_test.go).

## Security Considerations

-

## Changelog

### Version 0.4

Released [June 24, 2020](https://github.com/status-im/specs/commit/e98a9b76b7d4e1ce93e0b692e1521c2d54f72c59)

- Added details of public key serialization and deserialization

### Version 0.3

Released [May 22, 2020](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)

- Added language to include Waku in all relevant places
- Change to keep `Mailserver` term consistent
- Added clarification to Open Whisper Systems

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [BIP43](https://github.com/bitcoin/bips/blob/master/bip-0043.mediawiki)
- [BIP39](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki)
- [EIP1581](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1581.md)
- [BIP44](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)
- [Open Whisper Systems](https://en.wikipedia.org/wiki/Signal_Messenger#2013%E2%80%932018:_Open_Whisper_Systems)
- [X3DH](https://signal.org/docs/specifications/x3dh/#the-x3dh-protocol)
- [web3 API](https://web3js.readthedocs.io/en/1.0/web3-shh.html#generatesymkeyfrompassword)
- [Protobuf encryption](https://github.com/status-im/status-go/messaging/chat/protobuf/encryption.pb.go)
- [gfycat in Status](https://github.com/status-im/status-mobile/tree/develop/src/status_im/utils/gfycat)
- [multiformats](https://github.com/multiformats/)
- [status-go implementation and tests](https://github.com/status-im/status-go/blob/c9772325f2dca76b3504191c53313663ca2efbe5/api/utils_test.go)
- [June 24, 2020 change commit](https://github.com/status-im/specs/commit/e98a9b76b7d4e1ce93e0b692e1521c2d54f72c59)
- [May 22, 2020 change commit](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)
---
title: CLIENT
name: Client
status: deprecated
description: This specification describes how to write a Status client for communicating with other Status clients.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Andrea Maria Piana <andreap@status.im>
- Dean Eigenmann <dean@status.im>
- Corey Petty <corey@status.im>
- Oskar Thorén <oskar@status.im>
- Samuel Hawksby-Robinson <samuel@status.im>
---
## Abstract

This specification describes how to write a Status client for communicating
with other Status clients.
This specification presents a reference implementation of the protocol
used in a command-line client and a mobile app.

This document consists of two parts.
The first outlines the specifications required to be a full Status client.
The second provides a design rationale and answers some common questions.

## Introduction

### Protocol layers

Implementing a Status client largely means implementing the following layers.
Additionally, there are separate specifications for things like key management and account lifecycle.

Other aspects, such as how a node uses IPFS for stickers or how the browser works, are currently underspecified.
These specifications facilitate the implementation of a Status client for basic private communication.

| Layer             | Purpose                        | Technology                   |
| ----------------- | ------------------------------ | ---------------------------- |
| Data and payloads | End user functionality         | 1:1, group chat, public chat |
| Data sync         | Data consistency               | MVDS                         |
| Secure transport  | Confidentiality, PFS, etc      | Double Ratchet               |
| Transport privacy | Routing, metadata protection   | Waku / Whisper               |
| P2P Overlay       | Overlay routing, NAT traversal | devp2p                       |
### Protobuf

[`protobuf`](https://developers.google.com/protocol-buffers/) is used in different layers; version `proto3` is used unless stated otherwise.

## Components

### P2P Overlay

Status clients run on a public, permissionless peer-to-peer network, as specified by the devp2p
network protocols. devp2p provides a protocol for node discovery which is in
draft mode
[here](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md). See
more on node discovery and management in the next section.

To communicate between Status nodes, the [RLPx Transport
Protocol, v5](https://github.com/ethereum/devp2p/blob/master/rlpx.md) is used, which
allows for TCP-based communication between nodes.

On top of this, RLPx-based subprotocols are run. The client
SHOULD NOT use [Whisper V6](https://eips.ethereum.org/EIPS/eip-627); the client
SHOULD use [Waku V1](/waku/standards/legacy/6/waku1.md)
for privacy-preserving messaging and efficient usage of a node's bandwidth.
#### Node discovery and roles

There are four types of node roles:

1. `Bootstrap node`
1. `Whisper/Waku relayer`
1. `Mailserver` (servers and clients)
1. `Mobile node` (Status Clients)

A standard Status client MUST implement both the `Whisper/Waku relayer` and `Mobile node` node types. The
other node types are optional, but it is RECOMMENDED to implement a `Mailserver`
client mode; otherwise the user experience is likely to be poor.

#### Bootstrapping

Bootstrap nodes allow Status nodes to discover and connect to other Status nodes
in the network.

Currently, Status Gmbh provides the main bootstrap nodes, but anyone can
run these provided they are connected to the rest of the Whisper/Waku network.

Status maintains a list of production fleet bootstrap nodes in the following locations:

**Hong Kong:**

- `enode://6e6554fb3034b211398fcd0f0082cbb6bd13619e1a7e76ba66e1809aaa0c5f1ac53c9ae79cf2fd4a7bacb10d12010899b370c75fed19b991d9c0cdd02891abad@47.75.99.169:443`
- `enode://23d0740b11919358625d79d4cac7d50a34d79e9c69e16831c5c70573757a1f5d7d884510bc595d7ee4da3c1508adf87bbc9e9260d804ef03f8c1e37f2fb2fc69@47.52.106.107:443`

**Amsterdam:**

- `enode://436cc6f674928fdc9a9f7990f2944002b685d1c37f025c1be425185b5b1f0900feaf1ccc2a6130268f9901be4a7d252f37302c8335a2c1a62736e9232691cc3a@178.128.138.128:443`
- `enode://5395aab7833f1ecb671b59bf0521cf20224fe8162fc3d2675de4ee4d5636a75ec32d13268fc184df8d1ddfa803943906882da62a4df42d4fccf6d17808156a87@178.128.140.188:443`

**Central US:**

- `enode://32ff6d88760b0947a3dee54ceff4d8d7f0b4c023c6dad34568615fcae89e26cc2753f28f12485a4116c977be937a72665116596265aa0736b53d46b27446296a@34.70.75.208:443`
- `enode://5405c509df683c962e7c9470b251bb679dd6978f82d5b469f1f6c64d11d50fbd5dd9f7801c6ad51f3b20a5f6c7ffe248cc9ab223f8bcbaeaf14bb1c0ef295fd0@35.223.215.156:443`

These bootstrap nodes MAY change; they are not guaranteed to remain available indefinitely, and circumstances might force them to change at some point.
#### Discovery

A Status client MUST discover or have a list of peers to connect to. Status uses a
light discovery mechanism based on a combination of [Discovery v5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md) and the
[Rendezvous Protocol](https://github.com/libp2p/specs/tree/master/rendezvous)
(with some [modifications](https://github.com/status-im/rendezvous#differences-with-original-rendezvous)).
Additionally, some static nodes MAY also be used.

A Status client MUST use at least one discovery method or use static nodes
to communicate with other clients.

Discovery v5 uses bootstrap nodes to discover other peers. Bootstrap nodes MUST support
the Discovery v5 protocol as well in order to provide peers. It is a Kademlia-based discovery mechanism
and might consume a significant amount of network traffic to operate (at least on mobile).

To take advantage of a simpler and more mobile-friendly peer discovery mechanism,
i.e. the Rendezvous protocol, one MUST provide a list of Rendezvous nodes which speak
the Rendezvous protocol. The Rendezvous protocol is a request-response discovery mechanism.
It uses Ethereum Node Records (ENR) to report discovered peers.

Both peer discovery mechanisms use topics to provide peers with certain capabilities.
There is no point in returning peers that do not support a particular protocol.
Status nodes that want to be discovered MUST register to Discovery v5 and/or Rendezvous
with the `whisper` topic. Status nodes that are `Mailservers` and want to
be discoverable MUST additionally register with the `whispermail` topic.

It is RECOMMENDED to use both mechanisms but at the same time implement a structure
called `PeerPool`. `PeerPool` is responsible for maintaining an optimal number of peers.
For mobile nodes, there is no significant advantage to having more than 2-3 peers and one `Mailserver`.
`PeerPool` can notify peer discovery protocol implementations that they should suspend
their execution because the optimal number of peers is found. They should resume
if the number of connected peers drops or a `Mailserver` disconnects.
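A minimal sketch of the `PeerPool` idea described above (the class shape, method names and peer threshold are illustrative assumptions, not taken from `status-go`):

```python
OPTIMAL_PEERS = 3  # illustrative: 2-3 peers plus one Mailserver is typical on mobile

class Discovery:
    """Stand-in for a peer discovery protocol that can be suspended and resumed."""
    def __init__(self):
        self.running = True

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

class PeerPool:
    """Suspend discovery once enough peers are connected; resume when the count drops."""
    def __init__(self, discovery):
        self.discovery = discovery
        self.peers = set()

    def on_peer_added(self, peer):
        self.peers.add(peer)
        if len(self.peers) >= OPTIMAL_PEERS:
            self.discovery.stop()   # optimal number of peers found: suspend

    def on_peer_dropped(self, peer):
        self.peers.discard(peer)
        if len(self.peers) < OPTIMAL_PEERS:
            self.discovery.start()  # resume discovery
```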
It is worth noting that an efficient caching strategy MAY be of great use, especially
on mobile devices. Discovered peers can be cached, as they rarely change, and used
when the client starts again. In such a case, there might be no need to even start
the peer discovery protocols, because cached peers will satisfy the optimal number of peers.

Alternatively, a client MAY rely exclusively on a list of static peers. This is the most efficient
way because no peer discovery algorithm overhead is introduced. The disadvantage
is that these peers might be gone, and without a peer discovery mechanism it won't be possible to find
new ones.

The current list of static peers is published on <https://fleets.status.im/>. `eth.prod` is the current
group of peers the official Status client uses. The others are test networks.

Finally, Waku node addresses can be retrieved by traversing
the merkle tree found at [`fleets.status.im`](https://fleets.status.im), as described in [EIP-1459](https://eips.ethereum.org/EIPS/eip-1459#client-protocol).

#### Mobile nodes

A `Mobile node` is a Whisper and/or Waku node which connects to part of the respective Whisper
and/or Waku network(s). A `Mobile node` MAY relay messages. See the next section for more details on how
to use Whisper and/or Waku to communicate with other Status nodes.

### Transport privacy and Whisper / Waku usage

Once a Whisper and/or Waku node is up and running, there are some specific settings required
to communicate with other Status nodes.

See [WHISPER-USAGE](/status/deprecated/whisper-usage.md) and [WAKU-USAGE](/status/deprecated/waku-usage.md) for more details.

For providing an offline inbox, see the complementary [WHISPER-MAILSERVER](/status/deprecated/whisper-mailserver.md) and [WAKU-MAILSERVER](/status/deprecated/waku-mailserver.md).
### Secure Transport

In order to provide confidentiality, integrity, authentication and forward
secrecy of messages, the node implements a secure transport on top of Whisper and Waku. This is
used in 1:1 chats and group chats, but not for public chats. See [SECURE-TRANSPORT](/status/deprecated/secure-transport.md) for more.

### Data Sync

[MVDS](/vac/2/mvds.md) is used for 1:1 and group chats; however, it is currently not in use for public chats.
[Status payloads](#payloads-and-clients) are serialized and then wrapped inside an
MVDS message which is added to an [MVDS payload](/vac/2/mvds.md#payloads).
The node encrypts this payload (if necessary for 1-to-1 / group-chats) and sends it using
Whisper or Waku, which also encrypts it.

### Payloads and clients

On top of the secure transport and the various types of data sync clients,
the node uses payload formats for things like 1:1 chat, group chat and public chat. These have
various degrees of standardization. Please refer to [PAYLOADS](/status/deprecated/payloads.md) for more details.

### BIPs and EIPs Standards support

For a list of EIPs and BIPs that SHOULD be supported by a Status client, please
see [EIPS](/status/deprecated/eips.md).

## Security Considerations

See [Appendix A](#appendix-a-security-considerations)

## Design Rationale

P2P Overlay
### Why devp2p? Why not use libp2p?

At the time Status developed the main Status clients, devp2p was the most
mature. However, in the future libp2p is likely to be used, as it will
provide multiple transports, better protocol negotiation, NAT traversal,
etc.

For very experimental bridge support, see the bridge between libp2p and devp2p
in [Murmur](https://github.com/status-im/murmur).

### What about other RLPx subprotocols like LES, and Swarm?

Status is primarily optimized for resource restricted devices, and at the present
time light client support for these protocols is suboptimal. This is a work in
progress.

For better Ethereum light client support, see [Re-enable LES as
option](https://github.com/status-im/status-go/issues/1025). For better Swarm
support, see [Swarm adaptive
nodes](https://github.com/ethersphere/SWIPs/pull/12).

For transaction support, Status clients currently have to rely on Infura.

Status clients currently do not offer native support for file storage.
### Why do you use Whisper?

Whisper is one of the [three parts](http://gavwood.com/dappsweb3.html) of the
vision of Ethereum as the world computer, Ethereum and Swarm being the other
two. Status was started as an encapsulation of, and a clear window to, this world
computer.

### Why do you use Waku?

Waku is a direct upgrade and replacement for Whisper; the main motivation for
developing and implementing Waku can be found in the [Waku specs](/waku/).

>Waku was created to incrementally improve in areas that Whisper is lacking in,
>with special attention to resource restricted devices. We specify the standard for
>Waku messages in order to ensure forward compatibility of different Waku clients,
>backwards compatibility with Whisper clients, as well as to allow multiple
>implementations of Waku and its capabilities. We also modify the language to be more
>unambiguous, concise and consistent.

Considerable work has gone into the active development of Ethereum; in contrast, Whisper
is not currently under active development, and it has several drawbacks. Among others:

- Whisper is very wasteful bandwidth-wise and doesn't appear to be scalable
- Proof of work is a poor spam protection mechanism for heterogeneous devices
- The privacy guarantees provided are not rigorous
- There are no incentives to run a node

Finding a more suitable transport privacy layer is an ongoing research effort,
together with [Vac](https://vac.dev/vac-overview) and other teams in the space.

### Why is PoW for Waku set so low?

A higher PoW would be desirable, but this kills the battery on mobile phones,
which are a prime target for Status clients.

This means the network is currently vulnerable to DDoS attacks. Alternative
methods of spam protection are currently being researched.

### Why do you not use Discovery v5 for node discovery?

At the time of implementing dynamic node discovery, Discovery v5 wasn't completed
yet. Additionally, running a DHT on a mobile device leads to slow node discovery, bad
battery life and poor bandwidth usage. Instead, each client can choose to turn on
Discovery v5 for a short period until the node populates its peer list.

For some further investigation, see
[here](https://github.com/status-im/swarms/blob/master/ideas/092-disc-v5-research.md).
### I heard something about `Mailservers` being trusted somehow?

In order to use a `Mailserver`, a given node needs to connect to it directly, i.e. add the `Mailserver`
as its peer and mark it as trusted.
This means that the `Mailserver` is able to send direct p2p messages to the node instead of broadcasting them.
Effectively, it knows the bloom filter of the topics the node is interested in,
when the node is online, as well as metadata like its IP address.

### Data sync

#### Why is MVDS not used for public chats?

Currently, public chats are broadcast-based, and there's no direct way of finding
out who is receiving messages. Hence there's no clear group sync state context
whereby participants can sync. Additionally, MVDS is currently not optimized for
large group contexts, which means bandwidth usage will be a lot higher than
reasonable. See [P2P Data Sync for Mobile](https://vac.dev/p2p-data-sync-for-mobile) for more.
This is an active area of research.

## Footnotes

1. <https://github.com/status-im/status-protocol-go/>
2. <https://github.com/status-im/status-console-client/>
3. <https://github.com/status-im/status-mobile/>

## Appendix A: Security considerations

There are several security considerations to take into account when running Status.
Chief among them are: scalability, DDoS-resistance and privacy.
These also vary depending on what capabilities are used, such as `Mailserver`, light node, and so on.
### Scalability and UX

**Bandwidth usage:**

In version 1 of Status, bandwidth usage is likely to be an issue.
In Status version 1.1 this is partially addressed with Waku usage; see [the theoretical scaling model](https://github.com/vacp2p/research/tree/dcc71f4779be832d3b5ece9c4e11f1f7ec24aac2/whisper_scalability).

**`Mailserver` High Availability requirement:**

A `Mailserver` has to be online to receive messages for other nodes; this puts a high availability requirement on it.

**Gossip-based routing:**

Use of gossip-based routing doesn't necessarily scale.
It means each node can see a message multiple times,
and having too many light nodes can cause the propagation probability to become too low.
See [Whisper vs PSS](https://our.status.im/whisper-pss-comparison/) for more and a possible Kademlia based alternative.

**Lack of incentives:**

Status currently lacks incentives to run nodes, which means node operators are more likely to create centralized choke points.

### Privacy

**Light node privacy:**

The main privacy concern with light nodes is that directly connected peers will know that a message originates from them (as they are the only peers the light node sends to). This means nodes can make assumptions about what messages (topics) their peers are interested in.

**Bloom filter privacy:**

A user reveals which messages they are interested in by setting only the topics they are interested in on the bloom filter.
This is a fundamental trade-off between bandwidth usage and privacy,
though the trade-off space is likely suboptimal in terms of the [Anonymity](https://eprint.iacr.org/2017/954.pdf) [trilemma](https://petsymposium.org/2019/files/hotpets/slides/coordination-helps-anonymity-slides.pdf).

**`Mailserver client` privacy:**

A `Mailserver client` has to trust a `Mailserver`, which can then send it direct traffic. This reveals what topics / bloom filter a node is interested in, along with its peerID (with IP).

**Privacy guarantees not rigorous:**

Privacy for Whisper or Waku hasn't been studied rigorously for various threat models like global passive adversary, local active attacker, etc. This is unlike e.g. Tor and mixnets.

**Topic hygiene:**

Similar to bloom filter privacy, using a very specific topic reveals more information. See the scalability model linked above.
### Spam resistance
|
||||
|
||||
**PoW bad for heterogeneous devices:**
|
||||
|
||||
Proof of work is a poor spam prevention mechanism. A mobile device can only have a very low PoW in order not to use too much CPU / burn up its phone battery. This means someone can spin up a powerful node and overwhelm the network.
|
||||
|
||||
**`Mailserver` trusted connection:**
|
||||
|
||||
A `Mailserver` has a direct TCP connection, which means they are trusted to send traffic. This means a malicious or malfunctioning `Mailserver` can overwhelm an individual node.
|
||||
|
||||
### Censorship resistance
|
||||
|
||||
**Devp2p TCP port blockable:**
|
||||
|
||||
By default Devp2p runs on port `30303`, which is not commonly used for any other service. This means it is easy to censor, e.g. airport WiFi. This can be mitigated somewhat by running on e.g. port `80` or `443`, but there are still outstanding issues. See libp2p and Tor's Pluggable Transport for how this can be improved.
|
||||
|
||||
See <https://github.com/status-im/status-mobile/issues/6351> for some discussion.

## Acknowledgments

Jacek Sieka

## Changelog

### Version 0.3

Released [May 22, 2020](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)

- Added that Waku SHOULD be used
- Added that Whisper SHOULD NOT be used
- Added language to include Waku in all relevant places
- Changed to keep the `Mailserver` term consistent

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [Protobuf](https://developers.google.com/protocol-buffers/)
- [Discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md)
- [RLPx Transport Protocol, v5](https://github.com/ethereum/devp2p/blob/master/rlpx.md)
- [Whisper V6](https://eips.ethereum.org/EIPS/eip-627)
- [Waku V1](/waku/standards/legacy/6/waku1.md)
- [Rendezvous Protocol](https://github.com/libp2p/specs/tree/master/rendezvous)
- [Rendezvous Protocol modifications](https://github.com/status-im/rendezvous#differences-with-original-rendezvous)
- [Fleets Status](https://fleets.status.im)
- [EIP-1459](https://eips.ethereum.org/EIPS/eip-1459#client-protocol)
- [WHISPER-USAGE](/status/deprecated/whisper-usage.md)
- [WAKU-USAGE](/status/deprecated/waku-usage.md)
- [WHISPER-MAILSERVER](/status/deprecated/whisper-mailserver.md)
- [WAKU-MAILSERVER](/status/deprecated/waku-mailserver.md)
- [SECURE-TRANSPORT](/status/deprecated/secure-transport.md)
- [MVDS](/vac/2/mvds.md)
- [PAYLOADS](/status/deprecated/payloads.md)
- [EIPS](/status/deprecated/eips.md)
- [Murmur](https://github.com/status-im/murmur)
- [Re-enable LES as option](https://github.com/status-im/status-go/issues/1025)
- [Swarm adaptive nodes](https://github.com/ethersphere/SWIPs/pull/12)
- [Whisper vs PSS](https://our.status.im/whisper-pss-comparison/)
- [Waku specs](/waku/)
- [Vac](https://vac.dev/vac-overview)
- [theoretical scaling model](https://github.com/vacp2p/research/tree/dcc71f4779be832d3b5ece9c4e11f1f7ec24aac2/whisper_scalability)
- [Anonymity](https://eprint.iacr.org/2017/954.pdf)
- [trilemma](https://petsymposium.org/2019/files/hotpets/slides/coordination-helps-anonymity-slides.pdf)
- [Discovery v5 research](https://github.com/status-im/swarms/blob/master/ideas/092-disc-v5-research.md)
- [P2P Data Sync for Mobile](https://vac.dev/p2p-data-sync-for-mobile)
- [Status protocol go](https://github.com/status-im/status-protocol-go/)
- [Status console client](https://github.com/status-im/status-console-client/)
- [Status mobile](https://github.com/status-im/status-mobile/)
- [Status mobile issue 6351](https://github.com/status-im/status-mobile/issues/6351)
---
title: Dapp browser API usage
name: Dapp browser API usage
status: deprecated
description: This document describes requirements that an application must fulfill in order to provide a proper environment for Dapps running inside a browser.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
---

## Abstract

This document describes requirements that an application must fulfill in order to provide a proper environment for Dapps running inside a browser.
A description of the Status Dapp API is provided, along with an overview of the bidirectional communication underlying the API implementation.
The document also includes a list of EIPs that this API implements.

## Definitions

| Term | Description |
|------------|-------------------------------------------------------------------------------------|
| **Webview** | Platform-specific browser core implementation. |
| **Ethereum Provider** | A JS object (`window.ethereum`) injected into each web page opened in the browser, providing a web3-compatible provider. |
| **Bridge** | A set of facilities allowing bidirectional communication between JS code and the application. |

## Overview

The application should expose an Ethereum Provider object (`window.ethereum`) to JS code running inside the browser.
It is important to have the `window.ethereum` object available before the page loads, otherwise Dapps might not work correctly.

Additionally, the browser component should also provide bidirectional communication between JS code and the application.

## Usage in Dapps

Dapps can use the below properties and methods of the `window.ethereum` object.

### Properties

#### `isStatus`

Returns true. Can be used by the Dapp to find out whether it's running inside Status.

#### `status`

Returns a `StatusAPI` object. For now it supports one method: `getContactCode`, which sends a `contact-code` request to Status.

### Methods

#### `isConnected`

Similarly to the Ethereum JS API [docs](https://github.com/ethereum/wiki/wiki/JavaScript-API#web3isconnected),
it should be called to check if a connection to a node exists. On Status, this function always returns true, as the node is started automatically once Status is up and running.

#### `scanQRCode`

Sends a `qr-code` Status API request.

#### `request`

`request` method as defined by [EIP1193](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1193.md).

### Unused

Below are some legacy methods that some Dapps might still use.

#### `enable` (DEPRECATED)

Sends a `web3` Status API request. It returns the first entry in the list of available accounts.

Legacy `enable` method as defined by [EIP1102](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1102.md).

#### `send` (DEPRECATED)

Legacy `send` method as defined by [EIP1193](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1193.md).

#### `sendAsync` (DEPRECATED)

Legacy `sendAsync` method as defined by [EIP1193](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1193.md).

#### `sendSync` (DEPRECATED)

Legacy synchronous `send` method.

## Implementation

Status uses a [forked version](https://github.com/status-im/react-native-webview) of [react-native-webview](https://github.com/react-native-community/react-native-webview) to display web or Dapp content.
The fork provides an Android implementation of JS injection before page load.
It is required in order to properly inject the Ethereum Provider object.

Status injects two JS scripts:

- [provider.js](https://github.com/status-im/status-mobile/blob/develop/resources/js/provider.js): `window.ethereum` object
- [webview.js](https://github.com/status-im/status-mobile/blob/develop/resources/js/webview.js): override for `history.pushState` used internally

Dapps running inside a browser communicate with the Status Ethereum node by means of a *bridge* provided by the react-native-webview library.
The bridge allows for bidirectional communication between the browser and Status. In order to do so, it injects a special `ReactNativeWebView` object into each page it loads.

On the Status (React Native) end, the `react-native-webview` library provides a `WebView.injectJavascript` function
on a webview component that allows executing arbitrary code inside the webview.
Thus it is possible to inject a function call passing the Status node response back to the Dapp.

Below is a table briefly describing which functions/properties are used. More details are available in the package [docs](https://github.com/react-native-community/react-native-webview/blob/master/docs/Guide.md#communicating-between-js-and-native).

| Direction | Side | Method |
|-----------|------|-----------|
| Browser->Status | JS | `ReactNativeWebView.postMessage()`|
| Browser->Status | RN | `WebView.onMessage()`|
| Status->Browser | JS | `ReactNativeWebView.onMessage()`|
| Status->Browser | RN | `WebView.injectJavascript()`|

## Compatibility

Status browser supports the following EIPs:

- [EIP1102](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1102.md): `eth_requestAccounts` support
- [EIP1193](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1193.md): supported, except that the `connect`, `disconnect`, `chainChanged`, and `accountsChanged` events are not implemented

## Changelog

| Version | Comment |
| :-----: | ------- |
| 0.1.0 | Initial Release |

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [Ethereum JS API docs](https://github.com/ethereum/wiki/wiki/JavaScript-API#web3isconnected)
- [EIP1102](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1102.md)
- [EIP1193](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1193.md)
- [forked version](https://github.com/status-im/react-native-webview)
- [react-native-webview](https://github.com/react-native-community/react-native-webview)
- [provider.js](https://github.com/status-im/status-mobile/blob/develop/resources/js/provider.js)
- [webview.js](https://github.com/status-im/status-mobile/blob/develop/resources/js/webview.js)
- [docs](https://github.com/react-native-community/react-native-webview/blob/master/docs/Guide.md#communicating-between-js-and-native)
---
title: EIPS
name: EIPS
status: deprecated
description: Status relation with the EIPs
editor: Ricardo Guilherme Schmidt <ricardo3@status.im>
contributors:
-
---

## Abstract

This specification describes how Status relates to EIPs.

## Introduction

Status should follow standards as much as possible.
Whenever the Status app needs a feature, it should first be checked whether a standard exists for it;
if not, Status should propose a standard.

### Support table

| | Status v0 | Status v1 | Other | State |
|----------|-----------|-----------|----------| -------- |
| BIP32 | N | Y | N | `stable` |
| BIP39 | Y | Y | Y | `stable` |
| BIP43 | N | Y | N | `stable` |
| BIP44 | N | Y | N | `stable` |
| EIP20 | Y | Y | Y | `stable` |
| EIP55 | Y | Y | Y | `stable` |
| EIP67 | P | P | N | `stable` |
| EIP137 | P | P | N | `stable` |
| EIP155 | Y | Y | Y | `stable` |
| EIP165 | P | N | N | `stable` |
| EIP181 | P | N | N | `stable` |
| EIP191 | Y? | N | Y | `stable` |
| EIP627 | Y | Y | N | `stable` |
| EIP681 | Y | N | Y | `stable` |
| EIP712 | P | P | Y | `stable` |
| EIP721 | P | P | Y | `stable` |
| EIP831 | N | Y | N | `stable` |
| EIP945 | Y | Y | N | `stable` |
| EIP1102 | Y | Y | Y | `stable` |
| EIP1193 | Y | Y | Y | `stable` |
| EIP1577 | Y | P | N | `stable` |
| EIP1581 | N | Y | N | `stable` |
| EIP1459 | N | | N | `raw` |

## Components

### BIP32 - Hierarchical Deterministic Wallets

Support: Dependency.
[Reference](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki)
Description: Enables wallets to derive multiple private keys from the same seed.
Used for: Dependency of BIP39 and BIP43.

### BIP39 - Mnemonic code for generating deterministic keys

Support: Dependency.
[Reference](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki)
Description: Enables wallets to create a private key based on a safe seed phrase.
Used for: Security and user experience.

### BIP43 - Purpose Field for Deterministic Wallets

Support: Dependency.
[Reference](https://github.com/bitcoin/bips/blob/master/bip-0043.mediawiki)
Description: Enables wallets to create private keys branched for a specific purpose.
Used for: Dependency of BIP44; uses the "ethereum" coin.

### BIP44 - Multi-Account Hierarchy for Deterministic Wallets

Support: Dependency.
[Reference](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)
Description: Enables wallets to derive multiple accounts on top of BIP39.
Used for: Privacy.
[Source code](https://github.com/status-im/status-mobile/blob/develop/src/status_im/constants.cljs#L240)
Observation: BIP44 doesn't solve privacy issues arising from the transparency of transactions: addresses directly connected through transactions can be identified by a "network reconnaissance attack" over the transaction history. This attack, together with information leaked from centralized services such as exchanges, would be fatal to user privacy regardless of BIP44.
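The multi-account hierarchy boils down to a derivation path of the form `m / purpose' / coin_type' / account' / change / address_index`. A minimal sketch of assembling such a path (coin type `60'` is Ethereum per SLIP-44):

```go
package main

import "fmt"

// bip44Path builds a BIP43/BIP44 derivation path:
// m / purpose' / coin_type' / account' / change / address_index.
// Coin type 60 is Ethereum per SLIP-44.
func bip44Path(account, change, index uint32) string {
	return fmt.Sprintf("m/44'/60'/%d'/%d/%d", account, change, index)
}

func main() {
	fmt.Println(bip44Path(0, 0, 0)) // m/44'/60'/0'/0/0, the default first account
}
```

Deriving a fresh `address_index` per counterparty is what gives the (limited) privacy benefit noted above.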

### EIP20 - Fungible Token

Support: Full.
[Reference](https://eips.ethereum.org/EIPS/eip-20)
Description: Enables wallets to use tokens based on smart contracts compliant with this standard.
Used for: Wallet feature.
[Sourcecode](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/tokens.cljs)

### EIP55 - Mixed-case checksum address encoding

Support: Full.
[Reference](https://eips.ethereum.org/EIPS/eip-55)
Description: Checksum standard that uses lowercase and uppercase characters inside the address hex value.
Used for: Sanity check of forms using an Ethereum address.
[Related](https://github.com/status-im/status-mobile/issues/4959) [Also](https://github.com/status-im/status-mobile/issues/8707)
[Sourcecode](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/eip55.cljs)

### EIP67 - Standard URI scheme with metadata, value and byte code

Support: Partial.
[Reference](https://github.com/ethereum/EIPs/issues/67)
Description: A standard way of creating Ethereum URIs for various use-cases.
Used for: Legacy support.
[Issue](https://github.com/status-im/status-mobile/issues/875)

### EIP137 - Ethereum Domain Name Service - Specification

Support: Partial.
[Reference](https://eips.ethereum.org/EIPS/eip-137)
Description: Enables wallets to look up ENS names.
Used for: User experience, as a wallet and identity feature (usernames).
[Sourcecode](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/ens.cljs#L86)

### EIP155 - Simple replay attack protection

Support: Full.
[Reference](https://eips.ethereum.org/EIPS/eip-155)
Description: Defines the chainId parameter in the signed Ethereum transaction payload.
Used for: Signing transactions; crucial to users' safety against replay attacks.
[Sourcecode](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/core.cljs)
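The replay protection works by folding the chain ID into the signature's `v` value: per EIP-155, `v = CHAIN_ID * 2 + 35` or `+ 36` depending on the parity of the signature's curve point, so a transaction signed for one chain fails validation on any other. A minimal sketch:

```go
package main

import "fmt"

// eip155V computes the v value of a transaction signature per EIP-155:
// v = chainID*2 + 35 + yParity, where yParity is 0 or 1.
// Verifiers on a different chain recover a different chainID and reject
// the transaction, preventing cross-chain replay.
func eip155V(chainID, yParity uint64) uint64 {
	return chainID*2 + 35 + yParity
}

func main() {
	// Mainnet (chain ID 1) yields v = 37 or 38.
	fmt.Println(eip155V(1, 0), eip155V(1, 1))
}
```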

### EIP165 - Standard Interface Detection

Support: Dependency/Partial.
[Reference](https://eips.ethereum.org/EIPS/eip-165)
Description: Standard interface for a contract to answer whether it supports other interfaces.
Used for: Dependency of ENS and EIP721.
[Sourcecode](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/eip165.cljs)

### EIP181 - ENS support for reverse resolution of Ethereum addresses

Support: Partial.
[Reference](https://eips.ethereum.org/EIPS/eip-181)
Description: Enables wallets to render reverse resolution of Ethereum addresses.
Used for: Wallet feature.
[Sourcecode](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/ens.cljs#L86)

### EIP191 - Signed Message

Support: Full.
[Reference](https://eips.ethereum.org/EIPS/eip-191)
Description: Contract signature standard; adds an obligatory padding to the signed message to differentiate it from Ethereum transaction messages.
Used for: Dapp support, security; dependency of ERC712.

### EIP627 - Whisper Specification

Support: Full.
[Reference](https://eips.ethereum.org/EIPS/eip-627)
Description: Format of Whisper messages within the ÐΞVp2p Wire Protocol.
Used for: Chat protocol.

### EIP681 - URL Format for Transaction Requests

Support: Partial.
[Reference](https://eips.ethereum.org/EIPS/eip-681)
Description: A link that pops up a transaction in the wallet.
Used for: QR code data for transaction requests, chat transaction requests, and Dapp links to transaction requests.
[Sourcecode](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/eip681.cljs)
Related: [Issue #9183: URL Format for Transaction Requests (EIP681) is poorly supported](https://github.com/status-im/status-mobile/issues/9183) [Issue #9240](https://github.com/status-im/status-mobile/pull/9240) [Issue #9238](https://github.com/status-im/status-mobile/issues/9238) [Issue #7214](https://github.com/status-im/status-mobile/issues/7214) [Issue #7325](https://github.com/status-im/status-mobile/issues/7325) [Issue #8150](https://github.com/status-im/status-mobile/issues/8150)

### EIP712 - Typed Signed Message

Support: Partial.
[Reference](https://eips.ethereum.org/EIPS/eip-712)
Description: Standardizes types for contract signatures, allowing users to easily inspect what's being signed.
Used for: User experience, security.
Related: [Issue #5461](https://github.com/status-im/status-mobile/issues/5461) [Commit](https://github.com/status-im/status-mobile/commit/ba37f7b8d029d3358c7b284f6a2383b9ef9526c9)

### EIP721 - Non Fungible Token

Support: Partial.
[Reference](https://eips.ethereum.org/EIPS/eip-721)
Description: Enables wallets to use tokens based on smart contracts compliant with this standard.
Used for: Wallet feature.
Related: [Issue #8909](https://github.com/status-im/status-mobile/issues/8909)
[Sourcecode](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/erc721.cljs) [Sourcecode](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/tokens.cljs)

### EIP945 - Web 3 QR Code Scanning API

Support: Full.
[Reference](https://github.com/ethereum/EIPs/issues/945)
Used for: Sharing contact code, reading transaction requests.
Related: [Issue #5870](https://github.com/status-im/status-mobile/issues/5870)

### EIP1102 - Opt-in account exposure

Support: Full.
[Reference](https://eips.ethereum.org/EIPS/eip-1102)
Description: Allows users to opt in to the exposure of their Ethereum address to Dapps they browse.
Used for: Privacy, DApp support.
Related: [Issue #7985](https://github.com/status-im/status-mobile/issues/7985)

### EIP1193 - Ethereum Provider JavaScript API

Support: Full.
[Reference](https://eips.ethereum.org/EIPS/eip-1193)
Description: Allows Dapps to recognize event changes in the wallet.
Used for: DApp support.
Related: [Issue #7246](https://github.com/status-im/status-mobile/pull/7246)

### EIP1577 - contenthash field for ENS

Support: Partial.
[Reference](https://eips.ethereum.org/EIPS/eip-1577)
Description: Allows users to browse ENS domains using the contenthash standard.
Used for: Browser, DApp support.
Related: [Issue #6688](https://github.com/status-im/status-mobile/issues/6688)
[Sourcecode](https://github.com/status-im/status-mobile/blob/develop/src/status_im/utils/contenthash.cljs) [Sourcecode](https://github.com/status-im/status-mobile/blob/develop/test/cljs/status_im/test/utils/contenthash.cljs#L5)

### EIP1581 - Non-wallet usage of keys derived from BIP-32 trees

Support: Partial.
[Reference](https://eips.ethereum.org/EIPS/eip-1581)
Description: Allows wallets to derive keys that are less sensitive (non-wallet).
Used for: Security (don't reuse the wallet key) and user experience (don't request the keycard on every login).
Related: [Issue #9088](https://github.com/status-im/status-mobile/issues/9088) [Issue #9096](https://github.com/status-im/status-mobile/pull/9096)
[Sourcecode](https://github.com/status-im/status-mobile/blob/develop/src/status_im/constants.cljs#L242)

### EIP1459 - Node Discovery via DNS

Support: -
[Reference](https://eips.ethereum.org/EIPS/eip-1459)
Description: Allows storing and retrieving nodes through Merkle trees stored in TXT records of a domain.
Used for: Finding Waku nodes.
Related: -
Sourcecode: -

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [BIP32 - Hierarchical Deterministic Wallets](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki)
- [BIP39 - Mnemonic code for generating deterministic keys](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki)
- [BIP43 - Purpose Field for Deterministic Wallets](https://github.com/bitcoin/bips/blob/master/bip-0043.mediawiki)
- [BIP44 - Multi-Account Hierarchy for Deterministic Wallets](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)
- [BIP44 Source Code](https://github.com/status-im/status-mobile/blob/develop/src/status_im/constants.cljs#L240)
- [EIP20 - Fungible Token](https://eips.ethereum.org/EIPS/eip-20)
- [EIP20 Source Code](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/tokens.cljs)
- [EIP55 - Mixed-case checksum address encoding](https://eips.ethereum.org/EIPS/eip-55)
- [EIP55 Related Issue 4959](https://github.com/status-im/status-mobile/issues/4959)
- [EIP55 Related Issue 8707](https://github.com/status-im/status-mobile/issues/8707)
- [EIP55 Source Code](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/eip55.cljs)
- [EIP67 - Standard URI scheme with metadata, value and byte code](https://github.com/ethereum/EIPs/issues/67)
- [EIP67 Related Issue 875](https://github.com/status-im/status-mobile/issues/875)
- [EIP137 - Ethereum Domain Name Service - Specification](https://eips.ethereum.org/EIPS/eip-137)
- [EIP137 Source Code](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/ens.cljs#L86)
- [EIP155 - Simple replay attack protection](https://eips.ethereum.org/EIPS/eip-155)
- [EIP155 Source Code](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/core.cljs)
- [EIP165 - Standard Interface Detection](https://eips.ethereum.org/EIPS/eip-165)
- [EIP165 Source Code](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/eip165.cljs)
- [EIP181 - ENS support for reverse resolution of Ethereum addresses](https://eips.ethereum.org/EIPS/eip-181)
- [EIP181 Source Code](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/ens.cljs#L86)
- [EIP191 - Signed Message](https://eips.ethereum.org/EIPS/eip-191)
- [EIP627 - Whisper Specification](https://eips.ethereum.org/EIPS/eip-627)
- [EIP681 - URL Format for Transaction Requests](https://eips.ethereum.org/EIPS/eip-681)
- [EIP681 Source Code](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/eip681.cljs)
- [EIP681 Related Issue 9183](https://github.com/status-im/status-mobile/issues/9183)
- [EIP681 Related Issue 9240](https://github.com/status-im/status-mobile/pull/9240)
- [EIP681 Related Issue 9238](https://github.com/status-im/status-mobile/issues/9238)
- [EIP681 Related Issue 7214](https://github.com/status-im/status-mobile/issues/7214)
- [EIP681 Related Issue 7325](https://github.com/status-im/status-mobile/issues/7325)
- [EIP681 Related Issue 8150](https://github.com/status-im/status-mobile/issues/8150)
- [EIP712 - Typed Signed Message](https://eips.ethereum.org/EIPS/eip-712)
- [EIP712 Related Issue 5461](https://github.com/status-im/status-mobile/issues/5461)
- [EIP712 Related Commit](https://github.com/status-im/status-mobile/commit/ba37f7b8d029d3358c7b284f6a2383b9ef9526c9)
- [EIP721 - Non Fungible Token](https://eips.ethereum.org/EIPS/eip-721)
- [EIP721 Related Issue 8909](https://github.com/status-im/status-mobile/issues/8909)
- [EIP721 Source Code](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/erc721.cljs)
- [EIP721 Source Code (Tokens)](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/tokens.cljs)
- [EIP945 - Web 3 QR Code Scanning API](https://github.com/ethereum/EIPs/issues/945)
- [EIP945 Related Issue 5870](https://github.com/status-im/status-mobile/issues/5870)
- [EIP1102 - Opt-in account exposure](https://eips.ethereum.org/EIPS/eip-1102)
- [EIP1102 Related Issue 7985](https://github.com/status-im/status-mobile/issues/7985)
- [EIP1193 - Ethereum Provider JavaScript API](https://eips.ethereum.org/EIPS/eip-1193)
- [EIP1193 Related Issue 7246](https://github.com/status-im/status-mobile/pull/7246)
- [EIP1577 - contenthash field for ENS](https://eips.ethereum.org/EIPS/eip-1577)
- [EIP1577 Related Issue 6688](https://github.com/status-im/status-mobile/issues/6688)
- [EIP1577 Source Code](https://github.com/status-im/status-mobile/blob/develop/src/status_im/utils/contenthash.cljs)
- [EIP1577 Test Source Code](https://github.com/status-im/status-mobile/blob/develop/test/cljs/status_im/test/utils/contenthash.cljs#L5)
- [EIP1581 - Non-wallet usage of keys derived from BIP-32 trees](https://eips.ethereum.org/EIPS/eip-1581)
- [EIP1581 Related Issue 9088](https://github.com/status-im/status-mobile/issues/9088)
- [EIP1581 Related Issue 9096](https://github.com/status-im/status-mobile/pull/9096)
- [EIP1581 Source Code](https://github.com/status-im/status-mobile/blob/develop/src/status_im/constants.cljs#L242)
- [EIP1459 - Node Discovery via DNS](https://eips.ethereum.org/EIPS/eip-1459)
---
title: ETHEREUM-USAGE
name: Status interactions with the Ethereum blockchain
status: deprecated
description: All interactions that the Status client has with the Ethereum blockchain.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Andrea Maria Piana <andreap@status.im>
---

## Abstract

This specification documents all the interactions that the Status client has
with the [Ethereum](https://ethereum.org/developers/) blockchain.

## Background

All the interactions are made through [JSON-RPC](https://github.com/ethereum/wiki/wiki/JSON-RPC).
Currently [Infura](https://infura.io/) is used.
The client assumes high availability,
otherwise it will not be able to interact with the Ethereum blockchain.
Status nodes rely on these Infura nodes
to validate the integrity of transactions and report a consistent history.

Key handling is described [here](/status/deprecated/account.md)

1. [Wallet](#wallet)
2. [ENS](#ens)

## Wallet

The wallet in Status has two main components:

1) Sending transactions
2) Fetching balance

The sections below describe the `RPC` calls made by the nodes, with a brief
description of their functionality and how Status uses them.

1. [Sending transactions](#sending-transactions)
    - [EstimateGas](#estimategas)
    - [PendingNonceAt](#pendingnonceat)
    - [SuggestGasPrice](#suggestgasprice)
    - [SendTransaction](#sendtransaction)

2. [Fetching balance](#fetching-balance)
    - [BlockByHash](#blockbyhash)
    - [BlockByNumber](#blockbynumber)
    - [FilterLogs](#filterlogs)
    - [HeaderByNumber](#headerbynumber)
    - [NonceAt](#nonceat)
    - [TransactionByHash](#transactionbyhash)
    - [TransactionReceipt](#transactionreceipt)

### Sending transactions

#### EstimateGas

`EstimateGas` tries to estimate the gas needed to execute a specific transaction
based on the current pending state of the backend blockchain.
There is no guarantee that this is the true gas limit requirement,
as other transactions may be added or removed by miners,
but it should provide a basis for setting a reasonable default.

```go
func (ec *Client) EstimateGas(ctx context.Context, msg ethereum.CallMsg) (uint64, error)
```

[L499](https://github.com/ethereum/go-ethereum/blob/26d271dfbba1367326dec38068f9df828d462c61/ethclient/ethclient.go#L499)

#### PendingNonceAt

`PendingNonceAt` returns the account nonce of the given account in the pending state.
This is the nonce that should be used for the next transaction.

```go
func (ec *Client) PendingNonceAt(ctx context.Context, account common.Address) (uint64, error)
```

[L440](https://github.com/ethereum/go-ethereum/blob/26d271dfbba1367326dec38068f9df828d462c61/ethclient/ethclient.go#L440)

#### SuggestGasPrice

`SuggestGasPrice` retrieves the currently suggested gas price to allow a timely
execution of a transaction.

```go
func (ec *Client) SuggestGasPrice(ctx context.Context) (*big.Int, error)
```

[L487](https://github.com/ethereum/go-ethereum/blob/26d271dfbba1367326dec38068f9df828d462c61/ethclient/ethclient.go#L487)

#### SendTransaction

`SendTransaction` injects a signed transaction into the pending pool for execution.

If the transaction was a contract creation, use the TransactionReceipt method to get the
contract address after the transaction has been mined.

```go
func (ec *Client) SendTransaction(ctx context.Context, tx *types.Transaction) error
```

[L512](https://github.com/ethereum/go-ethereum/blob/26d271dfbba1367326dec38068f9df828d462c61/ethclient/ethclient.go#L512)
|
||||
|
||||
### Fetching balance

A Status node fetches the current and historical [ERC-20](https://eips.ethereum.org/EIPS/eip-20) and ETH balance for the user wallet address.
Collectibles following the [ERC-721](https://eips.ethereum.org/EIPS/eip-721) standard are also fetched if enabled.

A Status node supports the following [tokens](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/tokens.cljs) by default. Custom tokens can be added by specifying the `address`, `symbol` and `decimals`.
#### BlockByHash

`BlockByHash` returns the given full block.

Status uses it to fetch a given block, which is then inspected
for transfers to the user address, both tokens and ETH.

```go
func (ec *Client) BlockByHash(ctx context.Context, hash common.Hash) (*types.Block, error)
```

[L78](https://github.com/ethereum/go-ethereum/blob/26d271dfbba1367326dec38068f9df828d462c61/ethclient/ethclient.go#L78)

#### BlockByNumber

`BlockByNumber` returns a block from the current canonical chain. If number is nil, the
latest known block is returned.

```go
func (ec *Client) BlockByNumber(ctx context.Context, number *big.Int) (*types.Block, error)
```

[L82](https://github.com/ethereum/go-ethereum/blob/26d271dfbba1367326dec38068f9df828d462c61/ethclient/ethclient.go#L82)

#### FilterLogs

`FilterLogs` executes a filter query.

Status uses this function to filter logs, using the hash of the block
and the address of interest, for both inbound and outbound transfers.

```go
func (ec *Client) FilterLogs(ctx context.Context, q ethereum.FilterQuery) ([]types.Log, error)
```

[L377](https://github.com/ethereum/go-ethereum/blob/26d271dfbba1367326dec38068f9df828d462c61/ethclient/ethclient.go#L377)

#### NonceAt

`NonceAt` returns the account nonce of the given account.

```go
func (ec *Client) NonceAt(ctx context.Context, account common.Address, blockNumber *big.Int) (uint64, error)
```

[L366](https://github.com/ethereum/go-ethereum/blob/26d271dfbba1367326dec38068f9df828d462c61/ethclient/ethclient.go#L366)

#### TransactionByHash

`TransactionByHash` returns the transaction with the given hash.
Status uses it to inspect transactions made or received by the user.

```go
func (ec *Client) TransactionByHash(ctx context.Context, hash common.Hash) (tx *types.Transaction, isPending bool, err error)
```

[L202](https://github.com/ethereum/go-ethereum/blob/26d271dfbba1367326dec38068f9df828d462c61/ethclient/ethclient.go#L202)

#### HeaderByNumber

`HeaderByNumber` returns a block header from the current canonical chain.

```go
func (ec *Client) HeaderByNumber(ctx context.Context, number *big.Int) (*types.Header, error)
```

[L172](https://github.com/ethereum/go-ethereum/blob/26d271dfbba1367326dec38068f9df828d462c61/ethclient/ethclient.go#L172)

#### TransactionReceipt

`TransactionReceipt` returns the receipt of a transaction by transaction hash.
Status uses it to check whether a token transfer was made to the user address.

```go
func (ec *Client) TransactionReceipt(ctx context.Context, txHash common.Hash) (*types.Receipt, error)
```

[L270](https://github.com/ethereum/go-ethereum/blob/26d271dfbba1367326dec38068f9df828d462c61/ethclient/ethclient.go#L270)
## ENS

All interactions with `ENS` are made through the [ENS contract](https://github.com/ensdomains/ens).

A `stateofus.eth` username can be registered through these [contracts](https://github.com/status-im/ens-usernames).

### Registering, releasing and updating

- [Registering a username](https://github.com/status-im/ens-usernames/blob/77d9394d21a5b6213902473b7a16d62a41d9cd09/contracts/registry/UsernameRegistrar.sol#L113)
- [Releasing a username](https://github.com/status-im/ens-usernames/blob/77d9394d21a5b6213902473b7a16d62a41d9cd09/contracts/registry/UsernameRegistrar.sol#L131)
- [Updating a username](https://github.com/status-im/ens-usernames/blob/77d9394d21a5b6213902473b7a16d62a41d9cd09/contracts/registry/UsernameRegistrar.sol#L174)

### Slashing

Usernames MUST be in a specific format, otherwise they MAY be slashed:

- They MUST only contain alphanumeric characters
- They MUST NOT be in the form `0x[0-9a-f]{5}.*` and have more than 12 characters
- They MUST NOT be in the [reserved list](https://github.com/status-im/ens-usernames/blob/47c4c6c2058be0d80b7d678e611e166659414a3b/config/ens-usernames/reservedNames.js)
- They MUST NOT be too short; the minimum length is set dynamically and can be checked against the [contract](https://github.com/status-im/ens-usernames/blob/master/contracts/registry/UsernameRegistrar.sol#L26)

The slashing functions are:

- [Slash a reserved username](https://github.com/status-im/ens-usernames/blob/77d9394d21a5b6213902473b7a16d62a41d9cd09/contracts/registry/UsernameRegistrar.sol#L237)
- [Slash an invalid username](https://github.com/status-im/ens-usernames/blob/77d9394d21a5b6213902473b7a16d62a41d9cd09/contracts/registry/UsernameRegistrar.sol#L261)
- [Slash a username too similar to an address](https://github.com/status-im/ens-usernames/blob/77d9394d21a5b6213902473b7a16d62a41d9cd09/contracts/registry/UsernameRegistrar.sol#L215)
- [Slash a username that is too short](https://github.com/status-im/ens-usernames/blob/77d9394d21a5b6213902473b7a16d62a41d9cd09/contracts/registry/UsernameRegistrar.sol#L200)
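The format rules above can be checked client-side before attempting a registration. A minimal sketch, assuming lowercase names and using a placeholder reserved list and minimum length rather than the contract's actual values:

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	// ENS-style names are lowercase; this is an assumption of the sketch.
	alphanumeric = regexp.MustCompile(`^[a-z0-9]+$`)
	// Matches the slashable `0x[0-9a-f]{5}.*` address-like prefix.
	addressLike = regexp.MustCompile(`^0x[0-9a-f]{5}`)
)

// slashable is a hypothetical helper mirroring the four rules above.
func slashable(username string, reserved map[string]bool, minLength int) bool {
	switch {
	case !alphanumeric.MatchString(username):
		return true // contains non-alphanumeric characters
	case len(username) > 12 && addressLike.MatchString(username):
		return true // too similar to an Ethereum address
	case reserved[username]:
		return true // on the reserved list
	case len(username) < minLength:
		return true // too short
	}
	return false
}

func main() {
	reserved := map[string]bool{"admin": true}
	fmt.Println(slashable("alice", reserved, 3))           // false
	fmt.Println(slashable("0xdeadbeefcafe1", reserved, 3)) // true
}
```

A passing check here is only advisory; the contract's own validation is authoritative.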
ENS names are propagated through the `ChatMessage` and `ContactUpdate` [payloads](/status/deprecated/payloads.md).
On receiving a message, a client SHOULD verify the ENS name against the public key of the sender using the [ENS contract](https://github.com/ensdomains/ens).
## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [Ethereum Developers](https://ethereum.org/developers/)
- [JSON-RPC](https://github.com/ethereum/wiki/wiki/JSON-RPC)
- [Infura](https://infura.io/)
- [Key Handling](/status/deprecated/account.md)
- [ERC-20 Token Standard](https://eips.ethereum.org/EIPS/eip-20)
- [ERC-721 Non-Fungible Token Standard](https://eips.ethereum.org/EIPS/eip-721)
- [Supported Tokens Source Code](https://github.com/status-im/status-mobile/blob/develop/src/status_im/ethereum/tokens.cljs)
- [go-ethereum](https://github.com/ethereum/go-ethereum/)
- [ENS Contract](https://github.com/ensdomains/ens)
---
title: GROUP-CHAT
name: Group Chat
status: deprecated
description: This document describes the group chat protocol used by the Status application.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Andrea Piana <andreap@status.im>
---

## Abstract

This document describes the group chat protocol used by the Status application.
The node uses pairwise encryption among members, so a message is exchanged
between each pair of participants, similarly to a one-to-one message.

## Membership updates

The node uses membership update messages to propagate group chat membership changes.
The protobuf format is described in [PAYLOADS](/status/deprecated/payloads.md).
Each specific field is described below.

The protobuf messages are:
```protobuf
// MembershipUpdateMessage is a message used to propagate information
// about group membership changes.
message MembershipUpdateMessage {
  // The chat id of the private group chat
  string chat_id = 1;
  // A list of events for this group chat; the first 65 bytes are the signature,
  // followed by a protobuf-encoded MembershipUpdateEvent
  repeated bytes events = 2;
  // An optional chat message
  ChatMessage message = 3;
}

message MembershipUpdateEvent {
  // Lamport timestamp of the event as described in [Status Payload Specs](status-payload-specs.md#clock-vs-timestamp-and-message-ordering)
  uint64 clock = 1;
  // List of public keys of the targets of the action
  repeated string members = 2;
  // Name of the chat for the CHAT_CREATED/NAME_CHANGED event types
  string name = 3;
  // The type of the event
  EventType type = 4;

  enum EventType {
    UNKNOWN = 0;
    CHAT_CREATED = 1;   // See [CHAT_CREATED](#chat-created)
    NAME_CHANGED = 2;   // See [NAME_CHANGED](#name-changed)
    MEMBERS_ADDED = 3;  // See [MEMBERS_ADDED](#members-added)
    MEMBER_JOINED = 4;  // See [MEMBER_JOINED](#member-joined)
    MEMBER_REMOVED = 5; // See [MEMBER_REMOVED](#member-removed)
    ADMINS_ADDED = 6;   // See [ADMINS_ADDED](#admins-added)
    ADMIN_REMOVED = 7;  // See [ADMIN_REMOVED](#admin-removed)
  }
}
```
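Since each entry in `events` is a fixed 65-byte signature followed by the protobuf-encoded event, parsing starts by splitting the raw bytes at that boundary. A minimal sketch (the helper name is hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

// Each entry in `events` carries a 65-byte signature followed by the
// protobuf-encoded MembershipUpdateEvent.
const signatureLength = 65

// splitEvent separates the signature from the event payload.
func splitEvent(raw []byte) (signature, payload []byte, err error) {
	if len(raw) <= signatureLength {
		return nil, nil, errors.New("event too short: no payload after signature")
	}
	return raw[:signatureLength], raw[signatureLength:], nil
}

func main() {
	// 65 zero bytes standing in for a signature, then two payload bytes.
	raw := append(make([]byte, signatureLength), 0x08, 0x01)
	sig, payload, err := splitEvent(raw)
	fmt.Println(len(sig), len(payload), err) // 65 2 <nil>
}
```

The payload half would then be decoded with the generated protobuf code, and the signature half verified as described under [Signature](#signature).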
### Payload

`MembershipUpdateMessage`:

| Field | Name | Type | Description |
| ----- | ---- | ---- | ---- |
| 1 | chat-id | `string` | The chat id of the chat where the change is to take place |
| 2 | events | See details | A list of events that describe the membership changes, in their encoded protobuf form |
| 3 | message | `ChatMessage` | An optional message, described in [Message](/status/deprecated/payloads.md/#message) |

`MembershipUpdateEvent`:

| Field | Name | Type | Description |
| ----- | ---- | ---- | ---- |
| 1 | clock | `uint64` | The clock value of the event |
| 2 | members | `[]string` | An optional list of hex-encoded (prefixed with `0x`) public keys, the targets of the action |
| 3 | name | `string` | An optional name, for those events that make use of it |
| 4 | type | `EventType` | The type of event sent, described below |
### Chat ID

Each membership update MUST be sent with a corresponding `chatId`.
This chat ID MUST be a string consisting of a [UUID](https://tools.ietf.org/html/rfc4122)
concatenated with the hex-encoded public key of the creator of the chat, joined by `-`.
All clients MUST validate the `chatId` and MUST discard the update if it does not follow these rules.
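The format can be checked with a single pattern. A minimal sketch, assuming the public key is an uncompressed secp256k1 key hex-encoded with an `0x04` prefix (an assumption of this sketch, not stated by the spec):

```go
package main

import (
	"fmt"
	"regexp"
)

// chatIDPattern: a lowercase RFC 4122 UUID, then `-`, then the
// assumed 0x04-prefixed 130-hex-char uncompressed public key.
var chatIDPattern = regexp.MustCompile(
	`^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}-0x04[0-9a-f]{128}$`)

// validChatID is a hypothetical helper for the rule above.
func validChatID(id string) bool {
	return chatIDPattern.MatchString(id)
}

func main() {
	fmt.Println(validChatID("not-a-chat-id")) // false
}
```

An update whose `chatId` fails this check would be discarded, per the MUST above.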
### Signature

The node calculates the signature for each event by encoding the `MembershipUpdateEvent`
in its protobuf representation and prepending the bytes of the chat ID.
The node then signs the `Keccak256` hash of these bytes with the author's private key
and adds the result to the `events` field of `MembershipUpdateMessage`.

### Group membership event

Any group membership event received MUST be verified by calculating the signature as described above
and extracting the author from it. If the verification fails, the event MUST be discarded.
#### CHAT_CREATED

The `chat created` event is the first event that needs to be sent.
Any event with a clock value lower than this MUST be discarded.
Upon receiving this event a client MUST validate the `chatId`
provided with the updates and create a chat identified by `chatId` and named `name`.
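The clock rule above amounts to filtering out any event older than the `CHAT_CREATED` event. A minimal sketch, using a stand-in struct for the protobuf `MembershipUpdateEvent`:

```go
package main

import "fmt"

// Event is a stand-in for the protobuf MembershipUpdateEvent.
type Event struct {
	Clock uint64
	Type  string
}

// filterByCreation drops events whose Lamport clock predates CHAT_CREATED.
func filterByCreation(events []Event, chatCreatedClock uint64) []Event {
	var kept []Event
	for _, e := range events {
		if e.Clock >= chatCreatedClock {
			kept = append(kept, e)
		}
	}
	return kept
}

func main() {
	events := []Event{
		{1, "NAME_CHANGED"},  // predates creation: discarded
		{5, "CHAT_CREATED"},
		{7, "MEMBERS_ADDED"},
	}
	fmt.Println(len(filterByCreation(events, 5))) // 2
}
```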
#### NAME_CHANGED

`admins` use a `name changed` event to change the name of the group chat.
Upon receiving this event a client MUST validate the `chatId` provided with the updates
and MUST ensure the author of the event is an admin of the chat, otherwise the event MUST be ignored.
If the event is valid, the chat name SHOULD be changed to `name`.

#### MEMBERS_ADDED

`admins` use a `members added` event to add members to the chat.
Upon receiving this event a client MUST validate the `chatId` provided with the updates
and MUST ensure the author of the event is an admin of the chat, otherwise the event MUST be ignored.
If the event is valid, a client MUST update the list of members of the chat who have not joined, adding the `members` received.
`members` is an array of hex-encoded public keys.
#### MEMBER_JOINED

`members` use a `member joined` event to signal that they want to start receiving messages from this chat.
Upon receiving this event a client MUST validate the `chatId` provided with the updates.
If the event is valid, a client MUST update the list of members of the chat who joined, adding the signer.
Any `message` sent to the group chat should now also reach the newly joined member.

#### ADMINS_ADDED

`admins` use an `admins added` event to make other members admins of the chat.
Upon receiving this event a client MUST validate the `chatId` provided with the updates,
MUST ensure the author of the event is an admin of the chat
and MUST ensure all `members` are already `members` of the chat, otherwise the event MUST be ignored.
If the event is valid, a client MUST update the list of admins of the chat, adding the `members` received.
`members` is an array of hex-encoded public keys.
#### MEMBER_REMOVED

`members` and/or `admins` use a `member removed` event to leave or kick members of the chat.
Upon receiving this event a client MUST validate the `chatId` provided with the updates and MUST ensure that:

- If the author of the event is an admin, the target can only be themselves or a non-admin member.
- If the author of the event is not an admin, the target of the event can only be themselves.

If the event is valid, a client MUST remove the member from the list of `members`/`admins` of the chat,
and no further messages should be sent to them.
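The two authorization rules above reduce to a small predicate. A minimal sketch (function and key names are illustrative):

```go
package main

import "fmt"

// canRemove mirrors the MEMBER_REMOVED rules: anyone may remove
// themselves (leave); otherwise only an admin may remove a
// non-admin member.
func canRemove(author, target string, admins map[string]bool) bool {
	if author == target {
		return true // leaving the chat
	}
	return admins[author] && !admins[target]
}

func main() {
	admins := map[string]bool{"0xadmin": true}
	fmt.Println(canRemove("0xadmin", "0xmember", admins)) // true
	fmt.Println(canRemove("0xmember", "0xadmin", admins)) // false
}
```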
#### ADMIN_REMOVED

`admins` use an `admin removed` event to drop admin privileges.
Upon receiving this event a client MUST validate the `chatId` provided with the updates
and MUST ensure that the author of the event is also the target of the event.

If the event is valid, a client MUST remove the member from the list of `admins` of the chat.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [PAYLOADS](/status/deprecated/payloads.md)
- [UUID](https://tools.ietf.org/html/rfc4122)
---
title: Keycard Usage for Wallet and Chat Keys
name: Keycard Usage for Wallet and Chat Keys
status: deprecated
description: This specification describes how Status communicates with Keycard to create, store and use a multiaccount.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Roman Volosovskyi <roman@status.im>
---

## Abstract

This specification describes how Status communicates with Keycard to create, store and use a multiaccount.

## Definitions

| Term | Description |
| ------------------ | -------------------------------------------------------- |
| Keycard Hardwallet | [https://keycard.tech/docs/](https://keycard.tech/docs/) |
## Multiaccount creation/restoring

### Creation and restoring via mnemonic

1. `status-im.hardwallet.card/get-application-info`

request: `nil`

response: `{"initialized?" false}`

2. `status-im.hardwallet.card/init-card`

request: `{:pin 123123}`

response:

```clojure
{"password" "nEJXqf6VWbqeC5oN",
 "puk" "411810112887",
 "pin" "123123"}
```

3. `status-im.hardwallet.card/get-application-info`

request: `nil`

response:

```clojure
{"free-pairing-slots" 5,
 "app-version" "2.2",
 "secure-channel-pub-key" "04e70d7af7d91b8cd23adbefdfc242c096adee6c1b5ad27a4013a8f926864c1a4f816b338238dc4a04226ab42f23672585c6dca03627885530643f1656ee69b025",
 "key-uid" "",
 "instance-uid" "9f149d438988a7af5e1a186f650c9328",
 "paired?" false,
 "has-master-key?" false,
 "initialized?" true}
```

4. `status-im.hardwallet.card/pair`

params: `{:password "nEJXqf6VWbqeC5oN"}`

response: `AAVefVX0kPGsxnvQV5OXRbRTLGI3k8/S27rpsq/lZrVR` (`pairing`)

5. `status-im.hardwallet.card/generate-and-load-keys`

params:

```clojure
{:mnemonic "lift mansion moment version card type uncle sunny lock gather nerve math",
 :pairing "AAVefVX0kPGsxnvQV5OXRbRTLGI3k8/S27rpsq/lZrVR",
 :pin "123123"}
```

response:

```clojure
{"whisper-address" "1f29a1a60c8a12f80c397a91c6ae0323f420e609",
 "whisper-private-key" "123123123123123",
 "wallet-root-public-key" "04eb9d01990a106a65a6dfaa48300f72aecfeabe502d9f4f7aeaccb146dc2f16e2dec81dcec0a1a52c1df4450f441a48c210e1a73777c0161030378df22e4ae015",
 "encryption-public-key" "045ee42f012d72be74b31a28ce320df617e0cd5b9b343fad34fcd61e2f5dfa89ab23d880473ba4e95401a191764c7f872b7af92ea0d8c39462147df6f3f05c2a11",
 "wallet-root-address" "132dd67ff47cc1c376879c474fd2afd0f1eee6de",
 "whisper-public-key" "0450ad84bb95f32c64f4e5027cc11d1b363a0566a0cfc475c5653e8af9964c5c9b0661129b75e6e1bc6e96ba2443238e53e7f49f2c5f2d16fcf04aca4826765d46",
 "address" "bf93eb43fea2ce94bf3a6463c16680b56aa4a08a",
 "wallet-address" "7eee1060d8e4722d36c99f30ff8291caa3cfc40c",
 "key-uid" "472d8436ccedb64bcbd897bed5895ec3458b306352e1bcee377df87db32ef2c2",
 "wallet-public-key" "0495ab02978ea1f8b059140e0be5a87aad9b64bb7d9706735c47dda6e182fd5ca41744ca37583b9a10c316b01d4321d6c85760c61301874089acab041037246294",
 "public-key" "0465d452d12171711f32bb931f9ea26fe1b88fe2511a7909a042b914fde10a99719136365d506e2d1694fc14627f9d557da33865efc6001da3942fc1d4d2469ca1",
 "instance-uid" "9f149d438988a7af5e1a186f650c9328"}
```
### Multiaccount restoring via pairing

This flow is required when a user wants to pair a card that already holds an existing multiaccount.

1. `status-im.hardwallet.card/get-application-info`

request: `nil`

response:

```clojure
{"free-pairing-slots" 4,
 "app-version" "2.2",
 "secure-channel-pub-key" "04e70d7af7d91b8cd23adbefdfc242c096adee6c1b5ad27a4013a8f926864c1a4f816b338238dc4a04226ab42f23672585c6dca03627885530643f1656ee69b025",
 "key-uid" "",
 "instance-uid" "9f149d438988a7af5e1a186f650c9328",
 "paired?" false,
 "has-master-key?" false,
 "initialized?" true}
```

2. `status-im.hardwallet.card/pair`

params: `{:password "nEJXqf6VWbqeC5oN"}`

response: `AAVefVX0kPGsxnvQV5OXRbRTLGI3k8/S27rpsq/lZrVR` (`pairing`)

3. `status-im.hardwallet.card/generate-and-load-keys`

params:

```clojure
{:mnemonic "lift mansion moment version card type uncle sunny lock gather nerve math",
 :pairing "AAVefVX0kPGsxnvQV5OXRbRTLGI3k8/S27rpsq/lZrVR",
 :pin "123123"}
```

response:

```clojure
{"whisper-address" "1f29a1a60c8a12f80c397a91c6ae0323f420e609",
 "whisper-private-key" "123123123123123123123",
 "wallet-root-public-key" "04eb9d01990a106a65a6dfaa48300f72aecfeabe502d9f4f7aeaccb146dc2f16e2dec81dcec0a1a52c1df4450f441a48c210e1a73777c0161030378df22e4ae015",
 "encryption-public-key" "045ee42f012d72be74b31a28ce320df617e0cd5b9b343fad34fcd61e2f5dfa89ab23d880473ba4e95401a191764c7f872b7af92ea0d8c39462147df6f3f05c2a11",
 "wallet-root-address" "132dd67ff47cc1c376879c474fd2afd0f1eee6de",
 "whisper-public-key" "0450ad84bb95f32c64f4e5027cc11d1b363a0566a0cfc475c5653e8af9964c5c9b0661129b75e6e1bc6e96ba2443238e53e7f49f2c5f2d16fcf04aca4826765d46",
 "address" "bf93eb43fea2ce94bf3a6463c16680b56aa4a08a",
 "wallet-address" "7eee1060d8e4722d36c99f30ff8291caa3cfc40c",
 "key-uid" "472d8436ccedb64bcbd897bed5895ec3458b306352e1bcee377df87db32ef2c2",
 "wallet-public-key" "0495ab02978ea1f8b059140e0be5a87aad9b64bb7d9706735c47dda6e182fd5ca41744ca37583b9a10c316b01d4321d6c85760c61301874089acab041037246294",
 "public-key" "0465d452d12171711f32bb931f9ea26fe1b88fe2511a7909a042b914fde10a99719136365d506e2d1694fc14627f9d557da33865efc6001da3942fc1d4d2469ca1",
 "instance-uid" "9f149d438988a7af5e1a186f650c9328"}
```
## Multiaccount unlocking

1. `status-im.hardwallet.card/get-application-info`

params:

```clojure
{:pairing nil, :on-success nil}
```

response:

```clojure
{"free-pairing-slots" 4,
 "app-version" "2.2",
 "secure-channel-pub-key" "04b079ac513d5e0ebbe9becbae1618503419f5cb59edddc7d7bb09ce0db069a8e6dec1fb40c6b8e5454f7e1fcd0bb4a0b9750256afb4e4390e169109f3ea3ba91d",
 "key-uid" "a5424fb033f5cc66dce9cbbe464426b6feff70ca40aa952c56247aaeaf4764a9",
 "instance-uid" "2268254e3ed7898839abe0b40e1b4200",
 "paired?" false,
 "has-master-key?" true,
 "initialized?" true}
```

2. `status-im.hardwallet.card/get-keys`

params:

```clojure
{:pairing "ACEWbvUlordYWOE6M1Narn/AXICRltjyuKIAn4kkPXQG",
 :pin "123123"}
```

response:

```clojure
{"whisper-address" "ec83f7354ca112203d2ce3e0b77b47e6e33258aa",
 "whisper-private-key" "123123123123123123123123",
 "wallet-root-public-key" "0424a93fe62a271ad230eb2957bf221b4644670589f5c0d69bd11f3371034674bf7875495816095006c2c0d5f834d628b87691a8bbe3bcc2225269020febd65a19",
 "encryption-public-key" "0437eef85e669f800570f444e64baa2d0580e61cf60c0e9236b4108455ec1943f385043f759fcb5bd8348e32d6d6550a844cf24e57f68e9397a0f7c824a8caee2d",
 "wallet-root-address" "6ff915f9f31f365511b1b8c1e40ce7f266caa5ce",
 "whisper-public-key" "04b195df4336c596cca1b89555dc55dd6bb4c5c4491f352f6fdfae140a2349213423042023410f73a862aa188f6faa05c80b0344a1e39c253756cb30d8753f9f8324",
 "address" "73509a1bb5f3b74d0dba143705cd9b4b55b8bba1",
 "wallet-address" "2f0cc0e0859e7a05f319d902624649c7e0f48955",
 "key-uid" "a5424fb033f5cc66dce9cbbe464426b6feff70ca40aa952c56247aaeaf4764a9",
 "wallet-public-key" "04d6fab73772933215872c239787b2281f3b10907d099d04b88c861e713bd2b95883e0b1710a266830da29e76bbf6b87ed034ab139e36cc235a1b2a5b5ddfd4e91",
 "public-key" "0437eef85e669f800570f444e64baa2d0580e61cf60c0e9236b4108455ec1943f385043f759fcb5bd8348e32d6d6550a844cf24e57f68e9397a0f7c824a8caee2d",
 "instance-uid" "2268254e3ed7898839abe0b40e1b4200"}
```

3. `status-im.hardwallet.card/get-application-info`

params:

```clojure
{:pairing "ACEWbvUlordYWOE6M1Narn/AXICRltjyuKIAn4kkPXQG"}
```

response:

```clojure
{"paired?" true,
 "has-master-key?" true,
 "app-version" "2.2",
 "free-pairing-slots" 4,
 "pin-retry-counter" 3,
 "puk-retry-counter" 5,
 "initialized?" true,
 "secure-channel-pub-key" "04b079ac513d5e0ebbe9becbae1618503419f5cb59edddc7d7bb09ce0db069a8e6dec1fb40c6b8e5454f7e1fcd0bb4a0b9750256afb4e4390e169109f3ea3ba91d",
 "key-uid" "a5424fb033f5cc66dce9cbbe464426b6feff70ca40aa952c56247aaeaf4764a9",
 "instance-uid" "2268254e3ed7898839abe0b40e1b4200"}
```
## Transaction signing

1. `status-im.hardwallet.card/get-application-info`

params:

```clojure
{:pairing "ALecvegKyOW4szknl01yYWx60GLDK5gDhxMgJECRZ+7h",
 :on-success :hardwallet/sign}
```

response:

```clojure
{"paired?" true,
 "has-master-key?" true,
 "app-version" "2.2",
 "free-pairing-slots" 4,
 "pin-retry-counter" 3,
 "puk-retry-counter" 5,
 "initialized?" true,
 "secure-channel-pub-key" "0476d11f2ccdad4e7779b95a1ce063d7280cb6c09afe2c0e48ca0c64ab9cf2b3c901d12029d6c266bfbe227c73a802561302b2330ac07a3270fc638ad258fced4a",
 "key-uid" "d5c8cde8085e7a3fcf95aafbcbd7b3cfe32f61b85c2a8f662f60e76bdc100718",
 "instance-uid" "e20e27bfee115b431e6e81b8e9dcf04c"}
```

2. `status-im.hardwallet.card/sign`

params:

```clojure
{:hash "92fc7ef54c3e0c42de256b93fbf2c49dc6948ee089406e204dec943b7a0142a9",
 :pairing "ALecvegKyOW4szknl01yYWx60GLDK5gDhxMgJECRZ+7h",
 :pin "123123",
 :path "m/44'/60'/0'/0/0"}
```

response: `5d2ca075593cf50aa34007a0a1df7df14a369534450fce4a2ae8d023a9d9c0e216b5e5e3f64f81bee91613318d01601573fdb15c11887a3b8371e3291e894de600`
## Account derivation

1. `status-im.hardwallet.card/verify-pin`

params:

```clojure
{:pin "123123",
 :pairing "ALecvegKyOW4szknl01yYWx60GLDK5gDhxMgJECRZ+7h"}
```

response: `3`

2. `status-im.hardwallet.card/export-key`

params:

```clojure
{:pin "123123",
 :pairing "ALecvegKyOW4szknl01yYWx60GLDK5gDhxMgJECRZ+7h",
 :path "m/44'/60'/0'/0/1"}
```

response: `046d1bcd2310a5e0094bc515b0ec995a8cb59e23d564094443af10011b6c00bdde44d160cdd32b4b6341ddd7dc83a4f31fdf60ec2276455649ccd7a22fa4ea01d8` (account's `public-key`)
## Reset pin

1. `status-im.hardwallet.card/change-pin`

params:

```clojure
{:new-pin "111111",
 :current-pin "222222",
 :pairing "AA0sKxPkN+jMHXZZeI8Rgz04AaY5Fg0CzVbm9189Khob"}
```

response: `true`
## Unblock pin

If the user enters a wrong PIN three times in a row, the card gets blocked.
The user can then use the PUK code to unblock the card and set a new PIN.

1. `status-im.hardwallet.card/unblock-pin`

params:

```clojure
{:puk "120702722103",
 :new-pin "111111",
 :pairing "AIoQl0OtCL0/uSN7Ct1/FHRMEk/eM2Lrhn0bw7f8sgOe"}
```

response: `true`
## Status go calls

In order to use the card in the app, the following parts of the status-go API are needed:

1. [`SaveAccountAndLoginWithKeycard`](https://github.com/status-im/status-go/blob/b33ad8147d23a932064f241e575511d70a601dcc/mobile/status.go#L337) after multiaccount creation/restoring, to store the multiaccount and log into it
2. [`LoginWithKeycard`](https://github.com/status-im/status-go/blob/b33ad8147d23a932064f241e575511d70a601dcc/mobile/status.go#L373) to log into an already existing account
3. [`HashTransaction`](https://github.com/status-im/status-go/blob/b33ad8147d23a932064f241e575511d70a601dcc/mobile/status.go#L492) and [`HashMessage`](https://github.com/status-im/status-go/blob/b33ad8147d23a932064f241e575511d70a601dcc/mobile/status.go#L520) for hashing a transaction/message before signing
4. [`SendTransactionWithSignature`](https://github.com/status-im/status-go/blob/b33ad8147d23a932064f241e575511d70a601dcc/mobile/status.go#L471) to send a transaction

## Where are the keys stored?

1. When a regular multiaccount is created, all its keys are stored on the device, encrypted with a key derived from the user's password. If the account was created using a Keycard, all keys are stored on the card and retrieved from it during sign-in to the multiaccount.
2. When a regular multiaccount is created, a separate database is also created for it, encrypted with a key derived from the user's password. For a Keycard account, the `encryption-public-key` (returned by `status-im.hardwallet.card/get-keys`/`status-im.hardwallet.card/generate-and-load-keys`) is used as the password.
## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [Keycard Hardwallet Documentation](https://keycard.tech/docs/)
- [Keycard Codebase](https://github.com/status-im/status-go/blob/b33ad8147d23a932064f241e575511d70a601dcc/mobile/status.go)
---
title: NOTIFICATIONS
name: Notifications
status: deprecated
description: A client should implement local notifications to offer notifications for any event in the app without the privacy cost and dependency on third-party services.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Eric Dvorsak <eric@status.im>
---
## Local Notifications

A client should implement local notifications to offer notifications
for any event in the app without the privacy cost and dependency on third-party services.
This means that the client should run a background service to continuously or periodically check for updates.

### Android

Android allows running services on the device. When the user enables notifications,
the client may start a `Foreground Service`
and display a permanent notification indicating that the service is running,
as required by Android guidelines.
The service simply keeps the app from being killed by the system when it is in the background.
The client is then able to run in the background
and display local notifications on events such as receiving a message in a one-to-one chat.

To facilitate the implementation of local notifications,
a node implementation such as `status-go` may provide a specific `notification` signal.

Notifications are a separate process in Android, and interaction with a notification generates an `Intent`.
To handle intents, the `NewMessageSignalHandler` may use a `BroadcastReceiver`
in order to update the state of local notifications when the user dismisses or taps a notification.
If the user taps on a notification, the `BroadcastReceiver` generates a new intent to open the app,
which should use universal links to get the user to the right place.
### iOS

We are not able to offer local notifications on iOS because there is no concept of services in iOS.
It offers background updates, but they are not consistently triggered and cannot be relied upon.
The system decides when the background updates are triggered, and the heuristics aren't known.

## Why are there no Push Notifications?

Push Notifications, as offered by Apple and Google, are a privacy concern:
they require a centralized service that is aware of who the notification needs to be delivered to.
## Copyright
|
||||
|
||||
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
|
||||
|
||||
## References
|
||||
|
||||
- None

@@ -1,386 +0,0 @@

---
title: PAYLOADS
name: Payloads
status: deprecated
description: Payload of messages in Status, regarding chat and chat-related use cases.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Andrea Maria Piana <andreap@status.im>
- Oskar Thorén <oskar@status.im>
---

## Abstract

This specification describes what the payload of each message in Status looks like.
It is primarily centered around chat and chat-related use cases.

The payloads aim to be flexible enough to support messaging, but also the cases
described in the [Status Whitepaper](https://status.im/whitepaper.pdf),
as well as various clients created using different technologies.

## Introduction

This document describes the payload format and some special considerations.

## Payload wrapper

The node wraps all payloads in a [protobuf record](https://developers.google.com/protocol-buffers/):

```protobuf
message ApplicationMetadataMessage {
  bytes signature = 1;
  bytes payload = 2;

  Type type = 3;

  enum Type {
    UNKNOWN = 0;
    CHAT_MESSAGE = 1;
    CONTACT_UPDATE = 2;
    MEMBERSHIP_UPDATE_MESSAGE = 3;
    PAIR_INSTALLATION = 4;
    SYNC_INSTALLATION = 5;
    REQUEST_ADDRESS_FOR_TRANSACTION = 6;
    ACCEPT_REQUEST_ADDRESS_FOR_TRANSACTION = 7;
    DECLINE_REQUEST_ADDRESS_FOR_TRANSACTION = 8;
    REQUEST_TRANSACTION = 9;
    SEND_TRANSACTION = 10;
    DECLINE_REQUEST_TRANSACTION = 11;
    SYNC_INSTALLATION_CONTACT = 12;
    SYNC_INSTALLATION_ACCOUNT = 13;
    SYNC_INSTALLATION_PUBLIC_CHAT = 14;
    CONTACT_CODE_ADVERTISEMENT = 15;
    PUSH_NOTIFICATION_REGISTRATION = 16;
    PUSH_NOTIFICATION_REGISTRATION_RESPONSE = 17;
    PUSH_NOTIFICATION_QUERY = 18;
    PUSH_NOTIFICATION_QUERY_RESPONSE = 19;
    PUSH_NOTIFICATION_REQUEST = 20;
    PUSH_NOTIFICATION_RESPONSE = 21;
  }
}
```

`signature` is the bytes of the signed `SHA3-256` hash of the payload,
signed with the key of the author of the message.
The node needs the signature to validate authorship of the message,
so that the message can be relayed to third parties.
If a signature is not present, but an author is provided by a layer below,
the message is not to be relayed to third parties, and it is considered plausibly deniable.

`payload` is the protobuf-encoded content of the message, with the corresponding `type` set.
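
As a non-normative illustration of the hashing step (the ECDSA signing of the digest with the author's chat key is elided), a sketch in Python:

```python
import hashlib


def payload_digest(payload: bytes) -> bytes:
    """Digest of the protobuf-encoded payload that the author then signs.

    Only the SHA3-256 step is shown; the signature over this digest is
    produced with the author's private key and is not sketched here.
    """
    return hashlib.sha3_256(payload).digest()


digest = payload_digest(b"\x08\x01\x12\x05hello")  # illustrative payload bytes
assert len(digest) == 32  # SHA3-256 always yields 32 bytes
```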

## Encoding

The node encodes the payload using [Protobuf](https://developers.google.com/protocol-buffers).

## Types of messages

### Message

The type `ChatMessage` represents a chat message exchanged between clients.

#### Payload

The protobuf description is:

```protobuf
message ChatMessage {
  // Lamport timestamp of the chat message
  uint64 clock = 1;
  // Unix timestamp in milliseconds; currently not used, as we use
  // Whisper/Waku as more reliable, but kept so that we don't rely on it
  uint64 timestamp = 2;
  // Text of the message
  string text = 3;
  // Id of the message that we are replying to
  string response_to = 4;
  // Ens name of the sender
  string ens_name = 5;
  // Chat id, this field is symmetric for public chats and private group chats,
  // but asymmetric in case of one-to-ones, as the sender will use the chat-id
  // of the receiver, while the receiver will use the chat-id of the sender.
  // Probably should be the concatenation of sender-pk & receiver-pk in alphabetical order
  string chat_id = 6;

  // The type of message (public/one-to-one/private-group-chat)
  MessageType message_type = 7;
  // The type of the content of the message
  ContentType content_type = 8;

  oneof payload {
    StickerMessage sticker = 9;
  }

  enum MessageType {
    UNKNOWN_MESSAGE_TYPE = 0;
    ONE_TO_ONE = 1;
    PUBLIC_GROUP = 2;
    PRIVATE_GROUP = 3;
    // Only local
    SYSTEM_MESSAGE_PRIVATE_GROUP = 4;
  }

  enum ContentType {
    UNKNOWN_CONTENT_TYPE = 0;
    TEXT_PLAIN = 1;
    STICKER = 2;
    STATUS = 3;
    EMOJI = 4;
    TRANSACTION_COMMAND = 5;
    // Only local
    SYSTEM_MESSAGE_CONTENT_PRIVATE_GROUP = 6;
  }
}
```

Payload

| Field | Name | Type | Description |
| ----- | ---- | ---- | ---- |
| 1 | clock | `uint64` | The clock of the chat |
| 2 | timestamp | `uint64` | The sender timestamp at message creation |
| 3 | text | `string` | The content of the message |
| 4 | response_to | `string` | The ID of the message replied to |
| 5 | ens_name | `string` | The ENS name of the user sending the message |
| 6 | chat_id | `string` | The local ID of the chat the message is sent to |
| 7 | message_type | `MessageType` | The type of message, different for one-to-one, public or group chats |
| 8 | content_type | `ContentType` | The type of the content of the message |
| 9 | payload | `Sticker\|nil` | The payload of the message based on the content type |

#### Content types

A node requires content types for proper interpretation of incoming messages.
Not every message is plain text; messages may carry different information.

The following content types MUST be supported:

* `TEXT_PLAIN` identifies a message whose content is plain text.

There are other content types that MAY be implemented by the client:

* `STICKER`
* `STATUS`
* `EMOJI`
* `TRANSACTION_COMMAND`

##### Mentions

A mention MUST be represented as a string with the `@0xpk` format,
where `pk` is the public key of the [user account](/status/deprecated/account.md) to be mentioned,
within the `text` field of a message with content_type `TEXT_PLAIN`.
A message MAY contain more than one mention.
This specification RECOMMENDS that the application does not require the user to enter the entire `pk`.
This specification RECOMMENDS that the application allows the user to create a mention
by typing `@` followed by the related ENS name or 3-word pseudonym.
This specification RECOMMENDS that the application provides auto-completion functionality to create a mention.
For better user experience, the client SHOULD display a known [ENS name or the 3-word pseudonym corresponding to the key](/status/deprecated/account.md#contact-verification) instead of the `pk`.
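
A minimal, hypothetical sketch of extracting mentions from a `TEXT_PLAIN` body follows; `extract_mentions` and the regex are illustrative, not part of the spec, and the exact key-length validation is omitted:

```python
import re

# Match `@0x...` mentions; a real client would also validate the key length.
MENTION_RE = re.compile(r"@(0x[0-9a-fA-F]+)")


def extract_mentions(text: str) -> list:
    """Return the public keys mentioned in a message text."""
    return MENTION_RE.findall(text)


text = "hey @0x04deadbeef can you ping @0x04cafe too?"
assert extract_mentions(text) == ["0x04deadbeef", "0x04cafe"]
```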

##### Sticker content type

A `ChatMessage` with `STICKER` `ContentType` MUST also specify the ID of the `Pack` and
the `Hash` of the pack, in the `Sticker` field of `ChatMessage`:

```protobuf
message StickerMessage {
  string hash = 1;
  int32 pack = 2;
}
```

#### Message types

A node requires message types to decide how to encrypt a particular message
and what metadata needs to be attached when passing a message to the transport layer.
For more on this, see [WHISPER-USAGE](/status/deprecated/whisper-usage.md)
and [WAKU-USAGE](/status/deprecated/waku-usage.md).

<!-- TODO: This reference is a bit odd, considering the layer payloads should interact with is Secure Transport, and not Whisper/Waku. This requires more detail -->

The following message types MUST be supported:

* `ONE_TO_ONE` is a private message between two users
* `PUBLIC_GROUP` is a message to a public group
* `PRIVATE_GROUP` is a message to a private group

#### Clock vs Timestamp and message ordering

If a user sends a new message before the messages sent
while the user was offline are received,
the new message is supposed to be displayed last in a chat.
This is where the basic algorithm of Lamport timestamps falls short,
as it is only meant to order causally related events.

The Status client therefore makes a "bid", speculating that it will beat the current chat timestamp,
such that the Status client's Lamport timestamp format is: `clock = max({timestamp}, chat_clock + 1)`

This satisfies the Lamport requirement, namely: if a -> b then T(a) < T(b)

`timestamp` MUST be the Unix time, in milliseconds, calculated when the node creates the message.
This field SHOULD NOT be relied upon for message ordering.

`clock` SHOULD be calculated using the algorithm of [Lamport timestamps](https://en.wikipedia.org/wiki/Lamport_timestamps).
When there are messages available in a chat,
the node calculates `clock`'s value based on the last received message in a particular chat: `max(timeNowInMs, last-message-clock-value + 1)`.
If there are no messages, `clock` is initialized with `timestamp`'s value.

Messages with a `clock` greater than `120` seconds over the Whisper/Waku timestamp SHOULD be discarded,
in order to prevent malicious users from increasing the `clock` of a chat arbitrarily.

Messages with a `clock` less than `120` seconds under the Whisper/Waku timestamp
might indicate an attempt to insert messages into the chat history,
which is not distinguishable from a `datasync` layer re-transmit event.
A client MAY mark these messages with a warning to the user, or discard them.
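
The clock calculation and the discard window above can be sketched as follows (illustrative only; both sides of the comparison are assumed to be in milliseconds):

```python
def next_clock(time_now_ms: int, last_message_clock: int) -> int:
    """Clock for an outgoing message: max(timeNowInMs, last-message-clock-value + 1).

    For a chat with no messages yet, pass 0 so the clock falls back to the
    sender's timestamp.
    """
    return max(time_now_ms, last_message_clock + 1)


def should_discard(clock: int, transport_timestamp_ms: int) -> bool:
    """Discard messages whose clock runs more than 120 seconds ahead of the
    Whisper/Waku timestamp."""
    return clock > transport_timestamp_ms + 120_000


# Fresh chat: the clock starts from the local timestamp.
assert next_clock(1_600_000_000_000, 0) == 1_600_000_000_000
# Active chat whose last clock ran ahead of local time: bid one above it.
assert next_clock(1_600_000_000_000, 1_600_000_000_500) == 1_600_000_000_501
```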

The node uses the `clock` value for message ordering.
The algorithm used, and the distributed nature of the system, produce causal ordering,
which might yield counter-intuitive results in some edge cases.
For example, when a user joins a public chat and sends a message
before receiving the existing messages, their message's `clock` value might be lower,
and the message will end up in the past when the historical messages are fetched.

#### Chats

A chat is a structure that helps organize messages.
It is usually desired to display messages only from a single recipient,
or from a group of recipients, at a time, and chats help achieve that.

All incoming messages can be matched against a chat.
The table below describes how to calculate a chat ID for each message type.

|Message Type|Chat ID Calculation|Direction|Comment|
|------------|-------------------|---------|-------|
|PUBLIC_GROUP|chat ID is equal to a public channel name; it should equal `chatId` from the message|Incoming/Outgoing||
|ONE_TO_ONE|let `P` be a public key of the recipient; `hex-encode(P)` is a chat ID; use it as `chatId` value in the message|Outgoing||
|user-message|let `P` be a public key of the message's signature; `hex-encode(P)` is a chat ID; discard `chat-id` from the message|Incoming|if there is no matched chat, it might be the first message from public key `P`; the node MAY discard the message or MAY create a new chat; Status official clients create a new chat|
|PRIVATE_GROUP|use `chatId` from the message|Incoming/Outgoing|find an existing chat by `chatId`; if none is found, the user is not a member of that chat or the user hasn't joined that chat, and the message MUST be discarded|
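
The first two rows of the table can be sketched as follows (illustrative; whether the hex-encoded chat ID carries a `0x` prefix is an implementation detail, shown here with the prefix as an assumption):

```python
def one_to_one_chat_id(recipient_public_key: bytes) -> str:
    """Outgoing one-to-one message: the chat ID is the hex-encoded recipient key."""
    return "0x" + recipient_public_key.hex()


def public_chat_id(channel_name: str) -> str:
    """Public chat: the chat ID is simply the channel name."""
    return channel_name


assert public_chat_id("status") == "status"
assert one_to_one_chat_id(bytes([0x04, 0xAB, 0xCD])) == "0x04abcd"
```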

### Contact Update

`ContactUpdate` is a message exchanged to notify peers that either the
user has been added as a contact, or that information about the sending user has
changed.

```protobuf
message ContactUpdate {
  uint64 clock = 1;
  string ens_name = 2;
  string profile_image = 3;
}
```

Payload

| Field | Name | Type | Description |
| ----- | ---- | ---- | ---- |
| 1 | clock | `uint64` | The clock of the chat with the user |
| 2 | ens_name | `string` | The ENS name, if set |
| 3 | profile_image | `string` | The base64-encoded profile picture of the user |

#### Contact update

A client SHOULD send a `ContactUpdate` to all the contacts each time:

* The ENS name has changed
* The user edits the profile image

A client SHOULD also periodically send a `ContactUpdate` to all the contacts.
The interval is up to the client; the Status official client sends these updates every 48 hours.

### SyncInstallationContact

The node uses `SyncInstallationContact` messages to synchronize, on a best-effort basis, the contacts to other devices.

```protobuf
message SyncInstallationContact {
  uint64 clock = 1;
  string id = 2;
  string profile_image = 3;
  string ens_name = 4;
  uint64 last_updated = 5;
  repeated string system_tags = 6;
}
```

Payload

| Field | Name | Type | Description |
| ----- | ---- | ---- | ---- |
| 1 | clock | `uint64` | clock value of the chat |
| 2 | id | `string` | id of the contact synced |
| 3 | profile_image | `string` | `base64`-encoded profile picture of the user |
| 4 | ens_name | `string` | ENS name of the contact |
| 5 | last_updated | `uint64` | timestamp of the last update |
| 6 | system_tags | `array[string]` | Array of `system_tags` for the user; this can currently be: `":contact/added", ":contact/blocked", ":contact/request-received"` |

### SyncInstallationPublicChat

The node uses `SyncInstallationPublicChat` messages to synchronize, on a best-effort basis, the public chats to other devices.

```protobuf
message SyncInstallationPublicChat {
  uint64 clock = 1;
  string id = 2;
}
```

Payload

| Field | Name | Type | Description |
| ----- | ---- | ---- | ---- |
| 1 | clock | `uint64` | clock value of the chat |
| 2 | id | `string` | id of the chat synced |

### PairInstallation

The node uses `PairInstallation` messages to propagate information about a device to its paired devices.

```protobuf
message PairInstallation {
  uint64 clock = 1;
  string installation_id = 2;
  string device_type = 3;
  string name = 4;
}
```

Payload

| Field | Name | Type | Description |
| ----- | ---- | ---- | ---- |
| 1 | clock | `uint64` | clock value of the chat |
| 2 | installation_id | `string` | A randomly generated id that identifies this device |
| 3 | device_type | `string` | The OS of the device: `ios`, `android` or `desktop` |
| 4 | name | `string` | The self-assigned name of the device |

### MembershipUpdateMessage and MembershipUpdateEvent

`MembershipUpdateEvent` is a message used to propagate information about group membership changes in a group chat.
The details are in the [Group chats specs](/status/deprecated/group-chat.md).

## Upgradability

There are two ways to upgrade the protocol without breaking compatibility:

* A node always supports accretion
* A node does not support deletion of existing fields or messages, which might break compatibility

## Security Considerations

## Changelog

### Version 0.3

Released [May 22, 2020](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)

* Added language to include Waku in all relevant places

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [Status Whitepaper](https://status.im/whitepaper.pdf)
- [protobuf record](https://developers.google.com/protocol-buffers/)
- [Protobuf](https://developers.google.com/protocol-buffers)
- [Status user account](/status/deprecated/account.md)
- [ENS name or the 3-word pseudonym corresponding to the key](/status/deprecated/account.md#contact-verification)
- [WHISPER-USAGE](/status/deprecated/whisper-usage.md)
- [WAKU-USAGE](/status/deprecated/waku-usage.md)
- [Lamport timestamps](https://en.wikipedia.org/wiki/Lamport_timestamps)
- [Group chats specs](/status/deprecated/group-chat.md)
- [May 22, 2020 change commit](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)

@@ -1,753 +0,0 @@

---
title: PUSH-NOTIFICATION-SERVER
name: Push notification server
status: deprecated
description: Status provides a set of push notification services that can be used to achieve this functionality.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Andrea Maria Piana <andreap@status.im>
---

## Reason

Push notifications for iOS devices and some Android devices can only be implemented by relying on the [APN service](https://developer.apple.com/library/archive/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/APNSOverview.html#//apple_ref/doc/uid/TP40008194-CH8-SW1) for iOS or [Firebase](https://firebase.google.com/).

This is useful for Android devices that do not support foreground services
or that often kill the foreground service.

iOS only allows certain kinds of applications to keep a connection open when in the
background, VoIP for example, which the current Status client does not qualify for.

Applications on iOS can also request execution time when they are in the [background](https://developer.apple.com/documentation/uikit/app_and_environment/scenes/preparing_your_ui_to_run_in_the_background/updating_your_app_with_background_app_refresh),
but this has a limited set of use cases; for example, no time will be scheduled
if the application was force quit,
and it is generally not responsive enough to implement a push notification system.

Therefore Status provides a set of push notification services
that can be used to achieve this functionality.

Because this can't be safely implemented in a privacy-preserving manner,
clients MUST be given an option to opt in to receiving and sending push notifications.
They are disabled by default.

## Requirements

The party releasing the app, Status in this case, MUST possess a certificate for the Apple Push Notification service
and has to run a publicly accessible [gorush](https://github.com/appleboy/gorush) server for sending the actual notifications.

## Components

### Gorush instance

A [gorush](https://github.com/appleboy/gorush) instance MUST be publicly available;
it will be used only by push notification servers.

### Push notification server

A push notification server is used by clients to register for receiving and sending push notifications.

### Registering client

A Status client that wants to receive push notifications.

### Sending client

A Status client that wants to send push notifications.

## Registering with the push notification service

A client MAY register with one or more push notification services of their choice.

A `PNR message` (Push Notification Registration) MUST be sent to the [partitioned topic](/status/deprecated/waku-usage.md#partitioned-topic)
for the public key of the node, encrypted with this key.

The message MUST be wrapped in an [`ApplicationMetadataMessage`](/status/deprecated/payloads.md#payload-wrapper) with type set to `PUSH_NOTIFICATION_REGISTRATION`.

The marshaled protobuf payload MUST also be encrypted with AES-GCM
using the Diffie–Hellman key generated from the client and server identity.

This is done in order to ensure that the key extracted from the signature will be
considered invalid if it can't decrypt the payload.

The content of the message MUST contain the following [protobuf record](https://developers.google.com/protocol-buffers/):

```protobuf
message PushNotificationRegistration {
  enum TokenType {
    UNKNOWN_TOKEN_TYPE = 0;
    APN_TOKEN = 1;
    FIREBASE_TOKEN = 2;
  }
  TokenType token_type = 1;
  string device_token = 2;
  string installation_id = 3;
  string access_token = 4;
  bool enabled = 5;
  uint64 version = 6;
  repeated bytes allowed_key_list = 7;
  repeated bytes blocked_chat_list = 8;
  bool unregister = 9;
  bytes grant = 10;
  bool allow_from_contacts_only = 11;
  string apn_topic = 12;
  bool block_mentions = 13;
  repeated bytes allowed_mentions_chat_list = 14;
}
```

A push notification server will handle the message according to the following rules:

- it MUST extract the public key of the sender from the signature and verify that
the payload can be decrypted successfully
- it MUST verify that `token_type` is supported
- it MUST verify that `device_token` is non-empty
- it MUST verify that `installation_id` is non-empty
- it MUST verify that `version` is non-zero and greater than the currently stored version for the public key and installation id of the sender, if any
- it MUST verify that `grant` is non-empty and according to the [specs](#server-grant)
- it MUST verify that `access_token` is a valid [`uuid`](https://tools.ietf.org/html/rfc4122)
- it MUST verify that `apn_topic` is set if `token_type` is `APN_TOKEN`

If the message can't be decrypted, the message MUST be discarded.

If `token_type` is not supported, a response MUST be sent with `error` set to
`UNSUPPORTED_TOKEN_TYPE`.

If `token`, `installation_id`, `device_tokens` or `version` are empty, a response MUST
be sent with `error` set to `MALFORMED_MESSAGE`.

If the `version` is equal to or less than the currently stored version, a response MUST
be sent with `error` set to `VERSION_MISMATCH`.

If any other error occurs, the `error` should be set to `INTERNAL_ERROR`.

If the request is successful, `success` MUST be set to `true`; otherwise a response MUST be sent with `success` set to `false`.

`request_id` should be set to the `SHAKE-256` of the encrypted payload.

The response MUST be sent on the [partitioned topic](/status/deprecated/waku-usage.md#partitioned-topic) of the sender
and MUST NOT be encrypted using the [secure transport](/status/deprecated/secure-transport.md) to facilitate the usage of ephemeral keys.
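
The validation rules and error mapping above can be sketched as follows (non-normative; field names follow `PushNotificationRegistration`, the decryption, signature, and grant checks are elided, and mapping an invalid `access_token` to `MALFORMED_MESSAGE` is an assumption, as the spec does not name an error for it):

```python
import uuid

SUPPORTED_TOKEN_TYPES = {1, 2}  # APN_TOKEN, FIREBASE_TOKEN


def validate_registration(reg: dict, stored_version: int) -> str:
    """Return "OK" or the ErrorType name a server would respond with."""
    if reg.get("token_type") not in SUPPORTED_TOKEN_TYPES:
        return "UNSUPPORTED_TOKEN_TYPE"
    if not reg.get("device_token") or not reg.get("installation_id"):
        return "MALFORMED_MESSAGE"
    try:
        uuid.UUID(reg.get("access_token", ""))
    except ValueError:
        return "MALFORMED_MESSAGE"  # assumption: invalid uuid maps here
    if reg.get("token_type") == 1 and not reg.get("apn_topic"):
        return "MALFORMED_MESSAGE"
    if reg.get("version", 0) <= stored_version:
        return "VERSION_MISMATCH"
    return "OK"


reg = {"token_type": 2, "device_token": "tok", "installation_id": "inst",
       "access_token": str(uuid.uuid4()), "version": 1}
assert validate_registration(reg, stored_version=0) == "OK"
assert validate_registration({**reg, "version": 0}, 0) == "VERSION_MISMATCH"
```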

The payload of the response is:

```protobuf
message PushNotificationRegistrationResponse {
  bool success = 1;
  ErrorType error = 2;
  bytes request_id = 3;

  enum ErrorType {
    UNKNOWN_ERROR_TYPE = 0;
    MALFORMED_MESSAGE = 1;
    VERSION_MISMATCH = 2;
    UNSUPPORTED_TOKEN_TYPE = 3;
    INTERNAL_ERROR = 4;
  }
}
```

The message MUST be wrapped in an [`ApplicationMetadataMessage`](/status/deprecated/payloads.md#payload-wrapper) with type set to `PUSH_NOTIFICATION_REGISTRATION_RESPONSE`.

A client SHOULD listen for a response on the [partitioned topic](/status/deprecated/waku-usage.md#partitioned-topic)
of the key used to register.

If `success` is `true`, the client has registered successfully.

If `success` is `false`:

- If `MALFORMED_MESSAGE` is returned, the request SHOULD NOT be retried without ensuring that it is correctly formed.
- If `INTERNAL_ERROR` is returned, the request MAY be retried, but the client MUST back off exponentially.

A client MAY register with multiple push notification servers in order to increase availability.

A client SHOULD make sure that all the notification services they registered with have the same information about their tokens.

If no response is returned, the request SHOULD be considered failed and MAY be retried with the same server or a different one, but clients MUST back off exponentially after each attempt.

If the request is successful, the token SHOULD be [advertised](#advertising-a-push-notification-server) as described below.
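
An exponential backoff schedule of the kind required here might look as follows (the base delay and cap are illustrative choices, not mandated by this specification):

```python
def backoff_delays(base_seconds: float = 2.0, cap_seconds: float = 300.0,
                   attempts: int = 6) -> list:
    """Return the wait time before each retry: base * 2^n, capped."""
    return [min(base_seconds * (2 ** n), cap_seconds) for n in range(attempts)]


assert backoff_delays() == [2.0, 4.0, 8.0, 16.0, 32.0, 64.0]
assert backoff_delays(attempts=9)[-1] == 300.0  # later retries hit the cap
```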

### Query topic

On successful registration, the server MUST listen on the topic derived from:

```
0XHexEncode(Shake256(CompressedClientPublicKey))
```

using the topic derivation algorithm described [here](/status/deprecated/waku-usage.md#public-chats),
and listen for client queries.
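
The derivation above can be sketched as follows (the 32-byte SHAKE-256 output length is an assumption, as the spec does not state a digest length, and the compressed key below is a made-up example value):

```python
import hashlib


def query_topic(compressed_client_public_key: bytes) -> str:
    """Hex-encode the SHAKE-256 digest of the compressed client public key."""
    digest = hashlib.shake_256(compressed_client_public_key).digest(32)
    return "0x" + digest.hex()


topic = query_topic(bytes.fromhex("03" + "11" * 32))
assert topic.startswith("0x") and len(topic) == 66  # "0x" + 64 hex chars
```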

### Server grant

A push notification server needs to demonstrate to a client that it was authorized
by the client to send them push notifications. This is done by building
a grant which is specific to a given client-server pair.
The grant is built as follows:

```
Signature(Keccak256(CompressedPublicKeyOfClient . CompressedPublicKeyOfServer . AccessToken), PrivateKeyOfClient)
```

When receiving a grant, the server MUST validate that the signature matches the registering client.
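
The grant construction can be sketched up to the signing step as follows (non-normative; `hashlib` has no Keccak-256, so SHA3-256 stands in for it here, the example keys are made up, and the ECDSA signature over the digest is left out):

```python
import hashlib


def grant_preimage(client_pk: bytes, server_pk: bytes, access_token: str) -> bytes:
    """Concatenate client key . server key . access token, then hash.

    Real implementations hash with Keccak-256 and then sign the digest with
    the client's private key; both are outside this sketch.
    """
    material = client_pk + server_pk + access_token.encode()
    return hashlib.sha3_256(material).digest()  # stand-in for Keccak-256


digest = grant_preimage(b"\x02" * 33, b"\x03" * 33,
                        "00000000-0000-0000-0000-000000000000")
assert len(digest) == 32  # the signature is then computed over this digest
```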

## Re-registering with the push notification server

A client SHOULD re-register with the node if the APN or Firebase token changes.

When re-registering, a client SHOULD ensure that it has the most up-to-date
`PushNotificationRegistration` and increment `version` if necessary.

Once re-registered, a client SHOULD advertise the changes.

## Changing options

This is handled in exactly the same way as re-registering above.

## Unregistering from push notifications

To unregister, a client MUST send a `PushNotificationRegistration` request as described
above with `unregister` set to `true`, or remove
their device information.

The server MUST remove all data about this user if `unregister` is `true`,
apart from the `hash` of the public key and the `version` of the last options,
in order to make sure that old messages are not processed.

A client MAY unregister from a server on explicit logout if multiple chat keys
are used on a single device.

## Advertising a push notification server

Each user registered with one or more push notification servers SHOULD
periodically advertise the push notification services that they have registered with, for each device they own.

```protobuf
message PushNotificationQueryInfo {
  string access_token = 1;
  string installation_id = 2;
  bytes public_key = 3;
  repeated bytes allowed_user_list = 4;
  bytes grant = 5;
  uint64 version = 6;
  bytes server_public_key = 7;
}

message ContactCodeAdvertisement {
  repeated PushNotificationQueryInfo push_notification_info = 1;
}
```

The message MUST be wrapped in an [`ApplicationMetadataMessage`](/status/deprecated/payloads.md#payload-wrapper) with type set to `CONTACT_CODE_ADVERTISEMENT`.

If no filtering is done based on public keys,
the access token SHOULD be included in the advertisement.
Otherwise it SHOULD be left empty.

This SHOULD be advertised on the [contact code topic](/status/deprecated/waku-usage.md#contact-code-topic)
and SHOULD be coupled with normal contact-code advertisement.

Every time a user registers or re-registers with a push notification service, their
contact code SHOULD be re-advertised.

Multiple servers MAY be advertised for the same `installation_id` for redundancy reasons.

## Discovering a push notification server

To discover a push notification service for a given user, their [contact code topic](/status/deprecated/waku-usage.md#contact-code-topic)
SHOULD be listened to.
A mailserver can be queried for the specific topic to retrieve the most up-to-date
contact code.

## Querying the push notification server

If a token is not present in the latest advertisement for a user, the server
SHOULD be queried directly.

To query a server, a message:

```protobuf
message PushNotificationQuery {
  repeated bytes public_keys = 1;
}
```

wrapped in an [`ApplicationMetadataMessage`](/status/deprecated/payloads.md#payload-wrapper) with type set to `PUSH_NOTIFICATION_QUERY`,
MUST be sent to the server on the topic derived from the hashed public key of the
key being queried, as [described above](#query-topic).

An ephemeral key SHOULD be used, and the message SHOULD NOT be encrypted using the [secure transport](/status/deprecated/secure-transport.md).

If the server has information about the client, a response MUST be sent:

```protobuf
message PushNotificationQueryInfo {
  string access_token = 1;
  string installation_id = 2;
  bytes public_key = 3;
  repeated bytes allowed_user_list = 4;
  bytes grant = 5;
  uint64 version = 6;
  bytes server_public_key = 7;
}

message PushNotificationQueryResponse {
  repeated PushNotificationQueryInfo info = 1;
  bytes message_id = 2;
  bool success = 3;
}
```

A `PushNotificationQueryResponse` message MUST be wrapped in an [`ApplicationMetadataMessage`](/status/deprecated/payloads.md#payload-wrapper) with type set to `PUSH_NOTIFICATION_QUERY_RESPONSE`.

Otherwise a response MUST NOT be sent.

If `allowed_key_list` is not set, `access_token` MUST be set and `allowed_key_list` MUST NOT
be set.

If `allowed_key_list` is set, `access_token` MUST NOT be set.

If `access_token` is returned, the `access_token` SHOULD be used to send push notifications.

If `allowed_key_list` is returned, the client SHOULD decrypt each
token by generating an `AES-GCM` symmetric key from the Diffie–Hellman between the
target client and itself.
If AES decryption succeeds, it will return a valid [`uuid`](https://tools.ietf.org/html/rfc4122), which is used as the `access_token`.
The token SHOULD be used to send push notifications.

The response MUST be sent on the [partitioned topic](/status/deprecated/waku-usage.md#partitioned-topic) of the sender
and MUST NOT be encrypted using the [secure transport](/status/deprecated/secure-transport.md) to facilitate
the usage of ephemeral keys.

On receiving a response, a client MUST verify `grant` to ensure that the server
has been authorized to send push notifications to a given client.

## Sending a push notification

When sending a push notification, only the `installation_id`s for the devices targeted
by the message SHOULD be used.

If a message is for all the user's devices, all the `installation_id`s known to the client MAY be used.

The number of devices MAY be capped in order to reduce resource consumption.

At least 3 devices SHOULD be targeted, ordered by last activity.

For any device for which a token is available, or for which a token is successfully queried,
a push notification message SHOULD be sent to the corresponding push notification server.
|
||||
|
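The device-selection rule above can be sketched as follows. This is an illustrative sketch, not the implementation; the `devices` structure and the `cap` parameter are assumptions.

```python
def target_installation_ids(devices, cap=3):
    """Pick the devices to notify: order by last activity (newest first)
    and target up to `cap` devices, per the SHOULD rule above."""
    ordered = sorted(devices, key=lambda d: d["last_activity"], reverse=True)
    return [d["installation_id"] for d in ordered[:cap]]
```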

```protobuf
message PushNotification {
  string access_token = 1;
  string chat_id = 2;
  bytes public_key = 3;
  string installation_id = 4;
  bytes message = 5;
  PushNotificationType type = 6;
  enum PushNotificationType {
    UNKNOWN_PUSH_NOTIFICATION_TYPE = 0;
    MESSAGE = 1;
    MENTION = 2;
  }
  bytes author = 7;
}

message PushNotificationRequest {
  repeated PushNotification requests = 1;
  bytes message_id = 2;
}
```

A `PushNotificationRequest` message MUST be wrapped in an [`ApplicationMetadataMessage`](/status/deprecated/payloads.md#payload-wrapper) with type set to `PUSH_NOTIFICATION_REQUEST`.

Where `message` is the encrypted payload of the message and `chat_id` is the
`SHAKE-256` of the `chat_id`.
`message_id` is the id of the message.
`author` is the `SHAKE-256` of the public key of the sender.
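The `SHAKE-256` hashing applied to `chat_id` and `author` above can be sketched with the standard library; the 32-byte output length is an assumption, since the spec does not fix it here.

```python
import hashlib

def shake_256_hash(data: bytes, length: int = 32) -> bytes:
    """SHAKE-256 digest as used for `chat_id` and `author`
    (the 32-byte output length is an assumption)."""
    return hashlib.shake_256(data).digest(length)
```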

If multiple servers are available for a given push notification, only one notification
MUST be sent.

If no response is received,
a client SHOULD wait at least 3 seconds, after which the request MAY be retried against a different server.

This message SHOULD be sent using an ephemeral key.

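The wait-then-retry rule above can be sketched as follows. This is an illustrative sketch under stated assumptions: `send` is a hypothetical helper returning a response or `None`, and the injectable `sleep` exists only to make the sketch testable.

```python
import time

def send_with_retry(servers, send, timeout=3.0, sleep=time.sleep):
    """Try each server in turn; after a request with no response,
    wait at least `timeout` seconds before retrying against another server."""
    for i, server in enumerate(servers):
        response = send(server)
        if response is not None:
            return response
        if i < len(servers) - 1:
            sleep(timeout)
    return None
```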
On receiving the message, the push notification server MUST validate the access token.
If the access token is valid, a notification MUST be sent to the gorush instance with the
following data:

```json
{
  "notifications": [
    {
      "tokens": ["token_a", "token_b"],
      "platform": 1,
      "message": "You have a new message",
      "data": {
        "chat_id": chat_id,
        "message": message,
        "installation_ids": [installation_id_1, installation_id_2]
      }
    }
  ]
}
```

Where platform is `1` for iOS and `2` for Firebase, according to the [gorush documentation](https://github.com/appleboy/gorush).
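Constructing the gorush request body above can be sketched as follows. Field names follow the example payload; the generic alert text and the helper name are assumptions, not part of the spec.

```python
import json

PLATFORM_IOS, PLATFORM_FIREBASE = 1, 2  # per the gorush documentation

def gorush_payload(tokens, platform, chat_id_hash, message, installation_ids):
    """Build the gorush notification request body shown above."""
    return json.dumps({
        "notifications": [{
            "tokens": tokens,
            "platform": platform,
            "message": "You have a new message",
            "data": {
                "chat_id": chat_id_hash,
                "message": message,
                "installation_ids": installation_ids,
            },
        }]
    })
```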

A server MUST return a response message:

```protobuf
message PushNotificationReport {
  bool success = 1;
  ErrorType error = 2;
  enum ErrorType {
    UNKNOWN_ERROR_TYPE = 0;
    WRONG_TOKEN = 1;
    INTERNAL_ERROR = 2;
    NOT_REGISTERED = 3;
  }
  bytes public_key = 3;
  string installation_id = 4;
}

message PushNotificationResponse {
  bytes message_id = 1;
  repeated PushNotificationReport reports = 2;
}
```

A `PushNotificationResponse` message MUST be wrapped in an [`ApplicationMetadataMessage`](/status/deprecated/payloads.md#payload-wrapper) with type set to `PUSH_NOTIFICATION_RESPONSE`.

Where `message_id` is the `message_id` sent by the client.

The response MUST be sent on the [partitioned topic](/status/deprecated/waku-usage.md#partitioned-topic) of the sender
and MUST NOT be encrypted using the [secure transport](/status/deprecated/secure-transport.md), to facilitate
the usage of ephemeral keys.

If the request is accepted, `success` MUST be set to `true`.
Otherwise `success` MUST be set to `false`.

If `error` is `WRONG_TOKEN`, the client MAY query the server again for the token and
retry the request.

If `error` is `INTERNAL_ERROR`, the client MAY retry the request.
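The client-side error handling above can be sketched as a small decision function. The numeric values mirror the `ErrorType` enum; the returned action strings are illustrative.

```python
WRONG_TOKEN, INTERNAL_ERROR, NOT_REGISTERED = 1, 2, 3

def next_action(success: bool, error: int) -> str:
    """Map a PushNotificationReport to the client's next step (sketch)."""
    if success:
        return "done"
    if error == WRONG_TOKEN:
        return "re-query token, then retry"
    if error == INTERNAL_ERROR:
        return "retry"
    return "give up"
```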

## Flow

### Registration process

- A client will generate a notification token through `APN` or `Firebase`.
- The client will [register](#registering-with-the-push-notification-service) with one or more push notification servers of their choosing.
- The server should process the request and respond according to the success of the operation.
- If the request is not successful, it might be retried and adjusted according to the response. A different server can also be used.
- Once the request is successful, the client should [advertise](#advertising-a-push-notification-server) the new coordinates.

### Sending a notification

- A client should prepare a message and extract the targeted installation ids.
- It should retrieve the most up-to-date information for a given user, either by
querying a push notification server, by querying a mailserver if not already listening to the given topic, or by checking
the local database.
- It should then [send](#sending-a-push-notification) a push notification according
to the rules described.
- The server should then send a request to the gorush server including all the required
information.

### Receiving a push notification

- On receiving the notification, a client can open the right account by checking the
`installation_id` included. The `chat_id` MAY be used to open the chat if present.
- `message` can be decrypted and presented to the user. Otherwise, messages can be pulled from the mailserver if the `message_id` is not already present.

## Protobuf description

### PushNotificationRegistration

`token_type`: the type of token. Currently supported are `APN_TOKEN` for Apple Push Notifications and `FIREBASE_TOKEN` for Firebase.
`device_token`: the actual push notification token sent by `Firebase` or `APN`.
`installation_id`: the [`installation_id`](/status/deprecated/account.md) of the device.
`access_token`: the access token that will be given to clients to send push notifications.
`enabled`: whether the device wants to be sent push notifications.
`version`: a monotonically increasing number identifying the current `PushNotificationRegistration`. Any time anything is changed in the record it MUST be increased by the client, otherwise the request will not be accepted.
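The `version` rule above reduces to a strict monotonicity check on the server side; a minimal sketch:

```python
def accept_registration(stored_version: int, new_version: int) -> bool:
    """A registration update is accepted only if its version is
    strictly greater than the version already stored."""
    return new_version > stored_version
```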
`allowed_key_list`: a list of `access_token`s encrypted with the AES key generated
by Diffie–Hellman between the publisher and the allowed contact.
`blocked_chat_list`: a list of `SHA2-256` hashes of chat ids.
Any chat id in this list will not trigger a notification.
`unregister`: whether the account should be unregistered.
`grant`: the grant for this specific server.
`allow_from_contacts_only`: whether the client only wants push notifications from contacts.
`apn_topic`: the APN topic for the push notification.
`block_mentions`: whether the client does not want to be notified on mentions.
`allowed_mentions_chat_list`: a list of `SHA2-256` hashes of chat ids where the client wants to receive mentions.

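The `blocked_chat_list` filtering described above can be sketched as a hash-membership check; the helper name is illustrative.

```python
import hashlib

def should_notify(chat_id: str, blocked_chat_list: set) -> bool:
    """The list holds SHA2-256 hashes of chat ids;
    a matching hash suppresses the notification."""
    digest = hashlib.sha256(chat_id.encode()).digest()
    return digest not in blocked_chat_list
```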
#### Data disclosed

- Type of device owned by a given user
- The `FIREBASE` or `APN` push notification token
- The hash of the chat ids a user is not interested in for notifications
- The number of times a push notification record has been modified by the user
- The number of contacts a client has, in case `allowed_key_list` is set

### PushNotificationRegistrationResponse

`success`: whether the registration was successful.
`error`: the error type, if any.
`request_id`: the `SHAKE-256` hash of the `signature` of the request.
`preferences`: the server-stored preferences, in case of an error.

### ContactCodeAdvertisement

`push_notification_info`: the information for each device advertised.

#### Data disclosed

- The chat key of the sender

### PushNotificationQuery

`public_keys`: the `SHAKE-256` of the public keys the client is interested in.

#### Data disclosed

- The hash of the public keys the client is interested in

### PushNotificationQueryInfo

`access_token`: the access token used to send a push notification.
`installation_id`: the `installation_id` of the device associated with the `access_token`.
`public_key`: the `SHAKE-256` of the public key associated with this `access_token` and `installation_id`.
`allowed_key_list`: a list of encrypted access tokens to be returned
to the client in case there is any filtering on public keys in place.
`grant`: the grant used to register with this server.
`version`: the version of the registration on the server.
`server_public_key`: the compressed public key of the server.

### PushNotificationQueryResponse

`info`: a list of `PushNotificationQueryInfo`.
`message_id`: the message id of the `PushNotificationQuery` the server is replying to.
`success`: whether the query was successful.

### PushNotification

`access_token`: the access token used to send a push notification.
`chat_id`: the `SHAKE-256` of the `chat_id`.
`public_key`: the `SHAKE-256` of the compressed public key of the receiving client.
`installation_id`: the installation id of the receiving client.
`message`: the encrypted message that is being notified on.
`type`: the type of the push notification, either `MESSAGE` or `MENTION`.
`author`: the `SHAKE-256` of the public key of the sender.

#### Data disclosed

- The `SHAKE-256` of the `chat_id` the notification is to be sent for
- The ciphertext of the message
- The `SHAKE-256` of the public key of the sender
- The type of notification

### PushNotificationRequest

`requests`: a list of `PushNotification`.
`message_id`: the [status message id](/status/deprecated/payloads.md).

#### Data disclosed

- The status message id the notification is for

### PushNotificationResponse

`message_id`: the `message_id` being notified on.
`reports`: a list of `PushNotificationReport`.

### PushNotificationReport

`success`: whether the push notification was successful.
`error`: the type of the error in case of failure.
`public_key`: the public key of the user being notified.
`installation_id`: the installation id of the user being notified.

## Anonymous mode of operations

An anonymous mode of operations MAY be provided by the client, where the
responsibility of propagating information about the user is left to the client,
in order to preserve privacy.

A client in anonymous mode can register with the server using a key different
from their chat key.
This will hide their real chat key.

This public key is effectively a secret and SHOULD only be disclosed to clients that the user wants to be notified by.

A client MAY advertise the access token on the contact-code topic of the generated key.
A client MAY share their public key through [contact updates](/status/deprecated/payloads.md#contact-update).

A client receiving a push notification public key SHOULD listen to the contact code
topic of the push notification public key for updates.

The method described above effectively does not disclose the identity of the sender
nor the receiver to the server, but MAY result in missing push notifications, as
the propagation of the secret is left to the client.

This can be mitigated by [device syncing](/status/deprecated/payloads.md), but not completely
addressed.

## Security considerations

If no anonymous mode is used, when registering with a push notification service a client discloses:

- The chat key
- The devices that will receive notifications

A client MAY disclose:

- The hash of the chat ids they want to filter out

When running in anonymous mode, the client's chat key is not disclosed.

When querying a push notification server a client will disclose:

- That it is interested in sending push notifications to another client,
but the querying client's chat key is not disclosed

When sending a push notification a client discloses:

- The `SHAKE-256` of the chat id

[//]: # (This section can be removed; for now leaving it here in order to help with the review process. Points can be integrated, suggestions welcome.)

## FAQ

### Why is ACL done on the server side and not the client?

We looked into silent notifications for
[iOS](https://developer.apple.com/documentation/usernotifications/setting_up_a_remote_notification_server/pushing_background_updates_to_your_app) (Android has no equivalent),
but they cannot be used, as a device is expected to receive at most 2-3 per hour, which does not fit our use case. There
are also issues when the user force quits the app.

### Why use an access token?

The access token is used to decouple requesting information about the user from
actually sending the push notification.

Some ACL is necessary, otherwise it would be too easy to spam users (it is still fairly
trivial, but with this method you could allow only contacts to send you push notifications).

Therefore your identity must be revealed to the server either when sending or when querying.

By using an access token we increase deniability, as the server would know
who requested the token but not necessarily who sent a push notification.
Correlation between the two can be trivial in some cases.

This also allows a mode of use as we had before, where the server does not propagate
info at all, and it is left to the user to propagate the token, through contact requests
for example.

### Why advertise with the bundle?

Advertising with the bundle allows us to piggyback on an already implemented behavior
and save some bandwidth in cases where no filtering by public keys is in place.

### What's the bandwidth impact for this?

Generally speaking, for each 1-to-1 message and group chat message you will send
1 and `number of participants` push notifications respectively. This can be optimized if
multiple users are using the same push notification server. Queries also have
a bandwidth impact, but they are made only when actually needed.

### What's the information disclosed?

The data disclosed with each message sent by the client is listed above, but in summary:

When you register with a push notification service you may disclose:

1) Your chat key
2) Which devices you have
3) The hash of the chat ids you want to filter out
4) The hash of the public keys you are interested/not interested in

When you query a notification service you may disclose:

1) Your chat key
2) The fact that you are interested in sending push notifications to a given user

Effectively this is fairly revealing if the user has a whitelist implemented.
Therefore sending notifications should be optional.

### What prevents a user from generating a random key, getting an access token, and spamming?

Nothing really; that's the same as for the Status app as a whole. The only mechanism that prevents
this is using a whitelist as described above,
but that implies disclosing your true identity to the push notification server.

### Why not zero-knowledge proofs/quantum computing?

We start simple; we can iterate.

### How to handle backward/forward compatibility

Most of the requests have a target, so protocol negotiation can happen. We cannot negotiate
the advertisement, as that is effectively a broadcast, but that information should not change and we can
always accrete the message.

### Why ack_key?

That's necessary to avoid duplicated push notifications and to allow for retries
in case the notification is not successful.

Deduplication of the push notification is done on the client side, to reduce centralization
a bit and also in order not to have to modify gorush.

### Can I run my own node?

Sure, the methods allow that.

### Can I register with multiple nodes for redundancy?

Yes.

### What does my node disclose?

Your node will disclose the IP address it is running from, as it makes an HTTP POST to
gorush. A Waku adapter could be used, but not for now.

### Does this have high-reliability requirements?

The gorush server does; there is no way around it.

The rest, somewhat: at least one node holding your token needs to be up for you to receive notifications.
But you can register with multiple servers (desktop, status, etc.) if that's a concern.

### Can someone else (i.e. not Status) run this?

Push notification servers can be run by anyone. Gorush can presumably be run by anyone too,
but we are in charge of the certificate, so they would not be able to notify Status clients.

## Changelog

### Version 0.1

[Released](https://github.com/status-im/specs/commit/)

- Initial version

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [APN Service](https://developer.apple.com/library/archive/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/APNSOverview.html#//apple_ref/doc/uid/TP40008194-CH8-SW1)
- [Background Execution on iOS](https://developer.apple.com/documentation/uikit/app_and_environment/scenes/preparing_your_ui_to_run_in_the_background/updating_your_app_with_background_app_refresh)
- [Firebase](https://firebase.google.com/)
- [Gorush](https://github.com/appleboy/gorush)
- [UUID Specification](https://tools.ietf.org/html/rfc4122)
- [Secure Transport](/status/deprecated/secure-transport.md)
- [Silent Notifications on iOS](https://developer.apple.com/documentation/usernotifications/setting_up_a_remote_notification_server/pushing_background_updates_to_your_app)
- [Waku Usage](/status/deprecated/waku-usage.md)
- [ENS Contract](https://github.com/ensdomains/ens)
- [Payloads](/status/deprecated/payloads.md)

---
title: SECURE-TRANSPORT
name: Secure Transport
status: deprecated
description: This document describes how Status provides a secure channel between two peers, providing confidentiality, integrity, authenticity, and forward secrecy.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Andrea Maria Piana <andreap@status.im>
- Corey Petty <corey@status.im>
- Dean Eigenmann <dean@status.im>
- Oskar Thorén <oskar@status.im>
- Pedro Pombeiro <pedro@status.im>
---

## Abstract

This document describes how Status provides a secure channel between two peers,
and thus provides confidentiality, integrity, authenticity, and forward secrecy.
It is transport-agnostic and works over asynchronous networks.

It builds on the [X3DH](https://signal.org/docs/specifications/x3dh/) and [Double Ratchet](https://signal.org/docs/specifications/doubleratchet/) specifications, with some adaptations to operate in a decentralized environment.

## Introduction

This document describes how nodes establish a secure channel,
and how various conversational security properties are achieved.

### Definitions

- **Perfect Forward Secrecy** is a feature of specific key-agreement protocols
which provides assurances that session keys will not be compromised even if the private keys of the participants are compromised.
Specifically, past messages cannot be decrypted by a third party who manages to get hold of a private key.

- **Secret channel** describes a communication channel where the Double Ratchet algorithm is in use.

### Design Requirements

- **Confidentiality**: The adversary should not be able to learn what data is being exchanged between two Status clients.
- **Authenticity**: The adversary should not be able to cause either endpoint of a Status 1:1 chat
to accept data from any third party as though it came from the other endpoint.
- **Forward Secrecy**: The adversary should not be able to learn
what data was exchanged between two Status clients if, at some later time,
the adversary compromises one or both of the endpoint devices.
- **Integrity**: The adversary should not be able to cause either endpoint of a Status 1:1 chat
to accept data that has been tampered with.

All of these properties are ensured by the use of [Signal's Double Ratchet](https://signal.org/docs/specifications/doubleratchet/).

### Conventions

Types used in this specification are defined using [Protobuf](https://developers.google.com/protocol-buffers/).

### Transport Layer

[Whisper](/status/deprecated/whisper-usage.md) and [Waku](/status/deprecated/waku-usage.md) serve as the transport layers for the Status chat protocol.

### User flow for 1-to-1 communications

#### Account generation

See the [Account specification](/status/deprecated/account.md).

#### Account recovery

If Alice later recovers her account, the Double Ratchet state information will not be available,
so she is no longer able to decrypt any messages received from existing contacts.

If an incoming message (on the same Whisper/Waku topic) fails to decrypt,
the node replies with a message containing the current bundle, so that the other end is notified of the new device.
Subsequent communications will use this new bundle.

## Messaging

All 1:1 and group chat messaging in Status is subject to end-to-end encryption
to provide users with a strong degree of privacy and security.
Public chat messages are publicly readable by anyone, since there is no permission model
for who participates in a public chat.

The rest of this document is purely about 1:1 and private group chat.
Private group chat largely reduces to 1:1 chat, since there is a secure channel between each pairwise participant.

### End-to-end encryption

End-to-end encryption (E2EE) takes place between two clients.
The main cryptographic protocol is a [Status implementation](https://github.com/status-im/doubleratchet/) of the Double Ratchet protocol,
which is in turn derived from the [Off-the-Record protocol](https://otr.cypherpunks.ca/Protocol-v3-4.1.1.html), using a different ratchet.
The transport protocol (Whisper/Waku, see section [Transport Layer](#transport-layer)) subsequently encrypts the message payload using symmetric key encryption.
Furthermore, Status uses the concept of prekeys (through the use of [X3DH](https://signal.org/docs/specifications/x3dh/))
to allow the protocol to operate in an asynchronous environment.
It is not necessary for two parties to be online at the same time to initiate an encrypted conversation.

Status uses the following cryptographic primitives:

- Whisper/Waku
  - AES-256-GCM
  - ECIES
  - ECDSA
  - KECCAK-256
- X3DH
  - Elliptic curve Diffie-Hellman key exchange (secp256k1)
  - KECCAK-256
  - ECDSA
  - ECIES
- Double Ratchet
  - HMAC-SHA-256 as MAC
  - Elliptic curve Diffie-Hellman key exchange (Curve25519)
  - AES-256-CTR with HMAC-SHA-256 and IV derived alongside an encryption key

The node achieves key derivation using HKDF.

### Prekeys

Every client initially generates some key material which is stored locally:

- Identity keypair based on secp256k1 - `IK`
- A signed prekey based on secp256k1 - `SPK`
- A prekey signature - `Sig(IK, Encode(SPK))`

More details can be found in the `X3DH Prekey bundle creation` section of [2/ACCOUNT](/status/deprecated/account.md#x3dh-prekey-bundles).

Prekey bundles can be extracted from any user's messages,
or found via searching for their specific topic, `{IK}-contact-code`.

TODO: See below on bundle retrieval; this seems like an enhancement and a parameter for recommendation.

### Bundle retrieval

<!-- TODO: Potentially move this completely over to [Trust Establishment](./status-account-spec.md) -->

X3DH works by having client apps create and make available a bundle of prekeys (the X3DH bundle)
that can later be requested by other interlocutors when they wish to start a conversation with a given user.

In the X3DH specification, nodes typically use a shared server
to store bundles and allow other users to download them upon request.
Given Status' goal of decentralization,
Status chat clients cannot rely on the same type of infrastructure
and must achieve the same result using other means.
In increasing order of convenience and security, the considered approaches are:

- contact codes;
- public and one-to-one chats;
- QR codes;
- ENS record;
- decentralized permanent storage (e.g. Swarm, IPFS);
- Whisper/Waku.

<!-- TODO: Comment, it isn't clear what we actually _do_. It seems as if this is exploring the problem space. From a protocol point of view, it might make sense to describe the interface, and then have a recommendation section later on that specifies what we do. See e.g. Signal's specs where they specify specifics later on. -->

Currently, only public and one-to-one message exchanges and Whisper/Waku are used to exchange bundles.

Since bundles stored in QR codes or ENS records cannot be updated to delete already used keys,
the approach taken is to rotate the bundle more frequently (once every 24 hours),
and the app propagates the new bundle through the available channels.

### 1:1 chat contact request

There are two phases in the initial negotiation of a 1:1 chat:

1. **Identity verification** (e.g., face-to-face contact exchange through QR code, Identicon matching).
A QR code serves two purposes simultaneously - identity verification and initial bundle retrieval;
1. **Asynchronous initial key exchange**, using X3DH.

For more information on account generation and trust establishment, see [2/ACCOUNT](/status/deprecated/account.md).

#### Initial key exchange flow (X3DH)

[Section 3 of the X3DH protocol](https://signal.org/docs/specifications/x3dh/#sending-the-initial-message) describes the initial key exchange flow, with some additional context:

- The users' identity keys `IK_A` and `IK_B` correspond to their respective Status chat public keys;
- Since it is not possible to guarantee that a prekey will be used only once in a decentralized world,
the one-time prekey `OPK_B` is not used in this scenario;
- Nodes do not send bundles to a centralized server, but instead serve them in a decentralized way as described in [bundle retrieval](#bundle-retrieval).

Alice retrieves Bob's prekey bundle; however, it is not specific to Alice. It contains:

([protobuf](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L12))

``` protobuf
// X3DH prekey bundle
message Bundle {
  bytes identity = 1;
  map<string,SignedPreKey> signed_pre_keys = 2;
  bytes signature = 4;
  int64 timestamp = 5;
}
```

- `identity`: Identity key `IK_B`
- `signed_pre_keys`: Signed prekey `SPK_B` for each device, indexed by `installation-id`
- `signature`: Prekey signature <i>Sig(`IK_B`, Encode(`SPK_B`))</i>
- `timestamp`: When the bundle was created locally

([protobuf](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L5))

``` protobuf
message SignedPreKey {
  bytes signed_pre_key = 1;
  uint32 version = 2;
}
```

The `signature` is generated by sorting the `installation-id`s in lexicographical order and concatenating each `signed-pre-key` and `version`:

`installation-id-1signed-pre-key1version1installation-id2signed-pre-key2-version-2`
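The concatenation rule above can be sketched as follows; the dict layout (installation-id mapped to a signed-pre-key/version pair) is an assumption for illustration.

```python
def signature_payload(signed_pre_keys: dict) -> bytes:
    """Build the byte string that is signed for a Bundle: installation-ids in
    lexicographical order, each followed by its signed-pre-key and version."""
    out = b""
    for installation_id in sorted(signed_pre_keys):
        spk, version = signed_pre_keys[installation_id]
        out += installation_id.encode() + spk + str(version).encode()
    return out
```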

#### Double Ratchet

Having established the initial shared secret `SK` through X3DH, it can be used to seed a Double Ratchet exchange between Alice and Bob.

Please refer to the [Double Ratchet spec](https://signal.org/docs/specifications/doubleratchet/) for more details.

The initial message sent by Alice to Bob is sent as a top-level `ProtocolMessage` ([protobuf](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L65))
containing a map of `DirectMessageProtocol` indexed by `installation-id` ([protobuf](https://github.com/status-im/status-go/blob/1ac9dd974415c3f6dee95145b6644aeadf02f02c/services/shhext/chat/encryption.proto#L56)):

``` protobuf
message ProtocolMessage {

  string installation_id = 2;

  repeated Bundle bundles = 3;

  // One to one message, encrypted, indexed by installation_id
  map<string,DirectMessageProtocol> direct_message = 101;

  // Public chats, not encrypted
  bytes public_message = 102;

}
```

- `bundles`: a sequence of bundles
- `installation_id`: the installation id of the sender
- `direct_message`: a map of `DirectMessageProtocol` indexed by `installation-id`
- `public_message`: unencrypted public chat message

``` protobuf
message DirectMessageProtocol {
  X3DHHeader X3DH_header = 1;
  DRHeader DR_header = 2;
  DHHeader DH_header = 101;
  // Encrypted payload
  bytes payload = 3;
}
```

- `X3DH_header`: the `X3DHHeader` field in `DirectMessageProtocol` contains
  ([protobuf](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L47)):

  ``` protobuf
  message X3DHHeader {
    bytes key = 1;
    bytes id = 4;
  }
  ```

  - `key`: Alice's ephemeral key `EK_A`;
  - `id`: identifier stating which of Bob's prekeys Alice used, in this case Bob's bundle signed prekey.

  Alice's identity key `IK_A` is sent at the transport layer level (Whisper/Waku).

- `DR_header`: Double ratchet header ([protobuf](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L31)). Used when Bob's public bundle is available:
|
||||
``` protobuf
|
||||
message DRHeader {
|
||||
bytes key = 1;
|
||||
uint32 n = 2;
|
||||
uint32 pn = 3;
|
||||
bytes id = 4;
|
||||
}
|
||||
```
|
||||
- `key`: Alice's current ratchet public key (as mentioned in [DR spec section 2.2](https://signal.org/docs/specifications/doubleratchet/#symmetric-key-ratchet));
|
||||
- `n`: number of the message in the sending chain;
|
||||
- `pn`: length of the previous sending chain;
|
||||
- `id`: Bob's bundle ID.
|
||||
|
||||
- `DH_header`: Diffie-Helman header (used when Bob's bundle is not available):
|
||||
([protobuf](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L42))
|
||||
``` protobuf
|
||||
message DHHeader {
|
||||
bytes key = 1;
|
||||
}
|
||||
```
|
||||
- `key`: Alice's compressed ephemeral public key.
|
||||
|
||||
- `payload`:
|
||||
- if a bundle is available, contains payload encrypted with the Double Ratchet algorithm;
|
||||
- otherwise, payload encrypted with output key of DH exchange (no Perfect Forward Secrecy).
|
||||
```
|
||||
<!-- TODO: A lot of links to status-go, seems likely these should be updated to status-protocol-go -->
|
||||
|
||||
## Security Considerations

The same considerations apply as in [section 4 of the X3DH spec](https://signal.org/docs/specifications/x3dh/#security-considerations) and [section 6 of the Double Ratchet spec](https://signal.org/docs/specifications/doubleratchet/#security-considerations), with some additions detailed below.

<!-- TODO: Add any additional context here not covered in the X3DH and DR specs -->

<!--
TODO: description here

### --- Security and Privacy Features

#### Confidentiality (YES)

> Only the intended recipients are able to read a message. Specifically, the message must not be readable by a server operator that is not a conversation participant

- Yes.
- There's a layer of encryption at Whisper as well as above with Double Ratchet
- Relay nodes and Mailservers can only read the topic of a Whisper message, and nothing within the payload.

#### Integrity (YES)

> No honest party will accept a message that has been modified in transit.

- Yes.
- Assuming a user validates every message they are able to decrypt and verifies its signature from the sender (TODO: check this assumption), it cannot be altered in transit.
  * [igorm] i'm really not sure about it, Whisper provides a signature, but I'm not sure we check it anywhere (simple grepping didn't give anything)
  * [andrea] Whisper checks the signature and a public key is derived from it, we check the public key is a meaningful public key. The pk itself is not in the content of the message for public chats/1-to-1 so potentially you could send a message from a random account without having access to the private key, but that would not be much of a deal, as you might just as easily create a random account)

#### Authentication (YES)

> Each participant in the conversation receives proof of possession of a known long-term secret from all other participants that they believe to be participating in the conversation. In addition, each participant is able to verify that a message was sent from the claimed source

- 1:1 --- one-to-one messages are encrypted with the recipient's public key, and digitally signed by the sender's. In order to provide Perfect Forward Secrecy, we build on the X3DH and Double Ratchet specifications from Open Whisper Systems, with some adaptations to operate in a decentralized environment.
- group --- group chat is pairwise
- public --- A user subscribes to a public channel topic and the decryption key is derived from the topic name

**TODO:** Need to verify that this is actually the case
**TODO:** Fill in explicit details here

#### Participant Consistency (YES?)

> At any point when a message is accepted by an honest party, all honest parties are guaranteed to have the same view of the participant list

- **TODO:** Need details here

#### Destination Validation (YES?)

> When a message is accepted by an honest party, they can verify that they were included in the set of intended recipients for the message.

- Users are aware of the topic that a message was sent to, and that they have the ability to decrypt it.

#### Forward Secrecy (PARTIAL)

> Compromising all key material does not enable decryption of previously encrypted data

- After the first back and forth between two contacts with PFS enabled, yes.

#### Backward Secrecy (YES)

> Compromising all key material does not enable decryption of succeeding encrypted data

- PFS requires both backward and forward secrecy

[Andrea: This is not true, (Perfect) Forward Secrecy does not imply backward secrecy (which is also called post-compromise security, as signal calls it, or future secrecy, it's not well defined). Technically this is a NO, double ratchet offers good backward secrecy, but not perfect. Effectively if all the key material is compromised, any future message received will also be compromised (due to the hash ratchet), until a DH ratchet step is completed (i.e. the compromised party generates a new random key and ratchets)]

#### Anonymity Preserving (PARTIAL)

> Any anonymity features provided by the underlying transport privacy architecture are not undermined (e.g., if the transport privacy system provides anonymity, the conversation security level does not de-anonymize users by linking key identifiers).

- by default, yes
- the ENS naming system attaches an identifier to a given public key

#### Speaker Consistency (PARTIAL)

> All participants agree on the sequence of messages sent by each participant. A protocol might perform consistency checks on blocks of messages during the protocol, or after every message is sent.

- We use Lamport timestamps for ordering of events.
- In addition to this, we use local timestamps to attempt a more intuitive ordering. [Andrea: currently this was introduced as a regression during performance optimization and might result in out-of-order messages if sent across day boundaries, so I consider it a bug and not part of the specs (it does not make the order more intuitive, quite the opposite as it might result in causally related messages being out-of-order, but helps dividing the messages in days)]
- Fundamentally, there's no single source of truth, nor consensus process for global ordering [Andrea: Global ordering does not need a consensus process i.e. if you order messages alphabetically, and you break ties consistently, you have global ordering, as all the participants will see the same ordering (as opposed to say order by the time the message was received locally), of course is not useful, you want to have causal + global to be meaningful]

TODO: Understand how this is different from Global Transcript
[Andrea: This is basically Global transcript for a single participant, we offer global transcript]

#### Causality Preserving (PARTIAL)

> Implementations can avoid displaying a message before messages that causally precede it

- Not yet, but in pipeline (data sync layer)

[Andrea: Messages are already causally ordered, we don't display messages that are causally related out-of-order, that's already granted by lamport timestamps]

TODO: Verify if this can be done already by looking at Lamport clock difference

#### Global Transcript (PARTIAL)

> All participants see all messages in the same order

- See directly above

[Andrea: messages are globally (total) ordered, so all participants see the same ordering]

#### Message Unlinkability (NO)

> If a judge is convinced that a participant authored one message in the conversation, this does not provide evidence that they authored other messages

- Currently, the Status software signs every message sent with the user's public key, which makes unlinkability impossible.
- This is not necessary though, and an option to not sign could be built in.
- Side note: the moot account allows for this but is a function of the anonymity set that uses it. The more people that use this account, the stronger the unlinkability.

#### Message Repudiation (NO)

> Given a conversation transcript and all cryptographic keys, there is no evidence that a given message was authored by any particular user

- All messages are digitally signed by their sender.
- The underlying transport, Whisper/Waku, does allow for unsigned messages, but we don't use it.

#### Participant Repudiation (NO)

> Given a conversation transcript and all cryptographic key material for all but one accused (honest) participant, there is no evidence that the honest participant was in a conversation with any of the other participants.

### --- Group related features

#### Computational Equality (YES)

> All chat participants share an equal computational load

- Once a message is sent, all participants in a group chat perform the same steps to retrieve and decrypt it.
- If proof of work is actually used at the Whisper layer (basically turned off in Status) then the sender would have to do additional computational work to send messages.

#### Trust Equality (PARTIAL)

> No participant is more trusted or takes on more responsibility than any other

- 1:1 chats and public chats are equal
- group chats have admins (on purpose)
- Private group chats have Administrators and Members. Upon construction, the creator is made an admin. These roles have the following privileges:
  - Admins:
    - Add group members
    - Promote group members to admin
    - Change group name
  - Members:
    - Accept invitation to group
    - Leave group
  - Non-Members:
    - Invited by admins show up as "invited" in group; this leaks contact information
    - Invited people don't opt in to being invited

TODO: Group chat dynamics should have a documented state diagram
TODO: create issues for identity leak of invited members as well as current members of a group showing up who have not accepted yet [Andrea: that's an interesting point, didn't think of that. Currently we have this behavior for 2 reasons, backward compatibility with previous releases, which had no concept of joining, and also because we rely on other peers to propagate group info, so we don't have a single-message point of failure (the invitation), the first can be addressed easily, the second is trickier, without giving up the propagation mechanism (if we choose to give this up, then it's trivial)]

#### Subgroup Messaging (NO)

> Messages can be sent to a subset of participants without forming a new conversation

- This would require a new topic and either a new public chat or a new group chat

[Andrea: This is a YES, as messages are pairwise encrypted, and client-side fanout, so anyone could potentially send a message only to a subset of the group]

#### Contractible Membership (PARTIAL)

> After the conversation begins, participants can leave without restarting the protocol

- For 1:1, there is no way to ignore or block a user from sending you a message. This is currently in the pipeline.
- For public chats, yes. A member simply stops subscribing to a specific topic and will no longer receive messages.
- For group chats: this assumes pairwise encryption OR the key is renegotiated
  - This currently only works on the identity level, and not the device level. A ghost device will have access to anything other devices have.

[Andrea: For group chats, that's possible as using pairwise encryption, also with group chats (which use device-to-device encryption), ghost devices is a bit more complicated, in general, they don't have access to the messages you send, i.e. If I send a message from device A1 to the group chat and there is a ghost device A2, it will not be able to decrypt the content, but will see that a message has been sent (as only paired devices are kept in sync, and those are explicitly approved by the user). Messages that you receive are different, so a ghost device (A2) will potentially be able to decrypt the message, but A1 can detect the ghost device (in most cases, it's complicated :), the pfs docs describe multi-device support), for one-to-one ghost devices are undetectable]

#### Expandable Membership (PARTIAL)

> After the conversation begins, participants can join without restarting the protocol.

- 1:1: no, only 1:1
- private group: yes, since it is pair-wise, each person in the group just creates a pair with the new member
- public: yes, as members of a public chat are only subscribing to a topic and receiving anyone sending messages to it.

### --- Usability and Adoption

#### Out-of-Order Resilient (PARTIAL)

> If a message is delayed in transit, but eventually arrives, its contents are accessible upon arrival

- Due to asynchronous forward secrecy and no additional services, private keys might be rotated

[Andrea: That's correct, in some cases if the message is delayed for too long, or really out-of-order, the specific message key might have been deleted, as we only keep the last 3000 message keys]
[Igor: TTL of a Whisper message can expire, so any node-in-transit will drop it. Also, I believe we ignore messages with skewed timestamps]

#### Dropped Message Resilient (PARTIAL)

> Messages can be decrypted without receipt of all previous messages. This is desirable for asynchronous and unreliable network services

- Public chats: yes, users are able to decrypt any message received at any time.
- 1-to-1/group chat also, this is a YES in my opinion

#### Asynchronous (PARTIAL)

> Messages can be sent securely to disconnected recipients and received upon their next connection

- The semantics around message reliability are currently poor
  * [Igor: messages are stored on mailservers for way longer than TTL (30 days), but that requires Status infrastructure]
- There's a TTL in Whisper and the mailserver can deliver messages after the fact

TODO: this requires more detail

#### Multi-Device Support (YES)

> A user can participate in the conversation using multiple devices at once. Each device must be able to send and receive messages. Ideally, all devices have identical views of the conversation. The devices might use a synchronized long-term key or distinct keys.

- Yes
- There is currently work being done to improve the syncing process between a user's devices.

#### No Additional Service (NO)

> The protocol does not require any infrastructure other than the protocol participants. Specifically, the protocol must not require additional servers for relaying messages or storing any kind of key material.

- The protocol currently requires Whisper/Waku relay servers and mailservers.
- The larger the number of Whisper/Waku relay servers, the better the transport security, but there might be potential scaling problems.
- Mailservers provide asynchronicity so users can retrieve messages after coming back from an offline period.

-->

## Session management

A node identifies a peer by two pieces of data:

1) An `installation-id`, which is generated upon creating a new account in the `Status` application
2) Their identity Whisper/Waku key

### Initialization

A node initializes a new session once a successful X3DH exchange has taken place. Subsequent messages will use the established session until re-keying is necessary.

### Concurrent sessions

If a node creates two sessions concurrently between two peers, the one whose symmetric key comes first in byte order SHOULD be used; this marks the other as expired.

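The tie-break rule can be sketched as follows; this is an illustrative helper, not the actual implementation, and the function name is hypothetical:

```go
package main

import (
	"bytes"
	"fmt"
)

// survivingSession applies the rule above: of two concurrently created
// sessions, keep the one whose symmetric key sorts first in byte order;
// the other session is treated as expired.
func survivingSession(keyA, keyB []byte) []byte {
	if bytes.Compare(keyA, keyB) <= 0 {
		return keyA
	}
	return keyB
}

func main() {
	a := []byte{0x01, 0xff}
	b := []byte{0x02, 0x00}
	fmt.Printf("%x\n", survivingSession(a, b)) // 01ff
}
```

Because both peers compare the same two keys, they deterministically converge on the same surviving session without any extra round trip.
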
### Re-keying

On receiving a bundle from a given peer with a higher version, the old bundle SHOULD be marked as expired and a new session SHOULD be established on the next message sent.

### Multi-device support

Multi-device support is quite challenging, as there is no central place
where information is stored on which and how many devices (identified by their respective `installation-id`) belong to a whisper-identity / waku-identity.

Furthermore, account recovery always needs to be taken into consideration,
where a user wipes the whole device clean and the node loses all information about any previous sessions.

Taking these considerations into account, the network propagates multi-device information using X3DH bundles,
which contain information about paired devices as well as about the sending device.

This means that every time a new device is paired, the bundle needs to be updated and propagated with the new information;
the user has the responsibility to make sure the pairing is successful.

The method is loosely based on [Sesame](https://signal.org/docs/specifications/sesame/).

### Pairing

When a user adds a new account in the `Status` application, a new `installation-id` will be generated.
The device should be paired as soon as possible if other devices are present.
Once paired, the contacts will be notified of the new device and it will be included in further communications.

If a bundle received from the same `IK` carries a different `installation-id`,
the device will be shown to the user and will have to be manually approved, up to a maximum of 3 devices.
Once that is done, any message sent by one device will also be sent to any other enabled device.

Once a user enables a new device, a new bundle will be generated which includes pairing information.

The bundle will be propagated to contacts through the usual channels.

Removal of paired devices is a manual step that needs to be applied on each device,
and consists simply of disabling the device, at which point pairing information will no longer be propagated.

### Sending messages to a paired group

When sending a message, the peer will send it to the other `installation-id`s that it has seen.
The node caps the number of devices to 3, ordered by last activity.
The node sends messages using pairwise encryption, including to its own devices.

### Account recovery

Account recovery is no different from adding a new device, and it is handled in exactly the same way.

### Partitioned devices

In some cases (i.e. account recovery when no other pairing device is available, or a device that is not paired),
it is possible that a device will receive a message that is not targeted to its own `installation-id`.
In this case an empty message containing bundle information is sent back,
which notifies the receiving end to include this device in any further communication.

## Changelog

### Version 0.3

Released [May 22, 2020](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)

- Added language to include Waku in all relevant places

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [X3DH](https://signal.org/docs/specifications/x3dh/)
- [Double Ratchet](https://signal.org/docs/specifications/doubleratchet/)
- [Protobuf](https://developers.google.com/protocol-buffers/)
- [Whisper](/status/deprecated/whisper-usage.md)
- [Waku](/status/deprecated/waku-usage.md)
- [Account specification](/status/deprecated/account.md)
- [Status implementation](https://github.com/status-im/doubleratchet/)
- [Off-the-Record protocol](https://otr.cypherpunks.ca/Protocol-v3-4.1.1.html)
- [ACCOUNT](/status/deprecated/account.md)
- [Sesame](https://signal.org/docs/specifications/sesame/)
- [May 22, 2020 commit change](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)

---
title: WAKU-MAILSERVER
name: Waku Mailserver
status: deprecated
description: Waku Mailserver is a specification that allows messages to be stored permanently, and the stored messages to be delivered to requesting client nodes even if the messages are no longer available in the network due to the message TTL expiring.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Oskar Thorén <oskar@status.im>
- Samuel Hawksby-Robinson <samuel@status.im>
---

## Abstract

Being mostly offline is an intrinsic property of mobile clients.
They need to save network transfer and battery consumption to avoid spending too much money or constant charging.
The Waku protocol, on the other hand, is an online protocol.
Messages are available in the Waku network only for a short period of time, measured in seconds.

Waku Mailserver is a specification that allows messages to be stored permanently
and allows the stored messages to be delivered to requesting client nodes,
even if the messages are no longer available in the network due to the message TTL expiring.

## `Mailserver`

From the network perspective, a `Mailserver` is just like any other Waku node.
The only difference is that a `Mailserver` has the capability of archiving messages
and delivering them to its peers on-demand.

It is important to note that a `Mailserver` will only handle requests from its direct peers,
and packets exchanged between a `Mailserver` and a peer are p2p messages.

### Archiving messages

A node which wants to provide `Mailserver` functionality MUST store envelopes from
incoming message packets (Waku packet-code `0x01`). The envelopes can be stored in any
format, however they MUST be serialized and deserialized to the Waku envelope format.

A `Mailserver` SHOULD store envelopes for all topics to be generally useful for any peer,
however for specific use cases it MAY store envelopes for a subset of topics.

### Requesting messages

In order to request historic messages, a node MUST send a P2P Request packet (`0x7e`) to a peer providing `Mailserver` functionality.
This packet requires one argument which MUST be a Waku envelope.

In the Waku envelope's payload section, there MUST be RLP-encoded information about the details of the request:

```golang
[ Lower, Upper, Bloom, Limit, Cursor ]
```

* `Lower`: 4-byte wide unsigned integer (UNIX time in seconds; oldest requested envelope's creation time)
* `Upper`: 4-byte wide unsigned integer (UNIX time in seconds; newest requested envelope's creation time)
* `Bloom`: 64-byte wide array of Waku topics encoded in a bloom filter to filter envelopes
* `Limit`: 4-byte wide unsigned integer limiting the number of returned envelopes
* `Cursor`: an array of a cursor returned from the previous request (optional)

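The stated field widths can be illustrated with a small sketch. This is not the real encoding, which is RLP as stated above (a real client would use an RLP library such as go-ethereum's); the struct below is hypothetical and only shows the fixed field sizes using stdlib big-endian packing:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// mailRequest is a hypothetical mirror of [ Lower, Upper, Bloom, Limit, Cursor ].
type mailRequest struct {
	Lower, Upper uint32   // UNIX seconds, 4 bytes each
	Bloom        [64]byte // bloom filter of Waku topics
	Limit        uint32   // max number of envelopes returned
	Cursor       []byte   // opaque, from a previous response (optional)
}

// pack lays the fields out back to back, illustrating the stated widths
// (4 + 4 + 64 + 4 bytes plus the optional cursor). Real requests are
// RLP-encoded instead.
func (r mailRequest) pack() []byte {
	buf := make([]byte, 4+4+64+4)
	binary.BigEndian.PutUint32(buf[0:4], r.Lower)
	binary.BigEndian.PutUint32(buf[4:8], r.Upper)
	copy(buf[8:72], r.Bloom[:])
	binary.BigEndian.PutUint32(buf[72:76], r.Limit)
	return append(buf, r.Cursor...)
}

func main() {
	req := mailRequest{Lower: 1589155200, Upper: 1590192000, Limit: 100}
	fmt.Println(len(req.pack())) // 76
}
```
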
The `Cursor` field SHOULD be filled in if the number of envelopes between `Lower` and `Upper` is greater than `Limit`,
so that the requester can send another request using the obtained `Cursor` value.
What exactly is in the `Cursor` is up to the implementation.
The requester SHOULD NOT use a `Cursor` obtained from one `Mailserver` in a request to another `Mailserver` because the format or the result MAY be different.

The envelope MUST be encrypted with a symmetric key agreed between the requester and the `Mailserver`.

### Receiving historic messages

Historic messages MUST be sent to a peer as a packet with a P2P Message code (`0x7f`) followed by an array of Waku envelopes.

In order to receive historic messages from a `Mailserver`, a node MUST trust the selected `Mailserver`,
that is, allow it to send packets with the P2P Message code. By default, the node discards such packets.

Received envelopes MUST be passed through the Waku envelope pipelines
so that they are picked up by registered filters and passed to subscribers.

For a requester to know that all messages have been sent by a `Mailserver`,
it SHOULD handle the P2P Request Complete code (`0x7d`). This code is followed by the following parameters:

```golang
[ RequestID, LastEnvelopeHash, Cursor ]
```

* `RequestID`: 32-byte wide array with a Keccak-256 hash of the envelope containing the original request
* `LastEnvelopeHash`: 32-byte wide array with a Keccak-256 hash of the last sent envelope for the request
* `Cursor`: an array of a cursor returned from the previous request (optional)

If `Cursor` is not empty, it means that not all messages were sent due to the `Limit` set in the request.
One or more consecutive requests MAY be sent with the `Cursor` field filled in order to receive the rest of the messages.

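The cursor-based paging flow above can be sketched as a loop that keeps feeding the returned cursor back into the next request until it comes back empty. The types and the `fetch` callback below are hypothetical stand-ins for a real P2P Request / Request Complete round trip:

```go
package main

import "fmt"

// page is a hypothetical result of one request: the delivered envelopes
// plus the cursor from the P2P Request Complete packet.
type page struct {
	envelopes []string
	cursor    []byte
}

// fetchAll pages through a Mailserver's results: it re-requests with the
// returned cursor until the cursor comes back empty.
func fetchAll(fetch func(cursor []byte) page) []string {
	var all []string
	var cursor []byte
	for {
		p := fetch(cursor)
		all = append(all, p.envelopes...)
		if len(p.cursor) == 0 {
			return all
		}
		cursor = p.cursor
	}
}

func main() {
	// Simulate two pages: the first carries a non-empty cursor.
	pages := []page{
		{[]string{"e1", "e2"}, []byte("next")},
		{[]string{"e3"}, nil},
	}
	i := 0
	got := fetchAll(func([]byte) page { p := pages[i]; i++; return p })
	fmt.Println(got) // [e1 e2 e3]
}
```
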
## Security considerations

### Confidentiality

The node encrypts all Waku envelopes. A `Mailserver` node cannot inspect their contents.

### Altruistic and centralized operator risk

In order to be useful, a `Mailserver` SHOULD be online most of the time.
That means users either have to be a bit tech-savvy to run their own node,
or rely on someone else to run it for them.

Currently, one of Status's legal entities provides `Mailservers` in an altruistic manner,
but this is suboptimal from a decentralization, continuance and risk point of view.
Coming up with a better system for this is ongoing research.

A Status client SHOULD allow the `Mailserver` selection to be customizable.

### Privacy concerns

In order to use a `Mailserver`, a given node needs to connect to it directly,
i.e. add the `Mailserver` as its peer and mark it as trusted.
This means that the `Mailserver` is able to send direct p2p messages to the node instead of broadcasting them.
Effectively, it will have access to the bloom filter of topics that the user is interested in,
when the user is online, as well as metadata such as the IP address.

### Denial-of-service

Since a `Mailserver` is delivering expired envelopes and has a direct TCP connection with the recipient,
the recipient is vulnerable to DoS attacks from a malicious `Mailserver` node.

## Changelog

### Version 0.1

Released [May 22, 2020](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)

* Created document
* Forked from [whisper-mailserver](/status/deprecated/whisper-mailserver.md)
* Change to keep `Mailserver` term consistent
* Replaced Whisper references with Waku

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

* [May 22, 2020 change commit](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)

---
title: WAKU-USAGE
name: Waku Usage
status: deprecated
description: Status uses Waku to provide privacy-preserving routing and messaging on top of devP2P.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Corey Petty <corey@status.im>
- Oskar Thorén <oskar@status.im>
- Samuel Hawksby-Robinson <samuel@status.im>
---

## Abstract

Status uses [Waku](/waku/standards/legacy/6/waku1.md) to provide privacy-preserving routing
and messaging on top of devP2P.
Waku uses topics to partition its messages,
and these are leveraged for all chat capabilities.
In the case of public chats, the channel name maps directly to its Waku topic.
This allows anyone to listen on a single channel.

Additionally, since anyone can receive Waku envelopes,
Waku relies on the ability to decrypt messages to decide who the correct recipient is.
Status nodes do not rely upon this property,
and implement another secure transport layer on top of Whisper.

## Reason

Provide routing, metadata protection, topic-based multicasting and basic
encryption properties to support asynchronous chat.

## Terminology

* *Waku node*: an Ethereum node with Waku V1 enabled
* *Waku network*: a group of Waku nodes connected together through the internet connection and forming a graph
* *Message*: a decrypted Waku message
* *Offline message*: an archived envelope
* *Envelope*: an encrypted message with metadata like topic and Time-To-Live

## Waku packets

| Packet Name | Code | References |
| -------------------- | ---: | --- |
| Status | 0 | [Status](status), [WAKU-1](/waku/standards/legacy/6/waku1.md#status) |
| Messages | 1 | [WAKU-1](/waku/standards/legacy/6/waku1.md#messages) |
| Batch Ack | 11 | Undocumented. Marked for Deprecation |
| Message Response | 12 | [WAKU-1](/waku/standards/legacy/6/waku1.md#batch-ack-and-message-response) |
| Status Update | 22 | [WAKU-1](/waku/standards/legacy/6/waku1.md#status-update) |
| P2P Request Complete | 125 | [4/WAKU-MAILSERVER](/status/deprecated/waku-mailserver.md) |
| P2P Request | 126 | [4/WAKU-MAILSERVER](/status/deprecated/waku-mailserver.md), [WAKU-1](/waku/standards/legacy/6/waku1.md#p2p-request) |
| P2P Messages | 127 | [4/WAKU-MAILSERVER](/status/deprecated/waku-mailserver.md), [WAKU-1](/waku/standards/legacy/6/waku1.md#p2p-request-complete) |

## Waku node configuration

A Waku node must be properly configured to receive messages from Status clients.

Nodes use Waku's Proof Of Work algorithm to deter denial of service and various spam/flood attacks against the Waku network.
The sender of a message must perform some work, which in this case means processing time.
Because Status' main client is a mobile client, this easily leads to battery draining and poor performance of the app itself.
Hence, all clients MUST use the following Waku node settings:

* proof-of-work requirement not larger than `0.002` for payloads less than 50,000 bytes
|
||||
* proof-of-work requirement not larger than `0.000002` for payloads greater than or equal to 50,000 bytes
|
||||
* time-to-live not lower than `10` (in seconds)

## Status

The handshake is an RLP-encoded packet sent to a newly connected peer.
It MUST start with the Status code (`0x00`), followed by the following items:

```golang
[
  [ pow-requirement-key pow-requirement ]
  [ bloom-filter-key bloom-filter ]
  [ light-node-key light-node ]
  [ confirmations-enabled-key confirmations-enabled ]
  [ rate-limits-key rate-limits ]
  [ topic-interest-key topic-interest ]
]
```

| Option Name | Key | Type | Description | References |
| ----------------------- | ------ | -------- | ----------- | --- |
| `pow-requirement` | `0x00` | `uint64` | minimum PoW accepted by the peer | [WAKU-1#pow-requirement](/waku/standards/legacy/6/waku1.md#pow-requirement-field) |
| `bloom-filter` | `0x01` | `[]byte` | bloom filter of Waku topics accepted by the peer | [WAKU-1#bloom-filter](/waku/standards/legacy/6/waku1.md#bloom-filter-field) |
| `light-node` | `0x02` | `bool` | when true, the peer won't forward envelopes through the Messages packet | [WAKU-1#light-node](/waku/standards/legacy/6/waku1.md#light-node) |
| `confirmations-enabled` | `0x03` | `bool` | when true, the peer will send message confirmations | [WAKU-1#confirmations-enabled-field](/waku/standards/legacy/6/waku1.md#confirmations-enabled-field) |
| `rate-limits` | `0x04` | | See [Rate limiting](/waku/standards/legacy/6/waku1.md#rate-limits-field) | [WAKU-1#rate-limits](/waku/standards/legacy/6/waku1.md#rate-limits-field) |
| `topic-interest` | `0x05` | `[10000][4]byte` | Topic interest is used to share a node's interest in envelopes with specific topics. It does this in a more bandwidth-considerate way, at the expense of some metadata protection. Peers MUST only send envelopes with the specified topics. | [WAKU-1#topic-interest](/waku/standards/legacy/6/waku1.md#topic-interest-field), [the theoretical scaling model](https://github.com/vacp2p/research/tree/dcc71f4779be832d3b5ece9c4e11f1f7ec24aac2/whisper_scalability) |

<!-- TODO Add `light-node` and `confirmations-enabled` links when https://github.com/vacp2p/specs/pull/128 is merged -->

## Rate limiting

In order to provide optional, very basic protection against Denial-of-Service attacks, each node SHOULD define its own rate limits.
The rate limits SHOULD be applied on IPs, peer IDs, and envelope topics.

Each node MAY decide to whitelist, i.e. not rate limit, selected IPs or peer IDs.

If a peer exceeds a node's rate limits, the connection between them MAY be dropped.

Each node SHOULD broadcast its rate limits to its peers using `rate limits` in `status-options` via packet code `0x00` or `0x22`.
The rate limits are RLP-encoded information:

```golang
[ IP limits, PeerID limits, Topic limits ]
```

`IP limits`: 4-byte wide unsigned integer
`PeerID limits`: 4-byte wide unsigned integer
`Topic limits`: 4-byte wide unsigned integer

The rate limits MAY also be sent as an optional parameter in the handshake.

Each node SHOULD respect rate limits advertised by its peers.
The number of packets SHOULD be throttled in order not to exceed a peer's rate limits.
If the limit gets exceeded, the connection MAY be dropped by the peer.

## Keys management

The protocol requires a key (symmetric or asymmetric) for the following actions:

* signing & verifying messages (asymmetric key)
* encrypting & decrypting messages (asymmetric or symmetric key)

As nodes require asymmetric keys and symmetric keys to process incoming messages,
they must be available all the time and are stored in memory.

Keys management for PFS is described in [5/SECURE-TRANSPORT](/status/deprecated/secure-transport.md).

The Status protocol uses a few particular Waku topics to achieve its goals.

### Contact code topic

Nodes use the contact code topic to facilitate the discovery of X3DH bundles so that the first message can be PFS-encrypted.

Each user publishes periodically to this topic.
If user A wants to contact user B, she SHOULD look for their bundle on this contact code topic.

The contact code topic MUST be created following the algorithm below:

```golang
contactCode := "0x" + hexEncode(activePublicKey) + "-contact-code"

var hash []byte = keccak256(contactCode)
var topicLen int = 4

if len(hash) < topicLen {
    topicLen = len(hash)
}

var topic [4]byte
for i := 0; i < topicLen; i++ {
    topic[i] = hash[i]
}
```

### Partitioned topic

Waku is a broadcast-based protocol.
In theory, everyone could communicate using a single topic, but that would be extremely inefficient.
The opposite would be using a unique topic for each conversation; however,
this brings privacy concerns because it would be much easier to detect whether and when two parties have an active conversation.

Nodes use partitioned topics to broadcast private messages efficiently.
By selecting the number of topics, it is possible to balance efficiency and privacy.

Currently, nodes set the number of partitioned topics to `5000`.
They MUST be generated following the algorithm below:

```golang
var partitionsNum *big.Int = big.NewInt(5000)
var partition *big.Int = big.NewInt(0).Mod(publicKey.X, partitionsNum)

partitionTopic := "contact-discovery-" + strconv.FormatInt(partition.Int64(), 10)

var hash []byte = keccak256(partitionTopic)
var topicLen int = 4

if len(hash) < topicLen {
    topicLen = len(hash)
}

var topic [4]byte
for i := 0; i < topicLen; i++ {
    topic[i] = hash[i]
}
```

### Public chats

A public chat MUST use a topic derived from the public chat name following the algorithm below:

```golang
var hash []byte
hash = keccak256(name)

topicLen := 4
if len(hash) < topicLen {
    topicLen = len(hash)
}

var topic [4]byte
for i := 0; i < topicLen; i++ {
    topic[i] = hash[i]
}
```

<!-- NOTE: commented out as it is currently not used. In code for potential future use. - C.P. Oct 8, 2019
### Personal discovery topic

Personal discovery topic is used to ???

A client MUST implement it following the algorithm below:

```golang
personalDiscoveryTopic := "contact-discovery-" + hexEncode(publicKey)

var hash []byte = keccak256(personalDiscoveryTopic)
var topicLen int = 4

if len(hash) < topicLen {
    topicLen = len(hash)
}

var topic [4]byte
for i := 0; i < topicLen; i++ {
    topic[i] = hash[i]
}
```

Each Status Client SHOULD listen to this topic in order to receive ??? -->

### Group chat topic

Group chats do not have a dedicated topic.
All group chat messages (including membership updates) are sent as one-to-one messages to multiple recipients.

### Negotiated topic

When a client sends a one-to-one message to another client, it MUST listen to their negotiated topic.
This is computed by performing a Diffie-Hellman key exchange between the two members
and taking the first four bytes of the Keccak-256 hash of the generated key.

```golang
sharedKey, err := ecies.ImportECDSA(myPrivateKey).GenerateShared(
    ecies.ImportECDSAPublic(theirPublicKey),
    16,
    16,
)

hexEncodedKey := hex.EncodeToString(sharedKey)

var hash []byte = keccak256(hexEncodedKey)
var topicLen int = 4

if len(hash) < topicLen {
    topicLen = len(hash)
}

var topic [4]byte
for i := 0; i < topicLen; i++ {
    topic[i] = hash[i]
}
```

A client SHOULD send to the negotiated topic only if it has received a message from all the devices included in the conversation.

### Flow

To exchange messages with client `B`, a client `A` SHOULD:

* Listen to client `B`'s Contact Code Topic to retrieve their bundle information, including a list of active devices
* Send a message on client `B`'s partitioned topic
* Listen to the Negotiated Topic between `A` & `B`
* Once client `A` receives a message from `B`, the Negotiated Topic SHOULD be used

## Message encryption

Even though the protocol specifies an encryption layer that encrypts messages before passing them to the transport layer,
the Waku protocol requires each Waku message to be encrypted anyway.

The node encrypts public and group messages using symmetric encryption, and creates the key from a channel name string.
The implementation is available in the [`shh_generateSymKeyFromPassword`](https://github.com/ethereum/go-ethereum/wiki/Whisper-v6-RPC-API#shh_generatesymkeyfrompassword) JSON-RPC method of the go-ethereum Whisper implementation.

The node encrypts one-to-one messages using asymmetric encryption.

## Message confirmations

Sending a message is a complex process where many things can go wrong.
Message confirmations tell a node that a message originating from it has been seen by its direct peers.

A node MAY send a message confirmation for any batch of messages received in a Messages packet (`0x01`).

A node sends a message confirmation using the Batch Acknowledge packet (`0x0b`) or the Message Response packet (`0x0c`).

The Batch Acknowledge packet is followed by a keccak256 hash of the envelopes batch data (raw bytes).

The Message Response packet is more complex and is followed by a Versioned Message Response:

```golang
[ Version, Response ]
```

`Version`: a version of the Message Response, equal to `1`
`Response`: `[ Hash, Errors ]` where `Hash` is a keccak256 hash of the envelopes batch data (raw bytes)
for which the confirmation is sent and `Errors` is a list of envelope errors when processing the batch.
A single error contains `[ Hash, Code, Description ]` where `Hash` is a hash of the processed envelope,
`Code` is an error code and `Description` is a descriptive error message.

The supported codes:
`1`: time sync error, which happens when an envelope is too old
or created in the future (the root cause is a lack of time sync between nodes).

The drawback of sending message confirmations is that it increases the noise in the network, because for each sent message
one or more peers broadcast a corresponding confirmation.
To limit that, both the Batch Acknowledge packet (`0x0b`)
and the Message Response packet (`0x0c`) are not broadcast to peers of the peers, i.e. they do not follow epidemic spread.

In the current Status network setup, only `Mailservers` support message confirmations.
A client posting a message to the network can, after receiving a confirmation, be sure that the message was processed by the `Mailserver`.
If, additionally, sending a message is limited to non-`Mailserver` peers,
this also guarantees that the message was broadcast through the network and reached the selected `Mailserver`.

## Waku V1 extensions

### Request historic messages

Sends a request for historic messages to a `Mailserver`.
The `Mailserver` node MUST be a direct peer and MUST be marked as trusted (using `waku_markTrustedPeer`).

The request does not wait for the response.
It merely sends a peer-to-peer message to the `Mailserver`, and it is up to the `Mailserver` to process it and start sending historic messages.

The drawback of this approach is that it is impossible to tell which historic messages are the result of which request.

It is recommended to return messages from newest to oldest.
To move further back in time, use `cursor` and `limit`.

#### wakuext_requestMessages

**Parameters**:

* Object - The message request object:
  * `mailServerPeer` - `String`: `Mailserver`'s enode address.
  * `from` - `Number` (optional): Lower bound of time range as unix timestamp, default is 24 hours back from now.
  * `to` - `Number` (optional): Upper bound of time range as unix timestamp, default is now.
  * `limit` - `Number` (optional): Limit the number of messages sent back, default is no limit.
  * `cursor` - `String` (optional): Used for paginated requests.
  * `topics` - `Array`: hex-encoded message topics.
  * `symKeyID` - `String`: an ID of a symmetric key used to authenticate with the `Mailserver`, derived from the `Mailserver` password.

**Returns**:
`Boolean` - returns `true` if the request was sent.

The above `topics` are then converted into a bloom filter and sent to the `Mailserver`.

<!-- TODO: Clarify actual request with bloom filter to mailserver -->

## Changelog

### Version 0.1

Released [May 22, 2020](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)

* Created document
* Forked from [3-whisper-usage](3-whisper-usage.md)
* Change to keep `Mailserver` term consistent
* Replaced Whisper references with Waku
* Added [Status options](#status) section
* Updated [Waku packets](#waku-packets) section to match Waku
* Added that `Batch Ack` is marked for deprecation
* Changed `shh_generateSymKeyFromPassword` to `waku_generateSymKeyFromPassword`
  * [Exists here](https://github.com/status-im/status-go/blob/2d13ccf5ec3db7e48d7a96a7954be57edb96f12f/waku/api.go#L172-L175)
  * [Exists here](https://github.com/status-im/status-go/blob/2d13ccf5ec3db7e48d7a96a7954be57edb96f12f/eth-node/bridge/geth/public_waku_api.go#L33-L36)
* Changed `shh_markTrustedPeer` to `waku_markTrustedPeer`
  * [Exists here](https://github.com/status-im/status-go/blob/2d13ccf5ec3db7e48d7a96a7954be57edb96f12f/waku/api.go#L100-L108)
* Changed `shhext_requestMessages` to `wakuext_requestMessages`
  * [Exists here](https://github.com/status-im/status-go/blob/2d13ccf5ec3db7e48d7a96a7954be57edb96f12f/services/wakuext/api.go#L76-L139)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

* [Waku](waku)
* [WAKU1](/waku/standards/legacy/6/waku1.md)
* [WAKU-MAILSERVER](/status/deprecated/waku-mailserver.md)
* [The theoretical scaling model](https://github.com/vacp2p/research/tree/dcc71f4779be832d3b5ece9c4e11f1f7ec24aac2/whisper_scalability)
* [SECURE-TRANSPORT](/status/deprecated/secure-transport.md)
* [May 22, 2020 commit](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)
* [`shh_generateSymKeyFromPassword`](https://github.com/ethereum/go-ethereum/wiki/Whisper-v6-RPC-API#shh_generatesymkeyfrompassword)
* [Key Change #1](https://github.com/status-im/status-go/blob/2d13ccf5ec3db7e48d7a96a7954be57edb96f12f/waku/api.go#L172-L175)
* [Key Change #2](https://github.com/status-im/status-go/blob/2d13ccf5ec3db7e48d7a96a7954be57edb96f12f/eth-node/bridge/geth/public_waku_api.go#L33-L36)
* [Key Change #3](https://github.com/status-im/status-go/blob/2d13ccf5ec3db7e48d7a96a7954be57edb96f12f/waku/api.go#L100-L108)
* [Key Change #4](https://github.com/status-im/status-go/blob/2d13ccf5ec3db7e48d7a96a7954be57edb96f12f/services/wakuext/api.go#L76-L139)

---
title: WHISPER-MAILSERVER
name: Whisper mailserver
status: deprecated
description: Whisper Mailserver is a Whisper extension that allows messages to be stored permanently and delivered to clients even after they have expired and are no longer available in the network.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Oskar Thorén <oskar@status.im>
---

## Abstract

Being mostly offline is an intrinsic property of mobile clients.
They need to save network transfer and battery consumption
to avoid spending too much money or constant charging.
The Whisper protocol, on the other hand, is an online protocol.
Messages are available in the Whisper network only for a short period of time, measured in seconds.

Whisper `Mailserver` is a Whisper extension that allows messages to be stored permanently
and delivered to clients even after they have expired and are no longer available in the network.

## `Mailserver`

From the network perspective, a `Mailserver` is just like any other Whisper node.
The only difference is that it has the capability of archiving messages and delivering them to its peers on-demand.

It is important to note that a `Mailserver` will only handle requests from its direct peers,
and that the packets exchanged between a `Mailserver` and a peer are p2p messages.

### Archiving messages

A node which wants to provide `Mailserver` functionality MUST store envelopes
from incoming message packets (Whisper packet-code `0x01`).
The envelopes can be stored in any format;
however, they MUST be serializable and deserializable to the Whisper envelope format.

A `Mailserver` SHOULD store envelopes for all topics to be generally useful for any peer;
however, for specific use cases it MAY store envelopes for a subset of topics.

### Requesting messages

In order to request historic messages, a node MUST send a P2P Request packet (`0x7e`) to a peer providing `Mailserver` functionality.
This packet requires one argument which MUST be a Whisper envelope.

In the Whisper envelope's payload section, there MUST be RLP-encoded information about the details of the request:

```golang
[ Lower, Upper, Bloom, Limit, Cursor ]
```

`Lower`: 4-byte wide unsigned integer (UNIX time in seconds; oldest requested envelope's creation time)
`Upper`: 4-byte wide unsigned integer (UNIX time in seconds; newest requested envelope's creation time)
`Bloom`: 64-byte wide array of Whisper topics encoded in a bloom filter to filter envelopes
`Limit`: 4-byte wide unsigned integer limiting the number of returned envelopes
`Cursor`: an array of a cursor returned from the previous request (optional)

The `Cursor` field SHOULD be filled in
if the number of envelopes between `Lower` and `Upper` is greater than `Limit`,
so that the requester can send another request using the obtained `Cursor` value.
What exactly is in the `Cursor` is up to the implementation.
The requester SHOULD NOT use a `Cursor` obtained from one `Mailserver` in a request to another `Mailserver`,
because the format or the result MAY be different.

The envelope MUST be encrypted with a symmetric key agreed between the requester and the `Mailserver`.

### Receiving historic messages

Historic messages MUST be sent to a peer as a packet with a P2P Message code (`0x7f`)
followed by an array of Whisper envelopes.
This is incompatible with the original Whisper spec (EIP-627), which allows only a single envelope;
however, an array of envelopes is much more performant.
In order to stay compatible with EIP-627, a peer receiving historic messages MUST handle both cases.

In order to receive historic messages from a `Mailserver`, a node MUST mark the selected `Mailserver` as trusted,
i.e. allowed to send packets with the P2P Message code. By default, the node discards such packets.

Received envelopes MUST be passed through the Whisper envelope pipelines
so that they are picked up by registered filters and passed to subscribers.

In order for a requester to know that all messages have been sent by the `Mailserver`,
it SHOULD handle the P2P Request Complete code (`0x7d`). This code is followed by the following parameters:

```golang
[ RequestID, LastEnvelopeHash, Cursor ]
```

`RequestID`: 32-byte wide array with a Keccak-256 hash of the envelope containing the original request
`LastEnvelopeHash`: 32-byte wide array with a Keccak-256 hash of the last sent envelope for the request
`Cursor`: an array of a cursor returned from the previous request (optional)

If `Cursor` is not empty, it means that not all messages were sent due to the `Limit` set in the request.
One or more consecutive requests MAY be sent with the `Cursor` field filled in to receive the rest of the messages.

## Security considerations

### Confidentiality

The node encrypts all Whisper envelopes. A `Mailserver` node cannot inspect their contents.

### Altruistic and centralized operator risk

In order to be useful, a `Mailserver` SHOULD be online most of the time. That means
users either have to be a bit tech-savvy to run their own node, or rely on someone
else to run it for them.

Currently, one of Status's legal entities provides `Mailservers` in an altruistic manner, but this is
suboptimal from a decentralization, continuance and risk point of view. Coming
up with a better system for this is ongoing research.

A Status client SHOULD allow the `Mailserver` selection to be customizable.

### Privacy concerns

In order to use a `Mailserver`, a given node needs to connect to it directly,
i.e. add the `Mailserver` as its peer and mark it as trusted.
This means that the `Mailserver` is able to send direct p2p messages to the node instead of broadcasting them.
Effectively, it will have access to the bloom filter of the topics that the user is interested in,
to when the user is online, as well as to metadata such as the IP address.

### Denial-of-service

Since a `Mailserver` is delivering expired envelopes and has a direct TCP connection with the recipient,
the recipient is vulnerable to DoS attacks from a malicious `Mailserver` node.

## Changelog

### Version 0.3

Released [May 22, 2020](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)

- Change to keep `Mailserver` term consistent

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [Whisper](https://eips.ethereum.org/EIPS/eip-627)
- [EIP-627](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-627.md)
- [SECURE-TRANSPORT](/status/deprecated/secure-transport.md)
- [`shh_generateSymKeyFromPassword`](https://github.com/ethereum/go-ethereum/wiki/Whisper-v6-RPC-API#shh_generatesymkeyfrompassword)
- [Whisper v6](https://eips.ethereum.org/EIPS/eip-627)
- [Waku V0](/waku/deprecated/5/waku0.md)
- [Waku V1](/waku/standards/legacy/6/waku1.md)
- [May 22, 2020 change commit](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)

---
title: WHISPER-USAGE
name: Whisper Usage
status: deprecated
description: Status uses Whisper to provide privacy-preserving routing and messaging on top of devP2P.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Andrea Piana <andreap@status.im>
- Corey Petty <corey@status.im>
- Oskar Thorén <oskar@status.im>
---

## Abstract

Status uses [Whisper](https://eips.ethereum.org/EIPS/eip-627) to provide
privacy-preserving routing and messaging on top of devP2P.
Whisper uses topics to partition its messages,
and these are leveraged for all chat capabilities.
In the case of public chats, the channel name maps directly to its Whisper topic.
This allows anyone to listen on a single channel.

Additionally, since anyone can receive Whisper envelopes,
it relies on the ability to decrypt messages to decide who is the correct recipient.
Status nodes do not rely upon this property,
and implement another secure transport layer on top of Whisper.

Finally, using an extension of Whisper provides the ability to do offline messaging.

## Reason

Provide routing, metadata protection, topic-based multicasting and basic
encryption properties to support asynchronous chat.

## Terminology

* *Whisper node*: an Ethereum node with Whisper V6 enabled (in the case of go-ethereum, the `--shh` option)
* *Whisper network*: a group of Whisper nodes connected together over the internet, forming a graph
* *Message*: a decrypted Whisper message
* *Offline message*: an archived envelope
* *Envelope*: an encrypted message with metadata like topic and Time-To-Live

## Whisper packets

| Packet Name | Code | EIP-627 | References |
| --- | --: | --- | --- |
| Status | 0 | ✔ | [Handshake](#handshake) |
| Messages | 1 | ✔ | [EIP-627](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-627.md) |
| PoW Requirement | 2 | ✔ | [EIP-627](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-627.md) |
| Bloom Filter | 3 | ✔ | [EIP-627](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-627.md) |
| Batch Ack | 11 | 𝘅 | Undocumented |
| Message Response | 12 | 𝘅 | Undocumented |
| P2P Sync Request | 123 | 𝘅 | Undocumented |
| P2P Sync Response | 124 | 𝘅 | Undocumented |
| P2P Request Complete | 125 | 𝘅 | [4/WHISPER-MAILSERVER](/status/deprecated/whisper-mailserver.md) |
| P2P Request | 126 | ✔ | [4/WHISPER-MAILSERVER](/status/deprecated/whisper-mailserver.md) |
| P2P Messages | 127 | ✔/𝘅 (EIP-627 supports only a single envelope in a packet) | [4/WHISPER-MAILSERVER](/status/deprecated/whisper-mailserver.md) |

## Whisper node configuration

A Whisper node must be properly configured to receive messages from Status clients.

Nodes use Whisper's Proof Of Work algorithm to deter denial of service
and various spam/flood attacks against the Whisper network.
The sender of a message must perform some work, which in this case means processing time.
Because Status' main client is a mobile client, this easily leads to battery drain and poor performance of the app itself.
Hence, all clients MUST use the following Whisper node settings:

* proof-of-work requirement not larger than `0.002`
* time-to-live not lower than `10` (in seconds)

## Handshake

The handshake is an RLP-encoded packet sent to a newly connected peer.
It MUST start with the Status code (`0x00`), followed by the following items:

```golang
[ protocolVersion, PoW, bloom, isLightNode, confirmationsEnabled, rateLimits ]
```

`protocolVersion`: version of the Whisper protocol
`PoW`: minimum PoW accepted by the peer
`bloom`: bloom filter of Whisper topics accepted by the peer
`isLightNode`: when true, the peer won't forward messages
`confirmationsEnabled`: when true, the peer will send message confirmations
`rateLimits`: `[ RateLimitIP, RateLimitPeerID, RateLimitTopic ]` where each value is an integer with the number of accepted packets per second per IP, peer ID, and topic respectively

`bloom`, `isLightNode`, `confirmationsEnabled`, and `rateLimits` are all optional arguments in the handshake.
However, if an optional field is specified, all optional fields preceding it MUST also be specified in order to be unambiguous.

## Rate limiting

In order to provide optional, very basic protection against Denial-of-Service attacks, each node SHOULD define its own rate limits.
The rate limits SHOULD be applied on IPs, peer IDs, and envelope topics.

Each node MAY decide to whitelist, i.e. not rate limit, selected IPs or peer IDs.

If a peer exceeds a node's rate limits, the connection between them MAY be dropped.

Each node SHOULD broadcast its rate limits to its peers using the rate limits packet code (`0x14`).
The rate limits are RLP-encoded information:

```golang
[ IP limits, PeerID limits, Topic limits ]
```

`IP limits`: 4-byte wide unsigned integer
`PeerID limits`: 4-byte wide unsigned integer
`Topic limits`: 4-byte wide unsigned integer

The rate limits MAY also be sent as an optional parameter in the handshake.

Each node SHOULD respect rate limits advertised by its peers.
The number of packets SHOULD be throttled in order not to exceed a peer's rate limits.
If the limit gets exceeded, the connection MAY be dropped by the peer.

## Keys management

The protocol requires a key (symmetric or asymmetric) for the following actions:

* signing & verifying messages (asymmetric key)
* encrypting & decrypting messages (asymmetric or symmetric key)

As nodes require asymmetric keys and symmetric keys to process incoming messages,
they must be available all the time and are stored in memory.

Keys management for PFS is described in [5/SECURE-TRANSPORT](/status/deprecated/secure-transport.md).

The Status protocol uses a few particular Whisper topics to achieve its goals.

### Contact code topic

Nodes use the contact code topic to facilitate the discovery of X3DH bundles so that the first message can be PFS-encrypted.

Each user publishes periodically to this topic.
If user A wants to contact user B, she SHOULD look for their bundle on this contact code topic.

The contact code topic MUST be created following the algorithm below:

```golang
contactCode := "0x" + hexEncode(activePublicKey) + "-contact-code"

var hash []byte = keccak256(contactCode)
var topicLen int = 4

if len(hash) < topicLen {
    topicLen = len(hash)
}

var topic [4]byte
for i := 0; i < topicLen; i++ {
    topic[i] = hash[i]
}
```

### Partitioned topic

Whisper is a broadcast-based protocol.
In theory, everyone could communicate using a single topic, but that would be extremely inefficient.
The opposite extreme would be using a unique topic for each conversation;
however, this raises privacy concerns because it would be much easier to detect whether
and when two parties have an active conversation.

Nodes use partitioned topics to broadcast private messages efficiently.
By selecting the number of topics, it is possible to balance efficiency and privacy.

Currently, nodes set the number of partitioned topics to `5000`.
They MUST be generated following the algorithm below:

```golang
var partitionsNum *big.Int = big.NewInt(5000)
var partition *big.Int = big.NewInt(0).Mod(publicKey.X, partitionsNum)

partitionTopic := "contact-discovery-" + strconv.FormatInt(partition.Int64(), 10)

var hash []byte = keccak256(partitionTopic)
var topicLen int = 4

if len(hash) < topicLen {
    topicLen = len(hash)
}

var topic [4]byte
for i := 0; i < topicLen; i++ {
    topic[i] = hash[i]
}
```
### Public chats

A public chat MUST use a topic derived from the public chat name following the algorithm below:

```golang
var hash []byte
hash = keccak256(name)

var topicLen int = 4
if len(hash) < topicLen {
    topicLen = len(hash)
}

var topic [4]byte
for i := 0; i < topicLen; i++ {
    topic[i] = hash[i]
}
```
<!-- NOTE: commented out as it is currently not used. In code for potential future use. - C.P. Oct 8, 2019
### Personal discovery topic

Personal discovery topic is used to ???

A client MUST implement it following the algorithm below:

```golang
personalDiscoveryTopic := "contact-discovery-" + hexEncode(publicKey)

var hash []byte = keccak256(personalDiscoveryTopic)
var topicLen int = 4

if len(hash) < topicLen {
    topicLen = len(hash)
}

var topic [4]byte
for i := 0; i < topicLen; i++ {
    topic[i] = hash[i]
}
```

Each Status Client SHOULD listen to this topic in order to receive ??? -->

<!-- NOTE: commented out as it is no longer valid as of V1. - C.P. Oct 8, 2019
### Generic discovery topic

Generic discovery topic is a legacy topic used to handle all one-to-one chats. The newer implementation should rely on [Partitioned Topic](#partitioned-topic) and [Personal discovery topic](#personal-discovery-topic).

Generic discovery topic MUST be created following [Public chats](#public-chats) topic algorithm using string `contact-discovery` as a name. -->
### Group chat topic

Group chats do not have a dedicated topic.
All group chat messages (including membership updates) are sent as one-to-one messages to multiple recipients.
### Negotiated topic

When a client sends a one-to-one message to another client, it MUST listen to their negotiated topic.
This topic is computed by performing a Diffie-Hellman key exchange between the two members
and taking the first four bytes of the `SHA3-256` of the generated key.

```golang
sharedKey, err := ecies.ImportECDSA(myPrivateKey).GenerateShared(
    ecies.ImportECDSAPublic(theirPublicKey),
    16,
    16,
)

hexEncodedKey := hex.EncodeToString(sharedKey)

var hash []byte = keccak256(hexEncodedKey)
var topicLen int = 4

if len(hash) < topicLen {
    topicLen = len(hash)
}

var topic [4]byte
for i := 0; i < topicLen; i++ {
    topic[i] = hash[i]
}
```

A client SHOULD send to the negotiated topic only if it has received a message from all the devices included in the conversation.
### Flow

To exchange messages with client `B`, client `A` SHOULD:

* Listen to client `B`'s Contact Code Topic to retrieve their bundle information, including a list of active devices
* Send a message on client `B`'s partitioned topic
* Listen to the Negotiated Topic between `A` & `B`
* Once client `A` receives a message from `B`, the Negotiated Topic SHOULD be used
## Message encryption

Even though the protocol specifies an encryption layer that encrypts messages before passing them to the transport layer,
the Whisper protocol requires each Whisper message to be encrypted anyway.

The node encrypts public and group messages using symmetric encryption, and creates the key from a channel name string.
The implementation is available in the [`shh_generateSymKeyFromPassword`](https://github.com/ethereum/go-ethereum/wiki/Whisper-v6-RPC-API#shh_generatesymkeyfrompassword) JSON-RPC method of the go-ethereum Whisper implementation.

The node encrypts one-to-one messages using asymmetric encryption.
## Message confirmations

Sending a message is a complex process where many things can go wrong.
Message confirmations tell a node that a message originating from it has been seen by its direct peers.

A node MAY send a message confirmation for any batch of messages received in a packet with Messages Code (`0x01`).

A node sends a message confirmation using the Batch Acknowledge packet (`0x0b`) or the Message Response packet (`0x0c`).

The Batch Acknowledge packet is followed by a keccak256 hash of the envelopes batch data (raw bytes).

The Message Response packet is more complex and is followed by a Versioned Message Response:

```golang
[ Version, Response ]
```

`Version`: a version of the Message Response, equal to `1`,
`Response`: `[ Hash, Errors ]` where `Hash` is a keccak256 hash of the envelopes batch data (raw bytes)
for which the confirmation is sent and `Errors` is a list of envelope errors encountered when processing the batch.
A single error contains `[ Hash, Code, Description ]` where `Hash` is a hash of the processed envelope,
`Code` is an error code and `Description` is a descriptive error message.

The supported codes:
`1`: time sync error, which happens when an envelope is too old
or created in the future (the root cause is a lack of time sync between nodes).
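A sketch of these structures in Go; the field names are assumptions, since only the wire layout `[ Version, [ Hash, Errors ] ]` is specified, and a real implementation would RLP-encode the result:

```golang
package main

import "fmt"

// EnvelopeError mirrors the [ Hash, Code, Description ] error tuple.
type EnvelopeError struct {
	Hash        []byte
	Code        uint
	Description string
}

// MessageResponse mirrors the [ Hash, Errors ] response tuple.
type MessageResponse struct {
	Hash   []byte
	Errors []EnvelopeError
}

// VersionedMessageResponse mirrors the outer [ Version, Response ] tuple.
type VersionedMessageResponse struct {
	Version  uint // always 1
	Response MessageResponse
}

// ErrTimeSync is the only specified error code: envelope too old or from the future.
const ErrTimeSync = 1

func main() {
	r := VersionedMessageResponse{
		Version: 1,
		Response: MessageResponse{
			Hash: []byte{0xaa}, // hash of the envelopes batch data
			Errors: []EnvelopeError{
				{Hash: []byte{0xbb}, Code: ErrTimeSync, Description: "time sync error"},
			},
		},
	}
	fmt.Println(r.Version, r.Response.Errors[0].Code)
}
```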
The drawback of sending message confirmations is that it increases the noise in the network because, for each sent message,
one or more peers broadcast a corresponding confirmation.
To limit that, both the Batch Acknowledge packet (`0x0b`) and the Message Response packet (`0x0c`) are not broadcast to peers of the peers,
i.e. they do not follow epidemic spread.

In the current Status network setup, only `Mailservers` support message confirmations.
A client posting a message to the network can, after receiving a confirmation, be sure that the message got processed by the `Mailserver`.
If, additionally, sending a message is limited to non-`Mailserver` peers,
it also guarantees that the message got broadcast through the network and reached the selected `Mailserver`.
## Whisper / Waku bridging

In order to maintain compatibility between Whisper and Waku nodes,
a Status network that implements both Whisper and Waku messaging protocols
MUST have at least one node that is capable of discovering peers and implements the
[Whisper v6](https://eips.ethereum.org/EIPS/eip-627),
[Waku V0](/waku/deprecated/5/waku0.md) and
[Waku V1](/waku/standards/legacy/6/waku1.md) specifications.

Additionally, any Status network that implements both Whisper and Waku messaging protocols
MUST implement bridging capabilities as detailed in
[Waku V1#Bridging](/waku/standards/legacy/6/waku1.md#waku-whisper-bridging).
## Whisper V6 extensions

### Request historic messages

Sends a request for historic messages to a `Mailserver`.
The `Mailserver` node MUST be a direct peer and MUST be marked as trusted (using `shh_markTrustedPeer`).

The request does not wait for the response.
It merely sends a peer-to-peer message to the `Mailserver`,
and it's up to the `Mailserver` to process it and start sending historic messages back.

The drawback of this approach is that it is impossible to tell
which historic messages are the result of which request.

It's recommended to return messages from newest to oldest.
To move further back in time, use `cursor` and `limit`.
#### shhext_requestMessages

**Parameters**:

1. Object - The message request object:
    * `mailServerPeer` - `String`: the `Mailserver`'s enode address.
    * `from` - `Number` (optional): lower bound of the time range as a Unix timestamp; the default is 24 hours back from now.
    * `to` - `Number` (optional): upper bound of the time range as a Unix timestamp; the default is now.
    * `limit` - `Number` (optional): limits the number of messages sent back; the default is no limit.
    * `cursor` - `String` (optional): used for paginated requests.
    * `topics` - `Array`: hex-encoded message topics.
    * `symKeyID` - `String`: an ID of a symmetric key used to authenticate with the `Mailserver`, derived from the Mailserver password.

**Returns**:
`Boolean` - returns `true` if the request was sent.

The above `topics` are converted into a bloom filter and then sent to the `Mailserver`.
<!-- TODO: Clarify actual request with bloom filter to mailserver -->
## Changelog

### Version 0.3

Released [May 22, 2020](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)

* Added Whisper / Waku Bridging section
* Change to keep `Mailserver` term consistent

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

* [Whisper](https://eips.ethereum.org/EIPS/eip-627)
* [WHISPER-MAILSERVER](/status/deprecated/whisper-mailserver.md)
* [SECURE-TRANSPORT](/status/deprecated/secure-transport.md)
* [`shh_generateSymKeyFromPassword`](https://github.com/ethereum/go-ethereum/wiki/Whisper-v6-RPC-API#shh_generatesymkeyfrompassword)
* [Whisper v6](https://eips.ethereum.org/EIPS/eip-627)
* [Waku V0](/waku/deprecated/5/waku0.md)
* [Waku V1](/waku/standards/legacy/6/waku1.md)
* [May 22, 2020 change commit](https://github.com/status-im/specs/commit/664dd1c9df6ad409e4c007fefc8c8945b8d324e8)
---
title: STATUS-PROTOCOLS
name: Status Protocol Stack
status: raw
category: Standards Track
description: Specifies the Status application protocol stack.
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Jimmy Debe <jimmy@status.im>
- Aaryamann Challani <p1ge0nh8er@proton.me>
---

## Abstract

This specification describes the Status Application protocol stack.
It focuses on elements and features in the protocol stack for all application-level functions:

- functional scope (also _broadcast audience_)
- content topic
- ephemerality
- end-to-end reliability layer
- encryption layer
- transport layer (Waku)

It also introduces strategies to restrict resource usage, distribute large messages, etc.
Application-level functions are out of scope and specified separately. See:

- [55/STATUS-1TO1-CHAT](../55/1to1-chat.md)
- [56/STATUS-COMMUNITIES](../56/communities.md)
## Status protocol stack

The keywords “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and
“OPTIONAL” in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).

See the simplified diagram of the Status application protocol stack:

| |
|---|
| Status application layer |
| End-to-end reliability layer |
| Encryption layer |
| Transport layer (Waku) |
## Status application layer

Application-level functions are defined in the _application_ layer.
Status currently defines functionality to support three main application features:

- Status Communities, as specified in [56/STATUS-COMMUNITIES](../56/communities.md)
- Status 1:1 Chat, as specified in [55/STATUS-1TO1-CHAT](../55/1to1-chat.md)
- Status Private Group Chat, as specified in a subsection of [55/STATUS-1TO1-CHAT](../55/1to1-chat.md#negotiation-of-a-11-chat-amongst-multiple-participants-group-chat)

<!-- TODO: list functions not related to main app features, such as user sync, backup, push notifications, etc. -->

Each application-level function, regardless of which feature set it supports, has the following properties:

1. Functional scope
2. Content topic
3. Ephemerality
### Functional Scope

Each Status app-level message MUST define a functional scope.
The functional scope MUST define the _minimum_ scope of the audience that should _participate_ in the app function the message is related to.
In other words, it determines the minimum subset of Status app participants
that should have access to messages related to that function.

Note that the functional scope is distinct from the number of participants that are _addressed_ by a specific message.
For example, a participant will address a 1:1 chat to only one other participant.
However, since all users of the Status app MUST be able to participate in 1:1 chats,
the functional scope of messages enabling 1:1 chats MUST be a global scope.
Similarly, since private group chats can be set up between any subset of Status app users,
the functional scope for messages related to private group chats MUST be global.
Along the same principle, messages that originate within communities are of global interest
to all users who have an interest in the Status Communities feature.
Such messages MUST have a global functional scope,
so that they can be accessed by any app users interested in communities.
A different group of messages is addressed only to the participant that generated those messages itself.
These _self-addressed_ messages MUST have a local functional scope.

If we further make a distinction between "control" and "content" messages,
we can distinguish five distinct functional scopes.

All Status messages MUST have one of these functional scopes:
#### Global general scope

1. _Global control_: messages enabling the basic functioning of the app to control general features that all app users should be able to participate in. Examples include Contact Requests, global Status Updates, Group Chat Invites, etc.
2. _Global content_: messages carrying user-generated content for global functions. Examples include 1:1 chat messages, images shared over private group chats, etc.

#### Global community scope

1. _Global community control_: messages enabling the basic functioning of the app to control features related to communities. Examples include Community Invites, Community Membership Updates, community Status Updates, etc.
2. _Global community content_: messages carrying user-generated content for members of any community.

> **Note:** a previous iteration of the Status Communities feature defined separate community-wide scopes for each community.
> However, this model was deprecated and all communities now operate on a global, shared scope.
> This implies that different communities will share shards on the routing layer.

#### Local scope

1. _Local_: messages related to functions that are only relevant to a single user. Also known as _self-addressed messages_. Examples include messages used to exchange information between app installations, such as User Backup and Sync messages.

Note that the functional scope is a logical property of Status messages.
It SHOULD, however, inform the underlying [transport layer sharding](#pubsub-topics-and-sharding) and [transport layer subscriptions](#subscribing).
In general, a Status client SHOULD subscribe to participate in:

- all global functions
- global community functions, if it is interested in this feature, and
- its own local functions.
### Content topics

Each Status app-level message MUST define a content topic that links messages in related app-level functions and sub-functions together.
This MUST be based on the filter use cases for [transport layer subscriptions](#subscribing)
and [retrieving historical messages](#retrieving-historical-messages).
A content topic SHOULD be identical across all messages that are always part of the same filter use case (or always form part of the same content-filtered query criteria).
In other words, the number of content topics defined in the app SHOULD match the number of filter use cases.
For the sake of illustration, consider the following common content topic and filter use cases:

- if all messages belonging to the same 1:1 chat are always filtered together, they SHOULD use the same content topic (see [55/STATUS-1TO1-CHAT](../55/1to1-chat.md))
- if all messages belonging to the same Community are always filtered together, they SHOULD use the same content topic (see [56/STATUS-COMMUNITIES](../56/communities.md)).

The app-level content topic MUST be populated in the `content_topic` field of the encapsulating Waku message (see [Waku messages](#waku-messages)).
### Ephemerality

Each Status app-level message MUST define its _ephemerality_.
Ephemerality is a boolean value, set to `true` if a message is considered ephemeral.
Ephemeral messages are messages emitted by the app that are transient in nature.
They only have temporary "real-time" value
and SHOULD NOT be stored and retrievable from historical message stores and sync caches.
Similarly, ephemeral message delivery is best-effort in nature and SHOULD NOT be considered in message reliability mechanisms (see [End-to-end reliability layer](#end-to-end-reliability-layer)).

An example of ephemeral messages would be periodic status update messages, indicating a particular user's online status.
Since only a user's current online status is of value, there is no need to store historical status update messages.
Since status updates are periodic, there is no strong need for end-to-end reliability, as subsequent updates are always to follow.

App-level messages that are considered ephemeral MUST set the `ephemeral` field in the encapsulating Waku message to `true` (see [Waku messages](#waku-messages)).
## End-to-end reliability layer

The end-to-end reliability layer contains the functions related to one of the two end-to-end reliability schemes defined for Status app messages:

1. Minimum Viable protocol for Data Synchronisation, or MVDS (see [STATUS-MVDS-USAGE](./status-mvds.md))
2. Scalable distributed log reliability (spec and a punchier name TBD, see the [original forum post announcement](https://forum.vac.dev/t/end-to-end-reliability-for-scalable-distributed-logs/293/16))

Ephemeral messages SHOULD omit this layer.
Non-ephemeral 1:1 chat messages SHOULD make use of MVDS to achieve reliable data synchronisation between the two parties involved in the communication.
Non-ephemeral private group chat messages build on a set of 1:1 chat links
and consequently SHOULD also make use of MVDS to achieve reliable data synchronisation between all parties involved in the communication.
Non-ephemeral 1:1 and private group chat messages MAY make use of [scalable distributed log reliability](https://forum.vac.dev/t/end-to-end-reliability-for-scalable-distributed-logs/293/16) in future.
Since MVDS does not scale for a large number of participants in the communication,
non-ephemeral community messages MUST use scalable distributed log reliability as defined in this [original forum post announcement](https://forum.vac.dev/t/end-to-end-reliability-for-scalable-distributed-logs/293/16).
The app MUST use a single channel ID per community.
## Encryption layer

The encryption layer wraps the Status App and Reliability layers in an encrypted payload.

<!-- TODO: This section is TBD. We may want to design a way for Communities to use de-MLS in a separate spec and generally simplify Status encryption. -->

## Waku transport layer

The Waku transport layer contains the functions allowing Status protocols to use [10/WAKU2](../../waku/standards/core/10/waku2.md) infrastructure as transport.

### Waku messages

Each Status application message MUST be transformed to a [14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md) with the following structure:

```protobuf
syntax = "proto3";

message WakuMessage {
  bytes payload = 1;
  string content_topic = 2;
  optional uint32 version = 3;
  optional sint64 timestamp = 10;
  optional bytes meta = 11;
  optional bool ephemeral = 31;
}
```

- `payload` MUST be set to the full encrypted payload received from the higher layers
- `version` MUST be set to `1`
- `ephemeral` MUST be set to `true` if the app-level message is ephemeral
- `content_topic` MUST be set to the app-level content topic
- `timestamp` MUST be set to the current Unix epoch timestamp (in nanosecond precision)
### Pubsub topics and sharding

All Waku messages are published to pubsub topics as defined in [23/WAKU2-TOPICS](../../waku/informational/23/topics.md).
Since pubsub topics define a routing layer for messages,
they can be used to shard traffic.
The pubsub topic used for publishing a message depends on the app-level [functional scope](#functional-scope).

#### Self-addressed messages

The application MUST define at least one distinct pubsub topic for self-addressed messages.
The application MAY define a set of more than one pubsub topic for self-addressed messages to allow traffic sharding for scalability.

#### Global messages

The application MUST define at least one distinct pubsub topic for global control messages and global content messages.
The application MAY define a set of more than one pubsub topic for global messages to allow traffic sharding for scalability.
It is RECOMMENDED that separate pubsub topics be used for global control messages and global content messages.

#### Community messages

The application SHOULD define at least one distinct pubsub topic for global community control messages and global community content messages.
The application MAY define a set of more than one pubsub topic for global community messages to allow traffic sharding for scalability.
It is RECOMMENDED that separate pubsub topics be used for global community control messages and global community content messages.

#### Large messages

The application MAY define separate pubsub topics for large messages.
These pubsub topics for large messages MAY be distinct for each functional scope.
### Resource usage

The application SHOULD use a range of Waku protocols to interact with the Waku transport layer.
The specific set of Waku protocols used depends on the desired functionality and resource usage profile of the specific client.
Resources can be restricted in terms of bandwidth and computing resources.

Waku protocols that are more appropriate for resource-restricted environments are often termed "light protocols".
Waku protocols that consume more resources, but simultaneously contribute more to Waku infrastructure, are often termed "full protocols".
The terms "full" and "light" are more a useful abstraction than a strict binary, though,
and Status clients can operate along a continuum of resource usage profiles,
each using the combination of "full" and "light" protocols most appropriate to match its environment and motivations.

To simplify interaction with the selection of "full" and "light" protocols,
Status clients MUST define a "full mode" and a "light mode"
to allow users to select whether their client would prefer "full protocols" or "light protocols" by default.
Status Desktop clients are assumed to have more resources available and SHOULD use full mode by default.
Status Mobile clients are assumed to operate with more resource restrictions and SHOULD use light mode by default.

For the purposes of the rest of this document,
clients in full mode will be referred to as "full clients" and
clients in light mode will be referred to as "light clients".
### Discovery

The application MUST make use of at least one discovery method to discover and connect to Waku peers
useful for the user functions specific to that instance of the application.

The specific Waku discovery protocol used depends on the use case and resource availability of the client.

1. [EIP-1459: DNS-based discovery](https://eips.ethereum.org/EIPS/eip-1459) is useful for initial connection to bootstrap peers.
2. [33/WAKU2-DISCV5](../../waku/standards/core/33/discv5.md) allows decentralized discovery of Waku peers.
3. [34/WAKU2-PEER-EXCHANGE](https://github.com/waku-org/specs/blob/315264c202e0973476e2f1e2d0b01bea4fe1ad31/standards/core/peer-exchange.md) allows requesting peers from a service node
and is appropriate for resource-restricted discovery.

All clients SHOULD use DNS-based discovery on startup
to discover a set of bootstrap peers for initial connection.

Full clients SHOULD use [33/WAKU2-DISCV5](../../waku/standards/core/33/discv5.md) for continuous ambient peer discovery.

Light clients SHOULD use [34/WAKU2-PEER-EXCHANGE](https://github.com/waku-org/specs/blob/315264c202e0973476e2f1e2d0b01bea4fe1ad31/standards/core/peer-exchange.md) to discover a set of service peers
used by that instance of the application.
### Subscribing

The application MUST subscribe to receive the traffic necessary for minimal app operation
and to enable the user functions specific to that instance of the application.

The specific Waku protocol used for subscription depends on the resource availability of the client:

1. Filter client protocol, as specified in [12/WAKU2-FILTER](../../waku/standards/core/12/filter.md), allows subscribing to traffic with content topic granularity and is appropriate for resource-restricted subscriptions.
2. Relay protocol, as specified in [11/WAKU2-RELAY](../../waku/standards/core/11/relay.md), allows subscribing to traffic only with pubsub topic granularity and is therefore more resource-intensive. Relay subscription also allows the application instance to contribute to the overall routing infrastructure, which adds to its overall higher resource usage but benefits the ecosystem.

Full clients SHOULD use the relay protocol as the preferred method to subscribe to pubsub topics matching the scopes:

1. Global control
2. Global content
3. Global community control, if the client has activated the Status Communities feature
4. Global community content, if the client has activated the Status Communities feature

Light clients SHOULD use the filter protocol to subscribe only to the content topics relevant to the user.

#### Self-addressed messages

Status clients (full or light) MUST NOT subscribe to topics for messages with self-addressed scopes.
See [Self-addressed messages](#self-addressed-messages-4).

#### Large messages

Status clients (full or light) SHOULD NOT subscribe to topics set aside for large messages.
See [Large messages](#large-messages-4).
### Publishing

The application MUST publish user- and app-generated messages via the Waku transport layer.
The specific Waku protocol used for publishing depends on the resource availability of the client:

1. Lightpush protocol, as specified in [19/WAKU2-LIGHTPUSH](../../waku/standards/core/19/lightpush.md), allows publishing to a pubsub topic via an intermediate "full node" and is more appropriate for resource-restricted publishing.
2. Relay protocol, as specified in [11/WAKU2-RELAY](../../waku/standards/core/11/relay.md), allows publishing directly into the relay routing network and is therefore more resource-intensive.

Full clients SHOULD use the relay protocol to publish to pubsub topics matching the scopes:

1. Global control
2. Global content
3. Global community control, if the client has activated the Status Communities feature
4. Global community content, if the client has activated the Status Communities feature

Light clients SHOULD use the lightpush protocol to publish control and content messages.

#### Self-addressed messages

Status clients (full or light) MUST use the lightpush protocol to publish self-addressed messages.
See [Self-addressed messages](#self-addressed-messages-4).

#### Large messages

Status clients (full or light) SHOULD use the lightpush protocol to publish to pubsub topics set aside for large messages.
See [Large messages](#large-messages-4).
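The mode-dependent defaults from the subscribing and publishing rules above can be summarised in a small sketch; the mapping is a simplification, since real clients operate along a continuum and mix protocols per scope:

```golang
package main

import "fmt"

// preferredProtocols returns the default Waku protocols a client would favour
// for subscribing and publishing, given its mode. Protocol names follow the
// referenced Waku specifications.
func preferredProtocols(lightMode bool) (subscribe, publish string) {
	if lightMode {
		return "12/WAKU2-FILTER", "19/WAKU2-LIGHTPUSH"
	}
	return "11/WAKU2-RELAY", "11/WAKU2-RELAY"
}

func main() {
	sub, pub := preferredProtocols(true) // e.g. a mobile client in light mode
	fmt.Println(sub, pub)
}
```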
### Retrieving historical messages

Status clients SHOULD use the store query protocol, as specified in [WAKU2-STORE](https://github.com/waku-org/specs/blob/8fea97c36c7bbdb8ddc284fa32aee8d00a2b4467/standards/core/store.md), to retrieve historical messages relevant to the client from store service nodes in the network.

Status clients SHOULD use [content filtered queries](https://github.com/waku-org/specs/blob/8fea97c36c7bbdb8ddc284fa32aee8d00a2b4467/standards/core/store.md#content-filtered-queries) with `include_data` set to `true`
to retrieve the full contents of historical messages that the client may have missed during offline periods,
or to populate the local message database when the client starts up for the first time.
#### Store queries for reliability

Status clients MAY use periodic content filtered queries with `include_data` set to `false`,
to retrieve only the message hashes of past messages on content topics relevant to the client.
This can be used to compare the hashes available in the local message database with the hashes in the query response,
in order to identify possible missing messages.
Once the Status client has identified a set of missing message hashes,
it SHOULD use [message hash lookup queries](https://github.com/waku-org/specs/blob/8fea97c36c7bbdb8ddc284fa32aee8d00a2b4467/standards/core/store.md#message-hash-lookup-queries) with `include_data` set to `true`
to retrieve the full contents of the missing messages based on their hashes.

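The hash comparison described above can be sketched as follows. This is an illustrative sketch only: `missingHashes` and the use of hex strings for message hashes are assumptions made for readability, not part of any Waku API.

```go
package main

import "fmt"

// missingHashes returns the hashes that appear in a store query response
// but are absent from the local message database.
// A real client would compare the raw Waku message hashes directly.
func missingHashes(local map[string]bool, queryResponse []string) []string {
	var missing []string
	for _, h := range queryResponse {
		if !local[h] {
			missing = append(missing, h)
		}
	}
	return missing
}

func main() {
	local := map[string]bool{"0xaa": true, "0xbb": true}
	response := []string{"0xaa", "0xbb", "0xcc"}
	// The client would now fetch "0xcc" with a message hash lookup query,
	// setting include_data to true.
	fmt.Println(missingHashes(local, response)) // [0xcc]
}
```
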
Status clients MAY use [presence queries](https://github.com/waku-org/specs/blob/8fea97c36c7bbdb8ddc284fa32aee8d00a2b4467/standards/core/store.md#presence-queries)
to determine if one or more message hashes known to the client are present in the store service node.
Clients MAY use this method to determine if a message that originated from the client
has been successfully stored.

#### Self-addressed messages

Status clients (full or light) SHOULD use store queries (rather than subscriptions) to retrieve self-addressed messages relevant to that client.
See [Self-addressed messages](#self-addressed-messages-4).

#### Large messages

Status clients (full or light) SHOULD use store queries (rather than subscriptions) to retrieve large messages relevant to that client.
See [Large messages](#large-messages-4).

### Providing services

Status clients MAY provide service-side protocols to other clients.

Full clients SHOULD mount
the filter service protocol (see [12/WAKU2-FILTER](../../waku/standards/core/12/filter.md))
and lightpush service protocol (see [19/WAKU2-LIGHTPUSH](../../waku/standards/core/19/lightpush.md))
in order to provide light subscription and publishing services to other clients
for each pubsub topic to which they have a relay subscription.

Full clients SHOULD mount
the peer exchange service protocol (see [34/WAKU2-PEER-EXCHANGE](https://github.com/waku-org/specs/blob/315264c202e0973476e2f1e2d0b01bea4fe1ad31/standards/core/peer-exchange.md))
to provide light discovery services to other clients.

Status clients MAY mount the store query protocol as a service node (see [WAKU2-STORE](https://github.com/waku-org/specs/blob/8fea97c36c7bbdb8ddc284fa32aee8d00a2b4467/standards/core/store.md))
to store historical messages and
provide store services to other clients
for each pubsub topic to which they have a relay subscription.

### Self-addressed messages

Messages with a _local_ functional scope (see [Functional scope](#functional-scope)),
also known as _self-addressed_ messages,
MUST be published to a distinct pubsub topic or a distinct _set_ of pubsub topics
used exclusively for messages with local scope (see [Pubsub topics and sharding](#pubsub-topics-and-sharding)).
Status clients (full or light) MUST use lightpush protocol to publish self-addressed messages (see [Publishing](#publishing)).
Status clients (full or light) MUST NOT subscribe to topics for messages with self-addressed scopes (see [Subscribing](#subscribing)).
Status clients (full or light) SHOULD use store queries (rather than subscriptions) to retrieve self-addressed messages relevant to that client (see [Retrieving historical messages](#retrieving-historical-messages)).

### Large messages

The application MAY define separate pubsub topics for large messages.
These pubsub topics for large messages MAY be distinct for each functional scope (see [Pubsub topics and sharding](#pubsub-topics-and-sharding)).
Status clients (full or light) SHOULD use lightpush protocol to publish to pubsub topics set aside for large messages (see [Publishing](#publishing)).
Status clients (full or light) SHOULD NOT subscribe to topics set aside for large messages (see [Subscribing](#subscribing)).
Status clients (full or light) SHOULD use store queries (rather than subscriptions) to retrieve large messages relevant to that client (see [Retrieving historical messages](#retrieving-historical-messages)).

#### Chunking

The Status application MAY use a chunking mechanism to break down large payloads
into smaller segments for individual Waku transport.
The definition of a large message is up to the application.
However, the maximum size for a [14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md) payload is 150KB.
Status application payloads that exceed this size MUST be chunked into smaller pieces
and MUST be considered a "large message".

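As a rough illustration of the chunking rule above, the sketch below splits an oversized payload into segments that each fit the 150KB limit. The function name is an assumption, and a real implementation would also attach segmentation metadata so the receiver can reorder and reassemble the segments.

```go
package main

import "fmt"

// maxPayloadSize is the 150KB payload limit from 14/WAKU2-MESSAGE.
const maxPayloadSize = 150 * 1024

// chunkPayload splits a large application payload into segments that
// each fit in a single Waku message payload.
func chunkPayload(payload []byte) [][]byte {
	var segments [][]byte
	for len(payload) > maxPayloadSize {
		segments = append(segments, payload[:maxPayloadSize])
		payload = payload[maxPayloadSize:]
	}
	return append(segments, payload)
}

func main() {
	large := make([]byte, 400*1024) // a 400KB payload: a "large message"
	fmt.Println(len(chunkPayload(large))) // 3
}
```
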
## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [55/STATUS-1TO1-CHAT](../55/1to1-chat.md)
2. [56/STATUS-COMMUNITIES](../56/communities.md)
3. [10/WAKU2](../../waku/standards/core/10/waku2.md)
4. [11/WAKU2-RELAY](../../waku/standards/core/11/relay.md)
5. [12/WAKU2-FILTER](../../waku/standards/core/12/filter.md)
6. [14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md)
7. [23/WAKU2-TOPICS](../../waku/informational/23/topics.md)
8. [19/WAKU2-LIGHTPUSH](../../waku/standards/core/19/lightpush.md)
9. [Scalable distributed log reliability](https://forum.vac.dev/t/end-to-end-reliability-for-scalable-distributed-logs/293/16)
10. [STATUS-MVDS-USAGE](./status-mvds.md)
11. [WAKU2-STORE](https://github.com/waku-org/specs/blob/8fea97c36c7bbdb8ddc284fa32aee8d00a2b4467/standards/core/store.md)

---
title: STATUS-MVDS-USAGE
name: MVDS Usage in Status
status: raw
category: Best Current Practice
description: Defines how the MVDS protocol is used by different message types in Status.
editor: Kaichao Sun <kaichao@status.im>
contributors:
---

## Abstract

This document lists the types of messages that use [MVDS](/vac/2/mvds.md)
in the Status application.

## Background

The Status app uses MVDS to ensure that messages going through Waku
are acknowledged by the recipient.
This ensures that the messages are not missed by any interested parties.

## Message types

Various message types contain distinct information defined by the app
to facilitate convenient serialization and deserialization.

E2E reliability is a feature that ensures messages are delivered to the recipient.
In Status, this is initially achieved by using MVDS.

Chat Type specifies the category of chat that a message belongs to.
It can be OneToOne (also known as Direct Message), GroupChat, or CommunityChat.
These are the three main types of chats in Status.

| Message Type | Use MVDS | Need e2e reliability | Chat Type |
|--------------|----------|----------------------|-----------|
| ApplicationMetadataMessage_UNKNOWN | No | No | Not Applied |
| ApplicationMetadataMessage_CHAT_MESSAGE | Yes for OneToOne & PrivateGroupChat | Yes | One & Group & Community |
| ApplicationMetadataMessage_CONTACT_UPDATE | Yes | Yes | OneToOne |
| ApplicationMetadataMessage_MEMBERSHIP_UPDATE_MESSAGE | No | Yes | CommunityChat |
| ApplicationMetadataMessage_SYNC_PAIR_INSTALLATION | Yes | Yes | Pair |
| ApplicationMetadataMessage_DEPRECATED_SYNC_INSTALLATION | No | No | Pair |
| ApplicationMetadataMessage_REQUEST_ADDRESS_FOR_TRANSACTION | Yes for OneToOne | Yes | One & Group & Community |
| ApplicationMetadataMessage_ACCEPT_REQUEST_ADDRESS_FOR_TRANSACTION | Yes for OneToOne | Yes | One & Group & Community |
| ApplicationMetadataMessage_DECLINE_REQUEST_ADDRESS_FOR_TRANSACTION | Yes for OneToOne | Yes | One & Group & Community |
| ApplicationMetadataMessage_REQUEST_TRANSACTION | Yes for OneToOne | Yes | OneToOne & GroupChat |
| ApplicationMetadataMessage_SEND_TRANSACTION | Yes for OneToOne | Yes | One & Group & Community |
| ApplicationMetadataMessage_DECLINE_REQUEST_TRANSACTION | Yes for OneToOne | Yes | One & Group & Community |
| ApplicationMetadataMessage_SYNC_INSTALLATION_CONTACT_V2 | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_INSTALLATION_ACCOUNT | No | No | Not Applied |
| ApplicationMetadataMessage_CONTACT_CODE_ADVERTISEMENT | No | No | Not Applied |
| ApplicationMetadataMessage_PUSH_NOTIFICATION_REGISTRATION | No | No | One & Group & Community |
| ApplicationMetadataMessage_PUSH_NOTIFICATION_REGISTRATION_RESPONSE | No | No | One & Group & Community |
| ApplicationMetadataMessage_PUSH_NOTIFICATION_QUERY | No | No | One & Group & Community |
| ApplicationMetadataMessage_PUSH_NOTIFICATION_QUERY_RESPONSE | No | No | One & Group & Community |
| ApplicationMetadataMessage_PUSH_NOTIFICATION_REQUEST | No | No | One & Group & Community |
| ApplicationMetadataMessage_PUSH_NOTIFICATION_RESPONSE | No | No | One & Group & Community |
| ApplicationMetadataMessage_EMOJI_REACTION | No | Yes | One & Group & Community |
| ApplicationMetadataMessage_GROUP_CHAT_INVITATION | Yes | Yes | GroupChat |
| ApplicationMetadataMessage_CHAT_IDENTITY | No | No | OneToOne |
| ApplicationMetadataMessage_COMMUNITY_DESCRIPTION | No | Weak Yes | CommunityChat |
| ApplicationMetadataMessage_COMMUNITY_INVITATION | No | Weak Yes | CommunityChat |
| ApplicationMetadataMessage_COMMUNITY_REQUEST_TO_JOIN | No | Yes | CommunityChat |
| ApplicationMetadataMessage_PIN_MESSAGE | Yes for OneToOne & PrivateGroupChat | Yes | One & Group & Community |
| ApplicationMetadataMessage_EDIT_MESSAGE | Yes for OneToOne & PrivateGroupChat | Yes | One & Group & Community |
| ApplicationMetadataMessage_STATUS_UPDATE | No | No | Not Applied |
| ApplicationMetadataMessage_DELETE_MESSAGE | Yes for OneToOne & PrivateGroupChat | Yes | One & Group & Community |
| ApplicationMetadataMessage_SYNC_INSTALLATION_COMMUNITY | Yes | Yes | Pair |
| ApplicationMetadataMessage_ANONYMOUS_METRIC_BATCH | No | No | Not Applied |
| ApplicationMetadataMessage_SYNC_CHAT_REMOVED | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_CHAT_MESSAGES_READ | Yes | Yes | Pair |
| ApplicationMetadataMessage_BACKUP | No | No | Not Applied |
| ApplicationMetadataMessage_SYNC_ACTIVITY_CENTER_READ | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_ACTIVITY_CENTER_ACCEPTED | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_ACTIVITY_CENTER_DISMISSED | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_BOOKMARK | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_CLEAR_HISTORY | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_SETTING | Yes | Yes | Pair |
| ApplicationMetadataMessage_COMMUNITY_MESSAGE_ARCHIVE_MAGNETLINK | No | No | CommunityChat |
| ApplicationMetadataMessage_SYNC_PROFILE_PICTURES | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_ACCOUNT | Yes | Yes | Pair |
| ApplicationMetadataMessage_ACCEPT_CONTACT_REQUEST | Yes | Yes | OneToOne |
| ApplicationMetadataMessage_RETRACT_CONTACT_REQUEST | Yes | Yes | OneToOne |
| ApplicationMetadataMessage_COMMUNITY_REQUEST_TO_JOIN_RESPONSE | No | Weak Yes | CommunityChat |
| ApplicationMetadataMessage_SYNC_COMMUNITY_SETTINGS | Yes | Yes | CommunityChat |
| ApplicationMetadataMessage_REQUEST_CONTACT_VERIFICATION | Yes | Yes | OneToOne |
| ApplicationMetadataMessage_ACCEPT_CONTACT_VERIFICATION | Yes | Yes | OneToOne |
| ApplicationMetadataMessage_DECLINE_CONTACT_VERIFICATION | Yes | Yes | OneToOne |
| ApplicationMetadataMessage_SYNC_TRUSTED_USER | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_VERIFICATION_REQUEST | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_CONTACT_REQUEST_DECISION | Yes | Yes | Pair |
| ApplicationMetadataMessage_COMMUNITY_REQUEST_TO_LEAVE | No | Weak Yes | CommunityChat |
| ApplicationMetadataMessage_SYNC_DELETE_FOR_ME_MESSAGE | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_SAVED_ADDRESS | Yes | Yes | Pair |
| ApplicationMetadataMessage_COMMUNITY_CANCEL_REQUEST_TO_JOIN | No | Yes | CommunityChat |
| ApplicationMetadataMessage_CANCEL_CONTACT_VERIFICATION | Yes | Yes | OneToOne |
| ApplicationMetadataMessage_SYNC_KEYPAIR | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_SOCIAL_LINKS | No | No | Not Applied |
| ApplicationMetadataMessage_SYNC_ENS_USERNAME_DETAIL | Yes | Yes | Pair |
| ApplicationMetadataMessage_COMMUNITY_EVENTS_MESSAGE | No | No | CommunityChat |
| ApplicationMetadataMessage_COMMUNITY_EDIT_SHARED_ADDRESSES | No | No | CommunityChat |
| ApplicationMetadataMessage_SYNC_ACCOUNT_CUSTOMIZATION_COLOR | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_ACCOUNTS_POSITIONS | Yes | Yes | Pair |
| ApplicationMetadataMessage_COMMUNITY_PRIVILEGED_USER_SYNC_MESSAGE | No | No | CommunityChat |
| ApplicationMetadataMessage_COMMUNITY_SHARD_KEY | Yes | Yes | CommunityChat |
| ApplicationMetadataMessage_SYNC_CHAT | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_ACTIVITY_CENTER_DELETED | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_ACTIVITY_CENTER_UNREAD | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_ACTIVITY_CENTER_COMMUNITY_REQUEST_DECISION | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_TOKEN_PREFERENCES | Yes | Yes | Pair |
| ApplicationMetadataMessage_COMMUNITY_PUBLIC_SHARD_INFO | No | No | CommunityChat |
| ApplicationMetadataMessage_SYNC_COLLECTIBLE_PREFERENCES | Yes | Yes | Pair |
| ApplicationMetadataMessage_COMMUNITY_USER_KICKED | No | No | CommunityChat |
| ApplicationMetadataMessage_SYNC_PROFILE_SHOWCASE_PREFERENCES | Yes | Yes | Pair |
| ApplicationMetadataMessage_COMMUNITY_PUBLIC_STORENODES_INFO | No | Weak Yes | CommunityChat |
| ApplicationMetadataMessage_COMMUNITY_REEVALUATE_PERMISSIONS_REQUEST | No | Weak Yes | CommunityChat |
| ApplicationMetadataMessage_DELETE_COMMUNITY_MEMBER_MESSAGES | No | Weak Yes | CommunityChat |
| ApplicationMetadataMessage_COMMUNITY_UPDATE_GRANT | No | Weak Yes | CommunityChat |
| ApplicationMetadataMessage_COMMUNITY_ENCRYPTION_KEYS_REQUEST | No | Yes | CommunityChat |
| ApplicationMetadataMessage_COMMUNITY_TOKEN_ACTION | No | Weak Yes | CommunityChat |
| ApplicationMetadataMessage_COMMUNITY_SHARED_ADDRESSES_REQUEST | No | No | CommunityChat |
| ApplicationMetadataMessage_COMMUNITY_SHARED_ADDRESSES_RESPONSE | No | No | CommunityChat |

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [MVDS](/vac/2/mvds.md)

---
title: STATUS-WAKU2-USAGE
name: Status Waku2 Usage
status: raw
category: Best Current Practice
description: Defines how the Status application uses the Waku protocols.
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Jimmy Debe <jimmy@status.im>
---

## Abstract

Status is a chat application which has several features,
including, but not limited to -

- Private 1:1 chats, described by [55/STATUS-1TO1-CHAT](/spec/55)
- Large scale group chats, described by [56/STATUS-COMMUNITIES](/spec/56)

This specification describes how a Status implementation makes use of
the underlying infrastructure, Waku,
which is described in [10/WAKU2](/spec/10).

## Background

The Status application aspires to achieve censorship resistance and
incorporates specific privacy features,
leveraging the comprehensive set of protocols offered by Waku to enhance these attributes.
Waku protocols provide secure communication capabilities over decentralized networks.
Once integrated, an application benefits from privacy-preserving,
censorship-resistant and spam-protected communication.

Since Status uses a large set of Waku protocols,
it is imperative to describe how each is used.

## Terminology

| Name | Description |
| --------------- | --------- |
| `RELAY` | This refers to the Waku Relay protocol, described in [11/WAKU2-RELAY](/spec/11) |
| `FILTER` | This refers to the Waku Filter protocol, described in [12/WAKU2-FILTER](/spec/12) |
| `STORE` | This refers to the Waku Store protocol, described in [13/WAKU2-STORE](/spec/13) |
| `MESSAGE` | This refers to the Waku Message format, described in [14/WAKU2-MESSAGE](/spec/14) |
| `LIGHTPUSH` | This refers to the Waku Lightpush protocol, described in [19/WAKU2-LIGHTPUSH](/spec/19) |
| Discovery | This refers to a peer discovery method used by a Waku node. |
| `Pubsub Topic` / `Content Topic` | This refers to the routing of messages within the Waku network, described in [23/WAKU2-TOPICS](/spec/23/) |

### Waku Node

Software that is configured with a set of Waku protocols.
A Status client comprises a Waku node that is either a `RELAY` node or a non-relay node.

### Light Client

A Status client that operates within resource-constrained environments
is a node configured as a light client.
Light clients do not run a `RELAY`.
Instead, Status light clients
can request services from `RELAY` nodes that provide the `LIGHTPUSH` service.

## Protocol Usage

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”,
“SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and
“OPTIONAL” in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).

The following is a list of Waku protocols used by a Status application.

### 1. `RELAY`

The `RELAY` MUST NOT be used by Status light clients.
The `RELAY` is used to broadcast messages between Status clients.
All Status messages are transformed into [14/WAKU2-MESSAGE](/spec/14),
which are sent over the wire.

All Status message types are described in [62/STATUS-PAYLOAD](/spec/62).
Status clients MUST transform the following object into a `MESSAGE`
as described below -

```go
type StatusMessage struct {
	SymKey       []byte // [optional] The symmetric key used to encrypt the message
	PublicKey    []byte // [optional] The public key to use for asymmetric encryption
	Sig          string // [optional] The private key used to sign the message
	PubsubTopic  string // The pubsub topic to publish the message to
	ContentTopic string // The content topic to publish the message to
	Payload      []byte // A serialized representation of a Status message to be sent
	Padding      []byte // Padding that must be applied to the Payload
	TargetPeer   string // [optional] The target recipient of the message
	Ephemeral    bool   // If the message is not to be stored, this is set to `true`
}
```

1. A user MUST only provide either a symmetric key OR
an asymmetric keypair to encrypt the message.
If both are received, the implementation MUST throw an error.
2. `WakuMessage.Payload` MUST be set to `StatusMessage.Payload`
3. `WakuMessage.Key` MUST be set to `StatusMessage.SymKey`
4. `WakuMessage.Version` MUST be set to `1`
5. `WakuMessage.Ephemeral` MUST be set to `StatusMessage.Ephemeral`
6. `WakuMessage.ContentTopic` MUST be set to `StatusMessage.ContentTopic`
7. `WakuMessage.Timestamp` MUST be set to the current Unix epoch timestamp
(in nanosecond precision)

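The transformation rules above can be sketched as follows. Note that the `WakuMessage` struct here is a simplified stand-in for 14/WAKU2-MESSAGE, and `toWakuMessage` is a hypothetical helper, not an API from any Waku library:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// StatusMessage is trimmed to the fields used by the rules above.
type StatusMessage struct {
	SymKey       []byte
	PublicKey    []byte
	Payload      []byte
	ContentTopic string
	Ephemeral    bool
}

// WakuMessage is a simplified, illustrative stand-in for 14/WAKU2-MESSAGE.
type WakuMessage struct {
	Payload      []byte
	Key          []byte
	Version      uint32
	Ephemeral    bool
	ContentTopic string
	Timestamp    int64
}

// toWakuMessage applies rules 1-7 above: only one of the symmetric key
// or the asymmetric public key may be provided.
func toWakuMessage(m StatusMessage) (WakuMessage, error) {
	if len(m.SymKey) > 0 && len(m.PublicKey) > 0 {
		return WakuMessage{}, errors.New("provide either a symmetric key or an asymmetric keypair, not both")
	}
	return WakuMessage{
		Payload:      m.Payload,             // rule 2
		Key:          m.SymKey,              // rule 3
		Version:      1,                     // rule 4
		Ephemeral:    m.Ephemeral,           // rule 5
		ContentTopic: m.ContentTopic,        // rule 6
		Timestamp:    time.Now().UnixNano(), // rule 7
	}, nil
}

func main() {
	msg, err := toWakuMessage(StatusMessage{
		SymKey:       []byte{0x01},
		Payload:      []byte("hello"),
		ContentTopic: "example-topic",
	})
	fmt.Println(err, msg.Version) // <nil> 1
}
```
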
### 2. `STORE`

This protocol MUST remain optional according to the user's preferences;
it MAY be enabled on light clients as well.

Messages received via [11/WAKU2-RELAY](/spec/11) are stored in a database.
When a Waku node running this protocol is a service node,
it MUST provide the complete list of network messages.
Status clients SHOULD request historical messages from this service node.

Messages that have the `WakuMessage.Ephemeral` flag set to `true` are not stored.

The Status client MAY provide a method to prune the database of
older records to save storage.

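The ephemeral-message rule and the optional pruning behaviour could be sketched as follows (the types and function names are illustrative assumptions, not taken from a real `STORE` implementation):

```go
package main

import "fmt"

// storedMessage is an illustrative database record.
type storedMessage struct {
	Ephemeral bool
	Timestamp int64 // Unix nanoseconds
}

// shouldStore implements the rule above: ephemeral messages are never persisted.
func shouldStore(m storedMessage) bool {
	return !m.Ephemeral
}

// prune keeps only the records newer than the retention cutoff,
// sketching the optional database-pruning behaviour.
func prune(records []storedMessage, cutoff int64) []storedMessage {
	var kept []storedMessage
	for _, r := range records {
		if r.Timestamp >= cutoff {
			kept = append(kept, r)
		}
	}
	return kept
}

func main() {
	fmt.Println(shouldStore(storedMessage{Ephemeral: true})) // false
	records := []storedMessage{{Timestamp: 100}, {Timestamp: 300}}
	fmt.Println(len(prune(records, 200))) // 1
}
```
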
### 3. `FILTER`

This protocol SHOULD be enabled on light clients.

This protocol SHOULD be used to filter messages based on given criteria,
such as the `Content Topic` of a `MESSAGE`.
This allows a reduction in bandwidth consumption by the Status client.

#### Content filtering protocol identifiers

The `filter-subscribe` protocol SHOULD be implemented on `RELAY` nodes
to provide `FILTER` services.

`filter-subscribe`:

> /vac/waku/filter-subscribe/2.0.0-beta1

The `filter-push` protocol SHOULD be implemented on light clients to receive messages.

`filter-push`:

> /vac/waku/filter-push/2.0.0-beta1

Status clients SHOULD apply a filter for all the `Content Topic`s
they are interested in, such as `Content Topic`s derived from -

1. 1:1 chats with other users, described in [55/STATUS-1TO1-CHAT](/spec/55)
2. Group chats
3. Community Channels, described in [56/STATUS-COMMUNITIES](/spec/56)

### 4. `LIGHTPUSH`

The `LIGHTPUSH` protocol MUST be enabled on Status light clients.
A Status `RELAY` node MAY implement `LIGHTPUSH` to support light clients.
Peers will be able to publish messages
without running a full-fledged [11/WAKU2-RELAY](/spec/11) protocol.

When a Status client is publishing a message,
it MUST check if Light mode is enabled,
and if so, it MUST publish the message via this protocol.

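The publish rule above amounts to a simple dispatch, sketched below. The callback names `publish_via_lightpush` and `publish_via_relay` are hypothetical stand-ins, not actual client APIs.

```python
from typing import Callable


def publish(
    message: bytes,
    light_mode: bool,
    publish_via_lightpush: Callable[[bytes], None],
    publish_via_relay: Callable[[bytes], None],
) -> str:
    # A Status client publishing a message MUST check whether Light mode
    # is enabled, and if so, MUST publish via the LIGHTPUSH protocol.
    if light_mode:
        publish_via_lightpush(message)
        return "lightpush"
    # Otherwise the message is published via 11/WAKU2-RELAY.
    publish_via_relay(message)
    return "relay"
```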
### 5. Discovery

A discovery method MUST be supported by Light clients and Full clients.

Status clients SHOULD make use of the following peer discovery methods
provided by Waku:

1. [EIP-1459: DNS-Based Discovery](https://eips.ethereum.org/EIPS/eip-1459)
2. [33/WAKU2-DISCV5](/spec/33): A node discovery protocol to
create a decentralized network of interconnected Waku nodes.
3. [34/WAKU2-PEER-EXCHANGE](/spec/34):
A peer discovery protocol for resource-restricted devices.

Status clients MAY use any combination of the above peer discovery methods,
whichever suits their implementation best.

## Security/Privacy Considerations

This specification inherits the security and
privacy considerations from the following specifications:

1. [10/WAKU2](/spec/10)
2. [11/WAKU2-RELAY](/spec/11)
3. [12/WAKU2-FILTER](/spec/12)
4. [13/WAKU2-STORE](/spec/13)
5. [14/WAKU2-MESSAGE](/spec/14)
6. [23/WAKU2-TOPICS](/spec/23)
7. [19/WAKU2-LIGHTPUSH](/spec/19)
8. [55/STATUS-1TO1-CHAT](/spec/55)
9. [56/STATUS-COMMUNITIES](/spec/56)
10. [62/STATUS-PAYLOAD](/spec/62)
11. [EIP-1459: DNS-Based Discovery](https://eips.ethereum.org/EIPS/eip-1459)
12. [33/WAKU2-DISCV5](/spec/33)
13. [34/WAKU2-PEER-EXCHANGE](/spec/34)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [55/STATUS-1TO1-CHAT](/spec/55)
2. [56/STATUS-COMMUNITIES](/spec/56)
3. [10/WAKU2](/spec/10)
4. [11/WAKU2-RELAY](/spec/11)
5. [12/WAKU2-FILTER](/spec/12)
6. [13/WAKU2-STORE](/spec/13)
7. [14/WAKU2-MESSAGE](/spec/14)
8. [23/WAKU2-TOPICS](/spec/23)
9. [19/WAKU2-LIGHTPUSH](/spec/19)
10. [64/WAKU2-NETWORK](/spec/64)
11. [62/STATUS-PAYLOAD](/spec/62)
12. [EIP-1459: DNS-Based Discovery](https://eips.ethereum.org/EIPS/eip-1459)
13. [33/WAKU2-DISCV5](/spec/33)
14. [34/WAKU2-PEER-EXCHANGE](/spec/34)

---
title: STATUS-URL-DATA
name: Status URL Data
status: raw
category: Standards Track
tags:
editor: Felicio Mununga <felicio@status.im>
contributors:
- Aaryamann Challani <aaryamann@status.im>
---

## Abstract

This document specifies the serialization, compression, and
encoding techniques used to transmit data within URLs in the context of Status protocols.

## Motivation

When sharing URLs,
link previews often expose metadata to the websites behind those links.
To reduce reliance on external servers for providing appropriate link previews,
this specification proposes a standard method for encoding data within URLs.

## Terminology

- Community: Refer to [STATUS-COMMUNITIES](../56/communities.md)
- Channel: Refer to terminology in [STATUS-COMMUNITIES](../56/communities.md)
- User: Refer to terminology in [STATUS-COMMUNITIES](../56/communities.md)
- Shard: Refer to terminology in [WAKU2-RELAY-SHARDING](https://github.com/waku-org/specs/blob/master/standards/core/relay-sharding.md)

## Wire Format

```protobuf
syntax = "proto3";

message Community {
  // Display name of the community
  string display_name = 1;
  // Description of the community
  string description = 2;
  // Number of members in the community
  uint32 members_count = 3;
  // Color of the community title
  string color = 4;
  // List of tag indices
  repeated uint32 tag_indices = 5;
}

message Channel {
  // Display name of the channel
  string display_name = 1;
  // Description of the channel
  string description = 2;
  // Emoji of the channel
  string emoji = 3;
  // Color of the channel title
  string color = 4;
  // Community the channel belongs to
  Community community = 5;
  // UUID of the channel
  string uuid = 6;
}

message User {
  // Display name of the user
  string display_name = 1;
  // Description of the user
  string description = 2;
  // Color of the user title
  string color = 3;
}

message URLData {
  // Community, Channel, or User
  bytes content = 1;
  uint32 shard_cluster = 2;
  uint32 shard_index = 3;
}
```

## Implementation

The above wire format describes the data encoded in the URL.
The data MUST be serialized, compressed, and encoded using the following standards:

### Encoding

- [Base64url](https://datatracker.ietf.org/doc/html/rfc4648)

### Compression

- [Brotli](https://datatracker.ietf.org/doc/html/rfc7932)

### Serialization

- [Protocol buffers version 3](https://protobuf.dev/reference/protobuf/proto3-spec/)

### Implementation Pseudocode

#### Encoding

Encoding the URL data MUST be done in the following order:

```text
raw_data = {User | Channel | Community}
serialized_data = protobuf_serialize(raw_data)
compressed_data = brotli_compress(serialized_data)
encoded_url_data = base64url_encode(compressed_data)
```

The `encoded_url_data` is then used to generate a signature using the private key.

#### Decoding

Decoding the URL data MUST be done in the following order:

```text
url_data = base64url_decode(encoded_url_data)
decompressed_data = brotli_decompress(url_data)
deserialized_data = protobuf_deserialize(decompressed_data)
raw_data = deserialized_data.content
```

The `raw_data` is then used to construct the appropriate data structure
(User, Channel, or Community).

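The compress-then-encode steps above can be sketched as a round trip. Two loud assumptions in this sketch: `zlib` stands in for the Brotli compression the specification mandates (Brotli needs a third-party package), and the input is treated as already-protobuf-serialized bytes rather than a real `URLData` message.

```python
import base64
import zlib


def encode_url_data(serialized_data: bytes) -> str:
    # Sketch only: spec mandates brotli_compress; zlib is a stdlib stand-in.
    compressed = zlib.compress(serialized_data)
    # base64url per RFC 4648, safe for use inside URLs.
    return base64.urlsafe_b64encode(compressed).decode("ascii")


def decode_url_data(encoded_url_data: str) -> bytes:
    compressed = base64.urlsafe_b64decode(encoded_url_data)
    # Sketch only: spec mandates brotli_decompress.
    return zlib.decompress(compressed)
```

A round trip through both functions returns the original serialized bytes, and the token stays within the base64url alphabet.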
### Example

- See <https://github.com/status-im/status-web/pull/345/files>

<!-- # (Further Optional Sections) -->

## Discussions

- See <https://github.com/status-im/status-web/issues/327>

## Proof of concept

- See <https://github.com/felicio/status-web/blob/825262c4f07a68501478116c7382862607a5544e/packages/status-js/src/utils/encode-url-data.compare.test.ts#L4>

<!-- # Security Considerations -->

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [Proposal Google Sheet](https://docs.google.com/spreadsheets/d/1JD4kp0aUm90piUZ7FgM_c2NGe2PdN8BFB11wmt5UZIY/edit?usp=sharing)
2. [Base64url](https://datatracker.ietf.org/doc/html/rfc4648)
3. [Brotli](https://datatracker.ietf.org/doc/html/rfc7932)
4. [Protocol buffers version 3](https://protobuf.dev/reference/protobuf/proto3-spec/)
5. [STATUS-URL-SCHEME](./url-scheme.md)

<!-- ## informative

A list of additional references. -->

---
slug: 1
title: 1/COSS
name: Consensus-Oriented Specification System
status: draft
category: Best Current Practice
editor: Daniel Kaiser <danielkaiser@status.im>
contributors:
- Oskar Thoren <oskarth@titanproxy.com>
- Pieter Hintjens <ph@imatix.com>
- André Rebentisch <andre@openstandards.de>
- Alberto Barrionuevo <abarrio@opentia.es>
- Chris Puttick <chris.puttick@thehumanjourney.net>
- Yurii Rashkovskii <yrashk@gmail.com>
- Jimmy Debe <jimmy@status.im>
---

This document describes a consensus-oriented specification system (COSS)
for building interoperable technical specifications.
COSS is based on a lightweight editorial process that
seeks to engage the widest possible range of interested parties and
move rapidly to consensus through working code.

This specification is based on [Unprotocols 2/COSS](https://github.com/unprotocols/rfc/blob/master/2/README.md),
used by the [ZeroMQ](https://rfc.zeromq.org/) project.
It is equivalent except for some areas:

- recommending the use of a permissive license,
such as CC0 (with the exception of this document);
- miscellaneous metadata, editor, and format/link updates;
- more inheritance from the [IETF Standards Process](https://www.rfc-editor.org/rfc/rfc2026.txt),
e.g. using RFC categories: Standards Track, Informational, and Best Current Practice;
- standards track specifications SHOULD
follow a specific structure that both streamlines editing
and helps implementers to quickly comprehend the specification;
- specifications MUST feature a header providing specific meta information;
- raw specifications will not be assigned numbers;
- a section explaining the [IFT](https://free.technology/)
Request For Comments specification process managed by the Vac service department.

## License

Copyright (c) 2008-26 the Editor and Contributors.

This Specification is free software;
you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation;
either version 3 of the License, or (at your option) any later version.

This specification is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program;
if not, see [gnu.org](http://www.gnu.org/licenses).

## Change Process

This document is governed by the [1/COSS](./coss.md) (COSS).

## Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in
[RFC 2119](http://tools.ietf.org/html/rfc2119).

## Goals

The primary goal of COSS is to facilitate the process of writing, proving, and
improving new technical specifications.
A "technical specification" defines a protocol, a process, an API, a use of language,
a methodology, or any other aspect of a technical environment that
can usefully be documented for the purposes of technical or social interoperability.

COSS is intended above all to be economical and rapid,
so that it is useful to small teams with little time to spend on more formal processes.

Principles:

- We aim for rough consensus and running code, [inspired by the IETF Tao](https://www.ietf.org/about/participate/tao/).
- Specifications are small pieces, made by small teams.
- Specifications should have a clearly responsible editor.
- The process should be visible, objective, and accessible to anyone.
- The process should clearly separate experiments from solutions.
- The process should allow deprecation of old specifications.

Specifications should take minutes to explain, hours to design, days to write,
weeks to prove, months to become mature, and years to replace.
Specifications have no special status except that accorded by the community.

## Architecture

COSS is designed around fast, easy-to-use communications tools.
Primarily, COSS uses a wiki model for editing and publishing specification texts.

- The *domain* is the conservancy for a set of specifications.
- The *domain* is implemented as an Internet domain.
- Each specification is a document together with references and attached resources.
- A *sub-domain* is an initiative under a specific domain.

Individuals can become members of the *domain*
by completing the necessary legal clearance.
The copyright, patent, and trademark policies of the domain must be clarified
in an Intellectual Property policy that applies to the domain.

Specifications exist as multiple pages, one page per version
(discussed below in "Branching and Merging"),
which should be assigned URIs that MAY include a number identifier.

Thus, we refer to a new specification by specifying its domain,
its sub-domain, and its short name.
The syntax for a new specification reference is:

`<domain>/<sub-domain>/<shortname>`

For example, this specification would be **rfc.vac.dev/vac/COSS**,
if the status were **raw**.

A number will be assigned to the specification when it obtains **draft** status.
New versions of the same specification will be assigned new numbers.
The syntax for a specification reference is:

`<domain>/<sub-domain>/<number>/<shortname>`

For example, this specification is **rfc.vac.dev/vac/1/COSS**.
The short form **1/COSS** may be used when referring to the specification
from other specifications in the same domain.

Each specification (excluding raw specifications),
including branches, carries a distinct number.

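The reference syntax above can be expressed as a small helper. This is an illustrative sketch; the function name is hypothetical and not part of the specification.

```python
from typing import Optional


def spec_reference(domain: str, sub_domain: str, shortname: str,
                   number: Optional[int] = None) -> str:
    # Raw specifications have no number:
    #   <domain>/<sub-domain>/<shortname>
    if number is None:
        return f"{domain}/{sub_domain}/{shortname}"
    # Numbered specifications (draft status and later):
    #   <domain>/<sub-domain>/<number>/<shortname>
    return f"{domain}/{sub_domain}/{number}/{shortname}"
```

For example, `spec_reference("rfc.vac.dev", "vac", "COSS", number=1)` yields the reference used for this document.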
## COSS Lifecycle

Every specification has an independent lifecycle that
clearly documents its current status.
For a specification to receive a lifecycle status,
a new specification SHOULD be presented by the team of the sub-domain.
After discussion amongst the contributors has reached a rough consensus,
as described in [RFC7282](https://www.rfc-editor.org/rfc/rfc7282.html),
the specification MAY begin the process to upgrade its status.

A specification has six possible states that reflect its maturity and
contractual weight:

### Raw Specifications

All new specifications are **raw** specifications.
Changes to raw specifications can be unilateral and arbitrary.
A sub-domain MAY use the **raw** status for new specifications
that live under their domain.
Raw specifications have no contractual weight.

### Draft Specifications

When raw specifications can be demonstrated,
they become **draft** specifications and are assigned numbers.
Changes to draft specifications should be done in consultation with users.
Draft specifications are contracts between the editors and implementers.

### Stable Specifications

When draft specifications are used by third parties, they become **stable** specifications.
Changes to stable specifications should be restricted to cosmetic ones,
errata, and clarifications.
Stable specifications are contracts between editors, implementers, and end-users.

### Deprecated Specifications

When stable specifications are replaced by newer draft specifications,
they become **deprecated** specifications.
Deprecated specifications should not be changed except
to indicate their replacements, if any.
Deprecated specifications are contracts between editors, implementers, and end-users.

### Retired Specifications

When deprecated specifications are no longer used in products,
they become **retired** specifications.
Retired specifications are part of the historical record.
They should not be changed except to indicate their replacements, if any.
Retired specifications have no contractual weight.

### Deleted Specifications

Deleted specifications are those that have not reached maturity (stable) and
were discarded.
They should not be used and are only kept for their historical value.
Only raw and draft specifications can be deleted.

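The transitions described in the sections above can be summarized as a small state machine. This is an illustrative sketch derived from this section's prose, not part of the specification itself.

```python
# Allowed lifecycle transitions, derived from the sections above:
# raw -> draft (when demonstrated), draft -> stable (third-party use),
# stable -> deprecated (replaced), deprecated -> retired (no longer used),
# and only raw and draft specifications can be deleted.
TRANSITIONS = {
    "raw": {"draft", "deleted"},
    "draft": {"stable", "deleted"},
    "stable": {"deprecated"},
    "deprecated": {"retired"},
    "retired": set(),
    "deleted": set(),
}


def can_transition(current: str, target: str) -> bool:
    # True when the lifecycle described above permits this status change.
    return target in TRANSITIONS.get(current, set())
```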
## Editorial control

A specification MUST have a single responsible editor,
the only person who SHALL change the status of the specification
through the lifecycle stages.

A specification MAY also have additional contributors who contribute changes to it.
It is RECOMMENDED to use a process similar to the [C4 process](https://github.com/unprotocols/rfc/blob/master/1/README.md)
to maximize the scale and diversity of contributions.

Unlike the original C4 process, however,
it is RECOMMENDED to use CC0 as a more permissive license alternative.
We SHOULD NOT use a GPL or GPL-like license.
One exception is this specification, as GPL was the original license for this specification.

The editor is responsible for accurately maintaining the state of specifications,
for retiring different versions that may live in other places, and
for handling all comments on the specification.

## Branching and Merging

Any member of the domain MAY branch a specification at any point.
This is done by copying the existing text and
creating a new specification with the same name and content, but a new number.
Since **raw** specifications are not assigned a number,
branching by any member of a sub-domain MAY differentiate specifications
based on date, contributors, or
version number within the document.
The ability to branch a specification is necessary in these circumstances:

- To change the responsible editor for a specification,
with or without the cooperation of the current responsible editor.
- To rejuvenate a specification that is stable but needs functional changes.
This is the proper way to make a new version of a specification
that is in stable or deprecated status.
- To resolve disputes between different technical opinions.

The responsible editor of a branched specification is the person who makes the branch.

Branches, including added contributions, are derived works and
thus licensed under the same terms as the original specification.
This means that contributors are guaranteed the right to merge changes made in branches
back into their original specifications.

Technically speaking, a branch is a *different* specification,
even if it carries the same name.
Branches have no special status except that accorded by the community.

## Conflict resolution

COSS resolves natural conflicts between teams and
vendors by allowing anyone to define a new specification.
There is no editorial control process except
that practised by the editor of a new specification.
The administrators of a domain (moderators)
may choose to interfere in editorial conflicts,
and may suspend or ban individuals for behaviour they consider inappropriate.

## Specification Structure

### Meta Information

Specifications MUST contain the following metadata.
It is RECOMMENDED that specification metadata is specified as a YAML header
(where possible).
This will enable programmatic access to specification metadata.

| Key | Value | Type | Example |
|-----|-------|------|---------|
| **shortname** | short name | string | 1/COSS |
| **title** | full name | string | Consensus-Oriented Specification System |
| **status** | status | string | draft |
| **category** | category | string | Best Current Practice |
| **tags** | 0 or several tags | list | waku-application, waku-core-protocol |
| **editor** | editor name/email | string | Oskar Thoren <oskarth@titanproxy.com> |
| **contributors** | contributors | list | - Pieter Hintjens <ph@imatix.com> - André Rebentisch <andre@openstandards.de> - Alberto Barrionuevo <abarrio@opentia.es> - Chris Puttick <chris.puttick@thehumanjourney.net> - Yurii Rashkovskii <yrashk@gmail.com> |

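As a concrete illustration, the metadata keys above appear in a YAML header like the following (values taken from this specification's own front matter; the `tags` line is omitted here since this document declares none):

```yaml
---
slug: 1
title: 1/COSS
name: Consensus-Oriented Specification System
status: draft
category: Best Current Practice
editor: Daniel Kaiser <danielkaiser@status.im>
contributors:
- Oskar Thoren <oskarth@titanproxy.com>
---
```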
### IFT/Vac RFC Process

> [!Note]
> This section is introduced to allow contributors to understand the IFT
> (Institute of Free Technology) Vac RFC specification process.
> Other organizations may make changes to this section according to their needs.

Vac is a department under the IFT organization that provides RFC (Request For Comments)
specification services.
This service works to help facilitate the RFC process, assuring standards are followed.
Contributors within the service SHOULD assist a *sub-domain* in creating a new specification,
editing a specification, and
promoting the status of a specification, along with other tasks.
Once a specification reaches some level of maturity by rough consensus,
the specification SHOULD enter the [Vac RFC](https://rfc.vac.dev/) process.
Similar to the IETF working group adoption described in [RFC6174](https://www.rfc-editor.org/rfc/rfc6174.html),
the Vac RFC process SHOULD facilitate all updates to the specification.

Specifications are introduced by projects,
under a specific *domain*, with the intention of becoming technically mature documents.
The IFT domain currently houses the following projects:

- [Status](https://status.app/)
- [Waku](https://waku.org/)
- [Codex](https://codex.storage/)
- [Nimbus](https://nimbus.team/)
- [Nomos](https://nomos.tech/)

When a specification is promoted to *draft* status,
the number that is assigned MAY be incremental
or determined by the *sub-domain* and the Vac RFC process.
Standards track specifications MUST be based on the
[Vac RFC template](../template.md) before obtaining a new status.
All changes, comments, and contributions SHOULD be documented.

## Conventions

Where possible, editors and contributors are encouraged to:

- Refer to and build on existing work when possible, especially IETF specifications.
- Contribute to existing specifications rather than reinvent their own.
- Use collaborative branching and merging as a tool for experimentation.
- Use Semantic Line Breaks: [sembr](https://sembr.org/).

## Appendix A. Color Coding

It is RECOMMENDED to use color coding to indicate a specification's status.
Color coded specifications SHOULD use the following color scheme,
with one color badge per status:

- raw
- draft
- stable
- deprecated
- retired
- deleted

---
|
||||
slug: 1
|
||||
title: 1/COSS
|
||||
name: Consensus-Oriented Specification System
|
||||
status: draft
|
||||
category: Best Current Practice
|
||||
editor: Daniel Kaiser <danielkaiser@status.im>
|
||||
contributors:
|
||||
- Oskar Thoren <oskarth@titanproxy.com>
|
||||
- Pieter Hintjens <ph@imatix.com>
|
||||
- André Rebentisch <andre@openstandards.de>
|
||||
- Alberto Barrionuevo <abarrio@opentia.es>
|
||||
- Chris Puttick <chris.puttick@thehumanjourney.net>
|
||||
- Yurii Rashkovskii <yrashk@gmail.com>
|
||||
- Jimmy Debe <jimmy@status.im>
|
||||
---
|
||||
|
||||
This document describes a consensus-oriented specification system (COSS)
|
||||
for building interoperable technical specifications.
|
||||
COSS is based on a lightweight editorial process that
|
||||
seeks to engage the widest possible range of interested parties and
|
||||
move rapidly to consensus through working code.
|
||||
|
||||
This specification is based on [Unprotocols 2/COSS](https://github.com/unprotocols/rfc/blob/master/2/README.md),
|
||||
used by the [ZeromMQ](https://rfc.zeromq.org/) project.
|
||||
It is equivalent except for some areas:
|
||||
|
||||
- recommending the use of a permissive licenses,
|
||||
such as CC0 (with the exception of this document);
|
||||
- miscellaneous metadata, editor, and format/link updates;
|
||||
- more inheritance from the [IETF Standards Process](https://www.rfc-editor.org/rfc/rfc2026.txt),
|
||||
e.g. using RFC categories: Standards Track, Informational, and Best Common Practice;
|
||||
- standards track specifications SHOULD
|
||||
follow a specific structure that both streamlines editing,
|
||||
and helps implementers to quickly comprehend the specification
|
||||
- specifications MUST feature a header providing specific meta information
|
||||
- raw specifications will not be assigned numbers
|
||||
- section explaining the [IFT](https://free.technology/)
|
||||
Request For Comments specification process managed by the Vac service department
|
||||
|
||||
## License
|
||||
|
||||
Copyright (c) 2008-24 the Editor and Contributors.
|
||||
|
||||
This Specification is free software;
|
||||
you can redistribute it and/or
|
||||
modify it under the terms of the GNU General Public License
|
||||
as published by the Free Software Foundation;
|
||||
either version 3 of the License, or (at your option) any later version.
|
||||
|
||||
This specification is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY;
|
||||
without even the implied warranty of MERCHANTABILITY or
|
||||
FITNESS FOR A PARTICULAR PURPOSE.
|
||||
See the GNU General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU General Public License
|
||||
along with this program;
|
||||
if not, see [gnu.org](http://www.gnu.org/licenses).
|
||||
|
||||
## Change Process
|
||||
|
||||
This document is governed by the [1/COSS](./coss.md) (COSS).
|
||||
|
||||
## Language
|
||||
|
||||
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
|
||||
"SHOULD NOT", "RECOMMENDED", "MAY", and
|
||||
"OPTIONAL" in this document are to be interpreted as described in
|
||||
[RFC 2119](http://tools.ietf.org/html/rfc2119).
|
||||
|
||||
## Goals
|
||||
|
||||
The primary goal of COSS is to facilitate the process of writing, proving, and
|
||||
improving new technical specifications.
|
||||
A "technical specification" defines a protocol, a process, an API, a use of language,
|
||||
a methodology, or any other aspect of a technical environment that
|
||||
can usefully be documented for the purposes of technical or social interoperability.
|
||||
|
||||
COSS is intended to above all be economical and rapid,
|
||||
so that it is useful to small teams with little time to spend on more formal processes.
|
||||
|
||||
Principles:
|
||||
|
||||
- We aim for rough consensus and running code; [inspired by the IETF Tao](https://www.ietf.org/about/participate/tao/).
|
||||
- Specifications are small pieces, made by small teams.
|
||||
- Specifications should have a clearly responsible editor.
|
||||
- The process should be visible, objective, and accessible to anyone.
|
||||
- The process should clearly separate experiments from solutions.
|
||||
- The process should allow deprecation of old specifications.
|
||||
|
||||
Specifications should take minutes to explain, hours to design, days to write,
|
||||
weeks to prove, months to become mature, and years to replace.
|
||||
Specifications have no special status except that accorded by the community.
|
||||
|
||||
## Architecture
|
||||
|
||||
COSS is designed around fast, easy to use communications tools.
|
||||
Primarily, COSS uses a wiki model for editing and publishing specifications texts.
|
||||
|
||||
- The *domain* is the conservancy for a set of specifications.
|
||||
- The *domain* is implemented as an Internet domain.
|
||||
- Each specification is a document together with references and attached resources.
|
||||
- A *sub-domain* is a initiative under a specific domain.
|
||||
|
||||
Individuals can become members of the *domain*
|
||||
by completing the necessary legal clearance.
|
||||
The copyright, patent, and trademark policies of the domain must be clarified
|
||||
in an Intellectual Property policy that applies to the domain.
|
||||
|
||||
Specifications exist as multiple pages, one page per version,
|
||||
(discussed below in "Branching and Merging"),
|
||||
which should be assigned URIs that MAY include an number identifier.
|
||||
|
||||
Thus, we refer to new specifications by specifying its domain,
|
||||
its sub-domain and short name.
|
||||
The syntax for a new specification reference is:
|
||||
|
||||
<domain>/<sub-domain>/<shortname>
|
||||
|
||||
For example, this specification should be **rfc.vac.dev/vac/COSS**,
|
||||
if the status were **raw**.
|
||||
|
||||
A number will be assigned to the specification when obtaining **draft** status.
|
||||
New versions of the same specification will be assigned a new number.
|
||||
The syntax for a specification reference is:
|
||||
|
||||
<domain>/<sub-domain>/<number>/<shortname>
|
||||
|
||||
For example, this specification is **rfc.vac.dev/vac/1/COSS**.
|
||||
The short form **1/COSS** may be used when referring to the specification
|
||||
from other specifications in the same domain.
|
||||
|
||||
Specifications (excluding raw specifications)
|
||||
carries a different number including branches.
|
||||
|
||||
## COSS Lifecycle
|
||||
|
||||
Every specification has an independent lifecycle that
|
||||
documents clearly its current status.
|
||||
For a specification to receive a lifecycle status,
|
||||
a new specification SHOULD be presented by the team of the sub-domain.
|
||||
After discussion amongst the contributors has reached a rough consensus,
|
||||
as described in [RFC7282](https://www.rfc-editor.org/rfc/rfc7282.html),
|
||||
the specification MAY begin the process to upgrade it's status.
|
||||
|
||||
A specification has five possible states that reflect its maturity and
|
||||
contractual weight:
|
||||
|
||||

|
||||
|
||||
### Raw Specifications

All new specifications are **raw** specifications.
Changes to raw specifications can be unilateral and arbitrary.
A sub-domain MAY use the **raw** status for new specifications
that live under its domain.
Raw specifications have no contractual weight.

### Draft Specifications

When raw specifications can be demonstrated,
they become **draft** specifications and are assigned numbers.
Changes to draft specifications should be done in consultation with users.
Draft specifications are contracts between the editors and implementers.

### Stable Specifications

When draft specifications are used by third parties, they become **stable** specifications.
Changes to stable specifications should be restricted to cosmetic ones,
errata and clarifications.
Stable specifications are contracts between editors, implementers, and end-users.

### Deprecated Specifications

When stable specifications are replaced by newer draft specifications,
they become **deprecated** specifications.
Deprecated specifications should not be changed except
to indicate their replacements, if any.
Deprecated specifications are contracts between editors, implementers and end-users.

### Retired Specifications

When deprecated specifications are no longer used in products,
they become **retired** specifications.
Retired specifications are part of the historical record.
They should not be changed except to indicate their replacements, if any.
Retired specifications have no contractual weight.

### Deleted Specifications

Deleted specifications are those that have not reached maturity (stable) and
were discarded.
They should not be used and are only kept for their historical value.
Only raw and draft specifications can be deleted.

## Editorial control

A specification MUST have a single responsible editor,
the only person who SHALL change the status of the specification
through the lifecycle stages.

A specification MAY also have additional contributors who contribute changes to it.
It is RECOMMENDED to use a process similar to the [C4 process](https://github.com/unprotocols/rfc/blob/master/1/README.md)
to maximize the scale and diversity of contributions.

Unlike the original C4 process, however,
it is RECOMMENDED to use CC0 as a more permissive license alternative.
We SHOULD NOT use the GPL or a GPL-like license.
One exception is this specification, as this was the original license for this specification.

The editor is responsible for accurately maintaining the state of specifications,
for retiring different versions that may live in other places and
for handling all comments on the specification.

## Branching and Merging

Any member of the domain MAY branch a specification at any point.
This is done by copying the existing text, and
creating a new specification with the same name and content, but a new number.
Since **raw** specifications are not assigned a number,
branching by any member of a sub-domain MAY differentiate specifications
based on date, contributors, or
version number within the document.
The ability to branch a specification is necessary in these circumstances:

- To change the responsible editor for a specification,
  with or without the cooperation of the current responsible editor.
- To rejuvenate a specification that is stable but needs functional changes.
  This is the proper way to make a new version of a specification
  that is in stable or deprecated status.
- To resolve disputes between different technical opinions.

The responsible editor of a branched specification is the person who makes the branch.

Branches, including added contributions, are derived works and
thus licensed under the same terms as the original specification.
This means that contributors are guaranteed the right to merge changes made in branches
back into their original specifications.

Technically speaking, a branch is a *different* specification,
even if it carries the same name.
Branches have no special status except that accorded by the community.

## Conflict resolution

COSS resolves natural conflicts between teams and
vendors by allowing anyone to define a new specification.
There is no editorial control process except
that practised by the editor of a new specification.
The administrators of a domain (moderators)
may choose to interfere in editorial conflicts,
and may suspend or ban individuals for behaviour they consider inappropriate.

## Specification Structure

### Meta Information

Specifications MUST contain the following metadata.
It is RECOMMENDED that specification metadata is specified as a YAML header
(where possible).
This will enable programmatic access to specification metadata.

| Key | Value | Type | Example |
|------------------|----------------------|--------|------------------------------------------|
| **shortname** | short name | string | 1/COSS |
| **title** | full name | string | Consensus-Oriented Specification System |
| **status** | status | string | draft |
| **category** | category | string | Best Current Practice |
| **tags** | 0 or several tags | list | waku-application, waku-core-protocol |
| **editor** | editor name/email | string | Oskar Thoren <oskarth@titanproxy.com> |
| **contributors** | contributors | list | - Pieter Hintjens <ph@imatix.com> - André Rebentisch <andre@openstandards.de> - Alberto Barrionuevo <abarrio@opentia.es> - Chris Puttick <chris.puttick@thehumanjourney.net> - Yurii Rashkovskii <yrashk@gmail.com> |

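As an illustration, the metadata in the table above could be expressed as a YAML header like the sketch below. The values are taken from the table's own example column; the exact field set and formatting conventions are up to each sub-domain.

```yaml
---
shortname: 1/COSS
title: Consensus-Oriented Specification System
status: draft
category: Best Current Practice
tags:
- waku-application
- waku-core-protocol
editor: Oskar Thoren <oskarth@titanproxy.com>
contributors:
- Pieter Hintjens <ph@imatix.com>
- André Rebentisch <andre@openstandards.de>
---
```

A header in this form can be parsed by any YAML frontmatter tooling, which is what enables the programmatic access mentioned above.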
### IFT/Vac RFC Process

> [!NOTE]
> This section is introduced to allow contributors to understand the IFT
> (Institute of Free Technology) Vac RFC specification process.
> Other organizations may make changes to this section according to their needs.

Vac is a department under the IFT organization that provides RFC (Request For Comments)
specification services.
This service works to help facilitate the RFC process, assuring standards are followed.
Contributors within the service SHOULD assist a *sub-domain* in creating a new specification,
editing a specification, and
promoting the status of a specification, along with other tasks.
Once a specification reaches some level of maturity by rough consensus,
the specification SHOULD enter the [Vac RFC](https://rfc.vac.dev) process.
Similar to the IETF working group adoption described in [RFC6174](https://www.rfc-editor.org/rfc/rfc6174.html),
the Vac RFC process SHOULD facilitate all updates to the specification.

Specifications are introduced by projects,
under a specific *domain*, with the intention of becoming technically mature documents.
The IFT domain currently houses the following projects:

- [Status](https://status.app)
- [Waku](https://waku.org/)
- [Codex](https://codex.storage/)
- [Nimbus](https://nimbus.team/)
- [Nomos](https://nomos.tech/)

When a specification is promoted to *draft* status,
the number that is assigned MAY be incremental
or chosen by the *sub-domain* together with the Vac RFC process.
Standards track specifications MUST be based on the
[Vac RFC template](../template.md) before obtaining a new status.
All changes, comments, and contributions SHOULD be documented.

## Conventions

Where possible, editors and contributors are encouraged to:

- Refer to and build on existing work, especially IETF specifications.
- Contribute to existing specifications rather than reinvent their own.
- Use collaborative branching and merging as a tool for experimentation.
- Use Semantic Line Breaks: [sembr](https://sembr.org/).

## Appendix A. Color Coding

It is RECOMMENDED to use color coding to indicate a specification's status.
Color-coded specifications SHOULD use the following scheme:

- ![raw](https://rfc.vac.dev/rfcs/images/raw.png)
- ![draft](https://rfc.vac.dev/rfcs/images/draft.png)
- ![stable](https://rfc.vac.dev/rfcs/images/stable.png)
- ![deprecated](https://rfc.vac.dev/rfcs/images/deprecated.png)
- ![retired](https://rfc.vac.dev/rfcs/images/retired.png)
- ![deleted](https://rfc.vac.dev/rfcs/images/deleted.png)

# Alice and Bob: batch data sync

```mscgen
msc {
  hscale="2", wordwraparcs=on;

  alice [label="Alice"],
  bob [label="Bob"];

  --- [label="batch data sync"];
  alice => alice [label="add messages to payload state"];
  alice >> bob [label="send payload with messages"];

  bob => bob [label="add acks to payload state"];
  bob >> alice [label="send payload with acks"];
}
```

# Alice and Bob: interactive data sync

```mscgen
msc {
  hscale="2", wordwraparcs=on;

  alice [label="Alice"],
  bob [label="Bob"];

  --- [label="interactive data sync"];
  alice => alice [label="add offers to payload state"];
  alice >> bob [label="send payload with offers"];

  bob => bob [label="add requests to payload state"];
  bob >> alice [label="send payload with requests"];

  alice => alice [label="add requested messages to state"];
  alice >> bob [label="send payload with messages"];

  bob => bob [label="add acks to payload state"];
  bob >> alice [label="send payload with acks"];
}
```

---
slug: 2
title: 2/MVDS
name: Minimum Viable Data Synchronization
status: stable
editor: Sanaz Taheri <sanaz@status.im>
contributors:
- Dean Eigenmann <dean@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
---

In this specification, we describe a minimum viable protocol for
data synchronization inspired by the Bramble Synchronization Protocol[^1].
This protocol is designed to ensure reliable messaging
between peers across an unreliable peer-to-peer (P2P) network where
they may be unreachable or unresponsive.

We present a reference implementation[^2]
including a simulation to demonstrate its performance.

## Definitions

| Term | Description |
|------------|-------------------------------------------------------------------------------------|
| **Peer** | The other nodes that a node is connected to. |
| **Record** | Defines a payload element of either the type `OFFER`, `REQUEST`, `MESSAGE` or `ACK`. |
| **Node** | Some process that is able to store data, do processing and communicate for MVDS. |

## Wire Protocol

### Secure Transport

This specification does not define anything related to the transport of packets.
It is assumed that this is abstracted in such a way that
any secure transport protocol could be easily implemented.
Likewise, properties such as confidentiality, integrity, authenticity and
forward secrecy are assumed to be provided by a layer below.

### Payloads

Payloads are implemented using [protocol buffers v3](https://developers.google.com/protocol-buffers/).

```protobuf
syntax = "proto3";

package vac.mvds;

message Payload {
  repeated bytes acks = 5001;
  repeated bytes offers = 5002;
  repeated bytes requests = 5003;
  repeated Message messages = 5004;
}

message Message {
  bytes group_id = 6001;
  int64 timestamp = 6002;
  bytes body = 6003;
}
```

*The payload field numbers are kept more "unique" to*
*ensure no overlap with other protocol buffers.*

Each payload contains the following fields:

- **Acks:** This field contains a list (can be empty)
  of `message identifiers` informing the recipient that the sender holds a specific message.
- **Offers:** This field contains a list (can be empty)
  of `message identifiers` that the sender would like to give to the recipient.
- **Requests:** This field contains a list (can be empty)
  of `message identifiers` that the sender would like to receive from the recipient.
- **Messages:** This field contains a list of messages (can be empty).

**Message Identifiers:** Each `message` has a message identifier calculated by
hashing the `group_id`, `timestamp` and `body` fields as follows:

```js
HASH("MESSAGE_ID", group_id, timestamp, body);
```

**Group Identifiers:** Each `message` is assigned into a **group**
using the `group_id` field;
groups are independent synchronization contexts between peers.

The current `HASH` function used is `sha256`.

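A minimal sketch of the identifier computation in Python. The spec does not fix the exact byte serialization of the fields, so the plain concatenation and the 8-byte big-endian timestamp encoding below are illustrative assumptions, not normative:

```python
import hashlib

def message_id(group_id: bytes, timestamp: int, body: bytes) -> bytes:
    # sha256 over the "MESSAGE_ID" domain tag and the three Message fields.
    # Field serialization (8-byte big-endian timestamp, plain concatenation)
    # is an illustrative assumption; a real implementation must fix one
    # canonical encoding shared by all peers.
    h = hashlib.sha256()
    h.update(b"MESSAGE_ID")
    h.update(group_id)
    h.update(timestamp.to_bytes(8, "big"))
    h.update(body)
    return h.digest()
```

Any change to `group_id`, `timestamp` or `body` yields a different identifier, which is what lets peers refer to messages unambiguously in offers, requests and acks.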
## Synchronization

### State

We refer to `state` as a set of records for the types `OFFER`, `REQUEST` and
`MESSAGE` that every node SHOULD store per peer.
`state` MUST NOT contain `ACK` records, as we do not retransmit those periodically.
The following information is stored for records:

- **Type** - Either `OFFER`, `REQUEST` or `MESSAGE`
- **Send Count** - The number of times a record has been sent to a peer.
- **Send Epoch** - The next epoch at which a record can be sent to a peer.

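The per-peer record described above can be sketched as a small data structure; the names are illustrative, not mandated by the spec:

```python
from dataclasses import dataclass
from enum import Enum, auto

class RecordType(Enum):
    # ACK is deliberately absent: acks are never stored in state.
    OFFER = auto()
    REQUEST = auto()
    MESSAGE = auto()

@dataclass
class StateRecord:
    type: RecordType      # OFFER, REQUEST or MESSAGE
    send_count: int = 0   # times this record has been sent to the peer
    send_epoch: int = 0   # next epoch at which it may be sent to the peer
```

A node would keep one such record per outstanding item, per peer, e.g. in a `dict[peer_id, dict[message_id, StateRecord]]`.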
### Flow

A maximum of one payload SHOULD be sent to peers per epoch;
this payload contains all `ACK`, `OFFER`, `REQUEST` and
`MESSAGE` records for the specific peer.
Payloads are created every epoch,
containing reactions to previously received records by peers or
new records being sent out by nodes.

Nodes MAY have two modes with which they can send records:
`BATCH` and `INTERACTIVE` mode.
The following rules dictate how nodes construct payloads
every epoch for any given peer for both modes.

> ***NOTE:** A node may send messages both in interactive and in batch mode.*

#### Interactive Mode

- A node initially offers a `MESSAGE` when attempting to send it to a peer.
  This means an `OFFER` is added to the next payload and state for the given peer.
- When a node receives an `OFFER`, a `REQUEST` is added to the next payload and
  state for the given peer.
- When a node receives a `REQUEST` for a previously sent `OFFER`,
  the `OFFER` is removed from the state and
  the corresponding `MESSAGE` is added to the next payload and
  state for the given peer.
- When a node receives a `MESSAGE`, the `REQUEST` is removed from the state and
  an `ACK` is added to the next payload for the given peer.
- When a node receives an `ACK`,
  the `MESSAGE` is removed from the state for the given peer.
- All records that require retransmission are added to the payload,
  given `Send Epoch` has been reached.

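The interactive-mode rules can be sketched as a handler over a per-peer state map keyed by message identifier. The structures and names are illustrative; in the real protocol, payload `messages` carry full `Message` records rather than bare identifiers:

```python
def handle_payload_interactive(state, payload, next_payload):
    """Apply the interactive-mode reaction rules for one received payload.

    state: dict mapping message id -> "OFFER" | "REQUEST" | "MESSAGE"
    payload / next_payload: dicts with "offers", "requests",
    "messages" and "acks" lists of message ids.
    """
    for mid in payload["offers"]:
        next_payload["requests"].append(mid)   # react to OFFER with REQUEST
        state[mid] = "REQUEST"
    for mid in payload["requests"]:
        if state.get(mid) == "OFFER":          # our OFFER was requested
            next_payload["messages"].append(mid)
            state[mid] = "MESSAGE"
    for mid in payload["messages"]:
        state.pop(mid, None)                   # drop our REQUEST, if any
        next_payload["acks"].append(mid)       # always acknowledge
    for mid in payload["acks"]:
        state.pop(mid, None)                   # MESSAGE delivered, forget it
```

Each bullet above maps to one loop; retransmission scheduling (the last bullet) is handled separately when the next payload is flushed at the epoch boundary.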

|
||||
|
||||
Figure 1: Delivery without retransmissions in interactive mode.
|
||||
|
||||
#### Batch Mode
|
||||
|
||||
1. When a node sends a `MESSAGE`,
|
||||
it is added to the next payload and the state for the given peer.
|
||||
2. When a node receives a `MESSAGE`,
|
||||
an `ACK` is added to the next payload for the corresponding peer.
|
||||
3. When a node receives an `ACK`,
|
||||
the `MESSAGE` is removed from the state for the given peer.
|
||||
4. All records that require retransmission are added to the payload,
|
||||
given `Send Epoch` has been reached.
|
||||
|
||||
<!-- diagram -->
|
||||
|
||||

|
||||
|
||||
Figure 2: Delivery without retransmissions in batch mode.
|
||||
|
||||
> ***NOTE:** Batch mode is higher bandwidth whereas interactive mode is higher latency.*
|
||||
|
||||
<!-- Interactions with state, flow chart with retransmissions? -->
|
||||
|
||||
### Retransmission
|
||||
|
||||
The record of the type `Type` SHOULD be retransmitted
|
||||
every time `Send Epoch` is smaller than or equal to the current epoch.
|
||||
|
||||
`Send Epoch` and `Send Count` MUST be increased every time a record is retransmitted.
|
||||
Although no function is defined on how to increase `Send Epoch`,
|
||||
it SHOULD be exponentially increased until reaching an upper bound
|
||||
where it then goes back to a lower epoch in order to
|
||||
prevent a record's `Send Epoch`'s from becoming too large.
|
||||
|
||||
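One possible `Send Epoch` update, sketched in Python. The base interval and cap are free parameters chosen here for illustration; the spec only requires exponential growth up to some bound, followed by a fall back to a lower epoch:

```python
def next_send_epoch(current_epoch: int, send_count: int,
                    base: int = 1, cap: int = 512) -> int:
    # Exponential backoff: 1, 2, 4, 8, ... epochs between retransmissions,
    # resetting to `base` once the interval would exceed `cap` so that
    # Send Epoch never grows unboundedly far into the future.
    interval = base * (2 ** send_count)
    if interval > cap:
        interval = base
    return current_epoch + interval
```

On each retransmission the caller would also increment `Send Count`, which drives the growing interval.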
> ***NOTE:** We do not retransmit `ACK`s as we do not know when they have arrived;
> therefore we simply resend them every time we receive a `MESSAGE`.*

## Formal Specification

MVDS has been formally specified using TLA+: <https://github.com/vacp2p/formalities/tree/master/MVDS>.

## Acknowledgments

- Preston van Loon
- Greg Markou
- Rene Nayman
- Jacek Sieka

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## Footnotes

[^1]: akwizgran et al. [BSP](https://code.briarproject.org/briar/briar-spec/blob/master/protocols/BSP.md). Briar.
[^2]: <https://github.com/vacp2p/mvds>
---
slug: 25
title: 25/LIBP2P-DNS-DISCOVERY
name: Libp2p Peer Discovery via DNS
status: deleted
editor: Hanno Cornelius <hanno@status.im>
contributors:
---

`25/LIBP2P-DNS-DISCOVERY` specifies a scheme to implement [`libp2p`](https://libp2p.io/)
peer discovery via DNS for Waku v2.
The generalised purpose is to retrieve an arbitrarily long, authenticated,
updateable list of [`libp2p` peers](https://docs.libp2p.io/concepts/peer-id/)
to bootstrap connection to a `libp2p` network.
Since [`10/WAKU2`](../../waku/standards/core/10/waku2.md)
currently specifies use of [`libp2p` peer identities](https://docs.libp2p.io/concepts/peer-id/),
this method is suitable for a new Waku v2 node
to discover other Waku v2 nodes to connect to.

This specification is largely based on [EIP-1459](https://eips.ethereum.org/EIPS/eip-1459),
with the only deviation being the type of address being encoded (`multiaddr` vs `enr`).
Also see [this earlier explainer](https://vac.dev/dns-based-discovery)
for more background on the suitability of DNS based discovery for Waku v2.

## List encoding

The peer list MUST be encoded as a [Merkle tree](https://www.wikiwand.com/en/Merkle_tree).
EIP-1459 specifies [the URL scheme](https://eips.ethereum.org/EIPS/eip-1459#specification)
to refer to such a DNS node list.
This specification uses the same approach, but with a `matree` scheme:

```yaml
matree://<key>@<fqdn>
```

where

- `matree` is the selected `multiaddr` Merkle tree scheme
- `<fqdn>` is the fully qualified domain name on which the list can be found
- `<key>` is the base32 encoding of the compressed 32-byte binary public key
  that signed the list.

The example URL from EIP-1459, adapted to the above scheme, becomes:

```yaml
matree://AM5FCQLWIZX2QFPNJAP7VUERCCRNGRHWZG3YYHIUV7BVDQ5FDPRT2@peers.example.org
```

Each entry within the Merkle tree MUST be contained within a [DNS TXT record](https://www.rfc-editor.org/rfc/rfc1035.txt)
and stored in a subdomain (except for the base URL `matree` entry).
The content of any TXT record
MUST be small enough to fit into the 512-byte limit imposed on UDP DNS packets,
which limits the number of hashes that can be contained within a branch entry.
The subdomain name for each entry
is the base32 encoding of the abbreviated keccak256 hash of its text content.
See [this example](https://eips.ethereum.org/EIPS/eip-1459#dns-record-structure)
of a fully populated tree for more information.

## Entry types

The following entry types are derived from [EIP-1459](https://eips.ethereum.org/EIPS/eip-1459)
and adapted for use with `multiaddrs`:

## Root entry

The tree root entry MUST use the following format:

```yaml
matree-root:v1 m=<ma-root> l=<link-root> seq=<sequence number> sig=<signature>
```

where

- `ma-root` and `link-root` refer to the root hashes of subtrees
  containing `multiaddrs` and links to other subtrees, respectively
- `sequence number` is the tree's update sequence number.
  This number SHOULD increase with each update to the tree.
- `signature` is a 65-byte secp256k1 EC signature
  over the keccak256 hash of the root record content,
  excluding the `sig=` part,
  encoded as URL-safe base64.

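A sketch of splitting a root entry into its fields, in Python. Signature verification (secp256k1 over the keccak256 hash) is omitted; it would need a cryptography library and the list's public key:

```python
def parse_matree_root(txt: str) -> dict:
    # Split "matree-root:v1 m=... l=... seq=... sig=..." into its fields.
    # Verifying `sig` against the publisher's key is out of scope here.
    prefix = "matree-root:v1 "
    if not txt.startswith(prefix):
        raise ValueError("not a matree-root:v1 entry")
    fields = dict(part.split("=", 1) for part in txt[len(prefix):].split())
    return {
        "ma_root": fields["m"],
        "link_root": fields["l"],
        "seq": int(fields["seq"]),
        "sig": fields["sig"],
    }
```

The parsed `seq` value is what a client compares against previously seen sequence numbers, per the client protocol below.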
## Branch entry

Branch entries MUST take the format:

```yaml
matree-branch:<h₁>,<h₂>,...,<hₙ>
```

where

- `<h₁>,<h₂>,...,<hₙ>` are the hashes of other subtree entries

## Leaf entries

There are two types of leaf entries:

### Link entries

For the subtree pointed to by `link-root`,
leaf entries MUST take the format:

```yaml
matree://<key>@<fqdn>
```

which links to a different list located in another domain.

### `multiaddr` entries

For the subtree pointed to by `ma-root`,
leaf entries MUST take the format:

```yaml
ma:<multiaddr>
```

which contains the `multiaddr` of a `libp2p` peer.

## Client protocol

A client MUST adhere to the [client protocol](https://eips.ethereum.org/EIPS/eip-1459#client-protocol)
as specified in EIP-1459,
adapted for usage with the `multiaddr` entry types below.

To find nodes at a given DNS name, a client MUST perform the following steps:

1. Resolve the TXT record of the DNS name and
   check whether it contains a valid `matree-root:v1` entry.
2. Verify the signature on the root against the known public key
   and check whether the sequence number is larger than or
   equal to any previous number seen for that name.
3. Resolve the TXT record of a hash subdomain indicated in the record
   and verify that the content matches the hash.
4. If the resolved entry is of type:

   - `matree-branch`: parse the list of hashes and continue resolving them (step 3).
   - `ma`: import the `multiaddr` and add it to a local list of discovered nodes.

## Copyright
|
||||
|
||||
Copyright and related rights waived via
|
||||
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).
|
||||
|
||||
## References
|
||||
|
||||
1. [`10/WAKU2`](../../waku/standards/core/10/waku2.md)
|
||||
1. [EIP-1459: Client Protocol](https://eips.ethereum.org/EIPS/eip-1459#client-protocol)
|
||||
1. [EIP-1459: Node Discovery via DNS](https://eips.ethereum.org/EIPS/eip-1459)
|
||||
1. [`libp2p`](https://libp2p.io/)
|
||||
1. [`libp2p` peer identity](https://docs.libp2p.io/concepts/peer-id/)
|
||||
1. [Merkle trees](https://www.wikiwand.com/en/Merkle_tree)
|
||||
---
slug: 25
title: 25/LIBP2P-DNS-DISCOVERY
name: Libp2p Peer Discovery via DNS
status: deleted
editor: Hanno Cornelius <hanno@status.im>
contributors:
---

`25/LIBP2P-DNS-DISCOVERY` specifies a scheme to implement [`libp2p`](https://libp2p.io/)
peer discovery via DNS for Waku v2.
The generalised purpose is to retrieve an arbitrarily long, authenticated,
updateable list of [`libp2p` peers](https://docs.libp2p.io/concepts/peer-id/)
to bootstrap connection to a `libp2p` network.
Since [`10/WAKU2`](../../waku/standards/core/10/waku2.md)
currently specifies use of [`libp2p` peer identities](https://docs.libp2p.io/concepts/peer-id/),
this method is suitable for a new Waku v2 node
to discover other Waku v2 nodes to connect to.

This specification is largely based on [EIP-1459](https://eips.ethereum.org/EIPS/eip-1459),
with the only deviation being the type of address being encoded (`multiaddr` vs `enr`).
Also see [this earlier explainer](https://vac.dev/dns-based-discovery)
for more background on the suitability of DNS-based discovery for Waku v2.

## List encoding

The peer list MUST be encoded as a [Merkle tree](https://www.wikiwand.com/en/Merkle_tree).
EIP-1459 specifies [the URL scheme](https://eips.ethereum.org/EIPS/eip-1459#specification)
to refer to such a DNS node list.
This specification uses the same approach, but with a `matree` scheme:

```yaml
matree://<key>@<fqdn>
```

where

- `matree` is the selected `multiaddr` Merkle tree scheme
- `<fqdn>` is the fully qualified domain name on which the list can be found
- `<key>` is the base32 encoding of the compressed 32-byte binary public key
that signed the list.

The example URL from EIP-1459, adapted to the above scheme, becomes:

```yaml
matree://AM5FCQLWIZX2QFPNJAP7VUERCCRNGRHWZG3YYHIUV7BVDQ5FDPRT2@peers.example.org
```

Each entry within the Merkle tree MUST be contained within a [DNS TXT record](https://www.rfc-editor.org/rfc/rfc1035.txt)
and stored in a subdomain (except for the base URL `matree` entry).
The content of any TXT record
MUST be small enough to fit into the 512-byte limit imposed on UDP DNS packets,
which limits the number of hashes that can be contained within a branch entry.
The subdomain name for each entry
is the base32 encoding of the abbreviated keccak256 hash of its text content.
See [this example](https://eips.ethereum.org/EIPS/eip-1459#dns-record-structure)
of a fully populated tree for more information.

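The subdomain derivation can be sketched as follows. Note that keccak256 is not in the Python standard library, so `sha3_256` stands in here purely to illustrate the shape of the computation (abbreviate the digest, base32-encode, strip padding); a real implementation MUST use keccak256, and the example multiaddr is made up.

```python
import base64
import hashlib

def subdomain_name(txt_content: str) -> str:
    """Derive the DNS subdomain label for a tree entry.

    EIP-1459 abbreviates the hash to its first 16 bytes and base32-encodes
    it without padding. sha3_256 below is a stand-in for keccak256, which
    Python's standard library does not provide.
    """
    digest = hashlib.sha3_256(txt_content.encode()).digest()[:16]
    return base64.b32encode(digest).decode().rstrip("=")

# A hypothetical multiaddr leaf entry:
label = subdomain_name("ma:/dns4/node.example.org/tcp/443/wss")
# 16 bytes -> 128 bits -> 26 base32 characters
```

Since 16 bytes encode to 26 base32 characters, every subdomain label fits comfortably within DNS label limits.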
## Entry types

The following entry types are derived from [EIP-1459](https://eips.ethereum.org/EIPS/eip-1459)
and adapted for use with `multiaddrs`:

## Root entry

The tree root entry MUST use the following format:

```yaml
matree-root:v1 m=<ma-root> l=<link-root> seq=<sequence-number> sig=<signature>
```

where

- `ma-root` and `link-root` refer to the root hashes of subtrees
containing `multiaddrs` and links to other subtrees, respectively
- `sequence-number` is the tree's update sequence number.
This number SHOULD increase with each update to the tree.
- `signature` is a 65-byte secp256k1 EC signature
over the keccak256 hash of the root record content,
excluding the `sig=` part,
encoded as URL-safe base64.

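Tokenising the root record is straightforward. A minimal sketch (the field values below are placeholders, and no signature or hash verification is performed):

```python
def parse_root_entry(entry: str) -> dict:
    """Split a matree-root TXT record into its fields.

    Only tokenises the record; signature and sequence-number checks
    are left to the caller.
    """
    prefix = "matree-root:v1 "
    if not entry.startswith(prefix):
        raise ValueError("not a matree-root:v1 entry")
    # Each field is key=value; split on the first '=' only, since the
    # base64 signature may itself contain '=' padding.
    fields = dict(part.split("=", 1) for part in entry[len(prefix):].split(" "))
    missing = {"m", "l", "seq", "sig"} - fields.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return fields

root = parse_root_entry(
    "matree-root:v1 m=MAROOTHASH l=LINKROOTHASH seq=3 sig=BASE64SIG"
)
```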
## Branch entry

Branch entries MUST take the format:

```yaml
matree-branch:<h₁>,<h₂>,...,<hₙ>
```

where

- `<h₁>,<h₂>,...,<hₙ>` are the hashes of other subtree entries

## Leaf entries

There are two types of leaf entries:

### Link entries

For the subtree pointed to by `link-root`,
leaf entries MUST take the format:

```yaml
matree://<key>@<fqdn>
```

which links to a different list located in another domain.

### `multiaddr` entries

For the subtree pointed to by `ma-root`,
leaf entries MUST take the format:

```yaml
ma:<multiaddr>
```

which contains the `multiaddr` of a `libp2p` peer.

## Client protocol

A client MUST adhere to the [client protocol](https://eips.ethereum.org/EIPS/eip-1459#client-protocol)
as specified in EIP-1459,
adapted for usage with `multiaddr` entry types as below.

To find nodes at a given DNS name, a client MUST perform the following steps:

1. Resolve the TXT record of the DNS name and
check whether it contains a valid `matree-root:v1` entry.
2. Verify the signature on the root against the known public key
and check whether the sequence number is larger than or
equal to any previous number seen for that name.
3. Resolve the TXT record of a hash subdomain indicated in the record
and verify that the content matches the hash.
4. If the resolved entry is of type:

- `matree-branch`: parse the list of hashes and continue resolving them (step 3).
- `ma`: import the `multiaddr` and add it to a local list of discovered nodes.

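The crawl described by steps 1, 3 and 4 can be sketched with an injected TXT resolver, keeping the code transport-agnostic. Signature and hash verification (step 2 and part of step 3) are deliberately omitted, and all hashes and multiaddrs in the toy zone are made up:

```python
def crawl_matree(root_domain, resolve_txt):
    """Walk a matree and collect multiaddrs.

    `resolve_txt` maps a DNS name to its TXT record content; a real
    client would back it with an actual DNS lookup and verify each
    record's hash and the root signature.
    """
    root = resolve_txt(root_domain)
    assert root.startswith("matree-root:v1 ")
    fields = dict(p.split("=", 1) for p in root.split(" ")[1:])
    pending = [fields["m"]]            # start at the multiaddr subtree
    found = []
    while pending:
        entry = resolve_txt(f"{pending.pop()}.{root_domain}")
        if entry.startswith("matree-branch:"):
            pending.extend(entry[len("matree-branch:"):].split(","))
        elif entry.startswith("ma:"):
            found.append(entry[len("ma:"):])
    return found

# A toy in-memory zone standing in for real TXT records:
zone = {
    "peers.example.org": "matree-root:v1 m=MAROOT l=LINKROOT seq=1 sig=SIG",
    "MAROOT.peers.example.org": "matree-branch:H1,H2",
    "H1.peers.example.org": "ma:/ip4/10.0.0.1/tcp/60000",
    "H2.peers.example.org": "ma:/ip4/10.0.0.2/tcp/60000",
}
addrs = crawl_matree("peers.example.org", zone.__getitem__)
```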
## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [`10/WAKU2`](../../waku/standards/core/10/waku2.md)
1. [EIP-1459: Client Protocol](https://eips.ethereum.org/EIPS/eip-1459#client-protocol)
1. [EIP-1459: Node Discovery via DNS](https://eips.ethereum.org/EIPS/eip-1459)
1. [`libp2p`](https://libp2p.io/)
1. [`libp2p` peer identity](https://docs.libp2p.io/concepts/peer-id/)
1. [Merkle trees](https://www.wikiwand.com/en/Merkle_tree)

# Alice and Bob: remote log data sync
msc {
  hscale="2", wordwraparcs=on;

  alice [label="Alice"],
  cas [label="CAS"],
  ns [label="NS"],
  bob [label="Bob"];

  --- [label="Alice replicates data to a remote log"];
  alice => cas [label="Add content"];
  cas => alice [label="Address"];
  alice => ns [label="Update NameUpdate"];
  ns => alice [label="Response"];

  --- [label="Bob comes online"];
  bob => ns [label="Fetch"];
  ns => bob [label="Content"];
  bob => cas [label="Fetch Query"];
  cas => bob [label="Content"];
}

---
slug: 3
title: 3/REMOTE-LOG
name: Remote log specification
status: draft
editor: Oskar Thorén <oskarth@titanproxy.com>
contributors:
- Dean Eigenmann <dean@status.im>
---

A remote log is a replication of a local log.
This means a node can read data that originally came from a node that is offline.

This specification is complemented by a proof of concept implementation[^1].

## Definitions

| Term       | Definition                                                                |
| ---------- | ------------------------------------------------------------------------- |
| CAS        | Content-addressed storage. Stores data that can be addressed by its hash. |
| NS         | Name system. Associates mutable data to a name.                           |
| Remote log | Replication of a local log at a different location.                       |

## Wire Protocol

### Secure Transport, storage, and name system

This specification does not define anything related to secure transport,
content-addressed storage, or the name system. It is assumed these capabilities
are abstracted away in such a way that any such protocol can easily be
implemented.

<!-- TODO: Elaborate on properties required here. -->

### Payloads

Payloads are implemented using [protocol buffers v3](https://developers.google.com/protocol-buffers/).

**CAS service**:

```protobuf
syntax = "proto3";

package vac.cas;

service CAS {
  rpc Add(Content) returns (Address) {}
  rpc Get(Address) returns (Content) {}
}

message Address {
  bytes id = 1;
}

message Content {
  bytes data = 1;
}
```

<!-- XXX/TODO: Can we get rid of the id/data complication and just use bytes? -->

**NS service**:

```protobuf
syntax = "proto3";

package vac.cas;

service NS {
  rpc Update(NameUpdate) returns (Response) {}
  rpc Fetch(Query) returns (Content) {}
}

message NameUpdate {
  string name = 1;
  bytes content = 2;
}

message Query {
  string name = 1;
}

message Content {
  bytes data = 1;
}

message Response {
  bytes data = 1;
}
```

<!-- XXX: Response and data type a bit weird, Ok/Err enum? -->
<!-- TODO: Do we want NameInit here? -->

**Remote log:**

```protobuf
syntax = "proto3";

package vac.cas;

message RemoteLog {
  repeated Pair pair = 1;
  bytes tail = 2;

  message Pair {
    bytes remoteHash = 1;
    bytes localHash = 2;
    bytes data = 3;
  }
}
```

<!-- TODO: Better name for Pair, Mapping? -->

<!-- TODO: Consider making more useful in conjunction with metadata field.
It makes sense to explicitly list what sequence a message is <local hash,
remote hash, data, seqid> this way I can easily sync a messages prior or
after a specific number.
To enable this to be dynamic it might make sense to add page info so
that I am aware which page I can find seqid on -->

## Synchronization

### Roles

There are four fundamental roles:

1. Alice
2. Bob
3. Name system (NS)
4. Content-addressed storage (CAS)

The *remote log* protobuf is what is stored in the name system.

"Bob" can represent anything from 0 to N participants. Unlike Alice,
Bob only needs read-only access to NS and CAS.

<!-- TODO: Document random node as remote log -->
<!-- TODO: Document how to find initial remote log (e.g. per sync contexts) -->

### Flow

<!-- diagram -->

![Remote log data synchronization](./images/remote-log.png)

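The flow in the figure above can be exercised with minimal in-memory stand-ins for the CAS and NS services (a sketch only; the class and method names are illustrative, not part of the specification):

```python
import hashlib

class Cas:
    """In-memory content-addressed storage: Add returns the content hash."""
    def __init__(self):
        self._store = {}

    def add(self, content: bytes) -> bytes:
        address = hashlib.sha256(content).digest()
        self._store[address] = content
        return address

    def get(self, address: bytes) -> bytes:
        return self._store[address]

class Ns:
    """In-memory name system: a name maps to mutable content."""
    def __init__(self):
        self._names = {}

    def update(self, name: str, content: bytes) -> None:
        self._names[name] = content

    def fetch(self, name: str) -> bytes:
        return self._names[name]

cas, ns = Cas(), Ns()

# Alice replicates data to a remote log.
addr = cas.add(b"hello from alice")
ns.update("alice-log", addr)

# Bob comes online and reads it back.
data = cas.get(ns.fetch("alice-log"))
```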
<!-- Document the flow wrt operations -->

### Remote log

The remote log lets receiving nodes know what data they are missing. Depending
on the specific requirements and capabilities of the nodes and name system, the
information can be referred to differently. We distinguish between three rough
modes:

1. Fully replicated log
2. Normal sized page with CAS mapping
3. "Linked list" mode - minimally sized page with CAS mapping

**Data format:**

```yaml
| H1_3 | H2_3 |
| H1_2 | H2_2 |
| H1_1 | H2_1 |
| ------------|
| next_page   |
```

Here the upper section indicates a list of ordered pairs, and the lower section
contains the address for the next page chunk. `H1` is the native hash function,
and `H2` is the one used by the CAS. The numbers correspond to the messages.

To indicate which CAS is used, a remote log SHOULD use a multiaddr.

**Embedded data:**

A remote log MAY also choose to embed the wire payloads that correspond to the
native hashes. This bypasses the need for a dedicated CAS and additional
round-trips, with a trade-off in bandwidth usage.

```yaml
| H1_3 | | C_3 |
| H1_2 | | C_2 |
| H1_1 | | C_1 |
| -------------|
| next_page    |
```

Here `C` stands for the content that would be stored at the CAS.

Both patterns can be used in parallel, e.g. by storing the last `k` messages
directly and using CAS pointers for the rest. Together with the `next_page`
semantics, this gives users flexibility in terms of bandwidth and
latency/indirection, all the way from a simple linked list to a fully replicated
log. The latter is useful for things like backups on durable storage.

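Full replication then amounts to walking the `next_page` chain until it runs out. A sketch, with `fetch_page` as an injected callable returning `(pairs, next_page_address)` — the helper names and toy page contents are illustrative, not part of the specification:

```python
def replicate(head_address, fetch_page):
    """Collect all (hash, content) pairs by following next_page pointers."""
    pairs = []
    address = head_address
    while address is not None:
        page_pairs, address = fetch_page(address)
        pairs.extend(page_pairs)
    return pairs

# Toy pages: the newest page points back at an older one.
pages = {
    "page2": ([("H1_3", "C_3")], "page1"),
    "page1": ([("H1_2", "C_2"), ("H1_1", "C_1")], None),
}
log = replicate("page2", pages.__getitem__)
```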
### Next page semantics

The pointer to the 'next page' is another remote log entry, at a previous point
in time.

<!-- TODO: Determine requirement re overlapping, adjacent,
and/or missing entries -->

<!-- TODO: Document message ordering append only requirements -->

### Interaction with MVDS

[vac.mvds.Message](../2/mvds.md/#payloads) payloads are the only payloads
that MUST be uploaded.
Other message types MAY be uploaded, depending on the implementation.

## Acknowledgments

TBD.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## Footnotes

[^1]: <https://github.com/vacp2p/research/tree/master/remote_log>

---
slug: 4
title: 4/MVDS-META
name: MVDS Metadata Field
status: draft
editor: Sanaz Taheri <sanaz@status.im>
contributors:
- Dean Eigenmann <dean@status.im>
- Andrea Maria Piana <andreap@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
---

In this specification, we describe a method to construct message history that
will aid the consistency guarantees of [2/MVDS](../2/mvds.md).
Additionally,
we explain how data sync can be used for more lightweight messages that
do not require full synchronization.

## Motivation

To synchronize conversational messages more efficiently,
information should be provided that allows a node to more effectively
synchronize the dependencies of any given message.

## Format

We introduce the metadata message, which is used to convey information about a message
and how it SHOULD be handled.

```protobuf
package vac.mvds;

message Metadata {
  repeated bytes parents = 1;
  bool ephemeral = 2;
}
```

Nodes MAY transmit a `Metadata` message by extending the MVDS [message](../2/mvds.md/#payloads)
with a `metadata` field.

```diff
message Message {
  bytes group_id = 6001;
  int64 timestamp = 6002;
  bytes body = 6003;
+ Metadata metadata = 6004;
}
```

### Fields

| Name        | Description                                                                              |
| ----------- | ---------------------------------------------------------------------------------------- |
| `parents`   | list of parent [`message identifier`s](../2/mvds.md/#payloads) for the specific message. |
| `ephemeral` | indicates whether a message is ephemeral or not.                                         |

## Usage

### `parents`

This field contains a list of parent [`message identifier`s](../2/mvds.md/#payloads)
for the specific message.
It MUST NOT list as a parent any message whose `ack` flag was set to `false`.
This establishes a directed acyclic graph (DAG)[^2] of persistent messages.

Nodes MAY buffer messages until dependencies are satisfied for causal consistency[^3];
they MAY also pass the messages straight away for eventual consistency[^4].

A parent of a new message is any earlier message that
the node is aware of and that has no children.

The number of parents for a given message is bound by [0, N],
where N is the number of nodes participating in the conversation;
therefore the space requirement for the `parents` field is O(N).

If a message has no parents it is considered a root.
There can be multiple roots, which might be disconnected,
giving rise to multiple DAGs.

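Parent selection therefore reduces to finding the childless messages in the known DAG. A sketch (the helper names and toy message ids are illustrative, not part of the specification):

```python
def select_parents(known_messages):
    """Return the ids of known messages that have no children.

    known_messages maps message id -> list of that message's parent ids.
    A message has a child iff some other message lists it as a parent.
    """
    referenced = {p for parents in known_messages.values() for p in parents}
    return sorted(m for m in known_messages if m not in referenced)

dag = {
    "m1": [],        # root
    "m2": ["m1"],
    "m3": ["m1"],
}
parents = select_parents(dag)  # m2 and m3 are childless
```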
### `ephemeral`

When the `ephemeral` flag is set to `false`,
a node MUST send an acknowledgment when it has received and processed a message.
If it is set to `true`, it SHOULD NOT send any acknowledgment.
The flag is `false` by default.

Nodes MAY decide not to persist ephemeral messages;
however, they MUST NOT be shared as part of the message history.

Nodes SHOULD send ephemeral messages in batch mode,
as their delivery does not need to be guaranteed.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## Footnotes

[^1]: [2/MVDS](../2/mvds.md)
[^2]: [directed_acyclic_graph](https://en.wikipedia.org/wiki/Directed_acyclic_graph)
[^3]: Jepsen. [Causal Consistency](https://jepsen.io/consistency/models/causal).
Jepsen, LLC.
[^4]: <https://en.wikipedia.org/wiki/Eventual_consistency>

# Vac RFCs

Vac builds public good protocols for the decentralised web.
Vac acts as a custodian for the protocols that live in the RFC-Index repository.
With the goal of widespread adoption,
Vac will make sure the protocols adhere to a set of principles,
including but not limited to liberty, security, privacy, decentralisation and inclusivity.

To learn more, visit [Vac Research](https://vac.dev/).

# Vac Raw Specifications

All Vac specifications that have not reached **draft** status will live in this repository.
To learn more about **raw** specifications, take a look at [1/COSS](../1/coss.md).

# Vac Raw Specifications
|
||||
|
||||
All Vac specifications that have not reached **draft** status will live in this repository.
|
||||
To learn more about **raw** specifications, take a look at [1/COSS](../1/coss.md).
|
||||
|
||||
@@ -1,252 +0,0 @@

---
title: HASHGRAPHLIKE CONSENSUS
name: Hashgraphlike Consensus Protocol
status: raw
category: Standards Track
tags:
editor: Ugur Sen [ugur@status.im](mailto:ugur@status.im)
contributors: seemenkina [ekaterina@status.im](mailto:ekaterina@status.im)
---

## Abstract

This document specifies a scalable, decentralized, and Byzantine Fault Tolerant (BFT)
consensus mechanism inspired by Hashgraph, designed for binary decision-making in P2P networks.

## Motivation

Consensus is one of the essential components of decentralization.
In particular, in decentralized group messaging applications it is used for
binary decision-making to govern the group.
Therefore, each user contributes to the decision-making process.
Besides achieving decentralization, the consensus mechanism MUST be strong:

- Under the assumption of at least `2/3` honest users in the network,
each user MUST conclude the same decision.

- To preserve the scalability of the messaging application,
message propagation in the network MUST occur within `O(log n)` rounds,
where `n` is the total number of peers.
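
The `O(log n)` bound can be illustrated with an idealized push-gossip model in which the set of informed peers doubles each round (1, 2, 4, ...). This doubling model is an illustration only, not part of the specification:

```python
import math

def estimated_rounds(n: int) -> int:
    """Rounds until all n peers hold the message, assuming the set of
    informed peers doubles each round (idealized push gossip)."""
    if n <= 1:
        return 0
    return math.ceil(math.log2(n))

# Under this model, roughly 10 rounds suffice for 1000 peers.
rounds_for_1000 = estimated_rounds(1000)
```

Real networks deviate from perfect doubling, but the logarithmic growth is what keeps round counts small as groups grow.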

## Format Specification

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document
are to be interpreted as described in [2119](https://www.ietf.org/rfc/rfc2119.txt).

## Flow

Any user in the group initializes the consensus by creating a proposal.
Next, the user broadcasts the proposal to the whole network.
Upon receiving the proposal, each user validates it and
adds its vote (yes or no) together with its signature and timestamp.
The user then sends the proposal and vote to a random peer in a P2P setup,
or to a subscribed gossipsub channel if gossip-based messaging is used.
Each receiving user therefore first validates the signatures and then adds its new vote.
Each message sent counts as a round.
After `log(n)` rounds, all users in the network have the others' votes,
provided at least `2/3` of the users are honest, where honest means following the protocol.

In general, the voting-based consensus consists of the following phases:

1. Initialization of voting
2. Exchanging votes across the rounds
3. Counting the votes

### Assumptions

- The users in the P2P network can discover other nodes, or they subscribe to the same channel in a gossipsub.
- We MAY have non-reliable (silent) nodes.
- Proposal owners MUST know the number of voters.

## 1. Initialization of voting

A user initializes the voting with the proposal payload, which is
implemented using [protocol buffers v3](https://protobuf.dev/) as follows:

```protobuf
syntax = "proto3";

package vac.voting;

message Proposal {
  string name = 10;                  // Proposal name
  string payload = 11;               // Proposal description
  uint32 proposal_id = 12;           // Unique identifier of the proposal
  bytes proposal_owner = 13;         // Public key of the creator
  repeated Vote votes = 14;          // Vote list in the proposal
  uint32 expected_voters_count = 15; // Maximum number of distinct voters
  uint32 round = 16;                 // Number of Votes
  uint64 timestamp = 17;             // Creation time of proposal
  uint64 expiration_time = 18;       // The time interval that the proposal is active
  bool liveness_criteria_yes = 19;   // Shows how silent peers' votes are counted
}

message Vote {
  uint32 vote_id = 20;      // Unique identifier of the vote
  bytes vote_owner = 21;    // Voter's public key
  uint32 proposal_id = 22;  // Linking votes and proposals
  int64 timestamp = 23;     // Time when the vote was cast
  bool vote = 24;           // Vote bool value (true/false)
  bytes parent_hash = 25;   // Hash of the owner's previous Vote
  bytes received_hash = 26; // Hash of the previously received Vote
  bytes vote_hash = 27;     // Hash of all previously defined fields in Vote
  bytes signature = 28;     // Signature of vote_hash
}
```

To initiate a consensus for a proposal,
a user MUST complete all the fields in the proposal, including attaching its `vote`
and the `payload` that states the purpose of the proposal.
Notably, `parent_hash` and `received_hash` are empty strings because there is no previous or received hash.
The initialization phase ends when the user who created the proposal sends it
to a random peer in the network or publishes it to the specific channel.
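
The `vote_hash` construction for an initial vote can be sketched as follows. The SHA-256 hash and the JSON field encoding are illustrative assumptions; the RFC does not mandate a concrete serialization:

```python
import hashlib
import json

def vote_hash(vote: dict) -> bytes:
    # Hash all previously defined Vote fields, in field order.
    # JSON encoding and SHA-256 are assumptions for illustration only.
    fields = [
        vote["vote_id"],
        vote["vote_owner"].hex(),
        vote["proposal_id"],
        vote["timestamp"],
        vote["vote"],
        vote["parent_hash"].hex(),
        vote["received_hash"].hex(),
    ]
    return hashlib.sha256(json.dumps(fields).encode()).digest()

# Initial vote: parent_hash and received_hash are empty, per the spec.
v1 = {
    "vote_id": 1,
    "vote_owner": b"\x01" * 33,  # hypothetical public key
    "proposal_id": 42,
    "timestamp": 1700000000,
    "vote": True,
    "parent_hash": b"",
    "received_hash": b"",
}
v1["vote_hash"] = vote_hash(v1)  # this digest is what gets signed as `signature`
```

Any deterministic, collision-resistant serialization would serve the same purpose; what matters is that every peer computes the same digest over the same fields.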

## 2. Exchanging votes across the peers

Once a peer receives the proposal message `P_1` from a 1-1 or a gossipsub channel, it does the following checks:

1. Check the signature of each vote in the proposal; in particular for proposal `P_1`,
verify the signature of `V_1`, where `V_1 = P_1.votes[0]`, with `V_1.signature` and `V_1.vote_owner`.
2. Do the `parent_hash` check: if there are repeated votes from the same sender,
check that the hash of the former vote is equal to the `parent_hash` of the later vote.
3. Do the `received_hash` check: if there are multiple votes in a proposal,
check that the hash of a vote is equal to the `received_hash` of the next one.
4. After successful verification of the signature and hashes,
the receiving peer proceeds to generate `P_2` containing a new vote `V_2` as follows:

    4.1. Add its public key as `V_2.vote_owner`.

    4.2. Set `timestamp`.

    4.3. Set the boolean `vote`.

    4.4. Set `V_2.parent_hash = 0` if there is no previous vote from this peer, otherwise the hash of the peer's previous vote.

    4.5. Set `V_2.received_hash = hash(P_1.votes[0])`.

    4.6. Set `proposal_id` for the `vote`.

    4.7. Calculate `vote_hash` as the hash of all previously defined fields in the Vote:
    `V_2.vote_hash = hash(vote_id, owner, proposal_id, timestamp, vote, parent_hash, received_hash)`

    4.8. Sign `vote_hash` with the private key corresponding to the public key in `vote_owner`, then set `V_2.signature`.

5. Create `P_2` by adding `V_2` as follows:

    5.1. Assign `P_2.name`, `P_2.proposal_id`, and `P_2.proposal_owner` to be identical to those in `P_1`.

    5.2. Add `V_2` to the `P_2.votes` list.

    5.3. Increase the round by one, namely `P_2.round = P_1.round + 1`.

    5.4. Verify that the proposal has not expired by checking that `P_2.timestamp - current_time < P_1.expiration_time`.
    If this does not hold, other peers ignore the message.

After the peer creates the proposal `P_2` with its vote `V_2`,
it sends it to a random peer in the network or
publishes it to the specific channel.
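
The hash-chain checks in steps 2 and 3 can be sketched as below. `vote_digest` stands in for the vote hash function and uses an illustrative JSON/SHA-256 encoding that the RFC does not prescribe:

```python
import hashlib
import json

def vote_digest(vote: dict) -> bytes:
    # Illustrative digest over the hash-relevant Vote fields.
    fields = [vote["vote_id"], vote["vote"],
              vote["parent_hash"].hex(), vote["received_hash"].hex()]
    return hashlib.sha256(json.dumps(fields).encode()).digest()

def verify_chains(votes: list) -> bool:
    # received_hash check (step 3): the hash of each vote must equal
    # the received_hash of the vote that follows it.
    for prev, cur in zip(votes, votes[1:]):
        if vote_digest(prev) != cur["received_hash"]:
            return False
    # parent_hash check (step 2): for repeated votes from the same
    # owner, the later vote must reference the hash of the former.
    last_by_owner = {}
    for v in votes:
        owner = v["owner"]
        if owner in last_by_owner and v["parent_hash"] != vote_digest(last_by_owner[owner]):
            return False
        last_by_owner[owner] = v
    return True
```

A tampered vote anywhere in the list breaks the digest of every later link, which is what lets a receiver detect history rewriting without trusting any single sender.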

## 3. Determining the result

Because consensus depends on meeting a quorum threshold,
each peer MUST verify the accumulated votes to determine whether the necessary conditions have been satisfied.
The voting result is set to YES if the majority of the `2n/3` distinct peers vote YES.

To verify, the `findDistinctVoter` method processes the proposal by traversing its `Votes` list to determine the number of unique voters.

If this method returns true, the peer proceeds with strong validation,
which ensures that if any honest peer reaches a decision,
no other honest peer can arrive at a conflicting result:

1. Check each `signature` in the votes as shown in [Section 2](#2-exchanging-votes-across-the-peers).

2. Check the `parent_hash` chain: if there are multiple votes from the same owner, namely `vote_i` and `vote_i+1` respectively,
the parent hash of `vote_i+1` MUST be the hash of `vote_i`.

3. Check the `received_hash` chain: each received hash of `vote_i+1` MUST be equal to the hash of `vote_i`.

4. Check the `timestamp` against replay attacks.
In particular, the `timestamp` MUST NOT be older than the determined threshold.

5. Check that the liveness criteria defined in the Liveness section are satisfied.

If a proposal passes all the checks,
the `countVote` method counts each YES vote from the list of votes.
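
A minimal sketch of `findDistinctVoter` and `countVote` combined with the decision rule from the Liveness section (quorum of at least `2n/3` distinct voters, then a strict YES majority, with unanimity required when `n ≤ 2`). The function names mirror the text, but the exact signatures and the one-vote-per-voter assumption are illustrative:

```python
def find_distinct_voters(votes: list) -> set:
    # Traverse the Votes list and collect the unique voter keys.
    return {v["vote_owner"] for v in votes}

def count_votes(votes: list, n: int):
    """Return True/False once quorum is met, or None while pending.
    Assumes each distinct voter casts one counted vote."""
    voters = find_distinct_voters(votes)
    if 3 * len(voters) < 2 * n:     # fewer than 2n/3 distinct voters
        return None                 # quorum not yet reached
    yes = sum(1 for v in votes if v["vote"])
    if n <= 2:                      # small groups need unanimity
        return yes == len(votes)
    return 2 * yes > len(votes)     # strict majority of cast votes

# 9 expected peers; 7 distinct voters have voted, 4 of them YES.
votes = [{"vote_owner": f"pk{i}", "vote": i % 3 != 0} for i in range(7)]
result = count_votes(votes, n=9)
```

Because every peer runs the same deterministic rule over a verified vote set, two honest peers that both reach a verdict cannot disagree.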

## 4. Properties

The consensus mechanism satisfies liveness and security properties as follows:

### Liveness

Liveness refers to the ability of the protocol to eventually reach a decision when sufficient honest participation is present.
In this protocol, if `n > 2` and more than `n/2` of the votes among at least `2n/3` distinct peers are YES,
then the consensus result is defined as YES; otherwise, when `n ≤ 2`, unanimous agreement (100% YES votes) is required.

The peer calculates the result locally as shown in [Section 3](#3-determining-the-result).
From the [hashgraph property](https://hedera.com/learning/hedera-hashgraph/what-is-hashgraph-consensus),
if a node can calculate the result of a proposal,
then no peer can calculate the opposite result.
Still, reliability issues can cause situations where some peers do not receive enough messages
and therefore cannot calculate the consensus result.

Rounds are incremented each time a peer adds its vote and sends the new proposal.
Collecting the required `2n/3` distinct peers' votes takes:

1. `2n/3` rounds in pure P2P networks
2. `2` rounds in gossipsub

Since the message complexity is `O(1)` in the gossipsub channel,
in case the network has reliability issues,
the second round is used by peers that could not receive all the messages from the first round.

If an honest and online peer has received at least one vote but not enough to reach consensus,
it MAY continue to propagate its own vote, and any votes it has received, to support message dissemination.
This process can continue beyond the expected round count,
as long as it remains within the expiration time defined in the proposal.
The expiration time acts as a soft upper bound to ensure that consensus is either reached or aborted within a bounded timeframe.

#### Equality of votes

A tie occurs when, after verifying at least `2n/3` distinct voters and
applying `liveness_criteria_yes`, the number of YES and NO votes is equal.

Handling ties is an application-level decision. The application MUST define a deterministic tie policy:

- RETRY: re-run the vote with a new `proposal_id`, optionally adjusting parameters.
- REJECT: abort the proposal and return the voting result as NO.

The chosen policy SHOULD be communicated consistently to all peers via the proposal's `payload` to ensure convergence on the same outcome.

### Silent Node Management

Silent nodes are nodes that do not participate in the voting with YES or NO.
There are two possible ways of counting the silent peers' votes:

1. **Silent peers mean YES:**
Silent peers are counted as YES votes if the application prefers strong rejection, i.e. rejecting requires explicit NO votes.
2. **Silent peers mean NO:**
Silent peers are counted as NO votes if the application prefers strong acceptance, i.e. accepting requires explicit YES votes.

By default, `liveness_criteria_yes` is set to true, which means silent peers' votes are counted as YES.
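
The two policies can be sketched as a single helper. `expected_voters_count` comes from the proposal; folding silent peers into the tally this way is an illustrative assumption about how an implementation might apply `liveness_criteria_yes`:

```python
def tally_with_silent(yes: int, no: int, expected_voters_count: int,
                      liveness_criteria_yes: bool = True) -> tuple:
    """Fold silent (non-voting) peers into the tally:
    liveness_criteria_yes=True counts them as YES, False as NO."""
    silent = expected_voters_count - (yes + no)
    assert silent >= 0, "more votes than expected voters"
    if liveness_criteria_yes:
        return yes + silent, no
    return yes, no + silent

# 10 expected voters, 4 explicit YES, 3 explicit NO, 3 silent:
# the default policy counts the 3 silent peers as YES.
default_tally = tally_with_silent(4, 3, 10)
```

Which policy is appropriate depends on whether the application treats inaction as consent or as objection.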
### Security

This RFC uses cryptographic primitives to prevent
malicious behaviours such as the following:

- Vote forgery attempt: creating unsigned or invalid votes.
- Inconsistent voting: a malicious peer submits conflicting votes (e.g., YES to some peers and NO to others)
in different stages of the protocol, violating vote consistency and attempting to undermine consensus.
- Integrity-breaking attempt: tampering with history by changing previous votes.
- Replay attack: storing old votes to maliciously reuse them in fresh voting.

## 5. Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/)

## 6. References

- [Hedera Hashgraph](https://hedera.com/learning/hedera-hashgraph/what-is-hashgraph-consensus)
- [Gossip about gossip](https://docs.hedera.com/hedera/core-concepts/hashgraph-consensus-algorithms/gossip-about-gossip)
- [Simple implementation of hashgraph consensus](https://github.com/conanwu777/hashgraph)

@@ -1,241 +0,0 @@

---
title: ETH-MLS-OFFCHAIN
name: Secure channel setup using decentralized MLS and Ethereum accounts
status: raw
category: Standards Track
tags:
editor: Ugur Sen [ugur@status.im](mailto:ugur@status.im)
contributors: seemenkina [ekaterina@status.im](mailto:ekaterina@status.im)
---

## Abstract

The following document specifies an Ethereum-authenticated, scalable,
and decentralized secure group messaging application
built on a Messaging Layer Security (MLS) backend.
Decentralization means that each user is a node in the P2P network and
each user has a voice in any changes to the group.
This is achieved by integrating a consensus mechanism.
Lastly, this RFC can also be referred to as de-MLS,
decentralized MLS, to emphasize its deviation
from the centralized trust assumptions of traditional MLS deployments.

## Motivation

Group messaging is a fundamental part of digital communication,
yet most existing systems depend on centralized servers,
which introduce risks around privacy, censorship, and unilateral control.
In restrictive settings, servers can be blocked or surveilled;
in more open environments, users still face opaque moderation policies,
data collection, and exclusion from decision-making processes.
To address this, we propose a decentralized, scalable peer-to-peer
group messaging system where each participant runs a node, contributes
to message propagation, and takes part in governance autonomously.
Group membership changes are decided collectively through a lightweight,
partially synchronous, fault-tolerant consensus protocol without a centralized identity.
This design enables truly democratic group communication and is well-suited
for use cases like activist collectives, research collaborations, DAOs, support groups,
and decentralized social platforms.

## Format Specification

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document
are to be interpreted as described in [2119](https://www.ietf.org/rfc/rfc2119.txt).

### Assumptions

- The nodes in the P2P network can discover other nodes, or will connect to other nodes when subscribing to the same topic in a gossipsub.
- We MAY have non-reliable (silent) nodes.
- We MUST have a consensus that is lightweight, scalable, and finalized within a specific time.

## Roles

The three roles used in de-MLS are as follows:

- `node`: Nodes are members of the network without being in any secure group messaging.
- `member`: Members are special nodes in the secure group messaging who
obtain the current group key of the secure group messaging.
- `steward`: Stewards are special and transparent members in the secure group
messaging who organize the changes based on the voted proposals.

## MLS Background

de-MLS builds on an MLS backend, so the MLS services and other MLS components
are taken from the original [MLS specification](https://datatracker.ietf.org/doc/rfc9420/), with or without modifications.

### MLS Services

MLS is operated through two services: the authentication service (AS) and the delivery service (DS).
The authentication service enables group members to authenticate the credentials presented by other group members.
The delivery service routes MLS messages among the nodes or
members in the protocol in the correct order and
manages the users' `keyPackage`s, where a `keyPackage` is an object
that provides some public information about a user.

### MLS Objects

The following section presents the MLS objects and components used in this RFC:

`Epoch`: Fixed time intervals that change the group state, as defined by members,
section 3.4 in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).

`MLS proposal message`: Members MUST receive the proposal message prior to the
corresponding commit message that initiates a new epoch with key changes,
in order to ensure the intended security properties, section 12.1 in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
Here, the add and remove proposals are used.

`Application message`: This message type is used for arbitrary encrypted communication between group members.
[MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) restricts it such that if there is a pending proposal,
application messages should be halted.
Note that since MLS is server-based, the delay between proposal and commit messages is very small.

`Commit message`: After members receive the proposals regarding group changes,
the committer, who may be any member of the group, as specified in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/),
generates the necessary key material for the next epoch, including the appropriate welcome messages
for new joiners and new entropy for removed members. In this RFC, the committer MUST be a steward.

### de-MLS Objects

`Voting proposal`: Similar to MLS proposals, but processed only if approved through a voting process.
They function as application messages in the MLS group,
allowing the steward to collect them without halting the protocol.

## Flow

The general flow is as follows:

- A steward initializes a group just once, and then sends out Group Announcements (GA) periodically.
- Meanwhile, each `node` creates and sends its `credential`, which includes a `keyPackage`.
- Each `member` creates `voting proposals` and sends them to the MLS group during epoch `E`.
- Meanwhile, the `steward` collects finalized `voting proposals` from the MLS group and converts them into
`MLS proposals`, then sends them with the corresponding `commit messages`.
- Eventually, with the commit messages, all members start the next epoch `E+1`.

## Creating Voting Proposal

A `member` MAY initialize the voting with the proposal payload,
which is implemented using [protocol buffers v3](https://protobuf.dev/) as follows:

```protobuf
syntax = "proto3";

message Proposal {
  string name = 10;                 // Proposal name
  string payload = 11;              // Describes what the voting is for
  int32 proposal_id = 12;           // Unique identifier of the proposal
  bytes proposal_owner = 13;        // Public key of the creator
  repeated Vote votes = 14;         // Vote list in the proposal
  int32 expected_voters_count = 15; // Maximum number of distinct voters
  int32 round = 16;                 // Number of Votes
  int64 timestamp = 17;             // Creation time of proposal
  int64 expiration_time = 18;       // Time interval that the proposal is active
  bool liveness_criteria_yes = 19;  // Shows how silent peers' votes are counted
}

message Vote {
  int32 vote_id = 20;       // Unique identifier of the vote
  bytes vote_owner = 21;    // Voter's public key
  int64 timestamp = 22;     // Time when the vote was cast
  bool vote = 23;           // Vote bool value (true/false)
  bytes parent_hash = 24;   // Hash of the owner's previous Vote
  bytes received_hash = 25; // Hash of the previously received Vote
  bytes vote_hash = 26;     // Hash of all previously defined fields in Vote
  bytes signature = 27;     // Signature of vote_hash
}
```

The voting proposal MAY include adding a `node` or removing a `member`.
After the `member` creates the voting proposal,
it is emitted to the network via an MLS `Application message`, using a lightweight,
epoch-based voting scheme such as the [hashgraphlike consensus](https://github.com/vacp2p/rfc-index/blob/consensus-hashgraph-like/vac/raw/consensus-hashgraphlike.md).
The consensus result MUST be finalized within the epoch as YES or NO.

If the voting result is YES, the voting proposal will be converted into
an MLS proposal by the `steward`, followed by a commit message that starts the new epoch.

## Creating welcome message

When an `MLS proposal message` is created by the `steward`,
a `commit message` SHOULD follow,
as in section 12.4 of [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
In order for the new `member` joining the group to synchronize with the current members
who received the `commit message`,
the `steward` sends a welcome message to the node becoming the new `member`,
as in section 12.4.3.1 of [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).

## Single steward

The naive way to create a decentralized secure group messaging is having a single transparent `steward`
who only applies the changes resulting from the voting.

This mostly follows the general flow, as specified in the voting proposal and welcome message creation sections.

1. A single `steward` initializes a group with group parameters
as in section 8.1 (Group Context) of [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
2. The `steward` creates a group announcement (GA) according to the previous step and
broadcasts it to the whole network periodically. The GA message is visible to all `nodes` in the network.
3. Each `node` that wants to become a member needs to obtain this announcement and create a `credential`
that includes a `keyPackage`, as specified in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) section 10.
4. The `steward` aggregates all `KeyPackages` and utilizes them to provision group additions for new members,
based on the outcome of the voting process.
5. Any `member` MAY start to create `voting proposals` for adding or removing users,
and present them for voting in the MLS group as application messages.

However, unlimited use of `voting proposals` within the group may be misused by
malicious or overly active members.
Therefore, an application-level constraint can be introduced to limit the number
or frequency of proposals initiated by each member to prevent spam or abuse.

6. Meanwhile, the `steward` collects finalized `voting proposals` within epoch `E`
that have received affirmative votes from members via application messages.
The `steward` discards proposals that did not receive a majority of "YES" votes.
Since voting proposals are transmitted as application messages, omitting them does not affect
the protocol’s correctness or consistency.
7. The `steward` converts all approved `voting proposals` into
corresponding `MLS proposals` and a `commit message`, and
transmits both in a single operation as in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) section 12.4,
including welcome messages for the new members. Therefore, the `commit message` ends the previous epoch and creates a new one.
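
Steps 5 to 7 can be sketched as a plain-Python simulation of one steward epoch. The proposal dictionaries and the `commit` placeholder are illustrative assumptions and do not use a real MLS implementation or the de-mls API:

```python
def steward_epoch(pending_proposals: list, epoch: int) -> dict:
    """Collect finalized voting proposals, keep only those with a YES
    majority, and fold them into a single commit that starts the next
    epoch (illustrative sketch, not the actual de-mls interface)."""
    approved = [p for p in pending_proposals
                if p["yes_votes"] * 2 > p["total_votes"]]
    # Approved voting proposals become MLS add/remove proposals.
    # Discarded ones are simply omitted: they were application
    # messages, so omission does not affect correctness.
    return {
        "epoch": epoch + 1,
        "adds": [p["target"] for p in approved if p["kind"] == "add"],
        "removes": [p["target"] for p in approved if p["kind"] == "remove"],
    }

proposals = [
    {"kind": "add", "target": "alice", "yes_votes": 5, "total_votes": 7},
    {"kind": "remove", "target": "bob", "yes_votes": 2, "total_votes": 7},
]
commit = steward_epoch(proposals, epoch=3)  # only the add is approved
```

In a real deployment the commit would also carry the new key material and welcome messages described above; the sketch only shows the filtering and epoch bookkeeping.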

## Multi stewards

Decentralization has already been achieved in the previous section.
However, to improve availability and ensure censorship resistance,
the single-steward protocol is extended to a multi-steward architecture.
In this design, each epoch is coordinated by a designated steward,
operating under the same protocol as the single-steward model.
Thus, the multi-steward approach primarily defines how steward roles
rotate across epochs while preserving the underlying structure and logic of the original protocol.
Two variants of the multi-steward design are introduced to address different system requirements.

### Multi steward with single consensus

In this model, all group modifications, such as adding or removing members,
must be approved through consensus by all participants,
including the steward assigned for epoch `E`.
A configuration with multiple stewards operating under a shared consensus protocol offers
increased decentralization and stronger protection against censorship.
However, this benefit comes with reduced operational efficiency.
The model is therefore best suited for small groups that value
decentralization and censorship resistance more than performance.

### Multi steward with two consensuses

The two-consensus model offers improved efficiency with a trade-off in decentralization.
In this design, group changes require consensus only among the stewards, rather than all members.
Regular members participate by periodically selecting the stewards but do not take part in each decision.
This structure enables faster coordination since consensus is achieved within a smaller group of stewards.
It is particularly suitable for large user groups, where involving every member in each decision would be impractical.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/)

### References

- [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/)
- [Hashgraphlike Consensus](https://github.com/vacp2p/rfc-index/blob/consensus-hashgraph-like/vac/raw/consensus-hashgraphlike.md)
- [vacp2p/de-mls](https://github.com/vacp2p/de-mls)

@@ -1,226 +1,226 @@

---
title: GOSSIPSUB-TOR-PUSH
name: Gossipsub Tor Push
status: raw
category: Standards Track
tags:
editor: Daniel Kaiser <danielkaiser@status.im>
contributors:
---

## Abstract

This document extends the [libp2p gossipsub specification](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/README.md),
specifying gossipsub Tor Push,
a gossipsub-internal way of pushing messages into a gossipsub network via Tor.
Tor Push adds sender identity protection to gossipsub.

**Protocol identifier**: /meshsub/1.1.0

Note: Gossipsub Tor Push does not have a dedicated protocol identifier.
It uses the same identifier as gossipsub and
works with all [pubsub](https://github.com/libp2p/specs/tree/master/pubsub)
based protocols.
This allows nodes that are oblivious to Tor Push to process messages received via
Tor Push.

## Background

Without extensions, [libp2p gossipsub](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/README.md)
does not protect sender identities.

A possible design of an anonymity extension to gossipsub
is pushing messages through an anonymization network
before they enter the gossipsub network.
[Tor](https://www.torproject.org/) is currently the largest anonymization network.
It is well researched and works reliably.
Basing our solution on Tor both inherits existing security research
and allows for a quick deployment.

Using the anonymization network approach,
even the first gossipsub node that relays a given message
cannot link the message to its sender
(within a relatively strong adversarial model).
Taking the low bandwidth overhead and the low latency overhead into consideration,
Tor offers very good anonymity properties.

## Functional Operation

Tor Push allows nodes to push messages over Tor into the gossipsub network.
The approach specified in this document is fully backwards compatible.
Gossipsub nodes that do not support Tor Push can receive and relay Tor Push messages,
because Tor Push uses the same Protocol ID as gossipsub.

Messages are sent over Tor via [SOCKS5](https://www.rfc-editor.org/rfc/rfc1928).
Tor Push uses a dedicated libp2p context to prevent information leakage.
To significantly increase resilience and mitigate circuit failures,
Tor Push establishes several connections,
each to a different randomly selected gossipsub node.

## Specification

This section specifies the format of Tor Push messages,
as well as how Tor Push messages are received and sent, respectively.

### Wire Format

The wire format of a Tor Push message corresponds verbatim to a typical
[libp2p pubsub message](https://github.com/libp2p/specs/tree/master/pubsub#the-message).

```protobuf
message Message {
  optional string from = 1;
  optional bytes data = 2;
  optional bytes seqno = 3;
  required string topic = 4;
  optional bytes signature = 5;
  optional bytes key = 6;
}
```

### Receiving Tor Push Messages

Any node supporting a protocol with ID `/meshsub/1.1.0` (e.g. gossipsub)
can receive Tor Push messages.
Receiving nodes are oblivious to Tor Push and
will process incoming messages according to the respective `meshsub/1.1.0` specification.

### Sending Tor Push Messages

In the following, we refer to nodes sending Tor Push messages as Tp-nodes
(Tor Push nodes).

Tp-nodes MUST set up a separate libp2p context, i.e. a [libp2p switch](https://docs.libp2p.io/concepts/multiplex/switch/),
which MUST NOT be used for any purpose other than Tor Push.
We refer to this context as the Tp-context.
The Tp-context MUST NOT share any data, e.g. peer lists, with the default context.

Tp-peers are peers a Tp-node plans to send Tp-messages to.
Tp-peers MUST support `/meshsub/1.1.0`.
For retrieving Tp-peers,
Tp-nodes SHOULD use an ambient peer discovery method
that retrieves a random peer sample (from the set of all peers),
e.g. [33/WAKU2-DISCV5](../../waku/standards/core/33/discv5.md).

Tp-nodes MUST establish a connection as described in sub-section
[Tor Push Connection Establishment](#connection-establishment) to at least one Tp-peer.
To significantly increase resilience,
Tp-nodes SHOULD establish Tp-connections to `D` peers,
where `D` is the [desired gossipsub out-degree](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.0.md#parameters),
with a default value of `8`.

Each Tp-message MUST be sent via the Tp-context over at least one Tp-connection.
To increase resilience,
Tp-messages SHOULD be sent via the Tp-context over all available Tp-connections.

Control messages of any kind, e.g. gossipsub graft, MUST NOT be sent via Tor Push.

#### Connection Establishment
|
||||
|
||||
Tp-nodes establish a `/meshsub/1.1.0` connection to tp-peers via
|
||||
[SOCKS5](https://www.rfc-editor.org/rfc/rfc1928) over [Tor](https://www.torproject.org/).
|
||||
|
||||
Establishing connections, which in turn establishes the respective Tor circuits,
|
||||
can be done ahead of time.
|
||||
|
||||
#### Epochs
|
||||
|
||||
Tor Push introduces epochs.
|
||||
The default epoch duration is 10 minutes.
|
||||
(We might adjust this default value based on experiments and
|
||||
evaluation in future versions of this document.
|
||||
It seems a good trade-off between traceablity and circuit building overhead.)
|
||||
|
||||
For each epoch, the Tp-context SHOULD be refreshed, which includes
|
||||
|
||||
* libp2p peer-ID
|
||||
* Tp-peer list
|
||||
* connections to Tp-peers
|
||||
|
||||
Both Tp-peer selection for the next epoch and
|
||||
establishing connections to the newly selected peers
|
||||
SHOULD be done during the current epoch
|
||||
and be completed before the new epoch starts.
|
||||
This avoids adding latency to message transmission.
|
||||
|
||||
## Security/Privacy Considerations
|
||||
|
||||
### Fingerprinting Attacks
|
||||
|
||||
Protocols that feature distinct patterns are prone to fingerprinting attacks
|
||||
when using them over Tor Push.
|
||||
Both malicious guards and exit nodes could detect these patterns
|
||||
and link the sender and receiver, respectively, to transmitted traffic.
|
||||
As a mitigation, such protocols can introduce dummy messages and/or
|
||||
padding to hide patterns.
|
||||
|
||||
### DoS
|
||||
|
||||
#### General DoS against Tor
|
||||
|
||||
Using untargeted DoS to prevent Tor Push messages
|
||||
from entering the gossipsub network would cost vast resources,
|
||||
because Tor Push transmits messages over several circuits and
|
||||
the Tor network is well established.
|
||||
|
||||
#### Targeting the Guard
|
||||
|
||||
Denying the service of a specific guard node
|
||||
blocks Tp-nodes using the respective guard.
|
||||
Tor guard selection will replace this guard [TODO elaborate].
|
||||
Still, messages might be delayed during this window
|
||||
which might be critical to certain applications.
|
||||
|
||||
#### Targeting the Gossipsub Network
|
||||
|
||||
Without sophisticated rate limiting (for example using [17/WAKU2-RLN-RELAY](../../waku/standards/core/17/rln-relay.md)),
|
||||
attackers can spam the gossipsub network.
|
||||
It is not enough to just block peers that send too many messages,
|
||||
because these messages might actually come from a Tor exit node
|
||||
that many honest Tp-nodes use.
|
||||
Without Tor Push,
|
||||
protocols on top of gossipsub could block peers
|
||||
if they exceed a certain message rate.
|
||||
With Tor Push, this would allow the reputation-based DoS attack described in
|
||||
[Bitcoin over Tor isn't a Good Idea](https://ieeexplore.ieee.org/abstract/document/7163022).
|
||||
|
||||
#### Peer Discovery
|
||||
|
||||
The discovery mechanism could be abused to link requesting nodes
|
||||
to their Tor connections to discovered nodes.
|
||||
An attacker that controls both the node that responds to a discovery query,
|
||||
and the node who’s ENR the response contains,
|
||||
can link the requester to a Tor connection
|
||||
that is expected to be opened to the node represented by the returned ENR soon after.
|
||||
|
||||
Further, the discovery mechanism (e.g. discv5)
|
||||
could be abused to distribute disproportionately many malicious nodes.
|
||||
For instance if p% of the nodes in the network are malicious,
|
||||
an attacker could manipulate the discovery to return malicious nodes with 2p% probability.
|
||||
The discovery mechanism needs to be resilient against this attack.
|
||||
|
||||
### Roll-out Phase
|
||||
|
||||
During the roll-out phase of Tor Push, during which only a few nodes use Tor Push,
|
||||
attackers can narrow down the senders of Tor messages
|
||||
to the set of gossipsub nodes that do not originate messages.
|
||||
Nodes who want anonymity guarantees even during the roll-out phase
|
||||
can use separate network interfaces for their default context and
|
||||
Tp-context, respectively.
|
||||
For the best protection, these contexts should run on separate physical machines.
|
||||
|
||||
## Copyright
|
||||
|
||||
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
|
||||
|
||||
## References
|
||||
|
||||
* [libp2p gossipsub](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/README.md)
|
||||
* [libp2p pubsub](https://github.com/libp2p/specs/tree/master/pubsub)
|
||||
* [libp2p pubsub message](https://github.com/libp2p/specs/tree/master/pubsub#the-message)
|
||||
* [libp2p switch](https://docs.libp2p.io/concepts/multiplex/switch)
|
||||
* [SOCKS5](https://www.rfc-editor.org/rfc/rfc1928)
|
||||
* [Tor](https://www.torproject.org/)
|
||||
* [33/WAKU2-DISCV5](../../waku/standards/core/33/discv5.md)
|
||||
* [Bitcoin over Tor isn't a Good Idea](https://ieeexplore.ieee.org/abstract/document/7163022)
|
||||
* [17/WAKU2-RLN-RELAY](../../waku/standards/core/17/rln-relay.md)
|
||||
---
title: GOSSIPSUB-TOR-PUSH
name: Gossipsub Tor Push
status: raw
category: Standards Track
tags:
editor: Daniel Kaiser <danielkaiser@status.im>
contributors:
---

## Abstract

This document extends the [libp2p gossipsub specification](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/README.md),
specifying gossipsub Tor Push,
a gossipsub-internal way of pushing messages into a gossipsub network via Tor.
Tor Push adds sender identity protection to gossipsub.

**Protocol identifier**: /meshsub/1.1.0

Note: Gossipsub Tor Push does not have a dedicated protocol identifier.
It uses the same identifier as gossipsub and
works with all [pubsub](https://github.com/libp2p/specs/tree/master/pubsub)-based
protocols.
This allows nodes that are oblivious to Tor Push to process messages received via
Tor Push.

## Background

Without extensions, [libp2p gossipsub](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/README.md)
does not protect sender identities.

A possible design of an anonymity extension to gossipsub
is pushing messages through an anonymization network
before they enter the gossipsub network.
[Tor](https://www.torproject.org/) is currently the largest anonymization network.
It is well researched and works reliably.
Basing our solution on Tor both inherits existing security research
and allows for quick deployment.

Using the anonymization network approach,
even the first gossipsub node that relays a given message
cannot link the message to its sender
(within a relatively strong adversarial model).
Taking the low bandwidth and latency overhead into consideration,
Tor offers very good anonymity properties.

## Functional Operation

Tor Push allows nodes to push messages over Tor into the gossipsub network.
The approach specified in this document is fully backwards compatible.
Gossipsub nodes that do not support Tor Push can receive and relay Tor Push messages,
because Tor Push uses the same protocol ID as gossipsub.

Messages are sent over Tor via [SOCKS5](https://www.rfc-editor.org/rfc/rfc1928).
Tor Push uses a dedicated libp2p context to prevent information leakage.
To significantly increase resilience and mitigate circuit failures,
Tor Push establishes several connections,
each to a different randomly selected gossipsub node.

## Specification

This section specifies the format of Tor Push messages,
as well as how Tor Push messages are received and sent.

### Wire Format

The wire format of a Tor Push message corresponds verbatim to a typical
[libp2p pubsub message](https://github.com/libp2p/specs/tree/master/pubsub#the-message).

```protobuf
message Message {
  optional string from = 1;
  optional bytes data = 2;
  optional bytes seqno = 3;
  required string topic = 4;
  optional bytes signature = 5;
  optional bytes key = 6;
}
```
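For illustration, the length-delimited fields of this schema can be hand-encoded
following the protobuf wire format.
The sketch below is not a substitute for a protobuf library and
omits the `from`, `signature`, and `key` fields;
the helper names `encode_varint`, `field`, and `encode_message` are illustrative.

```python
def encode_varint(n: int) -> bytes:
    # protobuf base-128 varint: 7 bits per byte, MSB = continuation flag
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def field(number: int, payload: bytes) -> bytes:
    # wire type 2 = length-delimited (covers both `string` and `bytes`)
    return encode_varint((number << 3) | 2) + encode_varint(len(payload)) + payload

def encode_message(data: bytes, topic: str, seqno: bytes = b"") -> bytes:
    out = b""
    if data:
        out += field(2, data)        # optional bytes data = 2
    if seqno:
        out += field(3, seqno)       # optional bytes seqno = 3
    out += field(4, topic.encode())  # required string topic = 4
    return out

# the tag byte for field 4, wire type 2, is (4 << 3) | 2 = 0x22
wire = encode_message(b"hello", "waku")
```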

### Receiving Tor Push Messages

Any node supporting a protocol with ID `/meshsub/1.1.0` (e.g. gossipsub)
can receive Tor Push messages.
Receiving nodes are oblivious to Tor Push and
will process incoming messages according to the respective `/meshsub/1.1.0` specification.

### Sending Tor Push Messages

In the following, we refer to nodes sending Tor Push messages as Tp-nodes
(Tor Push nodes).

Tp-nodes MUST set up a separate libp2p context, i.e. a [libp2p switch](https://docs.libp2p.io/concepts/multiplex/switch/),
which MUST NOT be used for any purpose other than Tor Push.
We refer to this context as the Tp-context.
The Tp-context MUST NOT share any data, e.g. peer lists, with the default context.

Tp-peers are peers a Tp-node plans to send Tp-messages to.
Tp-peers MUST support `/meshsub/1.1.0`.
For retrieving Tp-peers,
Tp-nodes SHOULD use an ambient peer discovery method
that retrieves a random peer sample (from the set of all peers),
e.g. [33/WAKU2-DISCV5](../../waku/standards/core/33/discv5.md).

Tp-nodes MUST establish a connection as described in sub-section
[Tor Push Connection Establishment](#connection-establishment) to at least one Tp-peer.
To significantly increase resilience,
Tp-nodes SHOULD establish Tp-connections to `D` peers,
where `D` is the [desired gossipsub out-degree](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.0.md#parameters),
with a default value of `8`.

Each Tp-message MUST be sent via the Tp-context over at least one Tp-connection.
To increase resilience,
Tp-messages SHOULD be sent via the Tp-context over all available Tp-connections.

Control messages of any kind, e.g. gossipsub graft, MUST NOT be sent via Tor Push.
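The sending rules above amount to a simple fan-out over an isolated context.
A minimal sketch, not part of any libp2p API;
`TpContext`, `send_over_tor`, and the peer list are assumed names:

```python
import random

D = 8  # desired gossipsub out-degree (gossipsub v1.0 default)

def send_over_tor(conn, message: bytes):
    # placeholder for a /meshsub/1.1.0 publish over a SOCKS5/Tor stream
    return (conn, message)

class TpContext:
    """Illustrative Tor Push context: it holds Tp-connections only and
    shares no state (peer lists, keys) with the default context."""

    def __init__(self, discovered_peers):
        # select up to D random Tp-peers from an ambient-discovery sample
        self.connections = random.sample(discovered_peers,
                                         min(D, len(discovered_peers)))

    def publish(self, message: bytes):
        # send the Tp-message over *all* available Tp-connections (SHOULD),
        # which trivially satisfies the "at least one" requirement (MUST)
        return [send_over_tor(conn, message) for conn in self.connections]
```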

#### Connection Establishment

Tp-nodes establish a `/meshsub/1.1.0` connection to Tp-peers via
[SOCKS5](https://www.rfc-editor.org/rfc/rfc1928) over [Tor](https://www.torproject.org/).

Establishing connections, which in turn establishes the respective Tor circuits,
can be done ahead of time.
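For illustration, the SOCKS5 handshake a Tp-node would send through a local Tor
proxy can be built directly from RFC 1928.
The sketch below only constructs the byte sequences and does not open a socket;
the hostname and port are placeholders:

```python
import struct

def socks5_greeting() -> bytes:
    # VER=5, NMETHODS=1, METHODS=[0x00] (no authentication)
    return b"\x05\x01\x00"

def socks5_connect(host: str, port: int) -> bytes:
    # VER=5, CMD=1 (CONNECT), RSV=0, ATYP=3 (domain name),
    # then one length byte, the domain, and the port in network byte order
    name = host.encode()
    return b"\x05\x01\x00\x03" + bytes([len(name)]) + name + struct.pack(">H", port)

# ask the local Tor SOCKS proxy (commonly 127.0.0.1:9050) to reach a Tp-peer
req = socks5_connect("example.onion", 443)
```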

#### Epochs

Tor Push introduces epochs.
The default epoch duration is 10 minutes.
(We might adjust this default value based on experiments and
evaluation in future versions of this document.
It seems a good trade-off between traceability and circuit-building overhead.)

For each epoch, the Tp-context SHOULD be refreshed, which includes:

* the libp2p peer ID
* the Tp-peer list
* connections to Tp-peers

Both Tp-peer selection for the next epoch and
establishing connections to the newly selected peers
SHOULD be done during the current epoch
and be completed before the new epoch starts.
This avoids adding latency to message transmission.
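The epoch schedule can be derived from wall-clock time alone.
A sketch: the 10-minute duration is the default named above, while the
one-minute refresh lead time is an assumed parameter, not part of this spec:

```python
EPOCH_SECONDS = 10 * 60        # default epoch duration from this section
REFRESH_LEAD_SECONDS = 60      # assumed: start preparing the next Tp-context 1 min early

def epoch_index(now: float) -> int:
    # epochs are consecutive fixed-length windows of wall-clock time
    return int(now // EPOCH_SECONDS)

def seconds_until_next_epoch(now: float) -> float:
    return EPOCH_SECONDS - (now % EPOCH_SECONDS)

def should_prepare_refresh(now: float) -> bool:
    # prepare the next Tp-context (new peer ID, peer list, connections)
    # before the epoch boundary, so the switch adds no message latency
    return seconds_until_next_epoch(now) <= REFRESH_LEAD_SECONDS
```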

## Security/Privacy Considerations

### Fingerprinting Attacks

Protocols that feature distinct traffic patterns are prone to fingerprinting attacks
when used over Tor Push.
Both malicious guards and exit nodes could detect these patterns
and link the sender and receiver, respectively, to the transmitted traffic.
As a mitigation, such protocols can introduce dummy messages and/or
padding to hide these patterns.

### DoS

#### General DoS against Tor

Using untargeted DoS to prevent Tor Push messages
from entering the gossipsub network would cost vast resources,
because Tor Push transmits messages over several circuits and
the Tor network is well established.

#### Targeting the Guard

Denying the service of a specific guard node
blocks the Tp-nodes using the respective guard.
Tor guard selection will replace this guard [TODO elaborate].
Still, messages might be delayed during this window,
which might be critical to certain applications.

#### Targeting the Gossipsub Network

Without sophisticated rate limiting (for example using [17/WAKU2-RLN-RELAY](../../waku/standards/core/17/rln-relay.md)),
attackers can spam the gossipsub network.
It is not enough to simply block peers that send too many messages,
because these messages might actually come from a Tor exit node
that many honest Tp-nodes use.
Without Tor Push,
protocols on top of gossipsub could block peers
that exceed a certain message rate.
With Tor Push, this would enable the reputation-based DoS attack described in
[Bitcoin over Tor isn't a Good Idea](https://ieeexplore.ieee.org/abstract/document/7163022).

#### Peer Discovery

The discovery mechanism could be abused to link requesting nodes
to their Tor connections to discovered nodes.
An attacker that controls both the node that responds to a discovery query
and the node whose ENR the response contains
can link the requester to a Tor connection
that is expected to be opened to the node represented by the returned ENR soon after.

Further, the discovery mechanism (e.g. discv5)
could be abused to distribute disproportionately many malicious nodes.
For instance, if p% of the nodes in the network are malicious,
an attacker could manipulate the discovery to return malicious nodes with 2p% probability.
The discovery mechanism needs to be resilient against this attack.

### Roll-out Phase

During the roll-out phase of Tor Push, in which only a few nodes use Tor Push,
attackers can narrow down the senders of Tor messages
to the set of gossipsub nodes that do not originate messages.
Nodes that want anonymity guarantees even during the roll-out phase
can use separate network interfaces for their default context and
Tp-context, respectively.
For the best protection, these contexts should run on separate physical machines.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

* [libp2p gossipsub](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/README.md)
* [libp2p pubsub](https://github.com/libp2p/specs/tree/master/pubsub)
* [libp2p pubsub message](https://github.com/libp2p/specs/tree/master/pubsub#the-message)
* [libp2p switch](https://docs.libp2p.io/concepts/multiplex/switch)
* [SOCKS5](https://www.rfc-editor.org/rfc/rfc1928)
* [Tor](https://www.torproject.org/)
* [33/WAKU2-DISCV5](../../waku/standards/core/33/discv5.md)
* [Bitcoin over Tor isn't a Good Idea](https://ieeexplore.ieee.org/abstract/document/7163022)
* [17/WAKU2-RLN-RELAY](../../waku/standards/core/17/rln-relay.md)
---
title: NOISE-X3DH-DOUBLE-RATCHET
name: Secure 1-to-1 channel setup using X3DH and the double ratchet
status: raw
category: Standards Track
tags:
editor: Ramses Fernandez <ramses@status.im>
contributors:
---

## Motivation

The need for secure communications has become paramount.
This specification outlines a protocol describing a
secure 1-to-1 communication channel between 2 users.
The main components are the X3DH key establishment mechanism
combined with the double ratchet.
The aim of this combination of schemes is to provide a protocol with both
forward secrecy and post-compromise security.

## Theory

The specification is based on the Noise protocol framework.
It corresponds to the double ratchet scheme combined with
the X3DH algorithm, which is used to initialize the former.
We chose to express the protocol in Noise to be able to use
Noise's streamlined implementation and proving features.
The X3DH algorithm provides both authentication and forward
secrecy, as stated in the
[X3DH specification](https://signal.org/docs/specifications/x3dh/).

This protocol consists of several stages:

1. Key setting for X3DH: this step produces
prekey bundles for Bob which are fed into X3DH.
It also allows Alice to generate the keys required
to run the X3DH algorithm correctly.
2. Execution of X3DH: this step outputs
a common secret key `SK` together with an additional
data vector `AD`. Both are used in the double
ratchet algorithm initialization.
3. Execution of the double ratchet algorithm
for forward-secure, authenticated communications,
using the common secret key `SK`, obtained from X3DH, as a root key.

The protocol assumes the following requirements:

- Alice knows Bob’s Ethereum address.
- Bob is willing to participate in the protocol,
and publishes his public key.
- Bob’s ownership of his public key is verifiable.
- Alice wants to send message M to Bob.
- An eavesdropper cannot read M’s content
even if she is storing it or relaying it.

## Syntax

### Cryptographic suite

The following cryptographic functions MUST be used:

- `X448` as the Diffie-Hellman function `DH`.
- `SHA256` as the KDF.
- `AES256-GCM` as the AEAD algorithm.
- `SHA512` as the hash function.
- `XEd448` for digital signatures.

### X3DH initialization

This scheme MUST work on curve448.
The X3DH algorithm corresponds to the IX pattern in Noise.

Bob and Alice MUST define personal key pairs
`(ik_B, IK_B)` and `(ik_A, IK_A)` respectively, where:

- the key `ik` must be kept secret,
- and the key `IK` is public.

Bob MUST generate new keys using
`(ik_B, IK_B) = GENERATE_KEYPAIR(curve = curve448)`.

Bob MUST also generate a public key pair
`(spk_B, SPK_B) = GENERATE_KEYPAIR(curve = curve448)`.

`SPK` is a public key generated and stored at medium term.
Both the signed prekey and the certificate MUST
undergo periodic replacement.
After replacing the key,
Bob keeps the old private key of `SPK`
for some interval, dependent on the implementation.
This allows Bob to decrypt delayed messages.

Bob MUST sign `SPK` for authentication:
`SigSPK = XEd448(ik, Encode(SPK))`

A final step requires the definition of
`prekey_bundle = (IK, SPK, SigSPK, OPK_i)`

One-time keys `OPK` MUST be generated as
`(opk_B, OPK_B) = GENERATE_KEYPAIR(curve = curve448)`.

Before sending an initial message to Bob,
Alice MUST generate an AD vector: `AD = Encode(IK_A) || Encode(IK_B)`.

Alice MUST generate ephemeral key pairs
`(ek, EK) = GENERATE_KEYPAIR(curve = curve448)`.

The function `Encode()` transforms a
curve448 public key into a byte sequence.
This is specified in [RFC 7748](http://www.ietf.org/rfc/rfc7748.txt)
on elliptic curves for security.

One MUST consider `q = 2^446 - 13818066809895115352007386748515426880336692474882178609894547503885`
for digital signatures with `(XEd448_sign, XEd448_verify)`:

```text
XEd448_sign((ik, IK), message):
    Z = randbytes(64)
    r = SHA512(2^456 - 2 || ik || message || Z)
    R = (r * convert_mont(5)) % q
    h = SHA512(R || IK || message)
    s = (r + h * ik) % q
    return (R || s)
```

```text
XEd448_verify(u, message, (R || s)):
    if (R.y >= 2^448) or (s >= 2^446): return FALSE
    h = (SHA512(R || 156326 || message)) % q
    R_check = s * convert_mont(5) - h * 156326
    if R == R_check: return TRUE
    return FALSE
```

```text
convert_mont(u):
    u_masked = u % 2^448
    inv = ((1 - u_masked)^(2^448 - 2^224 - 3)) % (2^448 - 2^224 - 1)
    P.y = ((1 + u_masked) * inv) % (2^448 - 2^224 - 1)
    P.s = 0
    return P
```

### Use of X3DH

This specification combines the double ratchet
with X3DH, using the following data as initialization for the former:

- The `SK` output from X3DH becomes the `SK`
input of the double ratchet. See section 3.3 of the
[Signal specification](https://signal.org/docs/specifications/doubleratchet/)
for a detailed description.
- The `AD` output from X3DH becomes the `AD`
input of the double ratchet. See sections 3.4 and 3.5 of the
[Signal specification](https://signal.org/docs/specifications/doubleratchet/)
for a detailed description.
- Bob’s signed prekey `SigSPKB` from X3DH is used as Bob’s
initial ratchet public key of the double ratchet.

X3DH has three phases:

1. Bob publishes his identity key and prekeys to a server,
a network, or a dedicated smart contract.
2. Alice fetches a prekey bundle from the server,
and uses it to send an initial message to Bob.
3. Bob receives and processes Alice's initial message.

Alice MUST perform the following computations:

```text
dh1 = DH(IK_A, SPK_B, curve = curve448)
dh2 = DH(EK_A, IK_B, curve = curve448)
dh3 = DH(EK_A, SPK_B, curve = curve448)
SK = KDF(dh1 || dh2 || dh3)
```

Alice MUST send Bob a message containing:

- `IK_A, EK_A`.
- An identifier of the prekeys of Bob that were used.
- A message encrypted with AES256-GCM using `AD` and `SK`.

Upon reception of the initial message, Bob MUST:

1. Perform the same computations above with the `DH()` function.
2. Derive `SK` and construct `AD`.
3. Decrypt the initial message encrypted with `AES256-GCM`.
4. If decryption fails, abort the protocol.

### Initialization of the double ratchet

At this stage Bob and Alice have generated key pairs
and agreed on a shared secret `SK` using X3DH.

Alice calls `RatchetInitAlice()`, defined below:

```text
RatchetInitAlice(SK, IK_B):
    state.DHs = GENERATE_KEYPAIR(curve = curve448)
    state.DHr = IK_B
    state.RK, state.CKs = HKDF(SK, DH(state.DHs, state.DHr))
    state.CKr = None
    state.Ns = state.Nr = state.PN = 0
    state.MKSKIPPED = {}
```

The HKDF function MUST be the proposal by
[Krawczyk and Eronen](http://www.ietf.org/rfc/rfc5869.txt).
In this proposal, `chaining_key` and `input_key_material`
MUST be replaced with `SK` and the output of `DH` respectively.
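HKDF as defined in RFC 5869 can be implemented with the standard library alone.
A sketch with SHA-256 as the hash (matching the KDF named in the cryptographic
suite); the final 32/32-byte split into root key and chain key is an assumed
convention, not mandated by this document:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # PRK = HMAC-Hash(salt, IKM)  (RFC 5869, section 2.2)
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # OKM = first `length` bytes of T(1) || T(2) || ...  (RFC 5869, section 2.3)
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

def hkdf(chaining_key: bytes, input_key_material: bytes) -> tuple[bytes, bytes]:
    # per the text above: chaining_key = SK (or RK), input_key_material = DH output;
    # derive 64 bytes and split into the new root key and chain key (assumed split)
    okm = hkdf_expand(hkdf_extract(chaining_key, input_key_material), b"", 64)
    return okm[:32], okm[32:]
```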

Similarly, Bob calls the function `RatchetInitBob()`, defined below:

```text
RatchetInitBob(SK, (ik_B, IK_B)):
    state.DHs = (ik_B, IK_B)
    state.DHr = None
    state.RK = SK
    state.CKs = state.CKr = None
    state.Ns = state.Nr = state.PN = 0
    state.MKSKIPPED = {}
```

### Encryption

This function performs the symmetric-key ratchet:

```text
RatchetEncrypt(state, plaintext, AD):
    state.CKs, mk = HMAC-SHA256(state.CKs)
    header = HEADER(state.DHs, state.PN, state.Ns)
    state.Ns = state.Ns + 1
    return header, AES256-GCM_Enc(mk, plaintext, AD || header)
```

The `HEADER` function creates a new message header
containing the public key from the key pair output of the `DH` function.
It outputs the previous chain length `pn`
and the message number `n`.
The returned header object contains the ratchet public key
`dh` and the integers `pn` and `n`.
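The `HMAC-SHA256(state.CKs)` step above derives both the next chain key and a
message key from the current chain key.
A sketch using the distinct single-byte inputs recommended for `KDF_CK` in the
Signal double ratchet specification (section 5.2); the function name `kdf_ck`
is illustrative:

```python
import hashlib
import hmac

def kdf_ck(ck: bytes) -> tuple[bytes, bytes]:
    # derive the next chain key and the message key from the current chain key;
    # 0x02 / 0x01 are the separate constants the Signal spec recommends so the
    # two outputs are independent
    next_ck = hmac.new(ck, b"\x02", hashlib.sha256).digest()
    mk = hmac.new(ck, b"\x01", hashlib.sha256).digest()
    return next_ck, mk

# advancing the send chain twice yields independent message keys
ck0 = b"\x00" * 32
ck1, mk1 = kdf_ck(ck0)
ck2, mk2 = kdf_ck(ck1)
```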

### Decryption

The function `RatchetDecrypt()` decrypts incoming messages:

```text
RatchetDecrypt(state, header, ciphertext, AD):
    plaintext = TrySkippedMessageKeys(state, header, ciphertext, AD)
    if plaintext != None:
        return plaintext
    if header.dh != state.DHr:
        SkipMessageKeys(state, header.pn)
        DHRatchet(state, header)
    SkipMessageKeys(state, header.n)
    state.CKr, mk = HMAC-SHA256(state.CKr)
    state.Nr = state.Nr + 1
    return AES256-GCM_Dec(mk, ciphertext, AD || header)
```

Auxiliary functions follow:

```text
DHRatchet(state, header):
    state.PN = state.Ns
    state.Ns = state.Nr = 0
    state.DHr = header.dh
    state.RK, state.CKr = HKDF(state.RK, DH(state.DHs, state.DHr))
    state.DHs = GENERATE_KEYPAIR(curve = curve448)
    state.RK, state.CKs = HKDF(state.RK, DH(state.DHs, state.DHr))
```

```text
SkipMessageKeys(state, until):
    if state.Nr + MAX_SKIP < until:
        raise Error
    if state.CKr != None:
        while state.Nr < until:
            state.CKr, mk = HMAC-SHA256(state.CKr)
            state.MKSKIPPED[state.DHr, state.Nr] = mk
            state.Nr = state.Nr + 1
```

```text
TrySkippedMessageKeys(state, header, ciphertext, AD):
    if (header.dh, header.n) in state.MKSKIPPED:
        mk = state.MKSKIPPED[header.dh, header.n]
        delete state.MKSKIPPED[header.dh, header.n]
        return AES256-GCM_Dec(mk, ciphertext, AD || header)
    else: return None
```

## Information retrieval

### Static data

Some data, such as the key pairs `(ik, IK)` for Alice and Bob,
might not be regenerated after a period of time.
Therefore, the prekey bundle MAY be kept in long-term
storage solutions, such as a dedicated smart contract
that outputs such a key pair when receiving an Ethereum wallet
address.

Storing static data is done using a dedicated
smart contract `PublicKeyStorage` which associates
the Ethereum wallet address of a user with their public key.
This mapping is done by `PublicKeyStorage`
using a `publicKeys` function or a `setPublicKey` function.
The mapping is done if the user passed an authorization process.
A user who wants to retrieve a public key associated
with a specific wallet address calls a function `getPublicKey`.
The user provides the wallet address as the only
input parameter for `getPublicKey`.
The function outputs the associated public key
from the smart contract.

### Ephemeral data

Storing ephemeral data on Ethereum MAY be done using
a combination of on-chain and off-chain solutions.
This approach provides an efficient solution to
the problem of storing updatable data in Ethereum.

1. Ethereum stores a reference or a hash
that points to the off-chain data.
2. Off-chain solutions can include systems like IPFS,
traditional cloud storage solutions, or
decentralized storage networks such as
[Swarm](https://www.ethswarm.org).

In any case, the user stores the associated
IPFS hash, URL or reference in Ethereum.

A user not updating the ephemeral information
can be understood as Bob not being willing to participate in any
communication.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [The Double Ratchet Algorithm](https://signal.org/docs/specifications/doubleratchet/)
- [The X3DH Key Agreement Protocol](https://signal.org/docs/specifications/x3dh/)
---
title: RLN-INTEREP-SPEC
name: Interep as group management for RLN
status: raw
category:
tags: rln
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
---

## Abstract

This spec integrates [Interep](https://interep.link)
into the [RLN](../32/rln-v1.md) spec.
Interep is a group management protocol
that allows for the creation of groups of users and
the management of their membership.
It is used to manage the membership of the RLN group.

Interep ties web2 identities to reputation and
sorts users into groups based on their reputation score.
For example, a GitHub user with over 100 followers is considered to have "gold" reputation.

Interep uses [Semaphore](https://semaphore.appliedzkp.org/)
under the hood to allow anonymous signaling of membership in a group.
Therefore, a user with a "gold" reputation can prove
their membership without revealing their identity.

RLN is used for spam prevention, and Interep is used for group management.

By using Interep with RLN,
we allow users to join RLN membership groups
without the need for an on-chain financial stake.

## Motivation

To have Sybil-resistant group management,
there are [implementations](https://github.com/vacp2p/rln-contract)
of RLN which make use of financial stake on-chain.
However, this is not ideal because it raises the barrier of entry for honest participants.

In this case,
honest participants will most likely have a web2 identity accessible to them,
which can be used for joining an Interep reputation group.
By modifying the RLN spec to use Interep,
we can have Sybil-resistant group management
without the need for an on-chain financial stake.

Since RLN and Interep both use Semaphore-style credentials,
it is possible to use the same set of credentials for both.

## Functional Operation

Using Interep with RLN involves the following steps:

1. Generate Semaphore credentials
2. Verify reputation and join an Interep group
3. Join the RLN membership group via interaction with the smart contract,
by passing a proof of membership in the Interep group

### 1. Generate Semaphore credentials

Semaphore credentials are generated in the standard way,
depicted in the [Semaphore documentation](https://semaphore.appliedzkp.org/docs/guides/identities#create-deterministic-identities).

### 2. Verify reputation and join Interep group

Using the Interep app deployed on [Goerli](https://goerli.interep.link/),
the user can check their reputation tier and join the corresponding group.
This results in a transaction to the Interep contract, which adds them to the group.

### 3. Join RLN membership group

Instead of sending funds to the RLN contract to join the membership group,
the user can send a proof of membership in the Interep group.
This proof is generated by the user and
is verified by the contract.
The contract ensures that the user is a member of the Interep group, and
then adds them to the RLN membership group.

Following is the modified signature of the `register` function
in the RLN contract:
|
||||
|
||||
```solidity
|
||||
/// @param groupId: Id of the group.
|
||||
/// @param signal: Semaphore signal.
|
||||
/// @param nullifierHash: Nullifier hash.
|
||||
/// @param externalNullifier: External nullifier.
|
||||
/// @param proof: Zero-knowledge proof.
|
||||
/// @param idCommitment: ID Commitment of the member.
|
||||
function register(
|
||||
uint256 groupId,
|
||||
bytes32 signal,
|
||||
uint256 nullifierHash,
|
||||
uint256 externalNullifier,
|
||||
uint256[8] calldata proof,
|
||||
uint256 idCommitment
|
||||
)
|
||||
```
|
||||
|
||||
## Verification of messages
|
||||
|
||||
Messages are verified the same way as in the [RLN spec](../32/rln-v1.md/#verification).
|
||||
|
||||
## Slashing
|
||||
|
||||
The slashing mechanism is the same as in the [RLN spec](../32/rln-v1.md/#slashing).
|
||||
It is important to note that the slashing
|
||||
may not have the intended effect on the user,
|
||||
since the only consequence is that they cannot send messages.
|
||||
This is due to the fact that the user
|
||||
can send a identity commitment in the registration to the RLN contract,
|
||||
which is different than the one used in the Interep group.
|
||||
|
||||
## Proof of Concept
|
||||
|
||||
A proof of concept is available at
|
||||
[vacp2p/rln-interp-contract](https://github.com/vacp2p/rln-interep-contract)
|
||||
which integrates Interep with RLN.
|
||||
|
||||
## Security Considerations
|
||||
|
||||
1. As mentioned in [Slashing](#slashing),
|
||||
the slashing mechanism may not have the intended effect on the user.
|
||||
2. This spec inherits the security considerations of the [RLN spec](../32/rln-v1.md/#security-considerations).
|
||||
3. This spec inherits the security considerations of [Interep](https://docs.interep.link/).
|
||||
4. A user may make multiple registrations using the same Interep proofs but
|
||||
different identity commitments.
|
||||
The way to mitigate this is to check if the nullifier hash has been detected
|
||||
previously in proof verification.
|
||||
|
||||
## References
|
||||
|
||||
1. [RLN spec](../32/rln-v1.md)
|
||||
2. [Interep](https://interep.link)
|
||||
3. [Semaphore](https://semaphore.appliedzkp.org/)
|
||||
4. [Decentralized cloudflare using Interep](https://ethresear.ch/t/decentralised-cloudflare-using-rln-and-rich-user-identities/10774)
|
||||
5. [Interep contracts](https://github.com/interep-project/contracts)
|
||||
6. [RLN contract](https://github.com/vacp2p/rln-contract)
|
||||
7. [RLNP2P](https://rlnp2p.vac.dev/)
|
||||
---
title: RLN-INTEREP-SPEC
name: Interep as group management for RLN
status: raw
category:
tags: rln
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
---

## Abstract

This spec integrates [Interep](https://interep.link)
into the [RLN](../32/rln-v1.md) spec.
Interep is a group management protocol
that allows for the creation of groups of users and
the management of their membership.
It is used to manage the membership of the RLN group.

Interep ties web2 identities to reputation and
sorts users into groups based on their reputation score.
For example, a GitHub user with over 100 followers is considered to have "gold" reputation.

Interep uses [Semaphore](https://semaphore.appliedzkp.org/)
under the hood to allow anonymous signaling of membership in a group.
Therefore, a user with "gold" reputation can prove their membership
without revealing their identity.

RLN is used for spam prevention, and Interep is used for group management.

By using Interep with RLN,
we allow users to join RLN membership groups
without the need for an on-chain financial stake.

## Motivation

To provide Sybil-resistant group management,
there are [implementations](https://github.com/vacp2p/rln-contract)
of RLN which make use of financial stake on-chain.
However, this is not ideal because it raises the barrier to entry for honest participants.

Honest participants will most likely have a web2 identity accessible to them,
which can be used to join an Interep reputation group.
By modifying the RLN spec to use Interep,
we can have Sybil-resistant group management
without the need for an on-chain financial stake.

Since RLN and Interep both use Semaphore-style credentials,
it is possible to use the same set of credentials for both.

## Functional Operation

Using Interep with RLN involves the following steps:

1. Generate Semaphore credentials
2. Verify reputation and join an Interep group
3. Join the RLN membership group via interaction with the smart contract,
by passing a proof of membership in the Interep group

### 1. Generate Semaphore credentials

Semaphore credentials are generated in the standard way,
described in the [Semaphore documentation](https://semaphore.appliedzkp.org/docs/guides/identities#create-deterministic-identities).

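For illustration only, the shape of such deterministic credentials can be sketched as follows. This is a structural sketch, not the Semaphore library API: the hash is a placeholder stand-in for Poseidon, and all names and values are illustrative.

```js
// Structural sketch only: a placeholder hash stands in for Poseidon, and
// none of these names are the Semaphore library API. A seed deterministically
// yields a trapdoor and a nullifier; their double hash is the public
// identity commitment.
const F = 21888242871839275222246405745257275088548364400416034343698204186575808495617n;
const hash = (...xs) => xs.reduce((a, x) => (a * 31n + x) % F, 7n); // placeholder

function deterministicIdentity(seed) {
  const trapdoor = hash(seed, 0n);
  const nullifier = hash(seed, 1n);
  return { trapdoor, nullifier, commitment: hash(hash(nullifier, trapdoor)) };
}

// The same seed always reproduces the same commitment:
const a = deterministicIdentity(42n);
const b = deterministicIdentity(42n);
console.log(a.commitment === b.commitment); // true
```

Only the commitment is made public; the trapdoor and nullifier stay with the user.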
### 2. Verify reputation and join Interep group

Using the Interep app deployed on [Goerli](https://goerli.interep.link/),
the user can check their reputation tier and join the corresponding group.
This results in a transaction to the Interep contract, which adds them to the group.

### 3. Join RLN membership group

Instead of sending funds to the RLN contract to join the membership group,
the user can send a proof of membership in the Interep group.
This proof is generated by the user and
verified by the contract.
The contract ensures that the user is a member of the Interep group, and
then adds them to the RLN membership group.

The following is the modified signature of the register function
in the RLN contract:

```solidity
/// @param groupId: Id of the group.
/// @param signal: Semaphore signal.
/// @param nullifierHash: Nullifier hash.
/// @param externalNullifier: External nullifier.
/// @param proof: Zero-knowledge proof.
/// @param idCommitment: ID Commitment of the member.
function register(
    uint256 groupId,
    bytes32 signal,
    uint256 nullifierHash,
    uint256 externalNullifier,
    uint256[8] calldata proof,
    uint256 idCommitment
)
```

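The registration flow described above can be sketched in JS for illustration. `verifyProof` is a hypothetical stand-in for the on-chain Semaphore proof verifier, not part of any real ABI, and tracking seen nullifier hashes is one way to stop the same Interep proof being reused with different identity commitments.

```js
// Illustrative sketch of the contract's registration logic (not Solidity);
// verifyProof is a hypothetical stand-in for the Semaphore verifier.
const seenNullifiers = new Set();
const membershipSet = [];

function register(groupId, signal, nullifierHash, externalNullifier, proof, idCommitment, verifyProof) {
  // A nullifier hash may only be used once, so one Interep proof cannot
  // register several identity commitments.
  if (seenNullifiers.has(nullifierHash)) throw new Error("nullifier already used");
  if (!verifyProof(groupId, signal, nullifierHash, externalNullifier, proof)) {
    throw new Error("invalid Interep membership proof");
  }
  seenNullifiers.add(nullifierHash);
  membershipSet.push(idCommitment); // member joins RLN without financial stake
}
```

A second call with the same nullifier hash is rejected, even with a fresh identity commitment.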
## Verification of messages

Messages are verified the same way as in the [RLN spec](../32/rln-v1.md/#verification).

## Slashing

The slashing mechanism is the same as in the [RLN spec](../32/rln-v1.md/#slashing).
It is important to note that slashing
may not have the intended effect on the user,
since the only consequence is that they cannot send messages.
This is because the user
can send an identity commitment in the registration to the RLN contract
which is different from the one used in the Interep group.

## Proof of Concept

A proof of concept which integrates Interep with RLN is available at
[vacp2p/rln-interep-contract](https://github.com/vacp2p/rln-interep-contract).

## Security Considerations

1. As mentioned in [Slashing](#slashing),
the slashing mechanism may not have the intended effect on the user.
2. This spec inherits the security considerations of the [RLN spec](../32/rln-v1.md/#security-considerations).
3. This spec inherits the security considerations of [Interep](https://docs.interep.link/).
4. A user may make multiple registrations using the same Interep proof but
different identity commitments.
The way to mitigate this is to check during proof verification
whether the nullifier hash has been seen before.

## References

1. [RLN spec](../32/rln-v1.md)
2. [Interep](https://interep.link)
3. [Semaphore](https://semaphore.appliedzkp.org/)
4. [Decentralized Cloudflare using Interep](https://ethresear.ch/t/decentralised-cloudflare-using-rln-and-rich-user-identities/10774)
5. [Interep contracts](https://github.com/interep-project/contracts)
6. [RLN contract](https://github.com/vacp2p/rln-contract)
7. [RLNP2P](https://rlnp2p.vac.dev/)

---
title: RLN-STEALTH-COMMITMENTS
name: RLN Stealth Commitment Usage
category: Standards Track
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Jimmy Debe <jimmy@status.im>
---

## Abstract

This specification describes the usage of stealth commitments
to add prospective users to a network-governed
[32/RLN-V1](./32/rln-v1.md) membership set.

## Motivation

When [32/RLN-V1](./32/rln-v1.md) is enforced in [10/Waku2](../waku/standards/core/10/waku2.md),
all users are required to register to a membership set.
The membership set stores user identities,
allowing secure interaction within an application.
Forcing a user to make an on-chain transaction
to join a membership set is an onboarding friction,
and some projects may be opposed to this method.
To improve the user experience,
stealth commitments can be used by a counterparty
to register identities on the user's behalf,
while maintaining the user's anonymity.

This document specifies a privacy-preserving mechanism
allowing a counterparty to utilize [32/RLN-V1](./32/rln-v1.md)
to register an `identityCommitment` on-chain.
Counterparties are able to register members
to an RLN membership set without exposing the user's private keys.

## Background

The [32/RLN-V1](./32/rln-v1.md) protocol
consists of a smart contract that stores an `identityCommitment`
in a membership set.
In order for a user to join the membership set,
the user is required to make a transaction on the blockchain.
A set of public keys is used to compute a stealth commitment for a user,
as described in [ERC-5564](https://eips.ethereum.org/EIPS/eip-5564).
This specification is an implementation of the
[ERC-5564](https://eips.ethereum.org/EIPS/eip-5564) scheme,
tailored to the curve that is used in the [32/RLN-V1](./32/rln-v1.md) protocol.

This can be used in a couple of ways in applications:

1. Applications can add users
to the [32/RLN-V1](./32/rln-v1.md) membership set in a batch.
2. Users of the application
can register other users to the [32/RLN-V1](./32/rln-v1.md) membership set.

This is useful when the prospective user does not have access to funds
on the network that [32/RLN-V1](./32/rln-v1.md) is deployed on.

## Wire Format Specification

The two parties, the requester and the receiver,
MUST exchange the following information:

```protobuf
message Request {
  // The spending public key of the requester
  bytes spending_public_key = 1;

  // The viewing public key of the requester
  bytes viewing_public_key = 2;
}
```

### Generate Stealth Commitment

The application or user SHOULD generate a `stealth_commitment`
after a request to do so is received.
This commitment MAY be inserted into the corresponding application membership set.

Once the membership set is updated,
the receiver SHOULD exchange the following as a response to the request:

```protobuf
message Response {
  // Used to check whether the stealth_commitment belongs to the requester
  bytes view_tag = 2;

  // The stealth commitment for the requester
  bytes stealth_commitment = 3;

  // The ephemeral public key used to generate the commitment
  bytes ephemeral_public_key = 4;
}
```

The receiver MUST generate an `ephemeral_public_key`,
`view_tag` and `stealth_commitment`.
These are used to check the stealth commitment
used to register to the membership set,
and the user MUST be able to check ownership with their `viewing_public_key`.

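The exchange above can be walked through with toy numbers. In this sketch a multiplicative group mod a 64-bit prime stands in for BN254 point arithmetic, `toyHash` is a placeholder, and every key is illustrative; it only demonstrates the ERC-5564-style derivation and the view-tag check, not real cryptography.

```js
// Toy walkthrough of the ERC-5564-style flow. A multiplicative group mod a
// 64-bit prime stands in for BN254 points; toyHash is a placeholder.
const P = 18446744073709551557n; // largest prime below 2^64 (NOT the BN254 field)
const G = 5n;                    // toy generator

const modpow = (b, e, m) => {
  let r = 1n;
  for (b %= m; e > 0n; e >>= 1n, b = (b * b) % m) if (e & 1n) r = (r * b) % m;
  return r;
};
const toyHash = (x) => (x * 2654435761n + 0x9e3779b9n) % (P - 1n); // placeholder hash

// Requester's key pairs from the Request message
const spendPriv = 1234n, viewPriv = 5678n;
const spendPub = modpow(G, spendPriv, P);
const viewPub = modpow(G, viewPriv, P);

// Receiver: ephemeral key -> shared secret -> stealth commitment + view tag
const ephPriv = 4242n;
const ephPub = modpow(G, ephPriv, P);                 // ephemeral_public_key
const sharedR = toyHash(modpow(viewPub, ephPriv, P));
const stealthCommitment = (spendPub * modpow(G, sharedR, P)) % P;
const viewTag = toyHash(sharedR) & 0xffn;

// Requester: recompute the shared secret with the viewing private key,
// check the view tag, then confirm ownership of the commitment.
const sharedU = toyHash(modpow(ephPub, viewPriv, P));
console.log(viewTag === (toyHash(sharedU) & 0xffn));                       // true
console.log(stealthCommitment === (spendPub * modpow(G, sharedU, P)) % P); // true
```

Only the viewing private key is needed to confirm ownership; the spending private key never leaves the requester.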
## Implementation Suggestions

An implementation of the Stealth Address scheme is available in the
[erc-5564-bn254](https://github.com/rymnc/erc-5564-bn254) repository,
which also includes a test to generate a stealth commitment for a given user.

## Security/Privacy Considerations

This specification inherits the security and privacy considerations of the
[Stealth Address](https://eips.ethereum.org/EIPS/eip-5564) scheme.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [10/Waku2](../waku/standards/core/10/waku2.md)
- [32/RLN-V1](./32/rln-v1.md)
- [ERC-5564](https://eips.ethereum.org/EIPS/eip-5564)
---
title: RLN-V2
name: Rate Limit Nullifier V2
status: raw
editor: Rasul Ibragimov <curryrasul@gmail.com>
contributors:
- Lev Soukhanov <0xdeadfae@gmail.com>
---

## Abstract

The protocol specified in this document is an improvement of [32/RLN-V1](../32/rln-v1.md):
a more general construct that allows setting various limits for an epoch
(it is 1 message per epoch in [32/RLN-V1](../32/rln-v1.md))
while remaining almost as simple as its predecessor.
Moreover, it allows setting different rate-limits
for different RLN app users based on some public data,
e.g. stake or reputation.

## Motivation

The main goal of this RFC is to generalize [32/RLN-V1](../32/rln-v1.md) and
expand its applications.
There are two different subprotocols based on this protocol:

* RLN-Same - RLN with the same rate-limit for all users;
* RLN-Diff - RLN that allows setting different rate-limits for different users.

It is important to note that by using a large epoch limit value,
users will be able to remain anonymous,
because their `internal_nullifiers` will not repeat until they exceed the limit.

## Flow

As in [32/RLN-V1](../32/rln-v1.md), the general flow can be described by three steps:

1. Registration
2. Signaling
3. Verification and slashing

The two sub-protocols have different flows, and
hence are defined separately.

### Important note

All terms and parameters used remain the same as in [32/RLN-V1](../32/rln-v1.md);
more details [here](../32/rln-v1.md/#technical-overview).

## RLN-Same flow

### Registration

The registration process in the RLN-Same subprotocol does not differ from [32/RLN-V1](../32/rln-v1.md).

### Signalling

#### Proof generation

For proof generation, the user needs to submit the following fields to the circuit:

```js
{
  identity_secret: identity_secret_hash,
  path_elements: Merkle_proof.path_elements,
  identity_path_index: Merkle_proof.indices,
  x: signal_hash,
  message_id: message_id,
  external_nullifier: external_nullifier,
  message_limit: message_limit
}
```

#### Calculating output

The following fields are needed for proof output calculation:

```js
{
  identity_secret_hash: bigint,
  external_nullifier: bigint,
  message_id: bigint,
  x: bigint,
}
```

The output `[y, internal_nullifier]` is calculated in the following way:

```js
a_0 = identity_secret_hash
a_1 = poseidonHash([a_0, external_nullifier, message_id])

y = a_0 + x * a_1

internal_nullifier = poseidonHash([a_1])
```

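The algebra above can be exercised with plain field arithmetic. This is a minimal sketch in which `poseidonHash` is a placeholder stand-in for the real Poseidon permutation; it shows both the output computation and why a reused `message_id` within one epoch makes `identity_secret_hash` recoverable, which is the basis of slashing.

```js
// Field-arithmetic sketch: two signals that reuse the same
// (external_nullifier, message_id) give two points on one line,
// from which the secret is interpolated. poseidonHash is a placeholder.
const P = 21888242871839275222246405745257275088548364400416034343698204186575808495617n;
const mod = (a) => ((a % P) + P) % P;
const modpow = (b, e, m) => {
  let r = 1n;
  for (b %= m; e > 0n; e >>= 1n, b = (b * b) % m) if (e & 1n) r = (r * b) % m;
  return r;
};
const inv = (a) => modpow(mod(a), P - 2n, P); // Fermat inverse
const poseidonHash = (xs) => mod(xs.reduce((a, x) => a * 31n + x, 7n)); // placeholder

function output(identitySecretHash, externalNullifier, messageId, x) {
  const a0 = identitySecretHash;
  const a1 = poseidonHash([a0, externalNullifier, messageId]);
  return { y: mod(a0 + x * a1), internalNullifier: poseidonHash([a1]) };
}

// Two signals in the same epoch with the same message_id:
const secret = 987654321n, epoch = 11n, msgId = 3n;
const s1 = output(secret, epoch, msgId, 111n);
const s2 = output(secret, epoch, msgId, 222n);

// The repeated internal_nullifier flags the spam;
// interpolation then recovers the secret: a_1 = (y1 - y2) / (x1 - x2).
const a1 = mod((s1.y - s2.y) * inv(111n - 222n));
const recovered = mod(s1.y - 111n * a1);
console.log(s1.internalNullifier === s2.internalNullifier, recovered === secret); // true true
```

A fresh `message_id` yields a fresh `a_1`, so distinct messages within the limit reveal nothing.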
## RLN-Diff flow

### Registration

**id_commitment** in [32/RLN-V1](../32/rln-v1.md) is equal to `poseidonHash(identity_secret)`.
The goal of RLN-Diff is to set different rate-limits for different users.
It follows that **id_commitment** must somehow depend
on the `user_message_limit` parameter,
where 0 <= `user_message_limit` <= `message_limit`.
There are a few ways to do that:

1. Sending `identity_secret_hash` = `poseidonHash(identity_secret, userMessageLimit)`
and a zk proof that `user_message_limit` is valid (is in the right range).
This approach requires zkSNARK verification,
which is an expensive operation on the blockchain.
2. Sending the same `identity_secret_hash` as in [32/RLN-V1](../32/rln-v1.md)
(`poseidonHash(identity_secret)`) and a `user_message_limit` publicly to a server
or smart contract, where
`rate_commitment` = `poseidonHash(identity_secret_hash, userMessageLimit)` is calculated.
The leaves in the membership Merkle tree would then be the rate_commitments of the users.
This approach requires additional hashing in the circuit, but
it eliminates the need for zk proof verification at registration.

Both methods are correct, and the choice of method is left to the implementer.
It is recommended to use the second method for the reasons already described.
The following flow description is therefore based on the second method.

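Method 2 can be sketched as follows; `poseidonHash` is a placeholder stand-in for the real Poseidon hash, and the concrete values are illustrative.

```js
// Sketch of registration method 2: the server or contract derives the
// Merkle leaf from the public user_message_limit. poseidonHash is a
// placeholder stand-in, not the real Poseidon hash.
const F = 21888242871839275222246405745257275088548364400416034343698204186575808495617n;
const poseidonHash = (xs) => xs.reduce((a, x) => (a * 31n + x) % F, 7n); // placeholder

const identitySecretHash = 123456789n; // poseidonHash(identity_secret) in the real protocol
const userMessageLimit = 20n;          // 0 <= user_message_limit <= message_limit

// The leaf inserted into the membership Merkle tree:
const rateCommitment = poseidonHash([identitySecretHash, userMessageLimit]);
console.log(rateCommitment); // deterministic for a given (secret hash, limit) pair
```

Because `user_message_limit` is public at registration, no zk range proof is needed at this step; the circuit later rehashes the pair to tie the limit to the leaf.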
### Signalling

#### Proof generation

For proof generation, the user needs to submit the following fields to the circuit:

```js
{
  identity_secret: identity_secret_hash,
  path_elements: Merkle_proof.path_elements,
  identity_path_index: Merkle_proof.indices,
  x: signal_hash,
  message_id: message_id,
  external_nullifier: external_nullifier,
  user_message_limit: message_limit
}
```

#### Calculating output

The output is calculated in the same way as in the RLN-Same sub-protocol.

### Verification and slashing

Verification and slashing in both subprotocols remain the same as in [32/RLN-V1](../32/rln-v1.md).
The only difference that may arise is the `message_limit` check in RLN-Same,
since it is now a public input of the circuit.

### ZK Circuits specification

The design of the [32/RLN-V1](../32/rln-v1.md) circuits
is different from the circuits of this protocol.
RLN-V2 requires additional algebraic constraints.
The membership proof and Shamir's Secret Sharing constraints remain unchanged.

The ZK circuit is implemented as a [Groth-16 ZK-SNARK](https://eprint.iacr.org/2016/260.pdf),
using the [circomlib](https://docs.circom.io/) library.
Both schemes contain compile-time constants/system parameters:

* DEPTH - depth of the membership Merkle tree
* LIMIT_BIT_SIZE - bit size of `limit` numbers;
e.g. for 16, the maximum `limit` number is 65535.

The main difference of the protocol is that instead of a new polynomial
(a new value `a_1`) for each new epoch, a new polynomial is generated for each message.
The user assigns an identifier to each message;
the main requirement is that this identifier be in the range from 1 to `limit`.
This is proven using range constraints.

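The range constraint can be mirrored outside the circuit as a sanity check; a small sketch with illustrative names:

```js
// With LIMIT_BIT_SIZE = 16 the largest representable limit is 2^16 - 1,
// and the circuit range-constrains every message_id to lie in [1, limit].
const LIMIT_BIT_SIZE = 16n;
const MAX_LIMIT = (1n << LIMIT_BIT_SIZE) - 1n;

const validMessageId = (id, limit) =>
  limit <= MAX_LIMIT && id >= 1n && id <= limit;

console.log(MAX_LIMIT === 65535n, validMessageId(3n, 20n), validMessageId(21n, 20n)); // true true false
```

Exceeding the limit forces a reused `message_id`, which is exactly what makes the sender slashable.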
### RLN-Same circuit

#### Circuit parameters

Public Inputs

* `x`
* `external_nullifier`
* `message_limit` - limit per epoch

Private Inputs

* `identity_secret_hash`
* `path_elements`
* `identity_path_index`
* `message_id`

Outputs

* `y`
* `root`
* `internal_nullifier`

### RLN-Diff circuit

In the RLN-Diff scheme, instead of the public parameter `message_limit`,
a parameter is used that is set for each user during registration (`user_message_limit`);
the `message_id` value is compared to it in the same way
as it is compared to `message_limit` in the case of RLN-Same.

#### Circuit parameters

Public Inputs

* `x`
* `external_nullifier`

Private Inputs

* `identity_secret_hash`
* `path_elements`
* `identity_path_index`
* `message_id`
* `user_message_limit`

Outputs

* `y`
* `root`
* `internal_nullifier`

## Appendix A: Security considerations

Although there are changes in the circuits,
this spec inherits all the security considerations of [32/RLN-V1](../32/rln-v1.md).

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

* [1](https://zkresear.ch/t/rate-limit-nullifier-v2-circuits/102)
* [2](https://github.com/Rate-Limiting-Nullifier/rln-circuits-v2)
* [3](../32/rln-v1.md/#technical-overview)
|
||||
---
|
||||
title: RLN-V2
|
||||
name: Rate Limit Nullifier V2
|
||||
status: raw
|
||||
editor: Rasul Ibragimov <curryrasul@gmail.com>
|
||||
contributors:
|
||||
- Lev Soukhanov <0xdeadfae@gmail.com>
|
||||
---
|
||||
|
||||
## Abstract
|
||||
|
||||
The protocol specified in this document is an improvement of [32/RLN-V1](../32/rln-v1.md),
|
||||
being more general construct, that allows to set various limits for an epoch
|
||||
(it's 1 message per epoch in [32/RLN-V1](../32/rln-v1.md))
|
||||
while remaining almost as simple as it predecessor.
|
||||
Moreover, it allows to set different rate-limits
|
||||
for different RLN app users based on some public data,
|
||||
e.g. stake or reputation.
|
||||
|
||||
## Motivation
|
||||
|
||||
The main goal of this RFC is to generalize [32/RLN-V1](../32/rln-v1.md) and
|
||||
expand its applications.
|
||||
There are two different subprotocols based on this protocol:
|
||||
|
||||
* RLN-Same - RLN with the same rate-limit for all users;
|
||||
* RLN-Diff - RLN that allows to set different rate-limits for different users.
|
||||
|
||||
It is important to note that by using a large epoch limit value,
|
||||
users will be able to remain anonymous,
|
||||
because their `internal_nullifiers` will not be repeated until they exceed the limit.
|
||||
|
||||
## Flow
|
||||
|
||||
As in [32/RLN-V1](../32/rln-v1.md), the general flow can be described by three steps:
|
||||
|
||||
1. Registration
|
||||
2. Signaling
|
||||
3. Verification and slashing
|
||||
|
||||
The two sub-protocols have different flows, and
|
||||
hence are defined separately.
|
||||
|
||||
### Important note
|
||||
|
||||
All terms and parameters used remain the same as in [32/RLN-V1](../32/rln-v1.md),
|
||||
more details [here](../32/rln-v1.md/#technical-overview)
|
||||
|
||||
## RLN-Same flow
|
||||
|
||||
### Registration
|
||||
|
||||
The registration process in the RLN-Same subprotocol does not differ from [32/RLN-V1](../32/rln-v1.md).
|
||||
|
||||
### Signalling
|
||||
|
||||
#### Proof generation
|
||||
|
||||
For proof generation, the user needs to submit the following fields to the circuit:
|
||||
|
||||
```js
|
||||
{
|
||||
identity_secret: identity_secret_hash,
|
||||
path_elements: Merkle_proof.path_elements,
|
||||
identity_path_index: Merkle_proof.indices,
|
||||
x: signal_hash,
|
||||
message_id: message_id,
|
||||
external_nullifier: external_nullifier,
|
||||
message_limit: message_limit
|
||||
}
|
||||
```
|
||||
|
||||
#### Calculating output
|
||||
|
||||
The following fields are needed for proof output calculation:
|
||||
|
||||
```js
|
||||
{
|
||||
identity_secret_hash: bigint,
|
||||
external_nullifier: bigint,
|
||||
message_id: bigint,
|
||||
x: bigint,
|
||||
}
|
||||
```
|
||||
|
||||
The output `[y, internal_nullifier]` is calculated in the following way:
|
||||
|
||||
```js
|
||||
a_0 = identity_secret_hash
|
||||
a_1 = poseidonHash([a0, external_nullifier, message_id])
|
||||
|
||||
y = a_0 + x * a_1
|
||||
|
||||
internal_nullifier = poseidonHash([a_1])
|
||||
```
## RLN-Diff flow

### Registration

**id_commitment** in [32/RLN-V1](../32/rln-v1.md) is equal to `poseidonHash(identity_secret)`.
The goal of RLN-Diff is to set different rate-limits for different users.
It follows that **id_commitment** must somehow depend
on the `user_message_limit` parameter,
where 0 <= `user_message_limit` <= `message_limit`.
There are a few ways to do that:

1. Sending `identity_secret_hash` = `poseidonHash(identity_secret, userMessageLimit)`
and a zk proof that `user_message_limit` is valid (is in the right range).
This approach requires zkSNARK verification,
which is an expensive operation on the blockchain.
2. Sending the same `identity_secret_hash` as in [32/RLN-V1](../32/rln-v1.md)
(`poseidonHash(identity_secret)`) and a `user_message_limit` publicly to a server
or smart contract, where
`rate_commitment` = `poseidonHash(identity_secret_hash, userMessageLimit)` is calculated.
The leaves in the membership Merkle tree would be the `rate_commitment`s of the users.
This approach requires additional hashing in the Circuit, but
it eliminates the need for zk proof verification during registration.

Both methods are correct, and the choice of the method is left to the implementer.
It is recommended to use the second method for the reasons already described.
The following flow description will also be based on the second method.
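The second registration method can be sketched as below. This is an illustrative sketch only: `hash_to_field` stands in for Poseidon (SHA-256 here, just to be runnable), and the range check is performed by the hypothetical server/contract rather than inside a circuit.

```python
import hashlib

# BN254 scalar field modulus (assumption for illustration).
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617


def hash_to_field(*values: int) -> int:
    # Placeholder for poseidonHash; a real implementation would use Poseidon.
    data = b"".join(v.to_bytes(32, "big") for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P


def register(identity_secret_hash: int, user_message_limit: int,
             message_limit: int) -> int:
    # The server/contract checks the range directly, which is why no
    # zkSNARK verification is needed at registration time.
    if not (0 <= user_message_limit <= message_limit):
        raise ValueError("user_message_limit out of range")
    # The rate_commitment becomes the user's leaf in the membership Merkle tree.
    return hash_to_field(identity_secret_hash, user_message_limit)
```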
### Signalling

#### Proof generation

For proof generation, the user needs to submit the following fields to the circuit:

```js
{
  identity_secret: identity_secret_hash,
  path_elements: Merkle_proof.path_elements,
  identity_path_index: Merkle_proof.indices,
  x: signal_hash,
  message_id: message_id,
  external_nullifier: external_nullifier,
  user_message_limit: message_limit
}
```

#### Calculating output

The output is calculated in the same way as in the RLN-Same sub-protocol.

### Verification and slashing

Verification and slashing in both sub-protocols remain the same as in [32/RLN-V1](../32/rln-v1.md).
The only difference that may arise is the `message_limit` check in RLN-Same,
since it is now a public input of the Circuit.
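Slashing relies on the Shamir's Secret Sharing property of the output calculation: two distinct shares `(x1, y1)` and `(x2, y2)` on the same line `y = a_0 + x * a_1` (i.e. two messages with the same `message_id` in one epoch) reveal the intercept `a_0`, the identity secret hash. A minimal sketch, assuming `P` is the BN254 scalar field modulus:

```python
# BN254 scalar field modulus (assumption for illustration).
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617


def recover_secret(x1: int, y1: int, x2: int, y2: int) -> int:
    # Two points determine the line y = a_0 + x * a_1 over the field;
    # the intercept a_0 is the identity secret hash, enabling slashing.
    a_1 = (y2 - y1) * pow(x2 - x1, -1, P) % P
    a_0 = (y1 - x1 * a_1) % P
    return a_0
```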
### ZK Circuits specification

The design of the [32/RLN-V1](../32/rln-v1.md) circuits
is different from the circuits of this protocol.
RLN-v2 requires additional algebraic constraints.
The membership proof and Shamir's Secret Sharing constraints remain unchanged.

The ZK Circuit is implemented using a [Groth-16 ZK-SNARK](https://eprint.iacr.org/2016/260.pdf),
using the [circomlib](https://docs.circom.io/) library.
Both schemes contain compile-time constants/system parameters:

* DEPTH - depth of the membership Merkle tree
* LIMIT_BIT_SIZE - bit size of `limit` numbers;
e.g. for a bit size of 16, the maximum `limit` number is 65535.

The main difference of the protocol is that instead of a new polynomial
(a new value `a_1`) for each epoch, a new polynomial is generated for each message.
The user assigns an identifier to each message;
the main requirement is that this identifier be in the range from 1 to `limit`.
This is proven using range constraints.
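The idea behind the range constraint can be sketched outside the circuit as follows. In a real circom circuit this would be enforced via a `LIMIT_BIT_SIZE`-bit decomposition; the sketch below only mirrors the condition being proven.

```python
LIMIT_BIT_SIZE = 16  # system parameter; maximum representable limit is 2**16 - 1


def range_check(message_id: int, limit: int) -> bool:
    # Mirrors the circuit constraints: message_id must fit in
    # LIMIT_BIT_SIZE bits (bit decomposition) and satisfy
    # 1 <= message_id <= limit.
    fits = 0 <= message_id < (1 << LIMIT_BIT_SIZE)
    return fits and 1 <= message_id <= limit
```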
### RLN-Same circuit

#### Circuit parameters

**Public Inputs**

* `x`
* `external_nullifier`
* `message_limit` - limit per epoch

**Private Inputs**

* `identity_secret_hash`
* `path_elements`
* `identity_path_index`
* `message_id`

**Outputs**

* `y`
* `root`
* `internal_nullifier`

### RLN-Diff circuit

In the RLN-Diff scheme, instead of the public parameter `message_limit`,
a parameter is used that is set for each user during registration (`user_message_limit`);
the `message_id` value is compared to it in the same way
as it is compared to `message_limit` in the case of RLN-Same.

#### Circuit parameters

**Public Inputs**

* `x`
* `external_nullifier`

**Private Inputs**

* `identity_secret_hash`
* `path_elements`
* `identity_path_index`
* `message_id`
* `user_message_limit`

**Outputs**

* `y`
* `root`
* `internal_nullifier`

## Appendix A: Security considerations

Although there are changes in the circuits,
this spec inherits all the security considerations of [32/RLN-V1](../32/rln-v1.md).

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

* [1](https://zkresear.ch/t/rate-limit-nullifier-v2-circuits/102)
* [2](https://github.com/Rate-Limiting-Nullifier/rln-circuits-v2)
* [3](../32/rln-v1.md/#technical-overview)
vac/raw/sds.md
---
title: SDS
name: Scalable Data Sync protocol for distributed logs
status: raw
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Akhil Peddireddy <akhil@status.im>
---

## Abstract

This specification introduces the Scalable Data Sync (SDS) protocol
to achieve end-to-end reliability
when consolidating distributed logs in a decentralized manner.
The protocol is designed for a peer-to-peer (p2p) topology
where an append-only log is maintained by each member of a group of nodes,
each of whom may individually append new entries to their local log at any time and
is interested in merging new entries from other nodes in real-time or close to real-time,
while maintaining a consistent order.
The outcome of the log consolidation procedure is
that all nodes in the group eventually reflect in their own logs
the same entries in the same order.
The protocol aims to scale to very large groups.

## Motivation

A common application that fits this model is a p2p group chat (or group communication),
where the participants act as log nodes
and the group conversation is modelled as the consolidated logs
maintained on each node.
The problem of end-to-end reliability can then be stated as
ensuring that all participants eventually see the same sequence of messages
in the same causal order,
despite the challenges of network latency, message loss,
and scalability present in any communications transport layer.
The rest of this document will assume the terminology of a group communication:
log nodes being the _participants_ in the group chat
and the logged entries being the _messages_ exchanged between participants.

## Design Assumptions

We make the following simplifying assumptions for the proposed reliability protocol:

* **Broadcast routing:**
Messages are broadcast-disseminated by the underlying transport.
The selected transport takes care of routing messages
to all participants of the communication.
* **Store nodes:**
There are high-availability caches (a.k.a. Store nodes)
from which missed messages can be retrieved.
These caches maintain the full history of all messages that have been broadcast.
This is an optional element in the protocol design,
but it improves scalability by reducing direct interactions between participants.
* **Message ID:**
Each message has a globally unique, immutable ID (or hash).
Messages can be requested from the high-availability caches or
other participants using the corresponding message ID.
* **Participant ID:**
Each participant has a globally unique, immutable ID
visible to other participants in the communication.
* **Sender ID:**
The **Participant ID** of the original sender of a message,
often coupled with a **Message ID**.
## Wire protocol

The keywords “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”,
“SHOULD NOT”, “RECOMMENDED”, “MAY”, and
“OPTIONAL” in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).

### Message

Messages MUST adhere to the following meta structure:

```protobuf
syntax = "proto3";

message HistoryEntry {
  string message_id = 1;             // Unique identifier of the SDS message, as defined in `Message`
  optional bytes retrieval_hint = 2; // Optional information to help remote parties retrieve this SDS message, e.g. a Waku deterministic message hash or routing payload hash
  optional string sender_id = 3;     // Participant ID of the original message sender. Only populated if using the optional SDS Repair extension
}

message Message {
  string sender_id = 1;                      // Participant ID of the message sender
  string message_id = 2;                     // Unique identifier of the message
  string channel_id = 3;                     // Identifier of the channel to which the message belongs
  optional uint64 lamport_timestamp = 10;    // Logical timestamp for causal ordering in channel
  repeated HistoryEntry causal_history = 11; // List of preceding message IDs that this message causally depends on. Generally 2 or 3 message IDs are included.
  optional bytes bloom_filter = 12;          // Bloom filter representing received message IDs in channel
  repeated HistoryEntry repair_request = 13; // Capped list of history entries missing from the sender's causal history. Only populated if using the optional SDS Repair extension.
  optional bytes content = 20;               // Actual content of the message
}
```

The sending participant MUST include its own globally unique identifier in the `sender_id` field.
In addition, it MUST include a globally unique identifier for the message in the `message_id` field,
likely based on a message hash.
The `channel_id` field MUST be set to the identifier of the channel of group communication
that is being synchronized.
For simple group communications without individual channels,
the `channel_id` SHOULD be set to `0`.
The `lamport_timestamp`, `causal_history` and
`bloom_filter` fields MUST be set according to the [protocol steps](#protocol-steps)
set out below.
These fields MAY be left unset in the case of [ephemeral messages](#ephemeral-messages).
The message `content` MAY be left empty for [periodic sync messages](#periodic-sync-message);
otherwise it MUST contain the application-level content.

> **_Note:_** Close readers may notice that,
outside of filtering messages originating from the sender itself,
the `sender_id` field is not used for much.
Its importance is expected to increase once a p2p retrieval mechanism is added to SDS,
as is planned for the protocol.
### Participant state

Each participant MUST maintain:

* A Lamport timestamp for each channel of communication,
initialized to the current epoch time in millisecond resolution.
The Lamport timestamp is increased as described in the [protocol steps](#protocol-steps)
to maintain a logical ordering of events while staying close to the current epoch time.
This allows the messages from new joiners to be correctly ordered with other recent messages,
without these new participants first having to synchronize past messages
to discover the current Lamport timestamp.
* A bloom filter for received message IDs per channel.
The bloom filter SHOULD be rolled over and
recomputed once it reaches a predefined capacity of message IDs.
Furthermore,
it SHOULD be designed to minimize false positives through an optimal selection of
size and hash functions.
* A buffer for unacknowledged outgoing messages.
* A buffer for incoming messages with unmet causal dependencies.
* A local log (or history) for each channel,
containing all message IDs in the communication channel,
ordered by Lamport timestamp.
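The per-channel bloom filter above can be sketched minimally as follows. The size and hash-count parameters here are illustrative only, not normative, and SHA-256 stands in for whatever hash family an implementation chooses.

```python
import hashlib


class BloomFilter:
    """Minimal bloom filter for message IDs (illustrative parameters)."""

    def __init__(self, size_bits: int = 1024, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit field of the filter

    def _positions(self, message_id: str):
        # Derive num_hashes positions by salting a single hash function.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{message_id}".encode()).digest()
            yield int.from_bytes(digest, "big") % self.size

    def add(self, message_id: str) -> None:
        for pos in self._positions(message_id):
            self.bits |= 1 << pos

    def __contains__(self, message_id: str) -> bool:
        # May return false positives, never false negatives.
        return all(self.bits >> pos & 1 for pos in self._positions(message_id))
```

Rolling over at capacity bounds the false-positive rate, since the rate grows with the number of inserted IDs.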
Messages in the unacknowledged outgoing buffer can be in one of three states:

1. **Unacknowledged** - there has been no acknowledgement of message receipt
by any participant in the channel
2. **Possibly acknowledged** - there has been ambiguous indication that the message
has been _possibly_ received by at least one participant in the channel
3. **Acknowledged** - there has been sufficient indication that the message
has been received by at least some of the participants in the channel.
This state also removes the message from the outgoing buffer.

### Protocol Steps

For each channel of communication,
participants MUST follow these protocol steps to populate and interpret
the `lamport_timestamp`, `causal_history` and `bloom_filter` fields.

#### Send Message

Before broadcasting a message:

* the participant MUST set its local Lamport timestamp
to the maximum of the current value + `1`
and the current epoch time in milliseconds.
In other words, the local Lamport timestamp is set to `max(timeNowInMs, current_lamport_timestamp + 1)`.
* the participant MUST include the increased Lamport timestamp in the message's `lamport_timestamp` field.
* the participant MUST determine the preceding few message IDs in the local history
and include these in an ordered list in the `causal_history` field.
The number of message IDs to include in the `causal_history` depends on the application.
We recommend a causal history of two message IDs.
* the participant MAY include a `retrieval_hint` in the `HistoryEntry`
for each message ID in the `causal_history` field.
This is an application-specific field to facilitate retrieval of messages,
e.g. from high-availability caches.
* the participant MUST include the current `bloom_filter`
state in the broadcast message.

After broadcasting a message,
the message MUST be added to the participant’s buffer
of unacknowledged outgoing messages.
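The timestamp rule above can be sketched as follows (function name is illustrative):

```python
import time


def next_lamport_timestamp(current_lamport_timestamp: int) -> int:
    # max(timeNowInMs, current + 1): the logical clock stays close to
    # wall-clock time, so new joiners order correctly against recent
    # messages without first syncing past history.
    time_now_ms = int(time.time() * 1000)
    return max(time_now_ms, current_lamport_timestamp + 1)
```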
#### Receive Message

Upon receiving a message:

* the participant SHOULD ignore the message if it has a `sender_id` matching its own.
* the participant MAY deduplicate the message by comparing its `message_id` to previously received message IDs.
* the participant MUST [review the ACK status](#review-ack-status) of messages
in its unacknowledged outgoing buffer
using the received message's causal history and bloom filter.
* if the message has a populated `content` field,
the participant MUST include the received message ID in its local bloom filter.
* the participant MUST verify that all causal dependencies are met
for the received message.
Dependencies are met if the message IDs in the `causal_history` of the received message
appear in the local history of the receiving participant.

If all dependencies are met and the message has a populated `content` field,
the participant MUST [deliver the message](#deliver-message).
If dependencies are unmet,
the participant MUST add the message to the incoming buffer of messages
with unmet causal dependencies.

#### Deliver Message

Triggered by the [Receive Message](#receive-message) procedure.

If the received message’s Lamport timestamp is greater than the participant's
local Lamport timestamp,
the participant MUST update its local Lamport timestamp to match the received message.
The participant MUST insert the message ID into its local log,
based on Lamport timestamp.
If one or more message IDs with the same Lamport timestamp already exist,
the participant MUST follow the [Resolve Conflicts](#resolve-conflicts) procedure.

#### Resolve Conflicts

Triggered by the [Deliver Message](#deliver-message) procedure.

The participant MUST order messages with the same Lamport timestamp
in ascending order of message ID.
If the message ID is implemented as a hash of the message,
this means the message with the lowest hash would precede
other messages with the same Lamport timestamp in the local log.
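The combined delivery and conflict-resolution ordering amounts to sorting the local log by the pair `(lamport_timestamp, message_id)`. A minimal sketch (names illustrative):

```python
def insert_into_log(log: list[tuple[int, str]],
                    lamport_timestamp: int, message_id: str) -> None:
    # Local log ordered by Lamport timestamp; ties broken by ascending
    # message ID, so the lowest hash precedes others at the same timestamp.
    log.append((lamport_timestamp, message_id))
    log.sort(key=lambda entry: (entry[0], entry[1]))
```

A real implementation would likely use an ordered insert rather than re-sorting, but the resulting total order is the same.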
#### Review ACK Status

Triggered by the [Receive Message](#receive-message) procedure.

For each message in the unacknowledged outgoing buffer,
based on the received `bloom_filter` and `causal_history`:

* the participant MUST mark all messages in the received `causal_history` as **acknowledged**.
* the participant MUST mark all messages included in the `bloom_filter`
as **possibly acknowledged**.
If a message appears as **possibly acknowledged** in multiple received bloom filters,
the participant MAY mark it as **acknowledged** based on probabilistic grounds,
taking into account the bloom filter size and number of hash functions.

#### Periodic Incoming Buffer Sweep

The participant MUST periodically check causal dependencies for each message
in the incoming buffer.
For each message in the incoming buffer:

* the participant MAY attempt to retrieve missing dependencies from the Store node
(high-availability cache) or other peers.
It MAY use the application-specific `retrieval_hint` in the `HistoryEntry` to facilitate retrieval.
* if all dependencies of a message are met,
the participant MUST proceed to [deliver the message](#deliver-message).

If a message's causal dependencies have failed to be met
after a predetermined amount of time,
the participant MAY mark the missing dependencies as **irretrievably lost**.

#### Periodic Outgoing Buffer Sweep

The participant MUST rebroadcast **unacknowledged** outgoing messages
after a set period.
The participant SHOULD use distinct resend periods for **unacknowledged** and
**possibly acknowledged** messages,
prioritizing **unacknowledged** messages.

#### Periodic Sync Message

For each channel of communication,
participants SHOULD periodically send sync messages to maintain state.
These sync messages:

* MUST be sent with empty content
* MUST include a Lamport timestamp increased to `max(timeNowInMs, current_lamport_timestamp + 1)`,
where `timeNowInMs` is the current epoch time in milliseconds
* MUST include causal history and a bloom filter according to regular message rules
* MUST NOT be added to the unacknowledged outgoing buffer
* MUST NOT be included in causal histories of subsequent messages
* MUST NOT be included in bloom filters
* MUST NOT be added to the local log

Since sync messages are not persisted,
they MAY have non-unique message IDs without impacting the protocol.
To avoid bursts of network activity in large groups,
a participant MAY choose to only send periodic sync messages
if no other messages have been broadcast in the channel after a random backoff period.

Participants MUST process the causal history and bloom filter of these sync messages
following the same steps as regular messages,
but MUST NOT persist the sync messages themselves.

#### Ephemeral Messages

Participants MAY choose to send short-lived messages for which no synchronization
or reliability is required.
These messages are termed _ephemeral_.

Ephemeral messages SHOULD be sent with `lamport_timestamp`, `causal_history`, and
`bloom_filter` unset.
Ephemeral messages SHOULD NOT be added to the unacknowledged outgoing buffer
after broadcast.
Upon reception,
ephemeral messages SHOULD be delivered immediately,
without buffering for causal dependencies or inclusion in the local log.
### SDS Repair (SDS-R)

SDS Repair (SDS-R) is an optional extension module for SDS,
allowing participants in a communication to collectively repair gaps in causal history (missing messages),
preferably over a limited time window.
Since SDS-R acts as coordinated rebroadcasting of missing messages,
which involves all participants of the communication,
it is most appropriate as a limited mechanism for repairing relatively recent missed dependencies.
It is not meant to replace mechanisms for long-term consistency,
such as peer-to-peer syncing or the use of a high-availability centralised cache (Store node).

#### SDS-R message fields

SDS-R adds the following fields to SDS messages:

* `sender_id` in `HistoryEntry`:
the original message sender's participant ID.
This is used to determine the group of participants who will respond to a repair request.
* `repair_request` in `Message`:
a capped list of history entries missing for the message sender
and for which it is requesting a repair.

#### SDS-R participant state

SDS-R adds the following to each participant's state:

* Outgoing **repair request buffer**:
a list of locally missing `HistoryEntry`s,
each mapped to a future request timestamp, `T_req`,
after which this participant will request a repair
if the missing dependency has not been repaired by that point.
`T_req` is computed as a pseudorandom backoff from the timestamp at which the dependency was detected missing.
[Determining `T_req`](#determine-t_req) is described below.
We RECOMMEND that the outgoing repair request buffer be ordered chronologically, in ascending order of `T_req`.

* Incoming **repair request buffer**:
a list of locally available `HistoryEntry`s
that were requested for repair by a remote participant
AND for which this participant might be an eligible responder,
each mapped to a future response timestamp, `T_resp`,
after which this participant will rebroadcast the corresponding requested `Message`
if no other participant has rebroadcast it by that point.
`T_resp` is computed as a pseudorandom backoff from the timestamp at which the repair was first requested.
[Determining `T_resp`](#determine-t_resp) is described below.
We describe below how a participant can [determine if they are an eligible responder](#determine-response-group) for a specific repair request.

* Augmented local history log:
for each message ID kept in the local log for which the participant could be a repair responder,
the full SDS `Message` MUST be cached rather than just the message ID,
in case this participant is called upon to rebroadcast the message.
We describe below how a participant can [determine if they are an eligible responder](#determine-response-group) for a specific message.

**_Note:_** The required state can likely be significantly reduced in future
by requiring that a responding participant _reconstruct_ the original `Message` when rebroadcasting,
rather than the simpler, but heavier,
requirement of caching the entire received `Message` content in local history.

#### SDS-R global state

For a specific channel (that is, within a specific SDS-controlled communication),
the following SDS-R configuration state SHOULD be common to all participants in the conversation:

* `T_min`: the _minimum_ time period to wait before a missing causal entry can be repaired.
We RECOMMEND a value of at least 30 seconds.
* `T_max`: the _maximum_ time period over which missing causal entries can be repaired.
We RECOMMEND a value of between 120 and 600 seconds.

Furthermore, to avoid a broadcast storm with multiple participants responding to a repair request,
participants in a single channel MAY be divided into discrete response groups.
Participants only respond to a repair request if they are in the response group for that request.
The global `num_response_groups` variable configures the number of response groups for the communication.
Its use is described below.
A reasonable default is one response group for every `128` participants:
if the (roughly) expected number of participants is expressed as `num_participants`, then
`num_response_groups = num_participants div 128 + 1` (integer division).
This means that if there are fewer than 128 participants in a communication,
they will all belong to the same response group.

We RECOMMEND that the global state variables `T_min`, `T_max` and `num_response_groups`
be set _statically_ for a specific SDS-R application,
based on the expected number of group participants and volume of traffic.

**_Note:_** Future versions of this protocol will recommend dynamic global SDS-R variables,
based on the current number of participants.
#### SDS-R send message

SDS-R adds the following steps when sending a message.

Before broadcasting a message:

* the participant SHOULD populate the `repair_request` field in the message
with _eligible_ entries from the outgoing repair request buffer.
An entry is eligible to be included in a `repair_request`
if its corresponding request timestamp, `T_req`, has expired (in other words,
`T_req <= current_time`).
The maximum number of repair request entries to include is up to the application.
We RECOMMEND that this quota be filled by the eligible entries
from the outgoing repair request buffer with the lowest `T_req`.
We RECOMMEND a maximum of 3 entries.
If there are no eligible entries in the buffer,
this optional field MUST be left unset.

#### SDS-R receive message

On receiving a message:

* the participant MUST remove entries matching the received message ID from its _outgoing_ repair request buffer.
This ensures that the participant does not request repairs for dependencies that have now been met.
* the participant MUST remove entries matching the received message ID from its _incoming_ repair request buffer.
This ensures that the participant does not respond to repair requests that another participant has already responded to.
* the participant SHOULD add any unmet causal dependencies to its outgoing repair request buffer,
each against a unique `T_req` timestamp for that entry.
It MUST compute the `T_req` for each such `HistoryEntry` according to the steps outlined in [_Determine T_req_](#determine-t_req).
* for each item in the `repair_request` field:
  * the participant MUST remove entries matching the repair message ID from its own outgoing repair request buffer.
This limits the number of participants that will request a common missing dependency.
  * if the participant has the requested `Message` in its local history _and_ is an eligible responder for the repair request,
it SHOULD add the request to its incoming repair request buffer against a unique `T_resp` timestamp for that entry.
It MUST compute the `T_resp` for each such repair request according to the steps outlined in [_Determine T_resp_](#determine-t_resp).
It MUST determine whether it is an eligible responder for a repair request according to the steps outlined in [_Determine response group_](#determine-response-group).
#### Determine T_req

A participant determines the repair request timestamp, `T_req`,
for a missing `HistoryEntry` as follows:

```text
T_req = current_time + hash(participant_id, message_id) % (T_max - T_min) + T_min
```

where `current_time` is the current timestamp,
`participant_id` is the participant's _own_ participant ID
(not the `sender_id` in the missing `HistoryEntry`),
`message_id` is the missing `HistoryEntry`'s message ID,
and `T_min` and `T_max` are as set out in [SDS-R global state](#sds-r-global-state).

This allows `T_req` to be pseudorandomly and linearly distributed
as a backoff of between `T_min` and `T_max` from the current time.

> **_Note:_** placing `T_req` values on an exponential backoff curve will likely be more appropriate
and is left for a future improvement.
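The `T_req` backoff can be sketched as below. SHA-256 stands in for the unspecified `hash` function, and the `T_min`/`T_max` values are examples within the recommended ranges.

```python
import hashlib

T_MIN_MS = 30_000    # example T_min: 30 s (recommended minimum)
T_MAX_MS = 120_000   # example T_max: 120 s (within the 120-600 s guidance)


def hash_to_int(*parts: str) -> int:
    # Stand-in hash; the protocol does not mandate a specific function here.
    return int.from_bytes(hashlib.sha256(":".join(parts).encode()).digest(), "big")


def t_req(current_time_ms: int, participant_id: str, message_id: str) -> int:
    # Pseudorandom backoff, linearly distributed in [T_min, T_max).
    backoff = hash_to_int(participant_id, message_id) % (T_MAX_MS - T_MIN_MS) + T_MIN_MS
    return current_time_ms + backoff
```

Because the backoff is a pure function of `(participant_id, message_id)`, a given participant always picks the same `T_req` for the same missing entry, while different participants spread their requests over the window.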
#### Determine T_resp

A participant determines the repair response timestamp, `T_resp`,
for a `HistoryEntry` that it could repair as follows:

```text
distance = hash(participant_id) XOR hash(sender_id)
T_resp = current_time + distance * hash(message_id) % T_max
```

where `current_time` is the current timestamp,
`participant_id` is the participant's _own_ (local) participant ID,
`sender_id` is the requested `HistoryEntry`'s sender ID,
`message_id` is the requested `HistoryEntry`'s message ID,
and `T_max` is as set out in [SDS-R global state](#sds-r-global-state).

We first calculate the logical `distance` between the local `participant_id` and
the original `sender_id`.
If this participant is the original sender, the `distance` will be `0`.
It follows that the original sender will have a response backoff time of `0`,
making it the most likely responder.
The `T_resp` values for other eligible participants will be pseudorandomly and
linearly distributed as a backoff of up to `T_max` from the current time.

> **_Note:_** placing `T_resp` values on an exponential backoff curve will likely be more appropriate and
is left for a future improvement.

#### Determine response group

Given a message with `sender_id` and `message_id`,
a participant with `participant_id` is in the response group for that message if

```text
hash(participant_id, message_id) % num_response_groups == hash(sender_id, message_id) % num_response_groups
```

where `num_response_groups` is as set out in [SDS-R global state](#sds-r-global-state).
This ensures that a participant is always in the response group for its own published messages.
It also allows participants to determine immediately, on first reception of a message or
a history entry, whether they are in the associated response group.
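The `T_resp` backoff and the response-group check can be sketched together. As before, SHA-256 stands in for the unspecified `hash`, and `T_max`/`num_response_groups` are example values.

```python
import hashlib

T_MAX_MS = 120_000       # example T_max
NUM_RESPONSE_GROUPS = 1  # example: fewer than 128 participants => one group


def hash_to_int(*parts: str) -> int:
    # Stand-in hash; the protocol does not mandate a specific function here.
    return int.from_bytes(hashlib.sha256(":".join(parts).encode()).digest(), "big")


def t_resp(current_time_ms: int, participant_id: str,
           sender_id: str, message_id: str) -> int:
    # The original sender has distance 0, hence a zero backoff:
    # it is the most likely responder.
    distance = hash_to_int(participant_id) ^ hash_to_int(sender_id)
    return current_time_ms + distance * hash_to_int(message_id) % T_MAX_MS


def in_response_group(participant_id: str, sender_id: str, message_id: str) -> bool:
    return (hash_to_int(participant_id, message_id) % NUM_RESPONSE_GROUPS
            == hash_to_int(sender_id, message_id) % NUM_RESPONSE_GROUPS)
```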
#### SDS-R incoming repair request buffer sweep

An SDS-R participant MUST periodically check whether any requests
in the **incoming** repair request buffer are due for a response.
For each item in the buffer,
the participant SHOULD broadcast the corresponding `Message` from local history
if its corresponding response timestamp, `T_resp`, has expired
(in other words, `T_resp <= current_time`).

#### SDS-R Periodic Sync Message

If the participant is due to send a periodic sync message,
it SHOULD send the message according to [SDS-R send message](#sds-r-send-message)
if there are any eligible items in the outgoing repair request buffer,
regardless of whether other participants have also recently broadcast a Periodic Sync message.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
|
||||
---
|
||||
title: STATUS-URL-SCHEME
|
||||
name: Status URL Scheme
|
||||
status: raw
|
||||
category: Standards Track
|
||||
tags:
|
||||
editor: Felicio Mununga <felicio@status.im>
|
||||
contributors:
|
||||
---
|
||||
|
||||
## Abstract
|
||||
|
||||
This document describes URL scheme for previewing and
|
||||
deep linking content as well as for triggering actions.
|
||||
|
||||
## Background / Rationale / Motivation
|
||||
|
||||
### Requirements
|
||||
|
||||
#### Related scope
|
||||
|
||||
##### Features
|
||||
|
||||
- Onboarding website
|
||||
- Link preview
|
||||
- Link sharing
|
||||
- Deep linking
|
||||
- Routing and navigation
|
||||
- Payment requests
|
||||
- Chat creation
|
||||
|
||||
## Wire Format Specification / Syntax
|
||||
|
||||
### Schemes
|
||||
|
||||
- Internal `status-app://`
|
||||
- External `https://` (i.e. univers/deep links)
|
||||
|
||||
### Paths
|
||||
|
||||
| Name | Url | Description |
|
||||
| ----- | ---- | ---- |
|
||||
| User profile | `/u/<encoded_data>#<user_chat_key>` | Preview/Open user profile |
|
||||
| | `/u#<user_chat_key>` | |
|
||||
| | `/u#<ens_name>` | |
|
||||
| Community | `/c/<encoded_data>#<community_chat_key>` | Preview/Open community |
|
||||
| | `/c#<community_chat_key>` | |
|
||||
| Community channel | `/cc/<encoded_data>#<community_chat_key >`| Preview/Open community channel |
|
||||
| | `/cc/<channel_uuid>#<community_chat_key>` | |
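The path shapes above share one structure: a path-type segment (`u`, `c`, or `cc`),
an optional data segment, and a fragment carrying a key or name.
A sketch of splitting such a URL into those parts, assuming the external `https://` form
(the host `example.org` and the function name are hypothetical, not part of this scheme):

```python
from urllib.parse import urlparse


def parse_status_url(url: str):
    """Split a Status-style URL into (path_type, data_segment, fragment).

    Illustrative only; real implementations must also decode <encoded_data>
    as specified in STATUS-URL-DATA.
    """
    parsed = urlparse(url)
    segments = [s for s in parsed.path.split("/") if s]
    path_type = segments[0] if segments else None        # "u", "c" or "cc"
    data_segment = segments[1] if len(segments) > 1 else None
    return path_type, data_segment, parsed.fragment
```

For example, `parse_status_url("https://example.org/u#zQkey")` yields the user-profile
path type with no data segment and the chat key in the fragment.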
<!-- # Security/Privacy Considerations

A standard track RFC in `stable` status MUST feature this section.
A standard track RFC in `raw` or `draft` status SHOULD feature this section.
Informational RFCs (in any state) may feature this section.
If there are none, this section MUST explicitly state that fact.
This section MAY contain additional relevant information,
e.g. an explanation as to why there are no security considerations
for the respective document. -->

## Discussions

- See <https://github.com/status-im/specs/pull/159>
- See <https://github.com/status-im/status-web/issues/327>

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [STATUS-URL-DATA](./url-data.md)
vac/template.md
---
slug: XX
title: XX/(WAKU2|LOGOS|CODEX|*)-TEMPLATE
name: (Waku v2 | Logos | Codex) RFC Template
status: (raw|draft|stable)
category: (Standards Track|Informational|Best Current Practice)
tags: an optional list of tags, not standard
editor: Daniel Kaiser <danielkaiser@status.im>
contributors:
---

## (Info, remove this section)

This section contains meta info about writing RFCs.
This section (including its subsections) MUST be removed.

[COSS](https://rfc.vac.dev/spec/1/) explains the Vac RFC process.

## Tags

The `tags` metadata SHOULD contain a list of tags if applicable.

Currently identified tags comprise

* `waku/core-protocol` for Waku protocol definitions (e.g. store, relay, light push),
* `waku/application` for applications built on top of the Waku protocol
  (e.g. eth-dm, toy-chat).

## Abstract

## Background / Rationale / Motivation

This section serves as an introduction providing background information and
a motivation/rationale for why the specified protocol is useful.

## Theory / Semantics

A standard track RFC in `stable` status MUST feature this section.
A standard track RFC in `raw` or `draft` status SHOULD feature this section.
This section SHOULD explain in detail how the proposed protocol works.
It may touch on the wire format where necessary for the explanation.
This section MAY also specify endpoint behaviour when receiving specific messages,
e.g. the behaviour of certain caches etc.

## Wire Format Specification / Syntax

A standard track RFC in `stable` status MUST feature this section.
A standard track RFC in `raw` or `draft` status SHOULD feature this section.
This section SHOULD NOT contain explanations of semantics and
SHOULD focus on concisely defining the wire format.
Implementations MUST adhere to these exact formats to interoperate with other implementations.
It is fine if parts of the previous section that touch on the wire format are repeated.
The purpose of this section is having a concise definition
of what an implementation sends and accepts.
Parts that are not specified here are considered implementation details.
Implementors are free to decide how to implement these details.
An optional *implementation suggestions* section may provide suggestions
on how to approach implementation details and,
if available, point to existing implementations for reference.

## Implementation Suggestions (optional)

## (Further Optional Sections)

## Security/Privacy Considerations

A standard track RFC in `stable` status MUST feature this section.
A standard track RFC in `raw` or `draft` status SHOULD feature this section.
Informational RFCs (in any state) may feature this section.
If there are none, this section MUST explicitly state that fact.
This section MAY contain additional relevant information,
e.g. an explanation as to why there are no security considerations
for the respective document.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

References MAY be subdivided into normative and informative.

### Normative

A list of references that MUST be read to fully understand and/or
implement this protocol.
See [RFC3967 Section 1.1](https://datatracker.ietf.org/doc/html/rfc3967#section-1.1).

### Informative

A list of additional references.
# Waku RFCs

Waku builds a family of privacy-preserving, censorship-resistant communication protocols for web3 applications.

Contributors can visit [Waku RFCs](https://github.com/waku-org/specs) for new Waku specifications under discussion.
---
slug: 18
title: 18/WAKU2-SWAP
name: Waku SWAP Accounting
status: deprecated
editor: Oskar Thorén <oskarth@titanproxy.com>
contributor: Ebube Ud <ebube@status.im>
---

## Abstract

This specification outlines how we do accounting and settlement based on the provision
and usage of resources, most immediately bandwidth usage and/or
storing and retrieving of Waku messages.
This enables nodes to cooperate and efficiently share resources,
and in the case of unequal nodes to settle the difference
through a relaxed payment mechanism in the form of sending cheques.

**Protocol identifier**: `/vac/waku/swap/2.0.0-beta1`

## Motivation

The Waku network makes up a service network, and
some nodes provide a useful service to other nodes.
We want to account for that, and when imbalances arise, settle them.
The core of this approach has some theoretical backing in game theory, and
variants of it have practically been proven to work in systems such as BitTorrent.
The specific model used was developed by the Swarm project
(previously part of Ethereum), and
we re-use contracts that were written for this purpose.

By using a delayed payment mechanism in the form of cheques,
a barter-like mechanism can arise, and
nodes can decide on their own policy
as opposed to being strictly tied to a specific payment scheme.
Additionally, this delayed settlement eases requirements
on the underlying network in terms of transaction speed or costs.

Theoretically, nodes providing and using resources over a long,
indefinite, period of time can be seen as an iterated form of the
[Prisoner's Dilemma (PD)](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma).
Specifically, and more intuitively,
since we have a cost and benefit profile for each provision/usage
(e.g. of Waku messages), and
the pricing can be set such that mutual cooperation is incentivized,
this can be analyzed as a form of donation game.

## Game Theory - Iterated prisoner's dilemma / donation game

What follows is a sketch of what the game looks like between two nodes.
We can look at it as a special case of iterated prisoner's dilemma called a
[Donation game](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma#Special_case:_donation_game)
where each node can cooperate with some benefit `b` at a personal cost `c`,
where `b>c`.

From A's point of view:

A/B | Cooperate | Defect
-----|----------|-------
Cooperate | b-c | -c
Defect | b | 0

What this means is that if A and B cooperate,
A gets some benefit `b` minus a cost `c`.
If A cooperates and B defects, she only incurs the cost,
and if she defects and B cooperates, A only gets the benefit.
If both defect they get neither benefit nor cost.

The generalized form of PD is:

A/B | Cooperate | Defect
-----|----------|-------
Cooperate | R | S
Defect | T | P

with R=reward, S=sucker's payoff, T=temptation, P=punishment.

And the following holds:

- `T>R>P>S`
- `2R>T+S`

In our case, this means `b>b-c>0>-c` and `2(b-c) > b-c`, which is trivially true.
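These inequalities can be checked numerically.
A small sketch, mapping the donation game onto the generalized PD payoffs;
the concrete values `b = 3`, `c = 1` are arbitrary examples (any `b > c > 0` works):

```python
def donation_game_payoffs(b: float, c: float):
    """Payoffs from A's point of view in the donation game."""
    R = b - c   # both cooperate
    S = -c      # A cooperates, B defects
    T = b       # A defects, B cooperates
    P = 0.0     # both defect
    return R, S, T, P


b, c = 3.0, 1.0  # arbitrary example with b > c > 0
R, S, T, P = donation_game_payoffs(b, c)
assert T > R > P > S    # i.e. b > b-c > 0 > -c
assert 2 * R > T + S    # i.e. 2(b-c) > b-c
```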
As this is an iterated game with no clear finishing point in most circumstances,
a tit-for-tat strategy is simple, elegant and functional.
To be more theoretically precise,
this also requires reasonable assumptions on error rate and discount parameter.
This captures notions such as
"does the perceived action reflect the intended action" and
"how much do you value future (uncertain) actions compared to previous actions".
See [Axelrod - Evolution of Cooperation (book)](https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation)
for more details.
In specific circumstances,
nodes can choose slightly different policies if there's a strong need for it.
A policy is simply how a node chooses to act given a set of circumstances.

A tit-for-tat strategy basically means:

- cooperate first (perform service/beneficial action to the other node)
- defect when the other node stops cooperating (disconnect and similar actions),
  i.e. when it stops performing according to the set parameters regarding settlement
- resume cooperation if the other node does so

This can be complemented with node selection mechanisms.

## SWAP protocol overview

We use SWAP for accounting and
settlement in conjunction with other request/reply protocols in Waku v2,
where accounting is done in a pairwise manner.
It is an acronym with several possible meanings (as defined in the Book
of Swarm), for example:

- service wanted and provided
- settle with automated payments
- send waiver as payment
- start without a penny

This approach is based on communicating payment thresholds and
sending cheques as indications of later payments.
Communicating payment thresholds MAY be done out-of-band or as part of the handshake.
Sending cheques is done once the payment threshold is hit.

See [Book of Swarm](https://web.archive.org/web/20210126130038/https://gateway.ethswarm.org/bzz/latest.bookofswarm.eth)
section 3.2. on peer-to-peer accounting etc., for more context and details.

### Accounting

Nodes perform their own accounting for each relevant peer
based on some "volume"/bandwidth metric.
For now we take this to mean the number of `WakuMessage`s exchanged.

Additionally, a price is attached to each unit.
Currently, this is simply a "karma counter", equal to 1 per message.

Each accounting balance SHOULD be kept with respect to the given protocol it is accounting for.

NOTE: This may later be complemented with other metrics,
either as part of SWAP or more likely outside of it.
For example, online time can be communicated and
attested to as a form of enhanced quality of service to inform peer selection.
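A minimal sketch of this pairwise, per-protocol accounting.
The class shape, field names, sign convention, and threshold values are illustrative
assumptions, not part of the wire format:

```python
class PeerAccount:
    """Per-peer, per-protocol karma accounting (illustrative only)."""

    def __init__(self, payment_threshold: int, disconnect_threshold: int):
        # Positive balance: the peer owes us service; negative: we owe the peer.
        self.balance = 0
        self.payment_threshold = payment_threshold
        self.disconnect_threshold = disconnect_threshold

    def on_messages_served(self, n: int) -> None:
        self.balance += n  # we provided n units (WakuMessages) of service

    def on_messages_received(self, n: int) -> None:
        self.balance -= n  # we consumed n units of service

    def should_send_cheque(self) -> bool:
        # We owe the peer at least the payment threshold.
        return -self.balance >= self.payment_threshold

    def is_out_of_bounds(self) -> bool:
        # Either threshold met, in either direction.
        return abs(self.balance) >= self.disconnect_threshold
```

A client would keep one such record per peer and protocol, log it, and
consult `should_send_cheque` and `is_out_of_bounds` when applying policy.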
### Flow

Assume we have two store nodes,
one operating mostly as a client (A) and another as a server (B).

1. Node A performs a handshake with node B.
Node B responds and both nodes communicate their payment thresholds.
2. Nodes A and B each create an accounting entry for the other peer,
keeping track of the peer and the current balance.
3. Node A issues a `HistoryRequest`, and B responds with a `HistoryResponse`.
Based on the number of `WakuMessage`s in the response,
both nodes update their accounting records.
4. When the payment threshold is reached,
Node A sends over a cheque to reach a neutral balance.
Settlement of this is currently out of scope,
but would occur through a SWAP contract (to be specified)
(mock and hard phases).
5. If the disconnect threshold is reached, Node B disconnects Node A (mock and hard phases).

Note that not all of these steps are mandatory in the initial stages,
see below for more details.
For example, the payment threshold MAY initially be set out of bounds,
and policy is only activated in the mock and hard phases.
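The flow above can be simulated end to end.
A self-contained sketch from node A's side, tracking a single karma balance;
the function name, the list-of-counts input, and the threshold value are
illustrative assumptions:

```python
def simulate_store_exchange(messages_per_response, payment_threshold):
    """Node A consumes HistoryResponses from node B (step 3) and settles (step 4).

    Returns (cheques_sent, outstanding_balance).
    """
    balance_a = 0          # how much A owes B, in karma units (1 per message)
    cheques_sent = []
    for n in messages_per_response:        # each HistoryResponse carries n messages
        balance_a += n
        if balance_a >= payment_threshold:  # step 4: cheque restores a neutral balance
            cheques_sent.append(balance_a)
            balance_a = 0
    return cheques_sent, balance_a


cheques, remainder = simulate_store_exchange([4, 3, 5, 2], payment_threshold=10)
# 4+3 = 7, then +5 = 12 triggers a cheque for 12; 2 remains outstanding
assert cheques == [12] and remainder == 2
```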
### Protobufs

We use protobuf to specify the handshake and signature.
The current protobuf is a work in progress.
This is needed for the mock and hard phases.

A handshake gives initial information about payment thresholds and
possibly other information.
A cheque is best thought of as a promise to pay at a later date.

```protobuf

message Handshake {
    bytes payment_threshold = 1;
}

// TODO Signature?
// Should probably be over the whole Cheque type
message Cheque {
    bytes beneficiary = 1;
    // TODO epoch time or block time?
    uint32 date = 2;
    // TODO ERC20 extension?
    // For now karma counter
    uint32 amount = 3;
}

```

## Incremental integration and roll-out

To incrementally integrate this into Waku v2,
we have divided the roll-out into the following phases:

- Soft - accounting only
- Mock - send mock cheques and take word for it
- Hard Test - blockchain integration and deployed to a public testnet
  (Goerli, Optimism testnet or similar)
- Hard Main - deployed to a public mainnet

An implementation MAY support any of these phases.

### Soft phase

In the soft phase only accounting is performed, without consequence for the
peers. No disconnect or sending of cheques is performed at this stage.

The SWAP protocol is performed in conjunction with another request-reply protocol
to account for its usage.
It SHOULD be done for [13/WAKU2-STORE](../../core/13/store.md)
and it MAY be done for other request/reply protocols.

A client SHOULD log accounting state per peer
and SHOULD indicate when a peer is out of bounds (either of its thresholds met).

### Mock phase

In the mock phase, we send mock cheques and send cheques/disconnect peers as appropriate.

- If a node reaches a disconnect threshold,
  which MUST be outside the payment threshold, it SHOULD disconnect the other peer.
- If a node is within payment balance, the other node SHOULD stay connected to it.
- If a node receives a valid `Cheque` it SHOULD update its internal accounting records.
- If any node behaves badly, the other node is free to disconnect and
  pick another node.
- Peer rating is out of scope of this specification.
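The mock-phase rules above can be condensed into a single policy decision.
A sketch under the assumptions that the balance is our view of what the peer owes us,
that both thresholds are positive, and that the disconnect threshold lies outside
(is greater than) the payment threshold; the function and action names are hypothetical:

```python
def mock_phase_action(balance: int, payment_threshold: int, disconnect_threshold: int) -> str:
    """Decide what to do about a peer, per the mock-phase rules (illustrative)."""
    assert disconnect_threshold > payment_threshold  # MUST be outside payment threshold
    if balance >= disconnect_threshold:
        return "disconnect"        # disconnect threshold reached
    if balance >= payment_threshold:
        return "await-cheque"      # peer should settle; stay connected for now
    return "stay-connected"        # within payment balance


assert mock_phase_action(5, payment_threshold=10, disconnect_threshold=20) == "stay-connected"
assert mock_phase_action(25, payment_threshold=10, disconnect_threshold=20) == "disconnect"
```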
### Hard phase

In the hard phase, in addition to sending cheques and activating policy, this is
done with blockchain integration on a public testnet. More details TBD.

This also includes settlements where cheques can be redeemed.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [Prisoner's Dilemma](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma)
2. [Axelrod - Evolution of Cooperation (book)](https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation)
3. [Book of Swarm](https://web.archive.org/web/20210126130038/https://gateway.ethswarm.org/bzz/latest.bookofswarm.eth)
4. [13/WAKU2-STORE](../../core/13/store.md)

<!--

General TODOs:

- Find new link for book of swarm
- Illustrate payment and disconnection thresholds (mscgen not great for this?)
- Elaborate on how accounting works with amount in the context of e.g. store
- Illustrate flow
- Specify chequebook

-->
|
||||
---
|
||||
slug: 18
|
||||
title: 18/WAKU2-SWAP
|
||||
name: Waku SWAP Accounting
|
||||
status: deprecated
|
||||
editor: Oskar Thorén <oskarth@titanproxy.com>
|
||||
contributor: Ebube Ud <ebube@status.im>
|
||||
---
|
||||
|
||||
## Abstract
|
||||
|
||||
This specification outlines how we do accounting and settlement based on the provision
|
||||
and usage of resources, most immediately bandwidth usage and/or
|
||||
storing and retrieving of Waku message.
|
||||
This enables nodes to cooperate and efficiently share resources,
|
||||
and in the case of unequal nodes to settle the difference
|
||||
through a relaxed payment mechanism in the form of sending cheques.
|
||||
|
||||
**Protocol identifier***: `/vac/waku/swap/2.0.0-beta1`
|
||||
|
||||
## Motivation
|
||||
|
||||
The Waku network makes up a service network, and
|
||||
some nodes provide a useful service to other nodes.
|
||||
We want to account for that, and when imbalances arise, settle this.
|
||||
The core of this approach has some theoretical backing in game theory, and
|
||||
variants of it have practically been proven to work in systems such as Bittorrent.
|
||||
The specific model use was developed by the Swarm project
|
||||
(previously part of Ethereum), and
|
||||
we re-use contracts that were written for this purpose.
|
||||
|
||||
By using a delayed payment mechanism in the form of cheques,
|
||||
a barter-like mechanism can arise, and
|
||||
nodes can decide on their own policy
|
||||
as opposed to be strictly tied to a specific payment scheme.
|
||||
Additionally, this delayed settlement eases requirements
|
||||
on the underlying network in terms of transaction speed or costs.
|
||||
|
||||
Theoretically, nodes providing and using resources over a long,
|
||||
indefinite, period of time can be seen as an iterated form of
|
||||
[Prisoner's Dilemma (PD)](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma).
|
||||
Specifically, and more intuitively,
|
||||
since we have a cost and benefit profile for each provision/usage
|
||||
(of Waku Message's, e.g.), and
|
||||
the pricing can be set such that mutual cooperation is incentivized,
|
||||
this can be analyzed as a form of donations game.
|
||||
|
||||
## Game Theory - Iterated prisoner's dilemma / donation game
|
||||
|
||||
What follows is a sketch of what the game looks like between two nodes.
|
||||
We can look at it as a special case of iterated prisoner's dilemma called a
|
||||
[Donation game](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma#Special_case:_donation_game)
|
||||
where each node can cooperate with some benefit `b` at a personal cost `c`,
|
||||
where `b>c`.
|
||||
|
||||
From A's point of view:
|
||||
|
||||
A/B | Cooperate | Defect
|
||||
-----|----------|-------
|
||||
Cooperate | b-c | -c
|
||||
Defect | b | 0
|
||||
|
||||
What this means is that if A and B cooperates,
|
||||
A gets some benefit `b` minus a cost `c`.
|
||||
If A cooperates and B defects she only gets the cost,
|
||||
and if she defects and B cooperates A only gets the benefit.
|
||||
If both defect they get neither benefit nor cost.
|
||||
|
||||
The generalized form of PD is:
|
||||
|
||||
A/B | Cooperate | Defect
|
||||
-----|----------|-------
|
||||
Cooperate | R | S
|
||||
Defect | T | P
|
||||
|
||||
With R=reward, S=Sucker's payoff, T=temptation, P=punishment
|
||||
|
||||
And the following holds:
|
||||
|
||||
- `T>R>P>S`
|
||||
- `2R>T+S`
|
||||
|
||||
In our case, this means `b>b-c>0>-c` and `2(b-c)> b-c` which is trivially true.
|
||||
|
||||
As this is an iterated game with no clear finishing point in most circumstances,
|
||||
a tit-for-tat strategy is simple, elegant and functional.
|
||||
To be more theoretically precise,
|
||||
this also requires reasonable assumptions on error rate and discount parameter.
|
||||
This captures notions such as
|
||||
"does the perceived action reflect the intended action" and
|
||||
"how much do you value future (uncertain) actions compared to previous actions".
|
||||
See [Axelrod - Evolution of Cooperation (book)](https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation)
|
||||
for more details.
|
||||
In specific circumstances,
|
||||
nodes can choose slightly different policies if there's a strong need for it.
|
||||
A policy is simply how a node chooses to act given a set of circumstances.
|
||||
|
||||
A tit-for-tat strategy basically means:
|
||||
|
||||
- cooperate first (perform service/beneficial action to other node)
|
||||
- defect when node stops cooperating (disconnect and similar actions),
|
||||
i.e. when it stops performing according to set parameters re settlement
|
||||
- resume cooperation if other node does so
|
||||
|
||||
This can be complemented with node selection mechanisms.
|
||||
|
||||
## SWAP protocol overview
|
||||
|
||||
We use SWAP for accounting and
|
||||
settlement in conjunction with other request/reply protocols in Waku v2,
|
||||
where accounting is done in a pairwise manner.
|
||||
It is an acronym with several possible meanings (as defined in the Book
|
||||
of Swarm), for example:
|
||||
|
||||
- service wanted and provided
|
||||
- settle with automated payments
|
||||
- send waiver as payment
|
||||
- start without a penny
|
||||
|
||||
This approach is based on communicating payment thresholds and
|
||||
sending cheques as indications of later payments.
|
||||
Communicating payment thresholds MAY be done out-of-band or as part of the handshake.
|
||||
Sending cheques is done once payment threshold is hit.
|
||||
|
||||
See [Book of Swarm](https://web.archive.org/web/20210126130038/https://gateway.ethswarm.org/bzz/latest.bookofswarm.eth)
|
||||
section 3.2. on Peer-to-peer accounting etc., for more context and details.
|
||||
|
||||
### Accounting
|
||||
|
||||
Nodes perform their own accounting for each relevant peer
|
||||
based on some "volume"/bandwidth metric.
|
||||
For now we take this to mean the number of `WakuMessage`s exchanged.
|
||||
|
||||
Additionally, a price is attached to each unit.
|
||||
Currently, this is simply a "karma counter" and equal to 1 per message.
|
||||
|
||||
Each accounting balance SHOULD be w.r.t. to a given protocol it is accounting for.
|
||||
|
||||
NOTE: This may later be complemented with other metrics,
|
||||
either as part of SWAP or more likely outside of it.
|
||||
For example, online time can be communicated and
|
||||
attested to as a form of enhanced quality of service to inform peer selection.
|
||||
|
||||
### Flow
|
||||
|
||||
Assuming we have two store nodes,
|
||||
one operating mostly as a client (A) and another as server (B).
|
||||
|
||||
1. Node A performs a handshake with B node.
|
||||
B node responds and both nodes communicate their payment threshold.
|
||||
2. Node A and B creates an accounting entry for the other peer,
|
||||
keep track of peer and current balance.
|
||||
3. Node A issues a `HistoryRequest`, and B responds with a `HistoryResponse`.
|
||||
Based on the number of WakuMessages in the response,
|
||||
both nodes update their accounting records.
|
||||
4. When payment threshold is reached,
|
||||
Node A sends over a cheque to reach a neutral balance.
|
||||
Settlement of this is currently out of scope,
|
||||
but would occur through a SWAP contract (to be specified).
|
||||
(mock and hard phase).
|
||||
5. If disconnect threshold is reached, Node B disconnects Node A (mock and hard phase).
|
||||
|
||||
Note that not all of these steps are mandatory in initial stages,
|
||||
see below for more details.
|
||||
For example, the payment threshold MAY initially be set out of bounds,
|
||||
and policy is only activated in the mock and hard phase.

### Protobufs

We use protobuf to specify the handshake and signature.
The current protobuf is a work in progress;
it is needed for the mock and hard phases.

A handshake gives initial information about payment thresholds and
possibly other information.
A cheque is best thought of as a promise to pay at a later date.

```protobuf

message Handshake {
  bytes payment_threshold = 1;
}

// TODO Signature?
// Should probably be over the whole Cheque type
message Cheque {
  bytes beneficiary = 1;
  // TODO epoch time or block time?
  uint32 date = 2;
  // TODO ERC20 extension?
  // For now karma counter
  uint32 amount = 3;
}

```
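
As a sketch of the open signature TODO above, a signature could be computed over the whole serialized `Cheque`. The helpers below are hypothetical: a simplified serialization stands in for protobuf, and a plain SHA-256 digest stands in for a real signature scheme (e.g. a secp256k1 signature with the issuer's private key):

```python
# Hypothetical sketch: a "signature" computed over the whole serialized Cheque.
# SHA-256 here is only a stand-in for an actual signature scheme.
import hashlib
import struct

def serialize_cheque(beneficiary: bytes, date: int, amount: int) -> bytes:
    # Simplified, deterministic stand-in for protobuf serialization:
    # beneficiary bytes followed by date and amount as big-endian uint32.
    return beneficiary + struct.pack(">II", date, amount)

def sign_cheque(beneficiary: bytes, date: int, amount: int) -> bytes:
    payload = serialize_cheque(beneficiary, date, amount)
    return hashlib.sha256(payload).digest()
```

Signing the serialized message as a whole means any change to beneficiary, date, or amount invalidates the signature.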

## Incremental integration and roll-out

To incrementally integrate this into Waku v2,
we have divided the roll-out into three phases:

- Soft - accounting only
- Mock - send mock cheques and take each other's word for it
- Hard Test - blockchain integration, deployed to a public testnet
(Goerli, Optimism testnet or similar)
- Hard Main - deployed to a public mainnet

An implementation MAY support any of these phases.

### Soft phase

In the soft phase only accounting is performed, without consequence for the
peers. No disconnection or sending of cheques is performed at this stage.

The SWAP protocol is performed in conjunction with another request/reply protocol
to account for its usage.
It SHOULD be done for [13/WAKU2-STORE](../../core/13/store.md)
and it MAY be done for other request/reply protocols.

A client SHOULD log accounting state per peer
and SHOULD indicate when a peer is out of bounds (either of its thresholds met).

### Mock phase

In the mock phase, we send mock cheques and send cheques/disconnect peers as appropriate.

- If a node reaches a disconnect threshold,
which MUST be outside the payment threshold, it SHOULD disconnect the other peer.
- If a node is within the payment balance, the other node SHOULD stay connected to it.
- If a node receives a valid cheque, it SHOULD update its internal accounting records.
- If any node behaves badly, the other node is free to disconnect and
pick another node.
- Peer rating is out of scope of this specification.
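
The mock-phase policy above can be sketched as follows (names hypothetical; balances in karma units, tracked from the serving node's side):

```python
# Sketch of mock-phase SWAP policy for a node tracking what a peer owes it.
# owed > 0 means the peer owes this node karma units.
def policy_action(owed: int, payment_threshold: int, disconnect_threshold: int) -> str:
    # The disconnect threshold MUST be outside the payment threshold.
    assert disconnect_threshold > payment_threshold
    if owed >= disconnect_threshold:
        return "disconnect"      # peer exceeded the disconnect threshold
    if owed >= payment_threshold:
        return "expect-cheque"   # peer should settle, but stays connected
    return "ok"                  # within payment balance: stay connected
```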

### Hard phase

In the hard phase, in addition to sending cheques and activating policy, this is
done with blockchain integration on a public testnet. More details TBD.

This also includes settlement, where cheques can be redeemed.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [Prisoner's Dilemma](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma)
2. [Axelrod - Evolution of Cooperation (book)](https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation)
3. [Book of Swarm](https://web.archive.org/web/20210126130038/https://gateway.ethswarm.org/bzz/latest.bookofswarm.eth)
4. [13/WAKU2-STORE](../../core/13/store.md)

<!--

General TODOs:

- Find new link for Book of Swarm
- Illustrate payment and disconnection thresholds (mscgen not great for this?)
- Elaborate on how accounting works with amount in the context of e.g. store
- Illustrate flow
- Specify chequebook

-->

@@ -1,7 +1,6 @@

# Deprecated RFCs

Deprecated specifications are no longer used in Waku products.
This subdirectory is for archive purposes and
should not be used in production-ready implementations.
Visit [Waku RFCs](https://github.com/waku-org/specs)
for new Waku specifications under discussion.
# Deprecated RFCs

Deprecated specifications are no longer used in Waku products.
This subdirectory is for archive purposes and
should not be used in production-ready implementations.
Visit [Waku RFCs](https://github.com/waku-org/specs) for new Waku specifications under discussion.

@@ -1,55 +1,55 @@

---
slug: 22
title: 22/TOY-CHAT
name: Waku v2 Toy Chat
status: draft
tags: waku/application
editor: Franck Royer <franck@status.im>
contributors:
- Hanno Cornelius <hanno@status.im>
---

**Content Topic**: `/toy-chat/2/huilong/proto`.

This specification describes a toy chat example using Waku v2.
This protocol is mainly used to:

1. Dogfood Waku v2,
2. Show an example of how to use Waku v2.

Currently, all main Waku v2 implementations support the toy chat protocol:
[nim-waku](https://github.com/status-im/nim-waku/blob/master/examples/v2/chat2.nim),
js-waku ([NodeJS](https://github.com/status-im/js-waku/tree/main/examples/cli-chat)
and [web](https://github.com/status-im/js-waku/tree/main/examples/web-chat))
and [go-waku](https://github.com/status-im/go-waku/tree/master/examples/chat2).

Note that this is completely separate from the protocol the Status app
uses for its chat functionality.

## Design

The chat protocol enables sending and receiving messages in a chat room.
There is currently only one chat room, which is tied to the content topic.
The messages SHOULD NOT be encrypted.

The `contentTopic` MUST be set to `/toy-chat/2/huilong/proto`.

## Payloads

```protobuf
syntax = "proto3";

message Chat2Message {
  uint64 timestamp = 1;
  string nick = 2;
  bytes payload = 3;
}
```

- `timestamp`: The time at which the message was sent, in Unix epoch seconds,
- `nick`: The nickname of the user sending the message,
- `payload`: The text of the message, UTF-8 encoded.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
@@ -1,208 +1,185 @@

---
slug: 23
title: 23/WAKU2-TOPICS
name: Waku v2 Topic Usage Recommendations
status: draft
category: Informational
editor: Oskar Thoren <oskarth@titanproxy.com>
contributors:
- Hanno Cornelius <hanno@status.im>
- Daniel Kaiser <danielkaiser@status.im>
- Filip Dimitrijevic <filip@status.im>
---

This document outlines recommended usage of topic names in Waku v2.
In the [10/WAKU2 spec](/waku/standards/core/10/waku2.md) there are two types of topics:

- Pubsub topics, used for routing
- Content topics, used for content-based filtering

## Pubsub Topics

Pubsub topics are used for routing of messages (see [11/WAKU2-RELAY](/waku/standards/core/11/relay.md)),
and can be named implicitly by Waku sharding (see [RELAY-SHARDING](https://github.com/waku-org/specs/blob/master/standards/core/relay-sharding.md)).
This document comprises recommendations for explicitly naming pubsub topics
(e.g. when choosing *named sharding* as specified in [RELAY-SHARDING](https://github.com/waku-org/specs/blob/master/standards/core/relay-sharding.md)).

### Pubsub Topic Format

Pubsub topics SHOULD have the following structure:

`/waku/2/{topic-name}`

This namespaced structure makes compatibility, discoverability,
and automatic handling of new topics easier.

The first two parts indicate:

1) it relates to the Waku protocol domain, and
2) the version is 2.

If applicable, it is RECOMMENDED to structure `{topic-name}`
in a hierarchical way as well.

> *Note*: In previous versions of this document, the structure was `/waku/2/{topic-name}/{encoding}`.
The now deprecated `/{encoding}` was always set to `/proto`,
which indicated that the [data field](/waku/standards/core/11/relay.md#protobuf-definition)
in pubsub is serialized/encoded as protobuf.
The inspiration for this format was taken from the
[Ethereum 2 P2P spec](https://github.com/ethereum/eth2.0-specs/blob/dev/specs/phase0/p2p-interface.md#topics-and-messages).
However, because the payload of messages transmitted over [11/WAKU2-RELAY](/waku/standards/core/11/relay.md)
must be a [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md),
which specifies the wire format as protobuf, `/proto` is the only valid encoding.
This makes the `/proto` indication obsolete.
The encoding of the `payload` field of a WakuMessage
is indicated by the `/{encoding}` part of the content topic name.
Specifying an encoding is only significant for the actual payload/data field.
Waku preserves this option by allowing an encoding to be specified
for the WakuMessage payload field as part of the content topic name.

### Default PubSub Topic

The Waku v2 default pubsub topic is:

`/waku/2/default-waku/proto`

The `{topic-name}` part is `default-waku/proto`,
which indicates it is the default topic for exchanging WakuMessages;
`/proto` remains for backwards compatibility.

### Application Specific Names

Larger apps can segregate their pubsub meshes using topics named like:

```text
/waku/2/status/
/waku/2/walletconnect/
```

This indicates that these networks carry WakuMessages,
but for completely different domains.

### Named Topic Sharding Example

The following is an example of named sharding, as specified in [RELAY-SHARDING](https://github.com/waku-org/specs/blob/master/standards/core/relay-sharding.md).

```text
waku/2/waku-9_shard-0/
...
waku/2/waku-9_shard-9/
```

This indicates explicitly that the network traffic has been partitioned into 10 buckets.

## Content Topics

The other type of topic that exists in Waku v2 is the content topic.
This is used for content-based filtering.
See the [14/WAKU2-MESSAGE spec](/waku/standards/core/14/message.md)
for where this is specified.
Note that this doesn't impact routing of messages between relaying nodes,
but it does impact using request/reply protocols such as
[12/WAKU2-FILTER](/waku/standards/core/12/filter.md) and
[13/WAKU2-STORE](/waku/standards/core/13/store.md).

This is especially useful for nodes that have limited bandwidth,
and only want to pull down messages that match a given content topic.

Since all messages are relayed using the relay protocol regardless of content topic,
you MAY use any content topic you wish
without impacting how messages are relayed.

### Content Topic Format

The format for content topics is as follows:

`/{application-name}/{version-of-the-application}/{content-topic-name}/{encoding}`

The name of a content topic is application-specific.
As an example, here's the content topic used for an upcoming testnet:

`/toychat/2/huilong/proto`
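
A minimal sketch of building and parsing this four-part format (helper names are illustrative; a hierarchical `{content-topic-name}` containing further slashes would need a more permissive parser):

```python
# Illustrative helpers for the 4-part content topic format
# /{application-name}/{version}/{content-topic-name}/{encoding}.
def build_content_topic(app: str, version: str, name: str, encoding: str) -> str:
    return f"/{app}/{version}/{name}/{encoding}"

def parse_content_topic(topic: str) -> dict:
    parts = topic.split("/")
    # A leading "/" yields an empty first element; expect exactly 4 fields after it.
    if len(parts) != 5 or parts[0] != "":
        raise ValueError(f"malformed content topic: {topic}")
    return {"app": parts[1], "version": parts[2],
            "name": parts[3], "encoding": parts[4]}
```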

### Content Topic Naming Recommendations

Application names SHOULD be unique to avoid conflicts with other protocols.
The application version (if applicable) SHOULD be specified in the version field.
The `{content-topic-name}` portion of the content topic is up to the application,
and depends on the problem domain.
It can be hierarchical, for instance to separate content, or
to indicate different bandwidth and privacy guarantees.
The encoding field indicates the serialization/encoding scheme
for the [WakuMessage payload](/waku/standards/core/14/message.md#payloads) field.

### Content Topic usage guidelines

Applications SHOULD be mindful when designing/using content topics
so that a bloat of content topics does not occur.
Content-topic bloat causes performance degradation in the Store
and Filter protocols when retrieving messages.

Store queries have been observed to be considerably slower
(e.g. a doubling of response time when the content-topic count is increased from 10 to 100)
when many content topics are involved in a single query.
Similarly, the number of filter subscriptions increases,
which increases complexity on the client side to maintain
and manage these subscriptions.

Applications SHOULD analyze the query/filter criteria for fetching messages from the network
and select/design content topics to match such filter criteria.
For example, even though applications may want to segregate messages into different sets
based on some application logic,
if those sets of messages are always fetched/queried together from the network,
then all those messages SHOULD use a single content topic.

## Differences with Waku v1

In [5/WAKU1](/waku/deprecated/5/waku0.md) there is no actual routing.
All messages are sent to all other nodes.
This means that we are implicitly using the same pubsub topic,
which would be something like:

```text
/waku/1/default-waku/rlp
```

Topics in Waku v1 correspond to content topics in Waku v2.

### Bridging Waku v1 and Waku v2

To bridge Waku v1 and Waku v2 we have [15/WAKU-BRIDGE](/waku/standards/core/15/bridge.md).
For mapping Waku v1 topics to Waku v2 content topics,
the following structure for the content topic SHOULD be used:

```text
/waku/1/<4bytes-waku-v1-topic>/rfc26
```

The `<4bytes-waku-v1-topic>` SHOULD be the lowercase hex representation
of the 4-byte Waku v1 topic.
A `0x` prefix SHOULD be used.
`/rfc26` indicates that the bridged content is encoded according to RFC [26/WAKU2-PAYLOAD](/waku/standards/application/26/payload.md).
See [15/WAKU-BRIDGE](/waku/standards/core/15/bridge.md)
for a description of the bridged fields.

This creates a direct mapping between the two protocols.
For example:

```text
/waku/1/0x007f80ff/rfc26
```
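
The mapping above can be sketched as a one-line helper (the function name is illustrative):

```python
# Sketch: map a 4-byte Waku v1 topic to its bridged Waku v2 content topic,
# using the lowercase 0x-prefixed hex representation described above.
def bridge_content_topic(v1_topic: bytes) -> str:
    assert len(v1_topic) == 4  # Waku v1 topics are exactly 4 bytes
    return f"/waku/1/0x{v1_topic.hex()}/rfc26"
```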

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [10/WAKU2 spec](/waku/standards/core/10/waku2.md)
- [11/WAKU2-RELAY](/waku/standards/core/11/relay.md)
- [RELAY-SHARDING](https://github.com/waku-org/specs/blob/master/standards/core/relay-sharding.md)
- [Ethereum 2 P2P spec](https://github.com/ethereum/eth2.0-specs/blob/dev/specs/phase0/p2p-interface.md#topics-and-messages)
- [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md)
- [12/WAKU2-FILTER](/waku/standards/core/12/filter.md)
- [13/WAKU2-STORE](/waku/standards/core/13/store.md)
- [5/WAKU1](/waku/deprecated/5/waku0.md)
- [15/WAKU-BRIDGE](/waku/standards/core/15/bridge.md)
- [26/WAKU2-PAYLOAD](/waku/standards/application/26/payload.md)
---
slug: 23
title: 23/WAKU2-TOPICS
name: Waku v2 Topic Usage Recommendations
status: draft
category: Informational
editor: Oskar Thoren <oskarth@titanproxy.com>
contributors:
- Hanno Cornelius <hanno@status.im>
- Daniel Kaiser <danielkaiser@status.im>
---

This document outlines recommended usage of topic names in Waku v2.
In the [10/WAKU2 spec](../../standards/core/10/waku2.md) there are two types of topics:

- Pubsub topics, used for routing
- Content topics, used for content-based filtering

## Pubsub Topics

Pubsub topics are used for routing of messages (see [11/WAKU2-RELAY](../../standards/core/11/relay.md)),
and can be named implicitly by Waku sharding (see [RELAY-SHARDING](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/relay-sharding.md)).
This document comprises recommendations for explicitly naming pubsub topics
(e.g. when choosing *named sharding* as specified in [RELAY-SHARDING](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/relay-sharding.md)).

### Pubsub Topic Format

Pubsub topics SHOULD have the following structure:

`/waku/2/{topic-name}`

This namespaced structure makes compatibility, discoverability,
and automatic handling of new topics easier.

The first two parts indicate:

1) it relates to the Waku protocol domain, and
2) the version is 2.

If applicable, it is RECOMMENDED to structure `{topic-name}`
in a hierarchical way as well.

> *Note*: In previous versions of this document, the structure was `/waku/2/{topic-name}/{encoding}`.
The now deprecated `/{encoding}` was always set to `/proto`,
which indicated that the [data field](../../standards/core/11/relay.md#protobuf-definition)
in pubsub is serialized/encoded as protobuf.
The inspiration for this format was taken from the
[Ethereum 2 P2P spec](https://github.com/ethereum/eth2.0-specs/blob/dev/specs/phase0/p2p-interface.md#topics-and-messages).
However, because the payload of messages transmitted over [11/WAKU2-RELAY](../../standards/core/11/relay.md)
must be a [14/WAKU2-MESSAGE](../../standards/core/14/message.md),
which specifies the wire format as protobuf, `/proto` is the only valid encoding.
This makes the `/proto` indication obsolete.
The encoding of the `payload` field of a WakuMessage
is indicated by the `/{encoding}` part of the content topic name.
Specifying an encoding is only significant for the actual payload/data field.
Waku preserves this option by allowing an encoding to be specified
for the WakuMessage payload field as part of the content topic name.

### Default PubSub Topic

The Waku v2 default pubsub topic is:

`/waku/2/default-waku/proto`

The `{topic-name}` part is `default-waku/proto`,
which indicates it is the default topic for exchanging WakuMessages;
`/proto` remains for backwards compatibility.

### Application Specific Names

Larger apps can segregate their pubsub meshes using topics named like:

```text
/waku/2/status/
/waku/2/walletconnect/
```

This indicates that these networks carry WakuMessages,
but for completely different domains.

### Named Topic Sharding Example

The following is an example of named sharding, as specified in [RELAY-SHARDING](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/relay-sharding.md).

```text
waku/2/waku-9_shard-0/
...
waku/2/waku-9_shard-9/
```

This indicates explicitly that the network traffic has been partitioned into 10 buckets.

## Content Topics

The other type of topic that exists in Waku v2 is the content topic.
This is used for content-based filtering.
See the [14/WAKU2-MESSAGE spec](../../standards/core/14/message.md)
for where this is specified.
Note that this doesn't impact routing of messages between relaying nodes,
but it does impact how request/reply protocols such as
[12/WAKU2-FILTER](../../standards/core/12/filter.md) and
[13/WAKU2-STORE](../../standards/core/13/store.md) are used.

This is especially useful for nodes that have limited bandwidth,
and only want to pull down messages that match a given content topic.

Since all messages are relayed using the relay protocol regardless of content topic,
you MAY use any content topic you wish without impacting how messages are relayed.

### Content Topic Format

The format for content topics is as follows:

`/{application-name}/{version-of-the-application}/{content-topic-name}/{encoding}`

The name of a content topic is application-specific.
As an example, here's the content topic used for an upcoming testnet:

`/toychat/2/huilong/proto`

### Content Topic Naming Recommendations

Application names should be unique to avoid conflicts with other protocols.
Applications should specify their version (if applicable) in the version field.
The `{content-topic-name}` portion of the content topic is up to the application,
and depends on the problem domain.
It can be hierarchical, for instance to separate content, or
to indicate different bandwidth and privacy guarantees.
The encoding field indicates the serialization/encoding scheme
for the [WakuMessage payload](../../standards/core/14/message.md#payloads) field.

## Differences with Waku v1

In [5/WAKU1](../../deprecated/5/waku0.md) there is no actual routing.
All messages are sent to all other nodes.
This means that we are implicitly using the same pubsub topic,
which would be something like:

```text
/waku/1/default-waku/rlp
```

Topics in Waku v1 correspond to content topics in Waku v2.

### Bridging Waku v1 and Waku v2

To bridge Waku v1 and Waku v2 we have [15/WAKU-BRIDGE](../../standards/core/15/bridge.md).
For mapping Waku v1 topics to Waku v2 content topics,
the following structure for the content topic SHOULD be used:

```text
/waku/1/<4bytes-waku-v1-topic>/rfc26
```

The `<4bytes-waku-v1-topic>` SHOULD be the lowercase hex representation
of the 4-byte Waku v1 topic.
A `0x` prefix SHOULD be used.
`/rfc26` indicates that the bridged content is encoded according to RFC [26/WAKU2-PAYLOAD](../../standards/application/26/payload.md).
See [15/WAKU-BRIDGE](../../standards/core/15/bridge.md) for a description
of the bridged fields.

This creates a direct mapping between the two protocols.
For example:

```text
/waku/1/0x007f80ff/rfc26
```

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [10/WAKU2 spec](../../standards/core/10/waku2.md)
- [11/WAKU2-RELAY](../../standards/core/11/relay.md)
- [RELAY-SHARDING](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/relay-sharding.md)
- [Ethereum 2 P2P spec](https://github.com/ethereum/eth2.0-specs/blob/dev/specs/phase0/p2p-interface.md#topics-and-messages)
- [14/WAKU2-MESSAGE](../../standards/core/14/message.md)
- [12/WAKU2-FILTER](../../standards/core/12/filter.md)
- [13/WAKU2-STORE](../../standards/core/13/store.md)
- [5/WAKU1](../../deprecated/5/waku0.md)
- [15/WAKU-BRIDGE](../../standards/core/15/bridge.md)
- [26/WAKU2-PAYLOAD](../../standards/application/26/payload.md)

@@ -1,121 +1,119 @@

---
slug: 27
title: 27/WAKU2-PEERS
name: Waku v2 Client Peer Management Recommendations
status: draft
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Filip Dimitrijevic <filip@status.im>
---

`27/WAKU2-PEERS` describes a recommended minimal set of peer storage and
peer management features to be implemented by Waku v2 clients.

In this context, peer _storage_ refers to a client's ability to keep track of discovered
or statically-configured peers and their metadata.
It also deals with matters of peer _persistence_,
or the ability to store peer data on disk to resume state after a client restart.

Peer _management_ is a closely related concept and
refers to the set of actions a client MAY choose to perform
based on its knowledge of its connected peers,
e.g. triggering reconnects/disconnects,
keeping certain connections alive, etc.

## Peer store

The peer store SHOULD be an in-memory data structure
where information about discovered or configured peers is stored.
It SHOULD be considered the main source of truth
for peer-related information in a Waku v2 client.
Clients MAY choose to persist this store on disk.

### Tracked peer metadata

It is RECOMMENDED that a Waku v2 client tracks at least the following information
about each of its peers in a peer store:

| Metadata | Description |
| --- | --- |
| _Public key_ | The public key for this peer. This is related to the libp2p [`Peer ID`](https://docs.libp2p.io/concepts/peer-id/). |
| _Addresses_ | Known transport layer [`multiaddrs`](https://docs.libp2p.io/concepts/addressing/) for this peer. |
| _Protocols_ | The libp2p [`protocol IDs`](https://docs.libp2p.io/concepts/protocols/#protocol-ids) supported by this peer. This can be used to track the client's connectivity to peers supporting different Waku v2 protocols, e.g. [`11/WAKU2-RELAY`](../../standards/core/11/relay.md) or [`13/WAKU2-STORE`](../../standards/core/13/store.md). |
| _Connectivity_ | Tracks the peer's current connectedness state. See [**Peer connectivity**](#peer-connectivity) below. |
| _Disconnect time_ | The timestamp at which this peer last disconnected. This becomes important when managing [peer reconnections](#reconnecting-peers). |

### Peer connectivity

A Waku v2 client SHOULD track _at least_ the following connectivity states
for each of its peers:

- **`NotConnected`**: The peer has been discovered or configured on this client,
but no attempt has yet been made to connect to this peer.
This is the default state for a new peer.
- **`CannotConnect`**: The client attempted to connect to this peer, but failed.
- **`CanConnect`**: The client was recently connected to this peer and
disconnected gracefully.
- **`Connected`**: The client is actively connected to this peer.

This list does not preclude clients from tracking more advanced connectivity metadata,
such as a peer's blacklist status (see [`18/WAKU2-SWAP`](/waku/deprecated/18/swap.md)).
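
The minimal states and tracked metadata above can be sketched as follows (a minimal illustration, not any client's actual data model):

```python
# Sketch of the minimal connectivity states and a peer-store entry.
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Connectedness(Enum):
    NOT_CONNECTED = "NotConnected"    # discovered/configured, no attempt yet (default)
    CANNOT_CONNECT = "CannotConnect"  # last connection attempt failed
    CAN_CONNECT = "CanConnect"        # recently connected, disconnected gracefully
    CONNECTED = "Connected"           # actively connected

@dataclass
class StoredPeer:
    public_key: bytes
    addresses: List[str]   # known multiaddrs
    protocols: List[str]   # supported libp2p protocol IDs
    connectedness: Connectedness = Connectedness.NOT_CONNECTED
    disconnect_time: Optional[float] = None  # unix timestamp of last disconnect
```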

### Persistence

A Waku v2 client MAY choose to persist peers across restarts,
using any offline storage technology, such as an on-disk database.
Peer persistence MAY be used to resume peer connections after a client restart.

## Peer management

Waku v2 clients will have different requirements
when it comes to managing the peers tracked in the [**peer store**](#peer-store).
It is RECOMMENDED that clients support:

- [automatic reconnection](#reconnecting-peers) to peers under certain conditions
- [connection keep-alive](#connection-keep-alive)

### Reconnecting peers

A Waku v2 client MAY choose to reconnect to previously connected,
managed peers under certain conditions.
Such conditions include, but are not limited to:

- Reconnecting to all `relay`-capable peers after a client restart.
This will require [persistent peer storage](#persistence).

If a client chooses to automatically reconnect to previous peers,
it MUST respect the
[backoff period](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#prune-backoff-and-peer-exchange)
specified for GossipSub v1.1 before attempting to reconnect.
This requires keeping track of the [last time each peer was disconnected](#tracked-peer-metadata).
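
A sketch of the backoff check above, combining the tracked disconnect time with a backoff window (the one-minute value reflects GossipSub v1.1's default `PruneBackoff`; treat the exact figure as an assumption here):

```python
# Sketch: respect a prune-backoff window before reconnecting to a peer.
PRUNE_BACKOFF_SECONDS = 60  # assumed default; GossipSub v1.1 PruneBackoff

def may_reconnect(disconnect_time, now, backoff=PRUNE_BACKOFF_SECONDS):
    """True once the backoff period since the last disconnect has elapsed."""
    if disconnect_time is None:  # no recorded disconnect: no backoff applies
        return True
    return now - disconnect_time >= backoff
```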

### Connection keep-alive

A Waku v2 client MAY choose to implement a keep-alive mechanism for certain peers.
If a client chooses to implement keep-alive on a connection,
it SHOULD do so by sending periodic [libp2p pings](https://docs.libp2p.io/concepts/fundamentals/protocols/#ping)
as per the `10/WAKU2` [client recommendations](/waku/standards/core/10/waku2.md#recommendations-for-clients).
The period between pings SHOULD be _at most_ 50%
of the shortest idle connection timeout for the specific client and transport.
For example, idle TCP connections often time out after 10 to 15 minutes.

> **Implementation note:**
the `nim-waku` client currently implements a keep-alive mechanism every `5 minutes`,
in response to a TCP connection timeout of `10 minutes`.
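
The 50% rule above can be expressed as a trivial helper (illustrative only):

```python
# Sketch: keep-alive ping period as at most 50% of the shortest
# idle connection timeout for the client/transport.
def ping_period(shortest_idle_timeout_seconds: float) -> float:
    return shortest_idle_timeout_seconds * 0.5
```

For a 10-minute TCP idle timeout this gives a 5-minute ping period, consistent with the `nim-waku` behaviour noted above.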

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [`Peer ID`](https://docs.libp2p.io/concepts/peer-id/)
- [`multiaddrs`](https://docs.libp2p.io/concepts/addressing/)
- [`protocol IDs`](https://docs.libp2p.io/concepts/protocols/#protocol-ids)
- [`11/WAKU2-RELAY`](/waku/standards/core/11/relay.md)
- [`13/WAKU2-STORE`](/waku/standards/core/13/store.md)
- [`18/WAKU2-SWAP`](/waku/deprecated/18/swap.md)
- [backoff period](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#prune-backoff-and-peer-exchange)
- [libp2p pings](https://docs.libp2p.io/concepts/fundamentals/protocols/#ping)
- [`10/WAKU2` client recommendations](/waku/standards/core/10/waku2.md#recommendations-for-clients)
|
||||
---
slug: 27
title: 27/WAKU2-PEERS
name: Waku v2 Client Peer Management Recommendations
status: draft
editor: Hanno Cornelius <hanno@status.im>
contributors:
---

`27/WAKU2-PEERS` describes a recommended minimal set of peer storage and
peer management features to be implemented by Waku v2 clients.

In this context, peer _storage_ refers to a client's ability to keep track of discovered
or statically-configured peers and their metadata.
It also deals with matters of peer _persistence_,
or the ability to store peer data on disk to resume state after a client restart.

Peer _management_ is a closely related concept and
refers to the set of actions a client MAY choose to perform
based on its knowledge of its connected peers,
e.g. triggering reconnects/disconnects, keeping certain connections alive, etc.

## Peer store

The peer store SHOULD be an in-memory data structure
where information about discovered or configured peers is stored.
It SHOULD be considered the main source of truth
for peer-related information in a Waku v2 client.
Clients MAY choose to persist this store on-disk.

### Tracked peer metadata

It is RECOMMENDED that a Waku v2 client tracks at least the following information
about each of its peers in a peer store:

| Metadata | Description |
| --- | --- |
| _Public key_ | The public key for this peer. This is related to the libp2p [`Peer ID`](https://docs.libp2p.io/concepts/peer-id/). |
| _Addresses_ | Known transport layer [`multiaddrs`](https://docs.libp2p.io/concepts/addressing/) for this peer. |
| _Protocols_ | The libp2p [`protocol IDs`](https://docs.libp2p.io/concepts/protocols/#protocol-ids) supported by this peer. This can be used to track the client's connectivity to peers supporting different Waku v2 protocols, e.g. [`11/WAKU2-RELAY`](../../standards/core/11/relay.md) or [`13/WAKU2-STORE`](../../standards/core/13/store.md). |
| _Connectivity_ | Tracks the peer's current connectedness state. See [**Peer connectivity**](#peer-connectivity) below. |
| _Disconnect time_ | The timestamp at which this peer last disconnected. This becomes important when managing [peer reconnections](#reconnecting-peers). |
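A minimal sketch of such a peer store entry, with the RECOMMENDED metadata fields (the field and function names are illustrative, not taken from any particular client):

```js
// Illustrative in-memory peer store entry tracking the RECOMMENDED
// metadata fields; none of these names come from a real Waku API.
function makePeerRecord(publicKey, addresses, protocols) {
  return {
    publicKey,                    // the libp2p Peer ID is derived from this
    addresses,                    // known multiaddrs for this peer
    protocols,                    // supported libp2p protocol IDs
    connectivity: "NotConnected", // default state for a new peer
    disconnectTime: null,         // timestamp of last disconnect, if any
  };
}

const record = makePeerRecord(
  "0x04ab…",                      // truncated for illustration
  ["/ip4/127.0.0.1/tcp/60000"],
  ["/vac/waku/relay/2.0.0"],
);
```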

### Peer connectivity

A Waku v2 client SHOULD track _at least_ the following connectivity states
for each of its peers:

- **`NotConnected`**: The peer has been discovered or configured on this client,
but no attempt has yet been made to connect to this peer.
This is the default state for a new peer.
- **`CannotConnect`**: The client attempted to connect to this peer, but failed.
- **`CanConnect`**: The client was recently connected to this peer and
disconnected gracefully.
- **`Connected`**: The client is actively connected to this peer.

This list does not preclude clients from tracking more advanced connectivity metadata,
such as a peer's blacklist status (see [`18/WAKU2-SWAP`](../../standards/application/18/swap.md)).
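The four states, and a typical transition on a graceful disconnect, could be sketched as follows (the helper name is illustrative):

```js
// The four RECOMMENDED connectivity states.
const Connectivity = Object.freeze({
  NotConnected: "NotConnected",
  CannotConnect: "CannotConnect",
  CanConnect: "CanConnect",
  Connected: "Connected",
});

// Illustrative state transition when a connected peer disconnects
// gracefully: record the disconnect time for later backoff checks.
function onGracefulDisconnect(peer, nowMs) {
  peer.connectivity = Connectivity.CanConnect;
  peer.disconnectTime = nowMs;
  return peer;
}
```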

### Persistence

A Waku v2 client MAY choose to persist peers across restarts,
using any offline storage technology, such as an on-disk database.
Peer persistence MAY be used to resume peer connections after a client restart.

## Peer management

Waku v2 clients will have different requirements
when it comes to managing the peers tracked in the [**peer store**](#peer-store).
It is RECOMMENDED that clients support:

- [automatic reconnection](#reconnecting-peers) to peers under certain conditions
- [connection keep-alive](#connection-keep-alive)

### Reconnecting peers

A Waku v2 client MAY choose to reconnect to previously connected,
managed peers under certain conditions.
Such conditions include, but are not limited to:

- Reconnecting to all `relay`-capable peers after a client restart.
This will require [persistent peer storage](#persistence).

If a client chooses to automatically reconnect to previous peers,
it MUST respect the
[backing off period](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#prune-backoff-and-peer-exchange)
specified for GossipSub v1.1 before attempting to reconnect.
This requires keeping track of the [last time each peer was disconnected](#tracked-peer-metadata).
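A sketch of the backoff check, assuming the GossipSub v1.1 `PruneBackoff` of 1 minute recommended in `29/WAKU2-CONFIG` (the names are illustrative):

```js
const PRUNE_BACKOFF_MS = 60 * 1000; // 1 minute, per 29/WAKU2-CONFIG

// Returns true if enough time has passed since the peer's last
// disconnect to attempt a reconnection without violating the
// GossipSub v1.1 prune backoff.
function mayReconnect(peer, nowMs) {
  if (peer.disconnectTime === null) return true; // never disconnected
  return nowMs - peer.disconnectTime >= PRUNE_BACKOFF_MS;
}
```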

### Connection keep-alive

A Waku v2 client MAY choose to implement a keep-alive mechanism to certain peers.
If a client chooses to implement keep-alive on a connection,
it SHOULD do so by sending periodic [libp2p pings](https://docs.libp2p.io/concepts/protocols/#ping)
as per `10/WAKU2` [client recommendations](../../standards/core/10/waku2.md#recommendations-for-clients).
The recommended period between pings SHOULD be _at most_ 50%
of the shortest idle connection timeout for the specific client and transport.
For example, idle TCP connections often time out after 10 to 15 minutes.

> **Implementation note:**
the `nim-waku` client currently implements a keep-alive mechanism every `5 minutes`,
in response to a TCP connection timeout of `10 minutes`.

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [`Peer ID`](https://docs.libp2p.io/concepts/peer-id/)
- [`multiaddrs`](https://docs.libp2p.io/concepts/addressing/)
- [`protocol IDs`](https://docs.libp2p.io/concepts/protocols/#protocol-ids)
- [`11/WAKU2-RELAY`](../../standards/core/11/relay.md)
- [`13/WAKU2-STORE`](../../standards/core/13/store.md)
- [`18/WAKU2-SWAP`](../../standards/application/18/swap.md)
- [backing off period](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#prune-backoff-and-peer-exchange)
- [libp2p pings](https://docs.libp2p.io/concepts/protocols/#ping)
- [`10/WAKU2` client recommendations](../../standards/core/10/waku2.md#recommendations-for-clients)
---
slug: 29
title: 29/WAKU2-CONFIG
name: Waku v2 Client Parameter Configuration Recommendations
status: draft
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Filip Dimitrijevic <filip@status.im>
---

`29/WAKU2-CONFIG` describes the RECOMMENDED values
to assign to configurable parameters for Waku v2 clients.
Since Waku v2 is built on [libp2p](https://github.com/libp2p/specs),
most of the parameters and reasonable defaults are derived from there.

Waku v2 relay messaging is specified in [`11/WAKU2-RELAY`](/waku/standards/core/11/relay.md),
a minor extension of the [libp2p GossipSub protocol](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/README.md).
GossipSub behaviour is controlled by a series of adjustable parameters.
Waku v2 clients SHOULD configure these parameters to the recommended values below.

## GossipSub v1.0 parameters

GossipSub v1.0 parameters are defined in the [corresponding libp2p specification](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.0.md#parameters).
We repeat them here with RECOMMENDED values for `11/WAKU2-RELAY` implementations.

| Parameter | Purpose | RECOMMENDED value |
|----------------------|-------------------------------------------------------|-------------------|
| `D` | The desired outbound degree of the network | 6 |
| `D_low` | Lower bound for outbound degree | 4 |
| `D_high` | Upper bound for outbound degree | 8 |
| `D_lazy` | (Optional) the outbound degree for gossip emission | `D` |
| `heartbeat_interval` | Time between heartbeats | 1 second |
| `fanout_ttl` | Time-to-live for each topic's fanout state | 60 seconds |
| `mcache_len` | Number of history windows in message cache | 5 |
| `mcache_gossip` | Number of history windows to use when emitting gossip | 3 |
| `seen_ttl` | Expiry time for cache of seen message ids | 2 minutes |
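As a sketch, the recommended v1.0 values could be expressed as a configuration object (durations in seconds; the field names mirror the table above, not any specific client's API):

```js
// RECOMMENDED GossipSub v1.0 parameters for 11/WAKU2-RELAY,
// expressed with durations in seconds.
const gossipsubV10Params = Object.freeze({
  D: 6,                  // desired outbound degree
  D_low: 4,              // lower bound for outbound degree
  D_high: 8,             // upper bound for outbound degree
  D_lazy: 6,             // gossip emission degree, set equal to D
  heartbeat_interval: 1, // seconds between heartbeats
  fanout_ttl: 60,        // seconds of fanout state per topic
  mcache_len: 5,         // history windows in message cache
  mcache_gossip: 3,      // history windows used when gossiping
  seen_ttl: 120,         // seconds to remember seen message ids
});
```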

## GossipSub v1.1 parameters

GossipSub v1.1 extended GossipSub v1.0 and
introduced [several new parameters](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#overview-of-new-parameters).
We repeat the global parameters here
with RECOMMENDED values for `11/WAKU2-RELAY` implementations.

| Parameter | Description | RECOMMENDED value |
|----------------|------------------------------------------------------------------------|-------------------|
| `PruneBackoff` | Time after pruning a mesh peer before we consider grafting them again. | 1 minute |
| `FloodPublish` | Whether to enable flood publishing | true |
| `GossipFactor` | % of peers to send gossip to, if we have more than `D_lazy` available | 0.25 |
| `D_score` | Number of peers to retain by score when pruning from oversubscription | `D_low` |
| `D_out` | Number of outbound connections to keep in the mesh. | `D_low` - 1 |

`11/WAKU2-RELAY` clients SHOULD implement a peer scoring mechanism
with the parameter constraints as
[specified by libp2p](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#overview-of-new-parameters).

## Other configuration

The following behavioural parameters are not specified by `libp2p`,
but nevertheless describe constraints that `11/WAKU2-RELAY` clients
MAY choose to implement.

| Parameter | Description | RECOMMENDED value |
|--------------------|---------------------------------------------------------------------------|-------------------|
| `BackoffSlackTime` | Slack time to add to prune backoff before attempting to graft again | 2 seconds |
| `IWantPeerBudget` | Maximum number of IWANT messages to accept from a peer within a heartbeat | 25 |
| `IHavePeerBudget` | Maximum number of IHAVE messages to accept from a peer within a heartbeat | 10 |
| `IHaveMaxLength` | Maximum number of messages to include in an IHAVE message | 5000 |
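A sketch of how the per-heartbeat peer budgets might be enforced; the helper below is hypothetical, not taken from a real implementation:

```js
// RECOMMENDED per-peer, per-heartbeat budgets from the table above.
const budgets = { IWANT: 25, IHAVE: 10 };

// Track IWANT/IHAVE control messages received from one peer within the
// current heartbeat and reject messages beyond the budgets.
function makeBudgetTracker() {
  let counts = { IWANT: 0, IHAVE: 0 };
  return {
    // Called once per heartbeat to reset the counters.
    onHeartbeat() { counts = { IWANT: 0, IHAVE: 0 }; },
    // Returns true if the message is within budget and should be accepted.
    accept(kind) {
      counts[kind] += 1;
      return counts[kind] <= budgets[kind];
    },
  };
}
```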

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [libp2p](https://github.com/libp2p/specs)
- [11/WAKU2-RELAY](/waku/standards/core/11/relay.md)
- [libp2p GossipSub protocol](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/README.md)
- [corresponding libp2p specification](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.0.md#parameters)
- [several new parameters](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#overview-of-new-parameters)
---
slug: 29
title: 29/WAKU2-CONFIG
name: Waku v2 Client Parameter Configuration Recommendations
status: draft
editor: Hanno Cornelius <hanno@status.im>
contributors:
---

`29/WAKU2-CONFIG` describes the RECOMMENDED values
to assign to configurable parameters for Waku v2 clients.
Since Waku v2 is built on [libp2p](https://github.com/libp2p/specs),
most of the parameters and reasonable defaults are derived from there.

Waku v2 relay messaging is specified in [`11/WAKU2-RELAY`](../../standards/core/11/relay.md),
a minor extension of the [libp2p GossipSub protocol](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/README.md).
GossipSub behaviour is controlled by a series of adjustable parameters.
Waku v2 clients SHOULD configure these parameters to the recommended values below.

## GossipSub v1.0 parameters

GossipSub v1.0 parameters are defined in the [corresponding libp2p specification](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.0.md#parameters).
We repeat them here with RECOMMENDED values for `11/WAKU2-RELAY` implementations.

| Parameter | Purpose | RECOMMENDED value |
|----------------------|-------------------------------------------------------|-------------------|
| `D` | The desired outbound degree of the network | 6 |
| `D_low` | Lower bound for outbound degree | 4 |
| `D_high` | Upper bound for outbound degree | 8 |
| `D_lazy` | (Optional) the outbound degree for gossip emission | `D` |
| `heartbeat_interval` | Time between heartbeats | 1 second |
| `fanout_ttl` | Time-to-live for each topic's fanout state | 60 seconds |
| `mcache_len` | Number of history windows in message cache | 5 |
| `mcache_gossip` | Number of history windows to use when emitting gossip | 3 |
| `seen_ttl` | Expiry time for cache of seen message ids | 2 minutes |

## GossipSub v1.1 parameters

GossipSub v1.1 extended GossipSub v1.0 and introduced [several new parameters](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#overview-of-new-parameters).
We repeat the global parameters here with RECOMMENDED values
for `11/WAKU2-RELAY` implementations.

| Parameter | Description | RECOMMENDED value |
|----------------|------------------------------------------------------------------------|-------------------|
| `PruneBackoff` | Time after pruning a mesh peer before we consider grafting them again. | 1 minute |
| `FloodPublish` | Whether to enable flood publishing | true |
| `GossipFactor` | % of peers to send gossip to, if we have more than `D_lazy` available | 0.25 |
| `D_score` | Number of peers to retain by score when pruning from oversubscription | `D_low` |
| `D_out` | Number of outbound connections to keep in the mesh. | `D_low` - 1 |

`11/WAKU2-RELAY` clients SHOULD implement a peer scoring mechanism
with the parameter constraints as
[specified by libp2p](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#overview-of-new-parameters).

## Other configuration

The following behavioural parameters are not specified by `libp2p`,
but nevertheless describe constraints that `11/WAKU2-RELAY` clients
MAY choose to implement.

| Parameter | Description | RECOMMENDED value |
|--------------------|---------------------------------------------------------------------------|-------------------|
| `BackoffSlackTime` | Slack time to add to prune backoff before attempting to graft again | 2 seconds |
| `IWantPeerBudget` | Maximum number of IWANT messages to accept from a peer within a heartbeat | 25 |
| `IHavePeerBudget` | Maximum number of IHAVE messages to accept from a peer within a heartbeat | 10 |
| `IHaveMaxLength` | Maximum number of messages to include in an IHAVE message | 5000 |

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [libp2p](https://github.com/libp2p/specs)
- [11/WAKU2-RELAY](../../standards/core/11/relay.md)
- [libp2p GossipSub protocol](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/README.md)
- [corresponding libp2p specification](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.0.md#parameters)
- [several new parameters](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#overview-of-new-parameters)
---
slug: 30
title: 30/ADAPTIVE-NODES
name: Adaptive nodes
status: draft
editor: Oskar Thorén <oskarth@titanproxy.com>
contributors:
- Filip Dimitrijevic <filip@status.im>
---

This is an informational spec that showcases the concept of adaptive nodes.

## Node types - a continuum

We can look at node types as a continuum,
from more restricted to less restricted,
fewer resources to more resources.

![Node types - a continuum](./images/adaptive_node_continuum2.png)

### Possible limitations

- Connectivity: Not publicly connectable vs static IP and DNS
- Connectivity: Mostly offline to mostly online to always online
- Resources: Storage, CPU, Memory, Bandwidth

### Accessibility and motivation

Some examples:

- Opening a browser window: costs nothing, but contributes nothing
- Desktop: download, leave in background, contribute somewhat
- Cluster: expensive, upkeep, but can contribute a lot

These are also illustrative,
so a node in a browser in certain environments might contribute similarly to Desktop.

### Adaptive nodes

We call these nodes *adaptive nodes* to highlight different modes of contributing,
such as:

- Only leeching from the network
- Relaying messages for one or more topics
- Providing services for lighter nodes such as lightpush and filter
- Storing historical messages to various degrees
- Ensuring the relay network can't be spammed with RLN

### Planned incentives

Incentives to run a node are currently planned around:

- SWAP for accounting and settlement of services provided
- RLN RELAY for spam protection
- Other incentivization schemes are likely to follow and are an area of active research

## Node protocol selection

Each node can choose which protocols to support, depending on its resources and goals.

![Node protocol selection](./images/adaptive_node_protocol_selection2.png)

Protocols like [11/WAKU2-RELAY](/waku/standards/core/11/relay.md),
as well as [12], [13], [19], and [21], correspond to libp2p protocols.

However, other protocols like 16/WAKU2-RPC
(local HTTP JSON-RPC), 25/LIBP2P-DNS-DISCOVERY,
Discovery v5 (DevP2P) or interfacing with distributed storage,
are running on different network stacks.

This is in addition to protocols that specify payloads, such as 14/WAKU2-MESSAGE,
26/WAKU2-PAYLOAD, or application specific ones.
As well as specs that act more as recommendations,
such as 23/WAKU2-TOPICS or 27/WAKU2-PEERS.

## Waku network visualization

We can better visualize the network with some illustrative examples.

### Topology and topics

This illustration shows an example topology with different PubSub topics
for the relay protocol.

![Waku Network visualization](./images/adaptive_node_network_topology_protocols.png)

### Legend

This illustration shows an example of content topics a node is interested in.

![Waku Network visualization legend](./images/adaptive_node_network_topology_legend.png)

The dotted box shows what content topics (application-specific)
a node is interested in.

A node that is purely providing a service to the network might not care.

In this example, we see support for toy chat,
a topic in Waku v1 (Status chat), WalletConnect, and SuperRare community.

### Auxiliary network

This is a separate component with its own topology.

Behavior and interaction with other protocols specified in Vac RFCs,
e.g. [25/LIBP2P-DNS-DISCOVERY](/vac/25/libp2p-dns-discovery.md)
and [15/WAKU-BRIDGE](/waku/standards/core/15/bridge.md).

### Node Cross Section

This one shows a cross-section of nodes in different dimensions and
shows how the connections look different for different protocols.

![Node cross section](./images/adaptive_node_cross_section2.png)

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [11/WAKU2-RELAY](/waku/standards/core/11/relay.md)
- [25/LIBP2P-DNS-DISCOVERY](/vac/25/libp2p-dns-discovery.md)
- [15/WAKU-BRIDGE](/waku/standards/core/15/bridge.md)
---
slug: 30
title: 30/ADAPTIVE-NODES
name: Adaptive nodes
status: draft
editor: Oskar Thorén <oskarth@titanproxy.com>
contributors:
---

This is an informational spec that showcases the concept of adaptive nodes.

## Node types - a continuum

We can look at node types as a continuum,
from more restricted to less restricted, fewer resources to more resources.

![Node types - a continuum](./images/adaptive_node_continuum2.png)

### Possible limitations

- Connectivity: Not publicly connectable vs static IP and DNS
- Connectivity: Mostly offline to mostly online to always online
- Resources: Storage, CPU, Memory, Bandwidth

### Accessibility and motivation

Some examples:

- Opening a browser window: costs nothing, but contributes nothing
- Desktop: download, leave in background, contribute somewhat
- Cluster: expensive, upkeep, but can contribute a lot

These are also illustrative,
so a node in a browser in certain environments might contribute similarly to Desktop.

### Adaptive nodes

We call these nodes *adaptive nodes* to highlight different modes of contributing,
such as:

- Only leeching from the network
- Relaying messages for one or more topics
- Providing services for lighter nodes such as lightpush and filter
- Storing historical messages to various degrees
- Ensuring the relay network can't be spammed with RLN

### Planned incentives

Incentives to run a node are currently planned around:

- SWAP for accounting and settlement of services provided
- RLN RELAY for spam protection
- Other incentivization schemes are likely to follow and are an area of active research

## Node protocol selection

Each node can choose which protocols to support, depending on its resources and goals.

![Node protocol selection](./images/adaptive_node_protocol_selection2.png)

Protocols like [11/WAKU2-RELAY](../../standards/core/11/relay.md)
and others (12, 13, 19, 21) correspond to libp2p protocols.

However, other protocols like 16/WAKU2-RPC
(local HTTP JSON-RPC), 25/LIBP2P-DNS-DISCOVERY,
Discovery v5 (DevP2P) or interfacing with distributed storage,
are running on different network stacks.

This is in addition to protocols that specify payloads, such as 14/WAKU2-MESSAGE,
26/WAKU2-PAYLOAD, or application specific ones.
As well as specs that act more as recommendations,
such as 23/WAKU2-TOPICS or 27/WAKU2-PEERS.

## Waku network visualization

We can better visualize the network with some illustrative examples.

### Topology and topics

The first one shows an example topology with different PubSub topics
for the relay protocol.

![Waku Network visualization](./images/adaptive_node_network_topology_protocols.png)

### Legend

![Waku Network visualization legend](./images/adaptive_node_network_topology_legend.png)

The dotted box shows what content topics (application-specific)
a node is interested in.

A node that is purely providing a service to the network might not care.

In this example, we see support for toy chat,
a topic in Waku v1 (Status chat), WalletConnect, and SuperRare community.

### Auxiliary network

This is a separate component with its own topology.

Behavior and interaction with other protocols specified in Vac RFCs,
e.g. 25/LIBP2P-DNS-DISCOVERY, 15/WAKU-BRIDGE, etc.

### Node Cross Section

This one shows a cross-section of nodes in different dimensions and
shows how the connections look different for different protocols.

![Node cross section](./images/adaptive_node_cross_section2.png)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [11/WAKU2-RELAY](../../standards/core/11/relay.md)
---
slug: 20
title: 20/TOY-ETH-PM
name: Toy Ethereum Private Message
status: draft
tags: waku/application
editor: Franck Royer <franck@status.im>
contributors:
---

**Content Topics**:

- Public Key Broadcast: `/eth-pm/1/public-key/proto`
- Private Message: `/eth-pm/1/private-message/proto`

## Abstract

This specification explains the Toy Ethereum Private Message protocol,
which enables a peer to send an encrypted message to another peer
over the Waku network using the peer's Ethereum address.

## Goal

Alice wants to send an encrypted message to Bob,
where only Bob can decrypt the message.
Alice only knows Bob's Ethereum Address.

The goal of this specification
is to demonstrate how Waku can be used for encrypted messaging purposes,
using Ethereum accounts for identity.
This protocol caters to Web3 wallet restrictions,
allowing it to be implemented using the standard Web3 API.
In its current state,
Toy Ethereum Private Message, ETH-PM, has privacy and feature [limitations](#limitations),
has not been audited, and hence is not fit for production usage.
We hope this can be an inspiration for developers
wishing to build on top of Waku.

## Design Requirements

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”,
“SHOULD NOT”, “RECOMMENDED”, “MAY”, and
“OPTIONAL” in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).

## Variables

Here are the variables used in the protocol and their definition:

- `B` is Bob's Ethereum address (or account),
- `b` is the private key of `B`, and is only known by Bob.
- `B'` is Bob's Encryption Public Key, for which `b'` is the private key.
- `M` is the private message that Alice sends to Bob.

The proposed protocol MUST adhere to the following design requirements:

1. Alice knows Bob's Ethereum address
2. Bob is willing to participate in Eth-PM, and publishes `B'`
3. Bob's ownership of `B'` MUST be verifiable
4. Alice wants to send message `M` to Bob
5. Bob SHOULD be able to get `M` using [10/WAKU2](/waku/standards/core/10/waku2.md)
6. Participants only have access to their Ethereum Wallet via the Web3 API
7. Carole MUST NOT be able to read `M`'s content,
even if she is storing it or relaying it
8. [Waku Message Version 1](/waku/standards/application/26/payload.md) Asymmetric Encryption
is used for encryption purposes.

## Limitations

Alice's details are not included in the message's structure,
meaning that there is no programmatic way for Bob to reply to Alice
or verify her identity.

Private messages are sent on the same content topic for all users.
As the recipient data is encrypted,
all participants must decrypt all messages, which can lead to scalability issues.

This protocol does not guarantee Perfect Forward Secrecy nor Future Secrecy:
if Bob's private key is compromised, past and future messages could be decrypted.
A solution combining regular [X3DH](https://www.signal.org/docs/specifications/x3dh/)
bundle broadcast with [Double Ratchet](https://signal.org/docs/specifications/doubleratchet/)
encryption would remove these limitations;
see the [Status secure transport specification](/status/deprecated/secure-transport.md)
for an example of a protocol that achieves this in a peer-to-peer setting.

Bob MUST decide to participate in the protocol before Alice can send him a message.
This is discussed in more detail in
[Consideration for a non-interactive/uncoordinated protocol](#consideration-for-a-non-interactiveuncoordinated-protocol).

## The Protocol

### Generate Encryption KeyPair

First, Bob needs to generate a keypair for Encryption purposes.

Bob SHOULD get 32 bytes from a secure random source as Encryption Private Key, `b'`.
Then Bob can compute the corresponding SECP-256k1 Public Key, `B'`.

### Broadcast Encryption Public Key

For Alice to encrypt messages for Bob,
Bob SHOULD broadcast his Encryption Public Key `B'`.
To prove that the Encryption Public Key `B'`
is indeed owned by the owner of Bob's Ethereum Account `B`,
Bob MUST sign `B'` using `B`.

### Sign Encryption Public Key

To prove ownership of the Encryption Public Key,
Bob must sign it using [EIP-712](https://eips.ethereum.org/EIPS/eip-712) v3,
meaning calling `eth_signTypedData_v3` on his wallet's API.

Note: While v4 also exists, it is not available on all wallets, and
the features brought by v4 are not needed for the current use case.

The `TypedData` to be passed to `eth_signTypedData_v3` MUST be as follows, where:

- `encryptionPublicKey` is Bob's Encryption Public Key, `B'`,
in hex format, **without** `0x` prefix.
- `bobAddress` is Bob's Ethereum address, corresponding to `B`,
in hex format, **with** `0x` prefix.

```js
const typedData = {
  domain: {
    chainId: 1,
    name: 'Ethereum Private Message over Waku',
    version: '1',
  },
  message: {
    encryptionPublicKey: encryptionPublicKey,
    ownerAddress: bobAddress,
  },
  primaryType: 'PublishEncryptionPublicKey',
  types: {
    EIP712Domain: [
      { name: 'name', type: 'string' },
      { name: 'version', type: 'string' },
      { name: 'chainId', type: 'uint256' },
    ],
    PublishEncryptionPublicKey: [
      { name: 'encryptionPublicKey', type: 'string' },
      { name: 'ownerAddress', type: 'string' },
    ],
  },
}
```
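For illustration, requesting the signature from a Web3 provider might look as follows. The provider injection via `window.ethereum` and the `eth_signTypedData_v3` parameter order follow common wallet behaviour, but are assumptions here, not part of this spec:

```js
// Ask the wallet to sign the typed data with Bob's account.
// `provider` is any EIP-1193-style object exposing `request`;
// in a browser this would typically be `window.ethereum`.
async function signEncryptionPublicKey(provider, bobAddress, typedData) {
  return provider.request({
    method: 'eth_signTypedData_v3',
    params: [bobAddress, JSON.stringify(typedData)],
  });
}
```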
|
||||
|
||||
### Public Key Message
|
||||
|
||||
The resulting signature is then included in a `PublicKeyMessage`, where
|
||||
|
||||
- `encryption_public_key` is Bob's Encryption Public Key `B'`, not compressed,
|
||||
- `eth_address` is Bob's Ethereum Address `B`,
|
||||
- `signature` is the EIP-712 as described above.
|
||||
|
||||
```protobuf
syntax = "proto3";

message PublicKeyMessage {
  bytes encryption_public_key = 1;
  bytes eth_address = 2;
  bytes signature = 3;
}
```

This MUST be wrapped in a [14/WAKU-MESSAGE](/waku/standards/core/14/message.md) version 0,
with the Public Key Broadcast content topic.
Finally, Bob SHOULD publish the message on Waku.

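For illustration, the wire encoding of the three length-delimited `bytes` fields above can be sketched by hand. The helper below is hypothetical (a real implementation would use generated protobuf code) and assumes every field is shorter than 128 bytes, so each protobuf varint length fits in a single byte:

```js
// Minimal protobuf wire encoding of PublicKeyMessage (sketch only).
// Each `bytes` field is length-delimited: tag = (field_number << 3) | 2,
// followed by a one-byte varint length (valid for lengths < 128), then the bytes.
function encodePublicKeyMessage({ encryptionPublicKey, ethAddress, signature }) {
  const field = (number, bytes) =>
    Buffer.concat([Buffer.from([(number << 3) | 2, bytes.length]), bytes]);
  return Buffer.concat([
    field(1, encryptionPublicKey), // encryption_public_key = 1
    field(2, ethAddress),          // eth_address = 2
    field(3, signature),           // signature = 3
  ]);
}
```

With a 65-byte uncompressed key, a 20-byte address and a 65-byte signature, the encoded message is 156 bytes: the 150 bytes of field content plus 2 bytes of tag/length overhead per field.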
## Consideration for a non-interactive/uncoordinated protocol

Alice has to get Bob's Public Key to send a message to Bob.
Because an Ethereum Address is a hash of the account's Public Key,
it is not enough in itself to deduce Bob's Public Key.

This is why the protocol dictates that Bob MUST send his Public Key first,
and Alice MUST receive it before she can send him a message.

Moreover, nwaku, the reference implementation of [13/WAKU2-STORE](/waku/standards/core/13/store.md),
stores messages for a maximum period of 30 days.
This means that Bob would need to broadcast his public key
at least every 30 days to remain reachable.

Below we review possible solutions to mitigate this "sign up" step.

### Retrieve the public key from the blockchain

If Bob has signed at least one transaction with his account,
then his Public Key can be extracted from the transaction's ECDSA signature.
The challenge with this method is that the standard Web3 Wallet API
does not allow Alice to specifically retrieve all/any transactions sent by Bob.

Alice would instead need to use the `eth.getBlock` API
to retrieve Ethereum blocks one by one.
For each block, she would need to check the `from` value of each transaction
until she finds a transaction sent by Bob.

This process is resource intensive and
can be slow when using services such as Infura due to rate limits in place,
which makes it inappropriate for a browser or mobile phone environment.

An alternative would be to either run a backend
that can connect directly to an Ethereum node,
use a centralized blockchain explorer,
or use a decentralized indexing service such as [The Graph](https://thegraph.com/).

Note that these would resolve a UX issue
only if a sender wants to proceed with _air drops_.

Indeed, if Bob does not publish his Public Key in the first place,
then it MAY be an indication that he does not participate in this protocol
and hence will not receive messages.

However, these solutions would be helpful
if the sender wants to proceed with an _air drop_ of messages:
send messages over Waku for users to retrieve later,
once they decide to participate in this protocol.
Bob may not want to participate at first but may decide to participate at a later stage
and would like to access previous messages.
This could make sense in an NFT offer scenario:
users send offers to any NFT owner;
an NFT owner may decide at some point to participate in the protocol and
retrieve previous offers.

### Publishing the public key in long-term storage

Another improvement would be for Bob not to have to re-publish his public key
every 30 days or less.
Similarly to the above,
if Bob stops publishing his public key,
then it MAY be an indication that he no longer participates in the protocol.

In any case,
the protocol could be modified to store the Public Key in more permanent storage,
such as a dedicated smart contract on the blockchain.

## Send Private Message

Alice MAY monitor the Waku network to collect Ethereum Address and
Encryption Public Key tuples.
Alice SHOULD verify that the `signature`s of the `PublicKeyMessage`s she receives
are valid as per EIP-712.
She SHOULD drop any message without a signature or with an invalid signature.

Using Bob's Encryption Public Key,
retrieved via [10/WAKU2](/waku/standards/core/10/waku2.md),
Alice MAY now send an encrypted message to Bob.

If she wishes to do so,
Alice MUST encrypt her message `M` using Bob's Encryption Public Key `B'`,
as per [26/WAKU-PAYLOAD Asymmetric Encryption specs](/waku/standards/application/26/payload.md#asymmetric).

Alice SHOULD now publish this message on the Private Message content topic.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [10/WAKU2](/waku/standards/core/10/waku2.md)
- [Waku Message Version 1](/waku/standards/application/26/payload.md)
- [X3DH](https://www.signal.org/docs/specifications/x3dh/)
- [Double Ratchet](https://signal.org/docs/specifications/doubleratchet/)
- [Status secure transport specification](/status/deprecated/secure-transport.md)
- [EIP-712](https://eips.ethereum.org/EIPS/eip-712)
- [13/WAKU2-STORE](/waku/standards/core/13/store.md)
- [The Graph](https://thegraph.com/)

---
slug: 20
title: 20/TOY-ETH-PM
name: Toy Ethereum Private Message
status: draft
tags: waku/application
editor: Franck Royer <franck@status.im>
contributors:
---

**Content Topics**:

- Public Key Broadcast: `/eth-pm/1/public-key/proto`,
- Private Message: `/eth-pm/1/private-message/proto`.

This specification explains the Toy Ethereum Private Message protocol,
which enables a peer to send an encrypted message to another peer
using the Waku v2 network and the peer's Ethereum address.

The main purpose of this specification
is to demonstrate how Waku v2 can be used for encrypted messaging,
using Ethereum accounts for identity.
This protocol caters for Web3 wallet restrictions,
allowing it to be implemented using only the standard Web3 API.
In its current state,
the protocol has privacy and feature [limitations](#limitations),
has not been audited, and hence is not fit for production usage.
We hope this can be an inspiration for developers
wishing to build on top of Waku v2.

## Goal

Alice wants to send an encrypted message to Bob, where only Bob can decrypt the message.
Alice only knows Bob's Ethereum Address.

## Variables

Here are the variables used in the protocol and their definitions:

- `B` is Bob's Ethereum address (or account),
- `b` is the private key of `B`, and is only known by Bob,
- `B'` is Bob's Encryption Public Key, for which `b'` is the private key,
- `M` is the private message that Alice sends to Bob.

## Design Requirements

The proposed protocol MUST adhere to the following design requirements:

1. Alice knows Bob's Ethereum address,
2. Bob is willing to participate in Eth-PM, and publishes `B'`,
3. Bob's ownership of `B'` MUST be verifiable,
4. Alice wants to send message `M` to Bob,
5. Bob SHOULD be able to get `M` using [10/WAKU2 spec](../../core/10/waku2.md),
6. Participants only have access to their Ethereum Wallet via the Web3 API,
7. Carole MUST NOT be able to read `M`'s content
even if she is storing it or relaying it,
8. [Waku Message Version 1](../26/payload.md) Asymmetric Encryption
is used for encryption purposes.

## Limitations

Alice's details are not included in the message's structure,
meaning that there is no programmatic way for Bob to reply to Alice
or verify her identity.

Private messages are sent on the same content topic for all users.
As the recipient data is encrypted,
all participants must decrypt all messages, which can lead to scalability issues.

This protocol does not guarantee Perfect Forward Secrecy nor Future Secrecy:
if Bob's private key is compromised, past and future messages could be decrypted.
A solution combining regular [X3DH](https://www.signal.org/docs/specifications/x3dh/)
bundle broadcasts with [Double Ratchet](https://signal.org/docs/specifications/doubleratchet/)
encryption would remove these limitations;
see the [Status secure transport spec](https://specs.status.im/spec/5)
for an example of a protocol that achieves this in a peer-to-peer setting.

Bob MUST decide to participate in the protocol before Alice can send him a message.
This is discussed in more detail in
[Consideration for a non-interactive/uncoordinated protocol](#consideration-for-a-non-interactiveuncoordinated-protocol).

## The protocol

### Generate Encryption KeyPair

First, Bob needs to generate a keypair for encryption purposes.

Bob SHOULD get 32 bytes from a secure random source as his Encryption Private Key, `b'`.
Then Bob can compute the corresponding SECP-256k1 Public Key, `B'`.

### Broadcast Encryption Public Key

For Alice to encrypt messages for Bob,
Bob SHOULD broadcast his Encryption Public Key `B'`.
To prove that the Encryption Public Key `B'`
is indeed owned by the owner of Bob's Ethereum Account `B`,
Bob MUST sign `B'` using `B`.

---
slug: 21
title: 21/WAKU2-FAULT-TOLERANT-STORE
name: Waku v2 Fault-Tolerant Store
status: deleted
editor: Sanaz Taheri <sanaz@status.im>
contributors:
---

The reliability of the [13/WAKU2-STORE](../../core/13/store.md)
protocol heavily relies on the fact that full nodes, i.e.,
those who persist messages, have high availability and
uptime and do not miss any messages.
If a node goes offline,
it risks missing all the messages transmitted
in the network during that time.
In this specification,
we provide a method that makes the store protocol resilient
in the presence of faulty nodes.
Relying on this method,
nodes that have been offline for a time window will be able to fill the gap
in their message history when getting back online.
Moreover, nodes with lower availability and
uptime can leverage this method to reliably provide the store protocol services
as a full node.

## Method description

As the first step
towards making the [13/WAKU2-STORE](../../core/13/store.md) protocol fault-tolerant,
we introduce a new type of time-based query through which nodes fetch message history
from each other based on their desired time window.
This method operates on the assumption that the querying node
knows some other nodes in the store protocol
which have been online for the targeted time window.

## Security Consideration

The main security consideration to take into account
while using this method is that a querying node
has to reveal its offline time to the queried node.
This will gradually result in the extraction of the node's activity pattern,
which can lead to inference attacks.

## Wire Specification

We extend the [HistoryQuery](../../core/13/store.md/#payloads) protobuf message
with two fields, `start_time` and `end_time`, to signify the time range to be queried.

### Payloads

```diff
syntax = "proto3";

message HistoryQuery {
  // the first field is reserved for future use
  string pubsubtopic = 2;
  repeated ContentFilter contentFilters = 3;
  PagingInfo pagingInfo = 4;
+ sint64 start_time = 5;
+ sint64 end_time = 6;
}
```

### HistoryQuery

RPC call to query historical messages.

- `start_time`:
this field MAY be filled out to signify the starting point of the queried time window.
This field holds the Unix epoch time in nanoseconds.
The `messages` field of the corresponding
[`HistoryResponse`](../../core/13/store.md/#HistoryResponse)
MUST contain historical waku messages whose
[`timestamp`](../../core/14/message.md/#Payloads)
is larger than or equal to the `start_time`.
- `end_time`:
this field MAY be filled out to signify the ending point of the queried time window.
This field holds the Unix epoch time in nanoseconds.
The `messages` field of the corresponding
[`HistoryResponse`](../../core/13/store.md/#HistoryResponse)
MUST contain historical waku messages whose
[`timestamp`](../../core/14/message.md/#Payloads) is less than or equal to the `end_time`.

A time-based query is considered valid if
its `end_time` is larger than or equal to the `start_time`.
Queries that do not adhere to this condition will not get through, e.g.,
an open-ended time query in which the `start_time` is given but
no `end_time` is supplied is not valid.
If both `start_time` and
`end_time` are omitted, then no time-window filter takes place.

In order to account for node asynchrony, and
assuming that nodes may be out of sync for at most 20 seconds
(i.e., 20000000000 nanoseconds),
querying nodes SHOULD add an offset of 20 seconds to their offline time window.
That is, if the original window is [`l`, `r`],
then the history query SHOULD be made for `[start_time: l - 20s, end_time: r + 20s]`.

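The offset computation can be sketched as follows; `queryWindow` is a hypothetical helper, the field names match the `HistoryQuery` message, and times are nanosecond Unix epochs held as `BigInt`s:

```js
const OFFSET_NS = 20_000_000_000n; // 20 s of allowed clock skew, in nanoseconds

// Widen the offline window [l, r] by the skew offset on both sides.
function queryWindow(l, r) {
  return { start_time: l - OFFSET_NS, end_time: r + OFFSET_NS };
}
```

For an offline window of [`t`, `t + 60s`], the resulting query covers `t - 20s` through `t + 80s`.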
Note that `HistoryQuery` preserves an `AND` operation among the queried attributes.
As such, the `messages` field of the corresponding
[`HistoryResponse`](../../core/13/store.md/#HistoryResponse)
MUST contain historical waku messages that satisfy the indicated `pubsubtopic` AND
`contentFilters` AND the time range [`start_time`, `end_time`].

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [13/WAKU2-STORE](../../core/13/store.md)
- [`timestamp`](../../standards/core/14/message.md/#Payloads)

---
slug: 26
title: 26/WAKU2-PAYLOAD
name: Waku Message Payload Encryption
status: draft
editor: Oskar Thoren <oskarth@titanproxy.com>
contributors:
- Oskar Thoren <oskarth@titanproxy.com>
---

## Abstract

This specification describes how Waku provides confidentiality, authenticity, and
integrity, as well as some form of unlinkability.
Specifically, it describes how encryption, decryption, and
signing work in [6/WAKU1](waku/standards/legacy/6/waku1.md) and
in [10/WAKU2](waku/standards/core/10/waku2.md) with [14/WAKU-MESSAGE](waku/standards/core/14/message.md/#version1).

This specification effectively replaces [7/WAKU-DATA](waku/standards/legacy/7/data.md)
as well as [6/WAKU1 Payload encryption](waku/standards/legacy/6/waku1.md/#payload-encryption),
but is written in a way that is agnostic and self-contained for [6/WAKU1](waku/standards/legacy/6/waku1.md) and [10/WAKU2](waku/standards/core/10/waku2.md).

Large sections of this specification originate from
[EIP-627: Whisper spec](https://eips.ethereum.org/EIPS/eip-627) as well as from
the [RLPx Transport Protocol spec (ECIES encryption)](https://github.com/ethereum/devp2p/blob/master/rlpx.md#ecies-encryption),
with some modifications.

## Specification

The keywords “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and
“OPTIONAL” in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).

For [6/WAKU1](waku/standards/legacy/6/waku1.md),
the `data` field is used in the [waku envelope](waku/standards/legacy/6/waku1.md#abnf-specification),
and the field MAY contain the encrypted payload.

For [10/WAKU2](waku/standards/core/10/waku2.md),
the `payload` field is used in `WakuMessage`
and MAY contain the encrypted payload.

The fields that are concatenated and
encrypted as part of the `data` (Waku legacy) or
`payload` (Waku) field are:

- `flags`
- `payload-length`
- `payload`
- `padding`
- `signature`

## Design requirements

- *Confidentiality*:
The adversary SHOULD NOT be able to learn what data is being sent from one Waku node
to one or several other Waku nodes.
- *Authenticity*:
The adversary SHOULD NOT be able to cause a Waku endpoint
to accept data from any third party as though it came from the other endpoint.
- *Integrity*:
The adversary SHOULD NOT be able to cause a Waku endpoint to
accept data that has been tampered with.

Notably, *forward secrecy* is not provided at this layer.
If this property is desired,
a more fully featured secure communication protocol can be used on top.

It also provides some form of *unlinkability* since:

- only participants who are able to decrypt a message can see its signature,
- payloads are padded to a fixed length.

## Cryptographic primitives

- AES-256-GCM (for symmetric encryption)
- ECIES
- ECDSA
- KECCAK-256

ECIES uses the following cryptosystem:

- Curve: secp256k1
- KDF: NIST SP 800-56 Concatenation Key Derivation Function, with SHA-256 option
- MAC: HMAC with SHA-256
- AES: AES-128-CTR

### ABNF

Using [Augmented Backus-Naur form (ABNF)](https://tools.ietf.org/html/rfc5234),
we have the following format:

```abnf
; 1 byte; first two bits contain the size of payload-length field,
; third bit indicates whether the signature is present.
flags = 1OCTET

; contains the size of payload.
payload-length = 4*OCTET

; byte array of arbitrary size (may be zero).
payload = *OCTET

; byte array of arbitrary size (may be zero).
padding = *OCTET

; 65 bytes, if present.
signature = 65OCTET

data = flags payload-length payload padding [signature]

; This field is called payload in Waku
payload = data
```

### Signature

Those unable to decrypt the payload/data are also unable to access the signature.
The signature, if provided,
SHOULD be the ECDSA signature of the Keccak-256 hash of the unencrypted data,
using the secret key of the originator identity.
The signature is serialized as the concatenation of the `r`, `s` and `v` parameters
of the SECP-256k1 ECDSA signature, in that order.
`r` and `s` MUST be big-endian encoded, fixed-width 256-bit unsigned integers.
`v` MUST be an 8-bit, non-normalized value, and SHOULD be either 27 or 28.

See [Ethereum "Yellow paper": Appendix F Signing transactions](https://ethereum.github.io/yellowpaper/paper.pdf)
for more information on signature generation, parameters and public key recovery.

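The serialization rule above can be sketched as follows (`serializeSignature` is a hypothetical helper; `r` and `s` are taken as `BigInt`s):

```js
// Serialize a SECP-256k1 ECDSA signature as r || s || v (65 bytes),
// with r and s as big-endian, fixed-width 256-bit unsigned integers
// and v a single non-normalized byte, 27 or 28.
function serializeSignature(r, s, v) {
  if (v !== 27 && v !== 28) throw new Error('v should be 27 or 28');
  const be256 = (n) => {
    const buf = Buffer.alloc(32);
    let x = n;
    for (let i = 31; i >= 0; i--) { buf[i] = Number(x & 0xffn); x >>= 8n; }
    return buf;
  };
  return Buffer.concat([be256(r), be256(s), Buffer.from([v])]);
}
```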
### Encryption

#### Symmetric

Symmetric encryption uses AES-256-GCM for
[authenticated encryption](https://en.wikipedia.org/wiki/Authenticated_encryption).
The output of encryption is of the form (`ciphertext`, `tag`, `iv`),
where `ciphertext` is the encrypted message,
`tag` is a 16-byte message authentication tag, and
`iv` is a 12-byte initialization vector (nonce).
The message authentication `tag` and
initialization vector `iv` fields MUST be appended to the resulting `ciphertext`,
in that order.
Note that previous specifications and
some implementations might refer to `iv` as `nonce` or `salt`.

#### Asymmetric
|
||||
|
||||
Asymmetric encryption uses the standard Elliptic Curve Integrated Encryption Scheme
|
||||
(ECIES) with SECP-256k1 public key.
|
||||
|
||||
#### ECIES
|
||||
|
||||
This section originates from the [RLPx Transport Protocol spec](https://github.com/ethereum/devp2p/blob/master/rlpx.md#ecies-encryption)
|
||||
specification with minor modifications.
|
||||
|
||||
The cryptosystem used is:
|
||||
|
||||
- The elliptic curve secp256k1 with generator `G`.
|
||||
- `KDF(k, len)`: the NIST SP 800-56 Concatenation Key Derivation Function.
|
||||
- `MAC(k, m)`: HMAC using the SHA-256 hash function.
|
||||
- `AES(k, iv, m)`: the AES-128 encryption function in CTR mode.
|
||||
|
||||
Special notation used: `X || Y` denotes concatenation of `X` and `Y`.
|
||||
|
||||
Alice wants to send an encrypted message that can be decrypted by
|
||||
Bob's static private key `kB`.
|
||||
Alice knows about Bobs static public key `KB`.
|
||||
|
||||
To encrypt the message `m`, Alice generates a random number `r` and
|
||||
corresponding elliptic curve public key `R = r * G` and
|
||||
computes the shared secret `S = Px` where `(Px, Py) = r * KB`.
|
||||
She derives key material for encryption and
|
||||
authentication as `kE || kM = KDF(S, 32)`
|
||||
as well as a random initialization vector `iv`.
|
||||
Alice sends the encrypted message `R || iv || c || d` where `c = AES(kE, iv , m)`
|
||||
and `d = MAC(sha256(kM), iv || c)` to Bob.
|
||||
|
||||
For Bob to decrypt the message `R || iv || c || d`,
|
||||
he derives the shared secret `S = Px` where `(Px, Py) = kB * R`
|
||||
as well as the encryption and authentication keys `kE || kM = KDF(S, 32)`.
|
||||
Bob verifies the authenticity of the message
|
||||
by checking whether `d == MAC(sha256(kM), iv || c)`
|
||||
then obtains the plaintext as `m = AES(kE, iv || c)`.
|
||||
|
||||
### Padding
|
||||
|
||||
The `padding` field is used to align data size,
|
||||
since data size alone might reveal important metainformation.
|
||||
Padding can be arbitrary size.
|
||||
However, it is recommended that the size of `data` field
|
||||
(excluding the `iv` and `tag`) before encryption (i.e. plain text)
|
||||
SHOULD be a multiple of 256 bytes.
|
||||
|
||||
### Decoding a message
|
||||
|
||||
In order to decode a message, a node SHOULD try to apply both symmetric and
|
||||
asymmetric decryption operations.
|
||||
This is because the type of encryption is not included in the message.
|
||||
|
||||
## Copyright
|
||||
|
||||
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
|
||||
|
||||
## References
|
||||
|
||||
1. [6/WAKU1](waku/standards/legacy/6/waku1.md)
|
||||
2. [10/WAKU2 spec](waku/standards/core/10/waku2.md)
|
||||
3. [14/WAKU-MESSAGE version 1](waku/standards/core/14/message.md/#version1)
|
||||
4. [7/WAKU-DATA](waku/standards/legacy/7/data.md)
|
||||
5. [EIP-627: Whisper spec](https://eips.ethereum.org/EIPS/eip-627)
|
||||
6. [RLPx Transport Protocol spec (ECIES encryption)](https://github.com/ethereum/devp2p/blob/master/rlpx.md#ecies-encryption)
|
||||
7. [Status 5/SECURE-TRANSPORT](status/deprecated/secure-transport.md)
|
||||
8. [Augmented Backus-Naur form (ABNF)](https://tools.ietf.org/html/rfc5234)
|
||||
9. [Ethereum "Yellow paper": Appendix F Signing transactions](https://ethereum.github.io/yellowpaper/paper.pdf)
|
||||
10. [authenticated encryption](https://en.wikipedia.org/wiki/Authenticated_encryption)
|
||||
---
slug: 26
title: 26/WAKU2-PAYLOAD
name: Waku Message Payload Encryption
status: draft
editor: Oskar Thoren <oskarth@titanproxy.com>
contributors:
---

This specification describes how Waku provides confidentiality, authenticity, and
integrity, as well as some form of unlinkability.
Specifically, it describes how encryption, decryption and
signing work in [6/WAKU1](../../legacy/6/waku1.md) and
in [10/WAKU2 spec](../../core/10/waku2.md) with [14/WAKU-MESSAGE version 1](../../core/14/message.md/#version1).

This specification effectively replaces [7/WAKU-DATA](../../legacy/7/data.md)
as well as [6/WAKU1 Payload encryption](../../legacy/6/waku1.md/#payload-encryption),
but is written in a way that is agnostic and self-contained for Waku v1 and Waku v2.

Large sections of the specification originate from
[EIP-627: Whisper spec](https://eips.ethereum.org/EIPS/eip-627) as well as from
[RLPx Transport Protocol spec (ECIES encryption)](https://github.com/ethereum/devp2p/blob/master/rlpx.md#ecies-encryption)
with some modifications.

## Design requirements

- *Confidentiality*:
The adversary should not be able to learn what data is being sent from one Waku node
to one or several other Waku nodes.
- *Authenticity*:
The adversary should not be able to cause a Waku endpoint
to accept data from any third party as though it came from the other endpoint.
- *Integrity*:
The adversary should not be able to cause a Waku endpoint to
accept data that has been tampered with.

Notably, *forward secrecy* is not provided at this layer.
If this property is desired,
a more fully featured secure communication protocol can be used on top,
such as [Status 5/SECURE-TRANSPORT](https://specs.status.im/spec/5).

The scheme also provides some form of *unlinkability* since:

- only participants who are able to decrypt a message can see its signature
- payloads are padded to a fixed length

## Cryptographic primitives

- AES-256-GCM (for symmetric encryption)
- ECIES
- ECDSA
- KECCAK-256

ECIES uses the following cryptosystem:

- Curve: secp256k1
- KDF: NIST SP 800-56 Concatenation Key Derivation Function, with SHA-256 option
- MAC: HMAC with SHA-256
- AES: AES-128-CTR

## Specification

For [6/WAKU1](../../legacy/6/waku1.md),
the `data` field is used in the `waku envelope`,
and the field MAY contain the encrypted payload.

For [10/WAKU2 spec](../../core/10/waku2.md),
the `payload` field is used in `WakuMessage` and
MAY contain the encrypted payload.

The fields that are concatenated and
encrypted as part of the `data` (Waku v1) / `payload` (Waku v2) field are:

- flags
- payload-length
- payload
- padding
- signature

### ABNF

Using [Augmented Backus-Naur form (ABNF)](https://tools.ietf.org/html/rfc5234)
we have the following format:

```abnf
; 1 byte; first two bits contain the size of payload-length field,
; third bit indicates whether the signature is present.
flags = 1OCTET

; contains the size of payload.
payload-length = 4*OCTET

; byte array of arbitrary size (may be zero).
payload = *OCTET

; byte array of arbitrary size (may be zero).
padding = *OCTET

; 65 bytes, if present.
signature = 65OCTET

data = flags payload-length payload padding [signature]

; This field is called payload in Waku v2
payload = data
```

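As a rough illustration of the layout above, the following pure-Python sketch assembles a `data` field. The byte order of `payload-length`, the bit positions within `flags`, and the omission of padding are simplifying assumptions for illustration; consult a concrete Waku implementation before relying on them.

```python
def encode_data(payload, signature=None):
    """Hypothetical sketch: flags || payload-length || payload [|| signature].

    Assumes the two low bits of flags hold the byte-size of the
    payload-length field and the third bit marks a signature as present.
    Padding is omitted for brevity.
    """
    # Smallest number of bytes that can hold len(payload).
    size_of_len = max(1, (len(payload).bit_length() + 7) // 8)
    assert size_of_len <= 3, "sketch supports payloads up to 2**24 - 1 bytes"
    flags = size_of_len
    if signature is not None:
        assert len(signature) == 65
        flags |= 0b100  # third bit: signature present
    out = bytes([flags])
    out += len(payload).to_bytes(size_of_len, "little")  # byte order assumed
    out += payload
    if signature is not None:
        out += signature
    return out

blob = encode_data(b"hello", signature=b"\x01" * 65)
assert blob[0] & 0b100          # signature bit set
assert blob[0] & 0b11 == 1      # 1-byte payload-length field
assert blob[2:7] == b"hello"
```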
### Signature

Those unable to decrypt the payload/data are also unable to access the signature.
The signature, if provided,
is the ECDSA signature of the Keccak-256 hash of the unencrypted data
using the secret key of the originator identity.
The signature is serialized as the concatenation of the `r`, `s` and `v` parameters
of the SECP-256k1 ECDSA signature, in that order.
`r` and `s` MUST be big-endian encoded, fixed-width 256-bit unsigned integers.
`v` MUST be an 8-bit big-endian encoded,
non-normalized value, and SHOULD be either 27 or 28.

See [Ethereum "Yellow paper": Appendix F Signing transactions](https://ethereum.github.io/yellowpaper/paper.pdf)
for more information on signature generation, parameters and public key recovery.

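The 65-byte `r || s || v` serialization described above can be sketched as:

```python
def serialize_signature(r: int, s: int, v: int) -> bytes:
    """Concatenate r (32 bytes BE) || s (32 bytes BE) || v (1 byte)."""
    assert v in (27, 28)
    return r.to_bytes(32, "big") + s.to_bytes(32, "big") + bytes([v])

sig = serialize_signature(r=1, s=2, v=27)
assert len(sig) == 65
assert sig[31] == 1 and sig[63] == 2 and sig[64] == 27
```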
### Encryption

#### Symmetric

Symmetric encryption uses AES-256-GCM for
[authenticated encryption](https://en.wikipedia.org/wiki/Authenticated_encryption).
The output of encryption is of the form (`ciphertext`, `tag`, `iv`)
where `ciphertext` is the encrypted message,
`tag` is a 16 byte message authentication tag and
`iv` is a 12 byte initialization vector (nonce).
The message authentication `tag` and
initialization vector `iv` fields MUST be appended to the resulting `ciphertext`,
in that order.
Note that previous specifications and
some implementations might refer to `iv` as `nonce` or `salt`.

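The resulting wire layout, `ciphertext || tag || iv`, can be packed and split apart with a stdlib-only sketch (the AES-256-GCM step itself is elided here):

```python
# Layout produced by symmetric encryption: ciphertext || tag (16) || iv (12).
TAG_LEN = 16
IV_LEN = 12

def pack(ciphertext: bytes, tag: bytes, iv: bytes) -> bytes:
    assert len(tag) == TAG_LEN and len(iv) == IV_LEN
    return ciphertext + tag + iv

def unpack(blob: bytes):
    """Split a received blob back into (ciphertext, tag, iv)."""
    iv = blob[-IV_LEN:]
    tag = blob[-(TAG_LEN + IV_LEN):-IV_LEN]
    ciphertext = blob[:-(TAG_LEN + IV_LEN)]
    return ciphertext, tag, iv

blob = pack(b"encrypted-bytes", b"T" * 16, b"N" * 12)
assert unpack(blob) == (b"encrypted-bytes", b"T" * 16, b"N" * 12)
```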
#### Asymmetric

Asymmetric encryption uses the standard Elliptic Curve Integrated Encryption Scheme
(ECIES) with a SECP-256k1 public key.

#### ECIES

This section originates from the [RLPx Transport Protocol spec](https://github.com/ethereum/devp2p/blob/master/rlpx.md#ecies-encryption)
with minor modifications.

The cryptosystem used is:

- The elliptic curve secp256k1 with generator `G`.
- `KDF(k, len)`: the NIST SP 800-56 Concatenation Key Derivation Function.
- `MAC(k, m)`: HMAC using the SHA-256 hash function.
- `AES(k, iv, m)`: the AES-128 encryption function in CTR mode.

Special notation used: `X || Y` denotes concatenation of `X` and `Y`.

Alice wants to send an encrypted message that can be decrypted by
Bob's static private key `kB`.
Alice knows Bob's static public key `KB`.

To encrypt the message `m`, Alice generates a random number `r` and
the corresponding elliptic curve public key `R = r * G`, and
computes the shared secret `S = Px` where `(Px, Py) = r * KB`.
She derives key material for encryption and
authentication as `kE || kM = KDF(S, 32)`,
as well as a random initialization vector `iv`.
Alice sends the encrypted message `R || iv || c || d` where `c = AES(kE, iv, m)`
and `d = MAC(sha256(kM), iv || c)` to Bob.

For Bob to decrypt the message `R || iv || c || d`,
he derives the shared secret `S = Px` where `(Px, Py) = kB * R`,
as well as the encryption and authentication keys `kE || kM = KDF(S, 32)`.
Bob verifies the authenticity of the message
by checking whether `d == MAC(sha256(kM), iv || c)`,
then obtains the plaintext as `m = AES(kE, iv || c)`.

### Padding

The `padding` field is used to align the data size,
since the data size alone might reveal important metainformation.
Padding can be of arbitrary size.
However, it is recommended that the size of the `data` field
(excluding the `iv` and `tag`) before encryption (i.e. plain text)
SHOULD be a multiple of 256 bytes.

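The amount of padding needed to reach the recommended 256-byte alignment can be computed as:

```python
BLOCK = 256  # recommended alignment for the plaintext data field

def pad_length(data_len: int) -> int:
    """Bytes of padding needed so data_len + padding is a multiple of BLOCK."""
    return (BLOCK - data_len % BLOCK) % BLOCK

assert pad_length(256) == 0
assert pad_length(257) == 255
assert pad_length(5) == 251
```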
### Decoding a message

In order to decode a message, a node SHOULD try to apply both symmetric and
asymmetric decryption operations.
This is because the type of encryption is not included in the message.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [6/WAKU1](../../legacy/6/waku1.md)
2. [10/WAKU2 spec](../../core/10/waku2.md)
3. [14/WAKU-MESSAGE version 1](../../core/14/message.md/#version1)
4. [7/WAKU-DATA](../../legacy/7/data.md)
5. [EIP-627: Whisper spec](https://eips.ethereum.org/EIPS/eip-627)
6. [RLPx Transport Protocol spec (ECIES encryption)](https://github.com/ethereum/devp2p/blob/master/rlpx.md#ecies-encryption)
7. [Status 5/SECURE-TRANSPORT](https://specs.status.im/spec/5)
8. [Augmented Backus-Naur form (ABNF)](https://tools.ietf.org/html/rfc5234)
9. [Ethereum "Yellow paper": Appendix F Signing transactions](https://ethereum.github.io/yellowpaper/paper.pdf)
10. [Authenticated encryption](https://en.wikipedia.org/wiki/Authenticated_encryption)

---
slug: 53
title: 53/WAKU2-X3DH
name: X3DH usage for Waku payload encryption
status: draft
category: Standards Track
tags: waku-application
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Andrea Piana <andreap@status.im>
- Pedro Pombeiro <pedro@status.im>
- Corey Petty <corey@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Dean Eigenmann <dean@status.im>
- Filip Dimitrijevic <filip@status.im>
---

## Abstract

This document describes a method that can be used to provide a secure channel
between two peers, and thus provide confidentiality, integrity,
authenticity and forward secrecy.
It is transport-agnostic and works over asynchronous networks.

It builds on the [X3DH](https://signal.org/docs/specifications/x3dh/)
and [Double Ratchet](https://signal.org/docs/specifications/doubleratchet/) specifications,
with some adaptations to operate in a decentralized environment.

## Motivation

Nodes on a network may want to communicate with each other in a secure manner,
without other nodes on the network being able to read their messages.

## Specification

### Definitions

- **Perfect Forward Secrecy** is a feature of specific key-agreement protocols
which provides assurances that session keys will not be compromised
even if the private keys of the participants are compromised.
Specifically, past messages cannot be decrypted by a third party
who manages to obtain those private keys.

- **Secret channel** describes a communication channel
where a Double Ratchet algorithm is in use.

### Design Requirements

- **Confidentiality**:
The adversary should not be able to learn what data is being exchanged
between two Status clients.
- **Authenticity**:
The adversary should not be able to cause either endpoint
to accept data from any third party as though it came from the other endpoint.
- **Forward Secrecy**:
The adversary should not be able to learn what data was exchanged
between two clients if, at some later time,
the adversary compromises one or both of the endpoints.
- **Integrity**:
The adversary should not be able to cause either endpoint
to accept data that has been tampered with.

All of these properties are ensured by the use of [Signal's Double Ratchet](https://signal.org/docs/specifications/doubleratchet/).

### Conventions

Types used in this specification are defined using the
[Protobuf](https://developers.google.com/protocol-buffers/) wire format.

### End-to-End Encryption

End-to-end encryption (E2EE) takes place between two clients.
The main cryptographic protocol is a Double Ratchet protocol,
which is derived from the
[Off-the-Record protocol](https://otr.cypherpunks.ca/Protocol-v3-4.1.1.html),
using a different ratchet.
[The Waku v2 protocol](/waku/standards/core/10/waku2.md)
subsequently encrypts the message payload, using symmetric key encryption.
Furthermore, the concept of prekeys
(through the use of [X3DH](https://signal.org/docs/specifications/x3dh/))
is used to allow the protocol to operate in an asynchronous environment.
It is not necessary for two parties to be online at the same time
to initiate an encrypted conversation.

### Cryptographic Protocols

This protocol uses the following cryptographic primitives:

- X3DH
  - Elliptic curve Diffie-Hellman key exchange (secp256k1)
  - KECCAK-256
  - ECDSA
  - ECIES
- Double Ratchet
  - HMAC-SHA-256 as MAC
  - Elliptic curve Diffie-Hellman key exchange (Curve25519)
  - AES-256-CTR with HMAC-SHA-256 and IV derived alongside an encryption key

The node achieves key derivation using [HKDF](https://www.rfc-editor.org/rfc/rfc5869).

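As an illustration, HKDF can be built from the standard library's HMAC primitive; this is a sketch of the RFC 5869 extract-and-expand construction with SHA-256, using made-up inputs:

```python
import hashlib
import hmac

def hkdf(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 HKDF with SHA-256: extract a PRK, then expand to `length` bytes."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                            # expand
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Illustrative inputs only; real callers feed in the X3DH shared secret.
key = hkdf(b"shared-secret", salt=b"\x00" * 32, info=b"waku-x3dh", length=64)
assert len(key) == 64
```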
### Pre-keys

Every client SHOULD initially generate some key material which is stored locally:

- Identity keypair based on secp256k1 - `IK`
- A signed prekey based on secp256k1 - `SPK`
- A prekey signature - `Sig(IK, Encode(SPK))`

More details can be found in the `X3DH Prekey bundle creation` section of
[2/ACCOUNT](https://specs.status.im/spec/2#x3dh-prekey-bundles).

Prekey bundles MAY be extracted from any peer's messages,
or found via searching for their specific topic, `{IK}-contact-code`.

The following methods can be used to retrieve prekey bundles from a peer's messages:

- contact codes;
- public and one-to-one chats;
- QR codes;
- ENS records;
- decentralized permanent storage (e.g. Swarm, IPFS);
- Waku.

Waku SHOULD be used for retrieving prekey bundles.

Since bundles stored in QR codes or
ENS records cannot be updated to delete already used keys,
the bundle MAY be rotated every 24 hours, and distributed via Waku.

### Flow

The key exchange can be summarized as follows:

1. Initial key exchange: Two parties, Alice and Bob, exchange their prekey bundles,
and derive a shared secret.

2. Double Ratchet:
The two parties use the shared secret to derive a new encryption key
for each message they send.

3. Chain key update: The two parties update their chain keys.
The chain key is used to derive new encryption keys for future messages.

4. Message key derivation:
The two parties derive a new message key from their chain key, and
use it to encrypt a message.

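Steps 2 to 4 can be sketched as one symmetric-key ratchet step, following the Double Ratchet spec's recommended `KDF_CK` (HMAC-SHA-256 of the constants `0x01` and `0x02` under the current chain key); the seed value here is a placeholder:

```python
import hashlib
import hmac

def ratchet_step(chain_key: bytes):
    """One symmetric-key ratchet step: derive a message key and the next chain key."""
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_chain_key, message_key

ck = hashlib.sha256(b"SK-from-x3dh").digest()  # placeholder for the X3DH secret
ck, mk1 = ratchet_step(ck)
ck, mk2 = ratchet_step(ck)
assert mk1 != mk2  # every message gets a fresh key
```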
#### 1. Initial key exchange flow (X3DH)

[Section 3 of the X3DH protocol](https://signal.org/docs/specifications/x3dh/#sending-the-initial-message)
describes the initial key exchange flow, with some additional context:

- The peers' identity keys `IK_A` and `IK_B` correspond to their public keys;
- Since it is not possible to guarantee that a prekey will be used only once
in a decentralized world, the one-time prekey `OPK_B` is not used in this scenario;
- Nodes SHOULD NOT send bundles to a centralized server,
but instead provide them in a decentralized way as described in the [Pre-keys section](#pre-keys).

Alice retrieves Bob's prekey bundle; however, it is not specific to Alice.
It contains:

([reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L12))

**Wire format:**

``` protobuf
// X3DH prekey bundle
message Bundle {
  // Identity key 'IK_B'
  bytes identity = 1;
  // Signed prekey 'SPK_B' for each device, indexed by 'installation-id'
  map<string,SignedPreKey> signed_pre_keys = 2;
  // Prekey signature 'Sig(IK_B, Encode(SPK_B))'
  bytes signature = 4;
  // When the bundle was created locally
  int64 timestamp = 5;
}
```

([reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L5))

``` protobuf
message SignedPreKey {
  bytes signed_pre_key = 1;
  uint32 version = 2;
}
```

The `signature` is generated by sorting the `installation-id`s in lexicographical order,
and concatenating each `installation-id`, `signed-pre-key` and `version`:

`installation-id-1signed-pre-key1version1installation-id2signed-pre-key2-version-2`

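A sketch of assembling the signed byte string, following the example above; the exact encoding of `version` and the inclusion of each `installation-id` are assumptions based on that example:

```python
def signature_payload(signed_pre_keys):
    """Concatenate installation-id || signed_pre_key || version,
    sorted by installation-id lexicographically (illustrative encoding)."""
    out = b""
    for installation_id in sorted(signed_pre_keys):
        spk, version = signed_pre_keys[installation_id]
        out += installation_id.encode() + spk + str(version).encode()
    return out

payload = signature_payload({
    "id2": (b"spk2", 2),
    "id1": (b"spk1", 1),
})
assert payload == b"id1spk11id2spk22"
```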
#### 2. Double Ratchet

Having established the initial shared secret `SK` through X3DH,
it SHOULD be used to seed a Double Ratchet exchange between Alice and Bob.

Refer to the [Double Ratchet spec](https://signal.org/docs/specifications/doubleratchet/)
for more details.

The initial message sent by Alice to Bob is sent as a top-level `ProtocolMessage`
([reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L65))
containing a map of `DirectMessageProtocol` indexed by `installation-id`
([reference wire format](https://github.com/status-im/status-go/blob/1ac9dd974415c3f6dee95145b6644aeadf02f02c/services/shhext/chat/encryption.proto#L56)):

``` protobuf
message ProtocolMessage {
  // The installation id of the sender
  string installation_id = 2;
  // A sequence of bundles
  repeated Bundle bundles = 3;
  // One to one message, encrypted, indexed by installation_id
  map<string,DirectMessageProtocol> direct_message = 101;
  // Public message, not encrypted
  bytes public_message = 102;
}
```

``` protobuf
message EncryptedMessageProtocol {
  X3DHHeader X3DH_header = 1;
  DRHeader DR_header = 2;
  DHHeader DH_header = 101;
  // Encrypted payload:
  // if a bundle is available, contains payload encrypted with the Double Ratchet algorithm;
  // otherwise, payload encrypted with output key of DH exchange (no Perfect Forward Secrecy).
  bytes payload = 3;
}
```

Where:

- `X3DH_header`: the `X3DHHeader` field in `DirectMessageProtocol` contains:

([reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L47))

```protobuf
message X3DHHeader {
  // Alice's ephemeral key `EK_A`
  bytes key = 1;
  // Bob's bundle signed prekey
  bytes id = 4;
}
```

- `DR_header`: Double Ratchet header ([reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L31)).
Used when Bob's public bundle is available:

``` protobuf
message DRHeader {
  // Alice's current ratchet public key
  bytes key = 1;
  // number of the message in the sending chain
  uint32 n = 2;
  // length of the previous sending chain
  uint32 pn = 3;
  // Bob's bundle ID
  bytes id = 4;
}
```

Alice's current ratchet public key (above) is described in
[DR spec section 2.2](https://signal.org/docs/specifications/doubleratchet/#symmetric-key-ratchet).

- `DH_header`: Diffie-Hellman header (used when Bob's bundle is not available):

([reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L42))

``` protobuf
message DHHeader {
  // Alice's compressed ephemeral public key.
  bytes key = 1;
}
```

#### 3. Chain key update

The chain key MUST be updated according to the `DR_Header`
received in the `EncryptedMessageProtocol` message,
described in [2. Double Ratchet](#2-double-ratchet).

#### 4. Message key derivation

The message key MUST be derived from a single ratchet step in the symmetric-key ratchet,
as described in [Symmetric key ratchet](https://signal.org/docs/specifications/doubleratchet/#symmetric-key-ratchet).

The message key MUST be used to encrypt the next message to be sent.

## Security Considerations

1. Inherits the security considerations of [X3DH](https://signal.org/docs/specifications/x3dh/#security-considerations)
and [Double Ratchet](https://signal.org/docs/specifications/doubleratchet/#security-considerations).

2. Inherits the security considerations of the [Waku v2 protocol](/waku/standards/core/10/waku2.md).

3. The protocol is designed to be used in a decentralized manner; however,
it is possible to use a centralized server to serve prekey bundles.
In this case, the server is trusted.

## Privacy Considerations

1. This protocol does not provide message unlinkability.
It is possible to link messages signed by the same keypair.

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [X3DH](https://signal.org/docs/specifications/x3dh/)
- [Double Ratchet](https://signal.org/docs/specifications/doubleratchet/)
- [Protobuf](https://developers.google.com/protocol-buffers/)
- [Off-the-Record protocol](https://otr.cypherpunks.ca/Protocol-v3-4.1.1.html)
- [The Waku v2 protocol](/waku/standards/core/10/waku2.md)
- [HKDF](https://www.rfc-editor.org/rfc/rfc5869)
- [2/ACCOUNT](https://specs.status.im/spec/2#x3dh-prekey-bundles)
- [reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L12)
- [Symmetric key ratchet](https://signal.org/docs/specifications/doubleratchet/#symmetric-key-ratchet)
|
||||
---
|
||||
slug: 53
|
||||
title: 53/WAKU2-X3DH
|
||||
name: X3DH usage for Waku payload encryption
|
||||
status: draft
|
||||
category: Standards Track
|
||||
tags: waku-application
|
||||
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
|
||||
contributors:
|
||||
- Andrea Piana <andreap@status.im>
|
||||
- Pedro Pombeiro <pedro@status.im>
|
||||
- Corey Petty <corey@status.im>
|
||||
- Oskar Thorén <oskarth@titanproxy.com>
|
||||
- Dean Eigenmann <dean@status.im>
|
||||
---
|
||||
|
||||
## Abstract
|
||||
|
||||
This document describes a method that can be used to provide a secure channel
|
||||
between two peers, and thus provide confidentiality, integrity,
|
||||
authenticity and forward secrecy.
|
||||
It is transport-agnostic and works over asynchronous networks.
|
||||
|
||||
It builds on the [X3DH](https://signal.org/docs/specifications/x3dh/)
|
||||
and [Double Ratchet](https://signal.org/docs/specifications/doubleratchet/) specifications,
|
||||
with some adaptations to operate in a decentralized environment.
|
||||
|
||||
## Motivation
|
||||
|
||||
Nodes on a network may want to communicate with each other in a secure manner,
|
||||
without other nodes network being able to read their messages.
|
||||
|
||||
## Specification
|
||||
|
||||
### Definitions
|
||||
|
||||
- **Perfect Forward Secrecy** is a feature of specific key-agreement protocols
|
||||
which provide assurances that session keys will not be compromised
|
||||
even if the private keys of the participants are compromised.
|
||||
Specifically, past messages cannot be decrypted by a third-party
|
||||
who manages to get a hold of a private key.
|
||||
|
||||
- **Secret channel** describes a communication channel
|
||||
where a Double Ratchet algorithm is in use.
|
||||
|
||||
### Design Requirements
|
||||
|
||||
- **Confidentiality**:
|
||||
The adversary should not be able to learn what data is being exchanged
|
||||
between two Status clients.
|
||||
- **Authenticity**:
|
||||
The adversary should not be able to cause either endpoint
|
||||
to accept data from any third party as though it came from the other endpoint.
|
||||
- **Forward Secrecy**:
|
||||
The adversary should not be able to learn what data was exchanged
|
||||
between two clients if, at some later time,
|
||||
the adversary compromises one or both of the endpoints.
|
||||
- **Integrity**:
|
||||
The adversary should not be able to cause either endpoint
|
||||
to accept data that has been tampered with.
|
||||
|
||||
All of these properties are ensured by the use of [Signal's Double Ratchet](https://signal.org/docs/specifications/doubleratchet/)
|
||||
|
||||
### Conventions
|
||||
|
||||
Types used in this specification are defined using the
|
||||
[Protobuf](https://developers.google.com/protocol-buffers/) wire format.
|
||||
|
||||
### End-to-End Encryption
|
||||
|
||||
End-to-end encryption (E2EE) takes place between two clients.
|
||||
The main cryptographic protocol is a Double Ratchet protocol,
|
||||
which is derived from the
|
||||
[Off-the-Record protocol](https://otr.cypherpunks.ca/Protocol-v3-4.1.1.html),
|
||||
using a different ratchet.
|
||||
[The Waku v2 protocol](../../core/10/waku2.md)
|
||||
subsequently encrypts the message payload, using symmetric key encryption.
|
||||
Furthermore, the concept of prekeys
|
||||
(through the use of [X3DH](https://signal.org/docs/specifications/x3dh/))
|
||||
is used to allow the protocol to operate in an asynchronous environment.
|
||||
It is not necessary for two parties to be online at the same time
|
||||
to initiate an encrypted conversation.
|
||||
|
||||
### Cryptographic Protocols
|
||||
|
||||
This protocol uses the following cryptographic primitives:
|
||||
|
||||
- X3DH
|
||||
- Elliptic curve Diffie-Hellman key exchange (secp256k1)
|
||||
- KECCAK-256
|
||||
- ECDSA
|
||||
- ECIES
|
||||
- Double Ratchet
|
||||
- HMAC-SHA-256 as MAC
|
||||
- Elliptic curve Diffie-Hellman key exchange (Curve25519)
|
||||
- AES-256-CTR with HMAC-SHA-256 and IV derived alongside an encryption key
|
||||
|
||||
The node achieves key derivation using [HKDF](https://www.rfc-editor.org/rfc/rfc5869).
|
||||
|
||||
### Pre-keys

Every client SHOULD initially generate some key material which is stored locally:

- Identity keypair based on secp256k1 - `IK`
- A signed prekey based on secp256k1 - `SPK`
- A prekey signature - `Sig(IK, Encode(SPK))`

More details can be found in the `X3DH Prekey bundle creation` section of [2/ACCOUNT](https://specs.status.im/spec/2#x3dh-prekey-bundles).

Prekey bundles MAY be extracted from any peer's messages,
or found via searching for their specific topic, `{IK}-contact-code`.

The following methods can be used to retrieve prekey bundles from a peer's messages:

- contact codes;
- public and one-to-one chats;
- QR codes;
- ENS records;
- decentralized permanent storage (e.g. Swarm, IPFS);
- Waku.

Waku SHOULD be used for retrieving prekey bundles.

Since bundles stored in QR codes or
ENS records cannot be updated to delete already used keys,
the bundle MAY be rotated every 24 hours and distributed via Waku.

### Flow

The key exchange can be summarized as follows:

1. Initial key exchange: Two parties, Alice and Bob, exchange their prekey bundles,
and derive a shared secret.

2. Double Ratchet:
The two parties use the shared secret to derive a new encryption key
for each message they send.

3. Chain key update: The two parties update their chain keys.
The chain key is used to derive new encryption keys for future messages.

4. Message key derivation:
The two parties derive a new message key from their chain key, and
use it to encrypt a message.

#### 1. Initial key exchange flow (X3DH)

[Section 3 of the X3DH protocol](https://signal.org/docs/specifications/x3dh/#sending-the-initial-message)
describes the initial key exchange flow, with some additional context:

- The peers' identity keys `IK_A` and `IK_B` correspond to their public keys;
- Since it is not possible to guarantee that a prekey will be used only once
in a decentralized world, the one-time prekey `OPK_B` is not used in this scenario;
- Nodes SHOULD NOT send bundles to a centralized server,
but instead provide them in a decentralized way as described in the [Pre-keys section](#pre-keys).

Alice retrieves Bob's prekey bundle; note that it is not specific to Alice.
It contains:

([reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L12))

**Wire format:**

``` protobuf
// X3DH prekey bundle
message Bundle {
  // Identity key 'IK_B'
  bytes identity = 1;
  // Signed prekey 'SPK_B' for each device, indexed by 'installation-id'
  map<string,SignedPreKey> signed_pre_keys = 2;
  // Prekey signature 'Sig(IK_B, Encode(SPK_B))'
  bytes signature = 4;
  // When the bundle was created locally
  int64 timestamp = 5;
}
```

([reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L5))

``` protobuf
message SignedPreKey {
  bytes signed_pre_key = 1;
  uint32 version = 2;
}
```

The `signature` is generated by sorting `installation-id` values in lexicographical order,
and concatenating each `signed-pre-key` and `version`:

`installation-id-1signed-pre-key1version1installation-id2signed-pre-key2-version-2`

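The concatenation rule above can be sketched as follows. The exact byte serialization of `version` and the shape of the per-installation data are illustrative assumptions; only the lexicographical ordering of `installation-id` values is taken from the text.

```python
def bundle_signature_material(signed_pre_keys: dict) -> bytes:
    """Build the byte string that is signed for a bundle.

    `signed_pre_keys` maps installation-id -> (signed_pre_key_bytes, version).
    Installation-ids are sorted in lexicographical order, then each
    signed-pre-key and version are concatenated in that order.
    The decimal encoding of `version` here is an assumption.
    """
    parts = []
    for installation_id in sorted(signed_pre_keys):
        spk_bytes, version = signed_pre_keys[installation_id]
        parts.append(installation_id.encode() + spk_bytes + str(version).encode())
    return b"".join(parts)

# "device-1" sorts before "device-2", regardless of insertion order.
material = bundle_signature_material({
    "device-2": (b"\x02" * 33, 1),
    "device-1": (b"\x01" * 33, 1),
})
```

Because the material depends only on the sorted contents, two devices holding the same bundle derive identical bytes to sign and verify.
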
#### 2. Double Ratchet

Having established the initial shared secret `SK` through X3DH,
it SHOULD be used to seed a Double Ratchet exchange between Alice and Bob.

Refer to the [Double Ratchet spec](https://signal.org/docs/specifications/doubleratchet/)
for more details.

The initial message sent by Alice to Bob is sent as a top-level `ProtocolMessage`
([reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L65))
containing a map of `DirectMessageProtocol` indexed by `installation-id`
([reference wire format](https://github.com/status-im/status-go/blob/1ac9dd974415c3f6dee95145b6644aeadf02f02c/services/shhext/chat/encryption.proto#L56)):

``` protobuf
message ProtocolMessage {
  // The installation id of the sender
  string installation_id = 2;
  // A sequence of bundles
  repeated Bundle bundles = 3;
  // One-to-one message, encrypted, indexed by installation_id
  map<string,DirectMessageProtocol> direct_message = 101;
  // Public message, not encrypted
  bytes public_message = 102;
}
```

``` protobuf
message EncryptedMessageProtocol {
  X3DHHeader X3DH_header = 1;
  DRHeader DR_header = 2;
  DHHeader DH_header = 101;
  // Encrypted payload:
  // if a bundle is available, contains the payload encrypted with the Double Ratchet algorithm;
  // otherwise, the payload encrypted with the output key of the DH exchange (no Perfect Forward Secrecy).
  bytes payload = 3;
}
```

Where:

- `X3DH_header`: the `X3DHHeader` field in `DirectMessageProtocol` contains:

([reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L47))

```protobuf
message X3DHHeader {
  // Alice's ephemeral key `EK_A`
  bytes key = 1;
  // Bob's bundle signed prekey
  bytes id = 4;
}
```

- `DR_header`: Double Ratchet header ([reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L31)).
Used when Bob's public bundle is available:

``` protobuf
message DRHeader {
  // Alice's current ratchet public key (as mentioned in [DR spec section 2.2](https://signal.org/docs/specifications/doubleratchet/#symmetric-key-ratchet))
  bytes key = 1;
  // Number of the message in the sending chain
  uint32 n = 2;
  // Length of the previous sending chain
  uint32 pn = 3;
  // Bob's bundle ID
  bytes id = 4;
}
```

- `DH_header`: Diffie-Hellman header (used when Bob's bundle is not available)
([reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L42)):

``` protobuf
message DHHeader {
  // Alice's compressed ephemeral public key.
  bytes key = 1;
}
```

#### 3. Chain key update

The chain key MUST be updated according to the `DR_Header`
received in the `EncryptedMessageProtocol` message,
described in [2. Double Ratchet](#2-double-ratchet).

#### 4. Message key derivation

The message key MUST be derived from a single ratchet step in the symmetric-key ratchet,
as described in [Symmetric-key ratchet](https://signal.org/docs/specifications/doubleratchet/#symmetric-key-ratchet).

The message key MUST be used to encrypt the next message to be sent.

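A single symmetric-key ratchet step can be sketched as below. Signal's Double Ratchet spec recommends HMAC with the chain key as the HMAC key and distinct one-byte constants as input; the specific constants `0x01`/`0x02` follow Signal's recommendation and are an assumption here, not something this document fixes.

```python
import hashlib
import hmac

def symmetric_ratchet_step(chain_key: bytes) -> tuple:
    # One step of the symmetric-key ratchet: derive a message key and
    # the next chain key from the current chain key via HMAC-SHA-256.
    # The constants 0x01 (message key) and 0x02 (next chain key) are
    # Signal's recommended choice, assumed here for illustration.
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

ck0 = b"\x00" * 32
mk1, ck1 = symmetric_ratchet_step(ck0)
mk2, ck2 = symmetric_ratchet_step(ck1)
```

Each step is one-way: compromising `ck1` reveals neither `ck0` nor `mk1`, which is what gives the ratchet its forward secrecy.
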
## Security Considerations

1. Inherits the security considerations of [X3DH](https://signal.org/docs/specifications/x3dh/#security-considerations)
and [Double Ratchet](https://signal.org/docs/specifications/doubleratchet/#security-considerations).

2. Inherits the security considerations of the [Waku v2 protocol](../../core/10/waku2.md).

3. The protocol is designed to be used in a decentralized manner; however,
it is possible to use a centralized server to serve prekey bundles.
In this case, the server is trusted.

## Privacy Considerations

1. This protocol does not provide message unlinkability.
It is possible to link messages signed by the same keypair.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [X3DH](https://signal.org/docs/specifications/x3dh/)
- [Double Ratchet](https://signal.org/docs/specifications/doubleratchet/)
- [Protobuf](https://developers.google.com/protocol-buffers/)
- [Off-the-Record protocol](https://otr.cypherpunks.ca/Protocol-v3-4.1.1.html)
- [The Waku v2 protocol](../../core/10/waku2.md)
- [HKDF](https://www.rfc-editor.org/rfc/rfc5869)
- [2/ACCOUNT](https://specs.status.im/spec/2#x3dh-prekey-bundles)
- [Reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L12)
- [Symmetric-key ratchet](https://signal.org/docs/specifications/doubleratchet/#symmetric-key-ratchet)

---
slug: 54
title: 54/WAKU2-X3DH-SESSIONS
name: Session management for Waku X3DH
status: draft
category: Standards Track
tags: waku-application
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Andrea Piana <andreap@status.im>
- Pedro Pombeiro <pedro@status.im>
- Corey Petty <corey@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Dean Eigenmann <dean@status.im>
- Filip Dimitrijevic <filip@status.im>
---

## Abstract

This document specifies how to manage sessions based on an X3DH key exchange.
This includes how to establish new sessions,
how to re-establish them, how to maintain them, and how to close them.

[53/WAKU2-X3DH](/waku/standards/application/53/x3dh.md) specifies the Waku `X3DH` protocol
for end-to-end encryption.
Once two peers complete an X3DH handshake, they SHOULD establish an X3DH session.

## Session Establishment

A node identifies a peer by their `installation-id`,
which MAY be interpreted as a device identifier.

### Discovery of pre-key bundles

The node's pre-key bundle MUST be broadcast on a content topic
derived from the node's public key, so that the first message may be PFS-encrypted.
Each peer MUST publish their pre-key bundle periodically to this topic,
otherwise they risk not being able to perform key exchanges with other peers.
Each peer MAY publish to this topic when their metadata changes,
so that the other peer can update their local record.

If peer A wants to send a message to peer B,
it MUST derive the topic from peer B's public key, which has been shared out of band.
Partitioned topics are used to balance the privacy and
efficiency of broadcasting pre-key bundles.

The number of partitions that MUST be used is 5000.

The topic MUST be derived as follows:

```go
var partitionsNum *big.Int = big.NewInt(5000)
var partition *big.Int = big.NewInt(0).Mod(peerBPublicKey, partitionsNum)

partitionTopic := "contact-discovery-" + strconv.FormatInt(partition.Int64(), 10)

var hash []byte = keccak256(partitionTopic)
var topicLen int = 4

if len(hash) < topicLen {
    topicLen = len(hash)
}

var contactCodeTopic [4]byte
for i := 0; i < topicLen; i++ {
    contactCodeTopic[i] = hash[i]
}
```

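The arithmetic behind the partition selection can be sketched independently of the hashing step. This is an illustrative sketch only: it maps a public key into one of the 5000 partitions and builds the partition topic name, while the final 4-byte keccak-256 truncation from the snippet above is omitted because the Python standard library has no keccak-256.

```python
PARTITIONS_NUM = 5000

def partition_topic_name(public_key: bytes) -> str:
    # Interpret the peer's public key as a big-endian integer and
    # reduce it modulo the number of partitions, mirroring the
    # big.Int Mod step; then format the discovery topic name.
    partition = int.from_bytes(public_key, "big") % PARTITIONS_NUM
    return "contact-discovery-" + str(partition)

# Hypothetical compressed key bytes, for illustration only.
name = partition_topic_name(bytes.fromhex("04deadbeef"))
```

Every node derives the same partition for a given key, so publishers and subscribers meet on the same topic without coordination.
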
### Initialization

A node initializes a new session once a successful X3DH exchange has taken place.
Subsequent messages will use the established session until re-keying is necessary.

### Negotiated topic to be used for the session

After the peers have performed the initial key exchange,
they MUST derive a topic from their shared secret to send messages on.
To obtain this value, take the first four bytes of the keccak256 hash
of the shared secret encoded in hexadecimal format.

```go
sharedKey, err := ecies.ImportECDSA(myPrivateKey).GenerateShared(
  ecies.ImportECDSAPublic(theirPublicKey),
  16,
  16,
)

hexEncodedKey := hex.EncodeToString(sharedKey)

var hash []byte = keccak256(hexEncodedKey)
var topicLen int = 4

if len(hash) < topicLen {
    topicLen = len(hash)
}

var topic [4]byte
for i := 0; i < topicLen; i++ {
    topic[i] = hash[i]
}
```

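The truncation step can be sketched with the hash function injected, since the Python standard library lacks keccak-256. In the demonstration call, `sha3_256` is only a stand-in for keccak-256 (the two are NOT the same function); the real protocol requires keccak-256.

```python
import hashlib

def negotiated_topic(shared_key: bytes, hash_fn) -> bytes:
    # Hex-encode the shared key, hash the ASCII hex string, and keep
    # the first four bytes (or fewer, if the digest is shorter).
    # `hash_fn` must be keccak-256 in the real protocol; it is passed
    # in here because the stdlib has no keccak-256 implementation.
    digest = hash_fn(shared_key.hex().encode())
    return digest[: min(4, len(digest))]

# Demonstration only: sha3_256 is a stand-in, NOT keccak-256.
topic = negotiated_topic(b"\x01" * 16, lambda b: hashlib.sha3_256(b).digest())
```

Both peers compute the same four bytes from the same shared secret, so the negotiated topic needs no extra round trip.
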
To summarize,
the following is the process for peer A to establish a session with peer B:

1. Peer A listens on peer B's contact code topic to retrieve their bundle information,
including a list of active devices
2. Peer A sends their pre-key bundle on peer B's partitioned topic
3. Peer A and peer B perform the key exchange using the shared pre-key bundles
4. The negotiated topic is derived from the shared secret
5. Peers A and B exchange messages on the negotiated topic

### Concurrent sessions

If a node creates two sessions concurrently between two peers,
the one whose symmetric key comes first in byte order SHOULD be used;
this marks the other as expired.

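The tie-break rule above is a pure byte-order comparison, which can be sketched as:

```python
def select_session(sym_key_a: bytes, sym_key_b: bytes) -> bytes:
    # Of two concurrently created sessions, keep the one whose
    # symmetric key sorts first in byte order; the other session
    # is treated as expired. Python compares bytes lexicographically,
    # which matches byte order.
    return min(sym_key_a, sym_key_b)

kept = select_session(b"\x10" * 32, b"\x0f" * 32)
```

Because both peers apply the same deterministic comparison to the same pair of keys, they converge on the same surviving session without any negotiation.
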
### Re-keying

On receiving a bundle from a given peer with a higher version,
the old bundle SHOULD be marked as expired and
a new session SHOULD be established on the next message sent.

### Multi-device support

Multi-device support is challenging,
as there is no central place where information is stored about which devices
(identified by their respective `installation-id`), and how many, a peer has.

Furthermore, account recovery always needs to be taken into consideration,
where a user wipes the whole device clean and
the node loses all the information about any previous sessions.
Taking these considerations into account,
the network propagates multi-device information using X3DH bundles,
which contain information about paired devices
as well as information about the sending device.
This means that every time a new device is paired,
the bundle needs to be updated and propagated with the new information;
the user is responsible for making sure the pairing is successful.

The method is loosely based on [Signal's Sesame Algorithm](https://signal.org/docs/specifications/sesame/).

### Pairing

A new `installation-id` MUST be generated on a per-device basis.
The device should be paired as soon as possible if other devices are present.

If a bundle is received which has the same `IK` as the keypair present on the device,
the devices MAY be paired.
Once a user enables a new device,
a new bundle MUST be generated which includes pairing information.

The bundle MUST be propagated to contacts through the usual channels.

Removal of paired devices is a manual step that needs to be applied on each device,
and consists simply of disabling the device,
at which point pairing information will no longer be propagated.

### Sending messages to a paired group

When sending a message,
the peer SHOULD send a message to the other `installation-id`s that it has seen.
The node caps the number of devices to `n`, ordered by last activity.
The node sends messages using pairwise encryption, including to its own devices.

Here, `n` is the maximum number of devices that can be paired.

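Capping the target devices to the `n` most recently active can be sketched as below; the mapping from `installation-id` to a last-activity timestamp is an illustrative assumption about how a node might track activity.

```python
def target_installations(seen: dict, n: int) -> list:
    # `seen` maps installation-id -> last-activity timestamp
    # (an assumed bookkeeping structure, for illustration).
    # Keep at most `n` installation-ids, most recently active first.
    ordered = sorted(seen, key=lambda i: seen[i], reverse=True)
    return ordered[:n]

targets = target_installations(
    {"dev-a": 100, "dev-b": 300, "dev-c": 200}, n=2
)
# Most recent two: dev-b, then dev-c.
```

Ordering by last activity biases delivery toward devices that are actually in use, keeping bundle and message fan-out bounded.
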
### Account recovery

Account recovery is the same as adding a new device,
and it MUST be handled the same way.

### Partitioned devices

In some cases
(e.g. account recovery when no other paired device is available, or a device that was not paired),
it is possible that a device will receive a message
that is not targeted to its own `installation-id`.
In this case, an empty message containing bundle information MUST be sent back,
which notifies the other end not to include the device in any further communication.

## Security Considerations

1. Inherits all security considerations from [53/WAKU2-X3DH](/waku/standards/application/53/x3dh.md).

### Recommendations

1. The value of `n` SHOULD be configured by the app-protocol.
   - The default value SHOULD be 3,
   since a larger number of devices will result in a larger bundle size,
   which may not be desirable in a peer-to-peer network.

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [53/WAKU2-X3DH](/waku/standards/application/53/x3dh.md)
- [Signal's Sesame Algorithm](https://signal.org/docs/specifications/sesame/)
# Sequence diagram for Waku v2 (WakuMessage, WakuData, Relay, Store, Filter)
# PNG generated with https://mscgen.js.org
msc {
  hscale="1",
  wordwraparcs=true;

  a [label="A\nrelay\n(0)"],
  b [label="B relay(pubtopic1)\n(0)"],
  c [label="C relay(pubtopic2)\n(0)"],
  d [label="D relay(pubtopic1), store(pubtopic1), filter\n(0)"],
  e [label="E\nrelay, store\n(0)"],
  f [label="F\nrelay, filter\n(0)"];

  a rbox a [label="msg1=WakuMessage(contentTopic1, data) [14/WAKU2-MESSAGE] (1)"];
  a note a [label="If version=1, encrypt data per [7/WAKU-DATA] (1)"];

  f => d [label="FilterRequest(pubtopic1, contentTopic1) [12/WAKU2-FILTER] (2)"];
  d rbox d [label="Subscribe F to filter [12/WAKU2-FILTER] (2)"];

  a => b [label="Publish msg1 on pubtopic1 [11/WAKU2-RELAY] (3)"];
  b => d [label="relay msg1 on pubtopic1 [11/WAKU2-RELAY] (3)"];

  d rbox d [label="store: saves msg1 [13/WAKU2-STORE] (4)"];

  d => f [label="MessagePush(msg1) [12/WAKU2-FILTER] (5)"];

  ---;

  e note e [label="E comes online (6)"];
  e => d [label="HistoryQuery(pubtopic1, contentTopic1) [13/WAKU2-STORE] (6)"];
  d => e [label="HistoryResponse(msg1, ...) [13/WAKU2-STORE] (6)"];

}
---
slug: 11
title: 11/WAKU2-RELAY
name: Waku v2 Relay
status: stable
tags: waku-core
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Oskar Thorén <oskarth@titanproxy.com>
- Sanaz Taheri <sanaz@status.im>
---

`11/WAKU2-RELAY` specifies a [Publish/Subscribe approach](https://docs.libp2p.io/concepts/publish-subscribe/)
to peer-to-peer messaging with a strong focus on privacy,
censorship resistance, security and scalability.
Its current implementation is a minor extension of the
[libp2p GossipSub protocol](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/README.md)
and prescribes gossip-based dissemination.
As such, the scope is limited to defining a separate
[`protocol id`](https://github.com/libp2p/specs/blob/master/connections/README.md#protocol-negotiation)
for `11/WAKU2-RELAY`, establishing privacy and security requirements,
and defining how the underlying GossipSub is to be interpreted and
implemented within the Waku and cryptoeconomic domain.
`11/WAKU2-RELAY` should not be confused with [libp2p circuit relay](https://github.com/libp2p/specs/tree/master/relay).

**Protocol identifier**: `/vac/waku/relay/2.0.0`

## Security Requirements

The `11/WAKU2-RELAY` protocol is designed to provide the following security properties
under a static [Adversarial Model](#adversarial-model).
Note that data confidentiality, integrity, and
authenticity are currently considered out of scope for `11/WAKU2-RELAY` and
must be handled by higher-layer protocols such as [`14/WAKU2-MESSAGE`](../14/message.md).

<!-- May add the definition of the unsupported feature:
Confidentiality indicates that an adversary
should not be able to learn the data carried by the `WakuRelay` protocol.
Integrity indicates that the data transferred by the `WakuRelay` protocol
cannot be tampered with by an adversarial entity without being detected.
Authenticity indicates that no adversary can forge data on behalf of a targeted publisher and
make it accepted by other subscribers as if the origin is the target. -->

- **Publisher-Message Unlinkability**:
  This property indicates that no adversarial entity can link a published `Message`
  to its publisher.
  This feature also implies the unlinkability of the publisher
  to its published topic ID, as the `Message` embodies the topic IDs.

- **Subscriber-Topic Unlinkability**:
  This feature stands for the inability of any adversarial entity
  to link a subscriber to its subscribed topic IDs.

<!-- TODO: more requirements can be added,
but that needs further and deeper investigation-->

### Terminology

_Personally identifiable information_ (PII)
refers to any piece of data that can be used to uniquely identify a user.
For example, the signature verification key
and the hash of one's static IP address are unique for each user and
hence count as PII.

## Adversarial Model

- Any entity running the `11/WAKU2-RELAY` protocol is considered an adversary.
  This includes publishers, subscribers, and all the peers' direct connections.
  Furthermore,
  we consider the adversary as a passive entity that attempts to collect information
  from others to conduct an attack, but
  it does so without violating protocol definitions and instructions.
  For example, under the passive adversarial model,
  no malicious subscriber hides the messages it receives from other subscribers,
  as that is against the description of `11/WAKU2-RELAY`.
  However,
  a malicious subscriber may learn which topics are subscribed to by which peers.
- The following are **not** considered as part of the adversarial model:
  - An adversary with a global view of all the peers and their connections.
  - An adversary that can eavesdrop on communication links between arbitrary pairs
    of peers (unless the adversary is one end of the communication).
    In other words, the communication channels are assumed to be secure.

## Wire Specification
|
||||
|
||||
The [PubSub interface specification](https://github.com/libp2p/specs/blob/master/pubsub/README.md)
|
||||
defines the protobuf RPC messages
|
||||
exchanged between peers participating in a GossipSub network.
|
||||
We republish these messages here for ease of reference and
|
||||
define how `11/WAKU2-RELAY` uses and interprets each field.
|
||||
|
||||
### Protobuf definitions
|
||||
|
||||
The PubSub RPC messages are specified using [protocol buffers v2](https://developers.google.com/protocol-buffers/)
|
||||
|
||||
```protobuf
|
||||
syntax = "proto2";
|
||||
|
||||
message RPC {
|
||||
repeated SubOpts subscriptions = 1;
|
||||
repeated Message publish = 2;
|
||||
|
||||
message SubOpts {
|
||||
optional bool subscribe = 1;
|
||||
optional string topicid = 2;
|
||||
}
|
||||
|
||||
message Message {
|
||||
optional string from = 1;
|
||||
optional bytes data = 2;
|
||||
optional bytes seqno = 3;
|
||||
repeated string topicIDs = 4;
|
||||
optional bytes signature = 5;
|
||||
optional bytes key = 6;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
> **_NOTE:_**
|
||||
The various [control messages](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.0.md#control-messages)
|
||||
defined for GossipSub are used as specified there.
|
||||
> **_NOTE:_**
|
||||
The [`TopicDescriptor`](https://github.com/libp2p/specs/blob/master/pubsub/README.md#the-topic-descriptor)
|
||||
is not currently used by `11/WAKU2-RELAY`.
|
||||
|
||||
### Message fields
|
||||
|
||||
The `Message` protobuf defines the format in which content is relayed between peers.
|
||||
`11/WAKU2-RELAY` specifies the following usage requirements for each field:
|
||||
|
||||
- The `from` field MUST NOT be used, following the [`StrictNoSign` signature policy](#signature-policy).
|
||||
|
||||
- The `data` field MUST be filled out with a `WakuMessage`.
|
||||
See [`14/WAKU2-MESSAGE`](../14/message.md) for more details.
|
||||
|
||||
- The `seqno` field MUST NOT be used, following the [`StrictNoSign` signature policy](#signature-policy).
|
||||
|
||||
- The `topicIDs` field MUST contain the content-topics
|
||||
that a message is being published on.
|
||||
|
||||
- The `signature` field MUST NOT be used,
|
||||
following the [`StrictNoSign` signature policy](#signature-policy).
|
||||
|
||||
- The `key` field MUST NOT be used,
|
||||
following the [`StrictNoSign` signature policy](#signature-policy).
|
||||
|
||||
### SubOpts fields
|
||||
|
||||
The `SubOpts` protobuf defines the format
|
||||
in which subscription options are relayed between peers.
|
||||
A `11/WAKU2-RELAY` node MAY decide to subscribe or
|
||||
unsubscribe from topics by sending updates using `SubOpts`.
|
||||
The following usage requirements apply:
|
||||
|
||||
- The `subscribe` field MUST contain a boolean,
|
||||
where `true` indicates subscribe and `false` indicates unsubscribe to a topic.
|
||||
|
||||
- The `topicid` field MUST contain the pubsub topic.
|
||||
|
||||
> Note: The `topicid` refering to pubsub topic and
|
||||
`topicId` refering to content-topic are detailed in [23/WAKU2-TOPICS](../../../informational/23/topics.md).
|
||||
|
||||
### Signature Policy
|
||||
|
||||
The [`StrictNoSign` option](https://github.com/libp2p/specs/blob/master/pubsub/README.md#signature-policy-options)
|
||||
MUST be used, to ensure that messages are built without the `signature`,
|
||||
`key`, `from` and `seqno` fields.
|
||||
Note that this does not merely imply that these fields be empty, but
|
||||
that they MUST be _absent_ from the marshalled message.
|
||||
|
||||
## Security Analysis
|
||||
|
||||
<!-- TODO: realized that the prime security objective of the `WakuRelay`
|
||||
protocol is to provide peers unlinkability
|
||||
as such this feature is prioritized over other features
|
||||
e.g., unlinkability is preferred over authenticity and integrity.
|
||||
It might be good to motivate unlinkability and
|
||||
its impact on the relay protocol or other protocols invoking relay protocol.-->
|
||||
|
||||
- **Publisher-Message Unlinkability**:
|
||||
To address publisher-message unlinkability,
|
||||
one should remove any PII from the published message.
|
||||
As such, `11/WAKU2-RELAY` follows the `StrictNoSign` policy as described in
|
||||
[libp2p PubSub specs](https://github.com/libp2p/specs/tree/master/pubsub#message-signing).
|
||||
As the result of the `StrictNoSign` policy,
|
||||
`Message`s should be built without the `from`,
|
||||
`signature` and `key` fields since each of these three fields individually
|
||||
counts as PII for the author of the message
|
||||
(one can link the creation of the message with libp2p peerId and
|
||||
thus indirectly with the IP address of the publisher).
|
||||
Note that removing identifiable information from messages
|
||||
cannot lead to perfect unlinkability.
|
||||
The direct connections of a publisher
|
||||
might be able to figure out which `Message`s belong to that publisher
|
||||
by analyzing its traffic.
|
||||
The possibility of such inference may get higher
|
||||
when the `data` field is also not encrypted by the upper-level protocols.
|
||||
<!-- TODO: more investigation on traffic analysis attacks and their success probability-->
|
||||
|
||||
- **Subscriber-Topic Unlinkability:**
|
||||
To preserve subscriber-topic unlinkability,
|
||||
it is recommended by [`10/WAKU2`](../10/waku2.md) to use a single PubSub topic
|
||||
in the `11/WAKU2-RELAY` protocol.
|
||||
This allows an immediate subscriber-topic unlinkability
|
||||
where subscribers are not re-identifiable from their subscribed topic IDs
|
||||
as the entire network is linked to the same topic ID.
|
||||
This level of unlinkability / anonymity
|
||||
is known as [k-anonymity](https://www.privitar.com/blog/k-anonymity-an-introduction/)
|
||||
where k is proportional to the system size
|
||||
(number of participants of Waku relay protocol).
|
||||
However, note that `11/WAKU2-RELAY` supports the use of more than one topic.
|
||||
In case that more than one topic id is utilized,
|
||||
preserving unlinkability is the responsibility of the upper-level protocols
|
||||
which MAY adopt
|
||||
[partitioned topics technique](https://specs.status.im/spec/10#partitioned-topic)
|
||||
to achieve K-anonymity for the subscribed peers.
|
||||
|
||||
## Future work
|
||||
|
||||
- **Economic spam resistance**:
|
||||
In the spam-protected `11/WAKU2-RELAY` protocol,
|
||||
no adversary can flood the system with spam messages
|
||||
(i.e., publishing a large number of messages in a short amount of time).
|
||||
Spam protection is partly provided by GossipSub v1.1 through [scoring mechanism](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#spam-protection-measures).
|
||||
At a high level,
|
||||
peers utilize a scoring function to locally score the behavior of their connections
|
||||
and remove peers with a low score.
|
||||
`11/WAKU2-RELAY` aims at enabling an advanced spam protection mechanism
|
||||
with economic disincentives by utilizing Rate Limiting Nullifiers.
|
||||
In a nutshell,
|
||||
peers must conform to a certain message publishing rate per a system-defined epoch,
|
||||
otherwise, they get financially penalized for exceeding the rate.
|
||||
More details on this new technique can be found in [`17/WAKU2-RLN-RELAY`](../17/rln-relay.md).
|
||||
<!-- TODO havn't checked if all the measures in libp2p GossipSub v1.1
|
||||
are taken in the nim-libp2p as well, may need to audit the code -->
|
||||
|
||||
- Providing **Unlinkability**, **Integrity** and **Authenticity** simultaneously:
|
||||
Integrity and authenticity are typically addressed through digital signatures and
|
||||
Message Authentication Code (MAC) schemes, however,
|
||||
the usage of digital signatures (where each signature is bound to a particular peer)
|
||||
contradicts with the unlinkability requirement
|
||||
(messages signed under a certain signature key are verifiable by a verification key
|
||||
that is bound to a particular publisher).
|
||||
As such, integrity and authenticity are missing features in `11/WAKU2-RELAY`
|
||||
in the interest of unlinkability.
|
||||
In future work, advanced signature schemes like group signatures
|
||||
can be utilized to enable authenticity, integrity, and unlinkability simultaneously.
|
||||
In a group signature scheme, a member of a group can anonymously sign a message
|
||||
on behalf of the group as such the true signer
|
||||
is indistinguishable from other group members.
|
||||
<!-- TODO: shall I add a reference for group signatures?-->
|
||||
|
||||
## Copyright
|
||||
|
||||
Copyright and related rights waived via
|
||||
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).
|
||||
|
||||
## References
|
||||
|
||||
1. [`10/WAKU2`](../10/waku2.md)
|
||||
|
||||
1. [`14/WAKU2-MESSAGE`](../14/message.md)
|
||||
|
||||
1. [`17/WAKU-RLN`](../17/rln-relay.md)
|
||||
|
||||
1. [GossipSub v1.0](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.0.md)
|
||||
|
||||
1. [GossipSub v1.1](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md)
|
||||
|
||||
1. [K-anonimity](https://www.privitar.com/blog/k-anonymity-an-introduction/)
|
||||
|
||||
1. [`libp2p` concepts: Publish/Subscribe](https://docs.libp2p.io/concepts/publish-subscribe/)
|
||||
|
||||
1. [`libp2p` protocol negotiation](https://github.com/libp2p/specs/blob/master/connections/README.md#protocol-negotiation)
|
||||
|
||||
1. [Partitioned topics](https://specs.status.im/spec/10#partitioned-topic)
|
||||
|
||||
1. [Protocol Buffers](https://developers.google.com/protocol-buffers/)
|
||||
|
||||
1. [PubSub interface for libp2p (r2, 2019-02-01)](https://github.com/libp2p/specs/blob/master/pubsub/README.md)
|
||||
|
||||
1. [Waku v1 spec](../6/waku1.md)
|
||||
|
||||
1. [Whisper spec (EIP627)](https://eips.ethereum.org/EIPS/eip-627)
|
||||
---
slug: 11
title: 11/WAKU2-RELAY
name: Waku v2 Relay
status: stable
tags: waku-core
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Oskar Thorén <oskarth@titanproxy.com>
- Sanaz Taheri <sanaz@status.im>
---

`11/WAKU2-RELAY` specifies a [Publish/Subscribe approach](https://docs.libp2p.io/concepts/publish-subscribe/)
to peer-to-peer messaging with a strong focus on privacy,
censorship-resistance, security and scalability.
Its current implementation is a minor extension of the
[libp2p GossipSub protocol](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/README.md)
and prescribes gossip-based dissemination.
As such, the scope is limited to defining a separate
[`protocol id`](https://github.com/libp2p/specs/blob/master/connections/README.md#protocol-negotiation)
for `11/WAKU2-RELAY`, establishing privacy and security requirements,
and defining how the underlying GossipSub is to be interpreted and
implemented within the Waku and cryptoeconomic domain.
`11/WAKU2-RELAY` should not be confused with [libp2p circuit relay](https://github.com/libp2p/specs/tree/master/relay).

**Protocol identifier**: `/vac/waku/relay/2.0.0`

## Security Requirements

The `11/WAKU2-RELAY` protocol is designed to provide the following security properties
under a static [Adversarial Model](#adversarial-model).
Note that data confidentiality, integrity, and
authenticity are currently considered out of scope for `11/WAKU2-RELAY` and
must be handled by higher-layer protocols such as [`14/WAKU2-MESSAGE`](../14/message.md).

<!-- May add the definition of the unsupported feature:
Confidentiality indicates that an adversary
should not be able to learn the data carried by the `WakuRelay` protocol.
Integrity indicates that the data transferred by the `WakuRelay` protocol
cannot be tampered with by an adversarial entity without being detected.
Authenticity: no adversary can forge data on behalf of a targeted publisher and
make it accepted by other subscribers as if the origin is the target. -->

- **Publisher-Message Unlinkability**:
This property indicates that no adversarial entity can link a published `Message`
to its publisher.
It also implies the unlinkability of the publisher
to its published topic IDs, as the `Message` embodies the topic IDs.

- **Subscriber-Topic Unlinkability**:
This property indicates that no adversarial entity can link a subscriber
to its subscribed topic IDs.

<!-- TODO: more requirements can be added,
but that needs further and deeper investigation-->

### Terminology

_Personally identifiable information_ (PII)
refers to any piece of data that can be used to uniquely identify a user.
For example, the signature verification key
and the hash of one's static IP address are unique for each user and
hence count as PII.

## Adversarial Model

- Any entity running the `11/WAKU2-RELAY` protocol is considered an adversary.
This includes publishers, subscribers, and all the peers' direct connections.
Furthermore,
we consider the adversary a passive entity that attempts to collect information
from others to conduct an attack,
but does so without violating protocol definitions and instructions.
For example, under the passive adversarial model,
no malicious subscriber hides the messages it receives from other subscribers,
as that would violate the description of `11/WAKU2-RELAY`.
However,
a malicious subscriber may learn which topics are subscribed to by which peers.
- The following are **not** considered part of the adversarial model:
  - An adversary with a global view of all the peers and their connections.
  - An adversary that can eavesdrop on communication links between arbitrary pairs
  of peers (unless the adversary is one end of the communication).
  In other words, the communication channels are assumed to be secure.

## Wire Specification

The [PubSub interface specification](https://github.com/libp2p/specs/blob/master/pubsub/README.md)
defines the protobuf RPC messages
exchanged between peers participating in a GossipSub network.
We republish these messages here for ease of reference and
define how `11/WAKU2-RELAY` uses and interprets each field.

### Protobuf definitions

The PubSub RPC messages are specified using [protocol buffers v2](https://developers.google.com/protocol-buffers/).

```protobuf
syntax = "proto2";

message RPC {
  repeated SubOpts subscriptions = 1;
  repeated Message publish = 2;

  message SubOpts {
    optional bool subscribe = 1;
    optional string topicid = 2;
  }

  message Message {
    optional string from = 1;
    optional bytes data = 2;
    optional bytes seqno = 3;
    repeated string topicIDs = 4;
    optional bytes signature = 5;
    optional bytes key = 6;
  }
}
```

> **_NOTE:_**
The various [control messages](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.0.md#control-messages)
defined for GossipSub are used as specified there.

> **_NOTE:_**
The [`TopicDescriptor`](https://github.com/libp2p/specs/blob/master/pubsub/README.md#the-topic-descriptor)
is not currently used by `11/WAKU2-RELAY`.

### Message fields

The `Message` protobuf defines the format in which content is relayed between peers.
`11/WAKU2-RELAY` specifies the following usage requirements for each field:

- The `from` field MUST NOT be used,
following the [`StrictNoSign` signature policy](#signature-policy).

- The `data` field MUST be filled out with a `WakuMessage`.
See [`14/WAKU2-MESSAGE`](../14/message.md) for more details.

- The `seqno` field MUST NOT be used,
following the [`StrictNoSign` signature policy](#signature-policy).

- The `topicIDs` field MUST contain the content-topics
that a message is being published on.

- The `signature` field MUST NOT be used,
following the [`StrictNoSign` signature policy](#signature-policy).

- The `key` field MUST NOT be used,
following the [`StrictNoSign` signature policy](#signature-policy).

### SubOpts fields

The `SubOpts` protobuf defines the format
in which subscription options are relayed between peers.
A `11/WAKU2-RELAY` node MAY decide to subscribe to or
unsubscribe from topics by sending updates using `SubOpts`.
The following usage requirements apply:

- The `subscribe` field MUST contain a boolean,
where `true` indicates subscribing to and `false` indicates unsubscribing from a topic.

- The `topicid` field MUST contain the pubsub topic.

> Note: The `topicid` referring to the pubsub topic and
the `topicId` referring to the content-topic are detailed in [23/WAKU2-TOPICS](../../../informational/23/topics.md).

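As a non-normative illustration of the `SubOpts` semantics above, a node's local view of a peer's subscribed topics can be updated as follows (the function name and set representation are ours, not part of this specification):

```python
# Non-normative sketch: applying one received SubOpts entry to a local
# view of a peer's subscribed pubsub topics.

def apply_subopts(subscribed: set, subscribe: bool, topicid: str) -> None:
    """subscribe=True adds the topic; subscribe=False removes it."""
    if subscribe:
        subscribed.add(topicid)
    else:
        # Unsubscribing from a topic that was never subscribed is a no-op.
        subscribed.discard(topicid)

topics: set = set()
apply_subopts(topics, True, "/waku/2/default-waku/proto")
apply_subopts(topics, False, "/waku/2/default-waku/proto")
```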
### Signature Policy

The [`StrictNoSign` option](https://github.com/libp2p/specs/blob/master/pubsub/README.md#signature-policy-options)
MUST be used, to ensure that messages are built without the `signature`,
`key`, `from` and `seqno` fields.
Note that this does not merely imply that these fields be empty, but
that they MUST be _absent_ from the marshalled message.

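A minimal sketch of the check this implies (not a real protobuf validator): here a decoded `Message` is modeled as a dict whose keys are exactly the fields present in the marshalled message.

```python
# Illustrative sketch: under StrictNoSign, the `from`, `seqno`, `signature`
# and `key` fields must be absent from the wire format, not merely empty.

FORBIDDEN_FIELDS = ("from", "seqno", "signature", "key")

def strict_no_sign_ok(message: dict) -> bool:
    """Return True only if no forbidden field is present at all."""
    return not any(field in message for field in FORBIDDEN_FIELDS)

valid = {"data": b"payload", "topicIDs": ["/waku/2/default-waku/proto"]}
# A present-but-empty signature still violates the policy:
invalid = dict(valid, signature=b"")
```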
## Security Analysis

<!-- TODO: realized that the prime security objective of the `WakuRelay`
protocol is to provide peers unlinkability
as such this feature is prioritized over other features
e.g., unlinkability is preferred over authenticity and integrity.
It might be good to motivate unlinkability and
its impact on the relay protocol or other protocols invoking relay protocol.-->

- **Publisher-Message Unlinkability**:
To address publisher-message unlinkability,
one should remove any PII from the published message.
As such, `11/WAKU2-RELAY` follows the `StrictNoSign` policy as described in the
[libp2p PubSub specs](https://github.com/libp2p/specs/tree/master/pubsub#message-signing).
As a result of the `StrictNoSign` policy,
`Message`s should be built without the `from`,
`signature` and `key` fields, since each of these three fields individually
counts as PII for the author of the message
(one can link the creation of the message with the libp2p peerId and
thus indirectly with the IP address of the publisher).
Note that removing identifiable information from messages
cannot lead to perfect unlinkability.
The direct connections of a publisher
might be able to figure out which `Message`s belong to that publisher
by analyzing its traffic.
The possibility of such inference may get higher
when the `data` field is also not encrypted by the upper-level protocols.
<!-- TODO: more investigation on traffic analysis attacks and their success probability-->

- **Subscriber-Topic Unlinkability:**
To preserve subscriber-topic unlinkability,
[`10/WAKU2`](../10/waku2.md) recommends using a single PubSub topic
in the `11/WAKU2-RELAY` protocol.
This allows an immediate form of subscriber-topic unlinkability
where subscribers are not re-identifiable from their subscribed topic IDs,
as the entire network is linked to the same topic ID.
This level of unlinkability / anonymity
is known as [k-anonymity](https://www.privitar.com/blog/k-anonymity-an-introduction/),
where k is proportional to the system size
(the number of participants of the Waku relay protocol).
However, note that `11/WAKU2-RELAY` supports the use of more than one topic.
In case more than one topic ID is utilized,
preserving unlinkability is the responsibility of the upper-level protocols,
which MAY adopt the
[partitioned topics technique](https://specs.status.im/spec/10#partitioned-topic)
to achieve k-anonymity for the subscribed peers.

## Future work

- **Economic spam resistance**:
In the spam-protected `11/WAKU2-RELAY` protocol,
no adversary can flood the system with spam messages
(i.e., publish a large number of messages in a short amount of time).
Spam protection is partly provided by GossipSub v1.1 through its [scoring mechanism](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#spam-protection-measures).
At a high level,
peers utilize a scoring function to locally score the behavior of their connections
and remove peers with a low score.
`11/WAKU2-RELAY` aims to enable an advanced spam protection mechanism
with economic disincentives by utilizing Rate Limiting Nullifiers.
In a nutshell,
peers must conform to a certain message publishing rate per system-defined epoch,
otherwise they get financially penalized for exceeding the rate.
More details on this technique can be found in [`17/WAKU2-RLN-RELAY`](../17/rln-relay.md).
<!-- TODO haven't checked if all the measures in libp2p GossipSub v1.1
are taken in the nim-libp2p as well, may need to audit the code -->

- Providing **Unlinkability**, **Integrity** and **Authenticity** simultaneously:
Integrity and authenticity are typically addressed through digital signatures and
Message Authentication Code (MAC) schemes; however,
the use of digital signatures (where each signature is bound to a particular peer)
contradicts the unlinkability requirement
(messages signed under a certain signature key are verifiable by a verification key
that is bound to a particular publisher).
As such, integrity and authenticity are missing features in `11/WAKU2-RELAY`
in the interest of unlinkability.
In future work, advanced signature schemes like group signatures
can be utilized to enable authenticity, integrity, and unlinkability simultaneously.
In a group signature scheme, a member of a group can anonymously sign a message
on behalf of the group, such that the true signer
is indistinguishable from other group members.
<!-- TODO: shall I add a reference for group signatures?-->

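The per-epoch publishing rule behind the economic spam resistance item can be sketched as follows. This is a toy illustration only: the epoch length and per-epoch limit are hypothetical values chosen for the example, and real RLN enforces the rule cryptographically rather than by trusted counting.

```python
# Toy sketch of per-epoch rate limiting. EPOCH_SECONDS and LIMIT_PER_EPOCH
# are hypothetical illustration values, not parameters from 17/WAKU2-RLN-RELAY.
from collections import Counter

EPOCH_SECONDS = 10   # hypothetical epoch length
LIMIT_PER_EPOCH = 1  # hypothetical per-epoch message allowance

class RateTracker:
    def __init__(self) -> None:
        self._counts: Counter = Counter()

    def on_publish(self, publisher_id: str, timestamp: float) -> bool:
        """Record a publish; return False once the publisher exceeds its
        allowance within the epoch containing `timestamp`."""
        epoch = int(timestamp // EPOCH_SECONDS)
        self._counts[(publisher_id, epoch)] += 1
        return self._counts[(publisher_id, epoch)] <= LIMIT_PER_EPOCH
```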
## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [`10/WAKU2`](../10/waku2.md)

1. [`14/WAKU2-MESSAGE`](../14/message.md)

1. [`17/WAKU2-RLN-RELAY`](../17/rln-relay.md)

1. [GossipSub v1.0](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.0.md)

1. [GossipSub v1.1](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md)

1. [K-anonymity](https://www.privitar.com/blog/k-anonymity-an-introduction/)

1. [`libp2p` concepts: Publish/Subscribe](https://docs.libp2p.io/concepts/publish-subscribe/)

1. [`libp2p` protocol negotiation](https://github.com/libp2p/specs/blob/master/connections/README.md#protocol-negotiation)

1. [Partitioned topics](https://specs.status.im/spec/10#partitioned-topic)

1. [Protocol Buffers](https://developers.google.com/protocol-buffers/)

1. [PubSub interface for libp2p (r2, 2019-02-01)](https://github.com/libp2p/specs/blob/master/pubsub/README.md)

1. [Waku v1 spec](../6/waku1.md)

1. [Whisper spec (EIP627)](https://eips.ethereum.org/EIPS/eip-627)

---
slug: 12
title: 12/WAKU2-FILTER
name: Waku v2 Filter
status: draft
tags: waku-core
version: 01
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Dean Eigenmann <dean@status.im>
- Oskar Thorén <oskar@status.im>
- Sanaz Taheri <sanaz@status.im>
- Ebube Ud <ebube@status.im>
---

previous versions: [00](/waku/standards/core/12/previous-versions/00/filter.md)

**Protocol identifiers**:

- _filter-subscribe_: `/vac/waku/filter-subscribe/2.0.0-beta1`
- _filter-push_: `/vac/waku/filter-push/2.0.0-beta1`

---

## Abstract

This specification describes the `12/WAKU2-FILTER` protocol,
which enables a client to subscribe to a subset of real-time messages from a Waku peer.
It is a more lightweight alternative to [11/WAKU2-RELAY](/waku/standards/core/11/relay.md),
useful for bandwidth-restricted devices.
It is often used by nodes with lower resource limits to subscribe to full Relay nodes and
receive only the subset of messages they desire,
based on content topic interest.

## Motivation

Unlike the [13/WAKU2-STORE](/waku/standards/core/13/store.md) protocol
for historical messages, this protocol allows for native lower-latency scenarios,
such as instant messaging.
It is thus complementary to it.

Strictly speaking, it is not just doing basic request-response, but
performs sender push based on receiver intent.
While this can be seen as a form of light publish/subscribe,
it is only used between two nodes in a direct fashion. Unlike the
Gossip domain, this is suitable for light nodes which put a premium on bandwidth.
No gossiping takes place.

It is worth noting that a light node could get by with only using the
[13/WAKU2-STORE](/waku/standards/core/13/store.md) protocol to
query for a recent time window, provided it is acceptable to do frequent polling.

## Semantics

The key words “MUST”, “MUST NOT”, “REQUIRED”,
“SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and
“OPTIONAL” in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).

### Content filtering

Content filtering is a way to do
[message-based filtering](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern#Message_filtering).
Currently, the only content filter being applied is on `contentTopic`.

### Terminology

The term _personally identifiable information_ (PII)
refers to any piece of data that can be used to uniquely identify a user.
For example, the signature verification key and
the hash of one's static IP address are unique for each user and hence count as PII.

### Protobuf

```protobuf
syntax = "proto3";

// Protocol identifier: /vac/waku/filter-subscribe/2.0.0-beta1
message FilterSubscribeRequest {
  enum FilterSubscribeType {
    SUBSCRIBER_PING = 0;
    SUBSCRIBE = 1;
    UNSUBSCRIBE = 2;
    UNSUBSCRIBE_ALL = 3;
  }

  string request_id = 1;
  FilterSubscribeType filter_subscribe_type = 2;

  // Filter criteria
  optional string pubsub_topic = 10;
  repeated string content_topics = 11;
}

message FilterSubscribeResponse {
  string request_id = 1;
  uint32 status_code = 10;
  optional string status_desc = 11;
}

// Protocol identifier: /vac/waku/filter-push/2.0.0-beta1
message MessagePush {
  WakuMessage waku_message = 1;
  optional string pubsub_topic = 2;
}
```

### Filter-Subscribe

A filter service node MUST support the _filter-subscribe_ protocol
to allow filter clients to subscribe to, modify, refresh and
unsubscribe from a desired set of filter criteria.
The combination of different filter criteria
for a specific filter client node is termed a "subscription".
A filter client is interested in receiving messages matching the filter criteria
in its registered subscriptions.

Since a filter service node consumes resources to provide this service,
it MAY account for usage and adapt its service provision to certain clients.

#### Filter Subscribe Request

A client node MUST send all filter requests in a `FilterSubscribeRequest` message.
This request MUST contain a `request_id`.
The `request_id` MUST be a uniquely generated string.
Each request MUST include a `filter_subscribe_type`, indicating the type of request.

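One way to satisfy the uniquely generated `request_id` requirement (a choice of ours for illustration, not mandated by this specification) is to use a UUID:

```python
# Hypothetical helper: generate a unique request_id string with UUID4.
import uuid

def new_request_id() -> str:
    return str(uuid.uuid4())
```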
#### Filter Subscribe Response

When responding to a `FilterSubscribeRequest`,
a filter service node SHOULD send a `FilterSubscribeResponse`
with a `request_id` matching that of the request.
This response MUST contain a `status_code` indicating whether the request was successful
or not.
Successful status codes are in the `2xx` range.
Client nodes SHOULD consider all other status codes as error codes and
assume that the requested operation has failed.
In addition,
the filter service node MAY choose to provide a more detailed status description
in the `status_desc` field.

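The client-side acceptance rule above can be sketched as follows (the dict representation and helper name are ours; field names follow the protobuf section):

```python
# Non-normative sketch: a FilterSubscribeResponse counts as success only if it
# echoes our request_id and carries a 2xx status_code; all other codes are errors.

def request_succeeded(request_id: str, response: dict) -> bool:
    return (
        response.get("request_id") == request_id
        and 200 <= response.get("status_code", 0) < 300
    )

ok = {"request_id": "r1", "status_code": 200, "status_desc": "OK"}
bad = {"request_id": "r1", "status_code": 404, "status_desc": "NOT_FOUND"}
```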
#### Filter matching

In the description of each request type below,
the term "filter criteria" refers to the combination of a `pubsub_topic` and
a set of `content_topics`.
The request MAY include filter criteria,
conditional on the selected `filter_subscribe_type`.
If the request contains filter criteria,
it MUST contain a `pubsub_topic`
and the `content_topics` set MUST NOT be empty.
A [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md) matches filter criteria
when its `content_topic` is in the `content_topics` set
and it was published on a matching `pubsub_topic`.

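The matching rule can be expressed as a small predicate. The representation below is illustrative only; the topic strings are made-up examples:

```python
# A message matches a subscription's filter criteria when it was published on
# the subscribed pubsub_topic and its content_topic is in the subscribed set.

def matches(pubsub_topic: str, content_topics: set, message: dict) -> bool:
    return (
        message.get("pubsub_topic") == pubsub_topic
        and message.get("content_topic") in content_topics
    )

criteria_topic = "/waku/2/default-waku/proto"
criteria_contents = {"/toy-app/1/chat/proto"}  # hypothetical content topic
msg = {"pubsub_topic": criteria_topic, "content_topic": "/toy-app/1/chat/proto"}
```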
#### Filter Subscribe Types

The filter-subscribe types are defined as follows:

##### SUBSCRIBER_PING

A filter client that sends a `FilterSubscribeRequest` with
`filter_subscribe_type` set to `SUBSCRIBER_PING`
requests that the filter service node indicate if it has any active subscriptions
for this client.
The filter client SHOULD exclude any filter criteria from the request.
The filter service node SHOULD respond with a success `status_code`
if it has any active subscriptions for this client
or an error `status_code` if not.
The filter service node SHOULD ignore any filter criteria in the request.

##### SUBSCRIBE

A filter client that sends a `FilterSubscribeRequest` with
`filter_subscribe_type` set to `SUBSCRIBE`
requests that the filter service node push messages
matching this filter to the client.
The filter client MUST include the desired filter criteria in the request.
A client MAY use this request type to _modify_ an existing subscription
by providing _additional_ filter criteria in a new request.
A client MAY use this request type to _refresh_ an existing subscription
by providing _the same_ filter criteria in a new request.
The filter service node SHOULD respond with a success `status_code`
if it successfully honored this request
or an error `status_code` if not.
The filter service node SHOULD respond with an error `status_code` and
discard the request if the `FilterSubscribeRequest`
does not contain valid filter criteria,
i.e. both a `pubsub_topic` _and_ a non-empty `content_topics` set.

##### UNSUBSCRIBE

A filter client that sends a `FilterSubscribeRequest` with
`filter_subscribe_type` set to `UNSUBSCRIBE`
requests that the service node SHOULD _stop_ pushing messages
matching this filter to the client.
The filter client MUST include the filter criteria
it desires to unsubscribe from in the request.
A client MAY use this request type to _modify_ an existing subscription
by providing _a subset of_ the original filter criteria
to unsubscribe from in a new request.
The filter service node SHOULD respond with a success `status_code`
if it successfully honored this request
or an error `status_code` if not.
The filter service node SHOULD respond with an error `status_code` and
discard the request if the unsubscribe request does not contain valid filter criteria,
i.e. both a `pubsub_topic` _and_ a non-empty `content_topics` set.
##### UNSUBSCRIBE_ALL

A filter client that sends a `FilterSubscribeRequest` with
`filter_subscribe_type` set to `UNSUBSCRIBE_ALL`
requests that the service node SHOULD _stop_ pushing messages
matching _any_ filter to the client.
The filter client SHOULD exclude any filter criteria from the request.
The filter service node SHOULD remove any existing subscriptions for this client.
It SHOULD respond with a success `status_code` if it successfully honored this request
or an error `status_code` if not.
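The per-type validation rules above can be sketched as follows. This is an illustrative, non-normative sketch; the function and constant names are assumptions, not part of the protocol:

```python
# Hypothetical server-side validation of a FilterSubscribeRequest,
# following the rules for each filter_subscribe_type described above.
SUBSCRIBER_PING, SUBSCRIBE, UNSUBSCRIBE, UNSUBSCRIBE_ALL = range(4)

def has_valid_criteria(pubsub_topic, content_topics) -> bool:
    # Valid filter criteria: a pubsub_topic AND a non-empty content_topics set.
    return bool(pubsub_topic) and len(content_topics) > 0

def validate_request(subscribe_type, pubsub_topic=None, content_topics=()) -> bool:
    if subscribe_type in (SUBSCRIBE, UNSUBSCRIBE):
        # These types MUST carry valid filter criteria.
        return has_valid_criteria(pubsub_topic, content_topics)
    # SUBSCRIBER_PING and UNSUBSCRIBE_ALL ignore any criteria supplied.
    return subscribe_type in (SUBSCRIBER_PING, UNSUBSCRIBE_ALL)
```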
### Filter-Push

A filter client node MUST support the _filter-push_ protocol
to allow filter service nodes to push messages
matching registered subscriptions to this client.

A filter service node SHOULD push all messages
matching the filter criteria in a registered subscription
to the subscribed filter client.
These [`WakuMessage`s](/waku/standards/core/14/message.md)
are likely to come from [`11/WAKU2-RELAY`](/waku/standards/core/11/relay.md),
but they MAY originate from other sources or protocols as well.
This is up to the consumer of the protocol.

If a message push fails,
the filter service node MAY consider the client node to be unreachable.
If a specific filter client node is not reachable from the service node
for a period of time,
the filter service node MAY choose to stop pushing messages to the client and
remove its subscription.
This period is up to the service node implementation.
It is RECOMMENDED to set `1 minute` as a reasonable default.
#### Message Push

Each message MUST be pushed in a `MessagePush` message.
Each `MessagePush` MUST contain one (and only one) `waku_message`.
If this message was received on a specific `pubsub_topic`,
it SHOULD be included in the `MessagePush`.
A filter client SHOULD NOT respond to a `MessagePush`.
Since the filter protocol does not include caching or fault-tolerance,
this is a best-effort push service with no bundling
or guaranteed retransmission of messages.
A filter client SHOULD verify that each `MessagePush` it receives
originated from a service node where the client has an active subscription
and that it matches filter criteria belonging to that subscription.
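The client-side acceptance check described above can be sketched as follows. This is a minimal illustration; the data layout (a per-peer list of criteria pairs) is an assumption, not mandated by the spec:

```python
# Illustrative client-side acceptance check for an incoming MessagePush.
# active_subscriptions maps a service node's peer ID to the list of
# (pubsub_topic, content_topics) criteria the client registered with it.
def accept_push(peer_id, pubsub_topic, content_topic, active_subscriptions) -> bool:
    for sub_pubsub, sub_content_topics in active_subscriptions.get(peer_id, []):
        # Accept only pushes that match criteria of an active subscription
        # held with the originating service node.
        if pubsub_topic == sub_pubsub and content_topic in sub_content_topics:
            return True
    return False
```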
### Adversarial Model

Any node running the `WakuFilter` protocol,
i.e., both the subscriber node and
the queried node, is considered an adversary.
Furthermore, we consider the adversary to be a passive entity
that attempts to collect information from other nodes to conduct an attack,
but does so without violating protocol definitions and instructions.
For example, under the passive adversarial model,
no malicious node intentionally hides messages
matching one's subscribed content filter,
as doing so is against the description of the `WakuFilter` protocol.

The following are not considered part of the adversarial model:

- An adversary with a global view of all the nodes and their connections.
- An adversary that can eavesdrop on communication links
between arbitrary pairs of nodes (unless the adversary is one end of the communication).
Specifically, the communication channels are assumed to be secure.
### Security Considerations

Note that while using `WakuFilter` allows light nodes to save bandwidth,
it comes with a privacy cost in the sense that they need to
disclose the topics they are interested in to the full nodes
in order to retrieve the relevant messages.
Currently, anonymous subscription is not supported by `WakuFilter`; however,
potential solutions in this regard are discussed below.
#### Future Work

<!-- Alternative title: Filter-subscriber unlinkability -->
**Anonymous filter subscription**:
This feature guarantees that nodes can anonymously subscribe to a message filter
(i.e., without revealing their exact content filter).
As such, no adversary in the `WakuFilter` protocol
would be able to link nodes to their subscribed content filters.
The current version of the `WakuFilter` protocol does not provide anonymity,
as the subscribing node has a direct connection to the full node and
explicitly submits its content filter to be notified about the matching messages.
However, one can consider preserving anonymity in one of the following ways:

- By hiding the source of the subscription, i.e., anonymous communication.
That is, the subscribing node shall hide all its PII in its filter request,
e.g., its IP address.
This can be achieved through a proxy server or by using Tor.
<!-- TODO: if nodes have to disclose their PeerIDs
(e.g., for authentication purposes)
when connecting to other nodes in the WakuFilter protocol,
then Tor does not preserve anonymity since it only helps in hiding the IP.
So, the PeerId usage in switches must be investigated further.
Depending on how PeerId is used,
one may be able to link between a subscriber and
its content filter despite hiding the IP address-->
Note that the current structure of filter requests,
i.e., `FilterRPC`, does not embody any piece of PII; otherwise,
such data fields must be treated carefully to achieve anonymity.

- By deploying secure 2-party computations (2PC) in which
the subscribing node obtains the messages matching a content filter
whereas the full node learns nothing about the content filter or
the messages pushed to the subscribing node.
Examples of such 2PC protocols are
[Oblivious Transfers](https://link.springer.com/referenceworkentry/10.1007%2F978-1-4419-5906-5_9#:~:text=Oblivious%20transfer%20(OT)%20is%20a,information%20the%20receiver%20actually%20obtains.)
and one-way Private Set Intersections (PSI).
## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [11/WAKU2-RELAY](/waku/standards/core/11/relay.md)
- [message-based filtering](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern#Message_filtering)
- [13/WAKU2-STORE](/waku/standards/core/13/store.md)
- [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md)
- [Oblivious Transfers](https://link.springer.com/referenceworkentry/10.1007%2F978-1-4419-5906-5_9#:~:text=Oblivious%20transfer%20(OT)%20is%20a,information%20the%20receiver%20actually%20obtains)
- 12/WAKU2-FILTER previous version: [00](/waku/standards/core/12/previous-versions/00/filter.md)

### Informative

1. [Message Filtering (Wikipedia)](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern#Message_filtering)
2. [Libp2p PubSub spec - topic validation](https://github.com/libp2p/specs/tree/master/pubsub#topic-validation)
---
slug: 12
title: 12/WAKU2-FILTER
name: Waku v2 Filter
status: draft
tags: waku-core
version: 01
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Dean Eigenmann <dean@status.im>
- Oskar Thorén <oskar@status.im>
- Sanaz Taheri <sanaz@status.im>
- Ebube Ud <ebube@status.im>
---

previous versions: [00](./previous-versions00)

---
`WakuFilter` is a protocol that enables subscribing to messages that a peer receives.
It is a more lightweight version of `WakuRelay`,
specifically designed for bandwidth-restricted devices.
This is because light nodes subscribe to full nodes and
only receive the messages they desire.

## Content filtering

**Protocol identifiers**:

- _filter-subscribe_: `/vac/waku/filter-subscribe/2.0.0-beta1`
- _filter-push_: `/vac/waku/filter-push/2.0.0-beta1`

Content filtering is a way to do [message-based
filtering](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern#Message_filtering).
Currently, the only content filter being applied is on `contentTopic`.
This corresponds to topics in Waku v1.
## Rationale

Unlike the `store` protocol for historical messages, this protocol allows for
native lower-latency scenarios such as instant messaging. It is thus
complementary to it.

Strictly speaking, it is not just doing basic request-response, but performs
sender push based on receiver intent. While this can be seen as a form of light
pub/sub, it is only used between two nodes in a direct fashion. Unlike the
Gossip domain, this is meant for light nodes which put a premium on bandwidth.
No gossiping takes place.

It is worth noting that a light node could get by with only using the `store`
protocol to query for a recent time window, provided it is acceptable to do
frequent polling.
## Design Requirements

The effectiveness and reliability of the content filtering service enabled by
the `WakuFilter` protocol rely on the _high availability_ of the full nodes
as the service providers.
To this end, full nodes must feature _high uptime_
(to persistently listen for and capture the network messages)
as well as _high bandwidth_ (to provide timely message delivery to the light nodes).
## Security Consideration

Note that while using `WakuFilter` allows light nodes to save bandwidth,
it comes with a privacy cost in the sense that they need to
disclose the topics they are interested in to the full nodes
in order to retrieve the relevant messages.
Currently, anonymous subscription is not supported by `WakuFilter`; however,
potential solutions in this regard are sketched
below in the [Future Work](#future-work) section.
### Terminology

The term personally identifiable information (PII)
refers to any piece of data that can be used to uniquely identify a user.
For example, the signature verification key and
the hash of one's static IP address are unique for each user and hence count as PII.
### Protobuf

```protobuf
syntax = "proto3";

// 12/WAKU2-FILTER rfc: https://rfc.vac.dev/spec/12/
package waku.filter.v2;

// Protocol identifier: /vac/waku/filter-subscribe/2.0.0-beta1
message FilterSubscribeRequest {
  enum FilterSubscribeType {
    SUBSCRIBER_PING = 0;
    SUBSCRIBE = 1;
    UNSUBSCRIBE = 2;
    UNSUBSCRIBE_ALL = 3;
  }

  string request_id = 1;
  FilterSubscribeType filter_subscribe_type = 2;

  // Filter criteria
  optional string pubsub_topic = 10;
  repeated string content_topics = 11;
}

message FilterSubscribeResponse {
  string request_id = 1;
  uint32 status_code = 10;
  optional string status_desc = 11;
}

// Protocol identifier: /vac/waku/filter-push/2.0.0-beta1
message MessagePush {
  WakuMessage waku_message = 1;
  optional string pubsub_topic = 2;
}
```
### Filter-Subscribe

A filter service node MUST support the _filter-subscribe_ protocol
to allow filter clients to subscribe to, modify, refresh and
unsubscribe from a desired set of filter criteria.
The combination of different filter criteria
for a specific filter client node is termed a "subscription".
A filter client is interested in receiving messages matching the filter criteria
in its registered subscriptions.

Since a filter service node is consuming resources to provide this service,
it MAY account for usage and adapt its service provision to certain clients.
An incentive mechanism is currently planned but underspecified.
#### Filter Subscribe Request

A client node MUST send all filter requests in a `FilterSubscribeRequest` message.
This request MUST contain a `request_id`.
The `request_id` MUST be a uniquely generated string.
Each request MUST include a `filter_subscribe_type`, indicating the type of request.
#### Filter Subscribe Response

In response to any `FilterSubscribeRequest`,
a filter service node SHOULD respond with a `FilterSubscribeResponse`
with a `request_id` matching that of the request.
This response MUST contain a `status_code` indicating whether the request was successful
or not.
Successful status codes are in the `2xx` range.
Client nodes SHOULD consider all other status codes as error codes and
assume that the requested operation has failed.
In addition,
the filter service node MAY choose to provide a more detailed status description
in the `status_desc` field.
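The client-side interpretation of `status_code` can be sketched as a one-line check (illustrative only, not normative):

```python
def is_success(status_code: int) -> bool:
    # Successful status codes are in the 2xx range; client nodes treat
    # all other codes as errors and assume the operation failed.
    return 200 <= status_code < 300
```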
## Changelog

### Next

- Added initial threat model and security analysis.

### 2.0.0-beta2

Initial draft version. Released [2020-10-28](https://github.com/vacp2p/specs/commit/5ceeb88cee7b918bb58f38e7c4de5d581ff31e68)

- Fix: Ensure `contentFilter` is a repeated field, on implementation
- Change: Add ability to unsubscribe from filters.
Make `subscribe` an explicit boolean indication.
Edit protobuf field order to be consistent with libp2p.

### 2.0.0-beta1

Initial draft version. Released [2020-10-05](https://github.com/vacp2p/specs/commit/31857c7434fa17efc00e3cd648d90448797d107b)

---
slug: 12
title: 12/WAKU2-FILTER
name: Waku v2 Filter
status: draft
tags: waku-core
version: v00
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Dean Eigenmann <dean@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Sanaz Taheri <sanaz@status.im>
- Ebube Ud <ebube@status.im>
---

`WakuFilter` is a protocol that enables subscribing to messages that a peer receives.
It is a more lightweight version of `WakuRelay`,
specifically designed for bandwidth-restricted devices.
This is because light nodes subscribe to full nodes and
only receive the messages they desire.

## Content filtering

**Protocol identifier**: `/vac/waku/filter/2.0.0-beta1`

Content filtering is a way to do [message-based
filtering](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern#Message_filtering).
Currently, the only content filter being applied is on `contentTopic`.
This corresponds to topics in Waku v1.
### Protobuf

```protobuf
message FilterRequest {
  bool subscribe = 1;
  string topic = 2;
  repeated ContentFilter contentFilters = 3;

  message ContentFilter {
    string contentTopic = 1;
  }
}

message MessagePush {
  repeated WakuMessage messages = 1;
}

message FilterRPC {
  string requestId = 1;
  FilterRequest request = 2;
  MessagePush push = 3;
}
```
#### FilterRPC

A node MUST send all filter messages (`FilterRequest`, `MessagePush`)
wrapped inside a `FilterRPC`.
This allows the node handler to determine how to handle a message,
as the Waku Filter protocol is not a request-response based protocol
but instead a push-based system.

The `requestId` MUST be a uniquely generated string.
When a `MessagePush` is sent,
the `requestId` MUST match the `requestId` of the subscribing `FilterRequest`
whose filters matched the message, causing it to be pushed.
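The `requestId` correlation rule above can be sketched as simple bookkeeping on the filter node. This is an illustrative sketch; the registry structure and function names are assumptions:

```python
# Illustrative bookkeeping for the requestId rule: a MessagePush reuses the
# requestId of the FilterRequest whose filters matched the pushed message.
subscriptions = {}  # requestId -> set of contentTopics

def register(request_id: str, content_topics) -> None:
    subscriptions[request_id] = set(content_topics)

def request_id_for(content_topic: str):
    # Find the subscription whose filters match the message being pushed;
    # the resulting MessagePush MUST carry this requestId.
    for request_id, topics in subscriptions.items():
        if content_topic in topics:
            return request_id
    return None
```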

#### FilterRequest

A `FilterRequest` contains an optional topic, zero or more content filters and
a boolean signifying whether to subscribe or unsubscribe to the given filters.
True signifies 'subscribe' and false signifies 'unsubscribe'.

A node that sends the RPC with a filter request and `subscribe` set to 'true'
requests that the filter node SHOULD notify the requesting light node of messages
matching this filter.

A node that sends the RPC with a filter request and `subscribe` set to 'false'
requests that the filter node SHOULD stop notifying the requesting light node
of messages matching this filter if it is currently doing so.

The filter matches when the content filter and, optionally, a topic are matched.
A content filter is matched when a `WakuMessage` `contentTopic` field is the same.

A filter node SHOULD honor this request, though it MAY choose not to do so. If
it chooses not to do so it MAY tell the light node why. The mechanism for doing
this is currently not specified. For notifying the light node a filter node
sends a `MessagePush` message.

Since such a filter node is doing extra work for a light node, it MAY also
account for usage and be selective in how much service it provides. This
mechanism is currently planned but underspecified.
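
The matching rule above can be sketched as follows. This is an illustrative Python model; the field names mirror the protobuf, but the helper itself is hypothetical and not part of the protocol:

```python
def filter_matches(msg_content_topic, msg_pubsub_topic, content_topics, topic=None):
    """Return True when a message matches a FilterRequest.

    A content filter matches when the message's contentTopic equals one of
    the requested content topics; the optional (pubsub) topic, when present,
    must also match.
    """
    if topic is not None and topic != msg_pubsub_topic:
        return False
    return msg_content_topic in content_topics

# A subscription to content topic "/app/1/chat/proto" on topic "waku"
assert filter_matches("/app/1/chat/proto", "waku", {"/app/1/chat/proto"}, "waku")
# Wrong content topic: no match
assert not filter_matches("/app/1/news/proto", "waku", {"/app/1/chat/proto"}, "waku")
# Topic left unset: only the content filter is checked
assert filter_matches("/app/1/chat/proto", "other", {"/app/1/chat/proto"})
```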

#### MessagePush

A filter node that has received a filter request SHOULD push all messages that
match this filter to a light node. These [`WakuMessage`s](../14/message.md)
are likely to come from the
`relay` protocol and be kept at the node, but there MAY be other sources or
protocols where this comes from. This is up to the consumer of the protocol.

A filter node MUST NOT send a push message for messages that have not been
requested via a `FilterRequest`.

If a specific light node isn't connected to a filter node for some specific
period of time (e.g. a TTL), then the filter node MAY choose to not push these
messages to the node. This period is up to the consumer of the protocol and node
implementation, though a reasonable default is one minute.
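
As a sketch of the TTL rule above, a filter node might track each subscriber's last-connected time and skip the push once the subscriber has been away for longer than the TTL. The helper below is hypothetical, assuming times in seconds and the one-minute default from the text:

```python
DEFAULT_TTL_SECONDS = 60  # reasonable default from the spec: one minute

def should_push(now, last_connected, ttl=DEFAULT_TTL_SECONDS):
    """Push only if the light node was connected within the last `ttl` seconds."""
    return (now - last_connected) <= ttl

assert should_push(now=100, last_connected=70)        # 30s ago: push
assert not should_push(now=200, last_connected=100)   # 100s ago: MAY skip
```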

---

## Future Work

<!-- Alternative title: Filter-subscriber unlinkability -->
**Anonymous filter subscription**:
This feature guarantees that nodes can anonymously subscribe to a message filter
(i.e., without revealing their exact content filter).
As such, no adversary in the `WakuFilter` protocol would be able to link nodes
to their subscribed content filters.
The current version of the `WakuFilter` protocol does not provide anonymity,
as the subscribing node has a direct connection to the full node and
explicitly submits its content filter to be notified about the matching messages.
However, one can consider preserving anonymity through one of the following ways:

- By hiding the source of the subscription, i.e., anonymous communication.
That is, the subscribing node shall hide all its PII in its filter request, e.g.,
its IP address.
This can happen by the utilization of a proxy server or by using Tor.
<!-- TODO: if nodes have to disclose their PeerIDs (e.g., for authentication purposes)
when connecting to other nodes in the WakuFilter protocol,
then Tor does not preserve anonymity since it only helps in hiding the IP.
So, the PeerId usage in switches must be investigated further.
Depending on how PeerId is used, one may be able to link between a subscriber and
its content filter despite hiding the IP address-->
Note that the current structure of filter requests, i.e.,
`FilterRPC`, does not embody any piece of PII; otherwise,
such data fields must be treated carefully to achieve anonymity.

- By deploying secure 2-party computations in which the subscribing node obtains
the messages matching a content filter whereas the full node learns nothing
about the content filter as well as the messages pushed to the subscribing node.
Examples of such 2PC protocols are [Oblivious Transfers](https://link.springer.com/referenceworkentry/10.1007%2F978-1-4419-5906-5_9#:~:text=Oblivious%20transfer%20(OT)%20is%20a,information%20the%20receiver%20actually%20obtains.)
and one-way Private Set Intersections (PSI).

## Changelog

### Next

- Added initial threat model and security analysis.

### 2.0.0-beta2

Released [2020-10-28](https://github.com/vacp2p/specs/commit/5ceeb88cee7b918bb58f38e7c4de5d581ff31e68)

- Fix: Ensure contentFilter is a repeated field, on implementation
- Change: Add ability to unsubscribe from filters.
  Make `subscribe` an explicit boolean indication.
  Edit protobuf field order to be consistent with libp2p.

### 2.0.0-beta1

Initial draft version. Released [2020-10-05](https://github.com/vacp2p/specs/commit/31857c7434fa17efc00e3cd648d90448797d107b)

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [Message Filtering (Wikipedia)](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern#Message_filtering)

2. [Libp2p PubSub spec - topic validation](https://github.com/libp2p/specs/tree/master/pubsub#topic-validation)

---
slug: 12
title: 12/WAKU2-FILTER
name: Waku v2 Filter
status: draft
tags: waku-core
version: 00
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Dean Eigenmann <dean@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Sanaz Taheri <sanaz@status.im>
- Ebube Ud <ebube@status.im>
---

`WakuFilter` is a protocol that enables subscribing to messages that a peer receives. This is a more lightweight version of `WakuRelay` specifically designed for bandwidth-restricted devices, since light nodes subscribe to full nodes and only receive the messages they desire.

## Content filtering

**Protocol identifier**: `/vac/waku/filter/2.0.0-beta1`

Content filtering is a way to do [message-based
filtering](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern#Message_filtering).
Currently the only content filter being applied is on `contentTopic`. This
corresponds to topics in Waku v1.

## Rationale

Unlike the `store` protocol for historical messages, this protocol allows for
native lower-latency scenarios such as instant messaging. It is thus
complementary to it.

Strictly speaking, it is not just doing basic request-response, but performs
sender push based on receiver intent. While this can be seen as a form of light
pub/sub, it is only used between two nodes in a direct fashion. Unlike the
Gossip domain, this is meant for light nodes which put a premium on bandwidth.
No gossiping takes place.

It is worth noting that a light node could get by with only using the `store`
protocol to query for a recent time window, provided it is acceptable to do
frequent polling.

## Design Requirements

The effectiveness and reliability of the content filtering service enabled by the `WakuFilter` protocol rely on the *high availability* of the full nodes as the service providers. To this end, full nodes must feature *high uptime* (to persistently listen for and capture the network messages) as well as *high bandwidth* (to provide timely message delivery to the light nodes).

## Security Consideration

Note that while using `WakuFilter` allows light nodes to save bandwidth, it comes with a privacy cost in the sense that they need to disclose the topics they are interested in to the full nodes to retrieve the relevant messages. Currently, anonymous subscription is not supported by `WakuFilter`; however, potential solutions in this regard are sketched below in the [Future Work](#future-work) section.

### Terminology

The term Personally Identifiable Information (PII) refers to any piece of data that can be used to uniquely identify a user. For example, the signature verification key and the hash of one's static IP address are unique for each user and hence count as PII.

## Adversarial Model

Any node running the `WakuFilter` protocol, i.e., both the subscriber node and the queried node, is considered an adversary. Furthermore, we consider the adversary as a passive entity that attempts to collect information from other nodes to conduct an attack, but it does so without violating protocol definitions and instructions. For example, under the passive adversarial model, no malicious node intentionally hides the messages matching one's subscribed content filter, as that is against the description of the `WakuFilter` protocol.

The following are not considered as part of the adversarial model:

- An adversary with a global view of all the nodes and their connections.
- An adversary that can eavesdrop on communication links between arbitrary pairs of nodes (unless the adversary is one end of the communication). Specifically, the communication channels are assumed to be secure.

### Protobuf

```protobuf
message FilterRequest {
  bool subscribe = 1;
  string topic = 2;
  repeated ContentFilter contentFilters = 3;

  message ContentFilter {
    string contentTopic = 1;
  }
}

message MessagePush {
  repeated WakuMessage messages = 1;
}

message FilterRPC {
  string requestId = 1;
  FilterRequest request = 2;
  MessagePush push = 3;
}
```

#### FilterRPC

A node MUST send all Filter messages (`FilterRequest`, `MessagePush`) wrapped inside a
`FilterRPC`. This allows the node handler to determine how to handle a message, as the Waku
Filter protocol is not a request-response protocol but a push-based system.

The `requestId` MUST be a uniquely generated string. When a `MessagePush` is sent,
the `requestId` MUST match the `requestId` of the subscribing `FilterRequest` whose filters
matched the message, causing it to be pushed.

#### FilterRequest

A `FilterRequest` contains an optional topic, zero or more content filters and
a boolean signifying whether to subscribe or unsubscribe to the given filters.
True signifies 'subscribe' and false signifies 'unsubscribe'.

A node that sends the RPC with a filter request and `subscribe` set to 'true'
requests that the filter node SHOULD notify the requesting light node of messages
matching this filter.

A node that sends the RPC with a filter request and `subscribe` set to 'false'
requests that the filter node SHOULD stop notifying the requesting light node
of messages matching this filter if it is currently doing so.

The filter matches when the content filter and, optionally, a topic are matched.
A content filter is matched when a `WakuMessage` `contentTopic` field is the same.

A filter node SHOULD honor this request, though it MAY choose not to do so. If
it chooses not to do so it MAY tell the light node why. The mechanism for doing this
is currently not specified. For notifying the light node a filter node sends a
`MessagePush` message.

Since such a filter node is doing extra work for a light node, it MAY also
account for usage and be selective in how much service it provides. This
mechanism is currently planned but underspecified.

#### MessagePush

A filter node that has received a filter request SHOULD push all messages that
match this filter to a light node. These [`WakuMessage`s](../14/message.md) are likely to come from the
`relay` protocol and be kept at the node, but there MAY be other sources or
protocols where this comes from. This is up to the consumer of the protocol.

A filter node MUST NOT send a push message for messages that have not been
requested via a `FilterRequest`.

If a specific light node isn't connected to a filter node for some specific
period of time (e.g. a TTL), then the filter node MAY choose to not push these
messages to the node. This period is up to the consumer of the protocol and node
implementation, though a reasonable default is one minute.

---

## Future Work

<!-- Alternative title: Filter-subscriber unlinkability -->
**Anonymous filter subscription**: This feature guarantees that nodes can anonymously subscribe to a message filter (i.e., without revealing their exact content filter). As such, no adversary in the `WakuFilter` protocol would be able to link nodes to their subscribed content filters. The current version of the `WakuFilter` protocol does not provide anonymity, as the subscribing node has a direct connection to the full node and explicitly submits its content filter to be notified about the matching messages. However, one can consider preserving anonymity through one of the following ways:

- By hiding the source of the subscription, i.e., anonymous communication. That is, the subscribing node shall hide all its PII in its filter request, e.g., its IP address. This can happen by the utilization of a proxy server or by using Tor.<!-- TODO: if nodes have to disclose their PeerIDs (e.g., for authentication purposes) when connecting to other nodes in the WakuFilter protocol, then Tor does not preserve anonymity since it only helps in hiding the IP. So, the PeerId usage in switches must be investigated further. Depending on how PeerId is used, one may be able to link between a subscriber and its content filter despite hiding the IP address-->
Note that the current structure of filter requests, i.e., `FilterRPC`, does not embody any piece of PII; otherwise, such data fields must be treated carefully to achieve anonymity.
- By deploying secure 2-party computations in which the subscribing node obtains the messages matching a content filter whereas the full node learns nothing about the content filter as well as the messages pushed to the subscribing node. Examples of such 2PC protocols are [Oblivious Transfers](https://link.springer.com/referenceworkentry/10.1007%2F978-1-4419-5906-5_9#:~:text=Oblivious%20transfer%20(OT)%20is%20a,information%20the%20receiver%20actually%20obtains.) and one-way Private Set Intersections (PSI).

## Changelog

### Next

- Added initial threat model and security analysis.

### 2.0.0-beta2

Released [2020-10-28](https://github.com/vacp2p/specs/commit/5ceeb88cee7b918bb58f38e7c4de5d581ff31e68)

- Fix: Ensure contentFilter is a repeated field, on implementation
- Change: Add ability to unsubscribe from filters. Make `subscribe` an explicit boolean indication. Edit protobuf field order to be consistent with libp2p.

### 2.0.0-beta1

Initial draft version. Released [2020-10-05](https://github.com/vacp2p/specs/commit/31857c7434fa17efc00e3cd648d90448797d107b)

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [Message Filtering (Wikipedia)](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern#Message_filtering)

2. [Libp2p PubSub spec - topic validation](https://github.com/libp2p/specs/tree/master/pubsub#topic-validation)

@@ -1,359 +0,0 @@

---
slug: 13
title: 13/WAKU2-STORE
name: Waku v2 Store
status: draft
tags: waku-core
version: 00
editor: Simon-Pierre Vivier <simvivier@status.im>
contributors:
- Dean Eigenmann <dean@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Aaryamann Challani <p1ge0nh8er@proton.me>
- Sanaz Taheri <sanaz@status.im>
- Hanno Cornelius <hanno@status.im>
---

## Abstract

This specification explains the `13/WAKU2-STORE` protocol,
which enables querying of messages received through the relay protocol and
stored by other nodes.
It also supports pagination for more efficient querying of historical messages.

**Protocol identifier**: `/vac/waku/store/2.0.0-beta4`

## Terminology

The term PII, Personally Identifiable Information,
refers to any piece of data that can be used to uniquely identify a user.
For example, the signature verification key and
the hash of one's static IP address are unique for each user and hence count as PII.

## Design Requirements

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and
“OPTIONAL” in this document are to be interpreted as described in [RFC2119](https://www.ietf.org/rfc/rfc2119.txt).

Nodes willing to provide the storage service using the `13/WAKU2-STORE` protocol
SHOULD provide a complete and full view of message history.
As such, they are required to be *highly available* and
specifically have a *high uptime* to consistently receive and store network messages.
The high uptime requirement ensures that no message is missed,
hence a complete and intact view of the message history
is delivered to the querying nodes.
Nevertheless, in case storage provider nodes cannot afford high availability,
the querying nodes may retrieve the historical messages from multiple sources
to achieve a full and intact view of the past.

The concept of `ephemeral` messages introduced in
[`14/WAKU2-MESSAGE`](../14/message.md) affects `13/WAKU2-STORE` as well.
Nodes running `13/WAKU2-STORE` SHOULD support `ephemeral` messages as specified in
[14/WAKU2-MESSAGE](../14/message.md).
Nodes running `13/WAKU2-STORE` SHOULD NOT store messages
with the `ephemeral` flag set to `true`.

## Adversarial Model

Any peer running the `13/WAKU2-STORE` protocol, i.e.
both the querying node and the queried node, is considered an adversary.
Furthermore,
we currently consider the adversary as a passive entity
that attempts to collect information from other peers to conduct an attack, but
it does so without violating protocol definitions and instructions.
As we evolve the protocol,
further adversarial models will be considered.
For example, under the passive adversarial model,
no malicious node hides or
lies about the history of messages,
as that is against the description of the `13/WAKU2-STORE` protocol.

The following are not considered as part of the adversarial model:

- An adversary with a global view of all the peers and their connections.
- An adversary that can eavesdrop on communication links
between arbitrary pairs of peers (unless the adversary is one end of the communication).
Specifically, the communication channels are assumed to be secure.

## Wire Specification

Peers communicate with each other using a request / response API.
The messages sent are Protobuf RPC messages which are implemented using
[protocol buffers v3](https://developers.google.com/protocol-buffers/).
The following are the specifications of the Protobuf messages.

### Payloads

```protobuf
syntax = "proto3";

message Index {
  bytes digest = 1;
  sint64 receiverTime = 2;
  sint64 senderTime = 3;
  string pubsubTopic = 4;
}

message PagingInfo {
  uint64 pageSize = 1;
  Index cursor = 2;
  enum Direction {
    BACKWARD = 0;
    FORWARD = 1;
  }
  Direction direction = 3;
}

message ContentFilter {
  string contentTopic = 1;
}

message HistoryQuery {
  // the first field is reserved for future use
  string pubsubtopic = 2;
  repeated ContentFilter contentFilters = 3;
  PagingInfo pagingInfo = 4;
}

message HistoryResponse {
  // the first field is reserved for future use
  repeated WakuMessage messages = 2;
  PagingInfo pagingInfo = 3;
  enum Error {
    NONE = 0;
    INVALID_CURSOR = 1;
  }
  Error error = 4;
}

message HistoryRPC {
  string request_id = 1;
  HistoryQuery query = 2;
  HistoryResponse response = 3;
}
```

#### Index

To perform pagination,
each `WakuMessage` stored at a node running the `13/WAKU2-STORE` protocol
is associated with a unique `Index` that encapsulates the following parts.

- `digest`: a sequence of bytes representing the SHA256 hash of a `WakuMessage`.
The hash is computed over the concatenation of the `contentTopic`
and `payload` fields of a `WakuMessage` (see [14/WAKU2-MESSAGE](../14/message.md)).
- `receiverTime`: the UNIX time in nanoseconds
at which the `WakuMessage` is received by the receiving node.
- `senderTime`: the UNIX time in nanoseconds
at which the `WakuMessage` is generated by its sender.
- `pubsubTopic`: the pubsub topic on which the `WakuMessage` is received.
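
The `digest` computation described above can be sketched as follows; this is an illustrative Python sketch, and the exact byte-level encoding of the two fields before concatenation is up to the implementation:

```python
import hashlib

def message_digest(content_topic: bytes, payload: bytes) -> bytes:
    """SHA256 over the concatenation of contentTopic and payload."""
    return hashlib.sha256(content_topic + payload).digest()

d = message_digest(b"/app/1/chat/proto", b"hello")
assert len(d) == 32  # SHA256 produces a 32-byte digest
```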
#### PagingInfo
|
||||
|
||||
`PagingInfo` holds the information required for pagination.
|
||||
It consists of the following components.
|
||||
|
||||
- `pageSize`: A positive integer indicating the number of queried `WakuMessage`s
|
||||
in a `HistoryQuery`
|
||||
(or retrieved `WakuMessage`s in a `HistoryResponse`).
|
||||
- `cursor`: holds the `Index` of a `WakuMessage`.
|
||||
- `direction`: indicates the direction of paging
|
||||
which can be either `FORWARD` or `BACKWARD`.
|
||||
|
||||
#### ContentFilter
|
||||
|
||||
`ContentFilter` carries the information required for filtering historical messages.
|
||||
|
||||
- `contentTopic` represents the content topic of the queried historical `WakuMessage`.
|
||||
This field maps to the `contentTopic` field of the [14/WAKU2-MESSAGE](../14/message.md).
|
||||
|
||||

#### HistoryQuery

RPC call to query historical messages.

- The `pubsubTopic` field MUST indicate the pubsub topic
of the historical messages to be retrieved.
This field denotes the pubsub topic on which `WakuMessage`s are published.
This field maps to the `topicIDs` field of `Message` in [`11/WAKU2-RELAY`](../11/relay.md).
Leaving this field empty means no filter on the pubsub topic
of message history is requested.
This field SHOULD be left empty in order to retrieve the historical `WakuMessage`s
regardless of the pubsub topics on which they are published.
- The `contentFilters` field MUST indicate the list of content filters
based on which the historical messages are to be retrieved.
Leaving this field empty means no filter on the content topic
of message history is required.
This field SHOULD be left empty in order
to retrieve historical `WakuMessage`s regardless of their content topics.
- `PagingInfo` holds the information required for pagination.
Its `pageSize` field indicates the number of `WakuMessage`s
to be included in the corresponding `HistoryResponse`.
It is RECOMMENDED that the queried node defines a maximum page size internally.
If the querying node leaves the `pageSize` unspecified,
or if the `pageSize` exceeds the maximum page size,
the queried node SHOULD auto-paginate the `HistoryResponse`
to no more than the configured maximum page size.
This allows mitigation of long response times for a `HistoryQuery`.
In the forward pagination request,
the `messages` field of the `HistoryResponse` SHALL contain, at maximum,
the `pageSize` amount of `WakuMessage`s whose `Index`
values are larger than the given `cursor`
(and vice versa for the backward pagination).
Note that the `cursor` of a `HistoryQuery` MAY be empty
(e.g., for the initial query); as such, and
depending on whether the `direction` is `BACKWARD` or
`FORWARD`, the last or the first `pageSize` `WakuMessage`s SHALL be returned,
respectively.
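
The forward/backward paging rule above can be sketched as follows. This is an illustrative Python simplification over a store already sorted by `Index`, where the cursor is modeled as a position in that ordering rather than a full `Index`; the helper name is hypothetical:

```python
def get_page(sorted_messages, cursor_index, page_size, forward=True):
    """Return up to page_size messages adjacent to the cursor.

    sorted_messages: list ordered by Index (oldest to newest).
    cursor_index: position of the cursor message, or None for the initial query.
    """
    if forward:
        # forward: messages whose Index is larger than the cursor
        start = 0 if cursor_index is None else cursor_index + 1
        return sorted_messages[start:start + page_size]
    # backward: messages whose Index is smaller than the cursor
    end = len(sorted_messages) if cursor_index is None else cursor_index
    return sorted_messages[max(0, end - page_size):end]

msgs = ["m1", "m2", "m3", "m4", "m5"]
assert get_page(msgs, None, 2, forward=True) == ["m1", "m2"]   # first page
assert get_page(msgs, 1, 2, forward=True) == ["m3", "m4"]      # after cursor m2
assert get_page(msgs, None, 2, forward=False) == ["m4", "m5"]  # last page
```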

#### Sorting Messages

The queried node MUST sort the `WakuMessage`s based on their `Index`,
where the `senderTime` constitutes the most significant part and
the `digest` comes next, and
then perform pagination on the sorted result.
As such, the retrieved page contains an ordered list of `WakuMessage`s
from the oldest message to the most recent one.
Alternatively, the `receiverTime` (instead of `senderTime`)
MAY be used to sort messages during the paging process.
However, the use of `senderTime`
for sorting is RECOMMENDED, as it is invariant and
consistent across all the nodes.
This has the benefit of `cursor` reusability, i.e.,
a `cursor` obtained from one node can be consistently used
to query from another node.
However, this `cursor` reusability does not hold when the `receiverTime` is utilized,
as the receiver time is affected by the network delay and
nodes' clock asynchrony.
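
The ordering rule above, `senderTime` first with `digest` as the tie-breaker, can be sketched as follows (illustrative Python; the dict-based `Index` representation is hypothetical):

```python
def index_sort_key(index):
    """senderTime is the most significant part; digest breaks ties."""
    return (index["senderTime"], index["digest"])

indices = [
    {"senderTime": 2, "digest": b"\x01"},
    {"senderTime": 1, "digest": b"\xff"},
    {"senderTime": 1, "digest": b"\x00"},
]
ordered = sorted(indices, key=index_sort_key)
assert [i["senderTime"] for i in ordered] == [1, 1, 2]
assert ordered[0]["digest"] == b"\x00"  # same senderTime: digest decides
```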
#### HistoryResponse
|
||||
|
||||
RPC call to respond to a HistoryQuery call.
|
||||
|
||||
- The `messages` field MUST contain the messages found,
|
||||
these are [14/WAKU2-MESSAGE](../14/message.md) types.
|
||||
- `PagingInfo` holds the paging information based
|
||||
on which the querying node can resume its further history queries.
|
||||
The `pageSize` indicates the number of returned Waku messages
|
||||
(i.e., the number of messages included in the `messages` field of `HistoryResponse`).
|
||||
The `direction` is the same direction as in the corresponding `HistoryQuery`.
|
||||
In the forward pagination, the `cursor` holds the `Index` of the last message
|
||||
in the `HistoryResponse` `messages` (and the first message in the backward paging).
|
||||
Regardless of the paging direction,
|
||||
the retrieved `messages` are always sorted in ascending order
|
||||
based on their timestamp as explained in the [sorting messages](#sorting-messages)section,
|
||||
that is, from the oldest to the most recent.
|
||||
The requester SHALL embed the returned `cursor` inside its next `HistoryQuery`
|
||||
to retrieve the next page of the [14/WAKU2-MESSAGE](../14/message.md).
|
||||
The `cursor` obtained from one node SHOULD NOT be used in a request to another node
|
||||
because the result may be different.
|
||||
- The `error` field contains information about any error that has occurred
|
||||
while processing the corresponding `HistoryQuery`.
|
||||
`NONE` stands for no error.
|
||||
This is also the default value.
|
||||
`INVALID_CURSOR` means that the `cursor` field of `HistoryQuery`
|
||||
does not match with the `Index` of any of the `WakuMessage`
|
||||
persisted by the queried node.
|
||||
|
||||

## Security Consideration

The main security consideration to take into account
while using this protocol is that querying nodes
have to reveal their content filters of interest to the queried node,
hence potentially compromising their privacy.

## Future Work

- **Anonymous query**: This feature guarantees that nodes
can anonymously query historical messages from other nodes, i.e.,
without disclosing the exact topics of the [14/WAKU2-MESSAGE](../14/message.md)s
they are interested in.
As such, no adversary in the `13/WAKU2-STORE` protocol
would be able to learn which peer is interested in which content filters, i.e.,
content topics of [14/WAKU2-MESSAGE](../14/message.md).
The current version of the `13/WAKU2-STORE` protocol does not provide anonymity
for historical queries,
as the querying node needs to directly connect to another node
in the `13/WAKU2-STORE` protocol and
explicitly disclose the content filters of its interest
to retrieve the corresponding messages.
However, one can consider preserving anonymity through one of the following ways:
  - By hiding the source of the request, i.e., anonymous communication.
That is, the querying node shall hide all its PII in its history request,
e.g., its IP address.
This can happen by the utilization of a proxy server or by using Tor.
Note that the current structure of historical requests
does not embody any piece of PII; otherwise,
such data fields must be treated carefully to achieve query anonymity.
<!-- TODO: if nodes have to disclose their PeerIDs
(e.g., for authentication purposes) when connecting to other nodes
in the store protocol,
then Tor does not preserve anonymity since it only helps in hiding the IP.
So, the PeerId usage in switches must be investigated further.
Depending on how PeerId is used, one may be able to link between a querying node
and its queried topics despite hiding the IP address-->
  - By deploying secure 2-party computations in which the querying node
obtains the historical messages of a certain topic,
while the queried node learns nothing about the query.
Examples of such 2PC protocols are secure one-way Private Set Intersections (PSI).
<!-- TODO: add a reference for PSIs? -->
<!-- TODO: more techniques to be included -->
<!-- TODO: Censorship resistant:
this is about a node that hides the historical messages from other nodes.
This attack is not included in the specs
since it does not fit the passive adversarial model
(the attacker needs to deviate from the store protocol).-->
- **Robust and verifiable timestamps**:
|
||||
Messages timestamp is a way to show that the message existed
|
||||
prior to some point in time.
|
||||
However, the lack of timestamp verifiability can create room for a range of attacks,
|
||||
including injecting messages with invalid timestamps pointing to the far future.
|
||||
To better understand the attack,
|
||||
consider a store node whose current clock shows `2021-01-01 00:00:30`
|
||||
(and assume all the other nodes have a synchronized clocks +-20seconds).
|
||||
The store node already has a list of messages,
|
||||
`(m1,2021-01-01 00:00:00), (m2,2021-01-01 00:00:01), ..., (m10:2021-01-01 00:00:20)`,
|
||||
that are sorted based on their timestamp.
|
||||
An attacker sends a message with an arbitrary large timestamp e.g.,
|
||||
10 hours ahead of the correct clock `(m',2021-01-01 10:00:30)`.
|
||||
The store node places `m'` at the end of the list,
|
||||
|
||||
```text
|
||||
(m1,2021-01-01 00:00:00), (m2,2021-01-01 00:00:01), ..., (m10:2021-01-01 00:00:20),(m',2021-01-01 10:00:30).
|
||||
```
|
||||
|
||||
Now another message arrives with a valid timestamp e.g.,
|
||||
`(m11, 2021-01-01 00:00:45)`.
|
||||
However, since its timestamp precedes the malicious message `m'`,
|
||||
it gets placed before `m'` in the list i.e.,
|
||||
|
||||
```text
|
||||
(m1,2021-01-01 00:00:00), (m2,2021-01-01 00:00:01), ..., (m10:2021-01-01 00:00:20), (m11, 2021-01-01 00:00:45), (m',2021-01-01 10:00:30).
|
||||
```
|
||||
|
||||
In fact, for the next 10 hours,
|
||||
`m'` will always be considered as the most recent message and
|
||||
served as the last message to the querying nodes irrespective
|
||||
of how many other messages arrive afterward.
|
||||
|
||||
A robust and verifiable timestamp allows the receiver of a message
|
||||
to verify that a message has been generated prior to the claimed timestamp.
|
||||
One solution is the use of [open timestamps](https://opentimestamps.org/) e.g.,
|
||||
block height in Blockchain-based timestamps.
|
||||
That is, messages contain the most recent block height
|
||||
perceived by their senders at the time of message generation.
|
||||
This proves accuracy within a range of minutes (e.g., in Bitcoin blockchain) or
|
||||
seconds (e.g., in Ethereum 2.0) from the time of origination.

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [14/WAKU2-MESSAGE](../14/message.md)
2. [protocol buffers v3](https://developers.google.com/protocol-buffers/)
3. [11/WAKU2-RELAY](../11/relay.md)
4. [Open timestamps](https://opentimestamps.org/)
---
slug: 13
title: 13/WAKU2-STORE
name: Waku Store Query
status: draft
tags: waku-core
version: 01
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Dean Eigenmann <dean@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Aaryamann Challani <p1ge0nh8er@proton.me>
- Sanaz Taheri <sanaz@status.im>
---

Previous version: [00](/waku/standards/core/13/previous-versions/00/store.md)

## Abstract

This specification explains the `WAKU2-STORE` protocol,
which enables querying of [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md)s.

**Protocol identifier**: `/vac/waku/store-query/3.0.0`

### Terminology

The term PII, Personally Identifiable Information,
refers to any piece of data that can be used to uniquely identify a user.
For example, the signature verification key and
the hash of one's static IP address are unique for each user and hence count as PII.

## Wire Specification

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and
“OPTIONAL” in this document are to be interpreted as described in [RFC2119](https://www.ietf.org/rfc/rfc2119.txt).

### Design Requirements

The concept of `ephemeral` messages introduced in [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md) affects `WAKU2-STORE` as well.
Nodes running `WAKU2-STORE` SHOULD support `ephemeral` messages as specified in [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md).
Nodes running `WAKU2-STORE` SHOULD NOT store messages with the `ephemeral` flag set to `true`.

### Payloads

```protobuf
syntax = "proto3";

// Protocol identifier: /vac/waku/store-query/3.0.0
package waku.store.v3;

import "waku/message/v1/message.proto";

message WakuMessageKeyValue {
  optional bytes message_hash = 1; // Globally unique key for a Waku Message

  // Full message content and associated pubsub_topic as value
  optional waku.message.v1.WakuMessage message = 2;
  optional string pubsub_topic = 3;
}

message StoreQueryRequest {
  string request_id = 1;
  bool include_data = 2; // Response should include full message content

  // Filter criteria for content-filtered queries
  optional string pubsub_topic = 10;
  repeated string content_topics = 11;
  optional sint64 time_start = 12;
  optional sint64 time_end = 13;

  // List of key criteria for lookup queries
  repeated bytes message_hashes = 20; // Message hashes (keys) to lookup

  // Pagination info. 50 Reserved
  optional bytes pagination_cursor = 51; // Message hash (key) from where to start query (exclusive)
  bool pagination_forward = 52;
  optional uint64 pagination_limit = 53;
}

message StoreQueryResponse {
  string request_id = 1;

  optional uint32 status_code = 10;
  optional string status_desc = 11;

  repeated WakuMessageKeyValue messages = 20;

  optional bytes pagination_cursor = 51;
}
```

### General Store Query Concepts

#### Waku Message Key-Value Pairs

The store query protocol operates as a query protocol for a key-value store of historical messages,
with each entry having a [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md)
and associated `pubsub_topic` as the value,
and the [deterministic message hash](/waku/standards/core/14/message.md#deterministic-message-hashing) as the key.
The store can be queried to return either a set of keys or a set of key-value pairs.

Within the store query protocol,
the [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md) keys and
values MUST be represented in a `WakuMessageKeyValue` message.
This message MUST contain the deterministic `message_hash` as the key.
It MAY contain the full [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md) and
associated pubsub topic as the value in the `message` and
`pubsub_topic` fields, depending on the use case as set out below.

If the message contains a value entry in addition to the key,
both the `message` and `pubsub_topic` fields MUST be populated.
The message MUST NOT have one of `message` or `pubsub_topic` populated while the other is unset;
both fields MUST either be set or unset.

#### Waku Message Store Eligibility

In order for a message to be eligible for storage:

- it MUST be a _valid_ [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md).
- the `timestamp` field MUST be populated with the Unix epoch time
at which the message was generated, in nanoseconds.
If at the time of storage the `timestamp` deviates by more than 20 seconds
either into the past or the future when compared to the store node’s internal clock,
the store node MAY reject the message.
- the `ephemeral` field MUST be set to `false`.
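The eligibility rules above can be sketched as a simple check (Python; function and parameter names are illustrative, not part of the protocol, and message validity itself is assumed to be checked elsewhere):

```python
import time

MAX_DEVIATION_NS = 20 * 10**9  # 20 seconds, in nanoseconds

def is_store_eligible(timestamp_ns, ephemeral, now_ns=None):
    """Sketch of the storage-eligibility rules: the message's `timestamp`
    must be within 20 s of the store node's clock (the spec says the node
    MAY reject larger deviations) and `ephemeral` must be false."""
    if now_ns is None:
        now_ns = time.time_ns()
    if ephemeral:
        return False  # ephemeral messages are never stored
    if abs(now_ns - timestamp_ns) > MAX_DEVIATION_NS:
        return False  # clock deviation beyond the allowed window
    return True

now = 1_600_000_000 * 10**9
assert is_store_eligible(now + 5 * 10**9, ephemeral=False, now_ns=now)       # 5 s ahead
assert not is_store_eligible(now + 30 * 10**9, ephemeral=False, now_ns=now)  # 30 s ahead
assert not is_store_eligible(now, ephemeral=True, now_ns=now)
```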

#### Waku Message Sorting

The key-value entries in the store MUST be time-sorted by the [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md) `timestamp` attribute.
Where two or more key-value entries have identical `timestamp` values,
the entries MUST be further sorted by the natural order of their message hash keys.
Within the context of traversing over key-value entries in the store,
_"forward"_ indicates traversing the entries in ascending order,
whereas _"backward"_ indicates traversing the entries in descending order.
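The total order above amounts to a two-part sort key, as in this sketch (Python; the `(timestamp, message_hash)` tuples are a hypothetical in-memory representation of store entries):

```python
def sort_key(entry):
    """Primary key: message timestamp. Tie-breaker: natural (byte-wise)
    order of the message hash key."""
    timestamp_ns, message_hash = entry
    return (timestamp_ns, message_hash)

entries = [
    (2, b"\x0b"),
    (1, b"\xff"),
    (2, b"\x0a"),  # same timestamp as (2, b"\x0b"): the hash decides
]
forward = sorted(entries, key=sort_key)                  # ascending = "forward"
backward = sorted(entries, key=sort_key, reverse=True)   # descending = "backward"
assert forward == [(1, b"\xff"), (2, b"\x0a"), (2, b"\x0b")]
assert backward == list(reversed(forward))
```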

#### Pagination

If a large number of entries in the store service node match the query criteria provided in a `StoreQueryRequest`,
the client MAY make use of pagination
in a chain of store query request and response transactions
to retrieve the full response in smaller batches termed _"pages"_.
Pagination can be performed either in [a _forward_ or _backward_ direction](#waku-message-sorting).

A store query client MAY indicate the maximum number of matching entries it wants in the `StoreQueryResponse`,
by setting the page size limit in the `pagination_limit` field.
Note that a store service node MAY enforce its own limit
if the `pagination_limit` is unset
or larger than the service node's internal page size limit.

A `StoreQueryResponse` with a populated `pagination_cursor` indicates that more stored entries match the query than included in the response.

A `StoreQueryResponse` without a populated `pagination_cursor` indicates that
there are no more matching entries in the store.

The client MAY request the next page of entries from the store service node
by populating a subsequent `StoreQueryRequest` with the `pagination_cursor`
received in the `StoreQueryResponse`.
All other fields and query criteria MUST be the same as in the preceding `StoreQueryRequest`.

A `StoreQueryRequest` without a populated `pagination_cursor` indicates that
the client wants to retrieve the "first page" of the stored entries matching the query.
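Put together, a client-side pagination loop looks roughly like this sketch (Python; `query_store` stands in for the actual request/response transport and is hypothetical, as is the toy `fake_store` used to exercise the loop):

```python
def fetch_all(query_store, criteria, page_limit=20):
    """Keep issuing store query requests, feeding each response's
    pagination_cursor into the next request, until a response carries
    no cursor (i.e. no more matching entries)."""
    cursor = None  # unset cursor: ask for the "first page"
    results = []
    while True:
        resp = query_store(
            **criteria,
            pagination_cursor=cursor,
            pagination_forward=True,
            pagination_limit=page_limit,
        )
        results.extend(resp["messages"])
        cursor = resp.get("pagination_cursor")
        if cursor is None:
            break
    return results

# Toy in-memory "service node" used to exercise the loop.
entries = list(range(7))

def fake_store(pagination_cursor=None, pagination_forward=True,
               pagination_limit=3, **criteria):
    start = 0 if pagination_cursor is None else entries.index(pagination_cursor) + 1
    page = entries[start:start + pagination_limit]
    resp = {"messages": page}
    if start + pagination_limit < len(entries):
        resp["pagination_cursor"] = page[-1]  # last entry included in this page
    return resp

assert fetch_all(fake_store, {}, page_limit=3) == entries
```

Note that the loop keeps all filter criteria identical across requests, as the section above requires.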

### Store Query Request

A client node MUST send all historical message queries within a `StoreQueryRequest` message.
This request MUST contain a `request_id`.
The `request_id` MUST be a uniquely generated string.

If the store query client requires the store service node to include [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md) values in the query response,
it MUST set `include_data` to `true`.
If the store query client requires the store service node to return only message hash keys in the query response,
it SHOULD set `include_data` to `false`.
By default, therefore, the store service node assumes `include_data` to be `false`.

A store query client MAY include query filter criteria in the `StoreQueryRequest`.
There are two types of filter use cases:

1. Content filtered queries and
2. Message hash lookup queries

#### Content filtered queries

A store query client MAY request the store service node to filter historical entries by a content filter.
Such a client MAY create a filter on content topic, on time range or on both.

To filter on content topic,
the client MUST populate _both_ the `pubsub_topic` _and_ `content_topics` fields.
The client MUST NOT populate one of `pubsub_topic` or
`content_topics` while leaving the other unset;
both fields MUST either be set or unset.
A mixed content topic filter with just one of either `pubsub_topic` or
`content_topics` set SHOULD be regarded as an invalid request.

To filter on time range, the client MUST set `time_start`, `time_end` or both.
Each `time_` field SHOULD contain a Unix epoch timestamp in nanoseconds.
An unset `time_start` SHOULD be interpreted as "from the oldest stored entry".
An unset `time_end` SHOULD be interpreted as "up to the youngest stored entry".

If any of the content filter fields are set,
namely `pubsub_topic`, `content_topics`, `time_start`, or `time_end`,
the client MUST NOT set the `message_hashes` field.

#### Message hash lookup queries

A store query client MAY request the store service node to filter historical entries by one or
more matching message hash keys.
This type of query acts as a "lookup" against a message hash key or
set of keys already known to the client.

In order to perform a lookup query,
the store query client MUST populate the `message_hashes` field with the list of message hash keys it wants to look up in the store service node.

If the `message_hashes` field is set,
the client MUST NOT set any of the content filter fields,
namely `pubsub_topic`, `content_topics`, `time_start`, or `time_end`.

#### Presence queries

A presence query is a special type of lookup query that allows a client to check for the presence of one or
more messages in the store service node,
without retrieving the full contents (values) of the messages.
This can, for example, be used as part of a reliability mechanism,
whereby store query clients verify that previously published messages have been successfully stored.

In order to perform a presence query,
the store query client MUST populate the `message_hashes` field in the `StoreQueryRequest` with the list of message hashes
for which it wants to verify presence in the store service node.
The `include_data` property MUST be set to `false`.
The client SHOULD interpret every `message_hash` returned in the `messages` field of the `StoreQueryResponse` as present in the store.
The client SHOULD assume that all other message hashes included in the original `StoreQueryRequest` but
not in the `StoreQueryResponse` are not present in the store.
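On the client side, interpreting a presence-query response reduces to a set difference, as in this sketch (Python; function name and the reliability-layer remark are illustrative only):

```python
def check_presence(published_hashes, response_hashes):
    """Hashes echoed back in the StoreQueryResponse are present in the
    store; the rest are assumed missing (and could, for example, be
    re-published by a reliability layer)."""
    response = set(response_hashes)
    present = [h for h in published_hashes if h in response]
    missing = [h for h in published_hashes if h not in response]
    return present, missing

present, missing = check_presence([b"a", b"b", b"c"], [b"a", b"c"])
assert present == [b"a", b"c"]
assert missing == [b"b"]
```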

#### Pagination info

The store query client MAY include a message hash as `pagination_cursor`,
to indicate at which key-value entry a store service node SHOULD start the query.
The `pagination_cursor` is treated as exclusive
and the corresponding entry will not be included in subsequent store query responses.

For forward queries,
only messages following (see [sorting](#waku-message-sorting)) the one indexed at `pagination_cursor`
will be returned.
For backward queries,
only messages preceding (see [sorting](#waku-message-sorting)) the one indexed at `pagination_cursor`
will be returned.

If the store query client requires the store service node to perform a forward query,
it MUST set `pagination_forward` to `true`.
If the store query client requires the store service node to perform a backward query,
it SHOULD set `pagination_forward` to `false`.
By default, therefore, the store service node assumes pagination to be backward.

A store query client MAY indicate the maximum number of matching entries it wants in the `StoreQueryResponse`,
by setting the page size limit in the `pagination_limit` field.
Note that a store service node MAY enforce its own limit
if the `pagination_limit` is unset
or larger than the service node's internal page size limit.

See [pagination](#pagination) for more on how the pagination info is used in store transactions.

### Store Query Response

In response to any `StoreQueryRequest`,
a store service node SHOULD respond with a `StoreQueryResponse` with a `request_id` matching that of the request.
This response MUST contain a `status_code` indicating if the request was successful or not.
Successful status codes are in the `2xx` range.
A client node SHOULD consider all other status codes as error codes and
assume that the requested operation has failed.
In addition,
the store service node MAY choose to provide a more detailed status description in the `status_desc` field.

#### Filter matching

For [content filtered queries](#content-filtered-queries),
an entry in the store service node matches the filter criteria in a `StoreQueryRequest` if each of the following conditions is met:

- its `content_topic` is in the request `content_topics` set
and it was published on a matching `pubsub_topic`, OR the request `content_topics` and
`pubsub_topic` fields are unset
- its `timestamp` is _greater than or equal to_ the request `time_start`, OR the request `time_start` is unset
- its `timestamp` is _smaller_ than the request `time_end`, OR the request `time_end` is unset

Note that for content filtered queries, `time_start` is treated as _inclusive_ and
`time_end` is treated as _exclusive_.

For [message hash lookup queries](#message-hash-lookup-queries),
an entry in the store service node matches the filter criteria if its `message_hash` is in the request `message_hashes` set.

The store service node SHOULD respond with an error code and
discard the request if the store query request contains both content filter criteria
and message hashes.
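The content-filter matching rules can be expressed as a single predicate, as in this sketch (Python; parameter names follow the `StoreQueryRequest` fields, while `entry` is a hypothetical dict-shaped store entry; mixed filters with only one of `pubsub_topic`/`content_topics` set are assumed to be rejected earlier as invalid requests):

```python
def matches_content_filter(entry, time_start=None, time_end=None,
                           pubsub_topic=None, content_topics=None):
    """Returns True if `entry` matches every criterion that is set."""
    if content_topics is not None and pubsub_topic is not None:
        if entry["content_topic"] not in content_topics:
            return False
        if entry["pubsub_topic"] != pubsub_topic:
            return False
    if time_start is not None and entry["timestamp"] < time_start:
        return False  # time_start is inclusive
    if time_end is not None and entry["timestamp"] >= time_end:
        return False  # time_end is exclusive
    return True

entry = {"pubsub_topic": "/waku/2/default-waku/proto",
         "content_topic": "/app/1/chat/proto", "timestamp": 100}
assert matches_content_filter(entry)  # no criteria set: everything matches
assert matches_content_filter(entry, time_start=100, time_end=101)
assert not matches_content_filter(entry, time_start=50, time_end=100)  # end exclusive
assert matches_content_filter(entry, pubsub_topic="/waku/2/default-waku/proto",
                              content_topics=["/app/1/chat/proto"])
```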

#### Populating response messages

The store service node SHOULD populate the `messages` field in the response
only with entries matching the filter criteria provided in the corresponding request.
Regardless of whether the response is to a _forward_ or _backward_ query,
the `messages` field in the response MUST be ordered in a forward direction
according to the [message sorting rules](#waku-message-sorting).

If the corresponding `StoreQueryRequest` has `include_data` set to `true`,
the service node SHOULD populate both the `message_hash` and
`message` for each entry in the response.
In all other cases,
the store service node SHOULD populate only the `message_hash` field for each entry in the response.

#### Paginating the response

The response SHOULD NOT contain more `messages` than the `pagination_limit` provided in the corresponding `StoreQueryRequest`.
It is RECOMMENDED that the store node defines its own maximum page size internally.
If the `pagination_limit` in the request is unset,
or exceeds this internal maximum page size,
the store service node SHOULD ignore the `pagination_limit` field and
apply its own internal maximum page size.

In response to a _forward_ `StoreQueryRequest`:

- if the `pagination_cursor` is set,
the store service node SHOULD populate the `messages` field
with matching entries following the `pagination_cursor` (exclusive).
- if the `pagination_cursor` is unset,
the store service node SHOULD populate the `messages` field
with matching entries from the first entry in the store.
- if there are still more matching entries in the store
after the maximum page size is reached while populating the response,
the store service node SHOULD populate the `pagination_cursor` in the `StoreQueryResponse`
with the message hash key of the _last_ entry _included_ in the response.

In response to a _backward_ `StoreQueryRequest`:

- if the `pagination_cursor` is set,
the store service node SHOULD populate the `messages` field
with matching entries preceding the `pagination_cursor` (exclusive).
- if the `pagination_cursor` is unset,
the store service node SHOULD populate the `messages` field
with matching entries from the last entry in the store.
- if there are still more matching entries in the store
after the maximum page size is reached while populating the response,
the store service node SHOULD populate the `pagination_cursor` in the `StoreQueryResponse`
with the message hash key of the _first_ entry _included_ in the response.
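The service-node side of these rules can be sketched as a page-slicing helper (Python; `build_page` and the `(timestamp, message_hash)` tuples are illustrative, not a real implementation):

```python
def build_page(sorted_entries, cursor=None, forward=True, limit=3):
    """`sorted_entries` holds the matching entries in forward order (per
    the sorting rules). Returns the page, always forward-ordered, plus
    the next cursor, or None when no more entries match."""
    keys = [h for _, h in sorted_entries]
    if forward:
        start = 0 if cursor is None else keys.index(cursor) + 1  # cursor exclusive
        page = sorted_entries[start:start + limit]
        more = start + limit < len(sorted_entries)
        next_cursor = page[-1][1] if page and more else None  # last entry included
    else:
        end = len(sorted_entries) if cursor is None else keys.index(cursor)
        page = sorted_entries[max(0, end - limit):end]
        more = end - limit > 0
        next_cursor = page[0][1] if page and more else None   # first entry included
    return page, next_cursor

store = [(t, bytes([t])) for t in range(5)]
# First backward page: the newest two entries; cursor = first entry included.
page, cur = build_page(store, forward=False, limit=2)
assert page == store[3:5] and cur == bytes([3])
# The next backward page resumes before that cursor (exclusive).
page, cur = build_page(store, cursor=cur, forward=False, limit=2)
assert page == store[1:3] and cur == bytes([1])
```

Note that even the backward pages come out forward-ordered, matching the ordering requirement in the previous subsection.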

### Security Considerations

The main security consideration while using this protocol is that a querying node has to reveal its content filters of interest to the queried node,
hence potentially compromising its privacy.

#### Adversarial Model

Any peer running the `WAKU2-STORE` protocol, i.e.
both the querying node and the queried node, is considered an adversary.
Furthermore,
we currently consider the adversary a passive entity that attempts to collect information from other peers to conduct an attack, but
it does so without violating protocol definitions and instructions.
As we evolve the protocol,
further adversarial models will be considered.
For example, under the passive adversarial model,
no malicious node hides or
lies about the history of messages, as doing so is against the description of the `WAKU2-STORE` protocol.

The following are not considered as part of the adversarial model:

- An adversary with a global view of all the peers and their connections.
- An adversary that can eavesdrop on communication links between arbitrary pairs of peers (unless the adversary is one end of the communication).
Specifically, the communication channels are assumed to be secure.

### Future Work

- **Anonymous query**: This feature guarantees that nodes can anonymously query historical messages from other nodes i.e.,
without disclosing the exact topics of [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md) they are interested in.
As such, no adversary in the `WAKU2-STORE` protocol would be able to learn which peer is interested in which content filters i.e.,
content topics of [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md).
The current version of the `WAKU2-STORE` protocol does not provide anonymity for historical queries,
as the querying node needs to directly connect to another node in the `WAKU2-STORE` protocol and
explicitly disclose the content filters of its interest to retrieve the corresponding messages.
However, one can consider preserving anonymity through one of the following ways:

  - By hiding the source of the request i.e., anonymous communication.
  That is, the querying node would hide all its PII in its history request e.g.,
  its IP address.
  This can happen by the utilization of a proxy server or by using Tor.
  Note that the current structure of historical requests does not embody any piece of PII; otherwise,
  such data fields must be treated carefully to achieve query anonymity.
  <!-- TODO: if nodes have to disclose their PeerIDs
  (e.g., for authentication purposes) when connecting to other nodes in the store protocol,
  then Tor does not preserve anonymity since it only helps in hiding the IP.
  So, the PeerId usage in switches must be investigated further.
  Depending on how PeerId is used, one may be able to link between a querying node
  and its queried topics despite hiding the IP address-->
  - By deploying secure 2-party computations
  in which the querying node obtains the historical messages of a certain topic,
  while the queried node learns nothing about the query.
  Examples of such 2PC protocols are secure one-way Private Set Intersections (PSI).
  <!-- TODO: add a reference for PSIs? --> <!-- TODO: more techniques to be included -->

<!-- TODO: Censorship resistant:
this is about a node that hides the historical messages from other nodes.
This attack is not included in the specs since it does not fit the
passive adversarial model (the attacker needs to deviate from the store protocol).-->

- **Robust and verifiable timestamps**: A message's timestamp is a way to show that
the message existed prior to some point in time.
However, the lack of timestamp verifiability can create room for a range of attacks,
including injecting messages with invalid timestamps pointing to the far future.
To better understand the attack,
consider a store node whose current clock shows `2021-01-01 00:00:30`
(and assume all the other nodes have clocks synchronized to within ±20 seconds).
The store node already has a list of messages,
`(m1,2021-01-01 00:00:00), (m2,2021-01-01 00:00:01), ..., (m10,2021-01-01 00:00:20)`,
that are sorted based on their timestamps.
An attacker sends a message with an arbitrarily large timestamp, e.g.,
10 hours ahead of the correct clock: `(m',2021-01-01 10:00:30)`.
The store node places `m'` at the end of the list,
`(m1,2021-01-01 00:00:00), (m2,2021-01-01 00:00:01), ..., (m10,2021-01-01 00:00:20),
(m',2021-01-01 10:00:30)`.
Now another message arrives with a valid timestamp, e.g.,
`(m11, 2021-01-01 00:00:45)`.
However, since its timestamp precedes that of the malicious message `m'`,
it gets placed before `m'` in the list, i.e.,
`(m1,2021-01-01 00:00:00), (m2,2021-01-01 00:00:01), ..., (m10,2021-01-01 00:00:20),
(m11, 2021-01-01 00:00:45), (m',2021-01-01 10:00:30)`.
In fact, for the next 10 hours,
`m'` will always be considered the most recent message and
served as the last message to querying nodes irrespective of how many other
messages arrive afterward.

A robust and verifiable timestamp allows the receiver of a message to verify that
the message was generated prior to the claimed timestamp.
One solution is the use of [open timestamps](https://opentimestamps.org/), e.g.,
block height in blockchain-based timestamps.
That is, messages contain the most recent block height perceived by their senders
at the time of message generation.
This proves accuracy within a range of minutes (e.g., in the Bitcoin blockchain) or
seconds (e.g., in Ethereum 2.0) from the time of origination.

## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

1. [14/WAKU2-MESSAGE](/waku/standards/core/14/message.md)
2. [protocol buffers v3](https://developers.google.com/protocol-buffers/)
3. [Open timestamps](https://opentimestamps.org/)

---
slug: 13
title: 13/WAKU2-STORE
name: Waku v2 Store
status: draft
tags: waku-core
editor: Simon-Pierre Vivier <simvivier@status.im>
contributors:
- Dean Eigenmann <dean@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Aaryamann Challani <p1ge0nh8er@proton.me>
- Sanaz Taheri <sanaz@status.im>
- Hanno Cornelius <hanno@status.im>
---

## Abstract

This specification explains the `13/WAKU2-STORE` protocol,
which enables querying of messages received through the relay protocol and
stored by other nodes.
It also supports pagination for more efficient querying of historical messages.

**Protocol identifier**: `/vac/waku/store/2.0.0-beta4`

## Terminology

The term PII, Personally Identifiable Information,
refers to any piece of data that can be used to uniquely identify a user.
For example, the signature verification key and
the hash of one's static IP address are unique for each user and hence count as PII.

## Design Requirements

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and
“OPTIONAL” in this document are to be interpreted as described in [RFC2119](https://www.ietf.org/rfc/rfc2119.txt).

Nodes willing to provide the storage service using the `13/WAKU2-STORE` protocol
SHOULD provide a complete and full view of message history.
As such, they are required to be *highly available* and
specifically have a *high uptime* to consistently receive and store network messages.
The high uptime requirement ensures that no message is missed,
so that a complete and intact view of the message history
is delivered to the querying nodes.
Nevertheless, in case storage provider nodes cannot afford high availability,
the querying nodes may retrieve the historical messages from multiple sources
to achieve a full and intact view of the past.

The concept of `ephemeral` messages introduced in
[`14/WAKU2-MESSAGE`](../14/message.md) affects `13/WAKU2-STORE` as well.
Nodes running `13/WAKU2-STORE` SHOULD support `ephemeral` messages as specified in
[14/WAKU2-MESSAGE](../14/message.md).
Nodes running `13/WAKU2-STORE` SHOULD NOT store messages
with the `ephemeral` flag set to `true`.

## Adversarial Model

Any peer running the `13/WAKU2-STORE` protocol, i.e.
both the querying node and the queried node, is considered an adversary.
Furthermore,
we currently consider the adversary a passive entity
that attempts to collect information from other peers to conduct an attack, but
it does so without violating protocol definitions and instructions.
As we evolve the protocol,
further adversarial models will be considered.
For example, under the passive adversarial model,
no malicious node hides or
lies about the history of messages,
as doing so is against the description of the `13/WAKU2-STORE` protocol.

The following are not considered as part of the adversarial model:

- An adversary with a global view of all the peers and their connections.
- An adversary that can eavesdrop on communication links
between arbitrary pairs of peers (unless the adversary is one end of the communication).
Specifically, the communication channels are assumed to be secure.

## Wire Specification

Peers communicate with each other using a request / response API.
The messages sent are Protobuf RPC messages which are implemented using
[protocol buffers v3](https://developers.google.com/protocol-buffers/).
The following are the specifications of the Protobuf messages.

### Payloads

```protobuf
syntax = "proto3";

message Index {
  bytes digest = 1;
  sint64 receiverTime = 2;
  sint64 senderTime = 3;
  string pubsubTopic = 4;
}

message PagingInfo {
  uint64 pageSize = 1;
  Index cursor = 2;
  enum Direction {
    BACKWARD = 0;
    FORWARD = 1;
  }
  Direction direction = 3;
}

message ContentFilter {
  string contentTopic = 1;
}

message HistoryQuery {
  // the first field is reserved for future use
  string pubsubtopic = 2;
  repeated ContentFilter contentFilters = 3;
  PagingInfo pagingInfo = 4;
}

message HistoryResponse {
  // the first field is reserved for future use
  repeated WakuMessage messages = 2;
  PagingInfo pagingInfo = 3;
  enum Error {
    NONE = 0;
    INVALID_CURSOR = 1;
  }
  Error error = 4;
}

message HistoryRPC {
  string request_id = 1;
  HistoryQuery query = 2;
  HistoryResponse response = 3;
}
```

#### Index

To perform pagination,
each `WakuMessage` stored at a node running the `13/WAKU2-STORE` protocol
is associated with a unique `Index` that encapsulates the following parts.

- `digest`: a sequence of bytes representing the SHA256 hash of a `WakuMessage`.
The hash is computed over the concatenation of the `contentTopic`
and `payload` fields of a `WakuMessage` (see [14/WAKU2-MESSAGE](../14/message.md)).
- `receiverTime`: the UNIX time in nanoseconds
at which the `WakuMessage` is received by the receiving node.
- `senderTime`: the UNIX time in nanoseconds
at which the `WakuMessage` is generated by its sender.
- `pubsubTopic`: the pubsub topic on which the `WakuMessage` is received.
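The `digest` computation can be sketched as follows (Python; the UTF-8 encoding of the content topic and the example topic string are assumptions of this sketch, not mandated by the text above):

```python
import hashlib

def message_digest(content_topic: str, payload: bytes) -> bytes:
    """Sketch of the Index `digest`: SHA256 over the concatenation of the
    `contentTopic` and `payload` fields of a WakuMessage."""
    return hashlib.sha256(content_topic.encode("utf-8") + payload).digest()

digest = message_digest("/app/1/chat/proto", b"\x01\x02")
assert len(digest) == 32  # SHA256 digest size
```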

#### PagingInfo

`PagingInfo` holds the information required for pagination.
It consists of the following components.

- `pageSize`: A positive integer indicating the number of queried `WakuMessage`s
in a `HistoryQuery`
(or retrieved `WakuMessage`s in a `HistoryResponse`).
- `cursor`: holds the `Index` of a `WakuMessage`.
- `direction`: indicates the direction of paging,
which can be either `FORWARD` or `BACKWARD`.

#### ContentFilter

`ContentFilter` carries the information required for filtering historical messages.

- `contentTopic` represents the content topic of the queried historical `WakuMessage`.
This field maps to the `contentTopic` field of the [14/WAKU2-MESSAGE](../14/message.md).
|
||||
|
||||
#### HistoryQuery

RPC call to query historical messages.

- The `pubsubTopic` field MUST indicate the pubsub topic
of the historical messages to be retrieved.
This field denotes the pubsub topic on which `WakuMessage`s are published.
This field maps to the `topicIDs` field of `Message` in [`11/WAKU2-RELAY`](../11/relay.md).
Leaving this field empty means no filter on the pubsub topic
of message history is requested.
This field SHOULD be left empty in order to retrieve historical `WakuMessage`s
regardless of the pubsub topics on which they are published.
- The `contentFilters` field MUST indicate the list of content filters
based on which the historical messages are to be retrieved.
Leaving this field empty means no filter on the content topic
of message history is required.
This field SHOULD be left empty in order
to retrieve historical `WakuMessage`s regardless of their content topics.
- `PagingInfo` holds the information required for pagination.
Its `pageSize` field indicates the number of `WakuMessage`s
to be included in the corresponding `HistoryResponse`.
It is RECOMMENDED that the queried node define a maximum page size internally.
If the querying node leaves the `pageSize` unspecified,
or if the `pageSize` exceeds the maximum page size,
the queried node SHOULD auto-paginate the `HistoryResponse`
to no more than the configured maximum page size.
This mitigates long response times for a `HistoryQuery`.
In a forward pagination request,
the `messages` field of the `HistoryResponse` SHALL contain, at maximum,
`pageSize` `WakuMessage`s whose `Index`
values are larger than the given `cursor`
(and vice versa for backward pagination).
Note that the `cursor` of a `HistoryQuery` MAY be empty
(e.g., for the initial query); in that case,
depending on whether the `direction` is `BACKWARD` or `FORWARD`,
the last or the first `pageSize` `WakuMessage`s SHALL be returned,
respectively.
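From the querying node's perspective, the cursor-driven paging described above can be sketched as follows (a hypothetical Python client loop; `query_history` stands in for the RPC and is an assumption, not part of this specification):

```python
def fetch_all_history(query_history, pubsub_topic, page_size=20):
    """Retrieve the full history by repeatedly issuing HistoryQuery
    requests, feeding each returned cursor into the next request."""
    messages, cursor = [], None
    while True:
        # query_history returns one page of messages plus the
        # PagingInfo cursor to embed in the next HistoryQuery.
        page, cursor = query_history(
            pubsub_topic=pubsub_topic,
            page_size=page_size,
            cursor=cursor,       # None (empty) for the initial query
            direction="FORWARD",
        )
        messages.extend(page)
        if len(page) < page_size:  # short page: history exhausted
            break
    return messages
```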
#### Sorting Messages

The queried node MUST sort the `WakuMessage`s based on their `Index`,
where the `senderTime` constitutes the most significant part and
the `digest` comes next, and
then perform pagination on the sorted result.
As such, the retrieved page contains an ordered list of `WakuMessage`s
from the oldest message to the most recent one.
Alternatively, the `receiverTime` (instead of `senderTime`)
MAY be used to sort messages during the paging process.
However, using the `senderTime` for sorting is RECOMMENDED,
as it is invariant and
consistent across all the nodes.
This has the benefit of `cursor` reusability, i.e.,
a `cursor` obtained from one node can be consistently used
to query from another node.
However, this `cursor` reusability does not hold when the `receiverTime` is utilized,
as the receiver time is affected by network delay and
nodes' clock asynchrony.
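The ordering rule above can be illustrated with a minimal sketch (Python; the dictionary shape is illustrative — `senderTime` is the most significant sort key and `digest` breaks ties):

```python
def index_sort_key(index):
    # senderTime is the most significant part of the ordering;
    # the digest orders messages with equal sender times.
    return (index["senderTime"], index["digest"])

indexes = [
    {"senderTime": 2, "digest": b"\x01"},
    {"senderTime": 1, "digest": b"\xff"},
    {"senderTime": 2, "digest": b"\x00"},
]
# Oldest first; equal sender times ordered by digest.
ordered = sorted(indexes, key=index_sort_key)
```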
#### HistoryResponse

RPC call to respond to a `HistoryQuery` call.

- The `messages` field MUST contain the messages found;
these are [14/WAKU2-MESSAGE](../14/message.md) types.
- `PagingInfo` holds the paging information based
on which the querying node can resume its further history queries.
The `pageSize` indicates the number of returned Waku messages
(i.e., the number of messages included in the `messages` field of `HistoryResponse`).
The `direction` is the same direction as in the corresponding `HistoryQuery`.
In forward pagination, the `cursor` holds the `Index` of the last message
in the `HistoryResponse` `messages` (and the first message in backward paging).
Regardless of the paging direction,
the retrieved `messages` are always sorted in ascending order
based on their timestamp, as explained in the [sorting messages](#sorting-messages) section,
that is, from the oldest to the most recent.
The requester SHALL embed the returned `cursor` inside its next `HistoryQuery`
to retrieve the next page of [14/WAKU2-MESSAGE](../14/message.md)s.
The `cursor` obtained from one node SHOULD NOT be used in a request to another node,
because the result may be different.
- The `error` field contains information about any error that has occurred
while processing the corresponding `HistoryQuery`.
`NONE` stands for no error.
This is also the default value.
`INVALID_CURSOR` means that the `cursor` field of the `HistoryQuery`
does not match the `Index` of any of the `WakuMessage`s
persisted by the queried node.
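A querying node's handling of the response fields described above might look like this (a hypothetical Python sketch; the response object attributes and the restart callback are assumptions mirroring the field descriptions, not a concrete API):

```python
def handle_history_response(resp, resend_from_scratch):
    # resp is assumed to expose .error, .messages, and .paging_info.
    if resp.error == "INVALID_CURSOR":
        # The cursor matched no persisted Index on the queried node;
        # restart the query without a cursor.
        return resend_from_scratch()
    if resp.error != "NONE":
        raise RuntimeError(f"history query failed: {resp.error}")
    # Messages arrive oldest-first regardless of paging direction;
    # the returned cursor goes into the next HistoryQuery.
    return resp.messages, resp.paging_info.cursor
```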
## Security Consideration

The main security consideration to take into account
while using this protocol is that a querying node
has to reveal its content filters of interest to the queried node,
hence potentially compromising its privacy.
## Future Work

- **Anonymous query**: This feature guarantees that nodes
can anonymously query historical messages from other nodes, i.e.,
without disclosing the exact topics of the [14/WAKU2-MESSAGE](../14/message.md)s
they are interested in.
As such, no adversary in the `13/WAKU2-STORE` protocol
would be able to learn which peer is interested in which content filters, i.e.,
content topics of [14/WAKU2-MESSAGE](../14/message.md).
The current version of the `13/WAKU2-STORE` protocol does not provide anonymity
for historical queries,
as the querying node needs to directly connect to another node
in the `13/WAKU2-STORE` protocol and
explicitly disclose the content filters of its interest
to retrieve the corresponding messages.
However, one can consider preserving anonymity through one of the following ways:
  - By hiding the source of the request, i.e., anonymous communication.
  That is, the querying node shall hide all its PII in its history request,
  e.g., its IP address.
  This can happen by the utilization of a proxy server or by using Tor.
  Note that the current structure of historical requests
  does not embody any piece of PII; otherwise,
  such data fields must be treated carefully to achieve query anonymity.
  <!-- TODO: if nodes have to disclose their PeerIDs
  (e.g., for authentication purposes) when connecting to other nodes
  in the store protocol,
  then Tor does not preserve anonymity since it only helps in hiding the IP.
  So, the PeerId usage in switches must be investigated further.
  Depending on how PeerId is used, one may be able to link between a querying node
  and its queried topics despite hiding the IP address-->
  - By deploying secure 2-party computations in which the querying node
  obtains the historical messages of a certain topic,
  while the queried node learns nothing about the query.
  Examples of such 2PC protocols are secure one-way Private Set Intersections (PSI).
  <!-- TODO: add a reference for PSIs? -->
<!-- TODO: more techniques to be included -->
<!-- TODO: Censorship resistant:
this is about a node that hides the historical messages from other nodes.
This attack is not included in the specs
since it does not fit the passive adversarial model
(the attacker needs to deviate from the store protocol).-->
- **Robust and verifiable timestamps**:
A message's timestamp is a way to show that the message existed
prior to some point in time.
However, the lack of timestamp verifiability can create room for a range of attacks,
including injecting messages with invalid timestamps pointing to the far future.
To better understand the attack,
consider a store node whose current clock shows `2021-01-01 00:00:30`
(and assume all the other nodes have synchronized clocks +-20 seconds).
The store node already has a list of messages,
`(m1,2021-01-01 00:00:00), (m2,2021-01-01 00:00:01), ..., (m10,2021-01-01 00:00:20)`,
that are sorted based on their timestamp.
An attacker sends a message with an arbitrarily large timestamp, e.g.,
10 hours ahead of the correct clock, `(m',2021-01-01 10:00:30)`.
The store node places `m'` at the end of the list:

```text
(m1,2021-01-01 00:00:00), (m2,2021-01-01 00:00:01), ..., (m10,2021-01-01 00:00:20), (m',2021-01-01 10:00:30).
```

Now another message arrives with a valid timestamp, e.g.,
`(m11, 2021-01-01 00:00:45)`.
However, since its timestamp precedes the malicious message `m'`,
it gets placed before `m'` in the list, i.e.,

```text
(m1,2021-01-01 00:00:00), (m2,2021-01-01 00:00:01), ..., (m10,2021-01-01 00:00:20), (m11, 2021-01-01 00:00:45), (m',2021-01-01 10:00:30).
```

In fact, for the next 10 hours,
`m'` will always be considered the most recent message and
served as the last message to querying nodes, irrespective
of how many other messages arrive afterward.

A robust and verifiable timestamp allows the receiver of a message
to verify that a message has been generated prior to the claimed timestamp.
One solution is the use of [open timestamps](https://opentimestamps.org/), e.g.,
block height in blockchain-based timestamps.
That is, messages contain the most recent block height
perceived by their senders at the time of message generation.
This proves accuracy within a range of minutes (e.g., in the Bitcoin blockchain) or
seconds (e.g., in Ethereum 2.0) from the time of origination.
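The far-future timestamp attack sketched above can be reproduced in a few lines (a toy Python model; the store is just a timestamp-sorted list, as in the scenario):

```python
# Toy store that orders messages purely by the sender-supplied
# timestamp (in seconds), as in the attack scenario above.
store = [("m1", 0), ("m2", 1), ("m10", 20)]   # (name, seconds since midnight)
store.append(("m'", 10 * 3600 + 30))          # attacker: ~10 hours in the future
store.append(("m11", 45))                     # honest message arriving later
store.sort(key=lambda m: m[1])
# m' stays last: any query for the "most recent" message keeps
# returning the attacker's message until the clock catches up.
```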
## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References

1. [14/WAKU2-MESSAGE](../14/message.md)
2. [protocol buffers v3](https://developers.google.com/protocol-buffers/)
3. [11/WAKU2-RELAY](../11/relay.md)
4. [Open timestamps](https://opentimestamps.org/)