Compare commits: `develop...community_` (6 commits)

Commits: ec1d718f4b, 63a0308982, f7286d0e7c, a919fd359d, ecf9c46f79, ec97967fa1
**.github/workflows/.markdownlint.json** (vendored, 4 changes)

```diff
@@ -1,6 +1,4 @@
 {
   "MD013": false,
-  "MD024": false,
-  "MD025": false,
-  "MD033": false
+  "MD024": false
 }
```
**.github/workflows/markdown-lint.yml** (vendored, 3 changes)

```diff
@@ -2,6 +2,9 @@ name: markdown-linting

 on:
   push:
     branches:
       - '**'
+  pull_request:
+    branches:
+      - '**'
```
**.gitignore** (vendored, 1 change)

```diff
@@ -1 +0,0 @@
-book
```
**Jenkinsfile** (vendored, file removed, `@@ -1,59 +0,0 @@`)

```groovy
#!/usr/bin/env groovy
library 'status-jenkins-lib@v1.9.31'

pipeline {
  agent {
    docker {
      label 'linuxcontainer'
      image 'harbor.status.im/infra/ci-build-containers:linux-base-1.0.0'
      args '--volume=/nix:/nix ' +
        '--volume=/etc/nix:/etc/nix ' +
        '--user jenkins'
    }
  }

  options {
    disableConcurrentBuilds()
    buildDiscarder(logRotator(
      numToKeepStr: '20',
      daysToKeepStr: '30',
    ))
  }

  environment {
    GIT_COMMITTER_NAME = 'status-im-auto'
    GIT_COMMITTER_EMAIL = 'auto@status.im'
  }

  stages {
    stage('Build') {
      steps { script {
        nix.develop('python scripts/gen_rfc_index.py && python scripts/gen_history.py && mdbook build')
        jenkins.genBuildMetaJSON('book/build.json')
      } }
    }

    stage('Publish') {
      steps {
        sshagent(credentials: ['status-im-auto-ssh']) {
          script {
            nix.develop("""
              ghp-import \
                -b ${deployBranch()} \
                -c ${deployDomain()} \
                -p book
            """, pure: false)
          }
        }
      }
    }
  }

  post {
    cleanup { cleanWs() }
  }
}

def isMainBranch() { GIT_BRANCH ==~ /.*main/ }
def deployBranch() { isMainBranch() ? 'deploy-master' : 'deploy-develop' }
def deployDomain() { isMainBranch() ? 'rfc.vac.dev' : 'dev-rfc.vac.dev' }
```
**README.md** (new file, 47 lines)

# Vac Request For Comments (RFC)

*NOTE*: This repo is WIP. We are currently restructuring the RFC process.

This repository contains specifications from the [Waku](https://waku.org/), [Nomos](https://nomos.tech/),
[Codex](https://codex.storage/), and
[Status](https://status.app/) projects that are part of the [IFT portfolio](https://free.technology/).
[Vac](https://vac.dev) is an
[IFT service](https://free.technology/services) that will manage the RFC,
[Request for Comments](https://en.wikipedia.org/wiki/Request_for_Comments),
process within this repository.

## New RFC Process

This repository replaces the previous `rfc.vac.dev` resource.
Each project will maintain initial specifications in separate repositories,
which may be considered **raw** specifications.
All [Vac](https://vac.dev) **raw** specifications and
discussions will live in the Vac subdirectory.
When a specification living in a project's repository
has reached a sufficient level of maturity,
the process of updating its status to **draft** may begin in this repository.
Specifications will adhere to
[1/COSS](./vac/1/coss.md) before obtaining **draft** status.

Implementations should follow specifications as described,
and all contributions will be discussed before the **stable** status is obtained.
The goal of this RFC process is to engage all interested parties and
reach rough consensus on technical specifications.

## Contributing

Please see [1/COSS](./vac/1/coss.md) for general guidelines and the specification lifecycle.

Feel free to join the [Vac Discord](https://discord.gg/Vy54fEWuqC).

Here's the project board used by core contributors and maintainers: [Projects](https://github.com/orgs/vacp2p/projects/5)

## IFT Projects' Raw Specifications

The repositories for each project's **raw** specifications:

- [Vac Raw Specifications](./vac/raw)
- [Status Raw Specifications](./status/raw)
- [Waku Raw Specifications](https://github.com/waku-org/specs/tree/master)
- [Codex Raw Specifications](none)
- [Nomos Raw Specifications](https://github.com/logos-co/nomos-specs)
**book.toml** (file removed, `@@ -1,11 +0,0 @@`)

```toml
[book]
title = "Vac RFC"
authors = ["Jakub Sokołowski"]
language = "en"
src = "docs"

[output.html]
default-theme = "ayu"
additional-css = ["custom.css"]
additional-js = ["scripts/rfc-index.js"]
git-repository-url = "https://github.com/vacp2p/rfc-index"
```
**codex/README.md** (new file, 5 lines)

# Codex RFCs

Specifications related to the Codex decentralised data storage platform.
Visit [Codex specs](https://github.com/codex-storage/codex-spec)
to view the new Codex specifications currently under discussion.
**codex/raw/codex-block-exchange.md** (new file, 485 lines)

---
title: CODEX-BLOCK-EXCHANGE
name: Codex Block Exchange Protocol
status: raw
category: Standards Track
tags: codex, block-exchange, p2p, data-distribution
editor: Codex Team
contributors:
---

## Abstract

The Block Exchange (BE) is a core Codex component responsible for
peer-to-peer content distribution across the network.
It manages the sending and receiving of data blocks between nodes,
enabling efficient data sharing and retrieval.
This specification defines both an internal service interface and a
network protocol for referring to and providing data blocks.
Blocks are uniquely identifiable by means of an address and represent
fixed-length chunks of arbitrary data.

## Semantics

The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in
[RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).

### Definitions

| Term | Description |
|------|-------------|
| **Block** | Fixed-length chunk of arbitrary data, uniquely identifiable |
| **Standalone Block** | Self-contained block addressed by SHA256 hash (CID) |
| **Dataset Block** | Block in an ordered set, addressed by dataset CID + index |
| **Block Address** | Unique identifier for standalone/dataset addressing |
| **WantList** | List of block requests sent by a peer |
| **Block Delivery** | Transmission of block data from one peer to another |
| **Block Presence** | Indicator of whether a peer has a requested block |
| **Merkle Proof** | Proof verifying dataset block position correctness |
| **CID** | Content Identifier - hash-based identifier for content |
| **Multicodec** | Self-describing format identifier for data encoding |
| **Multihash** | Self-describing hash format |
## Motivation

The Block Exchange module serves as the fundamental layer for content
distribution in the Codex network.
It provides primitives for requesting and delivering blocks of data
between peers, supporting both standalone blocks and blocks that are
part of larger datasets.
The protocol is designed to work over libp2p streams and integrates
with Codex's discovery, storage, and payment systems.

When a peer wishes to obtain a block, it registers the block's unique
address with the Block Exchange, which is then in charge
of procuring it: finding a peer that has the block, if any, and then
downloading it.
The Block Exchange will also accept requests from peers which may
want blocks that the node has, and provide them.

**Discovery Separation:** Throughout this specification we assume that
if a peer wants a block, then the peer has the means to locate and
connect to peers which either: (1) have the block; or (2) are
reasonably expected to obtain the block in the future.
In practical implementations, the Block Exchange will typically require
the support of an underlying discovery service, e.g., the Codex DHT,
to look up such peers, but this is beyond the scope of this document.

The protocol supports two distinct block types to accommodate different
use cases: standalone blocks for independent data chunks and dataset
blocks for ordered collections of data that form larger structures.
## Block Format

The Block Exchange protocol supports two types of blocks:

### Standalone Blocks

Standalone blocks are self-contained pieces of data addressed by their
SHA256 content identifier (CID).
These blocks are independent and do not reference any larger structure.

**Properties:**

- Addressed by content hash (SHA256)
- Default size: 64 KiB
- Self-contained and independently verifiable

### Dataset Blocks

Dataset blocks are part of ordered sets and are addressed by a
`(datasetCID, index)` tuple.
The datasetCID refers to the Merkle tree root of the entire dataset,
and the index indicates the block's position within that dataset.

Formally, we can define a block as a tuple consisting of raw data and
its content identifier: `(data: seq[byte], cid: Cid)`, where standalone
blocks are addressed by `cid`, and dataset blocks can be addressed
either by `cid` or by a `(datasetCID, index)` tuple.

**Properties:**

- Addressed by a `(treeCID, index)` tuple
- Part of a Merkle tree structure
- Require a Merkle proof for verification
- Must be uniformly sized within a dataset
- Final blocks MUST be zero-padded if incomplete

### Block Specifications

All blocks in the Codex Block Exchange protocol adhere to the
following specifications:

| Property | Value | Description |
|----------|-------|-------------|
| Default Block Size | 64 KiB | Standard size for data blocks |
| Multicodec | `codex-block` (0xCD02) | Format identifier |
| Multihash | `sha2-256` (0x12) | Hash algorithm for addressing |
| Padding Requirement | Zero-padding | Incomplete final blocks are padded |
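As an illustration, a standalone block's identifier can be derived by hashing the block data with SHA-256 and wrapping the digest in a multihash. This is a minimal sketch: the exact CID byte layout, including how the `codex-block` multicodec prefix is encoded, is an assumption here, not a statement of the implementation's wire format.

```python
import hashlib

SHA2_256 = 0x12        # multihash code for sha2-256 (see the table above)
CODEX_BLOCK = 0xCD02   # codex-block multicodec (see the table above)

def multihash_sha256(data: bytes) -> bytes:
    """Self-describing hash: <hash-code><digest-length><digest>."""
    digest = hashlib.sha256(data).digest()
    return bytes([SHA2_256, len(digest)]) + digest

def standalone_block_id(data: bytes) -> bytes:
    """Hypothetical identifier layout: codec prefix followed by the multihash."""
    return CODEX_BLOCK.to_bytes(2, "big") + multihash_sha256(data)

block_id = standalone_block_id(b"\x00" * 65536)  # a 64 KiB block
assert len(block_id) == 2 + 2 + 32               # prefix + multihash header + digest
```

In a real multiformats encoding both codes would be varint-encoded; the fixed-width prefix above is only for readability.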
## Service Interface

The Block Exchange module exposes two core primitives for
block management:

### `requestBlock`

```python
async def requestBlock(address: BlockAddress) -> Block
```

Registers a block address for retrieval and returns the block data
when available.
This function can be awaited by the caller until the block is retrieved
from the network or local storage.

**Parameters:**

- `address`: BlockAddress - The unique address identifying the block to retrieve

**Returns:**

- `Block` - The retrieved block data

### `cancelRequest`

```python
async def cancelRequest(address: BlockAddress) -> bool
```

Cancels a previously registered block request.

**Parameters:**

- `address`: BlockAddress - The address of the block request to cancel

**Returns:**

- `bool` - True if the cancellation was successful, False otherwise
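The two primitives can be exercised as in the sketch below. The in-memory `ToyBlockExchange` is a hypothetical stand-in used only to show the await/cancel semantics; it is not the Codex implementation.

```python
import asyncio

class ToyBlockExchange:
    """Illustrative in-memory stand-in for the Block Exchange service."""
    def __init__(self):
        self._pending: dict[str, asyncio.Future] = {}

    async def requestBlock(self, address: str) -> bytes:
        # Register the address; the awaiting caller resumes when the block arrives.
        fut = self._pending.setdefault(address, asyncio.get_running_loop().create_future())
        return await fut

    async def cancelRequest(self, address: str) -> bool:
        fut = self._pending.pop(address, None)
        if fut is None or fut.done():
            return False
        fut.cancel()
        return True

    def deliver(self, address: str, data: bytes) -> None:
        # Simulates a block arriving from the network or local storage.
        fut = self._pending.get(address)
        if fut and not fut.done():
            fut.set_result(data)

async def main() -> bytes:
    be = ToyBlockExchange()
    task = asyncio.create_task(be.requestBlock("blk-addr"))
    await asyncio.sleep(0)            # let the request register
    be.deliver("blk-addr", b"block-bytes")
    return await task

assert asyncio.run(main()) == b"block-bytes"
```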
## Dependencies

The Block Exchange module depends on and interacts with several other
Codex components:

| Component | Purpose |
|-----------|---------|
| **Discovery Module** | DHT-based peer discovery for locating nodes |
| **Local Store (Repo)** | Persistent block storage for local blocks |
| **Advertiser** | Announces block availability to the network |
| **Network Layer** | libp2p connections and stream management |
## Protocol Specification

### Protocol Identifier

The Block Exchange protocol uses the following libp2p protocol
identifier:

```text
/codex/blockexc/1.0.0
```

### Connection Model

The protocol operates over libp2p streams.
When a node wants to communicate with a peer:

1. The initiating node dials the peer using the protocol identifier
2. A bidirectional stream is established
3. Both sides can send and receive messages on this stream
4. Messages are encoded using Protocol Buffers
5. The stream remains open for the duration of the exchange session
6. Peers track active connections in a peer context store

The protocol handles peer lifecycle events:

- **Peer Joined**: When a peer connects, it is added to the active peer set
- **Peer Departed**: When a peer disconnects gracefully, its context is cleaned up
- **Peer Dropped**: When a peer connection fails, it is removed from the active set
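The lifecycle events above amount to maintaining a peer context store. A minimal sketch, with illustrative field names (the actual per-peer state kept by an implementation will be richer):

```python
class PeerContext:
    """Per-peer state tracked for the duration of an exchange session."""
    def __init__(self, peer_id: str):
        self.peer_id = peer_id
        self.want_list: dict = {}   # blocks this peer has asked us for

class PeerContextStore:
    def __init__(self):
        self._peers: dict[str, PeerContext] = {}

    def has(self, peer_id: str) -> bool:
        return peer_id in self._peers

    def peer_joined(self, peer_id: str) -> PeerContext:
        # Peer connects: add it to the active peer set (idempotent).
        return self._peers.setdefault(peer_id, PeerContext(peer_id))

    def peer_departed(self, peer_id: str) -> None:
        # Graceful disconnect: clean up the peer's context.
        self._peers.pop(peer_id, None)

    # Failed connection: same cleanup as a graceful departure here.
    peer_dropped = peer_departed

store = PeerContextStore()
store.peer_joined("peer-A")
assert store.has("peer-A")
store.peer_dropped("peer-A")
assert not store.has("peer-A")
```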
### Message Format

All messages use Protocol Buffers encoding for serialization.
The main message structure supports multiple operation types in a
single message.

#### Main Message Structure

```protobuf
message Message {
  Wantlist wantlist = 1;
  repeated BlockDelivery payload = 3;
  repeated BlockPresence blockPresences = 4;
  int32 pendingBytes = 5;
  AccountMessage account = 6;
  StateChannelUpdate payment = 7;
}
```

**Fields:**

- `wantlist`: Block requests from the sender
- `payload`: Block deliveries (actual block data)
- `blockPresences`: Availability indicators for requested blocks
- `pendingBytes`: Number of bytes pending delivery
- `account`: Account information for micropayments
- `payment`: State channel update for payment processing
#### Block Address

The BlockAddress structure supports both standalone and dataset
block addressing:

```protobuf
message BlockAddress {
  bool leaf = 1;
  bytes treeCid = 2; // Present when leaf = true
  uint64 index = 3;  // Present when leaf = true
  bytes cid = 4;     // Present when leaf = false
}
```

**Fields:**

- `leaf`: Indicates whether this is a dataset block (true) or a standalone block (false)
- `treeCid`: Merkle tree root CID (present when `leaf = true`)
- `index`: Position of the block within the dataset (present when `leaf = true`)
- `cid`: Content identifier of the block (present when `leaf = false`)

**Addressing Modes:**

- **Standalone Block** (`leaf = false`): Direct CID reference to a
standalone content block
- **Dataset Block** (`leaf = true`): Reference to a block within an
ordered set, identified by a Merkle tree root and an index.
The Merkle root may refer to either a regular dataset, or a dataset
that has undergone erasure-coding
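The two addressing modes can be modeled directly. A sketch mirroring the protobuf fields, with the presence constraints enforced at construction time:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class BlockAddress:
    leaf: bool
    cid: Optional[bytes] = None       # standalone: leaf = False
    treeCid: Optional[bytes] = None   # dataset: leaf = True
    index: Optional[int] = None       # dataset: position in the ordered set

    def __post_init__(self):
        # Enforce the "present when leaf = ..." constraints from the message.
        if self.leaf:
            assert self.treeCid is not None and self.index is not None
        else:
            assert self.cid is not None

standalone = BlockAddress(leaf=False, cid=b"\x12\x20" + bytes(32))
dataset = BlockAddress(leaf=True, treeCid=b"\x12\x20" + bytes(32), index=7)
assert dataset.index == 7
```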
#### WantList

The WantList communicates which blocks a peer desires to receive:

```protobuf
message Wantlist {
  enum WantType {
    wantBlock = 0;
    wantHave = 1;
  }

  message Entry {
    BlockAddress address = 1;
    int32 priority = 2;
    bool cancel = 3;
    WantType wantType = 4;
    bool sendDontHave = 5;
  }

  repeated Entry entries = 1;
  bool full = 2;
}
```

**WantType Values:**

- `wantBlock (0)`: Request full block delivery
- `wantHave (1)`: Request availability information only (presence check)

**Entry Fields:**

- `address`: The block being requested
- `priority`: Request priority (currently always 0)
- `cancel`: If true, cancels a previous want for this block
- `wantType`: Specifies whether the full block or only presence is desired
  - `wantHave (1)`: Only check if the peer has the block
  - `wantBlock (0)`: Request full block data
- `sendDontHave`: If true, the peer should respond even if it doesn't have the block

**WantList Fields:**

- `entries`: List of block requests
- `full`: If true, replaces all previous entries; if false, a delta update

**Delta Updates:**

WantLists support delta updates for efficiency.
When `full = false`, entries represent additions or modifications to
the existing WantList rather than a complete replacement.
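The full-versus-delta semantics can be sketched as a pure function over the set of wants tracked for a peer. Entries here are plain dicts with the protobuf field names; this is illustrative, not the wire format:

```python
def apply_wantlist(current: dict, entries: list, full: bool) -> dict:
    """Apply a WantList message to the wants tracked for a peer.

    `current` maps a block address to its entry; `full = True` replaces
    the whole list, `full = False` applies a delta update.
    """
    wants = {} if full else dict(current)
    for e in entries:
        if e.get("cancel"):
            wants.pop(e["address"], None)   # cancel a previous want
        else:
            wants[e["address"]] = e         # add or modify a want
    return wants

wants = apply_wantlist({}, [{"address": "blk-1", "wantType": 0}], full=True)
wants = apply_wantlist(wants, [{"address": "blk-1", "cancel": True}], full=False)
assert wants == {}
```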
#### Block Delivery

Block deliveries contain the actual block data along with verification
information:

```protobuf
message BlockDelivery {
  bytes cid = 1;
  bytes data = 2;
  BlockAddress address = 3;
  bytes proof = 4;
}
```

**Fields:**

- `cid`: Content identifier of the block
- `data`: Raw block data (up to 100 MiB)
- `address`: The BlockAddress identifying this block
- `proof`: Merkle proof (CodexProof) verifying block correctness (required for dataset blocks)

**Merkle Proof Verification:**

When delivering dataset blocks (`address.leaf = true`):

- The delivery MUST include a Merkle proof (CodexProof)
- The proof verifies that the block at the given index is correctly part of the Merkle tree identified by the tree CID
- This applies to all datasets, irrespective of whether they have been erasure-coded or not
- Recipients MUST verify the proof before accepting the block
- Invalid proofs result in block rejection
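Verification of a dataset block can be sketched as follows. A plain binary SHA-256 Merkle tree is assumed here; the actual CodexProof encoding and hash construction are defined by the implementation and may differ:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def verify_proof(leaf: bytes, index: int, proof: list, root: bytes) -> bool:
    """Check that `leaf` sits at `index` in the tree with the given `root`.

    `proof` is the list of sibling hashes from the leaf level up to the root.
    """
    node = sha256(leaf)
    for sibling in proof:
        if index % 2 == 0:                 # we are the left child
            node = sha256(node + sibling)
        else:                              # we are the right child
            node = sha256(sibling + node)
        index //= 2
    return node == root

# Two-leaf tree: root = H(H(a) + H(b))
a, b = b"block-a", b"block-b"
root = sha256(sha256(a) + sha256(b))
assert verify_proof(a, 0, [sha256(b)], root)
assert not verify_proof(b, 0, [sha256(a)], root)  # wrong index is rejected
```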
#### Block Presence

Block presence messages indicate whether a peer has or does not have a
requested block:

```protobuf
enum BlockPresenceType {
  presenceHave = 0;
  presenceDontHave = 1;
}

message BlockPresence {
  BlockAddress address = 1;
  BlockPresenceType type = 2;
  bytes price = 3;
}
```

**Fields:**

- `address`: The block address being referenced
- `type`: Whether the peer has the block or not
- `price`: Price (UInt256 format)
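The `price` bytes carry a UInt256 value. One plausible encoding, a 32-byte big-endian integer, is sketched below; the spec does not state the byte order, so treat this as an assumption:

```python
def encode_uint256(value: int) -> bytes:
    """Encode a non-negative integer as a 32-byte big-endian UInt256."""
    if not 0 <= value < 2**256:
        raise ValueError("out of UInt256 range")
    return value.to_bytes(32, "big")

def decode_uint256(raw: bytes) -> int:
    return int.from_bytes(raw, "big")

price = encode_uint256(1_000_000)
assert len(price) == 32
assert decode_uint256(price) == 1_000_000
```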
#### Payment Messages

Payment-related messages support micropayments using Nitro state channels.

**Account Message:**

```protobuf
message AccountMessage {
  bytes address = 1; // Ethereum address to which payments should be made
}
```

**Fields:**

- `address`: Ethereum address for receiving payments

**State Channel Update:**

```protobuf
message StateChannelUpdate {
  bytes update = 1; // Signed Nitro state, serialized as JSON
}
```

**Fields:**

- `update`: Nitro state channel update containing payment information
## Security Considerations

### Block Verification

- All dataset blocks MUST include and verify Merkle proofs before acceptance
- Standalone blocks MUST verify that the CID matches the SHA256 hash of the data
- Peers SHOULD reject blocks that fail verification immediately

### DoS Protection

- Implementations SHOULD limit the number of concurrent block requests per peer
- Implementations SHOULD implement rate limiting for WantList updates
- Large WantLists MAY be rejected to prevent resource exhaustion

### Data Integrity

- All blocks MUST be validated before being stored or forwarded
- Zero-padding in dataset blocks MUST be verified to prevent data corruption
- Block sizes MUST be validated against protocol limits

### Privacy Considerations

- Block requests reveal information about what data a peer is seeking
- Implementations MAY implement request obfuscation strategies
- Presence information can leak storage capacity details
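The rate-limiting recommendation for WantList updates can be realized with, for example, a per-peer token bucket. A sketch with illustrative limits (the spec does not prescribe a mechanism or numbers):

```python
import time

class TokenBucket:
    """Simple per-peer limiter, e.g. for WantList updates."""
    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst   # tokens/sec refill, max tokens
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5.0, burst=10)       # e.g. 5 updates/sec, burst of 10
assert all(bucket.allow() for _ in range(10))  # the burst is admitted
assert not bucket.allow()                      # the 11th is rejected
```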
## Rationale

### Design Decisions

**Two-Tier Block Addressing:**
The protocol supports both standalone and dataset blocks to accommodate
different use cases.
Standalone blocks are simpler and don't require Merkle proofs, while
dataset blocks enable efficient verification of large datasets without
requiring the entire dataset.

**WantList Delta Updates:**
Supporting delta updates reduces bandwidth consumption when peers only
need to modify a small portion of their wants, which is common in
long-lived connections.

**Separate Presence Messages:**
Decoupling presence information from block delivery allows peers to
quickly assess availability without waiting for full block transfers.

**Fixed Block Size:**
The 64 KiB default block size balances efficient network transmission
with manageable memory overhead.

**Zero-Padding Requirement:**
Requiring zero-padding for incomplete dataset blocks ensures uniform
block sizes within datasets, simplifying Merkle tree construction and
verification.

**Protocol Buffers:**
Using Protocol Buffers provides efficient serialization, forward
compatibility, and wide language support.
## Copyright

Copyright and related rights waived via
[CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

### Normative

- [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt) - Key words for use in RFCs to Indicate Requirement Levels
- **libp2p**: <https://libp2p.io>
- **Protocol Buffers**: <https://protobuf.dev>
- **Multihash**: <https://multiformats.io/multihash/>
- **Multicodec**: <https://github.com/multiformats/multicodec>

### Informative

- **Codex Documentation**: <https://docs.codex.storage>
- **Codex Block Exchange Module Spec**: <https://github.com/codex-storage/codex-docs-obsidian/blob/main/10%20Notes/Specs/Block%20Exchange%20Module%20Spec.md>
- **Merkle Trees**: <https://en.wikipedia.org/wiki/Merkle_tree>
- **Content Addressing**: <https://en.wikipedia.org/wiki/Content-addressable_storage>
A further hunk converts the CODEX-MARKETPLACE header table to YAML front matter:

```diff
@@ -1,13 +1,18 @@
-# CODEX-MARKETPLACE
-
-| Field | Value |
-| --- | --- |
-| Name | Codex Storage Marketplace |
-| Slug | codex-marketplace |
-| Status | raw |
-| Category | Standards Track |
-| Editor | Codex Team and Dmitriy Ryajov <dryajov@status.im> |
-| Contributors | Mark Spanbroek <mark@codex.storage>, Adam Uhlíř <adam@codex.storage>, Eric Mastro <eric@codex.storage>, Jimmy Debe <jimmy@status.im>, Filip Dimitrijevic <filip@status.im> |
+---
+slug: codex-marketplace
+title: CODEX-MARKETPLACE
+name: Codex Storage Marketplace
+status: raw
+category: Standards Track
+tags: codex, storage, marketplace, smart-contract
+editor: Codex Team and Dmitriy Ryajov <dryajov@status.im>
+contributors:
+  - Mark Spanbroek <mark@codex.storage>
+  - Adam Uhlíř <adam@codex.storage>
+  - Eric Mastro <eric@codex.storage>
+  - Jimmy Debe <jimmy@status.im>
+  - Filip Dimitrijevic <filip@status.im>
+---

 ## Abstract
```
**codex/raw/community-history.md** (new file, 377 lines)

---
title: CODEX-COMMUNITY-HISTORY
name: Codex Community History
status: raw
tags: codex
editor:
contributors:
  - Jimmy Debe <jimmy@status.im>
---

## Abstract

This document describes how nodes in Status communities archive historical message data of their communities
using the [BitTorrent protocol](https://www.bittorrent.org/beps/bep_0003.html),
without being bound by the time range limit of [13/WAKU2-STORE](../../waku/standards/core/13/store.md).
It also describes how the archives are distributed to community members via the [Status network](https://status.network/),
so they can fetch them and
gain access to a complete message history.

## Background

Messages are stored by [13/WAKU2-STORE](../../waku/standards/core/13/store.md) nodes for a configurable time range,
which is limited by the overall storage provided by a [13/WAKU2-STORE](../../waku/standards/core/13/store.md) node.
Messages older than that period are no longer provided by [13/WAKU2-STORE](../../waku/standards/core/13/store.md) nodes,
making it impossible for other nodes to request historical messages that go beyond that time range.
This raises issues in the case of Status communities,
where recently joined members of a community are not able to request complete message histories of the community channels.

### Terminology

| Name | Description |
| ---- | -------------- |
| Waku node | A [10/WAKU2](../../waku/standards/core/10/waku.md) node that implements [11/WAKU2-RELAY](../../waku/standards/core/11/relay.md) |
| Store node | A [10/WAKU2](../../waku/standards/core/10/waku.md) node that implements [13/WAKU2-STORE](../../waku/standards/core/13/store.md) |
| Waku network | A group of [10/WAKU2](../../waku/standards/core/10/waku.md) nodes forming a graph, connected via [11/WAKU2-RELAY](../../waku/standards/core/11/relay.md) |
| Status user | A Status account that is used in a Status consumer product, such as Status Mobile or Status Desktop |
| Status node | A Status client run by a Status application |
| Control node | A Status node that owns the private key for a Status community |
| Community member | A Status user that is part of a Status community, not owning the private key of the community |
| Community member node | A Status node with message archive capabilities enabled, run by a community member |
| Live messages | [14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md) received through the Waku network |
| BitTorrent client | A program implementing the BitTorrent protocol |
| Torrent/Torrent file | A file containing metadata about data to be downloaded by BitTorrent clients |
| Magnet link | A link encoding the metadata provided by a torrent file (Magnet URI scheme) |
## Specification

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).

### Message History Archive

Message history archives are represented as `WakuMessageArchive` and
created from [14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md)s exported from the local database.
The following protocol buffer describes the `WakuMessageArchive`:

```protobuf
syntax = "proto3";

message WakuMessageArchiveMetadata {
  uint8 version = 1;
  uint64 from = 2;
  uint64 to = 3;
  repeated string content_Topic = 4;
}

message WakuMessageArchive {
  uint8 version = 1;
  WakuMessageArchiveMetadata metadata = 2;
  repeated WakuMessage messages = 3; // `WakuMessage` is provided by 14/WAKU2-MESSAGE
  bytes padding = 4;
}
```

The `from` field SHOULD contain the `timestamp` of the time range's lower bound.
This parallels the `timestamp` of a `WakuMessage`.
The `to` field SHOULD contain the `timestamp` of the time range's upper bound.
The `contentTopic` field MUST contain a list of all community channel `contentTopic`s.
The `messages` field MUST contain all messages that belong in the archive, given its `from`,
`to`, and `contentTopic` fields.

The `padding` field MUST contain the zero bytes needed to pad the protobuf-encoded `WakuMessageArchive`
so that its overall byte size is a multiple of the `pieceLength` used to divide the data into pieces.
This is needed for seamless encoding and
decoding of archival data when interacting with BitTorrent,
as explained in [creating message archive torrents](#creating-message-archive-torrents).
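The size arithmetic behind the padding requirement can be sketched as follows. `pieceLength` here is an illustrative value, and the sketch pads the encoded bytes externally; the spec instead places the zero bytes inside the message's `padding` field, which itself changes the encoded size, hence the re-encode loop described under "Ensuring Reproducible Data Pieces":

```python
PIECE_LENGTH = 256 * 1024  # illustrative pieceLength (an assumption, not from the spec)

def pad_to_piece_length(encoded: bytes, piece_length: int = PIECE_LENGTH) -> bytes:
    """Zero-pad an encoded WakuMessageArchive to a multiple of piece_length."""
    remainder = len(encoded) % piece_length
    if remainder == 0:
        return encoded
    return encoded + b"\x00" * (piece_length - remainder)

padded = pad_to_piece_length(b"\x01" * 300_000)
assert len(padded) % PIECE_LENGTH == 0
```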
#### Message History Archive Index

Control nodes MUST provide message archives for the entire community history.
The entire history consists of a set of `WakuMessageArchive`s,
where each archive contains a subset of the historical `WakuMessage`s for a time range of seven days.
All the `WakuMessageArchive`s are concatenated into a single file as a byte string,
see [Ensuring reproducible data pieces](#ensuring-reproducible-data-pieces).

Control nodes MUST create a message history archive index,
a `WakuMessageArchiveIndex` with metadata
that allows receiving nodes to fetch only the message history archives they are interested in.

##### WakuMessageArchiveIndex

```protobuf
syntax = "proto3";

message WakuMessageArchiveIndexMetadata {
  uint8 version = 1;
  WakuMessageArchiveMetadata metadata = 2;
  uint64 offset = 3;
  uint64 num_pieces = 4;
}

message WakuMessageArchiveIndex {
  map<string, WakuMessageArchiveIndexMetadata> archives = 1;
}
```

A `WakuMessageArchiveIndex` is a map where the key is the KECCAK-256 hash of the `WakuMessageArchiveIndexMetadata`
derived from a 7-day archive,
and the value is the instance of that `WakuMessageArchiveIndexMetadata` corresponding to that archive.

The `offset` field MUST contain the position at which the message history archive starts in the byte string
of the total message archive data.
This MUST be the sum of the lengths of all previously created message archives in bytes,
see [creating message archive torrents](#creating-message-archive-torrents).

The control node MUST update the `WakuMessageArchiveIndex` every time it creates one or
more `WakuMessageArchive`s and bundle it into a new torrent.
For every created `WakuMessageArchive`,
there MUST be a `WakuMessageArchiveIndexMetadata` entry in the `archives` field of the `WakuMessageArchiveIndex`.
### Creating Message Archive Torrents

Control nodes MUST create a .torrent file containing metadata for all message history archives.
To create a .torrent file and
later serve the message archive data on the BitTorrent network,
control nodes MUST store the necessary data in dedicated files on the file system.

A torrent's source folder MUST contain the following two files:

- `data`: Contains all protobuf-encoded `WakuMessageArchive`s (as byte strings)
  concatenated in ascending order based on their time
- `index`: Contains the protobuf-encoded `WakuMessageArchiveIndex`

Control nodes SHOULD store these files in a dedicated folder that is identifiable via a community identifier.

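A minimal sketch of that source-folder layout (Python; the folder name and archive contents are hypothetical, and real archives would be protobuf-encoded):

```python
import tempfile
from pathlib import Path

def write_torrent_source(folder: Path, archives: list[bytes], index_bytes: bytes) -> None:
    """Write the two files a torrent's source folder MUST contain (sketch)."""
    # `data`: all encoded WakuMessageArchives concatenated in ascending time order
    (folder / "data").write_bytes(b"".join(archives))
    # `index`: the encoded WakuMessageArchiveIndex
    (folder / "index").write_bytes(index_bytes)

with tempfile.TemporaryDirectory() as tmp:
    # Folder named after a (hypothetical) community identifier
    folder = Path(tmp) / "community-0xabc"
    folder.mkdir()
    write_torrent_source(folder, [b"archive-1", b"archive-2"], b"index-bytes")
    data = (folder / "data").read_bytes()
    print(data)  # b'archive-1archive-2'
```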
### Ensuring Reproducible Data Pieces

The control node MUST ensure that the byte string of the protobuf-encoded data
is equal to the byte string data from the previously generated message archive torrent,
plus the data of the latest seven days' worth of messages encoded as a `WakuMessageArchive`.
Therefore, the size of the data grows every seven days, as it is append-only.

Control nodes MUST ensure that the byte size
of every individual protobuf-encoded `WakuMessageArchive`
is a multiple of `pieceLength`, using the padding field.
If the `WakuMessageArchive` is not a multiple of `pieceLength`,
its padding field MUST be filled with zero bytes and
the `WakuMessageArchive` MUST be re-encoded until its size becomes a multiple of `pieceLength`.

This is necessary because the content of the data file will be split into pieces of `pieceLength` when the torrent file is created,
and the SHA1 hash of every piece is then stored in the torrent file and
later used by other nodes to request the data for each individual data piece.

By fitting message archives into a multiple of `pieceLength` and
filling any remaining space with zero bytes,
control nodes prevent the next message archive from occupying the remaining space of the last piece,
which would result in a different SHA1 hash for that piece.

Example: Without padding

Let `WakuMessageArchive` "A1" be of size 20 bytes:

``` text
0 11 22 33 44 55 66 77 88 99
10 11 12 13 14 15 16 17 18 19
```

With a `pieceLength` of 10 bytes, A1 will fit into 20 / 10 = 2 pieces:

```text
0 11 22 33 44 55 66 77 88 99 // piece[0] SHA1: 0x123
10 11 12 13 14 15 16 17 18 19 // piece[1] SHA1: 0x456
```

Example: With padding

Let `WakuMessageArchive` "A2" be of size 21 bytes:

```text
0 11 22 33 44 55 66 77 88 99
10 11 12 13 14 15 16 17 18 19
20
```

With a `pieceLength` of 10 bytes,
A2 will fit into 21 / 10 = 2 pieces.

The remainder will introduce a third piece:

```text
0 11 22 33 44 55 66 77 88 99 // piece[0] SHA1: 0x123
10 11 12 13 14 15 16 17 18 19 // piece[1] SHA1: 0x456
20                            // piece[2] SHA1: 0x789
```

The next `WakuMessageArchive` "A3" will be appended ("#3") to the existing data and
occupy the remaining space of the third data piece.

The piece at index 2 will now produce a different SHA1 hash:

```text
0 11 22 33 44 55 66 77 88 99 // piece[0] SHA1: 0x123
10 11 12 13 14 15 16 17 18 19 // piece[1] SHA1: 0x456
20 #3 #3 #3 #3 #3 #3 #3 #3 #3 // piece[2] SHA1: 0xeef
#3 #3 #3 #3 #3 #3 #3 #3 #3 #3 // piece[3]
```

By filling up the remaining space of the third piece with A2's padding field of zero bytes,
it is guaranteed that the piece's SHA1 hash will stay the same:

```text
0 11 22 33 44 55 66 77 88 99 // piece[0] SHA1: 0x123
10 11 12 13 14 15 16 17 18 19 // piece[1] SHA1: 0x456
20 0 0 0 0 0 0 0 0 0 // piece[2] SHA1: 0x999
#3 #3 #3 #3 #3 #3 #3 #3 #3 #3 // piece[3]
#3 #3 #3 #3 #3 #3 #3 #3 #3 #3 // piece[4]
```

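The padding rule illustrated by these examples can be sketched as follows (Python, non-normative). Appending zero bytes here only demonstrates the size invariant; the spec instead fills the archive's protobuf padding field and re-encodes:

```python
def pad_to_piece_length(encoded: bytes, piece_length: int) -> bytes:
    """Zero-pad an encoded archive to a multiple of piece_length (sketch)."""
    remainder = len(encoded) % piece_length
    if remainder == 0:
        return encoded
    return encoded + b"\x00" * (piece_length - remainder)

a2 = b"x" * 21                      # archive "A2" from the example
padded = pad_to_piece_length(a2, 10)
print(len(padded))                  # 30 -> the third piece is fully occupied
```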
### Seeding Message History Archives

The control node MUST seed the generated torrent until a new `WakuMessageArchive` is created.

The control node SHOULD NOT seed torrents for older message history archives.
Only one torrent at a time SHOULD be seeded.

### Creating Magnet Links

Once a torrent file for all message archives is created,
the control node MUST derive a magnet link,
following the Magnet URI scheme, using the underlying [BitTorrent protocol](https://www.bittorrent.org/beps/bep_0003.html) client.

#### Message Archive Distribution

Message archives are available via the BitTorrent network while they are being seeded by the control node.
Other community member nodes will download the message archives from the BitTorrent network
after receiving a magnet link that contains a message archive index.

The control node MUST send magnet links containing message archives and
the message archive index to a special community channel.
The `content_topic` of that special channel follows this format:

``` text
/{application-name}/{version-of-the-application}/{content-topic-name}/{encoding}
```
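For illustration, the topic format can be assembled as follows (Python; the concrete values are hypothetical and not defined by this section):

```python
def content_topic(app: str, version: str, name: str, encoding: str) -> str:
    """Assemble a content topic in the /{app}/{version}/{name}/{encoding} form."""
    return f"/{app}/{version}/{name}/{encoding}"

topic = content_topic("communities", "1", "archives", "proto")
print(topic)  # /communities/1/archives/proto
```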

All messages sent with this special channel's `content_topic` MUST be instances of `ApplicationMetadataMessage`,
with a [62/STATUS-PAYLOADS](../../status/62/payloads.md) type of `CommunityMessageArchiveIndex`.

Only the control node MAY post to the special channel.
Messages from other senders on this channel MUST be ignored by clients.
Community members MUST NOT have permission to send messages to the special channel.
However, community member nodes MUST subscribe to the special channel
to receive a [14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md) containing magnet links for message archives.

#### Canonical Message Histories

Only control nodes are allowed to distribute messages with magnet links
via the special channel for magnet link exchange.
Status nodes MUST ignore all messages in the special channel that aren't signed by a control node.
Since the magnet links are created from the control node's database
(and previously distributed archives),
the message history provided by the control node becomes the canonical message history and
single source of truth for the community.

Community member nodes MUST replace messages in their local database with the messages extracted from archives
within the same time range.
Messages that the control node didn't receive MUST be removed and
are no longer part of the message history of interest,
even if they already existed in a community member node's database.

### Fetching Message History Archives

The process of fetching message history archives is:

1. Receive a message archive index magnet link as described in [Message archive distribution](#message-archive-distribution)
2. Download the index file from the torrent, then determine which message archives to download
3. Download the individual archives

Community member nodes subscribe to the special channel of the control nodes that publish magnet links for message history archives.
There are two RECOMMENDED scenarios in which community member nodes can receive such a magnet link message from the special channel:

1. The member node receives it via live messages, by listening to the special channel.
2. The member node requests messages for a time range of up to 30 days from store nodes
(this is the case when a new community member joins a community).

When community member nodes receive a message with a `CommunityMessageHistoryArchive` [62/STATUS-PAYLOADS](../../status/62/payloads.md),
they MUST extract the `magnet_uri` and
SHOULD pass it to their underlying BitTorrent client to fetch the latest message history archive index,
which is the index file of the torrent, see [Creating message archive torrents](#creating-message-archive-torrents).

Due to the nature of distributed systems,
there's no guarantee that a received message is the "last" message.
This is especially true when community member nodes request historical messages from store nodes.
Therefore, community member nodes MUST wait 20 seconds after receiving the last `CommunityMessageArchive`
before they start extracting the magnet link to fetch the latest archive index.

Once a message history archive index is downloaded and
parsed back into a `WakuMessageArchiveIndex`,
community member nodes use a local lookup table to determine which of the listed archives are missing,
using the KECCAK-256 hashes stored in the index.

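That lookup reduces to a set difference; a sketch (Python, with placeholder hash strings):

```python
def missing_archives(index_hashes: set[str], local_hashes: set[str]) -> set[str]:
    """Archives listed in the downloaded index but absent locally (sketch)."""
    return index_hashes - local_hashes

listed = {"hash-1", "hash-2", "hash-3"}   # from the WakuMessageArchiveIndex
downloaded = {"hash-1", "hash-2"}         # tracked in the local database
print(missing_archives(listed, downloaded))  # {'hash-3'}
```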
For this lookup to work,
member nodes MUST store the KECCAK-256 hashes
of the `WakuMessageArchiveIndexMetadata` provided by the index file
for all of the message history archives they have downloaded into their local database.

Given a `WakuMessageArchiveIndex`, member nodes can access individual `WakuMessageArchiveIndexMetadata` entries to download individual archives.

Community member nodes MUST choose one of the following options:

1. Download all archives: Request and download all data pieces for the data provided by the torrent
(this is the case for new community member nodes that haven't downloaded any archives yet).
2. Download only the latest archive: Request and
download all pieces starting at the offset of the latest `WakuMessageArchiveIndexMetadata`
(this is the case for any member node that has already downloaded all previous history and
is now interested in only the latest archive).
3. Download specific archives: Inspect the `from` and
`to` fields of every `WakuMessageArchiveIndexMetadata` and
determine the pieces for archives of a specific time range
(this can be the case for member nodes that have recently joined the network and
are only interested in a subset of the complete history).
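The offset-based piece selection behind these options can be sketched as follows (Python, non-normative; it assumes offsets are piece-aligned thanks to the padding rule):

```python
def pieces_for_archive(offset: int, num_pieces: int, piece_length: int) -> range:
    """Piece indices covering one archive, given its byte offset (sketch)."""
    first = offset // piece_length  # offsets are multiples of piece_length
    return range(first, first + num_pieces)

# An archive starting at byte 30, spanning 2 pieces of length 10:
print(list(pieces_for_archive(30, 2, 10)))  # [3, 4]
```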

#### Storing Historical Messages

When message archives are fetched,
community member nodes MUST unwrap the resulting `WakuMessage` instances into `ApplicationMetadataMessage` instances
and store them in their local database.
Community member nodes SHOULD NOT store the wrapped `WakuMessage` messages.

All messages within the same time range MUST be replaced with the messages provided by the message history archive.

Community member nodes MUST ignore the expiration state of each archive message.

### Security Considerations

#### Multiple Community Owners

It is possible for control nodes to export the private key of their owned community and
pass it to other users so that they become control nodes as well.
This means it's possible for multiple control nodes to exist for one community.

This might conflict with the assumption that the control node serves as a single source of truth.
Multiple control nodes can have different message histories.
Not only will multiple control nodes multiply the amount of archive index messages being distributed to the network,
but they might also contain different sets of magnet links and their corresponding hashes.
Even if just a single message is missing in one of the histories,
the hashes presented in the archive indices will look completely different,
causing the community member node to download the corresponding archive,
which might be identical to an archive that was already downloaded,
except for that one message.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## References

- [13/WAKU2-STORE](../../waku/standards/core/13/store.md)
- [BitTorrent protocol](https://www.bittorrent.org/beps/bep_0003.html)
- [Status network](https://status.network/)
- [10/WAKU2](../../waku/standards/core/10/waku.md)
- [11/WAKU2-RELAY](../../waku/standards/core/11/relay.md)
- [14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md)
- [62/STATUS-PAYLOADS](../../status/62/payloads.md)

506
custom.css
@@ -1,506 +0,0 @@

:root {
  --content-max-width: 68em;
}

body {
  background: var(--bg);
  color: var(--fg);
  font-family: "Source Serif Pro", "Iowan Old Style", "Palatino Linotype", "Book Antiqua", Georgia, serif;
  line-height: 1.6;
  letter-spacing: 0.01em;
}

code, pre, .hljs {
  font-family: "SFMono-Regular", Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;
  font-size: 0.95em;
}

a {
  color: var(--links);
}

a:hover {
  color: var(--links);
  opacity: 0.85;
}

.page {
  background: var(--bg);
  box-shadow: none;
  border: 1px solid var(--table-border-color);
}

.menu-bar {
  background: var(--bg);
  box-shadow: none;
  border-bottom: 1px solid var(--table-border-color);
  min-height: 52px;
}

.menu-title {
  font-weight: 600;
  color: var(--fg);
}

.icon-button {
  box-shadow: none;
  border: 1px solid transparent;
}

#sidebar {
  background: var(--sidebar-bg);
  border-right: 1px solid var(--sidebar-spacer);
  box-shadow: none;
}

#sidebar a {
  color: var(--sidebar-fg);
}

#sidebar .chapter-item > a strong {
  color: var(--sidebar-active);
}

#sidebar .part-title {
  color: var(--sidebar-non-existant);
  font-weight: 600;
  letter-spacing: 0.02em;
}

main h1, main h2, main h3, main h4 {
  font-family: "Source Serif Pro", "Iowan Old Style", "Palatino Linotype", "Book Antiqua", Georgia, serif;
  color: var(--fg);
  font-weight: 600;
  margin-top: 1.1em;
  margin-bottom: 0.5em;
}

main p, main li {
  color: var(--fg);
}

main blockquote {
  border-left: 3px solid var(--quote-border);
  color: var(--fg);
  background: var(--quote-bg);
}

table {
  border: 1px solid var(--table-border-color);
  border-collapse: collapse;
  width: 100%;
}

th, td {
  border: 1px solid var(--table-border-color);
  padding: 0.4em 0.6em;
}

thead {
  background: var(--table-header-bg);
}

.content {
  padding: 1.5rem 2rem 3rem 2rem;
}

.nav-chapters, .nav-wrapper {
  box-shadow: none;
}

/* Landing layout */
.landing-hero {
  margin-bottom: 1.5rem;
  padding: 1.25rem 1.5rem;
  background: var(--bg);
  border: 1px solid var(--table-border-color);
}

.landing-hero p {
  margin: 0.3rem 0 0;
  color: var(--sidebar-fg);
}

.filter-row {
  display: flex;
  flex-wrap: wrap;
  gap: 0.5rem;
  align-items: center;
  margin-bottom: 0.75rem;
}

.filter-row input[type="search"] {
  padding: 0.5rem 0.65rem;
  border: 1px solid var(--searchbar-border-color);
  border-radius: 4px;
  min-width: 240px;
  background: var(--searchbar-bg);
  color: var(--searchbar-fg);
}

.chips {
  display: flex;
  gap: 0.5rem;
  flex-wrap: wrap;
}

.chip {
  display: inline-flex;
  align-items: center;
  gap: 0.4rem;
  padding: 0.35rem 0.6rem;
  border: 1px solid var(--table-border-color);
  border-radius: 999px;
  background: var(--theme-hover);
  color: var(--fg);
  cursor: pointer;
  font-size: 0.95em;
}

.chip.active {
  background: var(--theme-hover);
  border-color: var(--sidebar-active);
  color: var(--sidebar-active);
  font-weight: 600;
}

.quick-links {
  display: flex;
  gap: 0.5rem;
  flex-wrap: wrap;
  margin: 0.5rem 0 1rem 0;
}

.quick-links a {
  border: 1px solid var(--table-border-color);
  padding: 0.35rem 0.65rem;
  border-radius: 4px;
  background: var(--bg);
  text-decoration: none;
  color: var(--fg);
}

.quick-links a:hover {
  border-color: var(--sidebar-active);
  color: var(--links);
}

.rfc-table {
  width: 100%;
  border-collapse: collapse;
  margin-top: 0.75rem;
}

.rfc-table th, .rfc-table td {
  border: 1px solid var(--table-border-color);
  padding: 0.45rem 0.6rem;
}

.rfc-table thead {
  background: var(--table-header-bg);
}

.rfc-table tbody tr:hover {
  background: var(--theme-hover);
}

.badge {
  display: inline-block;
  padding: 0.15rem 0.45rem;
  border-radius: 4px;
  font-size: 0.85em;
  border: 1px solid var(--table-border-color);
  background: var(--table-alternate-bg);
  color: var(--fg);
}

.rfc-header {
  margin: 0.5rem 0 1.5rem 0;
  padding-bottom: 0.75rem;
  border-bottom: 1px solid var(--table-border-color);
}

.rfc-badges {
  display: flex;
  flex-wrap: wrap;
  gap: 0.5rem;
  margin: 0.5rem 0 0.75rem 0;
}

.rfc-badges .badge {
  font-weight: 600;
  background: var(--theme-hover);
}

.badge.category-standards,
.badge.category-bcp,
.badge.category-informational,
.badge.category-experimental,
.badge.category-other {
  font-weight: 500;
}

.rfc-meta-table {
  font-size: 0.95em;
}

.rfc-meta-table th {
  width: 9rem;
  text-align: left;
  background: var(--table-header-bg);
  font-weight: 600;
}

.rfc-meta-table td {
  vertical-align: top;
}

/* Landing polish */
main h1 {
  text-align: left;
}

.results-row {
  display: flex;
  justify-content: space-between;
  align-items: baseline;
  gap: 1rem;
  margin: 0.5rem 0 0.75rem 0;
  color: var(--sidebar-fg);
  font-size: 0.95em;
}

.results-count {
  color: var(--fg);
  font-weight: 600;
}

.results-hint {
  color: var(--sidebar-fg);
  font-size: 0.9em;
}

.table-wrap {
  overflow-x: auto;
  border: 1px solid var(--table-border-color);
  border-radius: 6px;
  background: var(--bg);
}

.table-wrap .rfc-table {
  margin: 0;
  border: none;
}

.rfc-table tbody tr:nth-child(even) {
  background: var(--table-alternate-bg);
}

.rfc-table th[data-sort] {
  cursor: pointer;
  user-select: none;
}

.rfc-table th.sorted {
  color: var(--links);
}

.rfc-table td:first-child a {
  word-break: break-word;
}

.noscript-note {
  margin-top: 0.75rem;
  color: var(--sidebar-fg);
}

@media (max-width: 900px) {
  .results-row {
    flex-direction: column;
    align-items: flex-start;
  }

  .filter-row input[type="search"] {
    width: 100%;
    min-width: 0;
  }
}

.menu-title-link {
  position: absolute;
  left: 50%;
  transform: translateX(-50%);
  text-decoration: none;
  color: inherit;
}

.menu-title-link .menu-title {
  text-decoration: none;
}

.site-nav {
  display: flex;
  align-items: center;
  gap: 0.75rem;
  margin-left: auto;
  margin-right: 1rem;
  font-size: 0.95em;
}

.site-nav .nav-link {
  color: var(--fg);
  text-decoration: none;
  cursor: pointer;
  background: transparent;
  border: 0;
  font: inherit;
  padding: 0;
}

.site-nav .nav-link:hover {
  color: var(--links);
}

.nav-dropdown {
  position: relative;
}

.nav-dropdown summary {
  list-style: none;
}

.nav-dropdown summary::-webkit-details-marker {
  display: none;
}

.nav-dropdown .nav-menu {
  display: none;
  position: absolute;
  right: 0;
  top: calc(100% + 0.35rem);
  background: var(--bg);
  border: 1px solid var(--table-border-color);
  border-radius: 6px;
  padding: 0.4rem 0.5rem;
  min-width: 10rem;
  z-index: 10;
}

.nav-dropdown[open] .nav-menu {
  display: flex;
  flex-direction: column;
  gap: 0.35rem;
}

.nav-dropdown .nav-menu a {
  color: var(--fg);
  text-decoration: none;
  padding: 0.2rem 0.2rem;
  border-radius: 4px;
}

.nav-dropdown .nav-menu a:hover {
  background: var(--theme-hover);
  color: var(--links);
}

.back-to-top-link {
  color: var(--links);
  opacity: 0;
  pointer-events: none;
  transition: opacity 0.2s ease;
}

.back-to-top-link.is-visible {
  opacity: 1;
  pointer-events: auto;
}

.site-footer {
  margin: 2.5rem auto 1.25rem auto;
  padding-top: 0.75rem;
  border-top: 1px solid var(--table-border-color);
  color: var(--sidebar-fg);
  font-size: 0.85em;
  display: flex;
  justify-content: center;
  gap: 0.6rem;
  max-width: var(--content-max-width);
}

.site-footer a {
  color: var(--links);
  text-decoration: none;
}

.site-footer a:hover {
  text-decoration: underline;
}

.footer-sep {
  color: var(--sidebar-fg);
}

.on-this-page {
  margin-left: 16px;
  border-inline-start: 3px solid var(--sidebar-header-border-color);
  padding-left: 8px;
  margin-top: 0.4rem;
  font-size: 0.92em;
}

.on-this-page::before {
  content: "On this page";
  display: block;
  margin-bottom: 0.35rem;
  color: var(--sidebar-fg);
  font-weight: 600;
  font-size: 0.85em;
  text-transform: uppercase;
  letter-spacing: 0.04em;
}

.on-this-page > ol {
  padding-left: 0;
}

@media (max-width: 900px) {
  .site-nav {
    display: none;
  }
}

.chapter-item > .chapter-link-wrapper > a,
.chapter-item > a {
  display: flex;
  align-items: center;
  gap: 0.4rem;
}

.section-toggle::before {
  content: "▸";
  display: inline-block;
  font-size: 0.9em;
  line-height: 1;
  transition: transform 0.15s ease;
}

.chapter-item.expanded > a .section-toggle::before,
.chapter-item.expanded > .chapter-link-wrapper > a .section-toggle::before {
  transform: rotate(90deg);
}

.chapter-item:not(.expanded) > ol.section {
  display: none;
}

.chapter-item:not(.expanded) + li > ol.section {
  display: none;
}

.chapter-item:not(.expanded) > .chapter-link-wrapper > a .section-toggle::before,
.chapter-item:not(.expanded) > a .section-toggle::before {
  transform: rotate(0deg);
}

@@ -1,45 +0,0 @@

# Vac RFC Index

An IETF-style index of Vac-managed RFCs across Waku, Nomos, Codex, and Status. Use the filters below to jump straight to a specification.

<div class="landing-hero">
  <div class="filter-row">
    <input id="rfc-search" type="search" placeholder="Search by number, title, status, project..." aria-label="Search RFCs">
    <div class="chips" id="status-chips">
      <span class="chip active" data-status="all" data-label="All">All</span>
      <span class="chip" data-status="stable" data-label="Stable">Stable</span>
      <span class="chip" data-status="draft" data-label="Draft">Draft</span>
      <span class="chip" data-status="raw" data-label="Raw">Raw</span>
      <span class="chip" data-status="deprecated" data-label="Deprecated">Deprecated</span>
      <span class="chip" data-status="deleted" data-label="Deleted">Deleted</span>
    </div>
  </div>
  <div class="filter-row">
    <div class="chips" id="project-chips">
      <span class="chip active" data-project="all" data-label="All projects">All projects</span>
      <span class="chip" data-project="vac" data-label="Vac">Vac</span>
      <span class="chip" data-project="waku" data-label="Waku">Waku</span>
      <span class="chip" data-project="status" data-label="Status">Status</span>
      <span class="chip" data-project="nomos" data-label="Nomos">Nomos</span>
      <span class="chip" data-project="codex" data-label="Codex">Codex</span>
    </div>
  </div>
  <div class="filter-row">
    <div class="chips" id="date-chips">
      <span class="chip active" data-date="all" data-label="All time">All time</span>
      <span class="chip" data-date="latest" data-label="Latest" data-count="false">Latest</span>
      <span class="chip" data-date="last90" data-label="Last 90 days">Last 90 days</span>
    </div>
  </div>
</div>

<div class="results-row">
  <div id="results-count" class="results-count">Loading RFC index...</div>
  <div class="results-hint">Click a column to sort</div>
</div>

<div id="rfc-table-container" class="table-wrap"></div>

<noscript>
  <p class="noscript-note">JavaScript is required to load the RFC index table.</p>
</noscript>

117
docs/SUMMARY.md
@@ -1,117 +0,0 @@

# Summary

[Introduction](README.md)
[About](about.md)

- [Vac](vac/README.md)
  - [1/COSS](vac/1/coss.md)
  - [2/MVDS](vac/2/mvds.md)
  - [3/Remote Log](vac/3/remote-log.md)
  - [4/MVDS Meta](vac/4/mvds-meta.md)
  - [25/Libp2p DNS Discovery](vac/25/libp2p-dns-discovery.md)
  - [32/RLN-V1](vac/32/rln-v1.md)
  - [Raw](vac/raw/README.md)
    - [Consensus Hashgraphlike](vac/raw/consensus-hashgraphlike.md)
    - [Decentralized Messaging Ethereum](vac/raw/decentralized-messaging-ethereum.md)
    - [ETH MLS Offchain](vac/raw/eth-mls-offchain.md)
    - [ETH MLS Onchain](vac/raw/eth-mls-onchain.md)
    - [ETH SecPM](vac/raw/deleted/eth-secpm.md)
    - [Gossipsub Tor Push](vac/raw/gossipsub-tor-push.md)
    - [Logos Capability Discovery](vac/raw/logos-capability-discovery.md)
    - [Mix](vac/raw/mix.md)
    - [Noise X3DH Double Ratchet](vac/raw/noise-x3dh-double-ratchet.md)
    - [RLN Interep Spec](vac/raw/rln-interep-spec.md)
    - [RLN Stealth Commitments](vac/raw/rln-stealth-commitments.md)
    - [RLN-V2](vac/raw/rln-v2.md)
    - [SDS](vac/raw/sds.md)
  - [Template](vac/template.md)

- [Waku](waku/README.md)
  - [Standards - Core](waku/standards/core/README.md)
    - [10/Waku2](waku/standards/core/10/waku2.md)
    - [11/Relay](waku/standards/core/11/relay.md)
    - [12/Filter](waku/standards/core/12/filter.md)
    - [13/Store](waku/standards/core/13/store.md)
    - [14/Message](waku/standards/core/14/message.md)
    - [15/Bridge](waku/standards/core/15/bridge.md)
    - [17/RLN Relay](waku/standards/core/17/rln-relay.md)
    - [19/Lightpush](waku/standards/core/19/lightpush.md)
    - [31/ENR](waku/standards/core/31/enr.md)
    - [33/Discv5](waku/standards/core/33/discv5.md)
    - [34/Peer Exchange](waku/standards/core/34/peer-exchange.md)
    - [36/Bindings API](waku/standards/core/36/bindings-api.md)
    - [64/Network](waku/standards/core/64/network.md)
    - [66/Metadata](waku/standards/core/66/metadata.md)
  - [Standards - Application](waku/standards/application/README.md)
    - [20/Toy ETH PM](waku/standards/application/20/toy-eth-pm.md)
    - [26/Payload](waku/standards/application/26/payload.md)
    - [53/X3DH](waku/standards/application/53/x3dh.md)
    - [54/X3DH Sessions](waku/standards/application/54/x3dh-sessions.md)
  - [Standards - Legacy](waku/standards/legacy/README.md)
    - [6/Waku1](waku/standards/legacy/6/waku1.md)
    - [7/Data](waku/standards/legacy/7/data.md)
    - [8/Mail](waku/standards/legacy/8/mail.md)
    - [9/RPC](waku/standards/legacy/9/rpc.md)
  - [Informational](waku/informational/README.md)
    - [22/Toy Chat](waku/informational/22/toy-chat.md)
    - [23/Topics](waku/informational/23/topics.md)
    - [27/Peers](waku/informational/27/peers.md)
    - [29/Config](waku/informational/29/config.md)
    - [30/Adaptive Nodes](waku/informational/30/adaptive-nodes.md)
  - [Deprecated](waku/deprecated/README.md)
    - [5/Waku0](waku/deprecated/5/waku0.md)
    - [16/RPC](waku/deprecated/16/rpc.md)
    - [18/Swap](waku/deprecated/18/swap.md)
    - [Fault Tolerant Store](waku/deprecated/fault-tolerant-store.md)

- [Nomos](nomos/README.md)
  - [Raw](nomos/raw/README.md)
    - [NomosDA Encoding](nomos/raw/nomosda-encoding.md)
    - [NomosDA Network](nomos/raw/nomosda-network.md)
    - [P2P Hardware Requirements](nomos/raw/p2p-hardware-requirements.md)
    - [P2P NAT Solution](nomos/raw/p2p-nat-solution.md)
    - [P2P Network Bootstrapping](nomos/raw/p2p-network-bootstrapping.md)
    - [P2P Network](nomos/raw/p2p-network.md)
    - [SDP](nomos/raw/sdp.md)
  - [Deprecated](nomos/deprecated/README.md)
    - [Claro](nomos/deprecated/claro.md)

- [Codex](codex/README.md)
  - [Raw](codex/raw/README.md)
    - [Block Exchange](codex/raw/codex-block-exchange.md)
    - [Marketplace](codex/raw/codex-marketplace.md)

- [Status](status/README.md)
  - [24/Curation](status/24/curation.md)
  - [28/Featuring](status/28/featuring.md)
  - [55/1-to-1 Chat](status/55/1to1-chat.md)
  - [56/Communities](status/56/communities.md)
  - [61/Community History Service](status/61/community-history-service.md)
  - [62/Payloads](status/62/payloads.md)
  - [63/Keycard Usage](status/63/keycard-usage.md)
  - [65/Account Address](status/65/account-address.md)
  - [71/Push Notification Server](status/71/push-notification-server.md)
  - [Raw](status/raw/README.md)
    - [Simple Scaling](status/raw/simple-scaling.md)
    - [Status App Protocols](status/raw/status-app-protocols.md)
    - [Status MVDS](status/raw/status-mvds.md)
    - [URL Data](status/raw/url-data.md)
    - [URL Scheme](status/raw/url-scheme.md)
  - [Deprecated](status/deprecated/README.md)
    - [3rd Party](status/deprecated/3rd-party.md)
    - [Account](status/deprecated/account.md)
    - [Client](status/deprecated/client.md)
    - [Dapp Browser API Usage](status/deprecated/dapp-browser-API-usage.md)
    - [EIPs](status/deprecated/eips.md)
    - [Ethereum Usage](status/deprecated/ethereum-usage.md)
    - [Group Chat](status/deprecated/group-chat.md)
    - [IPFS Gateway for Sticker Pack](status/deprecated/IPFS-gateway-for-sticker-Pack.md)
    - [Keycard Usage for Wallet and Chat Keys](status/deprecated/keycard-usage-for-wallet-and-chat-keys.md)
    - [Notifications](status/deprecated/notifications.md)
    - [Payloads](status/deprecated/payloads.md)
    - [Push Notification Server](status/deprecated/push-notification-server.md)
    - [Secure Transport](status/deprecated/secure-transport.md)
    - [Waku Mailserver](status/deprecated/waku-mailserver.md)
    - [Waku Usage](status/deprecated/waku-usage.md)
    - [Whisper Mailserver](status/deprecated/whisper-mailserver.md)
    - [Whisper Usage](status/deprecated/whisper-usage.md)

@@ -1,23 +0,0 @@
# About

The Vac RFC Index collects specifications maintained by Vac across Waku, Nomos,
Codex, and Status. Each RFC documents a protocol, process, or system in a
consistent, reviewable format.

This site is generated with mdBook from the repository
[vacp2p/rfc-index](https://github.com/vacp2p/rfc-index).

## Contributing

1. Open a pull request against the repo.
2. Add or update the RFC in the appropriate project folder.
3. Include clear status and category metadata in the RFC header table.

If you are unsure where a document belongs, open an issue first and we will
help route it.

## Links

- Vac: <https://vac.dev>
- IETF RFC Series: <https://www.rfc-editor.org/>
- Repository: <https://github.com/vacp2p/rfc-index>
@@ -1,37 +0,0 @@
# Codex RFCs

Specifications related to the Codex decentralised data storage platform.
Visit [Codex specs](https://github.com/codex-storage/codex-spec)
to view the new Codex specifications currently under discussion.

<div class="landing-hero">
<div class="filter-row">
<input id="rfc-search" type="search" placeholder="Search by number, title, status, project..." aria-label="Search RFCs">
<div class="chips" id="status-chips">
<span class="chip active" data-status="all" data-label="All">All</span>
<span class="chip" data-status="stable" data-label="Stable">Stable</span>
<span class="chip" data-status="draft" data-label="Draft">Draft</span>
<span class="chip" data-status="raw" data-label="Raw">Raw</span>
<span class="chip" data-status="deprecated" data-label="Deprecated">Deprecated</span>
<span class="chip" data-status="deleted" data-label="Deleted">Deleted</span>
</div>
</div>
<div class="filter-row">
<div class="chips" id="date-chips">
<span class="chip active" data-date="all" data-label="All time">All time</span>
<span class="chip" data-date="latest" data-label="Latest" data-count="false">Latest</span>
<span class="chip" data-date="last90" data-label="Last 90 days">Last 90 days</span>
</div>
</div>
</div>

<div class="results-row">
<div id="results-count" class="results-count">Loading RFC index...</div>
<div class="results-hint">Click a column to sort</div>
</div>

<div id="rfc-table-container" class="table-wrap" data-project="codex"></div>

<noscript>
<p class="noscript-note">JavaScript is required to load the RFC index table.</p>
</noscript>
@@ -1,3 +0,0 @@
# Codex Raw Specifications

Early-stage Codex specifications collected before reaching draft status.
@@ -1,38 +0,0 @@
# Nomos RFCs

Nomos is building a secure, flexible, and
scalable infrastructure for developers creating applications for the network state.
Published specifications are currently available at
[Nomos Specifications](https://nomos-tech.notion.site/project).

<div class="landing-hero">
<div class="filter-row">
<input id="rfc-search" type="search" placeholder="Search by number, title, status, project..." aria-label="Search RFCs">
<div class="chips" id="status-chips">
<span class="chip active" data-status="all" data-label="All">All</span>
<span class="chip" data-status="stable" data-label="Stable">Stable</span>
<span class="chip" data-status="draft" data-label="Draft">Draft</span>
<span class="chip" data-status="raw" data-label="Raw">Raw</span>
<span class="chip" data-status="deprecated" data-label="Deprecated">Deprecated</span>
<span class="chip" data-status="deleted" data-label="Deleted">Deleted</span>
</div>
</div>
<div class="filter-row">
<div class="chips" id="date-chips">
<span class="chip active" data-date="all" data-label="All time">All time</span>
<span class="chip" data-date="latest" data-label="Latest" data-count="false">Latest</span>
<span class="chip" data-date="last90" data-label="Last 90 days">Last 90 days</span>
</div>
</div>
</div>

<div class="results-row">
<div id="results-count" class="results-count">Loading RFC index...</div>
<div class="results-hint">Click a column to sort</div>
</div>

<div id="rfc-table-container" class="table-wrap" data-project="nomos"></div>

<noscript>
<p class="noscript-note">JavaScript is required to load the RFC index table.</p>
</noscript>
@@ -1,3 +0,0 @@
# Nomos Deprecated Specifications

Deprecated Nomos specifications kept for archival and reference purposes.
@@ -1,3 +0,0 @@
# Nomos Raw Specifications

Early-stage Nomos specifications that have not yet progressed beyond raw status.
@@ -1,730 +0,0 @@
[
  { "project": "codex", "slug": "Codex Block Exchange Protocol", "title": "Codex Block Exchange Protocol", "status": "raw", "category": "Standards Track", "path": "codex/raw/codex-block-exchange.html" },
  { "project": "codex", "slug": "codex-marketplace", "title": "Codex Storage Marketplace", "status": "raw", "category": "Standards Track", "path": "codex/raw/codex-marketplace.html" },
  { "project": "nomos", "slug": "Claro Consensus Protocol", "title": "Claro Consensus Protocol", "status": "deprecated", "category": "Standards Track", "path": "nomos/deprecated/claro.html" },
  { "project": "nomos", "slug": "Nomos P2P Network Bootstrapping Specification", "title": "Nomos P2P Network Bootstrapping Specification", "status": "raw", "category": "networking", "path": "nomos/raw/p2p-network-bootstrapping.html" },
  { "project": "nomos", "slug": "Nomos P2P Network NAT Solution Specification", "title": "Nomos P2P Network NAT Solution Specification", "status": "raw", "category": "networking", "path": "nomos/raw/p2p-nat-solution.html" },
  { "project": "nomos", "slug": "Nomos P2P Network Specification", "title": "Nomos P2P Network Specification", "status": "draft", "category": "networking", "path": "nomos/raw/p2p-network.html" },
  { "project": "nomos", "slug": "Nomos Service Declaration Protocol Specification", "title": "Nomos Service Declaration Protocol Specification", "status": "raw", "category": "unspecified", "path": "nomos/raw/sdp.html" },
  { "project": "nomos", "slug": "Nomos p2p Network Hardware Requirements Specification", "title": "Nomos p2p Network Hardware Requirements Specification", "status": "raw", "category": "infrastructure", "path": "nomos/raw/p2p-hardware-requirements.html" },
  { "project": "nomos", "slug": "NomosDA Encoding Protocol", "title": "NomosDA Encoding Protocol", "status": "raw", "category": "unspecified", "path": "nomos/raw/nomosda-encoding.html" },
  { "project": "nomos", "slug": "NomosDA Network", "title": "NomosDA Network", "status": "raw", "category": "unspecified", "path": "nomos/raw/nomosda-network.html" },
  { "project": "status", "slug": "24", "title": "Status Community Directory Curation Voting using Waku v2", "status": "draft", "category": "unspecified", "path": "status/24/curation.html" },
  { "project": "status", "slug": "28", "title": "Status community featuring using waku v2", "status": "draft", "category": "unspecified", "path": "status/28/featuring.html" },
  { "project": "status", "slug": "3rd party", "title": "3rd party", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/3rd-party.html" },
  { "project": "status", "slug": "55", "title": "Status 1-to-1 Chat", "status": "draft", "category": "Standards Track", "path": "status/55/1to1-chat.html" },
  { "project": "status", "slug": "56", "title": "Status Communities that run over Waku v2", "status": "draft", "category": "Standards Track", "path": "status/56/communities.html" },
  { "project": "status", "slug": "61", "title": "Status Community History Service", "status": "draft", "category": "Standards Track", "path": "status/61/community-history-service.html" },
  { "project": "status", "slug": "62", "title": "Status Message Payloads", "status": "draft", "category": "unspecified", "path": "status/62/payloads.html" },
  { "project": "status", "slug": "63", "title": "Status Keycard Usage", "status": "draft", "category": "Standards Track", "path": "status/63/keycard-usage.html" },
  { "project": "status", "slug": "65", "title": "Status Account Address", "status": "draft", "category": "Standards Track", "path": "status/65/account-address.html" },
  { "project": "status", "slug": "71", "title": "Push Notification Server", "status": "draft", "category": "Standards Track", "path": "status/71/push-notification-server.html" },
  { "project": "status", "slug": "Account", "title": "Account", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/account.html" },
  { "project": "status", "slug": "Client", "title": "Client", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/client.html" },
  { "project": "status", "slug": "Dapp browser API usage", "title": "Dapp browser API usage", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/dapp-browser-API-usage.html" },
  { "project": "status", "slug": "EIPS", "title": "EIPS", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/eips.html" },
  { "project": "status", "slug": "Group Chat", "title": "Group Chat", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/group-chat.html" },
  { "project": "status", "slug": "IPFS gateway for Sticker Pack", "title": "IPFS gateway for Sticker Pack", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/IPFS-gateway-for-sticker-Pack.html" },
  { "project": "status", "slug": "Keycard Usage for Wallet and Chat Keys", "title": "Keycard Usage for Wallet and Chat Keys", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/keycard-usage-for-wallet-and-chat-keys.html" },
  { "project": "status", "slug": "MVDS Usage in Status", "title": "MVDS Usage in Status", "status": "raw", "category": "Best Current Practice", "path": "status/raw/status-mvds.html" },
  { "project": "status", "slug": "Notifications", "title": "Notifications", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/notifications.html" },
  { "project": "status", "slug": "Payloads", "title": "Payloads", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/payloads.html" },
  { "project": "status", "slug": "Push notification server", "title": "Push notification server", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/push-notification-server.html" },
  { "project": "status", "slug": "Secure Transport", "title": "Secure Transport", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/secure-transport.html" },
  { "project": "status", "slug": "Status Protocol Stack", "title": "Status Protocol Stack", "status": "raw", "category": "Standards Track", "path": "status/raw/status-app-protocols.html" },
  { "project": "status", "slug": "Status Simple Scaling", "title": "Status Simple Scaling", "status": "raw", "category": "Informational", "path": "status/raw/simple-scaling.html" },
  { "project": "status", "slug": "Status URL Data", "title": "Status URL Data", "status": "raw", "category": "Standards Track", "path": "status/raw/url-data.html" },
  { "project": "status", "slug": "Status URL Scheme", "title": "Status URL Scheme", "status": "raw", "category": "Standards Track", "path": "status/raw/url-scheme.html" },
  { "project": "status", "slug": "Status interactions with the Ethereum blockchain", "title": "Status interactions with the Ethereum blockchain", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/ethereum-usage.html" },
  { "project": "status", "slug": "Waku Mailserver", "title": "Waku Mailserver", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/waku-mailserver.html" },
  { "project": "status", "slug": "Waku Usage", "title": "Waku Usage", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/waku-usage.html" },
  { "project": "status", "slug": "Whisper Usage", "title": "Whisper Usage", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/whisper-usage.html" },
  { "project": "status", "slug": "Whisper mailserver", "title": "Whisper mailserver", "status": "deprecated", "category": "unspecified", "path": "status/deprecated/whisper-mailserver.html" },
  { "project": "vac", "slug": "1", "title": "Consensus-Oriented Specification System", "status": "draft", "category": "Best Current Practice", "path": "vac/1/coss.html" },
  { "project": "vac", "slug": "2", "title": "Minimum Viable Data Synchronization", "status": "stable", "category": "unspecified", "path": "vac/2/mvds.html" },
  { "project": "vac", "slug": "25", "title": "Libp2p Peer Discovery via DNS", "status": "deleted", "category": "unspecified", "path": "vac/25/libp2p-dns-discovery.html" },
  { "project": "vac", "slug": "3", "title": "Remote log specification", "status": "draft", "category": "unspecified", "path": "vac/3/remote-log.html" },
  { "project": "vac", "slug": "32", "title": "Rate Limit Nullifier", "status": "draft", "category": "unspecified", "path": "vac/32/rln-v1.html" },
  { "project": "vac", "slug": "4", "title": "MVDS Metadata Field", "status": "draft", "category": "unspecified", "path": "vac/4/mvds-meta.html" },
  { "project": "vac", "slug": "Decentralized Key and Session Setup for Secure Messaging over Ethereum", "title": "Decentralized Key and Session Setup for Secure Messaging over Ethereum", "status": "raw", "category": "informational", "path": "vac/raw/decentralized-messaging-ethereum.html" },
  { "project": "vac", "slug": "Gossipsub Tor Push", "title": "Gossipsub Tor Push", "status": "raw", "category": "Standards Track", "path": "vac/raw/gossipsub-tor-push.html" },
  { "project": "vac", "slug": "Hashgraphlike Consensus Protocol", "title": "Hashgraphlike Consensus Protocol", "status": "raw", "category": "Standards Track", "path": "vac/raw/consensus-hashgraphlike.html" },
  { "project": "vac", "slug": "Interep as group management for RLN", "title": "Interep as group management for RLN", "status": "raw", "category": "unspecified", "path": "vac/raw/rln-interep-spec.html" },
  { "project": "vac", "slug": "Libp2p Mix Protocol", "title": "Libp2p Mix Protocol", "status": "raw", "category": "Standards Track", "path": "vac/raw/mix.html" },
  { "project": "vac", "slug": "Logos Capability Discovery Protocol", "title": "Logos Capability Discovery Protocol", "status": "raw", "category": "Standards Track", "path": "vac/raw/logos-capability-discovery.html" },
  { "project": "vac", "slug": "RLN Stealth Commitment Usage", "title": "RLN Stealth Commitment Usage", "status": "unknown", "category": "Standards Track", "path": "vac/raw/rln-stealth-commitments.html" },
  { "project": "vac", "slug": "Rate Limit Nullifier V2", "title": "Rate Limit Nullifier V2", "status": "raw", "category": "unspecified", "path": "vac/raw/rln-v2.html" },
  { "project": "vac", "slug": "Scalable Data Sync protocol for distributed logs", "title": "Scalable Data Sync protocol for distributed logs", "status": "raw", "category": "unspecified", "path": "vac/raw/sds.html" },
  { "project": "vac", "slug": "Secure 1-to-1 channel setup using X3DH and the double ratchet", "title": "Secure 1-to-1 channel setup using X3DH and the double ratchet", "status": "raw", "category": "Standards Track", "path": "vac/raw/noise-x3dh-double-ratchet.html" },
  { "project": "vac", "slug": "Secure channel setup using Ethereum accounts", "title": "Secure channel setup using Ethereum accounts", "status": "deleted", "category": "Standards Track", "path": "vac/raw/deleted/eth-secpm.html" },
  { "project": "vac", "slug": "Secure channel setup using decentralized MLS and Ethereum accounts", "title": "Secure channel setup using decentralized MLS and Ethereum accounts", "status": "raw", "category": "Standards Track", "path": "vac/raw/eth-mls-onchain.html" },
  { "project": "vac", "slug": "Secure channel setup using decentralized MLS and Ethereum accounts", "title": "Secure channel setup using decentralized MLS and Ethereum accounts", "status": "raw", "category": "Standards Track", "path": "vac/raw/eth-mls-offchain.html" },
  { "project": "waku", "slug": "10", "title": "Waku v2", "status": "draft", "category": "unspecified", "path": "waku/standards/core/10/waku2.html" },
  { "project": "waku", "slug": "11", "title": "Waku v2 Relay", "status": "stable", "category": "unspecified", "path": "waku/standards/core/11/relay.html" },
  { "project": "waku", "slug": "12", "title": "Waku v2 Filter", "status": "draft", "category": "unspecified", "path": "waku/standards/core/12/filter.html" },
  { "project": "waku", "slug": "13", "title": "Waku Store Query", "status": "draft", "category": "unspecified", "path": "waku/standards/core/13/store.html" },
  { "project": "waku", "slug": "14", "title": "Waku v2 Message", "status": "stable", "category": "Standards Track", "path": "waku/standards/core/14/message.html" },
  { "project": "waku", "slug": "15", "title": "Waku Bridge", "status": "draft", "category": "unspecified", "path": "waku/standards/core/15/bridge.html" },
  { "project": "waku", "slug": "16", "title": "Waku v2 RPC API", "status": "deprecated", "category": "unspecified", "path": "waku/deprecated/16/rpc.html" },
  { "project": "waku", "slug": "17", "title": "Waku v2 RLN Relay", "status": "draft", "category": "unspecified", "path": "waku/standards/core/17/rln-relay.html" },
  { "project": "waku", "slug": "18", "title": "Waku SWAP Accounting", "status": "deprecated", "category": "unspecified", "path": "waku/deprecated/18/swap.html" },
  { "project": "waku", "slug": "19", "title": "Waku v2 Light Push", "status": "draft", "category": "unspecified", "path": "waku/standards/core/19/lightpush.html" },
  { "project": "waku", "slug": "20", "title": "Toy Ethereum Private Message", "status": "draft", "category": "unspecified", "path": "waku/standards/application/20/toy-eth-pm.html" },
  { "project": "waku", "slug": "21", "title": "Waku v2 Fault-Tolerant Store", "status": "deleted", "category": "unspecified", "path": "waku/deprecated/fault-tolerant-store.html" },
  { "project": "waku", "slug": "22", "title": "Waku v2 Toy Chat", "status": "draft", "category": "unspecified", "path": "waku/informational/22/toy-chat.html" },
  { "project": "waku", "slug": "23", "title": "Waku v2 Topic Usage Recommendations", "status": "draft", "category": "Informational", "path": "waku/informational/23/topics.html" },
  { "project": "waku", "slug": "26", "title": "Waku Message Payload Encryption", "status": "draft", "category": "unspecified", "path": "waku/standards/application/26/payload.html" },
  { "project": "waku", "slug": "27", "title": "Waku v2 Client Peer Management Recommendations", "status": "draft", "category": "unspecified", "path": "waku/informational/27/peers.html" },
  { "project": "waku", "slug": "29", "title": "Waku v2 Client Parameter Configuration Recommendations", "status": "draft", "category": "unspecified", "path": "waku/informational/29/config.html" },
  { "project": "waku", "slug": "30", "title": "Adaptive nodes", "status": "draft", "category": "unspecified", "path": "waku/informational/30/adaptive-nodes.html" },
  { "project": "waku", "slug": "31", "title": "Waku v2 usage of ENR", "status": "draft", "category": "unspecified", "path": "waku/standards/core/31/enr.html" },
  { "project": "waku", "slug": "33", "title": "Waku v2 Discv5 Ambient Peer Discovery", "status": "draft", "category": "unspecified", "path": "waku/standards/core/33/discv5.html" },
  { "project": "waku", "slug": "34", "title": "Waku2 Peer Exchange", "status": "draft", "category": "Standards Track", "path": "waku/standards/core/34/peer-exchange.html" },
  { "project": "waku", "slug": "36", "title": "Waku v2 C Bindings API", "status": "draft", "category": "unspecified", "path": "waku/standards/core/36/bindings-api.html" },
  { "project": "waku", "slug": "5", "title": "Waku v0", "status": "deprecated", "category": "unspecified", "path": "waku/deprecated/5/waku0.html" },
  { "project": "waku", "slug": "53", "title": "X3DH usage for Waku payload encryption", "status": "draft", "category": "Standards Track", "path": "waku/standards/application/53/x3dh.html" },
  { "project": "waku", "slug": "54", "title": "Session management for Waku X3DH", "status": "draft", "category": "Standards Track", "path": "waku/standards/application/54/x3dh-sessions.html" },
  { "project": "waku", "slug": "6", "title": "Waku v1", "status": "stable", "category": "unspecified", "path": "waku/standards/legacy/6/waku1.html" },
  { "project": "waku", "slug": "64", "title": "Waku v2 Network", "status": "draft", "category": "Best Current Practice", "path": "waku/standards/core/64/network.html" },
  { "project": "waku", "slug": "66", "title": "Waku Metadata Protocol", "status": "draft", "category": "unspecified", "path": "waku/standards/core/66/metadata.html" },
  { "project": "waku", "slug": "7", "title": "Waku Envelope data field", "status": "stable", "category": "unspecified", "path": "waku/standards/legacy/7/data.html" },
  { "project": "waku", "slug": "8", "title": "Waku Mailserver", "status": "stable", "category": "unspecified", "path": "waku/standards/legacy/8/mail.html" },
  { "project": "waku", "slug": "9", "title": "Waku RPC API", "status": "stable", "category": "unspecified", "path": "waku/standards/legacy/9/rpc.html" }
]
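The generated RFC index above is the data that the landing pages' status chips and search box operate on. As a rough sketch of how a loader script might consume such an array (the function and variable names here are illustrative assumptions, not the repository's actual code), filtering entries by project and status could look like:

```javascript
// Hypothetical consumer of the generated rfc_index.json entries.
// A small inline sample stands in for the full fetched array.
const entries = [
  { project: "waku", slug: "11", title: "Waku v2 Relay", status: "stable" },
  { project: "waku", slug: "16", title: "Waku v2 RPC API", status: "deprecated" },
  { project: "vac", slug: "2", title: "Minimum Viable Data Synchronization", status: "stable" },
];

// Mirrors the "All"/"Stable"/"Draft"/... chips: "all" disables that filter.
function filterEntries(all, { project = "all", status = "all" } = {}) {
  return all.filter(
    (e) =>
      (project === "all" || e.project === project) &&
      (status === "all" || e.status === status)
  );
}

// e.g. the "Stable" chip on the Waku landing page:
const stableWaku = filterEntries(entries, { project: "waku", status: "stable" });
```

A real loader would presumably also handle the date chips, column sorting, and text search; this sketch covers only the project/status filtering implied by the `data-status` and `data-project` attributes.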
@@ -1,36 +0,0 @@
# Status RFCs

Status is a communication tool that provides privacy features for its users.
Specifications can also be viewed at [Status](https://status.app/specs).

<div class="landing-hero">
<div class="filter-row">
<input id="rfc-search" type="search" placeholder="Search by number, title, status, project..." aria-label="Search RFCs">
<div class="chips" id="status-chips">
<span class="chip active" data-status="all" data-label="All">All</span>
<span class="chip" data-status="stable" data-label="Stable">Stable</span>
<span class="chip" data-status="draft" data-label="Draft">Draft</span>
<span class="chip" data-status="raw" data-label="Raw">Raw</span>
<span class="chip" data-status="deprecated" data-label="Deprecated">Deprecated</span>
<span class="chip" data-status="deleted" data-label="Deleted">Deleted</span>
</div>
</div>
<div class="filter-row">
<div class="chips" id="date-chips">
<span class="chip active" data-date="all" data-label="All time">All time</span>
<span class="chip" data-date="latest" data-label="Latest" data-count="false">Latest</span>
<span class="chip" data-date="last90" data-label="Last 90 days">Last 90 days</span>
</div>
</div>
</div>

<div class="results-row">
<div id="results-count" class="results-count">Loading RFC index...</div>
<div class="results-hint">Click a column to sort</div>
</div>

<div id="rfc-table-container" class="table-wrap" data-project="status"></div>

<noscript>
<p class="noscript-note">JavaScript is required to load the RFC index table.</p>
</noscript>
@@ -1,3 +0,0 @@
# Status Deprecated Specifications

Deprecated Status specifications maintained for archival purposes.
@@ -1,3 +0,0 @@
# Status Raw Specifications

Early-stage Status specifications that precede draft or stable status.
@@ -1,41 +0,0 @@
# Vac RFCs

Vac builds public good protocols for the decentralised web.
Vac acts as a custodian for the protocols that live in the RFC-Index repository.
With the goal of widespread adoption,
Vac ensures the protocols adhere to a set of principles,
including but not limited to liberty, security, privacy, decentralisation, and inclusivity.

To learn more, visit [Vac Research](https://vac.dev/).

<div class="landing-hero">
<div class="filter-row">
<input id="rfc-search" type="search" placeholder="Search by number, title, status, project..." aria-label="Search RFCs">
<div class="chips" id="status-chips">
<span class="chip active" data-status="all" data-label="All">All</span>
<span class="chip" data-status="stable" data-label="Stable">Stable</span>
<span class="chip" data-status="draft" data-label="Draft">Draft</span>
<span class="chip" data-status="raw" data-label="Raw">Raw</span>
<span class="chip" data-status="deprecated" data-label="Deprecated">Deprecated</span>
<span class="chip" data-status="deleted" data-label="Deleted">Deleted</span>
</div>
</div>
<div class="filter-row">
<div class="chips" id="date-chips">
<span class="chip active" data-date="all" data-label="All time">All time</span>
<span class="chip" data-date="latest" data-label="Latest" data-count="false">Latest</span>
<span class="chip" data-date="last90" data-label="Last 90 days">Last 90 days</span>
</div>
</div>
</div>

<div class="results-row">
<div id="results-count" class="results-count">Loading RFC index...</div>
<div class="results-hint">Click a column to sort</div>
</div>

<div id="rfc-table-container" class="table-wrap" data-project="vac"></div>

<noscript>
<p class="noscript-note">JavaScript is required to load the RFC index table.</p>
</noscript>
@@ -1,435 +0,0 @@
|
||||
# ETH-MLS-OFFCHAIN
|
||||
|
||||
| Field | Value |
|
||||
| --- | --- |
|
||||
| Name | Secure channel setup using decentralized MLS and Ethereum accounts |
|
||||
| Status | raw |
|
||||
| Category | Standards Track |
|
||||
| Editor | Ugur Sen [ugur@status.im](mailto:ugur@status.im) |
|
||||
| Contributors | seemenkina [ekaterina@status.im](mailto:ekaterina@status.im) |
|
||||
|
||||
## Abstract

The following document specifies an Ethereum-authenticated, scalable,
and decentralized secure group messaging application
built on a Messaging Layer Security (MLS) backend.
Decentralization here means that each user is a node in a P2P network and
each user has a voice in any change to the group.
This is achieved by integrating a consensus mechanism.
Lastly, this RFC can also be referred to as de-MLS,
decentralized MLS, to emphasize its deviation
from the centralized trust assumptions of traditional MLS deployments.
## Motivation

Group messaging is a fundamental part of digital communication,
yet most existing systems depend on centralized servers,
which introduce risks around privacy, censorship, and unilateral control.
In restrictive settings, servers can be blocked or surveilled;
in more open environments, users still face opaque moderation policies,
data collection, and exclusion from decision-making processes.
To address this, we propose a decentralized, scalable peer-to-peer
group messaging system where each participant runs a node, contributes
to message propagation, and takes part in governance autonomously.
Group membership changes are decided collectively through a lightweight,
partially synchronous, fault-tolerant consensus protocol without a centralized identity.
This design enables truly democratic group communication and is well-suited
for use cases like activist collectives, research collaborations, DAOs, support groups,
and decentralized social platforms.
## Format Specification

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document
are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).

### Assumptions

- Nodes in the P2P network can discover other nodes, or will connect to other nodes when subscribing to the same topic in gossipsub.
- We MAY have non-reliable (silent) nodes.
- We MUST have a consensus protocol that is lightweight, scalable, and finalizes within a bounded time.
## Roles

The three roles used in de-MLS are as follows:

- `node`: Nodes are participants in the network that are not currently members
of any secure group messaging session but remain available as potential candidates for group membership.
- `member`: Members are special nodes in the secure group messaging session who
hold the current group key.
Each node is assigned a unique identity represented as a 20-byte value named `member id`.
- `steward`: Stewards are special and transparent members of the secure group
messaging session who organize changes by releasing commit messages for the voted proposals.
There are two special subsets of stewards, the epoch steward and the backup steward,
which are defined in the section de-MLS Objects.
## MLS Background

de-MLS builds on an MLS backend, so the MLS services and other MLS components
are taken from the original [MLS specification](https://datatracker.ietf.org/doc/rfc9420/), with or without modifications.

### MLS Services

MLS operates with two services: an authentication service (AS) and a delivery service (DS).
The authentication service enables group members to authenticate the credentials presented by other group members.
The delivery service routes MLS messages among the nodes or
members of the protocol in the correct order and
manages the `keyPackage` of each user, where a `keyPackage` is an object
that provides some public information about a user.
### MLS Objects

The following section presents the MLS objects and components used in this RFC:

`Epoch`: A time interval delimiting a group state as defined by the members,
section 3.4 in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).

`MLS proposal message`: Members MUST receive the proposal message prior to the
corresponding commit message that initiates a new epoch with key changes,
in order to ensure the intended security properties, section 12.1 in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
Here, the add and remove proposals are used.

`Application message`: This message type is used for arbitrary encrypted communication between group members.
[MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) restricts it: if there is a pending proposal,
application messages should be withheld.
Note that since standard MLS relies on servers, the delay between proposal and commit messages is very small.

`Commit message`: After members receive the proposals regarding group changes,
the committer, who may be any member of the group, as specified in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/),
generates the necessary key material for the next epoch, including the appropriate welcome messages
for new joiners and new entropy for removed members. In this RFC, committers MUST be stewards only.
### de-MLS Objects

This section presents the de-MLS objects:

`Voting proposal`: Similar to MLS proposals, but processed only if approved through a voting process.
They function as application messages in the MLS group,
allowing the steward to collect them without halting the protocol.
There are three types of `voting proposal`, according to the type of consensus as shown in the Consensus Types section:
`commit proposal`, `steward election proposal`, and `emergency criteria proposal`.

`Epoch steward`: The steward assigned to commit in `epoch E` according to the steward list.
Holds the primary responsibility for creating the commit in that epoch.

`Backup steward`: The steward next in line after the `epoch steward` on the `steward list` in `epoch E`.
Only becomes active if the `epoch steward` is malicious or fails,
in which case it completes the commitment phase.
If unused in `epoch E`, it automatically becomes the `epoch steward` in `epoch E+1`.

`Steward list`: An ordered list that contains the `member id`s of authorized stewards.
Each steward in the list becomes primarily responsible for creating the commit message when its turn arrives,
according to this order for each epoch.
For example, suppose there are two stewards in the list, `steward A` first and `steward B` last.
`steward A` is responsible for creating the commit message for the first epoch.
Similarly, `steward B` is responsible for the last epoch.
Since the `epoch steward` is the primary committer for an epoch,
it holds the main responsibility for producing the commit.
However, other stewards MAY also generate a commit within the same epoch to preserve liveness
in case the epoch steward is inactive or slow.
Duplicate commits are not re-applied, and only the single valid commit for the epoch is accepted by the group,
as described in the section on filtering proposals against multiple committing.

Therefore, if a steward turns out to be malicious, the `backup steward` is charged with committing.
Lastly, the size of the list is named `sn`, which also gives the epoch interval for steward list determination.
## Flow

The general flow is as follows:

- A steward initializes a group just once, and then sends out Group Announcements (GA) periodically.
- Meanwhile, each `node` creates and sends its `credential`, which includes a `keyPackage`.
- Each `member` creates `voting proposals` and sends them to the MLS group during `epoch E`.
- Meanwhile, the `steward` collects finalized `voting proposals` from the MLS group, converts them into
`MLS proposals`, and then sends them with the corresponding `commit messages`.
- Eventually, with the commit messages, all members start the next `epoch E+1`.
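The steps above can be sketched as a per-epoch loop. This is an illustrative sketch only: the proposal/commit structures are plain dictionaries, GA handling and the actual MLS wiring are elided, and all names are assumptions rather than normative objects.

```python
def run_epoch(members: set, steward: str, voting_proposals: list):
    """One de-MLS epoch sketch: keep YES-finalized voting proposals,
    convert them into MLS-style proposals, commit as the steward,
    and apply the membership changes for the next epoch."""
    approved = [p for p in voting_proposals if p["result"] == "YES"]
    # Convert approved voting proposals into add/remove proposals.
    mls_proposals = [{"kind": p["kind"], "target": p["target"]} for p in approved]
    commit = {"by": steward, "proposals": mls_proposals}  # only the steward commits
    for p in mls_proposals:
        if p["kind"] == "add":
            members.add(p["target"])
        else:
            members.discard(p["target"])
    return members, commit

members, commit = run_epoch(
    {"steward", "alice"}, "steward",
    [{"result": "YES", "kind": "add", "target": "bob"},
     {"result": "NO", "kind": "add", "target": "carol"}])
```

Only the YES-finalized proposal takes effect; the rejected one is silently dropped, matching the flow above.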
## Creating Voting Proposal

A `member` MAY initialize the voting with the proposal payload,
which is implemented using [protocol buffers v3](https://protobuf.dev/) as follows:

```protobuf
syntax = "proto3";

message Proposal {
  string name = 10;                  // Proposal name
  string payload = 11;               // Describes what the vote is for
  int32 proposal_id = 12;            // Unique identifier of the proposal
  bytes proposal_owner = 13;         // Public key of the creator
  repeated Vote votes = 14;          // Vote list in the proposal
  int32 expected_voters_count = 15;  // Maximum number of distinct voters
  int32 round = 16;                  // Number of votes
  int64 timestamp = 17;              // Creation time of the proposal
  int64 expiration_time = 18;        // Time interval during which the proposal is active
  bool liveness_criteria_yes = 19;   // How silent peers' votes are counted
}
```
```protobuf
message Vote {
  int32 vote_id = 20;        // Unique identifier of the vote
  bytes vote_owner = 21;     // Voter's public key
  int64 timestamp = 22;      // Time when the vote was cast
  bool vote = 23;            // Vote bool value (true/false)
  bytes parent_hash = 24;    // Hash of the previous owner's Vote
  bytes received_hash = 25;  // Hash of the previously received Vote
  bytes vote_hash = 26;      // Hash of all previously defined fields in Vote
  bytes signature = 27;      // Signature of vote_hash
}
```
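The `vote_hash` chaining above can be sketched as follows. The field serialization (byte widths, concatenation order) is an assumption for illustration, not the normative wire format, which would be the protobuf encoding.

```python
import hashlib

def vote_hash(vote_id: int, vote_owner: bytes, timestamp: int, vote: bool,
              parent_hash: bytes, received_hash: bytes) -> bytes:
    """Hash all previously defined Vote fields (assumed serialization)."""
    h = hashlib.sha256()
    for part in (vote_id.to_bytes(4, "big"), vote_owner,
                 timestamp.to_bytes(8, "big"), bytes([vote]),
                 parent_hash, received_hash):
        h.update(part)
    return h.digest()

# Chain two votes: the second links to the first via parent/received hashes.
genesis = b"\x00" * 32
h1 = vote_hash(1, b"alice-pk", 1700000000, True, genesis, genesis)
h2 = vote_hash(2, b"bob-pk", 1700000005, True, h1, h1)
```

The `signature` field would then be computed over `vote_hash`, binding each vote to its position in the chain.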

The voting proposal MAY include adding a `node` or removing a `member`.
After the `member` creates the voting proposal,
it is emitted to the network via the MLS `Application message` with a lightweight,
epoch-based voting scheme such as [hashgraph-like consensus](https://github.com/vacp2p/rfc-index/blob/consensus-hashgraph-like/vac/raw/consensus-hashgraphlike.md).
The consensus result MUST be finalized within the epoch as YES or NO.

If the voting result is YES, the voting proposal will be converted into
an MLS proposal by the `steward`, followed by a commit message that starts the new epoch.
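The consensus mechanism itself is specified elsewhere; as a minimal sketch, a proposal can be tallied once it expires or once all expected voters have voted, and finalized as YES on a simple majority. The finalization condition and all names here are illustrative assumptions, not the hashgraph-like protocol itself.

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    expected_voters_count: int      # maximum number of distinct voters
    expiration_time: int            # proposal is active until this time
    votes: list = field(default_factory=list)  # list of (owner, bool) pairs

def finalize(p: Proposal, now: int):
    """Return 'YES'/'NO' once the proposal can be finalized, else None."""
    owners = {o for o, _ in p.votes}   # count distinct voters only
    if now < p.expiration_time and len(owners) < p.expected_voters_count:
        return None                    # still collecting votes
    yes = sum(1 for _, v in p.votes if v)
    return "YES" if yes > len(p.votes) - yes else "NO"

p = Proposal(expected_voters_count=3, expiration_time=100,
             votes=[("a", True), ("b", True), ("c", False)])
```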
## Creating welcome message

When an `MLS proposal message` is created by the `steward`,
a `commit message` SHOULD follow to the members,
as in section 12.4 of [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
In order for a new `member` joining the group to synchronize with the current members
who received the `commit message`,
the `steward` sends a welcome message to the node as the new `member`,
as in section 12.4.3.1 of [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
## Single steward

The naive way to create decentralized secure group messaging is to have a single transparent `steward`
who only applies the changes according to the result of the voting.

This is mostly similar to the general flow, as specified in the voting proposal and welcome message creation sections.

1. A single `steward` initializes a group with group parameters
as in section 8.1, Group Context, in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
2. The `steward` creates a group announcement (GA) according to the previous step and
broadcasts it to the whole network periodically. The GA message is visible to all `nodes` in the network.
3. Each `node` that wants to be a `member` needs to obtain this announcement and create a `credential`
that includes a `keyPackage`, as specified in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) section 10.
4. The `node` sends its `KeyPackage` in plaintext, together with its signature, to the current `steward`,
whose public key is announced in the welcome topic. This step is crucial for security, ensuring that malicious nodes/stewards
cannot use others' `KeyPackages`.
It also provides flexibility for liveness in multi-steward settings,
allowing more than one steward to obtain `KeyPackages` to commit.
5. The `steward` aggregates all `KeyPackages` and utilizes them to provision group additions for new members,
based on the outcome of the voting process.
6. Any `member` can start to create `voting proposals` for adding or removing users,
and present them for voting in the MLS group as application messages.

However, unlimited use of `voting proposals` within the group may be misused by
malicious or overly active members.
Therefore, an application-level constraint can be introduced to limit the number
or frequency of proposals initiated by each member to prevent spam or abuse.
7. Meanwhile, the `steward` collects finalized `voting proposals` within epoch `E`
that have received affirmative votes from members via application messages.
Otherwise, the `steward` discards proposals that did not receive a majority of "YES" votes.
Since voting proposals are transmitted as application messages, omitting them does not affect
the protocol’s correctness or consistency.
8. The `steward` converts all approved `voting proposals` into
the corresponding `MLS proposals` and `commit message`, and
transmits both in a single operation as in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) section 12.4,
including welcome messages for the new members.
Thus, the `commit message` ends the previous epoch and creates a new one.
9. The `members` apply the incoming `commit message` by checking the signatures and `voting proposals`,
and synchronize with the upcoming epoch.
## Multi stewards

Decentralization has already been achieved in the previous section.
However, to improve availability and ensure censorship resistance,
the single steward protocol is extended to a multi steward architecture.
In this design, each epoch is coordinated by a designated steward,
operating under the same protocol as the single steward model.
Thus, the multi steward approach primarily defines how steward roles
rotate across epochs while preserving the underlying structure and logic of the original protocol.
Two variants of the multi steward design are introduced to address different system requirements.
### Consensus Types

The consensus is agnostic to its payload; therefore, it can be used for various purposes.
Note that each message for the consensus of proposals is an `application message` as in the MLS Objects section.
It is used in three ways, as follows:

1. `Commit proposal`: The proposal instance specified in the Creating Voting Proposal section,
whose `Proposal.payload` MUST carry the commit request from `members`.
Any member MAY create this proposal in any epoch, and the `epoch steward` MUST collect and commit YES-voted proposals.
This is the only proposal type common to both single steward and multi steward designs.
2. `Steward election proposal`: This is the process that finalizes the `steward list`,
which sets and orders the stewards responsible for creating commits over a predefined range (`sn_min`, `sn_max`) of epochs.
The validity of the chosen `steward list` ends when the last steward in the list (the one at the final index) completes its commit.
At that point, a new `steward election proposal` MUST be initiated again by any member during the corresponding epoch.
The `Proposal.payload` field MUST represent the ordered identities of the proposed stewards.
Each steward election proposal MUST be verified and finalized through the consensus process
so that members can identify which steward will be responsible in each epoch
and detect any unauthorized steward commits.
3. `Emergency criteria proposal`: If there is a malicious member or steward,
this event MUST be voted on to finalize it.
If this returns YES, the next epoch MUST include the removal of the member or steward.
In the specific case where a steward is removed from the group, causing the total number of stewards to fall below `sn_min`,
it is required to repeat the `steward election proposal`.
`Proposal.payload` MUST consist of the evidence of the dishonesty, as described in the Steward violation list,
and the identifier of the malicious member or steward.
This proposal can be created by any member in any epoch.

The order of consensus proposal messages is important for achieving a consistent result.
Therefore, messages MUST be prioritized by type in the following order, from highest to lowest priority:

- `Emergency criteria proposal`
- `Steward election proposal`
- `Commit proposal`

This means that if a higher-priority consensus proposal is present in the network,
lower-priority messages MUST be withheld from transmission until the higher-priority proposals have been finalized.
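The priority rule amounts to processing the pending proposal with the highest priority first and withholding the rest. A minimal sketch, assuming simple string type tags (the tags and dictionary shape are illustrative, not normative):

```python
# Lower number = higher priority, per the ordering above.
PRIORITY = {"emergency": 0, "steward_election": 1, "commit": 2}

def next_to_process(pending):
    """Pick the highest-priority pending proposal; lower-priority
    messages are withheld until higher-priority ones are finalized."""
    return sorted(pending, key=lambda p: PRIORITY[p["type"]])[0]

pending = [{"type": "commit", "id": 7},
           {"type": "steward_election", "id": 3}]
```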
### Steward list creation

The `steward list` consists of steward nominees who become actual stewards if the `steward election proposal` is finalized with YES.
It is arbitrarily chosen from the `members` and OPTIONALLY adjusted depending on the needs of the implementation.
The `steward list` size, bounded by the minimum `sn_min` and maximum `sn_max`,
is determined at the time of group creation.
The `sn_min` requirement is applied only when the total number of members exceeds `sn_min`;
if the number of available members falls below this threshold,
the list size automatically adjusts to include all existing members.

The actual size of the list, `sn`, MAY vary within this range, with the minimum value being at least 1.
The index of each slot indicates epoch info, and the value at an index is a `member id`.
The next-in-line steward for `epoch E`, which has index E, is named the `epoch steward`,
and the subsequent steward in `epoch E` is named the `backup steward`.
For example, assume the steward list is (S3, S2, S1): if in the previous epoch the roles were
(`backup steward`: S2, `epoch steward`: S1), then in the next epoch they become
(`backup steward`: S3, `epoch steward`: S2) by shifting.

If the `epoch steward` is honest, the `backup steward` is not involved in the process in that epoch,
and the `backup steward` becomes the `epoch steward` in `epoch E+1`.

If the `epoch steward` is malicious, the `backup steward` takes over the commitment phase in `epoch E`,
and the former steward becomes the `backup steward` in `epoch E`.
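The epoch-indexed rotation can be sketched as indexing into the `steward list` by epoch number. This is a sketch under the "index E" rule stated above; a real implementation must also handle the malicious-steward swap, which is elided here.

```python
def epoch_roles(steward_list, epoch: int):
    """Return (epoch_steward, backup_steward) for a given epoch.
    The steward at index E commits; the next one stands by as backup."""
    sn = len(steward_list)
    epoch_steward = steward_list[epoch % sn]
    backup_steward = steward_list[(epoch + 1) % sn]
    return epoch_steward, backup_steward

# With list (A, B): A commits in epoch 0 with B as backup; roles shift next epoch.
roles_e0 = epoch_roles(["A", "B"], 0)
roles_e1 = epoch_roles(["A", "B"], 1)
```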
Liveness criteria:

Once the active `steward list` has completed its assigned epochs,
members MUST proceed to elect the next set of stewards
(which MAY include some or all of the previous members).
This election is conducted through a type 2 consensus procedure, the `steward election proposal`.

A `steward election proposal` is considered valid only if the resulting `steward list`
is produced through a deterministic process that ensures an unbiased distribution of steward assignments,
since allowing bias could enable a malicious participant to manipulate the list
and retain control within a favored group for multiple epochs.

The list MUST consist of at least `sn_min` members, including retained previous stewards,
sorted according to the ascending value of `SHA256(epoch E || member id || group id)`,
where `epoch E` is the epoch in which the election proposal is initiated,
and `group id` shuffles the list across different groups.
Any proposal with a list that does not adhere to this generation method MUST be rejected by all members.

We assume that there are no recurring entries in `SHA256(epoch E || member id || group id)`, since the SHA256 outputs are unique
when there is no repetition in the `member id` values, which guards against conflicts in sorting.
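The deterministic sort can be sketched directly from the rule `SHA256(epoch E || member id || group id)`. The byte encoding of the epoch and the concatenation order are assumptions for illustration; any member can recompute the same list and reject a proposal that deviates from it.

```python
import hashlib

def elect_stewards(member_ids, epoch: int, group_id: bytes, sn: int):
    """Sort members by SHA256(epoch || member_id || group_id) ascending
    and keep the first sn entries as the new steward list."""
    def key(member_id: bytes) -> bytes:
        return hashlib.sha256(
            epoch.to_bytes(8, "big") + member_id + group_id).digest()
    return sorted(member_ids, key=key)[:sn]

members = [bytes([i]) * 20 for i in range(5)]  # 20-byte member ids
lst = elect_stewards(members, epoch=3, group_id=b"g1", sn=3)
```

Because the key depends on the epoch and group id, the ordering reshuffles each election and differs across groups, as intended.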
### Multi steward with big consensuses

In this model, all group modifications, such as adding or removing members,
must be approved through consensus by all participants,
including the steward assigned for `epoch E`.
A configuration with multiple stewards operating under a shared consensus protocol offers
increased decentralization and stronger protection against censorship.
However, this benefit comes with reduced operational efficiency.
The model is therefore best suited for small groups that value
decentralization and censorship resistance more than performance.

To create a multi steward setting with a big consensus,
the group is initialized with a single steward, as follows:

1. The steward initializes the group with the config file.
This config file MUST contain (`sn_min`, `sn_max`) as the `steward list` size range.
2. The steward adds members in a centralized way until the number of members reaches `sn_min`.
Then, members propose lists of size `sn` by voting proposal,
as a consensus among all members, as mentioned in consensus type 2, subject to the check that
the size of the proposed list `sn` is in the interval (`sn_min`, `sn_max`).
Note that if the total number of members is below `sn_min`,
then the steward list size MUST be equal to the total member count.
3. After the voting proposal yields a `steward list`,
group changes are ready to be committed as specified in the single steward section,
with the difference that members also check whether the committing steward is the `epoch steward` or the `backup steward`;
otherwise, anyone can create an `emergency criteria proposal`.
4. If the `epoch steward` violates the changing process as described in the section Steward violation list,
one of the members MUST initialize the `emergency criteria proposal` to remove the malicious steward.
Then the `backup steward` fulfills the epoch by committing again correctly.

A large consensus group provides better decentralization, but it requires significant coordination,
which MAY not be suitable for groups with more than 1000 members.
### Multi steward with small consensuses

The small consensus model offers improved efficiency with a trade-off in decentralization.
In this design, group changes require consensus only among the stewards, rather than all members.
Regular members participate by periodically selecting the stewards via the `steward election proposal`,
but do not take part in commit decisions via the `commit proposal`.
This structure enables faster coordination, since consensus is achieved within a smaller group of stewards.
It is particularly suitable for large user groups, where involving every member in each decision would be impractical.

The flow is similar to the big consensus, including the `steward list` finalization by consensus among all members;
the only difference is that the commit messages require a `commit proposal` only among the stewards.
## Filtering proposals against multiple committing

Since stewards are allowed to produce a commit even when they are not the designated `epoch steward`,
multiple commits may appear within the same epoch, often reflecting recurring versions of the same proposal.
To ensure a consistent outcome, the valid commit for the epoch SHOULD be selected as the one derived
from the longest proposal chain, ordered by the ascending value of each proposal as `SHA256(proposal)`.
All other cases, such as invalid commits or commits based on proposals that were not approved through voting,
can be easily detected and discarded by the members.
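The selection rule can be sketched as: among candidate commits, prefer the one derived from the longest proposal chain, breaking ties by the ascending SHA256 values of the proposals. Representing a commit as a list of serialized proposals is an assumption for illustration.

```python
import hashlib

def select_commit(candidates):
    """Pick the single valid commit for the epoch: longest proposal chain
    wins; ties are broken by ascending SHA256 of each serialized proposal."""
    def rank(chain):
        # Negative length so that min() prefers the longest chain;
        # the sorted digest list orders equal-length chains ascending.
        return (-len(chain), sorted(hashlib.sha256(p).digest() for p in chain))
    return min(candidates, key=rank)

chain_long = [b"proposal-1", b"proposal-2"]
chain_short = [b"proposal-1"]
```

All members apply the same deterministic rule, so duplicate commits within an epoch converge on one accepted commit.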
## Steward violation list

A steward’s activity is called a violation if the action is one or more of the following:

1. Broken commit: The steward releases a commit message different from the voted `commit proposal`.
This activity is identified by the `members`, since [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) provides the methods
that members can use to identify broken commit messages, which are possible in a few situations,
such as commit and proposal incompatibility. Specifically, a broken commit can arise as follows:
    1. The commit belongs to an earlier epoch.
    2. The commit message does not match the latest epoch.
    3. The commit is not compatible with the previous epoch’s `MLS proposal`.
2. Broken MLS proposal: The steward prepares an `MLS proposal` that differs from the corresponding `voting proposal`.
This activity is identified by the `members`, since both the `MLS proposal` and the `voting proposal` are visible,
and it can be detected by checking that the hashes of `Proposal.payload` and `MLSProposal.payload` are the same, as in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) section 12.1, Proposals.
3. Censorship and inactivity: The situation where there is a voting proposal that is visible to every member,
and the steward does not provide an MLS proposal and commit.
This activity is again identified by the `members`, since `voting proposals` are visible to every member in the group;
therefore each member can verify that there is no `MLS proposal` corresponding to a `voting proposal`.
## Security Considerations

In this section, the security considerations are presented as de-MLS assurances.

1. Malicious steward: A malicious steward can act maliciously,
as in the Steward violation list section.
Therefore, de-MLS enforces that any steward only follows the protocol under the consensus order
and commits without triggering the emergency criteria.
2. Malicious member: A member is only marked as malicious
when it acts by releasing a commit message, since committing is reserved for stewards.
3. Steward list election bias: Although SHA256 is used together with two global variables
to shuffle stewards in a deterministic and verifiable manner,
this approach only minimizes election bias; it does not completely eliminate it.
This design choice is intentional, in order to preserve the efficiency advantages provided by the MLS mechanism.
## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/)

### References

- [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/)
- [Hashgraphlike Consensus](https://github.com/vacp2p/rfc-index/blob/consensus-hashgraph-like/vac/raw/consensus-hashgraphlike.md)
- [vacp2p/de-mls](https://github.com/vacp2p/de-mls)
1343
docs/vac/raw/mix.md
@@ -1,39 +0,0 @@

# Waku RFCs

Waku builds a family of privacy-preserving,
censorship-resistant communication protocols for web3 applications.

Contributors can visit [Waku RFCs](https://github.com/waku-org/specs)
for new Waku specifications under discussion.

<div class="landing-hero">
<div class="filter-row">
<input id="rfc-search" type="search" placeholder="Search by number, title, status, project..." aria-label="Search RFCs">
<div class="chips" id="status-chips">
<span class="chip active" data-status="all" data-label="All">All</span>
<span class="chip" data-status="stable" data-label="Stable">Stable</span>
<span class="chip" data-status="draft" data-label="Draft">Draft</span>
<span class="chip" data-status="raw" data-label="Raw">Raw</span>
<span class="chip" data-status="deprecated" data-label="Deprecated">Deprecated</span>
<span class="chip" data-status="deleted" data-label="Deleted">Deleted</span>
</div>
</div>
<div class="filter-row">
<div class="chips" id="date-chips">
<span class="chip active" data-date="all" data-label="All time">All time</span>
<span class="chip" data-date="latest" data-label="Latest" data-count="false">Latest</span>
<span class="chip" data-date="last90" data-label="Last 90 days">Last 90 days</span>
</div>
</div>
</div>

<div class="results-row">
<div id="results-count" class="results-count">Loading RFC index...</div>
<div class="results-hint">Click a column to sort</div>
</div>

<div id="rfc-table-container" class="table-wrap" data-project="waku"></div>

<noscript>
<p class="noscript-note">JavaScript is required to load the RFC index table.</p>
</noscript>
@@ -1,3 +0,0 @@

# Waku Informational RFCs

Informational Waku documents covering guidance, examples, and supporting material.

@@ -1,3 +0,0 @@

# Waku Standards - Application

Application-layer specifications built on top of Waku core protocols.

@@ -1,3 +0,0 @@

# Waku Standards - Core

Core Waku protocol specifications, including messaging, peer discovery, and network primitives.

@@ -1,3 +0,0 @@

# Waku Standards - Legacy

Legacy Waku standards retained for reference and historical compatibility.

27
flake.lock
generated
@@ -1,27 +0,0 @@

{
  "nodes": {
    "nixpkgs": {
      "locked": {
        "lastModified": 1717159533,
        "narHash": "sha256-oamiKNfr2MS6yH64rUn99mIZjc45nGJlj9eGth/3Xuw=",
        "owner": "NixOS",
        "repo": "nixpkgs",
        "rev": "a62e6edd6d5e1fa0329b8653c801147986f8d446",
        "type": "github"
      },
      "original": {
        "owner": "NixOS",
        "ref": "nixos-23.11",
        "repo": "nixpkgs",
        "type": "github"
      }
    },
    "root": {
      "inputs": {
        "nixpkgs": "nixpkgs"
      }
    }
  },
  "root": "root",
  "version": 7
}

22
flake.nix
@@ -1,22 +0,0 @@

{
  description = "infra-docs";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.11";

  outputs = { self, nixpkgs }:
    let
      stableSystems = ["x86_64-linux" "aarch64-linux" "x86_64-darwin" "aarch64-darwin"];
      forAllSystems = nixpkgs.lib.genAttrs stableSystems;
      pkgsFor = nixpkgs.lib.genAttrs stableSystems (
        system: import nixpkgs { inherit system; }
      );
    in rec {
      devShells = forAllSystems (system: {
        default = pkgsFor.${system}.mkShellNoCC {
          packages = with pkgsFor.${system}.buildPackages; [
            openssh git ghp-import mdbook
          ];
        };
      });
    };
}

6
nomos/README.md
Normal file
@@ -0,0 +1,6 @@

# Nomos RFCs

Nomos is building a secure, flexible, and
scalable infrastructure for developers creating applications for the network state.
Published specifications are currently available at
[Nomos Specifications](https://nomos-tech.notion.site/project).
@@ -1,12 +1,18 @@

# CONSENSUS-CLARO

| Field | Value |
| --- | --- |
| Name | Claro Consensus Protocol |
| Status | deprecated |
| Category | Standards Track |
| Editor | Corey Petty <corey@status.im> |
| Contributors | Álvaro Castro-Castilla, Mark Evenson |

---
title: CONSENSUS-CLARO
name: Claro Consensus Protocol
status: deprecated
category: Standards Track
tags:
- logos/consensus
editor: Corey Petty <corey@status.im>
created: 01-JUL-2022
revised: <2022-08-26 Fri 13:11Z>
uri: <https://rdf.logos.co/protocol/Claro/1/0/0#<2022-08-26%20Fri$2013:11Z>
contributors:
- Álvaro Castro-Castilla
- Mark Evenson
---

## Abstract
@@ -1,11 +1,17 @@
# NOMOSDA-ENCODING

| Field | Value |
| --- | --- |
| Name | NomosDA Encoding Protocol |
| Status | raw |
| Editor | Daniel Sanchez-Quiros <danielsq@status.im> |
| Contributors | Daniel Kashepava <danielkashepava@status.im>, Álvaro Castro-Castilla <alvaro@status.im>, Filip Dimitrijevic <filip@status.im>, Thomas Lavaur <thomaslavaur@status.im>, Mehmet Gonen <mehmet@status.im> |
---
title: NOMOSDA-ENCODING
name: NomosDA Encoding Protocol
status: raw
category:
tags: data-availability
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Daniel Kashepava <danielkashepava@status.im>
- Álvaro Castro-Castilla <alvaro@status.im>
- Filip Dimitrijevic <filip@status.im>
- Thomas Lavaur <thomaslavaur@status.im>
- Mehmet Gonen <mehmet@status.im>
---

## Introduction
@@ -1,11 +1,16 @@
# NOMOS-DA-NETWORK

| Field | Value |
| --- | --- |
| Name | NomosDA Network |
| Status | raw |
| Editor | Daniel Sanchez Quiros <danielsq@status.im> |
| Contributors | Álvaro Castro-Castilla <alvaro@status.im>, Daniel Kashepava <danielkashepava@status.im>, Gusto Bacvinka <augustinas@status.im>, Filip Dimitrijevic <filip@status.im> |
---
title: NOMOS-DA-NETWORK
name: NomosDA Network
status: raw
category:
tags: network, data-availability, da-nodes, executors, sampling
editor: Daniel Sanchez Quiros <danielsq@status.im>
contributors:
- Álvaro Castro-Castilla <alvaro@status.im>
- Daniel Kashepava <danielkashepava@status.im>
- Gusto Bacvinka <augustinas@status.im>
- Filip Dimitrijevic <filip@status.im>
---

## Introduction
@@ -1,12 +1,13 @@
# P2P-HARDWARE-REQUIREMENTS

| Field | Value |
| --- | --- |
| Name | Nomos p2p Network Hardware Requirements Specification |
| Status | raw |
| Category | infrastructure |
| Editor | Daniel Sanchez-Quiros <danielsq@status.im> |
| Contributors | Filip Dimitrijevic <filip@status.im> |
---
title: P2P-HARDWARE-REQUIREMENTS
name: Nomos p2p Network Hardware Requirements Specification
status: raw
category: infrastructure
tags: [hardware, requirements, nodes, validators, services]
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Filip Dimitrijevic <filip@status.im>
---

## Abstract
@@ -1,12 +1,18 @@
# P2P-NAT-SOLUTION

| Field | Value |
| --- | --- |
| Name | Nomos P2P Network NAT Solution Specification |
| Status | raw |
| Category | networking |
| Editor | Antonio Antonino <antonio@status.im> |
| Contributors | Álvaro Castro-Castilla <alvaro@status.im>, Daniel Sanchez-Quiros <danielsq@status.im>, Petar Radovic <petar@status.im>, Gusto Bacvinka <augustinas@status.im>, Youngjoon Lee <youngjoon@status.im>, Filip Dimitrijevic <filip@status.im> |
---
title: P2P-NAT-SOLUTION
name: Nomos P2P Network NAT Solution Specification
status: raw
category: networking
tags: [nat, traversal, autonat, upnp, pcp, nat-pmp]
editor: Antonio Antonino <antonio@status.im>
contributors:
- Álvaro Castro-Castilla <alvaro@status.im>
- Daniel Sanchez-Quiros <danielsq@status.im>
- Petar Radovic <petar@status.im>
- Gusto Bacvinka <augustinas@status.im>
- Youngjoon Lee <youngjoon@status.im>
- Filip Dimitrijevic <filip@status.im>
---

## Abstract
@@ -1,12 +1,18 @@
# P2P-NETWORK-BOOTSTRAPPING

| Field | Value |
| --- | --- |
| Name | Nomos P2P Network Bootstrapping Specification |
| Status | raw |
| Category | networking |
| Editor | Daniel Sanchez-Quiros <danielsq@status.im> |
| Contributors | Álvaro Castro-Castilla <alvaro@status.im>, Petar Radovic <petar@status.im>, Gusto Bacvinka <augustinas@status.im>, Antonio Antonino <antonio@status.im>, Youngjoon Lee <youngjoon@status.im>, Filip Dimitrijevic <filip@status.im> |
---
title: P2P-NETWORK-BOOTSTRAPPING
name: Nomos P2P Network Bootstrapping Specification
status: raw
category: networking
tags: [p2p, networking, bootstrapping, peer-discovery, libp2p]
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Álvaro Castro-Castilla <alvaro@status.im>
- Petar Radovic <petar@status.im>
- Gusto Bacvinka <augustinas@status.im>
- Antonio Antonino <antonio@status.im>
- Youngjoon Lee <youngjoon@status.im>
- Filip Dimitrijevic <filip@status.im>
---

## Introduction
@@ -1,12 +1,13 @@
# NOMOS-P2P-NETWORK

| Field | Value |
| --- | --- |
| Name | Nomos P2P Network Specification |
| Status | draft |
| Category | networking |
| Editor | Daniel Sanchez-Quiros <danielsq@status.im> |
| Contributors | Filip Dimitrijevic <filip@status.im> |
---
title: NOMOS-P2P-NETWORK
name: Nomos P2P Network Specification
status: draft
category: networking
tags: [p2p, networking, libp2p, kademlia, gossipsub, quic]
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Filip Dimitrijevic <filip@status.im>
---

## Abstract
@@ -1,11 +1,19 @@
# NOMOS-SDP

| Field | Value |
| --- | --- |
| Name | Nomos Service Declaration Protocol Specification |
| Status | raw |
| Editor | Marcin Pawlowski <marcin@status.im> |
| Contributors | Mehmet <mehmet@status.im>, Daniel Sanchez Quiros <danielsq@status.im>, Álvaro Castro-Castilla <alvaro@status.im>, Thomas Lavaur <thomaslavaur@status.im>, Filip Dimitrijevic <filip@status.im>, Gusto Bacvinka <augustinas@status.im>, David Rusu <davidrusu@status.im> |
---
title: NOMOS-SDP
name: Nomos Service Declaration Protocol Specification
status: raw
category:
tags: participation, validators, declarations
editor: Marcin Pawlowski <marcin@status.im>
contributors:
- Mehmet <mehmet@status.im>
- Daniel Sanchez Quiros <danielsq@status.im>
- Álvaro Castro-Castilla <alvaro@status.im>
- Thomas Lavaur <thomaslavaur@status.im>
- Filip Dimitrijevic <filip@status.im>
- Gusto Bacvinka <augustinas@status.im>
- David Rusu <davidrusu@status.im>
---

## Introduction
@@ -1,260 +0,0 @@
#!/usr/bin/env python3
import subprocess
from typing import List, Tuple, Optional, Dict
from pathlib import Path
import re


def log(msg: str):
    print(f"[INFO] {msg}", flush=True)


def run_git(args: list) -> str:
    cmd = ["git"] + args
    log("Running: " + " ".join(cmd))

    result = subprocess.run(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        encoding="utf-8",
    )

    if result.returncode != 0:
        print("[ERROR] Command failed:", " ".join(cmd))
        print(result.stderr)
        raise subprocess.CalledProcessError(
            result.returncode, cmd, result.stdout, result.stderr
        )

    return result.stdout.strip()


def get_repo_https_url() -> Optional[str]:
    try:
        url = run_git(["config", "--get", "remote.origin.url"]).strip()
    except subprocess.CalledProcessError:
        return None

    if url.startswith("git@github.com:"):
        path = url[len("git@github.com:"):]
        if path.endswith(".git"):
            path = path[:-4]
        return f"https://github.com/{path}"

    if url.startswith("https://github.com/"):
        path = url[len("https://github.com/"):]
        if path.endswith(".git"):
            path = path[:-4]
        return f"https://github.com/{path}"

    return None


def get_repo_file_path(path: str) -> str:
    log(f"Resolving file path via git: {path}")
    try:
        out = run_git(["ls-files", "--full-name", path])
    except subprocess.CalledProcessError:
        raise SystemExit(f"[ERROR] {path!r} is not tracked by git")

    if not out:
        raise SystemExit(f"[ERROR] {path!r} is not tracked by git")

    resolved = out.splitlines()[0]
    log(f"Resolved path inside repo: {resolved}")
    return resolved


def get_file_commits(path: str) -> List[Tuple[str, str, str, str]]:
    log(f"Collecting commit history for: {path}")

    log_output = run_git([
        "log",
        "--follow",
        "--format=%H%x09%ad%x09%s",
        "--date=short",
        "--name-only",
        "--",
        path,
    ])

    if not log_output:
        log("No history found.")
        return []

    commits: List[Tuple[str, str, str, str]] = []
    current: Optional[Dict[str, str]] = None
    for line in log_output.splitlines():
        if not line.strip():
            continue
        parts = line.split("\t", 2)
        # Detect commit line
        if len(parts) == 3 and len(parts[0]) >= 7 and all(c in "0123456789abcdef" for c in parts[0].lower()):
            if current:
                commits.append((current["commit"], current["date"], current["subject"], current.get("path", path)))
            current = {"commit": parts[0], "date": parts[1], "subject": parts[2]}
            continue

        # If we are in a commit block and we see a path, record the first one
        if current and "path" not in current:
            current["path"] = line.strip()

    if current:
        commits.append((current["commit"], current["date"], current["subject"], current.get("path", path)))

    commits.reverse()
    log(f"Found {len(commits)} commits.")
    return commits


def build_markdown_history(
    repo_url: str,
    file_path: str,
    commits: List[Tuple[str, str, str, str]],
) -> str:
    log("Generating markdown history...")
    entries = []

    # newest first
    for commit, date, subject, path_at_commit in reversed(commits):
        blob_url = f"{repo_url}/blob/{commit}/{path_at_commit}"
        entries.append((date, commit, subject, blob_url))

    lines: List[str] = []
    lines.append("## Timeline\n")

    for date, commit, subject, blob_url in entries:
        lines.append(f"- **{date}** — [`{commit[:7]}`]({blob_url}) — {subject}")

    return "\n".join(lines).rstrip() + "\n"


def find_metadata_table_end(lines: List[str]) -> Optional[int]:
    header_idx = None
    for idx, line in enumerate(lines[:80]):
        if line.strip() == "| Field | Value |":
            header_idx = idx
            break
    if header_idx is None:
        return None

    if header_idx + 1 >= len(lines):
        return None

    if not lines[header_idx + 1].strip().startswith("|"):
        return None

    end_idx = header_idx + 2
    while end_idx < len(lines) and lines[end_idx].strip().startswith("|"):
        end_idx += 1

    return end_idx


def inject_timeline(file_path: Path, timeline_md: str) -> bool:
    """
    Insert or replace a timeline block near the top of the file.
    Returns True if the file was modified.
    """
    content = file_path.read_text(encoding="utf-8")
    start_marker = "<!-- timeline:start -->"
    end_marker = "<!-- timeline:end -->"
    block = (
        f"{start_marker}\n\n"
        f"{timeline_md.strip()}\n\n"
        f"{end_marker}\n"
    )

    if start_marker in content and end_marker in content:
        pattern = re.compile(
            re.escape(start_marker) + r".*?" + re.escape(end_marker),
            re.DOTALL,
        )
        new_content, count = pattern.subn(block, content, count=1)
        if count and new_content != content:
            file_path.write_text(new_content, encoding="utf-8")
            return True
        return False

    lines = content.splitlines()
    insert_pos = 0
    table_end = find_metadata_table_end(lines)
    if table_end is not None:
        insert_pos = len("\n".join(lines[:table_end]))
    else:
        for idx, line in enumerate(lines):
            if line.startswith("# "):
                insert_pos = len("\n".join(lines[: idx + 1]))
                break

    new_content = content[:insert_pos] + "\n\n" + block + "\n" + content[insert_pos:]
    if new_content != content:
        file_path.write_text(new_content, encoding="utf-8")
        return True
    return False


def is_rfc_file(path: Path) -> bool:
    try:
        text = path.read_text(encoding="utf-8", errors="ignore")
    except OSError:
        return False

    if "# " not in text:
        return False

    if "| Field | Value |" not in text:
        return False

    return True


def find_rfc_files(root: Path) -> List[Path]:
    candidates: List[Path] = []
    for path in root.rglob("*.md"):
        if path.name in {"README.md", "SUMMARY.md", "template.md"}:
            continue
        if is_rfc_file(path):
            candidates.append(path)
    return sorted(candidates)


def main():
    log("Starting history generation")

    repo_url = get_repo_https_url()
    if not repo_url:
        raise SystemExit("[ERROR] Could not determine GitHub repo URL")

    log(f"Repo URL: {repo_url}")

    root = Path("docs")
    files = find_rfc_files(root)
    if not files:
        raise SystemExit(f"[ERROR] No RFCs found under {root}")

    updated = 0
    for file_path in files:
        repo_file_path = get_repo_file_path(str(file_path))
        commits = get_file_commits(repo_file_path)
        if not commits:
            log(f"[WARN] No history found for {repo_file_path}")
            continue

        markdown = build_markdown_history(
            repo_url=repo_url,
            file_path=repo_file_path,
            commits=commits,
        )

        modified = inject_timeline(file_path, markdown)
        if modified:
            updated += 1
            log(f"Timeline injected into {file_path}")

    log(f"Timelines updated in {updated} files")


if __name__ == "__main__":
    main()
@@ -1,126 +0,0 @@
#!/usr/bin/env python3
"""
Generate a JSON index of RFC metadata for the landing page filters.

Scans the docs/ tree for Markdown files and writes
`docs/rfc-index.json`.
"""
from __future__ import annotations

import json
from pathlib import Path
from typing import Dict, List, Optional
import html
import re
import subprocess

ROOT = Path(__file__).resolve().parent.parent
DOCS = ROOT / "docs"
OUTPUT = DOCS / "rfc-index.json"

EXCLUDE_FILES = {"README.md", "SUMMARY.md"}
EXCLUDE_PARTS = {"previous-versions"}


def parse_meta_from_markdown_table(text: str) -> Optional[Dict[str, str]]:
    lines = text.splitlines()
    meta: Dict[str, str] = {}
    for i in range(len(lines) - 2):
        line = lines[i].strip()
        next_line = lines[i + 1].strip()
        if not (line.startswith('|') and next_line.startswith('|') and '---' in next_line):
            continue

        # Simple two-column table parsing
        j = i + 2
        while j < len(lines) and lines[j].strip().startswith('|'):
            parts = [p.strip() for p in lines[j].strip().strip('|').split('|')]
            if len(parts) >= 2:
                key = parts[0].lower()
                value = html.unescape(parts[1])
                if key and value:
                    meta[key] = value
            j += 1
        break

    return meta or None


def parse_title_from_h1(text: str) -> Optional[str]:
    match = re.search(r"^#\s+(.+)$", text, flags=re.MULTILINE)
    if not match:
        return None
    return match.group(1).strip()


def run_git(args: List[str]) -> str:
    result = subprocess.run(
        ["git"] + args,
        cwd=ROOT,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        encoding="utf-8",
    )
    if result.returncode != 0:
        return ""
    return result.stdout.strip()


def get_last_updated(path: Path) -> str:
    rel = path.relative_to(ROOT).as_posix()
    output = run_git(["log", "-1", "--format=%ad", "--date=short", "--", rel])
    return output


def collect() -> List[Dict[str, str]]:
    entries: List[Dict[str, str]] = []
    for path in DOCS.rglob("*.md"):
        rel = path.relative_to(DOCS)

        if rel.name in EXCLUDE_FILES:
            continue
        if EXCLUDE_PARTS.intersection(rel.parts):
            continue

        text = path.read_text(encoding="utf-8", errors="ignore")

        meta = parse_meta_from_markdown_table(text) or {}

        slug = meta.get("slug")
        title = meta.get("title") or meta.get("name") or parse_title_from_h1(text) or rel.stem
        status = meta.get("status") or "unknown"
        category = meta.get("category") or "unspecified"
        project = rel.parts[0]

        # Skip the template placeholder
        if slug == "XX":
            continue

        # mdBook renders Markdown to .html, keep links consistent
        html_path = rel.with_suffix(".html").as_posix()

        updated = get_last_updated(path)

        entries.append(
            {
                "project": project,
                "slug": str(slug) if slug is not None else title,
                "title": title,
                "status": status,
                "category": category,
                "updated": updated,
                "path": html_path,
            }
        )

    entries.sort(key=lambda r: (r["project"], r["slug"]))
    return entries


def main() -> None:
    entries = collect()
    OUTPUT.write_text(json.dumps(entries, indent=2), encoding="utf-8")
    print(f"Wrote {len(entries)} entries to {OUTPUT}")


if __name__ == "__main__":
    main()
@@ -1,616 +0,0 @@
|
||||
(() => {
|
||||
function linkMenuTitle() {
|
||||
const menuTitle = document.querySelector(".menu-title");
|
||||
if (!menuTitle || menuTitle.dataset.linked === "true") {
|
||||
return;
|
||||
}
|
||||
|
||||
const existingLink = menuTitle.closest("a");
|
||||
if (existingLink) {
|
||||
menuTitle.dataset.linked = "true";
|
||||
return;
|
||||
}
|
||||
|
||||
const root = (typeof path_to_root !== "undefined" && path_to_root) ? path_to_root : "";
|
||||
const link = document.createElement("a");
|
||||
link.href = `${root}index.html`;
|
||||
link.className = "menu-title-link";
|
||||
link.setAttribute("aria-label", "Back to home");
|
||||
|
||||
const parent = menuTitle.parentNode;
|
||||
parent.replaceChild(link, menuTitle);
|
||||
link.appendChild(menuTitle);
|
||||
menuTitle.dataset.linked = "true";
|
||||
}
|
||||
|
||||
function onReady(fn) {
|
||||
if (document.readyState === "loading") {
|
||||
document.addEventListener("DOMContentLoaded", fn, { once: true });
|
||||
} else {
|
||||
fn();
|
||||
}
|
||||
}
|
||||
|
||||
onReady(linkMenuTitle);
|
||||
|
||||
onReady(() => {
|
||||
const printLink = document.querySelector("a[href$='print.html']");
|
||||
if (!printLink) return;
|
||||
printLink.addEventListener("click", (event) => {
|
||||
event.preventDefault();
|
||||
window.print();
|
||||
});
|
||||
});
|
||||
|
||||
function addTopNav() {
|
||||
const menuBar = document.querySelector("#mdbook-menu-bar, #menu-bar");
|
||||
if (!menuBar || menuBar.querySelector(".site-nav")) return;
|
||||
|
||||
const root = (typeof path_to_root !== "undefined" && path_to_root) ? path_to_root : "";
|
||||
const nav = document.createElement("nav");
|
||||
nav.className = "site-nav";
|
||||
nav.setAttribute("aria-label", "Primary");
|
||||
nav.innerHTML = `
|
||||
<a class="nav-link" href="${root}index.html">Home</a>
|
||||
<details class="nav-dropdown">
|
||||
<summary class="nav-link">Projects</summary>
|
||||
<div class="nav-menu">
|
||||
<a href="${root}vac/index.html">Vac</a>
|
||||
<a href="${root}waku/index.html">Waku</a>
|
||||
<a href="${root}status/index.html">Status</a>
|
||||
<a href="${root}nomos/index.html">Nomos</a>
|
||||
<a href="${root}codex/index.html">Codex</a>
|
||||
</div>
|
||||
</details>
|
||||
<a class="nav-link" href="${root}about.html">About</a>
|
||||
<button class="nav-link back-to-top-link" type="button">Back to top</button>
|
||||
`;
|
||||
|
||||
const rightButtons = menuBar.querySelector(".right-buttons");
|
||||
if (rightButtons) {
|
||||
menuBar.insertBefore(nav, rightButtons);
|
||||
} else {
|
||||
menuBar.appendChild(nav);
|
||||
}
|
||||
|
||||
document.addEventListener("click", (event) => {
|
||||
if (!nav.contains(event.target)) {
|
||||
nav.querySelectorAll("details[open]").forEach((detail) => detail.removeAttribute("open"));
|
||||
}
|
||||
});
|
||||
|
||||
const backToTop = nav.querySelector(".back-to-top-link");
|
||||
if (backToTop) {
|
||||
backToTop.addEventListener("click", () => {
|
||||
window.scrollTo({ top: 0, behavior: "smooth" });
|
||||
});
|
||||
const toggleBackToTop = () => {
|
||||
backToTop.classList.toggle("is-visible", window.scrollY > 240);
|
||||
};
|
||||
toggleBackToTop();
|
||||
window.addEventListener("scroll", toggleBackToTop, { passive: true });
|
||||
}
|
||||
}
|
||||
|
||||
function addFooter() {
|
||||
if (document.querySelector(".site-footer")) return;
|
||||
const page = document.querySelector(".page");
|
||||
if (!page) return;
|
||||
|
||||
const footer = document.createElement("footer");
|
||||
footer.className = "site-footer";
|
||||
footer.innerHTML = `
|
||||
<a href="https://vac.dev">Vac</a>
|
||||
<span class="footer-sep">·</span>
|
||||
<a href="https://www.ietf.org">IETF</a>
|
||||
<span class="footer-sep">·</span>
|
||||
<a href="https://github.com/vacp2p/rfc-index">GitHub</a>
|
||||
`;
|
||||
page.appendChild(footer);
|
||||
}
|
||||
|
||||
function enhanceRfcHeader() {
|
||||
const main = document.querySelector("#mdbook-content main, #content main");
|
||||
if (!main || main.querySelector(".rfc-header")) return;
|
||||
|
||||
const h1 = main.querySelector("h1");
|
||||
if (!h1) return;
|
||||
|
||||
let cursor = h1.nextElementSibling;
|
||||
while (cursor && cursor.tagName === "DIV" && cursor.classList.contains("table-wrapper") === false) {
|
||||
if (cursor.textContent.trim() !== "") break;
|
||||
cursor = cursor.nextElementSibling;
|
||||
}
|
||||
|
||||
let table = null;
|
||||
let wrapper = null;
|
||||
if (cursor && cursor.classList.contains("table-wrapper")) {
|
||||
wrapper = cursor;
|
||||
table = cursor.querySelector("table");
|
||||
} else if (cursor && cursor.tagName === "TABLE") {
|
||||
table = cursor;
|
||||
}
|
||||
|
||||
if (!table) return;
|
||||
const headerCell = table.querySelector("th");
|
||||
if (!headerCell || headerCell.textContent.trim().toLowerCase() !== "field") return;
|
||||
|
||||
const meta = {};
|
||||
table.querySelectorAll("tbody tr").forEach((row) => {
|
||||
const keyCell = row.querySelector("td, th");
|
||||
const valCell = row.querySelectorAll("td, th")[1];
|
||||
if (!keyCell || !valCell) return;
|
||||
const key = keyCell.textContent.trim().toLowerCase();
|
||||
const value = valCell.textContent.trim();
|
||||
if (key) meta[key] = value;
|
||||
});
|
||||
|
||||
const header = document.createElement("div");
|
||||
header.className = "rfc-header";
|
||||
|
||||
const badges = document.createElement("div");
|
||||
badges.className = "rfc-badges";
|
||||
|
||||
if (meta.status) {
|
||||
const status = meta.status.toLowerCase().split("/")[0].replace(/\s+/g, "-");
|
||||
const badge = document.createElement("span");
|
||||
badge.className = `badge status-${status}`;
|
||||
badge.textContent = meta.status;
|
||||
badges.appendChild(badge);
|
||||
}
|
||||
|
||||
if (meta.category && meta.category.toLowerCase() !== "unspecified") {
|
||||
const category = meta.category.toLowerCase();
|
||||
let cls = "category-other";
|
||||
if (category.includes("best current practice") || category.includes("bcp")) {
|
||||
cls = "category-bcp";
|
||||
} else if (category.includes("standards")) {
|
||||
cls = "category-standards";
|
||||
} else if (category.includes("informational")) {
|
||||
cls = "category-informational";
|
||||
} else if (category.includes("experimental")) {
|
||||
cls = "category-experimental";
|
||||
}
|
||||
const badge = document.createElement("span");
|
||||
badge.className = `badge ${cls}`;
|
||||
badge.textContent = meta.category;
|
||||
badges.appendChild(badge);
|
||||
}
|
||||
|
||||
if (badges.children.length) {
|
||||
header.appendChild(badges);
|
||||
}
|
||||
|
||||
table.classList.add("rfc-meta-table");
|
||||
if (wrapper) {
|
||||
wrapper.classList.add("rfc-meta-table-wrapper");
|
||||
main.insertBefore(header, wrapper);
|
||||
header.appendChild(wrapper);
|
||||
} else {
|
||||
main.insertBefore(header, table);
|
||||
header.appendChild(table);
|
||||
}
|
||||
}
|
||||
|
||||
function getSectionInfo(item) {
|
||||
const direct = item.querySelector(":scope > ol.section");
|
||||
if (direct) {
|
||||
return { section: direct };
|
||||
}
|
||||
|
||||
const sibling = item.nextElementSibling;
|
||||
if (sibling && sibling.tagName === "LI") {
|
||||
const siblingSection = sibling.querySelector(":scope > ol.section");
|
||||
if (siblingSection) {
|
||||
return { section: siblingSection };
|
||||
}
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
|
||||
function initSidebarCollapsible(root) {
|
||||
if (!root) return;
|
||||
const items = root.querySelectorAll("li.chapter-item");
|
||||
items.forEach((item) => {
|
||||
const sectionInfo = getSectionInfo(item);
|
||||
const link = item.querySelector(":scope > a, :scope > .chapter-link-wrapper > a");
|
||||
if (!sectionInfo || !link) return;
|
||||
|
||||
if (!link.querySelector(".section-toggle")) {
|
||||
const toggle = document.createElement("span");
|
||||
toggle.className = "section-toggle";
|
||||
toggle.setAttribute("role", "button");
|
||||
toggle.setAttribute("aria-label", "Toggle section");
|
||||
toggle.addEventListener("click", (event) => {
|
||||
event.preventDefault();
|
||||
event.stopPropagation();
|
||||
item.classList.toggle("expanded");
|
||||
});
|
||||
link.prepend(toggle);
|
||||
}
|
||||
|
||||
if (item.dataset.collapsibleInit !== "true") {
|
||||
const hasActive = link.classList.contains("active");
|
||||
const hasActiveInSection = !!sectionInfo.section.querySelector(".active");
|
||||
if (hasActive || hasActiveInSection) {
|
||||
item.classList.add("expanded");
|
||||
} else {
|
||||
item.classList.remove("expanded");
|
||||
}
|
||||
item.dataset.collapsibleInit = "true";
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
function bindSidebarCollapsible() {
|
||||
const sidebar = document.querySelector("#mdbook-sidebar .sidebar-scrollbox")
|
||||
|| document.querySelector("#sidebar .sidebar-scrollbox");
|
||||
if (sidebar) {
|
||||
initSidebarCollapsible(sidebar);
|
||||
}
|
||||
|
||||
const iframe = document.querySelector(".sidebar-iframe-outer");
|
||||
if (iframe) {
|
||||
const onLoad = () => {
|
||||
try {
|
||||
initSidebarCollapsible(iframe.contentDocument);
|
||||
} catch (e) {
|
||||
// ignore access errors
|
||||
}
|
||||
};
|
||||
iframe.addEventListener("load", onLoad);
|
||||
onLoad();
|
||||
}
|
||||
}
|
||||
|
||||
function observeSidebar() {
|
||||
const target = document.querySelector("#mdbook-sidebar") || document.querySelector("#sidebar");
|
||||
if (!target) return;
|
||||
const observer = new MutationObserver(() => bindSidebarCollapsible());
|
||||
observer.observe(target, { childList: true, subtree: true });
|
||||
setTimeout(() => observer.disconnect(), 1500);
|
||||
}
|
||||
|
||||
onReady(() => {
|
||||
addTopNav();
|
||||
addFooter();
|
||||
enhanceRfcHeader();
|
||||
bindSidebarCollapsible();
|
||||
// toc.js may inject the sidebar after load
|
||||
setTimeout(bindSidebarCollapsible, 100);
|
||||
observeSidebar();
|
||||
});
|
||||
|
||||
const searchInput = document.getElementById("rfc-search");
|
||||
const resultsCount = document.getElementById("results-count");
|
||||
const tableContainer = document.getElementById("rfc-table-container");
|
||||
|
||||
if (!searchInput || !resultsCount || !tableContainer) {
|
||||
return;
|
||||
}
|
||||
|
||||
const rootPrefix = (typeof path_to_root !== "undefined" && path_to_root) ? path_to_root : "";
|
||||
const projectScope = tableContainer.dataset.project || "";
|
||||
const showProjectColumn = !projectScope;
|
||||
|
||||
let rfcData = [];
|
||||
const statusOrder = { stable: 0, draft: 1, raw: 2, deprecated: 3, deleted: 4, unknown: 5 };
|
||||
const statusLabels = {
|
||||
stable: "Stable",
|
||||
draft: "Draft",
|
||||
raw: "Raw",
|
||||
deprecated: "Deprecated",
|
||||
deleted: "Deleted",
|
||||
unknown: "Unknown"
|
||||
};
|
||||
const projectLabels = {
|
||||
vac: "Vac",
|
||||
waku: "Waku",
|
||||
status: "Status",
|
||||
nomos: "Nomos",
|
||||
codex: "Codex"
|
||||
};
|
||||
const headers = [
|
||||
{ key: "slug", label: "RFC", width: showProjectColumn ? "10%" : "12%" },
|
||||
{ key: "title", label: "Title", width: showProjectColumn ? "34%" : "40%" },
|
||||
...(showProjectColumn ? [{ key: "project", label: "Project", width: "12%" }] : []),
|
||||
{ key: "status", label: "Status", width: "14%" },
|
||||
{ key: "category", label: "Category", width: showProjectColumn ? "18%" : "20%" },
|
||||
{ key: "updated", label: "Updated", width: "12%" }
|
||||
];
|
||||
|
||||
let statusFilter = "all";
|
||||
let projectFilter = "all";
|
||||
let dateFilter = "all";
|
||||
let sortKey = "slug";
|
||||
let sortDir = "asc";
|
||||
|
||||
  const table = document.createElement("table");
  table.className = "rfc-table";

  const thead = document.createElement("thead");
  const headRow = document.createElement("tr");
  const headerCells = {};

  headers.forEach((header) => {
    const th = document.createElement("th");
    th.textContent = header.label;
    th.dataset.sort = header.key;
    th.dataset.label = header.label;
    if (header.width) {
      th.style.width = header.width;
    }
    th.addEventListener("click", () => {
      if (sortKey === header.key) {
        sortDir = sortDir === "asc" ? "desc" : "asc";
      } else {
        sortKey = header.key;
        sortDir = "asc";
      }
      render();
    });
    headRow.appendChild(th);
    headerCells[header.key] = th;
  });
  thead.appendChild(headRow);
  table.appendChild(thead);

  const tbody = document.createElement("tbody");
  table.appendChild(tbody);
  tableContainer.appendChild(table);

  function normalizeStatus(status) {
    return (status || "unknown").toString().toLowerCase().split("/")[0];
  }

  function formatStatus(status) {
    const key = normalizeStatus(status);
    return statusLabels[key] || status;
  }

  function formatProject(project) {
    return projectLabels[project] || project;
  }

  function formatCategory(category) {
    if (!category) return "unspecified";
    return category;
  }

  function updateHeaderIndicators() {
    Object.keys(headerCells).forEach((key) => {
      const th = headerCells[key];
      const label = th.dataset.label || "";
      if (key === sortKey) {
        th.classList.add("sorted");
        th.textContent = `${label} ${sortDir === "asc" ? "^" : "v"}`;
      } else {
        th.classList.remove("sorted");
        th.textContent = label;
      }
    });
  }

  function updateResultsCount(count, total) {
    if (total === 0) {
      resultsCount.textContent = "No RFCs found.";
      return;
    }
    if (dateFilter === "latest") {
      resultsCount.textContent = `Showing the ${count} most recently updated RFCs.`;
      return;
    }
    if (dateFilter === "last90") {
      resultsCount.textContent = `Showing ${count} RFCs updated in the last 90 days.`;
      return;
    }
    resultsCount.textContent = `Showing ${count} of ${total} RFCs`;
  }

  function updateChipGroup(containerId, dataAttr, counts, total) {
    document.querySelectorAll(`#${containerId} .chip`).forEach((chip) => {
      const key = chip.dataset[dataAttr];
      const label = chip.dataset.label || chip.textContent;
      if (chip.dataset.count === "false") {
        chip.textContent = label;
        return;
      }
      const count = key === "all" ? total : (counts[key] || 0);
      chip.textContent = `${label} (${count})`;
    });
  }

  function updateChipCounts() {
    const statusCounts = {};
    const projectCounts = {};
    let last90Count = 0;
    let datedCount = 0;

    rfcData.forEach((item) => {
      const statusKey = normalizeStatus(item.status);
      statusCounts[statusKey] = (statusCounts[statusKey] || 0) + 1;
      projectCounts[item.project] = (projectCounts[item.project] || 0) + 1;
      if (parseDate(item.updated)) {
        datedCount += 1;
        if (isWithinDays(item.updated, 90)) {
          last90Count += 1;
        }
      }
    });

    updateChipGroup("status-chips", "status", statusCounts, rfcData.length);
    updateChipGroup("project-chips", "project", projectCounts, rfcData.length);
    updateChipGroup(
      "date-chips",
      "date",
      { latest: Math.min(20, datedCount), last90: last90Count },
      rfcData.length
    );
  }

  function parseDate(value) {
    if (!value) return null;
    const date = new Date(value);
    if (Number.isNaN(date.getTime())) return null;
    return date;
  }

  function isWithinDays(value, days) {
    const date = parseDate(value);
    if (!date) return false;
    const now = new Date();
    const diffMs = now - date;
    return diffMs >= 0 && diffMs <= days * 24 * 60 * 60 * 1000;
  }

  function passesDateFilter(item) {
    if (dateFilter === "all") return true;
    if (dateFilter === "last90") return isWithinDays(item.updated, 90);
    if (dateFilter === "latest") return true;
    return true;
  }

  function compareItems(a, b) {
    if (sortKey === "status") {
      const aKey = normalizeStatus(a.status);
      const bKey = normalizeStatus(b.status);
      return (statusOrder[aKey] ?? 99) - (statusOrder[bKey] ?? 99);
    }

    if (sortKey === "updated") {
      const aDate = parseDate(a.updated);
      const bDate = parseDate(b.updated);
      if (!aDate && !bDate) return 0;
      if (!aDate) return 1;
      if (!bDate) return -1;
      return aDate - bDate;
    }

    if (sortKey === "slug") {
      const aNum = parseInt(a.slug, 10);
      const bNum = parseInt(b.slug, 10);
      const aIsNum = !isNaN(aNum);
      const bIsNum = !isNaN(bNum);
      if (aIsNum && bIsNum) return aNum - bNum;
      if (aIsNum && !bIsNum) return -1;
      if (!aIsNum && bIsNum) return 1;
    }

    const aVal = (a[sortKey] || "").toString().toLowerCase();
    const bVal = (b[sortKey] || "").toString().toLowerCase();
    return aVal.localeCompare(bVal, undefined, { numeric: true, sensitivity: "base" });
  }

  function sortItems(items) {
    const sorted = [...items].sort(compareItems);
    if (sortDir === "desc") sorted.reverse();
    return sorted;
  }

  function render() {
    const query = (searchInput.value || "").toLowerCase();
    let filtered = rfcData.filter((item) => {
      const statusOk = statusFilter === "all" || normalizeStatus(item.status) === statusFilter;
      const projectOk = projectFilter === "all" || item.project === projectFilter;
      const dateOk = passesDateFilter(item);
      const text = `${item.slug} ${item.title} ${item.project} ${item.status} ${item.category}`.toLowerCase();
      const textOk = !query || text.includes(query);
      return statusOk && projectOk && dateOk && textOk;
    });

    let sorted = sortItems(filtered);
    if (dateFilter === "latest") {
      sorted = sorted
        .slice()
        .sort((a, b) => {
          const aDate = parseDate(a.updated);
          const bDate = parseDate(b.updated);
          if (!aDate && !bDate) return 0;
          if (!aDate) return 1;
          if (!bDate) return -1;
          return bDate - aDate;
        })
        .slice(0, 20);
    }
    updateResultsCount(sorted.length, rfcData.length);
    updateHeaderIndicators();
    tbody.innerHTML = "";

    if (!sorted.length) {
      const tr = document.createElement("tr");
      tr.innerHTML = `<td colspan="${headers.length}">No RFCs match your filters.</td>`;
      tbody.appendChild(tr);
      return;
    }

    sorted.forEach((item) => {
      const tr = document.createElement("tr");
      const updated = item.updated || "—";
      const projectCell = showProjectColumn ? `<td>${formatProject(item.project)}</td>` : "";
      tr.innerHTML = `
        <td><a href="${rootPrefix}${item.path}">${item.slug}</a></td>
        <td>${item.title}</td>
        ${projectCell}
        <td><span class="badge status-${normalizeStatus(item.status)}">${formatStatus(item.status)}</span></td>
        <td>${formatCategory(item.category)}</td>
        <td>${updated}</td>
      `;
      tbody.appendChild(tr);
    });
  }

  searchInput.addEventListener("input", render);

  const statusChips = document.getElementById("status-chips");
  if (statusChips) {
    statusChips.addEventListener("click", (e) => {
      if (!e.target.dataset.status) return;
      statusFilter = e.target.dataset.status;
      document.querySelectorAll("#status-chips .chip").forEach((chip) => {
        chip.classList.toggle("active", chip.dataset.status === statusFilter);
      });
      render();
    });
  }

  const projectChips = document.getElementById("project-chips");
  if (projectChips) {
    projectChips.addEventListener("click", (e) => {
      if (!e.target.dataset.project) return;
      projectFilter = e.target.dataset.project;
      document.querySelectorAll("#project-chips .chip").forEach((chip) => {
        chip.classList.toggle("active", chip.dataset.project === projectFilter);
      });
      render();
    });
  }

  const dateChips = document.getElementById("date-chips");
  if (dateChips) {
    dateChips.addEventListener("click", (e) => {
      if (!e.target.dataset.date) return;
      dateFilter = e.target.dataset.date;
      document.querySelectorAll("#date-chips .chip").forEach((chip) => {
        chip.classList.toggle("active", chip.dataset.date === dateFilter);
      });
      render();
    });
  }

  resultsCount.textContent = "Loading RFC index...";
  fetch(`${rootPrefix}rfc-index.json`)
    .then((resp) => {
      if (!resp.ok) throw new Error(resp.statusText);
      return resp.json();
    })
    .then((data) => {
      rfcData = projectScope ? data.filter((item) => item.project === projectScope) : data;
      updateChipCounts();
      render();
    })
    .catch((err) => {
      console.error(err);
      resultsCount.textContent = "Failed to load RFC index.";
    });
})();

@@ -1,11 +1,12 @@
# 24/STATUS-CURATION

| Field | Value |
| --- | --- |
| Name | Status Community Directory Curation Voting using Waku v2 |
| Slug | 24 |
| Status | draft |
| Editor | Szymon Szlachtowicz <szymon.s@ethworks.io> |
---
slug: 24
title: 24/STATUS-CURATION
name: Status Community Directory Curation Voting using Waku v2
status: draft
tags: waku-application
description: A voting protocol for SNT holders to submit votes to a smart contract. Voting is immutable, which helps avoid sabotage from malicious peers.
editor: Szymon Szlachtowicz <szymon.s@ethworks.io>
---

## Abstract

@@ -1,11 +1,12 @@
# 28/STATUS-FEATURING

| Field | Value |
| --- | --- |
| Name | Status community featuring using waku v2 |
| Slug | 28 |
| Status | draft |
| Editor | Szymon Szlachtowicz <szymon.s@ethworks.io> |
---
slug: 28
title: 28/STATUS-FEATURING
name: Status community featuring using waku v2
status: draft
tags: waku-application
description: To gain new members, current SNT holders can vote to feature an active Status community to the larger Status audience.
editor: Szymon Szlachtowicz <szymon.s@ethworks.io>
---

## Abstract

@@ -1,13 +1,19 @@
# 55/STATUS-1TO1-CHAT

| Field | Value |
| --- | --- |
| Name | Status 1-to-1 Chat |
| Slug | 55 |
| Status | draft |
| Category | Standards Track |
| Editor | Aaryamann Challani <p1ge0nh8er@proton.me> |
| Contributors | Andrea Piana <andreap@status.im>, Pedro Pombeiro <pedro@status.im>, Corey Petty <corey@status.im>, Oskar Thorén <oskarth@titanproxy.com>, Dean Eigenmann <dean@status.im> |
---
slug: 55
title: 55/STATUS-1TO1-CHAT
name: Status 1-to-1 Chat
status: draft
category: Standards Track
tags: waku-application
description: A chat protocol used by the Status app to send public and private messages to a single recipient.
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Andrea Piana <andreap@status.im>
- Pedro Pombeiro <pedro@status.im>
- Corey Petty <corey@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Dean Eigenmann <dean@status.im>
---

## Abstract

@@ -1,13 +1,16 @@
# 56/STATUS-COMMUNITIES

| Field | Value |
| --- | --- |
| Name | Status Communities that run over Waku v2 |
| Slug | 56 |
| Status | draft |
| Category | Standards Track |
| Editor | Aaryamann Challani <p1ge0nh8er@proton.me> |
| Contributors | Andrea Piana <andreap@status.im>, Prem Chaitanya Prathi <prem@waku.org> |
---
slug: 56
title: 56/STATUS-COMMUNITIES
name: Status Communities that run over Waku v2
status: draft
category: Standards Track
tags: waku-application
description: Status Communities allow multiple users to communicate in a discussion space. This is a key feature of the Status application.
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Andrea Piana <andreap@status.im>
- Prem Chaitanya Prathi <prem@waku.org>
---

## Abstract

@@ -1,13 +1,15 @@
# 61/STATUS-Community-History-Service

| Field | Value |
| --- | --- |
| Name | Status Community History Service |
| Slug | 61 |
| Status | draft |
| Category | Standards Track |
| Editor | r4bbit <r4bbit@status.im> |
| Contributors | Sanaz Taheri <sanaz@status.im>, John Lea <john@status.im> |
---
slug: 61
title: 61/STATUS-Community-History-Service
name: Status Community History Service
status: draft
category: Standards Track
description: Explains how new members of a Status community can request historical messages from archive nodes.
editor: r4bbit <r4bbit@status.im>
contributors:
- Sanaz Taheri <sanaz@status.im>
- John Lea <john@status.im>
---

## Abstract

@@ -1,12 +1,16 @@
# 62/STATUS-PAYLOADS

| Field | Value |
| --- | --- |
| Name | Status Message Payloads |
| Slug | 62 |
| Status | draft |
| Editor | r4bbit <r4bbit@status.im> |
| Contributors | Adam Babik <adam@status.im>, Andrea Maria Piana <andreap@status.im>, Oskar Thoren <oskarth@titanproxy.com>, Samuel Hawksby-Robinson <samuel@status.im> |
---
slug: 62
title: 62/STATUS-PAYLOADS
name: Status Message Payloads
status: draft
description: Describes the payload of each message in Status.
editor: r4bbit <r4bbit@status.im>
contributors:
- Adam Babik <adam@status.im>
- Andrea Maria Piana <andreap@status.im>
- Oskar Thoren <oskarth@titanproxy.com>
- Samuel Hawksby-Robinson <samuel@status.im>
---

## Abstract

@@ -1,13 +1,14 @@
# 63/STATUS-Keycard-Usage

| Field | Value |
| --- | --- |
| Name | Status Keycard Usage |
| Slug | 63 |
| Status | draft |
| Category | Standards Track |
| Editor | Aaryamann Challani <p1ge0nh8er@proton.me> |
| Contributors | Jimmy Debe <jimmy@status.im> |
---
slug: 63
title: 63/STATUS-Keycard-Usage
name: Status Keycard Usage
status: draft
category: Standards Track
description: Describes how an application can use the Status Keycard to create, store and transact with different account addresses.
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Jimmy Debe <jimmy@status.im>
---

## Terminology

@@ -1,13 +1,16 @@
# 65/STATUS-ACCOUNT-ADDRESS

| Field | Value |
| --- | --- |
| Name | Status Account Address |
| Slug | 65 |
| Status | draft |
| Category | Standards Track |
| Editor | Aaryamann Challani <p1ge0nh8er@proton.me> |
| Contributors | Corey Petty <corey@status.im>, Oskar Thorén <oskarth@titanproxy.com>, Samuel Hawksby-Robinson <samuel@status.im> |
---
slug: 65
title: 65/STATUS-ACCOUNT-ADDRESS
name: Status Account Address
status: draft
category: Standards Track
description: Details of what a Status account address is and how account addresses are created and used.
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Corey Petty <corey@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Samuel Hawksby-Robinson <samuel@status.im>
---

## Abstract

Before Width: | Height: | Size: 59 KiB After Width: | Height: | Size: 59 KiB |
Before Width: | Height: | Size: 25 KiB After Width: | Height: | Size: 25 KiB |
@@ -1,13 +1,14 @@
# 71/STATUS-PUSH-NOTIFICATION-SERVER

| Field | Value |
| --- | --- |
| Name | Push Notification Server |
| Slug | 71 |
| Status | draft |
| Category | Standards Track |
| Editor | Jimmy Debe <jimmy@status.im> |
| Contributors | Andrea Maria Piana <andreap@status.im> |
---
slug: 71
title: 71/STATUS-PUSH-NOTIFICATION-SERVER
name: Push Notification Server
status: draft
category: Standards Track
description: A set of methods to allow Status clients to use push notification services in mobile environments.
editor: Jimmy Debe <jimmy@status.im>
contributors:
- Andrea Maria Piana <andreap@status.im>
---

## Abstract

4
status/README.md
Normal file
@@ -0,0 +1,4 @@
# Status RFCs

Status is a communication tool providing privacy features for the user.
Specifications can also be viewed at [Status](https://status.app/specs).
@@ -1,11 +1,12 @@
# 3RD-PARTY

| Field | Value |
| --- | --- |
| Name | 3rd party |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Volodymyr Kozieiev <volodymyr@status.im> |
---
title: 3RD-PARTY
name: 3rd party
status: deprecated
description: This specification discusses 3rd party APIs that Status relies on.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Volodymyr Kozieiev <volodymyr@status.im>
---

## Abstract

@@ -1,11 +1,12 @@
# IPFS-gateway-for-Sticker-Pack

| Field | Value |
| --- | --- |
| Name | IPFS gateway for Sticker Pack |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Gheorghe Pinzaru <gheorghe@status.im> |
---
title: IPFS-gateway-for-Sticker-Pack
name: IPFS gateway for Sticker Pack
status: deprecated
description: This specification describes how Status uses the IPFS gateway to store stickers.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Gheorghe Pinzaru <gheorghe@status.im>
---

## Abstract

@@ -1,11 +1,14 @@
# ACCOUNT

| Field | Value |
| --- | --- |
| Name | Account |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Corey Petty <corey@status.im>, Oskar Thorén <oskar@status.im>, Samuel Hawksby-Robinson <samuel@status.im> |
---
title: ACCOUNT
name: Account
status: deprecated
description: This specification explains what a Status account is, and how a node establishes trust.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Corey Petty <corey@status.im>
- Oskar Thorén <oskar@status.im>
- Samuel Hawksby-Robinson <samuel@status.im>
---

## Abstract

@@ -1,11 +1,17 @@
# CLIENT

| Field | Value |
| --- | --- |
| Name | Client |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Adam Babik <adam@status.im>, Andrea Maria Piana <andreap@status.im>, Dean Eigenmann <dean@status.im>, Corey Petty <corey@status.im>, Oskar Thorén <oskar@status.im>, Samuel Hawksby-Robinson <samuel@status.im> |
---
title: CLIENT
name: Client
status: deprecated
description: This specification describes how to write a Status client for communicating with other Status clients.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Andrea Maria Piana <andreap@status.im>
- Dean Eigenmann <dean@status.im>
- Corey Petty <corey@status.im>
- Oskar Thorén <oskar@status.im>
- Samuel Hawksby-Robinson <samuel@status.im>
---

## Abstract

@@ -1,10 +1,11 @@
# Dapp browser API usage

| Field | Value |
| --- | --- |
| Name | Dapp browser API usage |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
---
title: Dapp browser API usage
name: Dapp browser API usage
status: deprecated
description: This document describes requirements that an application must fulfill in order to provide a proper environment for Dapps running inside a browser.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
---

## Abstract

@@ -1,11 +1,12 @@
# EIPS

| Field | Value |
| --- | --- |
| Name | EIPS |
| Status | deprecated |
| Editor | Ricardo Guilherme Schmidt <ricardo3@status.im> |
| Contributors | None |
---
title: EIPS
name: EIPS
status: deprecated
description: Status relation with the EIPs
editor: Ricardo Guilherme Schmidt <ricardo3@status.im>
contributors:
-
---

## Abstract

@@ -1,11 +1,12 @@
# ETHEREUM-USAGE

| Field | Value |
| --- | --- |
| Name | Status interactions with the Ethereum blockchain |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Andrea Maria Piana <andreap@status.im> |
---
title: ETHEREUM-USAGE
name: Status interactions with the Ethereum blockchain
status: deprecated
description: All interactions that the Status client has with the Ethereum blockchain.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Andrea Maria Piana <andreap@status.im>
---

## Abstract

@@ -1,11 +1,12 @@
# GROUP-CHAT

| Field | Value |
| --- | --- |
| Name | Group Chat |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Andrea Piana <andreap@status.im> |
---
title: GROUP-CHAT
name: Group Chat
status: deprecated
description: This document describes the group chat protocol used by the Status application.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Andrea Piana <andreap@status.im>
---

## Abstract

Before Width: | Height: | Size: 2.4 KiB After Width: | Height: | Size: 2.4 KiB |
Before Width: | Height: | Size: 1.1 KiB After Width: | Height: | Size: 1.1 KiB |
@@ -1,11 +1,12 @@
# Keycard Usage for Wallet and Chat Keys

| Field | Value |
| --- | --- |
| Name | Keycard Usage for Wallet and Chat Keys |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Roman Volosovskyi <roman@status.im> |
---
title: Keycard Usage for Wallet and Chat Keys
name: Keycard Usage for Wallet and Chat Keys
status: deprecated
description: In this specification, we describe how Status communicates with Keycard to create, store and use multiaccount.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Roman Volosovskyi <roman@status.im>
---

## Abstract

@@ -1,11 +1,12 @@
# NOTIFICATIONS

| Field | Value |
| --- | --- |
| Name | Notifications |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Eric Dvorsak <eric@status.im> |
---
title: NOTIFICATIONS
name: Notifications
status: deprecated
description: A client should implement local notifications to offer notifications for any event in the app without the privacy cost and dependency on third party services.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Eric Dvorsak <eric@status.im>
---

## Local Notifications

@@ -1,11 +1,14 @@
# PAYLOADS

| Field | Value |
| --- | --- |
| Name | Payloads |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Adam Babik <adam@status.im>, Andrea Maria Piana <andreap@status.im>, Oskar Thorén <oskar@status.im> |
---
title: PAYLOADS
name: Payloads
status: deprecated
description: Payload of messages in Status, regarding chat and chat-related use cases.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Andrea Maria Piana <andreap@status.im>
- Oskar Thorén <oskar@status.im>
---

## Abstract

@@ -1,11 +1,12 @@
# PUSH-NOTIFICATION-SERVER

| Field | Value |
| --- | --- |
| Name | Push notification server |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Andrea Maria Piana <andreap@status.im> |
---
title: PUSH-NOTIFICATION-SERVER
name: Push notification server
status: deprecated
description: Status provides a set of Push notification services that can be used to achieve this functionality.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Andrea Maria Piana <andreap@status.im>
---

## Reason

@@ -1,11 +1,16 @@
# SECURE-TRANSPORT

| Field | Value |
| --- | --- |
| Name | Secure Transport |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Andrea Maria Piana <andreap@status.im>, Corey Petty <corey@status.im>, Dean Eigenmann <dean@status.im>, Oskar Thorén <oskar@status.im>, Pedro Pombeiro <pedro@status.im> |
---
title: SECURE-TRANSPORT
name: Secure Transport
status: deprecated
description: This document describes how Status provides a secure channel between two peers, providing confidentiality, integrity, authenticity, and forward secrecy.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Andrea Maria Piana <andreap@status.im>
- Corey Petty <corey@status.im>
- Dean Eigenmann <dean@status.im>
- Oskar Thorén <oskar@status.im>
- Pedro Pombeiro <pedro@status.im>
---

## Abstract

@@ -1,11 +1,14 @@
# WAKU-MAILSERVER

| Field | Value |
| --- | --- |
| Name | Waku Mailserver |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Adam Babik <adam@status.im>, Oskar Thorén <oskar@status.im>, Samuel Hawksby-Robinson <samuel@status.im> |
---
title: WAKU-MAILSERVER
name: Waku Mailserver
status: deprecated
description: Waku Mailserver is a specification that allows messages to be stored permanently and delivered to requesting client nodes, even when the messages are no longer available in the network because their TTL has expired.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Oskar Thorén <oskar@status.im>
- Samuel Hawksby-Robinson <samuel@status.im>
---

## Abstract

@@ -1,11 +1,15 @@
# WAKU-USAGE

| Field | Value |
| --- | --- |
| Name | Waku Usage |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Adam Babik <adam@status.im>, Corey Petty <corey@status.im>, Oskar Thorén <oskar@status.im>, Samuel Hawksby-Robinson <samuel@status.im> |
---
title: WAKU-USAGE
name: Waku Usage
status: deprecated
description: Status uses Waku to provide privacy-preserving routing and messaging on top of devP2P.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Corey Petty <corey@status.im>
- Oskar Thorén <oskar@status.im>
- Samuel Hawksby-Robinson <samuel@status.im>
---

## Abstract

@@ -1,11 +1,13 @@
# WHISPER-MAILSERVER

| Field | Value |
| --- | --- |
| Name | Whisper mailserver |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Adam Babik <adam@status.im>, Oskar Thorén <oskar@status.im> |
---
title: WHISPER-MAILSERVER
name: Whisper mailserver
status: deprecated
description: Whisper Mailserver is a Whisper extension that allows messages to be stored permanently and delivered to clients even when they have expired and are no longer available in the network.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Oskar Thorén <oskar@status.im>
---

## Abstract

@@ -1,11 +1,15 @@
# WHISPER-USAGE

| Field | Value |
| --- | --- |
| Name | Whisper Usage |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Adam Babik <adam@status.im>, Andrea Piana <andreap@status.im>, Corey Petty <corey@status.im>, Oskar Thorén <oskar@status.im> |
---
title: WHISPER-USAGE
name: Whisper Usage
status: deprecated
description: Status uses Whisper to provide privacy-preserving routing and messaging on top of devP2P.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Andrea Piana <andreap@status.im>
- Corey Petty <corey@status.im>
- Oskar Thorén <oskar@status.im>
---

## Abstract

@@ -1,12 +1,14 @@
# STATUS-SIMPLE-SCALING

| Field | Value |
| --- | --- |
| Name | Status Simple Scaling |
| Status | raw |
| Category | Informational |
| Editor | Daniel Kaiser <danielkaiser@status.im> |
| Contributors | Alvaro Revuelta <alrevuelta@status.im> |
---
title: STATUS-SIMPLE-SCALING
name: Status Simple Scaling
status: raw
category: Informational
tags: waku/application
description: Describes how to scale Status Communities and Status 1-to-1 chats using Waku v2 protocol and components.
editor: Daniel Kaiser <danielkaiser@status.im>
contributors:
- Alvaro Revuelta <alrevuelta@status.im>
---

## Abstract

@@ -1,12 +1,15 @@
# STATUS-PROTOCOLS
---
title: STATUS-PROTOCOLS
name: Status Protocol Stack
status: raw
category: Standards Track
description: Specifies the Status application protocol stack.
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Jimmy Debe <jimmy@status.im>
- Aaryamann Challani <p1ge0nh8er@proton.me>

| Field | Value |
| --- | --- |
| Name | Status Protocol Stack |
| Status | raw |
| Category | Standards Track |
| Editor | Hanno Cornelius <hanno@status.im> |
| Contributors | Jimmy Debe <jimmy@status.im>, Aaryamann Challani <p1ge0nh8er@proton.me> |
---

## Abstract

@@ -1,11 +1,12 @@
|
||||
# STATUS-MVDS-USAGE
|
||||
|
||||
| Field | Value |
|
||||
| --- | --- |
|
||||
| Name | MVDS Usage in Status |
|
||||
| Status | raw |
|
||||
| Category | Best Current Practice |
|
||||
| Editor | Kaichao Sun <kaichao@status.im> |
|
||||
---
|
||||
title: STATUS-MVDS-USAGE
|
||||
name: MVDS Usage in Status
|
||||
status: raw
|
||||
category: Best Current Practice
|
||||
description: Defines how the MVDS protocol is used by different message types in Status.
|
||||
editor: Kaichao Sun <kaichao@status.im>
|
||||
contributors:
|
||||
---
|
||||
|
||||
## Abstract
|
||||
|
||||
@@ -1,12 +1,13 @@
|
||||
# STATUS-URL-DATA
|
||||
|
||||
| Field | Value |
|
||||
| --- | --- |
|
||||
| Name | Status URL Data |
|
||||
| Status | raw |
|
||||
| Category | Standards Track |
|
||||
| Editor | Felicio Mununga <felicio@status.im> |
|
||||
| Contributors | Aaryamann Challani <aaryamann@status.im> |
|
||||
---
|
||||
title: STATUS-URL-DATA
|
||||
name: Status URL Data
|
||||
status: raw
|
||||
category: Standards Track
|
||||
tags:
|
||||
editor: Felicio Mununga <felicio@status.im>
|
||||
contributors:
|
||||
- Aaryamann Challani <aaryamann@status.im>
|
||||
---
|
||||
|
||||
## Abstract
|
||||
|
||||
@@ -1,11 +1,12 @@
|
||||
# STATUS-URL-SCHEME
|
||||
|
||||
| Field | Value |
|
||||
| --- | --- |
|
||||
| Name | Status URL Scheme |
|
||||
| Status | raw |
|
||||
| Category | Standards Track |
|
||||
| Editor | Felicio Mununga <felicio@status.im> |
|
||||
---
|
||||
title: STATUS-URL-SCHEME
|
||||
name: Status URL Scheme
|
||||
status: raw
|
||||
category: Standards Track
|
||||
tags:
|
||||
editor: Felicio Mununga <felicio@status.im>
|
||||
contributors:
|
||||
---
|
||||
|
||||
## Abstract
|
||||
|
||||
@@ -1,13 +1,19 @@
|
||||
# 1/COSS
|
||||
|
||||
| Field | Value |
|
||||
| --- | --- |
|
||||
| Name | Consensus-Oriented Specification System |
|
||||
| Slug | 1 |
|
||||
| Status | draft |
|
||||
| Category | Best Current Practice |
|
||||
| Editor | Daniel Kaiser <danielkaiser@status.im> |
|
||||
| Contributors | Oskar Thoren <oskarth@titanproxy.com>, Pieter Hintjens <ph@imatix.com>, André Rebentisch <andre@openstandards.de>, Alberto Barrionuevo <abarrio@opentia.es>, Chris Puttick <chris.puttick@thehumanjourney.net>, Yurii Rashkovskii <yrashk@gmail.com>, Jimmy Debe <jimmy@status.im> |
|
||||
---
|
||||
slug: 1
|
||||
title: 1/COSS
|
||||
name: Consensus-Oriented Specification System
|
||||
status: draft
|
||||
category: Best Current Practice
|
||||
editor: Daniel Kaiser <danielkaiser@status.im>
|
||||
contributors:
|
||||
- Oskar Thoren <oskarth@titanproxy.com>
|
||||
- Pieter Hintjens <ph@imatix.com>
|
||||
- André Rebentisch <andre@openstandards.de>
|
||||
- Alberto Barrionuevo <abarrio@opentia.es>
|
||||
- Chris Puttick <chris.puttick@thehumanjourney.net>
|
||||
- Yurii Rashkovskii <yrashk@gmail.com>
|
||||
- Jimmy Debe <jimmy@status.im>
|
||||
---
|
||||
|
||||
This document describes a consensus-oriented specification system (COSS)
|
||||
for building interoperable technical specifications.
|
||||
|
Before Width: | Height: | Size: 42 KiB After Width: | Height: | Size: 42 KiB |
|
Before Width: | Height: | Size: 14 KiB After Width: | Height: | Size: 14 KiB |
|
Before Width: | Height: | Size: 24 KiB After Width: | Height: | Size: 24 KiB |
@@ -1,15 +1,16 @@
|
||||
# 2/MVDS
|
||||
|
||||
| Field | Value |
|
||||
| --- | --- |
|
||||
| Name | Minimum Viable Data Synchronization |
|
||||
| Slug | 2 |
|
||||
| Status | stable |
|
||||
| Editor | Sanaz Taheri <sanaz@status.im> |
|
||||
| Contributors | Dean Eigenmann <dean@status.im>, Oskar Thorén <oskarth@titanproxy.com> |
|
||||
---
|
||||
slug: 2
|
||||
title: 2/MVDS
|
||||
name: Minimum Viable Data Synchronization
|
||||
status: stable
|
||||
editor: Sanaz Taheri <sanaz@status.im>
|
||||
contributors:
|
||||
- Dean Eigenmann <dean@status.im>
|
||||
- Oskar Thorén <oskarth@titanproxy.com>
|
||||
---
|
||||
|
||||
In this specification, we describe a minimum viable protocol for
|
||||
data synchronization inspired by the Bramble Synchronization Protocol ([BSP](https://code.briarproject.org/briar/briar-spec/blob/master/protocols/BSP.md)).
|
||||
data synchronization inspired by the Bramble Synchronization Protocol[^1].
|
||||
This protocol is designed to ensure reliable messaging
|
||||
between peers across an unreliable peer-to-peer (P2P) network where
|
||||
they may be unreachable or unresponsive.
|
||||
@@ -186,4 +187,5 @@ Copyright and related rights waived via [CC0](https://creativecommons.org/public
|
||||
|
||||
## Footnotes
|
||||
|
||||
[^1]: akwizgran et al. [BSP](https://code.briarproject.org/briar/briar-spec/blob/master/protocols/BSP.md). Briar.
|
||||
[^2]: <https://github.com/vacp2p/mvds>
|
||||
@@ -1,11 +1,11 @@
|
||||
# 25/LIBP2P-DNS-DISCOVERY
|
||||
|
||||
| Field | Value |
|
||||
| --- | --- |
|
||||
| Name | Libp2p Peer Discovery via DNS |
|
||||
| Slug | 25 |
|
||||
| Status | deleted |
|
||||
| Editor | Hanno Cornelius <hanno@status.im> |
|
||||
---
|
||||
slug: 25
|
||||
title: 25/LIBP2P-DNS-DISCOVERY
|
||||
name: Libp2p Peer Discovery via DNS
|
||||
status: deleted
|
||||
editor: Hanno Cornelius <hanno@status.im>
|
||||
contributors:
|
||||
---
|
||||
|
||||
`25/LIBP2P-DNS-DISCOVERY` specifies a scheme to implement [`libp2p`](https://libp2p.io/)
|
||||
peer discovery via DNS for Waku v2.
|
||||
|
Before Width: | Height: | Size: 17 KiB After Width: | Height: | Size: 17 KiB |
@@ -1,12 +1,12 @@
|
||||
# 3/REMOTE-LOG
|
||||
|
||||
| Field | Value |
|
||||
| --- | --- |
|
||||
| Name | Remote log specification |
|
||||
| Slug | 3 |
|
||||
| Status | draft |
|
||||
| Editor | Oskar Thorén <oskarth@titanproxy.com> |
|
||||
| Contributors | Dean Eigenmann <dean@status.im> |
|
||||
---
|
||||
slug: 3
|
||||
title: 3/REMOTE-LOG
|
||||
name: Remote log specification
|
||||
status: draft
|
||||
editor: Oskar Thorén <oskarth@titanproxy.com>
|
||||
contributors:
|
||||
- Dean Eigenmann <dean@status.im>
|
||||
---
|
||||
|
||||
A remote log is a replication of a local log.
|
||||
This means a node can read data that originally came from a node that is offline.
|
||||
@@ -1,12 +1,17 @@
|
||||
# 32/RLN-V1
|
||||
|
||||
| Field | Value |
|
||||
| --- | --- |
|
||||
| Name | Rate Limit Nullifier |
|
||||
| Slug | 32 |
|
||||
| Status | draft |
|
||||
| Editor | Aaryamann Challani <p1ge0nh8er@proton.me> |
|
||||
| Contributors | Barry Whitehat <barrywhitehat@protonmail.com>, Sanaz Taheri <sanaz@status.im>, Oskar Thorén <oskarth@titanproxy.com>, Onur Kilic <onurkilic1004@gmail.com>, Blagoj Dimovski <blagoj.dimovski@yandex.com>, Rasul Ibragimov <curryrasul@gmail.com> |
|
||||
---
|
||||
slug: 32
|
||||
title: 32/RLN-V1
|
||||
name: Rate Limit Nullifier
|
||||
status: draft
|
||||
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
|
||||
contributors:
|
||||
- Barry Whitehat <barrywhitehat@protonmail.com>
|
||||
- Sanaz Taheri <sanaz@status.im>
|
||||
- Oskar Thorén <oskarth@titanproxy.com>
|
||||
- Onur Kilic <onurkilic1004@gmail.com>
|
||||
- Blagoj Dimovski <blagoj.dimovski@yandex.com>
|
||||
- Rasul Ibragimov <curryrasul@gmail.com>
|
||||
---
|
||||
|
||||
## Abstract
|
||||
|
||||
@@ -1,12 +1,14 @@
|
||||
# 4/MVDS-META
|
||||
|
||||
| Field | Value |
|
||||
| --- | --- |
|
||||
| Name | MVDS Metadata Field |
|
||||
| Slug | 4 |
|
||||
| Status | draft |
|
||||
| Editor | Sanaz Taheri <sanaz@status.im> |
|
||||
| Contributors | Dean Eigenmann <dean@status.im>, Andrea Maria Piana <andreap@status.im>, Oskar Thorén <oskarth@titanproxy.com> |
|
||||
---
|
||||
slug: 4
|
||||
title: 4/MVDS-META
|
||||
name: MVDS Metadata Field
|
||||
status: draft
|
||||
editor: Sanaz Taheri <sanaz@status.im>
|
||||
contributors:
|
||||
- Dean Eigenmann <dean@status.im>
|
||||
- Andrea Maria Piana <andreap@status.im>
|
||||
- Oskar Thorén <oskarth@titanproxy.com>
|
||||
---
|
||||
|
||||
In this specification, we describe a method to construct message history that
|
||||
will aid the consistency guarantees of [2/MVDS](../2/mvds.md).
|
||||
9
vac/README.md
Normal file
@@ -0,0 +1,9 @@
|
||||
# Vac RFCs
|
||||
|
||||
Vac builds public good protocols for the decentralised web.
|
||||
Vac acts as a custodian for the protocols that live in the RFC-Index repository.
|
||||
With the goal of widespread adoption,
|
||||
Vac will make sure the protocols adhere to a set of principles,
|
||||
including but not limited to liberty, security, privacy, decentralisation and inclusivity.
|
||||
|
||||
To learn more, visit [Vac Research](https://vac.dev/).
|
||||
@@ -1,253 +1,252 @@
|
||||
# HASHGRAPHLIKE CONSENSUS
|
||||
|
||||
| Field | Value |
|
||||
| --- | --- |
|
||||
| Name | Hashgraphlike Consensus Protocol |
|
||||
| Status | raw |
|
||||
| Category | Standards Track |
|
||||
| Editor | Ugur Sen [ugur@status.im](mailto:ugur@status.im) |
|
||||
| Contributors | seemenkina [ekaterina@status.im](mailto:ekaterina@status.im) |
|
||||
|
||||
## Abstract
|
||||
|
||||
This document specifies a scalable, decentralized, and Byzantine Fault Tolerant (BFT)
|
||||
consensus mechanism inspired by Hashgraph, designed for binary decision-making in P2P networks.
|
||||
|
||||
## Motivation
|
||||
|
||||
Consensus is one of the essential components of decentralization.
|
||||
In particular, in decentralized group messaging applications, consensus is used for
binary decision-making to govern the group.
|
||||
Therefore, each user contributes to the decision-making process.
|
||||
Besides achieving decentralization, the consensus mechanism MUST be strong and scalable:

- Strong: under the assumption of at least `2/3` honest users in the network,
each user MUST conclude the same decision.

- Scalable: message propagation in the network MUST occur within `O(log n)` rounds,
where `n` is the total number of peers,
in order to preserve the scalability of the messaging application.
|
||||
|
||||
## Format Specification
|
||||
|
||||
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
|
||||
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document
|
||||
are to be interpreted as described in [2119](https://www.ietf.org/rfc/rfc2119.txt).
|
||||
|
||||
## Flow
|
||||
|
||||
Any user in the group initializes the consensus by creating a proposal.
|
||||
Next, the user broadcasts the proposal to the whole network.
|
||||
Upon receiving the proposal, each user validates it and
adds its vote as yes or no, together with its signature and timestamp.
The user then sends the proposal and vote to a random peer in a P2P setup,
or to a subscribed gossipsub channel if gossip-based messaging is used.
In this way, each user first validates the signatures and then adds its new vote.
Each message sent counts as a round.
After `log(n)` rounds, all users in the network have the others' votes,
provided at least `2/3` of the users are honest, where honesty means following the protocol.
|
||||
|
||||
In general, the voting-based consensus consists of the following phases:
|
||||
|
||||
1. Initialization of voting
|
||||
2. Exchanging votes across the rounds
|
||||
3. Counting the votes
|
||||
|
||||
### Assumptions
|
||||
|
||||
- The users in the P2P network can discover other nodes, or they subscribe to the same channel in a gossipsub network.
|
||||
- We MAY have non-reliable (silent) nodes.
|
||||
- Proposal owners MUST know the number of voters.
|
||||
|
||||
## 1. Initialization of voting
|
||||
|
||||
A user initializes the voting with the proposal payload which is
|
||||
implemented using [protocol buffers v3](https://protobuf.dev/) as follows:
|
||||
|
||||
```protobuf
|
||||
syntax = "proto3";
|
||||
|
||||
package vac.voting;
|
||||
|
||||
message Proposal {
|
||||
string name = 10; // Proposal name
|
||||
string payload = 11; // Proposal description
|
||||
uint32 proposal_id = 12; // Unique identifier of the proposal
|
||||
bytes proposal_owner = 13; // Public key of the creator
|
||||
repeated Vote votes = 14;             // Vote list in the proposal
|
||||
uint32 expected_voters_count = 15; // Maximum number of distinct voters
|
||||
uint32 round = 16; // Number of Votes
|
||||
uint64 timestamp = 17; // Creation time of proposal
|
||||
uint64 expiration_time = 18; // The time interval that the proposal is active.
|
||||
bool liveness_criteria_yes = 19;      // How silent peers' votes are counted
|
||||
}
|
||||
|
||||
message Vote {
|
||||
uint32 vote_id = 20; // Unique identifier of the vote
|
||||
bytes vote_owner = 21; // Voter's public key
|
||||
uint32 proposal_id = 22; // Linking votes and proposals
|
||||
int64 timestamp = 23; // Time when the vote was cast
|
||||
bool vote = 24; // Vote bool value (true/false)
|
||||
bytes parent_hash = 25; // Hash of previous owner's Vote
|
||||
bytes received_hash = 26; // Hash of previous received Vote
|
||||
bytes vote_hash = 27; // Hash of all previously defined fields in Vote
|
||||
bytes signature = 28; // Signature of vote_hash
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
To initiate a consensus for a proposal,
|
||||
a user MUST complete all the fields in the proposal, including attaching its `vote`
|
||||
and the `payload` that shows the purpose of the proposal.
|
||||
Notably, `parent_hash` and `received_hash` are empty strings because there is no previous or received hash.
|
||||
The initialization phase ends when the user who creates the proposal sends it
to a random peer in the network or publishes it to the designated channel.
|
||||
|
||||
## 2. Exchanging votes across the peers
|
||||
|
||||
Once a peer receives the proposal message `P_1` from a 1-1 exchange or a gossipsub channel, it performs the following checks:
|
||||
|
||||
1. Check the signature of each vote in the proposal; in particular for proposal `P_1`,
verify the signature of `V_1`, where `V_1 = P_1.votes[0]`, against `V_1.signature` and `V_1.vote_owner`.
|
||||
2. Do `parent_hash` check: If there are repeated votes from the same sender,
|
||||
check that the hash of the former vote is equal to the `parent_hash` of the later vote.
|
||||
3. Do `received_hash` check: If there are multiple votes in a proposal, check that the hash of a vote is equal to the `received_hash` of the next one.
|
||||
4. After successful verification of the signature and hashes, the receiving peer proceeds to generate `P_2` containing a new vote `V_2` as follows:
|
||||
|
||||
4.1. Add its public key as `P_2.vote_owner`.
|
||||
|
||||
4.2. Set `timestamp`.
|
||||
|
||||
4.3. Set boolean `vote`.
|
||||
|
||||
4.4. Define `V_2.parent_hash = 0` if there is no previous peer's vote, otherwise hash of previous owner's vote.
|
||||
|
||||
4.5. Set `V_2.received_hash = hash(P_1.votes[0])`.
|
||||
|
||||
4.6. Set `proposal_id` for the `vote`.
|
||||
|
||||
4.7. Calculate `vote_hash` by hash of all previously defined fields in Vote:
|
||||
`V_2.vote_hash = hash(vote_id, owner, proposal_id, timestamp, vote, parent_hash, received_hash)`
|
||||
|
||||
4.8. Sign `vote_hash` with the private key corresponding to the `vote_owner` public key, then set `V_2.signature`.
|
||||
|
||||
5. Create `P_2` by adding `V_2` as follows:
|
||||
|
||||
5.1. Assign `P_2.name`, `P_2.proposal_id`, and `P_2.proposal_owner` to be identical to those in `P_1`.
|
||||
|
||||
5.2. Add the `V_2` to the `P_2.Votes` list.
|
||||
|
||||
5.3. Increase the round by one, namely `P_2.round = P_1.round + 1`.
|
||||
|
||||
5.4. Verify that the proposal has not expired by checking that `current_time - P_1.timestamp < P_1.expiration_time`.
If this does not hold, peers ignore the message.
|
||||
|
||||
After the peer creates the proposal `P_2` with its vote `V_2`,
it sends `P_2` to a random peer in the network or
publishes it to the designated channel.
|
||||
|
||||
## 3. Determining the result
|
||||
|
||||
Because consensus depends on meeting a quorum threshold,
|
||||
each peer MUST verify the accumulated votes to determine whether the necessary conditions have been satisfied.
|
||||
The voting result is set to YES if a majority among at least `2n/3` distinct peers vote YES.
|
||||
|
||||
To verify, the `findDistinctVoter` method processes the proposal by traversing its `Votes` list to determine the number of unique voters.
|
||||
|
||||
If this method returns true, the peer proceeds with strong validation,
|
||||
which ensures that if any honest peer reaches a decision,
|
||||
no other honest peer can arrive at a conflicting result.
|
||||
|
||||
1. Check each `signature` in the vote as shown in the [Section 2](#2-exchanging-votes-across-the-peers).
|
||||
|
||||
2. Check the `parent_hash` chain: if there are multiple votes from the same owner, namely `vote_i` and `vote_i+1`,
the parent hash of `vote_i+1` MUST be the hash of `vote_i`.
|
||||
|
||||
3. Check the `received_hash` chain: each `received_hash` of `vote_i+1` MUST equal the hash of `vote_i`.
|
||||
|
||||
4. Check the `timestamp` against replay attacks.
In particular, the `timestamp` MUST NOT be older than the defined freshness threshold.
|
||||
|
||||
5. Check that the liveness criteria defined in the Liveness section are satisfied.
|
||||
|
||||
If a proposal is verified by all the checks,
|
||||
the `countVote` method counts each YES vote from the list of Votes.
|
||||
|
||||
## 4. Properties
|
||||
|
||||
The consensus mechanism satisfies liveness and security properties as follows:
|
||||
|
||||
### Liveness
|
||||
|
||||
Liveness refers to the ability of the protocol to eventually reach a decision when sufficient honest participation is present.
|
||||
In this protocol, if `n > 2` and more than `n/2` of the votes among at least `2n/3` distinct peers are YES,
|
||||
then the consensus result is defined as YES; otherwise, when `n ≤ 2`, unanimous agreement (100% YES votes) is required.
|
||||
|
||||
The peer calculates the result locally as shown in the [Section 3](#3-determining-the-result).
|
||||
From the [hashgraph property](https://hedera.com/learning/hedera-hashgraph/what-is-hashgraph-consensus),
|
||||
if a node could calculate the result of a proposal,
|
||||
it implies that no peer can calculate the opposite of the result.
|
||||
Still, reliability issues can cause some situations where peers cannot receive enough messages,
|
||||
so they cannot calculate the consensus result.
|
||||
|
||||
Rounds are incremented when a peer adds and sends the new proposal.
|
||||
Calculating the required number of rounds, `2n/3` from the distinct peers' votes is achieved in two ways:
|
||||
|
||||
1. `2n/3` rounds in pure P2P networks
|
||||
2. `2` rounds in gossipsub
|
||||
|
||||
Since the message complexity is `O(1)` in the gossipsub channel,
in case the network has reliability issues,
the second round is used by peers that could not receive all the messages from the first round.
|
||||
|
||||
If an honest and online peer has received at least one vote but not enough to reach consensus,
|
||||
it MAY continue to propagate its own vote — and any votes it has received — to support message dissemination.
|
||||
This process can continue beyond the expected round count,
|
||||
as long as it remains within the expiration time defined in the proposal.
|
||||
The expiration time acts as a soft upper bound to ensure that consensus is either reached or aborted within a bounded timeframe.
|
||||
|
||||
#### Equality of votes
|
||||
|
||||
A tie occurs when, after verifying at least `2n/3` distinct voters and
applying `liveness_criteria_yes`, the number of YES and NO votes is equal.
|
||||
|
||||
Handling ties is an application-level decision. The application MUST define a deterministic tie policy:
|
||||
|
||||
- RETRY: re-run the vote with a new `proposal_id`, optionally adjusting parameters.

- REJECT: abort the proposal and return the voting result as NO.
|
||||
|
||||
The chosen policy SHOULD be communicated consistently to all peers via the proposal's `payload` to ensure convergence on the same outcome.
|
||||
|
||||
### Silent Node Management
|
||||
|
||||
Silent nodes are nodes that do not participate in the voting with a YES or NO vote.
There are two possible ways to count the votes of silent peers.

1. **Silent peers mean YES:**
Silent peers are counted as YES votes if the application requires strong explicit support for rejection.
2. **Silent peers mean NO:**
Silent peers are counted as NO votes if the application requires strong explicit support for acceptance.
|
||||
|
||||
By default, `liveness_criteria_yes` is set to true, which means silent peers' votes are counted as YES.
|
||||
|
||||
### Security
|
||||
|
||||
This RFC uses cryptographic primitives to prevent
malicious behaviours such as:
|
||||
|
||||
- Vote forgery attempt: creating unsigned invalid votes
|
||||
- Inconsistent voting: a malicious peer submits conflicting votes (e.g., YES to some peers and NO to others)
|
||||
in different stages of the protocol, violating vote consistency and attempting to undermine consensus.
|
||||
- Integrity breaking attempt: tampering history by changing previous votes.
|
||||
- Replay attack: storing the old votes to maliciously use in fresh voting.
|
||||
|
||||
## 5. Copyright
|
||||
|
||||
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/)
|
||||
|
||||
## 6. References
|
||||
|
||||
- [Hedera Hashgraph](https://hedera.com/learning/hedera-hashgraph/what-is-hashgraph-consensus)
|
||||
- [Gossip about gossip](https://docs.hedera.com/hedera/core-concepts/hashgraph-consensus-algorithms/gossip-about-gossip)
|
||||
- [Simple implementation of hashgraph consensus](https://github.com/conanwu777/hashgraph)
|
||||
---
|
||||
title: HASHGRAPHLIKE CONSENSUS
|
||||
name: Hashgraphlike Consensus Protocol
|
||||
status: raw
|
||||
category: Standards Track
|
||||
tags:
|
||||
editor: Ugur Sen [ugur@status.im](mailto:ugur@status.im)
|
||||
contributors: seemenkina [ekaterina@status.im](mailto:ekaterina@status.im)
|
||||
---
|
||||
## Abstract
|
||||
|
||||
This document specifies a scalable, decentralized, and Byzantine Fault Tolerant (BFT)
|
||||
consensus mechanism inspired by Hashgraph, designed for binary decision-making in P2P networks.
|
||||
|
||||
## Motivation
|
||||
|
||||
Consensus is one of the essential components of decentralization.
|
||||
In particular, in decentralized group messaging applications, consensus is used for
binary decision-making to govern the group.
|
||||
Therefore, each user contributes to the decision-making process.
|
||||
Besides achieving decentralization, the consensus mechanism MUST be strong and scalable:

- Strong: under the assumption of at least `2/3` honest users in the network,
each user MUST conclude the same decision.

- Scalable: message propagation in the network MUST occur within `O(log n)` rounds,
where `n` is the total number of peers,
in order to preserve the scalability of the messaging application.
|
||||
|
||||
## Format Specification
|
||||
|
||||
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
|
||||
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document
|
||||
are to be interpreted as described in [2119](https://www.ietf.org/rfc/rfc2119.txt).
|
||||
|
||||
## Flow
|
||||
|
||||
Any user in the group initializes the consensus by creating a proposal.
|
||||
Next, the user broadcasts the proposal to the whole network.
|
||||
Upon receiving the proposal, each user validates it and
adds its vote as yes or no, together with its signature and timestamp.
The user then sends the proposal and vote to a random peer in a P2P setup,
or to a subscribed gossipsub channel if gossip-based messaging is used.
In this way, each user first validates the signatures and then adds its new vote.
Each message sent counts as a round.
After `log(n)` rounds, all users in the network have the others' votes,
provided at least `2/3` of the users are honest, where honesty means following the protocol.
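The `O(log n)` round bound can be illustrated with a toy push-gossip simulation. This is a sketch under simplified assumptions (each informed peer forwards to one uniformly random peer per round, delivery is reliable); it is not part of the protocol.

```python
import random

def rounds_to_full_coverage(n: int, seed: int = 0) -> int:
    """Count rounds until every peer holds the message when each
    informed peer forwards it to one random peer per round."""
    rng = random.Random(seed)
    informed = {0}          # peer 0 creates the proposal
    rounds = 0
    while len(informed) < n:
        # every informed peer contacts one uniformly random peer
        targets = {rng.randrange(n) for _ in range(len(informed))}
        informed |= targets
        rounds += 1
    return rounds

# The informed set can at most double per round, so at least
# log2(n) rounds are needed; a few extra rounds absorb collisions.
print(rounds_to_full_coverage(64))
```

Because the informed set at most doubles each round, full coverage of `n` peers takes at least `log2(n)` rounds, matching the scalability requirement above.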
|
||||
|
||||
In general, the voting-based consensus consists of the following phases:
|
||||
|
||||
1. Initialization of voting
|
||||
2. Exchanging votes across the rounds
|
||||
3. Counting the votes
|
||||
|
||||
### Assumptions
|
||||
|
||||
- The users in the P2P network can discover other nodes, or they subscribe to the same channel in a gossipsub network.
|
||||
- We MAY have non-reliable (silent) nodes.
|
||||
- Proposal owners MUST know the number of voters.
|
||||
|
||||
## 1. Initialization of voting
|
||||
|
||||
A user initializes the voting with the proposal payload which is
|
||||
implemented using [protocol buffers v3](https://protobuf.dev/) as follows:
|
||||
|
||||
```protobuf
|
||||
syntax = "proto3";
|
||||
|
||||
package vac.voting;
|
||||
|
||||
message Proposal {
|
||||
string name = 10; // Proposal name
|
||||
string payload = 11; // Proposal description
|
||||
uint32 proposal_id = 12; // Unique identifier of the proposal
|
||||
bytes proposal_owner = 13; // Public key of the creator
|
||||
repeated Vote votes = 14;             // Vote list in the proposal
|
||||
uint32 expected_voters_count = 15; // Maximum number of distinct voters
|
||||
uint32 round = 16; // Number of Votes
|
||||
uint64 timestamp = 17; // Creation time of proposal
|
||||
uint64 expiration_time = 18; // The time interval that the proposal is active.
|
||||
bool liveness_criteria_yes = 19;      // How silent peers' votes are counted
|
||||
}
|
||||
|
||||
message Vote {
|
||||
uint32 vote_id = 20; // Unique identifier of the vote
|
||||
bytes vote_owner = 21; // Voter's public key
|
||||
uint32 proposal_id = 22; // Linking votes and proposals
|
||||
int64 timestamp = 23; // Time when the vote was cast
|
||||
bool vote = 24; // Vote bool value (true/false)
|
||||
bytes parent_hash = 25; // Hash of previous owner's Vote
|
||||
bytes received_hash = 26; // Hash of previous received Vote
|
||||
bytes vote_hash = 27; // Hash of all previously defined fields in Vote
|
||||
bytes signature = 28; // Signature of vote_hash
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
To initiate a consensus for a proposal,
|
||||
a user MUST complete all the fields in the proposal, including attaching its `vote`
|
||||
and the `payload` that shows the purpose of the proposal.
|
||||
Notably, `parent_hash` and `received_hash` are empty strings because there is no previous or received hash.
|
||||
The initialization phase ends when the user who creates the proposal sends it
to a random peer in the network or publishes it to the designated channel.
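The initialization step can be sketched as follows. The hash and signature choices here are illustrative assumptions (the RFC does not fix SHA-256, and the `signed:` prefix stands in for a real public-key signature); `Vote` mirrors only the fields used in this section.

```python
import hashlib
import time
from dataclasses import dataclass

def h(*parts: bytes) -> bytes:
    """Illustrative hash over concatenated fields (SHA-256 assumed)."""
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

@dataclass
class Vote:
    vote_id: int
    vote_owner: bytes
    proposal_id: int
    timestamp: int
    vote: bool
    parent_hash: bytes = b""    # empty: no previous vote by this owner
    received_hash: bytes = b""  # empty: nothing received yet
    vote_hash: bytes = b""
    signature: bytes = b""

def make_initial_vote(owner: bytes, proposal_id: int, value: bool) -> Vote:
    """Build the proposal creator's first vote with empty hash links."""
    v = Vote(vote_id=1, vote_owner=owner, proposal_id=proposal_id,
             timestamp=int(time.time()), vote=value)
    v.vote_hash = h(v.vote_id.to_bytes(4, "big"), v.vote_owner,
                    v.proposal_id.to_bytes(4, "big"),
                    v.timestamp.to_bytes(8, "big"),
                    bytes([v.vote]), v.parent_hash, v.received_hash)
    v.signature = b"signed:" + v.vote_hash  # placeholder, not a real signature
    return v

v1 = make_initial_vote(b"alice-pubkey", proposal_id=7, value=True)
```

Note that `parent_hash` and `received_hash` stay empty in the initial vote, exactly as the text above requires.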
|
||||
|
||||
## 2. Exchanging votes across the peers
|
||||
|
||||
Once a peer receives the proposal message `P_1` from a 1-1 exchange or a gossipsub channel, it performs the following checks:
|
||||
|
||||
1. Check the signature of each vote in the proposal; in particular for proposal `P_1`,
verify the signature of `V_1`, where `V_1 = P_1.votes[0]`, against `V_1.signature` and `V_1.vote_owner`.
|
||||
2. Do `parent_hash` check: If there are repeated votes from the same sender,
|
||||
check that the hash of the former vote is equal to the `parent_hash` of the later vote.
|
||||
3. Do `received_hash` check: If there are multiple votes in a proposal, check that the hash of a vote is equal to the `received_hash` of the next one.
|
||||
4. After successful verification of the signature and hashes, the receiving peer proceeds to generate `P_2` containing a new vote `V_2` as follows:
|
||||
|
||||
4.1. Add its public key as `P_2.vote_owner`.
|
||||
|
||||
4.2. Set `timestamp`.
|
||||
|
||||
4.3. Set boolean `vote`.
|
||||
|
||||
4.4. Define `V_2.parent_hash = 0` if there is no previous peer's vote, otherwise hash of previous owner's vote.
|
||||
|
||||
4.5. Set `V_2.received_hash = hash(P_1.votes[0])`.
|
||||
|
||||
4.6. Set `proposal_id` for the `vote`.
|
||||
|
||||
4.7. Calculate `vote_hash` by hash of all previously defined fields in Vote:
|
||||
`V_2.vote_hash = hash(vote_id, owner, proposal_id, timestamp, vote, parent_hash, received_hash)`
|
||||
|
||||
4.8. Sign `vote_hash` with the private key corresponding to the `vote_owner` public key, then set `V_2.signature`.
|
||||
|
||||
5. Create `P_2` by adding `V_2` as follows:
|
||||
|
||||
5.1. Assign `P_2.name`, `P_2.proposal_id`, and `P_2.proposal_owner` to be identical to those in `P_1`.
|
||||
|
||||
5.2. Add the `V_2` to the `P_2.Votes` list.
|
||||
|
||||
5.3. Increase the round by one, namely `P_2.round = P_1.round + 1`.
|
||||
|
||||
5.4. Verify that the proposal has not expired by checking that `current_time - P_1.timestamp < P_1.expiration_time`.
If this does not hold, peers ignore the message.
|
||||
|
||||
After the peer creates the proposal `P_2` with its vote `V_2`,
it sends `P_2` to a random peer in the network or
publishes it to the designated channel.
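The hash-chain checks in steps 2 and 3 above can be sketched as follows. This is a minimal illustration; `Vote` is a stripped-down stand-in for the protobuf message, and signature verification (step 1) is omitted.

```python
from dataclasses import dataclass

@dataclass
class Vote:
    vote_owner: bytes
    vote_hash: bytes
    parent_hash: bytes = b""
    received_hash: bytes = b""

def verify_chains(votes: list) -> bool:
    """parent_hash links an owner's consecutive votes (step 2);
    received_hash links consecutive votes in the proposal (step 3)."""
    last_by_owner = {}
    prev = None
    for v in votes:
        # received_hash check: must equal the hash of the previous vote
        if prev is not None and v.received_hash != prev.vote_hash:
            return False
        # parent_hash check: must equal the same owner's previous vote hash
        last = last_by_owner.get(v.vote_owner)
        if last is not None and v.parent_hash != last.vote_hash:
            return False
        last_by_owner[v.vote_owner] = v
        prev = v
    return True
```

A peer that receives a proposal whose vote list fails either chain check simply discards it, as described above.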
|
||||
|
||||
## 3. Determining the result
|
||||
|
||||
Because consensus depends on meeting a quorum threshold,
|
||||
each peer MUST verify the accumulated votes to determine whether the necessary conditions have been satisfied.
|
||||
The voting result is set to YES if a majority among at least `2n/3` distinct peers vote YES.
|
||||
|
||||
To verify, the `findDistinctVoter` method processes the proposal by traversing its `Votes` list to determine the number of unique voters.
|
||||
|
||||
If this method returns true, the peer proceeds with strong validation,
|
||||
which ensures that if any honest peer reaches a decision,
|
||||
no other honest peer can arrive at a conflicting result.
|
||||
|
||||
1. Check each `signature` in the vote as shown in the [Section 2](#2-exchanging-votes-across-the-peers).
|
||||
|
||||
2. Check the `parent_hash` chain: if there are multiple votes from the same owner, namely `vote_i` and `vote_i+1`,
the parent hash of `vote_i+1` MUST be the hash of `vote_i`.
|
||||
|
||||
3. Check the `received_hash` chain: each `received_hash` of `vote_i+1` MUST equal the hash of `vote_i`.
|
||||
|
||||
4. Check the `timestamp` against replay attacks.
In particular, the `timestamp` MUST NOT be older than the defined freshness threshold.
|
||||
|
||||
5. Check that the liveness criteria defined in the Liveness section are satisfied.
|
||||
|
||||
If a proposal is verified by all the checks,
|
||||
the `countVote` method counts each YES vote from the list of Votes.
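The `findDistinctVoter` and `countVote` methods are referenced but not specified; a minimal interpretation of the quorum-and-majority rule is sketched below. The `(vote_owner, bool)` pair representation is an assumption for brevity.

```python
import math

def consensus_result(votes, n):
    """votes: list of (vote_owner, bool) pairs; n: expected_voters_count.
    Returns True/False once >= 2n/3 distinct voters are seen, else None."""
    latest = {}                 # keep each distinct voter's latest vote
    for owner, value in votes:
        latest[owner] = value
    if len(latest) < math.ceil(2 * n / 3):
        return None             # quorum of distinct voters not reached
    yes = sum(1 for value in latest.values() if value)
    return yes > len(latest) / 2
```

Returning `None` models a peer that has not yet received enough distinct votes and therefore keeps propagating, as described in the Liveness section.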
|
||||
|
||||
## 4. Properties
|
||||
|
||||
The consensus mechanism satisfies liveness and security properties as follows:
|
||||
|
||||
### Liveness
|
||||
|
||||
Liveness refers to the ability of the protocol to eventually reach a decision when sufficient honest participation is present.
|
||||
In this protocol, if `n > 2` and more than `n/2` of the votes among at least `2n/3` distinct peers are YES,
|
||||
then the consensus result is defined as YES; otherwise, when `n ≤ 2`, unanimous agreement (100% YES votes) is required.
|
||||
|
||||
The peer calculates the result locally as shown in the [Section 3](#3-determining-the-result).
|
||||
From the [hashgraph property](https://hedera.com/learning/hedera-hashgraph/what-is-hashgraph-consensus),
|
||||
if a node could calculate the result of a proposal,
|
||||
it implies that no peer can calculate the opposite of the result.
|
||||
Still, reliability issues can cause some situations where peers cannot receive enough messages,
|
||||
so they cannot calculate the consensus result.
|
||||
|
||||
Rounds are incremented when a peer adds and sends the new proposal.
|
||||
Calculating the required number of rounds, `2n/3` from the distinct peers' votes is achieved in two ways:
|
||||
|
||||
1. `2n/3` rounds in pure P2P networks
|
||||
2. `2` rounds in gossipsub
|
||||
|
||||
Since the message complexity is `O(1)` in the gossipsub channel,
in case the network has reliability issues,
the second round is used by peers that could not receive all the messages from the first round.
|
||||
|
||||
If an honest and online peer has received at least one vote but not enough to reach consensus,
it MAY continue to propagate its own vote, and any votes it has received, to support message dissemination.
This process can continue beyond the expected round count,
as long as it remains within the expiration time defined in the proposal.
The expiration time acts as a soft upper bound to ensure that consensus is either reached or aborted within a bounded timeframe.
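
The propagate-until-expiry behaviour described above might be sketched as follows; `inbox`, `broadcast`, and `decide` are illustrative stand-ins, not RFC-defined APIs:

```python
import time

def gossip_until_decided(my_vote, inbox, broadcast, expiration_ts, decide):
    """Sketch: keep propagating known votes until a result is reached
    or the proposal's expiration time passes.

    `inbox()` returns newly received votes, `broadcast()` sends one
    vote to peers, and `decide()` returns None while undecided.
    """
    seen = {my_vote}
    while time.time() < expiration_ts:
        broadcast(my_vote)  # keep re-sharing our own vote
        for vote in inbox():
            if vote not in seen:
                seen.add(vote)
                broadcast(vote)  # forward others' votes to aid dissemination
        result = decide(seen)
        if result is not None:
            return result
    return None  # expiration reached: consensus aborted
```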

#### Equality of votes

An equality of votes occurs when, after verifying at least `2n/3` distinct voters and
applying `liveness_criteria_yes`, the number of YES and NO votes is equal.

Handling ties is an application-level decision. The application MUST define a deterministic tie policy:

- RETRY: re-run the vote with a new `proposal_id`, optionally adjusting parameters.
- REJECT: abort the proposal and return the voting result as NO.

The chosen policy SHOULD be consistent across all peers, conveyed via the proposal's `payload`, to ensure convergence on the same outcome.
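
A minimal sketch of applying a deterministic tie policy (the policy string is assumed to travel in the proposal's `payload`; the representation is an illustration, not an RFC-defined field format):

```python
def resolve_tie(yes: int, no: int, policy: str) -> str:
    """Resolve an equality of votes deterministically."""
    assert yes == no, "only invoked on an equality of votes"
    if policy == "RETRY":
        # The application re-runs the vote under a fresh proposal_id.
        return "RETRY"
    # REJECT: the tie resolves to a NO result.
    return "NO"
```

Because every peer reads the same policy from the proposal, all peers converge on the same outcome for the same tie.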

### Silent Node Management

Silent nodes are nodes that do not cast a YES or NO vote.
There are two possible ways to count the votes of silent peers:

1. **Silent peers mean YES:** silent peers are counted as YES votes when the application wants rejection to require explicit NO votes.
2. **Silent peers mean NO:** silent peers are counted as NO votes when the application wants acceptance to require explicit YES votes.

By default, `liveness_criteria_yes` is set to true,
which means silent peers' votes are counted as YES.
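
The two counting modes can be sketched as follows (the function name is a hypothetical helper, not part of the RFC):

```python
def count_with_silent(yes: int, no: int, n: int,
                      liveness_criteria_yes: bool = True) -> int:
    """Return the effective YES tally once silent peers are folded in.

    `n` is the total number of peers; silent peers are those who cast
    no vote.
    """
    silent = n - yes - no
    if liveness_criteria_yes:
        # Default: silent peers are counted as YES.
        return yes + silent
    # Otherwise silent peers count as NO, leaving the YES tally unchanged.
    return yes
```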

### Security

This RFC uses cryptographic primitives to prevent the
following malicious behaviours:

- Vote forgery attempt: creating unsigned, invalid votes.
- Inconsistent voting: a malicious peer submits conflicting votes (e.g., YES to some peers and NO to others)
in different stages of the protocol, violating vote consistency and attempting to undermine consensus.
- Integrity breaking attempt: tampering with history by changing previous votes.
- Replay attack: storing old votes to maliciously reuse them in fresh voting.
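
As a hedged illustration of how binding each vote to its `proposal_id` defeats forgery and replay, the sketch below uses an HMAC as a stand-in for the RFC's signature scheme (a real deployment would use asymmetric signatures; all names here are assumptions):

```python
import hashlib
import hmac

def vote_tag(key: bytes, proposal_id: str, voter: str, choice: str) -> bytes:
    """Authenticate a vote and bind it to one specific proposal,
    so a vote captured from an old proposal cannot be replayed."""
    msg = f"{proposal_id}|{voter}|{choice}".encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_vote(key: bytes, proposal_id: str, voter: str,
                choice: str, tag: bytes) -> bool:
    # Rejects forged votes (bad tag) and replays (different proposal_id).
    return hmac.compare_digest(tag, vote_tag(key, proposal_id, voter, choice))
```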
## 5. Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

## 6. References

- [Hedera Hashgraph](https://hedera.com/learning/hedera-hashgraph/what-is-hashgraph-consensus)
- [Gossip about gossip](https://docs.hedera.com/hedera/core-concepts/hashgraph-consensus-algorithms/gossip-about-gossip)
- [Simple implementation of hashgraph consensus](https://github.com/conanwu777/hashgraph)

@@ -1,11 +1,11 @@

# ETH-DCGKA

| Field | Value |
| --- | --- |
| Name | Decentralized Key and Session Setup for Secure Messaging over Ethereum |
| Status | raw |
| Category | informational |
| Editor | Ramses Fernandez-Valencia <ramses@status.im> |
---
title: ETH-DCGKA
name: Decentralized Key and Session Setup for Secure Messaging over Ethereum
status: raw
category: informational
editor: Ramses Fernandez-Valencia <ramses@status.im>
contributors:
---

## Abstract