Compare commits

..

9 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Jimmy Debe | b33e8edc9d | Update manifest.md | 2025-12-11 09:19:10 -05:00 |
| Jimmy Debe | fafd2b19cf | Updates | 2025-11-05 10:15:38 -05:00 |
| Jimmy Debe | ae244b7374 | Update Codex manifest | 2025-11-05 09:46:23 -05:00 |
| Jimmy Debe | 512b631f21 | Update codex/raw/manifest.md (Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>) | 2025-11-05 09:26:09 -05:00 |
| Jimmy Debe | 0b5bd120e1 | Update codex/raw/manifest.md (Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>) | 2025-11-05 09:25:35 -05:00 |
| Jimmy Debe | b1ba2e3b2c | Update manifest.md | 2025-10-24 08:15:57 -04:00 |
| Jimmy Debe | 751b7f7f10 | Update manifest.md | 2025-10-16 17:00:39 -04:00 |
| Jimmy Debe | 27bc607455 | Update manifest.md | 2025-09-25 17:25:45 -04:00 |
| Jimmy Debe | 2d9155a33b | Add manifest | 2025-09-25 17:15:32 -04:00 |
167 changed files with 2776 additions and 9374 deletions


@@ -1,6 +1,4 @@
{
"MD013": false,
"MD024": false,
"MD025": false,
"MD033": false
"MD024": false
}


@@ -2,6 +2,9 @@ name: markdown-linting
on:
push:
branches:
- '**'
pull_request:
branches:
- '**'

1
.gitignore vendored

@@ -1 +0,0 @@
book

59
Jenkinsfile vendored

@@ -1,59 +0,0 @@
#!/usr/bin/env groovy
library 'status-jenkins-lib@v1.9.31'
pipeline {
agent {
docker {
label 'linuxcontainer'
image 'harbor.status.im/infra/ci-build-containers:linux-base-1.0.0'
args '--volume=/nix:/nix ' +
'--volume=/etc/nix:/etc/nix ' +
'--user jenkins'
}
}
options {
disableConcurrentBuilds()
buildDiscarder(logRotator(
numToKeepStr: '20',
daysToKeepStr: '30',
))
}
environment {
GIT_COMMITTER_NAME = 'status-im-auto'
GIT_COMMITTER_EMAIL = 'auto@status.im'
}
stages {
stage('Build') {
steps { script {
nix.develop('python scripts/gen_rfc_index.py && python scripts/gen_history.py && mdbook build')
jenkins.genBuildMetaJSON('book/build.json')
} }
}
stage('Publish') {
steps {
sshagent(credentials: ['status-im-auto-ssh']) {
script {
nix.develop("""
ghp-import \
-b ${deployBranch()} \
-c ${deployDomain()} \
-p book
""", pure: false)
}
}
}
}
}
post {
cleanup { cleanWs() }
}
}
def isMainBranch() { GIT_BRANCH ==~ /.*main/ }
def deployBranch() { isMainBranch() ? 'deploy-master' : 'deploy-develop' }
def deployDomain() { isMainBranch() ? 'rfc.vac.dev' : 'dev-rfc.vac.dev' }

47
README.md Normal file

@@ -0,0 +1,47 @@
# Vac Request For Comments (RFC)
*NOTE*: This repo is WIP. We are currently restructuring the RFC process.
This repository contains specifications from the [Waku](https://waku.org/), [Nomos](https://nomos.tech/),
[Codex](https://codex.storage/), and
[Status](https://status.app/) projects that are part of the [IFT portfolio](https://free.technology/).
[Vac](https://vac.dev) is an
[IFT service](https://free.technology/services) that will manage the RFC,
[Request for Comments](https://en.wikipedia.org/wiki/Request_for_Comments),
process within this repository.
## New RFC Process
This repository replaces the previous `rfc.vac.dev` resource.
Each project will maintain initial specifications in separate repositories,
which may be considered as a **raw** specification.
All [Vac](https://vac.dev) **raw** specifications and
discussions will live in the Vac subdirectory.
When projects have reached some level of maturity
for a specification living in their repository,
the process of updating the status to **draft** may begin in this repository.
Specifications will adhere to
[1/COSS](./vac/1/coss.md) before obtaining **draft** status.
Implementations should follow specifications as described,
and all contributions will be discussed before the **stable** status is obtained.
The goal of this RFC process is to engage all interested parties and
reach a rough consensus on technical specifications.
## Contributing
Please see [1/COSS](./vac/1/coss.md) for general guidelines and specification lifecycle.
Feel free to join the [Vac discord](https://discord.gg/Vy54fEWuqC).
Here's the project board used by core contributors and maintainers: [Projects](https://github.com/orgs/vacp2p/projects/5)
## IFT Projects' Raw Specifications
The repositories for each project's **raw** specifications:
- [Vac Raw Specifications](./vac/raw)
- [Status Raw Specifications](./status/raw)
- [Waku Raw Specifications](https://github.com/waku-org/specs/tree/master)
- [Codex Raw Specifications](none)
- [Nomos Raw Specifications](https://github.com/logos-co/nomos-specs)


@@ -1,11 +0,0 @@
[book]
title = "Vac RFC"
authors = ["Jakub Sokołowski"]
language = "en"
src = "docs"
[output.html]
default-theme = "ayu"
additional-css = ["custom.css"]
additional-js = ["scripts/rfc-index.js"]
git-repository-url = "https://github.com/vacp2p/rfc-index"

95
codex/raw/manifest.md Normal file

@@ -0,0 +1,95 @@
---
title: CODEX-MANIFEST
name: Codex Manifest
status: raw
category: Standards Track
tags: codex
editor:
contributors:
- Jimmy Debe <jimmy@status.im>
---
## Abstract
This specification defines the attributes of the Codex manifest.
## Rationale
The Codex manifest describes the metadata of a dataset uploaded to the Codex network.
It is in many ways similar to the BitTorrent metainfo file, also known as a .torrent file
(for more information, see [BEP3](http://bittorrent.org/beps/bep_0003.html) from the BitTorrent Enhancement Proposals (BEPs)).
While BitTorrent metainfo files are generally distributed out-of-band,
the Codex manifest receives its own content identifier, [CIDv1](https://github.com/multiformats/cid#cidv1), that is announced on the Codex DHT;
see also the [CODEX-DHT specification](./dht.md).
In version 1 of the BitTorrent protocol, when a user wants to upload (seed) content to the BitTorrent network,
the client chunks the content into pieces.
For each piece, a hash is computed and
included in the pieces attribute of the info dictionary in the BitTorrent metainfo file.
In Codex,
instead of hashes of individual pieces,
we create a Merkle Tree computed over the blocks in the dataset.
We then include the CID of the root of this Merkle Tree as treeCid attribute in the Codex Manifest file.
See [CODEX-MERKLE-TREE](./merkle-tree.md) for more information.
Version 2 of the BitTorrent protocol also uses Merkle Trees and
includes the root of the tree in the info dictionary of each .torrent file.
The Codex manifest, and its CID in particular,
uniquely identifies the content and
allows that content to be retrieved from any single Codex client.
## Semantics
The keywords “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and
“OPTIONAL” in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).
The manifest is encoded in multibase base58btc encoding and
has a corresponding `CID`, the manifest `CID`.
In Codex, the manifest `CID` SHOULD be announced on the [CODEX-DHT](./dht.md) so that
nodes storing the corresponding manifest block can be found by other clients requesting to download the corresponding dataset.
From the manifest,
providers storing relevant blocks SHOULD be identified using the `treeCid` attribute.
The manifest `CID` in Codex is similar to the `info_hash` from BitTorrent.
The `filename` of the original dataset MAY be included in the manifest.
The `mimetype` of the original dataset MAY be included in the manifest.
### Manifest Attributes
```protobuf
message Manifest {
optional bytes treeCid = 1; // cid (root) of the tree
optional uint32 blockSize = 2; // size of a single block
optional uint64 datasetSize = 3; // size of the dataset
optional uint32 codec = 4; // Dataset codec
optional uint32 hcodec = 5; // Multihash codec
optional uint32 version = 6; // Cid version
optional string filename = 7; // original filename
optional string mimetype = 8; // original mimetype
}
```
| attribute | type | description |
|-----------|------|-------------|
| `treeCid` | bytes | A hash based on [CIDv1](https://github.com/multiformats/cid#cidv1) of the root of the [Codex Tree], which is a form of a Merkle Tree corresponding to the dataset described by the manifest. Its multicodec is (codex-root, 0xCD03) |
| `blockSize` | integer | The size of each block for the given dataset. The default block size used in Codex is 64KiB. |
| `datasetSize` | integer | The total size of all blocks for the original dataset. |
| `codec` | [MultiCodec](https://github.com/multiformats/multicodec) | The multicodec used for the CIDs of the dataset blocks. Codex uses (codex-block, 0xCD02) |
| `hcodec` | [MultiCodec](https://github.com/multiformats/multicodec) | Multicodec used for computing the multihash used in block CIDs. Codex uses (sha2-256, 0x12). |
| `version` | integer | The version of CID used for the dataset blocks. |
| `filename` | string | When provided, it can be used by the client as a file name while downloading the content. |
| `mimetype` | string | When provided, it can be used by the client to set a content type of the downloaded content. |
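As an illustration of the attributes above, the number of blocks covered by the Merkle Tree can be derived from `datasetSize` and `blockSize`. The helper below is a hypothetical sketch, not part of the specification; only the 64 KiB default comes from the table above.

```python
import math

DEFAULT_BLOCK_SIZE = 64 * 1024  # 64 KiB, the default block size noted above

def block_count(dataset_size: int, block_size: int = DEFAULT_BLOCK_SIZE) -> int:
    """Number of blocks a dataset of dataset_size bytes occupies;
    the final block may be only partially filled."""
    return math.ceil(dataset_size / block_size)

# A 1 MiB dataset with the default 64 KiB blocks spans 16 blocks.
assert block_count(1024 * 1024) == 16
```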
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References
- [CODEX-MERKLE-TREE](./merkle-tree.md)
- [BEP3](http://bittorrent.org/beps/bep_0003.html)
- [CODEX-DHT specification](./dht.md)
- [MultiCodec](https://github.com/multiformats/multicodec)


@@ -1,341 +0,0 @@
:root {
--content-max-width: 68em;
}
body {
background: var(--bg);
color: var(--fg);
font-family: "Source Serif Pro", "Iowan Old Style", "Palatino Linotype", "Book Antiqua", Georgia, serif;
line-height: 1.6;
letter-spacing: 0.01em;
}
code, pre, .hljs {
font-family: "SFMono-Regular", Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;
font-size: 0.95em;
}
a {
color: var(--links);
}
a:hover {
color: var(--links);
opacity: 0.85;
}
.page {
background: var(--bg);
box-shadow: none;
border: 1px solid var(--table-border-color);
}
.menu-bar {
background: var(--bg);
box-shadow: none;
border-bottom: 1px solid var(--table-border-color);
min-height: 52px;
}
.menu-title {
font-weight: 600;
color: var(--fg);
}
.icon-button {
box-shadow: none;
border: 1px solid transparent;
}
#sidebar {
background: var(--sidebar-bg);
border-right: 1px solid var(--sidebar-spacer);
box-shadow: none;
}
#sidebar a {
color: var(--sidebar-fg);
}
#sidebar .chapter-item > a strong {
color: var(--sidebar-active);
}
#sidebar .part-title {
color: var(--sidebar-non-existant);
font-weight: 600;
letter-spacing: 0.02em;
}
main h1, main h2, main h3, main h4 {
font-family: "Source Serif Pro", "Iowan Old Style", "Palatino Linotype", "Book Antiqua", Georgia, serif;
color: var(--fg);
font-weight: 600;
margin-top: 1.2em;
margin-bottom: 0.6em;
}
main h1 + table {
margin: 1rem 0 1.5rem 0;
}
main h1 + table th {
width: 10rem;
}
main p, main li {
color: var(--fg);
}
main blockquote {
border-left: 3px solid var(--quote-border);
color: var(--fg);
background: var(--quote-bg);
}
table {
border: 1px solid var(--table-border-color);
border-collapse: collapse;
width: 100%;
}
th, td {
border: 1px solid var(--table-border-color);
padding: 0.5em 0.75em;
}
thead {
background: var(--table-header-bg);
}
.content {
padding: 1.5rem 2rem 3rem 2rem;
}
.nav-chapters, .nav-wrapper {
box-shadow: none;
}
/* Landing layout */
.landing-hero {
margin-bottom: 1.5rem;
padding: 1.25rem 1.5rem;
background: var(--bg);
border: 1px solid var(--table-border-color);
}
.landing-hero p {
margin: 0.3rem 0 0;
color: var(--sidebar-fg);
}
.filter-row {
display: flex;
flex-wrap: wrap;
gap: 0.5rem;
align-items: center;
margin-bottom: 0.75rem;
}
.filter-row input[type="search"] {
padding: 0.5rem 0.65rem;
border: 1px solid var(--searchbar-border-color);
border-radius: 4px;
min-width: 240px;
background: var(--searchbar-bg);
color: var(--searchbar-fg);
}
.chips {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
.chip {
display: inline-flex;
align-items: center;
gap: 0.4rem;
padding: 0.35rem 0.6rem;
border: 1px solid var(--table-border-color);
border-radius: 999px;
background: var(--theme-hover);
color: var(--fg);
cursor: pointer;
font-size: 0.95em;
}
.chip.active {
background: var(--theme-hover);
border-color: var(--sidebar-active);
color: var(--sidebar-active);
font-weight: 600;
}
.quick-links {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
margin: 0.5rem 0 1rem 0;
}
.quick-links a {
border: 1px solid var(--table-border-color);
padding: 0.35rem 0.65rem;
border-radius: 4px;
background: var(--bg);
text-decoration: none;
color: var(--fg);
}
.quick-links a:hover {
border-color: var(--sidebar-active);
color: var(--links);
}
.rfc-table {
width: 100%;
border-collapse: collapse;
margin-top: 0.75rem;
}
.rfc-table th, .rfc-table td {
border: 1px solid var(--table-border-color);
padding: 0.45rem 0.6rem;
}
.rfc-table thead {
background: var(--table-header-bg);
}
.rfc-table tbody tr:hover {
background: var(--theme-hover);
}
.badge {
display: inline-block;
padding: 0.15rem 0.45rem;
border-radius: 4px;
font-size: 0.85em;
border: 1px solid var(--table-border-color);
background: var(--table-alternate-bg);
color: var(--fg);
}
/* Landing polish */
main h1 {
text-align: left;
}
.results-row {
display: flex;
justify-content: space-between;
align-items: baseline;
gap: 1rem;
margin: 0.5rem 0 0.75rem 0;
color: var(--sidebar-fg);
font-size: 0.95em;
}
.results-count {
color: var(--fg);
font-weight: 600;
}
.results-hint {
color: var(--sidebar-fg);
font-size: 0.9em;
}
.table-wrap {
overflow-x: auto;
border: 1px solid var(--table-border-color);
border-radius: 6px;
background: var(--bg);
}
.table-wrap .rfc-table {
margin: 0;
border: none;
}
.rfc-table tbody tr:nth-child(even) {
background: var(--table-alternate-bg);
}
.rfc-table th[data-sort] {
cursor: pointer;
user-select: none;
}
.rfc-table th.sorted {
color: var(--links);
}
.rfc-table td:first-child a {
word-break: break-word;
}
.noscript-note {
margin-top: 0.75rem;
color: var(--sidebar-fg);
}
@media (max-width: 900px) {
.results-row {
flex-direction: column;
align-items: flex-start;
}
.filter-row input[type="search"] {
width: 100%;
min-width: 0;
}
}
.menu-title-link {
position: absolute;
left: 50%;
transform: translateX(-50%);
text-decoration: none;
color: inherit;
}
.menu-title-link .menu-title {
text-decoration: none;
}
.chapter-item > .chapter-link-wrapper > a,
.chapter-item > a {
display: flex;
align-items: center;
gap: 0.4rem;
}
.section-toggle::before {
content: "▸";
display: inline-block;
font-size: 0.9em;
line-height: 1;
transition: transform 0.15s ease;
}
.chapter-item:not(.collapsed) > a .section-toggle::before,
.chapter-item:not(.collapsed) > .chapter-link-wrapper > a .section-toggle::before {
transform: rotate(90deg);
}
.chapter-item.collapsed > ol.section {
display: none;
}
.chapter-item.collapsed + li.section-container > ol.section {
display: none;
}
.chapter-item.collapsed > .chapter-link-wrapper > a .section-toggle::before,
.chapter-item.collapsed > a .section-toggle::before {
transform: rotate(0deg);
}


@@ -1,38 +0,0 @@
# Vac RFC Index
An IETF-style index of Vac-managed RFCs across Waku, Nomos, Codex, and Status. Use the filters below to jump straight to a specification.
<div class="landing-hero">
<div class="filter-row">
<input id="rfc-search" type="search" placeholder="Search by number, title, status, project..." aria-label="Search RFCs">
<div class="chips" id="status-chips">
<span class="chip active" data-status="all" data-label="All">All</span>
<span class="chip" data-status="stable" data-label="Stable">Stable</span>
<span class="chip" data-status="draft" data-label="Draft">Draft</span>
<span class="chip" data-status="raw" data-label="Raw">Raw</span>
<span class="chip" data-status="deprecated" data-label="Deprecated">Deprecated</span>
<span class="chip" data-status="deleted" data-label="Deleted">Deleted</span>
</div>
</div>
<div class="filter-row">
<div class="chips" id="project-chips">
<span class="chip active" data-project="all" data-label="All projects">All projects</span>
<span class="chip" data-project="vac" data-label="Vac">Vac</span>
<span class="chip" data-project="waku" data-label="Waku">Waku</span>
<span class="chip" data-project="status" data-label="Status">Status</span>
<span class="chip" data-project="nomos" data-label="Nomos">Nomos</span>
<span class="chip" data-project="codex" data-label="Codex">Codex</span>
</div>
</div>
</div>
<div class="results-row">
<div id="results-count" class="results-count">Loading RFC index...</div>
<div class="results-hint">Click a column to sort</div>
</div>
<div id="rfc-table-container" class="table-wrap"></div>
<noscript>
<p class="noscript-note">JavaScript is required to load the RFC index table.</p>
</noscript>


@@ -1,116 +0,0 @@
# Summary
[Introduction](README.md)
- [Vac](vac/README.md)
- [1/COSS](vac/1/coss.md)
- [2/MVDS](vac/2/mvds.md)
- [3/Remote Log](vac/3/remote-log.md)
- [4/MVDS Meta](vac/4/mvds-meta.md)
- [25/Libp2p DNS Discovery](vac/25/libp2p-dns-discovery.md)
- [32/RLN-V1](vac/32/rln-v1.md)
- [Raw](vac/raw/README.md)
- [Consensus Hashgraphlike](vac/raw/consensus-hashgraphlike.md)
- [Decentralized Messaging Ethereum](vac/raw/decentralized-messaging-ethereum.md)
- [ETH MLS Offchain](vac/raw/eth-mls-offchain.md)
- [ETH MLS Onchain](vac/raw/eth-mls-onchain.md)
- [ETH SecPM](vac/raw/deleted/eth-secpm.md)
- [Gossipsub Tor Push](vac/raw/gossipsub-tor-push.md)
- [Logos Capability Discovery](vac/raw/logos-capability-discovery.md)
- [Mix](vac/raw/mix.md)
- [Noise X3DH Double Ratchet](vac/raw/noise-x3dh-double-ratchet.md)
- [RLN Interep Spec](vac/raw/rln-interep-spec.md)
- [RLN Stealth Commitments](vac/raw/rln-stealth-commitments.md)
- [RLN-V2](vac/raw/rln-v2.md)
- [SDS](vac/raw/sds.md)
- [Template](vac/template.md)
- [Waku](waku/README.md)
- [Standards - Core](waku/standards/core/README.md)
- [10/Waku2](waku/standards/core/10/waku2.md)
- [11/Relay](waku/standards/core/11/relay.md)
- [12/Filter](waku/standards/core/12/filter.md)
- [13/Store](waku/standards/core/13/store.md)
- [14/Message](waku/standards/core/14/message.md)
- [15/Bridge](waku/standards/core/15/bridge.md)
- [17/RLN Relay](waku/standards/core/17/rln-relay.md)
- [19/Lightpush](waku/standards/core/19/lightpush.md)
- [31/ENR](waku/standards/core/31/enr.md)
- [33/Discv5](waku/standards/core/33/discv5.md)
- [34/Peer Exchange](waku/standards/core/34/peer-exchange.md)
- [36/Bindings API](waku/standards/core/36/bindings-api.md)
- [64/Network](waku/standards/core/64/network.md)
- [66/Metadata](waku/standards/core/66/metadata.md)
- [Standards - Application](waku/standards/application/README.md)
- [20/Toy ETH PM](waku/standards/application/20/toy-eth-pm.md)
- [26/Payload](waku/standards/application/26/payload.md)
- [53/X3DH](waku/standards/application/53/x3dh.md)
- [54/X3DH Sessions](waku/standards/application/54/x3dh-sessions.md)
- [Standards - Legacy](waku/standards/legacy/README.md)
- [6/Waku1](waku/standards/legacy/6/waku1.md)
- [7/Data](waku/standards/legacy/7/data.md)
- [8/Mail](waku/standards/legacy/8/mail.md)
- [9/RPC](waku/standards/legacy/9/rpc.md)
- [Informational](waku/informational/README.md)
- [22/Toy Chat](waku/informational/22/toy-chat.md)
- [23/Topics](waku/informational/23/topics.md)
- [27/Peers](waku/informational/27/peers.md)
- [29/Config](waku/informational/29/config.md)
- [30/Adaptive Nodes](waku/informational/30/adaptive-nodes.md)
- [Deprecated](waku/deprecated/README.md)
- [5/Waku0](waku/deprecated/5/waku0.md)
- [16/RPC](waku/deprecated/16/rpc.md)
- [18/Swap](waku/deprecated/18/swap.md)
- [Fault Tolerant Store](waku/deprecated/fault-tolerant-store.md)
- [Nomos](nomos/README.md)
- [Raw](nomos/raw/README.md)
- [NomosDA Encoding](nomos/raw/nomosda-encoding.md)
- [NomosDA Network](nomos/raw/nomosda-network.md)
- [P2P Hardware Requirements](nomos/raw/p2p-hardware-requirements.md)
- [P2P NAT Solution](nomos/raw/p2p-nat-solution.md)
- [P2P Network Bootstrapping](nomos/raw/p2p-network-bootstrapping.md)
- [P2P Network](nomos/raw/p2p-network.md)
- [SDP](nomos/raw/sdp.md)
- [Deprecated](nomos/deprecated/README.md)
- [Claro](nomos/deprecated/claro.md)
- [Codex](codex/README.md)
- [Raw](codex/raw/README.md)
- [Block Exchange](codex/raw/codex-block-exchange.md)
- [Marketplace](codex/raw/codex-marketplace.md)
- [Status](status/README.md)
- [24/Curation](status/24/curation.md)
- [28/Featuring](status/28/featuring.md)
- [55/1-to-1 Chat](status/55/1to1-chat.md)
- [56/Communities](status/56/communities.md)
- [61/Community History Service](status/61/community-history-service.md)
- [62/Payloads](status/62/payloads.md)
- [63/Keycard Usage](status/63/keycard-usage.md)
- [65/Account Address](status/65/account-address.md)
- [71/Push Notification Server](status/71/push-notification-server.md)
- [Raw](status/raw/README.md)
- [Simple Scaling](status/raw/simple-scaling.md)
- [Status App Protocols](status/raw/status-app-protocols.md)
- [Status MVDS](status/raw/status-mvds.md)
- [URL Data](status/raw/url-data.md)
- [URL Scheme](status/raw/url-scheme.md)
- [Deprecated](status/deprecated/README.md)
- [3rd Party](status/deprecated/3rd-party.md)
- [Account](status/deprecated/account.md)
- [Client](status/deprecated/client.md)
- [Dapp Browser API Usage](status/deprecated/dapp-browser-API-usage.md)
- [EIPs](status/deprecated/eips.md)
- [Ethereum Usage](status/deprecated/ethereum-usage.md)
- [Group Chat](status/deprecated/group-chat.md)
- [IPFS Gateway for Sticker Pack](status/deprecated/IPFS-gateway-for-sticker-Pack.md)
- [Keycard Usage for Wallet and Chat Keys](status/deprecated/keycard-usage-for-wallet-and-chat-keys.md)
- [Notifications](status/deprecated/notifications.md)
- [Payloads](status/deprecated/payloads.md)
- [Push Notification Server](status/deprecated/push-notification-server.md)
- [Secure Transport](status/deprecated/secure-transport.md)
- [Waku Mailserver](status/deprecated/waku-mailserver.md)
- [Waku Usage](status/deprecated/waku-usage.md)
- [Whisper Mailserver](status/deprecated/whisper-mailserver.md)
- [Whisper Usage](status/deprecated/whisper-usage.md)


@@ -1,3 +0,0 @@
# Codex Raw Specifications
Early-stage Codex specifications collected before reaching draft status.

File diff suppressed because it is too large


@@ -1,797 +0,0 @@
# CODEX-MARKETPLACE
| Field | Value |
| --- | --- |
| Name | Codex Storage Marketplace |
| Slug | codex-marketplace |
| Status | raw |
| Category | Standards Track |
| Editor | Codex Team and Dmitriy Ryajov <dryajov@status.im> |
| Contributors | Mark Spanbroek <mark@codex.storage>, Adam Uhlíř <adam@codex.storage>, Eric Mastro <eric@codex.storage>, Jimmy Debe <jimmy@status.im>, Filip Dimitrijevic <filip@status.im> |
## Abstract
Codex Marketplace and its interactions are defined by a smart contract deployed on an EVM-compatible blockchain. This specification describes these interactions for the various roles within the network.
The document is intended for implementors of Codex nodes.
## Semantics
The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).
### Definitions
| Terminology | Description |
|---------------------------|---------------------------------------------------------------------------------------------------------------------------|
| Storage Provider (SP) | A node in the Codex network that provides storage services to the marketplace. |
| Validator | A node that assists in identifying missing storage proofs. |
| Client | A node that interacts with other nodes in the Codex network to store, locate, and retrieve data. |
| Storage Request or Request | A request created by a client node to persist data on the Codex network. |
| Slot or Storage Slot | A space allocated by the storage request to store a piece of the request's dataset. |
| Smart Contract | A smart contract implementing the marketplace functionality. |
| Token | The ERC20-based token used within the Codex network. |
## Motivation
The Codex network aims to create a peer-to-peer storage engine with robust data durability, data persistence guarantees, and a comprehensive incentive structure.
The marketplace is a critical component of the Codex network, serving as a platform where all involved parties interact to ensure data persistence. It provides mechanisms to enforce agreements and facilitate data repair when SPs fail to fulfill their duties.
Implemented as a smart contract on an EVM-compatible blockchain, the marketplace enables various scenarios where nodes assume one or more roles to maintain a reliable persistence layer for users. This specification details these interactions.
The marketplace contract manages storage requests, maintains the state of allocated storage slots, and orchestrates SP rewards, collaterals, and storage proofs.
A node that wishes to participate in the Codex persistence layer MUST implement one or more roles described in this document.
### Roles
A node can assume one of three main roles in the network: client, SP, or validator.
A client is a potentially short-lived node in the network with the purpose of persisting its data in the Codex persistence layer.
An SP is a long-lived node providing storage for clients in exchange for profit. To ensure a reliable, robust service for clients, SPs are required to periodically provide proofs that they are persisting the data.
A validator ensures that SPs have submitted valid proofs in each period in which the smart contract required a proof for the slots filled by the SP.
---
## Part I: Protocol Specification
This part defines the **normative requirements** for the Codex Marketplace protocol. All implementations MUST comply with these requirements to participate in the Codex network. The protocol is defined by smart contract interactions on an EVM-compatible blockchain.
## Storage Request Lifecycle
The diagram below depicts the lifecycle of a storage request:
```text
┌───────────┐
│ Cancelled │
└───────────┘
│ Not all
│ Slots filled
┌───────────┐ ┌──────┴─────────────┐ ┌─────────┐
│ Submitted ├───►│ Slots Being Filled ├──────────►│ Started │
└───────────┘ └────────────────────┘ All Slots └────┬────┘
Filled │
┌───────────────────────┘
Proving ▼
┌────────────────────────────────────────────────────────────┐
│ │
│ Proof submitted │
│ ┌─────────────────────────► All good │
│ │ │
│ Proof required │
│ │ │
│ │ Proof missed │
│ └─────────────────────────► After some time slashed │
│ eventually Slot freed │
│ │
└────────┬─┬─────────────────────────────────────────────────┘
│ │ ▲
│ │ │
│ │ SP kicked out and Slot freed ┌───────┴────────┐
All good │ ├─────────────────────────────►│ Repair process │
Time ran out │ │ └────────────────┘
│ │
│ │ Too many Slots freed ┌────────┐
│ └─────────────────────────────►│ Failed │
▼ └────────┘
┌──────────┐
│ Finished │
└──────────┘
```
## Client Role
A node implementing the client role mediates the persistence of data within the Codex network.
A client has two primary responsibilities:
- Requesting storage from the network by sending a storage request to the smart contract.
- Withdrawing funds from the storage requests previously created by the client.
### Creating Storage Requests
When a user prompts the client node to create a storage request, the client node SHOULD receive the input parameters for the storage request from the user.
To create a request to persist a dataset on the Codex network, client nodes MUST split the dataset into data chunks, $(c_1, c_2, c_3, \ldots, c_{n})$. Using the erasure coding method and the provided input parameters, the data chunks are encoded and distributed over a number of slots. The applied erasure coding method MUST use the [Reed-Solomon algorithm](https://hackmd.io/FB58eZQoTNm-dnhu0Y1XnA). The final slot roots and other metadata MUST be placed into a `Manifest` (TODO: Manifest RFC). The CID for the `Manifest` MUST then be used as the `cid` for the stored dataset.
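The chunking step can be sketched as follows; the Reed-Solomon erasure coding and `Manifest` construction are elided, and the function names are hypothetical (not taken from any Codex implementation):

```python
def chunk_dataset(data: bytes, chunk_size: int) -> list[bytes]:
    """Split a dataset into data chunks c_1 .. c_n; the final chunk
    may be shorter than chunk_size."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def distribute_chunks(chunks: list[bytes], slots: int) -> list[list[bytes]]:
    """Assign chunks to slots round-robin. In the real protocol the
    layout is determined by the erasure coding, so this is only an
    illustration of the fan-out over slots."""
    return [chunks[s::slots] for s in range(slots)]

chunks = chunk_dataset(b"abcdefghij", 4)  # [b"abcd", b"efgh", b"ij"]
```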
After the dataset is prepared, a client node MUST call the smart contract function `requestStorage(request)`, providing the desired request parameters in the `request` parameter. The `request` parameter is of type `Request`:
```solidity
struct Request {
address client;
Ask ask;
Content content;
uint64 expiry;
bytes32 nonce;
}
struct Ask {
uint256 proofProbability;
uint256 pricePerBytePerSecond;
uint256 collateralPerByte;
uint64 slots;
uint64 slotSize;
uint64 duration;
uint64 maxSlotLoss;
}
struct Content {
bytes cid;
bytes32 merkleRoot;
}
```
The table below provides the description of the `Request` and the associated types attributes:
| attribute | type | description |
|-----------|------|-------------|
| `client` | `address` | The Codex node requesting storage. |
| `ask` | `Ask` | Parameters of Request. |
| `content` | `Content` | The dataset that will be hosted with the storage request. |
| `expiry` | `uint64` | Timeout in seconds during which all the slots have to be filled; otherwise the request is cancelled. The final deadline timestamp is calculated at the moment the transaction is mined. |
| `nonce` | `bytes32` | Random value to differentiate this request from other requests with the same parameters. It SHOULD be a random byte array. |
| `pricePerBytePerSecond` | `uint256` | Amount of tokens that will be awarded to SPs for finishing the storage request. It MUST be an amount of tokens offered per slot per second per byte. The Ethereum address that submits the `requestStorage()` transaction MUST have [approval](https://docs.openzeppelin.com/contracts/2.x/api/token/erc20#IERC20-approve-address-uint256-) for the transfer of at least an equivalent amount of full reward (`pricePerBytePerSecond * duration * slots * slotSize`) in tokens. |
| `collateralPerByte` | `uint256` | The amount of tokens per byte of slot's size that SPs submit when they fill slots. Collateral is then slashed or forfeited if SPs fail to provide the service requested by the storage request (more information in the [Slashing](#slashing) section). |
| `proofProbability` | `uint256` | Determines the average frequency that a proof is required within a period: $\frac{1}{proofProbability}$. SPs are required to provide proofs of storage to the marketplace contract when challenged. To prevent hosts from only coming online when proofs are required, the frequency at which proofs are requested from SPs is stochastic and is influenced by the `proofProbability` parameter. |
| `duration` | `uint64` | Total duration of the storage request in seconds. It MUST NOT exceed the limit specified in the configuration `config.requestDurationLimit`. |
| `slots` | `uint64` | The number of requested slots. The slots will all have the same size. |
| `slotSize` | `uint64` | Amount of storage per slot in bytes. |
| `maxSlotLoss` | `uint64` | Max slots that can be lost without data considered to be lost. |
| `cid` | `bytes` | An identifier used to locate the Manifest representing the dataset. It MUST be a [CIDv1](https://github.com/multiformats/cid#cidv1), SHA-256 [multihash](https://github.com/multiformats/multihash) and the data it represents SHOULD be discoverable in the network, otherwise the request will be eventually canceled. |
| `merkleRoot` | `bytes32` | Merkle root of the dataset, used to verify storage proofs |
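The token-approval arithmetic from the `pricePerBytePerSecond` and `collateralPerByte` rows above can be checked with a small sketch; the helper names are hypothetical, and the per-slot collateral formula is an assumption (collateral per byte scaled by slot size):

```python
def full_reward(price_per_byte_per_second: int, duration: int,
                slots: int, slot_size: int) -> int:
    """pricePerBytePerSecond * duration * slots * slotSize: the minimum
    token amount the client address must approve before requestStorage()."""
    return price_per_byte_per_second * duration * slots * slot_size

def slot_collateral(collateral_per_byte: int, slot_size: int) -> int:
    """Collateral an SP submits when filling one slot, assuming it is
    collateralPerByte scaled by the slot size."""
    return collateral_per_byte * slot_size

# 1 token per byte-second, 1 hour, 4 slots of 65536 bytes each:
assert full_reward(1, 3600, 4, 65536) == 943_718_400
```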
#### Renewal of Storage Requests
It should be noted that the marketplace does not support extending requests. If the user wants to extend the duration of a request, a new request with the same CID MUST be [created](#creating-storage-requests) **before the original request completes**.
This ensures that the data will continue to persist in the network at the time when the new (or existing) SPs need to retrieve the complete dataset to fill the slots of the new request.
### Monitoring and State Management
Client nodes MUST implement the following smart contract interactions for monitoring and state management:
- **getRequest(requestId)**: Retrieve the full `StorageRequest` data from the marketplace. This function is used for recovery and state verification after restarts or failures.
- **requestState(requestId)**: Query the current state of a storage request. Used for monitoring request progress and determining the appropriate client actions.
- **requestExpiresAt(requestId)**: Query when the request will expire if not fulfilled.
- **getRequestEnd(requestId)**: Query when a fulfilled request will end (used to determine when to call `freeSlot` or `withdrawFunds`).
Client nodes MUST subscribe to the following marketplace events:
- **RequestFulfilled(requestId)**: Emitted when a storage request has enough filled slots to start. Clients monitor this event to determine when their request becomes active and transitions from the submission phase to the active phase.
- **RequestFailed(requestId)**: Emitted when a storage request fails due to proof failures or other reasons. Clients observe this event to detect failed requests and initiate fund withdrawal.
### Withdrawing Funds
The client node MUST monitor the status of the requests it created. When a storage request enters the `Cancelled`, `Failed`, or `Finished` state, the client node MUST initiate the withdrawal of the remaining or refunded funds from the smart contract using the `withdrawFunds(requestId)` function.
Request states are determined as follows:
- The request is considered `Cancelled` if no `RequestFulfilled(requestId)` event is observed during the timeout specified by the value returned from the `requestExpiresAt(requestId)` function.
- The request is considered `Failed` when the `RequestFailed(requestId)` event is observed.
- The request is considered `Finished` after the interval specified by the value returned from the `getRequestEnd(requestId)` function has elapsed.
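The three state determinations above can be sketched as a small helper. This is a non-normative illustration: `now`, the event flags, and the timestamps are stand-ins for the clock, observed events, and the values returned by `requestExpiresAt`/`getRequestEnd`; the `Active`/`Submitted` labels are illustrative.

```python
def client_request_state(now, fulfilled, failed, expires_at, request_end):
    """Derive the request state a client acts on (rules above).

    fulfilled   -- a RequestFulfilled(requestId) event was observed
    failed      -- a RequestFailed(requestId) event was observed
    expires_at  -- value returned by requestExpiresAt(requestId)
    request_end -- value returned by getRequestEnd(requestId)
    """
    if failed:
        return "Failed"
    if not fulfilled and now >= expires_at:
        return "Cancelled"
    if fulfilled and now >= request_end:
        return "Finished"
    return "Active" if fulfilled else "Submitted"

# In each of Cancelled, Failed, and Finished the client calls withdrawFunds(requestId).
```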
## Storage Provider Role
A Codex node acting as an SP persists data across the network by hosting slots requested by clients in their storage requests.
The following tasks need to be considered when hosting a slot:
- Filling a slot
- Proving
- Repairing a slot
- Collecting request reward and collateral
### Filling Slots
When a new request is created, the `StorageRequested(requestId, ask, expiry)` event is emitted with the following properties:
- `requestId` - the ID of the request.
- `ask` - the specification of the request parameters. For details, see the definition of the `Request` type in the [Creating Storage Requests](#creating-storage-requests) section above.
- `expiry` - a Unix timestamp specifying when the request will be canceled if all slots are not filled by then.
It is then up to the SP node to decide, based on the emitted parameters and the node operator's configuration, whether it wants to participate in the request and attempt to fill its slot(s) (note that one SP can fill more than one slot). If the SP node decides to ignore the request, no further action is required. However, if the SP decides to fill a slot, it MUST follow the remaining steps described below.
The node acting as an SP MUST decide which slot, specified by the slot index, it wants to fill. The SP MAY attempt to fill more than one slot. To fill a slot, the SP MUST first reserve the slot in the smart contract using `reserveSlot(requestId, slotIndex)`. If reservations for this slot are full, or if the SP has already reserved the slot, the transaction will revert. If the reservation was unsuccessful, then the SP is not allowed to fill the slot. If the reservation was successful, the node MUST then download the slot data using the CID of the manifest (**TODO: Manifest RFC**) and the slot index. The CID is specified in `request.content.cid`, which can be retrieved from the smart contract using `getRequest(requestId)`. Then, the node MUST generate a proof over the downloaded data (**TODO: Proving RFC**).
When the proof is ready, the SP MUST call `fillSlot()` on the smart contract with the following REQUIRED parameters:
- `requestId` - the ID of the request.
- `slotIndex` - the slot index that the node wants to fill.
- `proof` - the `Groth16Proof` proof structure, generated over the slot data.
The Ethereum address of the SP node from which the transaction originates MUST have [approval](https://docs.openzeppelin.com/contracts/2.x/api/token/erc20#IERC20-approve-address-uint256-) for the transfer of at least the amount of tokens required as collateral for the slot (`collateralPerByte * slotSize`).
If the proof delivered by the SP is invalid or the slot was already filled by another SP, then the transaction will revert. Otherwise, a `SlotFilled(requestId, slotIndex)` event is emitted. If the transaction is successful, the SP SHOULD transition into the **proving** state, where it will need to submit proof of data possession when challenged by the smart contract.
It should be noted that if the SP node observes a `SlotFilled` event for the slot it is currently downloading the dataset for or generating the proof for, it means that the slot has been filled by another node in the meantime. In response, the SP SHOULD stop its current operation and attempt to fill a different, unfilled slot.
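The fill-slot flow above can be summarised as a sketch. `market`, `download`, `prove`, and `slotIsFilled` are hypothetical stand-ins for the contract binding, the manifest-based data retrieval, the Groth16 prover, and the check for a racing `SlotFilled` event; they are not part of the specified ABI.

```python
def try_fill_slot(market, request_id, slot_index, download, prove):
    """Reserve, download, prove, and fill one slot (steps described above)."""
    if not market.reserveSlot(request_id, slot_index):
        return "reservation-failed"    # reservations full or already reserved
    request = market.getRequest(request_id)
    data = download(request["cid"], slot_index)
    if market.slotIsFilled(request_id, slot_index):
        return "filled-by-other"       # SlotFilled seen meanwhile: try another slot
    proof = prove(data)                # Groth16 proof over the slot data
    market.fillSlot(request_id, slot_index, proof)  # needs collateral approval
    return "filled"
```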
### Proving
Once an SP fills a slot, it MUST submit proofs to the marketplace contract when a challenge is issued by the contract. SPs SHOULD detect that a proof is required for the current period using the `isProofRequired(slotId)` function, or that it will be required using the `willProofBeRequired(slotId)` function in the case that the [proving clock pointer is in downtime](https://github.com/codex-storage/codex-research/blob/41c4b4409d2092d0a5475aca0f28995034e58d14/design/storage-proof-timing.md).
Once an SP knows it has to provide a proof it MUST get the proof challenge using `getChallenge(slotId)`, which then
MUST be incorporated into the proof generation as described in Proving RFC (**TODO: Proving RFC**).
When the proof is generated, it MUST be submitted by calling the `submitProof(slotId, proof)` smart contract function.
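One proving-period check, combining the calls above, might look like the following sketch; `market` and `prove` are hypothetical stand-ins for the contract binding and the proof generator from the Proving RFC.

```python
def proving_step(market, slot_id, prove):
    """One per-period proving check for a filled slot (see steps above)."""
    if market.isProofRequired(slot_id):
        proof = prove(market.getChallenge(slot_id))
        market.submitProof(slot_id, proof)
        return "submitted"
    if market.willProofBeRequired(slot_id):
        return "prepare"   # proving pointer in downtime: a proof is coming up
    return "idle"
```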
#### Slashing
There is a slashing scheme orchestrated by the smart contract to incentivize correct behavior and proper proof submissions by SPs. This scheme is configured at the smart contract level and applies uniformly to all participants in the network. The configuration of the slashing scheme can be obtained via the `configuration()` contract call.
The slashing works as follows:
- When an SP misses a proof and a validator triggers detection of this event using the `markProofAsMissing()` call, the SP is slashed by `config.collateral.slashPercentage` **of the originally required collateral** (hence the slashing amount is always the same for a given request).
- If the number of slashes exceeds `config.collateral.maxNumberOfSlashes`, the slot is freed, the remaining collateral is burned, and the slot is offered to other nodes for repair. The smart contract also emits the `SlotFreed(requestId, slotIndex)` event.
If, at any time, the number of freed slots exceeds the value specified by the `request.ask.maxSlotLoss` parameter, the dataset is considered lost, and the request is deemed _failed_. The collateral of all SPs that hosted the slots associated with the storage request is burned, and the `RequestFailed(requestId)` event is emitted.
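The slashing arithmetic can be made concrete with a sketch. This assumes `slashPercentage` is expressed as a whole-number percentage; names are illustrative and the authoritative values come from `configuration()`.

```python
def slash_amount(original_collateral: int, slash_percentage: int) -> int:
    """Each missed proof slashes a fixed fraction of the *original* collateral,
    so the amount is the same for every slash in a given request."""
    return original_collateral * slash_percentage // 100

def remaining_collateral(original_collateral, slash_percentage, slashes):
    return original_collateral - slashes * slash_amount(original_collateral,
                                                        slash_percentage)

def slot_freed(slashes: int, max_slashes: int) -> bool:
    """The slot is freed (and remaining collateral burned) once the
    number of slashes exceeds config.collateral.maxNumberOfSlashes."""
    return slashes > max_slashes

def request_failed(freed_slots: int, max_slot_loss: int) -> bool:
    """The request fails once more than request.ask.maxSlotLoss slots are freed."""
    return freed_slots > max_slot_loss
```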
### Repair
When a slot is freed due to too many missed proofs, which SHOULD be detected by listening to the `SlotFreed(requestId, slotIndex)` event, an SP node can decide whether to participate in repairing the slot. Similar to filling a slot, the node SHOULD consider the operator's configuration when making this decision. The SP that originally hosted the slot but failed to comply with proving requirements MAY also participate in the repair. However, by refilling the slot, the SP **will not** recover its original collateral and must submit new collateral using the `fillSlot()` call.
The repair process is similar to filling slots. If the original slot dataset is no longer present in the network, the SP MAY use erasure coding to reconstruct the dataset. Reconstructing the original slot dataset requires retrieving other pieces of the dataset stored in other slots belonging to the request. For this reason, the node that successfully repairs a slot is entitled to an additional reward. (**TODO: Implementation**)
The repair process proceeds as follows:
1. The SP observes the `SlotFreed` event and decides to repair the slot.
2. The SP MUST reserve the slot with the `reserveSlot(requestId, slotIndex)` call. For more information see the [Filling Slots](#filling-slots) section.
3. The SP MUST download the chunks of data required to reconstruct the freed slot's data. The node MUST use the [Reed-Solomon algorithm](https://hackmd.io/FB58eZQoTNm-dnhu0Y1XnA) to reconstruct the missing data.
4. The SP MUST generate proof over the reconstructed data.
5. The SP MUST call the `fillSlot()` smart contract function with the same parameters and collateral allowance as described in the [Filling Slots](#filling-slots) section.
### Collecting Funds
An SP node SHOULD monitor the requests and the associated slots it hosts.
When a storage request enters the `Cancelled`, `Finished`, or `Failed` state, the SP node SHOULD call the `freeSlot(slotId)` smart contract function.
The aforementioned storage request states (`Cancelled`, `Finished`, and `Failed`) can be detected as follows:
- A storage request is considered `Cancelled` if no `RequestFulfilled(requestId)` event is observed within the time indicated by the `expiry` request parameter. Note that a `RequestCancelled` event may also be emitted, but the node SHOULD NOT rely on this event to assert the request expiration, as the `RequestCancelled` event is not guaranteed to be emitted at the time of expiry.
- A storage request is considered `Finished` when the time indicated by the value returned from the `getRequestEnd(requestId)` function has elapsed.
- A node concludes that a storage request has `Failed` upon observing the `RequestFailed(requestId)` event.
For each of the states listed above, different funds are handled as follows:
- In the `Cancelled` state, the collateral is returned along with a proportional payout based on the time the node actually hosted the dataset before the expiry was reached.
- In the `Finished` state, the full reward for hosting the slot, along with the collateral, is collected.
- In the `Failed` state, no funds are collected. The reward is returned to the client, and the collateral is burned. The slot is removed from the list of slots and is no longer included in the list of slots returned by the `mySlots()` function.
## Validator Role
In a blockchain, a contract cannot change its state without a transaction and gas initiating the state change. Therefore, our smart contract requires an external trigger to periodically check and confirm that a storage proof has been delivered by the SP. This is where the validator role is essential.
The validator role is fulfilled by nodes that help to verify that SPs have submitted the required storage proofs.
It is the smart contract that checks if the proof requested from an SP has been delivered. The validator only triggers the decision-making function in the smart contract. To incentivize validators, they receive a reward each time they correctly mark a proof as missing, equal to the percentage of the slashed collateral defined by `config.collateral.validatorRewardPercentage`.
Each time a validator observes the `SlotFilled` event, it SHOULD add the slot reported in the `SlotFilled` event to the validator's list of watched slots. Then, after the end of each period, a validator has up to `config.proofs.timeout` seconds (a configuration parameter retrievable with `configuration()`) to validate all the slots. If a slot lacks the required proof, the validator SHOULD call the `markProofAsMissing(slotId, period)` function on the smart contract. This function validates the correctness of the claim and, if it is correct, sends a reward to the validator.
If validating all the slots observed by the validator is not feasible within the specified `timeout`, the validator MAY choose to validate only a subset of the observed slots.
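A validator's per-period loop can be sketched as follows. `market`, `proofMissing`, and `time_left` are hypothetical stand-ins for the contract binding, the missed-proof check, and the seconds remaining before `config.proofs.timeout` elapses.

```python
def validate_period(market, watched_slots, period, time_left):
    """Check watched slots after a period ends, within the timeout (see above)."""
    marked = []
    for slot_id in watched_slots:
        if time_left() <= 0:
            break              # out of time: only a subset gets validated
        if market.proofMissing(slot_id, period):
            market.markProofAsMissing(slot_id, period)  # rewards the validator
            marked.append(slot_id)
    return marked
```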
---
## Part II: Implementation Suggestions
> **IMPORTANT**: The sections above (Abstract through Validator Role) define the normative Codex Marketplace protocol requirements. All implementations MUST comply with those protocol requirements to participate in the Codex network.
>
> **The sections below are non-normative**. They document implementation approaches used in the nim-codex reference implementation. These are suggestions to guide implementors but are NOT required by the protocol. Alternative implementations MAY use different approaches as long as they satisfy the protocol requirements defined in Part I.
This section describes implementation approaches used in reference implementations. These are **suggestions and not normative requirements**. Implementations are free to use different internal architectures, state machines, and data structures as long as they correctly implement the protocol requirements defined above.
### Storage Provider Implementation
The nim-codex reference implementation provides a complete Storage Provider implementation with state machine management, slot queueing, and resource management. This section documents the nim-codex approach.
#### State Machine
The Sales module implements a deterministic state machine for each slot, progressing through the following states:
1. **SalePreparing** - Find a matching availability and create a reservation
2. **SaleSlotReserving** - Reserve the slot on the marketplace
3. **SaleDownloading** - Stream and persist the slot's data
4. **SaleInitialProving** - Wait for stable challenge and generate initial proof
5. **SaleFilling** - Compute collateral and fill the slot
6. **SaleFilled** - Post-filling operations and expiry updates
7. **SaleProving** - Generate and submit proofs periodically
8. **SalePayout** - Free slot and calculate collateral
9. **SaleFinished** - Terminal success state
10. **SaleFailed** - Free slot on market and transition to error
11. **SaleCancelled** - Cancellation path
12. **SaleIgnored** - Sale ignored (no matching availability or other conditions)
13. **SaleErrored** - Terminal error state
14. **SaleUnknown** - Recovery state for crash recovery
15. **SaleProvingSimulated** - Proving with injected failures for testing
All states move to `SaleErrored` if an error is raised.
##### SalePreparing
- Find a matching availability based on the following criteria: `freeSize`, `duration`, `collateralPerByte`, `minPricePerBytePerSecond` and `until`
- Create a reservation
- Move to `SaleSlotReserving` if successful
- Move to `SaleIgnored` if no availability is found or if `BytesOutOfBoundsError` is raised because no space is available.
- Move to `SaleFailed` on `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer, set to the storage contract expiry, elapses
##### SaleSlotReserving
- Check if the slot can be reserved
- Move to `SaleDownloading` if successful
- Move to `SaleIgnored` if `SlotReservationNotAllowedError` is raised or the slot cannot be reserved. The collateral is returned.
- Move to `SaleFailed` on `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer, set to the storage contract expiry, elapses
##### SaleDownloading
- Select the correct data expiry:
- When the request is started, the request end date is used
- Otherwise the expiry date is used
- Stream and persist data via `onStore`
- For each written batch, release bytes from the reservation
- Move to `SaleInitialProving` if successful
- Move to `SaleFailed` on `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer, set to the storage contract expiry, elapses
- Move to `SaleFilled` on `SlotFilled` event from the `marketplace`
##### SaleInitialProving
- Wait for a stable initial challenge
- Produce the initial proof via `onProve`
- Move to `SaleFilling` if successful
- Move to `SaleFailed` on `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer, set to the storage contract expiry, elapses
##### SaleFilling
- Get the slot collateral
- Fill the slot
- Move to `SaleFilled` if successful
- Move to `SaleIgnored` on `SlotStateMismatchError`. The collateral is returned.
- Move to `SaleFailed` on `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer, set to the storage contract expiry, elapses
##### SaleFilled
- Ensure that the current host has filled the slot by checking the signer address
- Notify by calling `onFilled` hook
- Call `onExpiryUpdate` to change the data expiry from expiry date to request end date
- Move to `SaleProving` (or `SaleProvingSimulated` for simulated mode)
- Move to `SaleFailed` on `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer, set to the storage contract expiry, elapses
##### SaleProving
- For each period: fetch challenge, call `onProve`, and submit proof
- Move to `SalePayout` when the slot request ends
- Re-raise `SlotFreedError` when the slot is freed
- Raise `SlotNotFilledError` when the slot is not filled
- Move to `SaleFailed` on `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer, set to the storage contract expiry, elapses
##### SaleProvingSimulated
- Submit invalid proofs every `N` periods (`failEveryNProofs` in configuration) to test failure scenarios
##### SalePayout
- Get the current collateral and try to free the slot to ensure that the slot is freed after payout.
- Forward the returned collateral to cleanup
- Move to `SaleFinished` if successful
- Move to `SaleFailed` on `RequestFailed` event from the `marketplace`
- Move to `SaleCancelled` when the cancellation timer, set to the storage contract expiry, elapses
##### SaleFinished
- Call `onClear` hook
- Call `onCleanUp` hook
##### SaleFailed
- Free the slot
- Move to `SaleErrored` with the failure message
##### SaleCancelled
- Ensure that the node hosting the slot frees the slot
- Call `onClear` hook
- Call `onCleanUp` hook with the current collateral
##### SaleIgnored
- Call `onCleanUp` hook with the current collateral
##### SaleErrored
- Call `onClear` hook
- Call `onCleanUp` hook
##### SaleUnknown
- Recovery entry: get the `on-chain` state and jump to the appropriate state
#### Slot Queue
The slot queue schedules slot work and instantiates one `SalesAgent` per item, with bounded concurrency.
- Accepts `(requestId, slotIndex, …)` items and orders them by priority
- Spawns one `SalesAgent` for each dequeued item (one agent per item)
- Caps concurrent agents to `maxWorkers`
- Supports pause/resume
- Allows controlled requeue when an agent finishes with `reprocessSlot`
##### Slot Ordering
The criteria are in the following order:
1) **Unseen before seen** - Items that have not been seen are dequeued first.
2) **More profitable first** - Higher `profitability` wins. `profitability` is `duration * pricePerSlotPerSecond`.
3) **Less collateral first** - The item with the smaller `collateral` wins.
4) **Later expiry first** - If both items carry an `expiry`, the one with the greater timestamp wins.
Within a single request, per-slot items are shuffled before enqueuing so the default slot-index order does not influence priority.
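The four ordering criteria translate directly into a composite sort key; a sketch with illustrative item fields (the real queue uses `SlotQueueItem`):

```python
import random

def priority_key(item):
    """Dequeue order described above: unseen first, then higher profitability,
    then lower collateral, then later expiry."""
    return (item["seen"],             # False (unseen) sorts before True
            -item["profitability"],   # duration * pricePerSlotPerSecond, higher first
            item["collateral"],       # smaller collateral wins
            -item["expiry"])          # later expiry wins

items = [
    {"seen": True,  "profitability": 900, "collateral": 1, "expiry": 99},
    {"seen": False, "profitability": 200, "collateral": 9, "expiry": 10},
    {"seen": False, "profitability": 200, "collateral": 2, "expiry": 10},
]
random.shuffle(items)   # per-request slot order is shuffled before enqueue
queue = sorted(items, key=priority_key)
# the seen item is dequeued last despite having the highest profitability
```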
##### Pause / Resume
When the Slot queue processes an item with `seen = true`, it means that the item was already evaluated against the current availabilities and did not match.
To avoid draining the queue with untenable requests (due to insufficient availability), the queue pauses itself.
The queue resumes when:
- `OnAvailabilitySaved` fires after an availability update that increases one of: `freeSize`, `duration`, `minPricePerBytePerSecond`, or `totalRemainingCollateral`.
- A new unseen item (`seen = false`) is pushed.
- `unpause()` is called explicitly.
##### Reprocess
Availability matching occurs in `SalePreparing`.
If no availability fits at that time, the sale is ignored with `reprocessSlot` set to `true`, meaning the slot is added back to the queue with the `seen` flag set to `true`.
##### Startup
On `SlotQueue.start()`, the sales module first deletes reservations associated with inactive storage requests, then starts a new `SalesAgent` for each active storage request:
- Fetch the active `on-chain` slots.
- Delete the local reservations for slots that are not in the active list.
- Create a new agent for each slot and assign the `onCleanUp` callback.
- Start the agent in the `SaleUnknown` state.
#### Main Behaviour
When a new slot request is received, the sales module extracts the pair `(requestId, slotIndex, …)` from the request.
A `SlotQueueItem` is then created with metadata such as `profitability`, `collateral`, `expiry`, and the `seen` flag set to `false`.
This item is pushed into the `SlotQueue`, where it will be prioritised according to the ordering rules.
#### SalesAgent
SalesAgent is the instance that executes the state machine for a single slot.
- Executes the sale state machine across the slot lifecycle
- Holds a `SalesContext` with dependencies and host hooks
- Supports crash recovery via the `SaleUnknown` state
- Handles errors by entering `SaleErrored`, which runs cleanup routines
#### SalesContext
SalesContext is a container for dependencies used by all sales.
- Provides external interfaces: `Market` (marketplace) and `Clock`
- Provides access to `Reservations`
- Provides host hooks: `onStore`, `onProve`, `onExpiryUpdate`, `onClear`, `onSale`
- Shares the `SlotQueue` handle for scheduling work
- Provides configuration such as `simulateProofFailures`
- Passed to each `SalesAgent`
#### Marketplace Subscriptions
The sales module subscribes to on-chain events to keep the queue and agents consistent.
##### StorageRequested
When the marketplace signals a new request, the sales module:
- Computes collateral for free slots.
- Creates per-slot `SlotQueueItem` entries (one per `slotIndex`) with `seen = false`.
- Pushes the items into the `SlotQueue`.
##### SlotFreed
When the marketplace signals a freed slot (needs repair), the sales module:
- Retrieves the request data for the `requestId`.
- Computes collateral for repair.
- Creates a `SlotQueueItem`.
- Pushes the item into the `SlotQueue`.
##### RequestCancelled
When a request is cancelled, the sales module removes all queue items for that `requestId`.
##### RequestFulfilled
When a request is fulfilled, the sales module removes all queue items for that `requestId` and notifies active agents bound to the request.
##### RequestFailed
When a request fails, the sales module removes all queue items for that `requestId` and notifies active agents bound to the request.
##### SlotFilled
When a slot is filled, the sales module removes the queue item for that specific `(requestId, slotIndex)` and notifies the active agent for that slot.
##### SlotReservationsFull
When the marketplace signals that reservations are full, the sales module removes the queue item for that specific `(requestId, slotIndex)`.
#### Reservations
The Reservations module manages both Availabilities and Reservations.
When an Availability is created, it reserves bytes in the storage module so no other modules can use those bytes.
Before a dataset for a slot is downloaded, a Reservation is created and the `freeSize` of the Availability is reduced.
When bytes are downloaded, the reservation of those bytes in the storage module is released.
Accounting of both the reserved bytes in the storage module and the `freeSize` in the Availability is cleaned up upon completion of the state machine.
```mermaid
graph TD
A[Availability] -->|creates| R[Reservation]
A -->|reserves bytes in| SM[Storage Module]
R -->|reduces| AF[Availability.freeSize]
R -->|downloads data| D[Dataset]
D -->|releases bytes to| SM
TC[Terminal State] -->|triggers cleanup| C[Cleanup]
C -->|returns bytes to| AF
C -->|deletes| R
C -->|returns collateral to| A
```
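The byte accounting shown in the diagram can be modelled minimally; this toy `Availability` class is an illustration, not the nim-codex type.

```python
class Availability:
    """Toy model of availability byte accounting (see diagram above)."""
    def __init__(self, size: int):
        self.free_size = size          # bytes offered and not yet reserved

    def reserve(self, nbytes: int) -> None:
        if nbytes > self.free_size:
            raise ValueError("bytes out of bounds")  # the sale is ignored
        self.free_size -= nbytes       # a Reservation reduces freeSize

    def release(self, nbytes: int) -> None:
        self.free_size += nbytes       # cleanup returns unused bytes
```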
#### Hooks
- **onStore**: streams data into the node's storage
- **onProve**: produces proofs for initial and periodic proving
- **onExpiryUpdate**: notifies the client node of a change in the expiry data
- **onSale**: notifies that the host is now responsible for the slot
- **onClear**: notification emitted once the state machine has concluded; used to reconcile Availability bytes and reserved bytes in the storage module
- **onCleanUp**: cleanup hook called in terminal states to release resources, delete reservations, and return collateral to availabilities
#### Error Handling
- Always catch `CancelledError` from `nim-chronos` and log a trace, exiting gracefully
- Catch `CatchableError`, log it, and route to `SaleErrored`
#### Cleanup
Cleanup releases resources held by a sales agent and optionally requeues the slot.
- Return reserved bytes to the availability if a reservation exists
- Delete the reservation and return any remaining collateral
- If `reprocessSlot` is true, push the slot back into the queue marked as seen
- Remove the agent from the sales set and track the removal future
#### Resource Management Approach
The nim-codex implementation uses Availabilities and Reservations to manage local storage resources:
##### Reservation Management
- Maintain `Availability` and `Reservation` records locally
- Match incoming slot requests to available capacity using prioritisation rules
- Lock capacity and collateral when creating a reservation
- Release reserved bytes progressively during download and free all remaining resources in terminal states
**Note:** Availabilities and Reservations are completely local to the Storage Provider implementation and are not visible at the protocol level. They provide one approach to managing storage capacity, but other implementations may use different resource management strategies.
---
> **Protocol Compliance Note**: The Storage Provider implementation described above is specific to nim-codex. The only normative requirements for Storage Providers are defined in the [Storage Provider Role](#storage-provider-role) section of Part I. Implementations must satisfy those protocol requirements but may use completely different internal designs.
### Client Implementation
The nim-codex reference implementation provides a complete Client implementation with state machine management for storage request lifecycles. This section documents the nim-codex approach.
The nim-codex implementation uses a state machine pattern to manage purchase lifecycles, providing deterministic state transitions, explicit terminal states, and recovery support. The state machine definitions (state identifiers, transitions, state descriptions, requirements, data models, and interfaces) are documented in the subsections below.
> **Note**: The Purchase module terminology and state machine design are specific to the nim-codex implementation. The protocol only requires that clients interact with the marketplace smart contract as specified in the Client Role section.
#### State Identifiers
- PurchasePending: `pending`
- PurchaseSubmitted: `submitted`
- PurchaseStarted: `started`
- PurchaseFinished: `finished`
- PurchaseErrored: `errored`
- PurchaseCancelled: `cancelled`
- PurchaseFailed: `failed`
- PurchaseUnknown: `unknown`
#### General Rules for All States
- If a `CancelledError` is raised, the state machine logs the cancellation message and takes no further action.
- If a `CatchableError` is raised, the state machine moves to `errored` with the error message.
#### State Transitions
```text
|
v
------------------------- unknown
| / /
v v /
pending ----> submitted ----> started ---------> finished <----/
\ \ /
\ ------------> failed <----/
\ /
--> cancelled <-----------------------
```
**Note:**
Any state can transition to errored upon a `CatchableError`.
`failed` is an intermediate state before `errored`.
`finished`, `cancelled`, and `errored` are terminal states.
#### State Descriptions
**Pending State (`pending`)**
A storage request is being created by making an `on-chain` call. If the storage request creation fails, the state machine moves to the `errored` state with the corresponding error.
**Submitted State (`submitted`)**
The storage request has been created and the purchase waits for the request to start. When it starts, an `on-chain` event `RequestFulfilled` is emitted, triggering the subscription callback, and the state machine moves to the `started` state. If the expiry is reached before the callback is called, the state machine moves to the `cancelled` state.
**Started State (`started`)**
The purchase is active and waits until the end of the request, defined by the storage request parameters, before moving to the `finished` state. A subscription is made to the marketplace to be notified about request failure. If a request failure is notified, the state machine moves to `failed`.
Marketplace subscription signature:
```nim
method subscribeRequestFailed*(market: Market, requestId: RequestId, callback: OnRequestFailed): Future[Subscription] {.base, async.}
```
**Finished State (`finished`)**
The purchase is considered successful and cleanup routines are called. The purchase module calls `marketplace.withdrawFunds` to release the funds locked by the marketplace:
```nim
method withdrawFunds*(market: Market, requestId: RequestId) {.base, async: (raises: [CancelledError, MarketError]).}
```
After that, the purchase is done; no more states are called and the state machine stops successfully.
**Failed State (`failed`)**
If the marketplace emits a `RequestFailed` event, the state machine moves to the `failed` state and the purchase module calls `marketplace.withdrawFunds` (same signature as above) to release the funds locked by the marketplace. After that, the state machine moves to `errored`.
**Cancelled State (`cancelled`)**
The purchase is cancelled and the purchase module calls `marketplace.withdrawFunds` (same signature as above) to release the funds locked by the marketplace. After that, the purchase is terminated; no further states are entered and the state machine stops with the cancellation reason recorded as its error.
**Errored State (`errored`)**
The purchase is terminated; no further states are entered and the state machine stops with the failure reason recorded as its error.
**Unknown State (`unknown`)**
The purchase is in recovery mode, meaning that the state has to be determined. The purchase module calls the marketplace to get the request data (`getRequest`) and the request state (`requestState`):
```nim
method getRequest*(market: Market, id: RequestId): Future[?StorageRequest] {.base, async: (raises: [CancelledError]).}
method requestState*(market: Market, requestId: RequestId): Future[?RequestState] {.base, async.}
```
Based on this information, it moves to the corresponding next state.
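The recovery decision can be sketched as a lookup from the on-chain request state to the purchase state to resume in; the `RequestState` names here are illustrative, and a missing request maps to `errored`.

```python
def recover_purchase_state(request_state):
    """Map the on-chain RequestState to the next purchase state
    (the `unknown` recovery flow described above)."""
    mapping = {
        "New": "submitted",      # request exists but has not started yet
        "Started": "started",
        "Finished": "finished",
        "Cancelled": "cancelled",
        "Failed": "failed",
    }
    return mapping.get(request_state, "errored")  # no usable data: give up
```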
> **Note**: Functional and non-functional requirements for the client role are summarized in the [Codex Marketplace Specification](https://github.com/codex-storage/codex-spec/blob/master/specs/marketplace.md). The requirements listed below are specific to the nim-codex Purchase module implementation.
#### Functional Requirements
##### Purchase Definition
- Every purchase MUST represent exactly one `StorageRequest`
- The purchase MUST have a unique, deterministic identifier `PurchaseId` derived from `requestId`
- It MUST be possible to restore any purchase from its `requestId` after a restart
- A purchase is considered expired when the expiry timestamp in its `StorageRequest` is reached before the request starts, i.e., before a `RequestFulfilled` event is emitted by the `marketplace`
##### State Machine Progression
- New purchases MUST start in the `pending` state (submission flow)
- Recovered purchases MUST start in the `unknown` state (recovery flow)
- The state machine MUST progress step-by-step until a deterministic terminal state is reached
- The choice of terminal state MUST be based on the `RequestState` returned by the `marketplace`
##### Failure Handling
- On marketplace failure events, the purchase MUST immediately transition to `errored` without retries
- If a `CancelledError` is raised, the state machine MUST log the cancellation and stop further processing
- If a `CatchableError` is raised, the state machine MUST transition to `errored` and record the error
#### Non-Functional Requirements
##### Execution Model
A purchase MUST be handled by a single thread; only one worker SHOULD process a given purchase instance at a time.
##### Reliability
The `load` procedure supports recovering purchases after process restarts.
##### Performance
State transitions should be non-blocking; all I/O is async.
##### Logging
All state transitions and errors should be clearly logged for traceability.
##### Safety
- Avoid side effects during `new` other than initialising internal fields; `on-chain` interactions are delegated to states via the `marketplace` dependency.
- External calls SHOULD follow a retry policy.
##### Testing
- Unit tests check that each state handles success and error properly.
- Integration tests check that a full purchase flows correctly through states.
---
> **Protocol Compliance Note**: The Client implementation described above is specific to nim-codex. The only normative requirements for Clients are defined in the [Client Role](#client-role) section of Part I. Implementations must satisfy those protocol requirements but may use completely different internal designs.
---
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References
### Normative
- [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt) - Key words for use in RFCs to Indicate Requirement Levels
- [Reed-Solomon algorithm](https://hackmd.io/FB58eZQoTNm-dnhu0Y1XnA) - Erasure coding algorithm used for data encoding
- [CIDv1](https://github.com/multiformats/cid#cidv1) - Content Identifier specification
- [multihash](https://github.com/multiformats/multihash) - Self-describing hashes
- [Proof-of-Data-Possession](https://hackmd.io/2uRBltuIT7yX0CyczJevYg?view) - Zero-knowledge proof system for storage verification
- [Original Codex Marketplace Spec](https://github.com/codex-storage/codex-spec/blob/master/specs/marketplace.md) - Source specification for this document
### Informative
- [Codex Implementation](https://github.com/codex-storage/nim-codex) - Reference implementation in Nim
- [Codex market implementation](https://github.com/codex-storage/nim-codex/blob/master/codex/market.nim) - Marketplace module implementation
- [Codex Sales Component Spec](https://github.com/codex-storage/codex-docs-obsidian/blob/main/10%20Notes/Specs/Component%20Specification%20-%20Sales.md) - Storage Provider implementation details
- [Codex Purchase Component Spec](https://github.com/codex-storage/codex-docs-obsidian/blob/main/10%20Notes/Specs/Component%20Specification%20-%20Purchase.md) - Client implementation details
- [Nim Chronos](https://github.com/status-im/nim-chronos) - Async/await framework for Nim
- [Storage proof timing design](https://github.com/codex-storage/codex-research/blob/41c4b4409d2092d0a5475aca0f28995034e58d14/design/storage-proof-timing.md) - Proof timing mechanism
@@ -1,3 +0,0 @@
# Nomos Deprecated Specifications
Deprecated Nomos specifications kept for archival and reference purposes.
@@ -1,3 +0,0 @@
# Nomos Raw Specifications
Early-stage Nomos specifications that have not yet progressed beyond raw status.
@@ -1,730 +0,0 @@
[
{
"project": "codex",
"slug": "Codex Block Exchange Protocol",
"title": "Codex Block Exchange Protocol",
"status": "raw",
"category": "Standards Track",
"path": "codex/raw/codex-block-exchange.html"
},
{
"project": "codex",
"slug": "codex-marketplace",
"title": "Codex Storage Marketplace",
"status": "raw",
"category": "Standards Track",
"path": "codex/raw/codex-marketplace.html"
},
{
"project": "nomos",
"slug": "Claro Consensus Protocol",
"title": "Claro Consensus Protocol",
"status": "deprecated",
"category": "Standards Track",
"path": "nomos/deprecated/claro.html"
},
{
"project": "nomos",
"slug": "Nomos P2P Network Bootstrapping Specification",
"title": "Nomos P2P Network Bootstrapping Specification",
"status": "raw",
"category": "networking",
"path": "nomos/raw/p2p-network-bootstrapping.html"
},
{
"project": "nomos",
"slug": "Nomos P2P Network NAT Solution Specification",
"title": "Nomos P2P Network NAT Solution Specification",
"status": "raw",
"category": "networking",
"path": "nomos/raw/p2p-nat-solution.html"
},
{
"project": "nomos",
"slug": "Nomos P2P Network Specification",
"title": "Nomos P2P Network Specification",
"status": "draft",
"category": "networking",
"path": "nomos/raw/p2p-network.html"
},
{
"project": "nomos",
"slug": "Nomos Service Declaration Protocol Specification",
"title": "Nomos Service Declaration Protocol Specification",
"status": "raw",
"category": "unspecified",
"path": "nomos/raw/sdp.html"
},
{
"project": "nomos",
"slug": "Nomos p2p Network Hardware Requirements Specification",
"title": "Nomos p2p Network Hardware Requirements Specification",
"status": "raw",
"category": "infrastructure",
"path": "nomos/raw/p2p-hardware-requirements.html"
},
{
"project": "nomos",
"slug": "NomosDA Encoding Protocol",
"title": "NomosDA Encoding Protocol",
"status": "raw",
"category": "unspecified",
"path": "nomos/raw/nomosda-encoding.html"
},
{
"project": "nomos",
"slug": "NomosDA Network",
"title": "NomosDA Network",
"status": "raw",
"category": "unspecified",
"path": "nomos/raw/nomosda-network.html"
},
{
"project": "status",
"slug": "24",
"title": "Status Community Directory Curation Voting using Waku v2",
"status": "draft",
"category": "unspecified",
"path": "status/24/curation.html"
},
{
"project": "status",
"slug": "28",
"title": "Status community featuring using waku v2",
"status": "draft",
"category": "unspecified",
"path": "status/28/featuring.html"
},
{
"project": "status",
"slug": "3rd party",
"title": "3rd party",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/3rd-party.html"
},
{
"project": "status",
"slug": "55",
"title": "Status 1-to-1 Chat",
"status": "draft",
"category": "Standards Track",
"path": "status/55/1to1-chat.html"
},
{
"project": "status",
"slug": "56",
"title": "Status Communities that run over Waku v2",
"status": "draft",
"category": "Standards Track",
"path": "status/56/communities.html"
},
{
"project": "status",
"slug": "61",
"title": "Status Community History Service",
"status": "draft",
"category": "Standards Track",
"path": "status/61/community-history-service.html"
},
{
"project": "status",
"slug": "62",
"title": "Status Message Payloads",
"status": "draft",
"category": "unspecified",
"path": "status/62/payloads.html"
},
{
"project": "status",
"slug": "63",
"title": "Status Keycard Usage",
"status": "draft",
"category": "Standards Track",
"path": "status/63/keycard-usage.html"
},
{
"project": "status",
"slug": "65",
"title": "Status Account Address",
"status": "draft",
"category": "Standards Track",
"path": "status/65/account-address.html"
},
{
"project": "status",
"slug": "71",
"title": "Push Notification Server",
"status": "draft",
"category": "Standards Track",
"path": "status/71/push-notification-server.html"
},
{
"project": "status",
"slug": "Account",
"title": "Account",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/account.html"
},
{
"project": "status",
"slug": "Client",
"title": "Client",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/client.html"
},
{
"project": "status",
"slug": "Dapp browser API usage",
"title": "Dapp browser API usage",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/dapp-browser-API-usage.html"
},
{
"project": "status",
"slug": "EIPS",
"title": "EIPS",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/eips.html"
},
{
"project": "status",
"slug": "Group Chat",
"title": "Group Chat",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/group-chat.html"
},
{
"project": "status",
"slug": "IPFS gateway for Sticker Pack",
"title": "IPFS gateway for Sticker Pack",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/IPFS-gateway-for-sticker-Pack.html"
},
{
"project": "status",
"slug": "Keycard Usage for Wallet and Chat Keys",
"title": "Keycard Usage for Wallet and Chat Keys",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/keycard-usage-for-wallet-and-chat-keys.html"
},
{
"project": "status",
"slug": "MVDS Usage in Status",
"title": "MVDS Usage in Status",
"status": "raw",
"category": "Best Current Practice",
"path": "status/raw/status-mvds.html"
},
{
"project": "status",
"slug": "Notifications",
"title": "Notifications",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/notifications.html"
},
{
"project": "status",
"slug": "Payloads",
"title": "Payloads",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/payloads.html"
},
{
"project": "status",
"slug": "Push notification server",
"title": "Push notification server",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/push-notification-server.html"
},
{
"project": "status",
"slug": "Secure Transport",
"title": "Secure Transport",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/secure-transport.html"
},
{
"project": "status",
"slug": "Status Protocol Stack",
"title": "Status Protocol Stack",
"status": "raw",
"category": "Standards Track",
"path": "status/raw/status-app-protocols.html"
},
{
"project": "status",
"slug": "Status Simple Scaling",
"title": "Status Simple Scaling",
"status": "raw",
"category": "Informational",
"path": "status/raw/simple-scaling.html"
},
{
"project": "status",
"slug": "Status URL Data",
"title": "Status URL Data",
"status": "raw",
"category": "Standards Track",
"path": "status/raw/url-data.html"
},
{
"project": "status",
"slug": "Status URL Scheme",
"title": "Status URL Scheme",
"status": "raw",
"category": "Standards Track",
"path": "status/raw/url-scheme.html"
},
{
"project": "status",
"slug": "Status interactions with the Ethereum blockchain",
"title": "Status interactions with the Ethereum blockchain",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/ethereum-usage.html"
},
{
"project": "status",
"slug": "Waku Mailserver",
"title": "Waku Mailserver",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/waku-mailserver.html"
},
{
"project": "status",
"slug": "Waku Usage",
"title": "Waku Usage",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/waku-usage.html"
},
{
"project": "status",
"slug": "Whisper Usage",
"title": "Whisper Usage",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/whisper-usage.html"
},
{
"project": "status",
"slug": "Whisper mailserver",
"title": "Whisper mailserver",
"status": "deprecated",
"category": "unspecified",
"path": "status/deprecated/whisper-mailserver.html"
},
{
"project": "vac",
"slug": "1",
"title": "Consensus-Oriented Specification System",
"status": "draft",
"category": "Best Current Practice",
"path": "vac/1/coss.html"
},
{
"project": "vac",
"slug": "2",
"title": "Minimum Viable Data Synchronization",
"status": "stable",
"category": "unspecified",
"path": "vac/2/mvds.html"
},
{
"project": "vac",
"slug": "25",
"title": "Libp2p Peer Discovery via DNS",
"status": "deleted",
"category": "unspecified",
"path": "vac/25/libp2p-dns-discovery.html"
},
{
"project": "vac",
"slug": "3",
"title": "Remote log specification",
"status": "draft",
"category": "unspecified",
"path": "vac/3/remote-log.html"
},
{
"project": "vac",
"slug": "32",
"title": "Rate Limit Nullifier",
"status": "draft",
"category": "unspecified",
"path": "vac/32/rln-v1.html"
},
{
"project": "vac",
"slug": "4",
"title": "MVDS Metadata Field",
"status": "draft",
"category": "unspecified",
"path": "vac/4/mvds-meta.html"
},
{
"project": "vac",
"slug": "Decentralized Key and Session Setup for Secure Messaging over Ethereum",
"title": "Decentralized Key and Session Setup for Secure Messaging over Ethereum",
"status": "raw",
"category": "informational",
"path": "vac/raw/decentralized-messaging-ethereum.html"
},
{
"project": "vac",
"slug": "Gossipsub Tor Push",
"title": "Gossipsub Tor Push",
"status": "raw",
"category": "Standards Track",
"path": "vac/raw/gossipsub-tor-push.html"
},
{
"project": "vac",
"slug": "Hashgraphlike Consensus Protocol",
"title": "Hashgraphlike Consensus Protocol",
"status": "raw",
"category": "Standards Track",
"path": "vac/raw/consensus-hashgraphlike.html"
},
{
"project": "vac",
"slug": "Interep as group management for RLN",
"title": "Interep as group management for RLN",
"status": "raw",
"category": "unspecified",
"path": "vac/raw/rln-interep-spec.html"
},
{
"project": "vac",
"slug": "Libp2p Mix Protocol",
"title": "Libp2p Mix Protocol",
"status": "raw",
"category": "Standards Track",
"path": "vac/raw/mix.html"
},
{
"project": "vac",
"slug": "Logos Capability Discovery Protocol",
"title": "Logos Capability Discovery Protocol",
"status": "raw",
"category": "Standards Track",
"path": "vac/raw/logos-capability-discovery.html"
},
{
"project": "vac",
"slug": "RLN Stealth Commitment Usage",
"title": "RLN Stealth Commitment Usage",
"status": "unknown",
"category": "Standards Track",
"path": "vac/raw/rln-stealth-commitments.html"
},
{
"project": "vac",
"slug": "Rate Limit Nullifier V2",
"title": "Rate Limit Nullifier V2",
"status": "raw",
"category": "unspecified",
"path": "vac/raw/rln-v2.html"
},
{
"project": "vac",
"slug": "Scalable Data Sync protocol for distributed logs",
"title": "Scalable Data Sync protocol for distributed logs",
"status": "raw",
"category": "unspecified",
"path": "vac/raw/sds.html"
},
{
"project": "vac",
"slug": "Secure 1-to-1 channel setup using X3DH and the double ratchet",
"title": "Secure 1-to-1 channel setup using X3DH and the double ratchet",
"status": "raw",
"category": "Standards Track",
"path": "vac/raw/noise-x3dh-double-ratchet.html"
},
{
"project": "vac",
"slug": "Secure channel setup using Ethereum accounts",
"title": "Secure channel setup using Ethereum accounts",
"status": "deleted",
"category": "Standards Track",
"path": "vac/raw/deleted/eth-secpm.html"
},
{
"project": "vac",
"slug": "Secure channel setup using decentralized MLS and Ethereum accounts",
"title": "Secure channel setup using decentralized MLS and Ethereum accounts",
"status": "raw",
"category": "Standards Track",
"path": "vac/raw/eth-mls-onchain.html"
},
{
"project": "vac",
"slug": "Secure channel setup using decentralized MLS and Ethereum accounts",
"title": "Secure channel setup using decentralized MLS and Ethereum accounts",
"status": "raw",
"category": "Standards Track",
"path": "vac/raw/eth-mls-offchain.html"
},
{
"project": "waku",
"slug": "10",
"title": "Waku v2",
"status": "draft",
"category": "unspecified",
"path": "waku/standards/core/10/waku2.html"
},
{
"project": "waku",
"slug": "11",
"title": "Waku v2 Relay",
"status": "stable",
"category": "unspecified",
"path": "waku/standards/core/11/relay.html"
},
{
"project": "waku",
"slug": "12",
"title": "Waku v2 Filter",
"status": "draft",
"category": "unspecified",
"path": "waku/standards/core/12/filter.html"
},
{
"project": "waku",
"slug": "13",
"title": "Waku Store Query",
"status": "draft",
"category": "unspecified",
"path": "waku/standards/core/13/store.html"
},
{
"project": "waku",
"slug": "14",
"title": "Waku v2 Message",
"status": "stable",
"category": "Standards Track",
"path": "waku/standards/core/14/message.html"
},
{
"project": "waku",
"slug": "15",
"title": "Waku Bridge",
"status": "draft",
"category": "unspecified",
"path": "waku/standards/core/15/bridge.html"
},
{
"project": "waku",
"slug": "16",
"title": "Waku v2 RPC API",
"status": "deprecated",
"category": "unspecified",
"path": "waku/deprecated/16/rpc.html"
},
{
"project": "waku",
"slug": "17",
"title": "Waku v2 RLN Relay",
"status": "draft",
"category": "unspecified",
"path": "waku/standards/core/17/rln-relay.html"
},
{
"project": "waku",
"slug": "18",
"title": "Waku SWAP Accounting",
"status": "deprecated",
"category": "unspecified",
"path": "waku/deprecated/18/swap.html"
},
{
"project": "waku",
"slug": "19",
"title": "Waku v2 Light Push",
"status": "draft",
"category": "unspecified",
"path": "waku/standards/core/19/lightpush.html"
},
{
"project": "waku",
"slug": "20",
"title": "Toy Ethereum Private Message",
"status": "draft",
"category": "unspecified",
"path": "waku/standards/application/20/toy-eth-pm.html"
},
{
"project": "waku",
"slug": "21",
"title": "Waku v2 Fault-Tolerant Store",
"status": "deleted",
"category": "unspecified",
"path": "waku/deprecated/fault-tolerant-store.html"
},
{
"project": "waku",
"slug": "22",
"title": "Waku v2 Toy Chat",
"status": "draft",
"category": "unspecified",
"path": "waku/informational/22/toy-chat.html"
},
{
"project": "waku",
"slug": "23",
"title": "Waku v2 Topic Usage Recommendations",
"status": "draft",
"category": "Informational",
"path": "waku/informational/23/topics.html"
},
{
"project": "waku",
"slug": "26",
"title": "Waku Message Payload Encryption",
"status": "draft",
"category": "unspecified",
"path": "waku/standards/application/26/payload.html"
},
{
"project": "waku",
"slug": "27",
"title": "Waku v2 Client Peer Management Recommendations",
"status": "draft",
"category": "unspecified",
"path": "waku/informational/27/peers.html"
},
{
"project": "waku",
"slug": "29",
"title": "Waku v2 Client Parameter Configuration Recommendations",
"status": "draft",
"category": "unspecified",
"path": "waku/informational/29/config.html"
},
{
"project": "waku",
"slug": "30",
"title": "Adaptive nodes",
"status": "draft",
"category": "unspecified",
"path": "waku/informational/30/adaptive-nodes.html"
},
{
"project": "waku",
"slug": "31",
"title": "Waku v2 usage of ENR",
"status": "draft",
"category": "unspecified",
"path": "waku/standards/core/31/enr.html"
},
{
"project": "waku",
"slug": "33",
"title": "Waku v2 Discv5 Ambient Peer Discovery",
"status": "draft",
"category": "unspecified",
"path": "waku/standards/core/33/discv5.html"
},
{
"project": "waku",
"slug": "34",
"title": "Waku2 Peer Exchange",
"status": "draft",
"category": "Standards Track",
"path": "waku/standards/core/34/peer-exchange.html"
},
{
"project": "waku",
"slug": "36",
"title": "Waku v2 C Bindings API",
"status": "draft",
"category": "unspecified",
"path": "waku/standards/core/36/bindings-api.html"
},
{
"project": "waku",
"slug": "5",
"title": "Waku v0",
"status": "deprecated",
"category": "unspecified",
"path": "waku/deprecated/5/waku0.html"
},
{
"project": "waku",
"slug": "53",
"title": "X3DH usage for Waku payload encryption",
"status": "draft",
"category": "Standards Track",
"path": "waku/standards/application/53/x3dh.html"
},
{
"project": "waku",
"slug": "54",
"title": "Session management for Waku X3DH",
"status": "draft",
"category": "Standards Track",
"path": "waku/standards/application/54/x3dh-sessions.html"
},
{
"project": "waku",
"slug": "6",
"title": "Waku v1",
"status": "stable",
"category": "unspecified",
"path": "waku/standards/legacy/6/waku1.html"
},
{
"project": "waku",
"slug": "64",
"title": "Waku v2 Network",
"status": "draft",
"category": "Best Current Practice",
"path": "waku/standards/core/64/network.html"
},
{
"project": "waku",
"slug": "66",
"title": "Waku Metadata Protocol",
"status": "draft",
"category": "unspecified",
"path": "waku/standards/core/66/metadata.html"
},
{
"project": "waku",
"slug": "7",
"title": "Waku Envelope data field",
"status": "stable",
"category": "unspecified",
"path": "waku/standards/legacy/7/data.html"
},
{
"project": "waku",
"slug": "8",
"title": "Waku Mailserver",
"status": "stable",
"category": "unspecified",
"path": "waku/standards/legacy/8/mail.html"
},
{
"project": "waku",
"slug": "9",
"title": "Waku RPC API",
"status": "stable",
"category": "unspecified",
"path": "waku/standards/legacy/9/rpc.html"
}
]
@@ -1,3 +0,0 @@
# Status Deprecated Specifications
Deprecated Status specifications maintained for archival purposes.
@@ -1,3 +0,0 @@
# Status Raw Specifications
Early-stage Status specifications that precede draft or stable status.
@@ -1,435 +0,0 @@
# ETH-MLS-OFFCHAIN
| Field | Value |
| --- | --- |
| Name | Secure channel setup using decentralized MLS and Ethereum accounts |
| Status | raw |
| Category | Standards Track |
| Editor | Ugur Sen [ugur@status.im](mailto:ugur@status.im) |
| Contributors | seemenkina [ekaterina@status.im](mailto:ekaterina@status.im) |
## Abstract
The following document specifies Ethereum authenticated scalable
and decentralized secure group messaging application by
integrating Message Layer Security (MLS) backend.
Decentralization here means that each user is a node in a P2P network and
each user has a voice in any changes to the group.
This is achieved by integrating a consensus mechanism.
Lastly, this RFC can also be referred to as de-MLS,
decentralized MLS, to emphasize its deviation
from the centralized trust assumptions of traditional MLS deployments.
## Motivation
Group messaging is a fundamental part of digital communication,
yet most existing systems depend on centralized servers,
which introduce risks around privacy, censorship, and unilateral control.
In restrictive settings, servers can be blocked or surveilled;
in more open environments, users still face opaque moderation policies,
data collection, and exclusion from decision-making processes.
To address this, we propose a decentralized, scalable peer-to-peer
group messaging system where each participant runs a node, contributes
to message propagation, and takes part in governance autonomously.
Group membership changes are decided collectively through a lightweight
partially synchronous, fault-tolerant consensus protocol without a centralized identity.
This design enables truly democratic group communication and is well-suited
for use cases like activist collectives, research collaborations, DAOs, support groups,
and decentralized social platforms.
## Format Specification
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document
are to be interpreted as described in [2119](https://www.ietf.org/rfc/rfc2119.txt).
### Assumptions
- The nodes in the P2P network can discover other nodes, or will connect to other nodes when subscribing to the same topic in a gossipsub.
- We MAY have non-reliable (silent) nodes.
- We MUST have a consensus that is lightweight, scalable and finalized in a specific time.
## Roles
The three roles used in de-MLS are as follows:
- `node`: Nodes are participants in the network that are not currently members
of any secure group messaging session but remain available as potential candidates for group membership.
- `member`: Members are special nodes in the secure group messaging who
obtain the current group key of the secure group messaging session.
Each member is assigned a unique identity represented as a 20-byte value named `member id`.
- `steward`: Stewards are special and transparent members in the secure group
messaging who organize the changes by releasing commit messages upon the voted proposals.
There are two special subsets of steward as epoch and backup steward,
which are defined in the section de-MLS Objects.
## MLS Background
de-MLS is built on an MLS backend, so the MLS services and other MLS components
are taken from the original [MLS specification](https://datatracker.ietf.org/doc/rfc9420/), with or without modifications.
### MLS Services
MLS relies on two services: the authentication service (AS) and the delivery service (DS).
The authentication service enables group members to authenticate the credentials presented by other group members.
The delivery service routes MLS messages among the nodes or
members in the protocol in the correct order and
manages the `keyPackage`s of the users, where a `keyPackage` is an object
that provides some public information about a user.
### MLS Objects
The following section presents the MLS objects and components used in this RFC:
`Epoch`: Time intervals that delimit changes to the group state defined by members,
section 3.4 in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
`MLS proposal message:` Members MUST receive the proposal message prior to the
corresponding commit message that initiates a new epoch with key changes,
in order to ensure the intended security properties, section 12.1 in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
Here, the add and remove proposals are used.
`Application message`: This message type is used for arbitrary encrypted communication between group members.
[MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) restricts it: if there is a pending proposal,
application messages should be held back.
Note that since MLS is server-based, this delay between proposal and commit messages is very small.
`Commit message:` After members receive the proposals regarding group changes,
the committer, who may be any member of the group as specified in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/),
generates the necessary key material for the next epoch, including the appropriate welcome messages
for new joiners and new entropy for removed members. In this RFC, committers MUST be stewards only.
### de-MLS Objects
This section presents the de-MLS objects:
`Voting proposal`: Similar to MLS proposals, but processed only if approved through a voting process.
They function as application messages in the MLS group,
allowing the steward to collect them without halting the protocol.
There are three types of `voting proposal` according to the type of consensus, as shown in the Consensus Types section:
`commit proposal`, `steward election proposal` and `emergency criteria proposal`.
`Epoch steward`: The steward assigned to commit in `epoch E` according to the steward list.
Holds the primary responsibility for creating commit in that epoch.
`Backup steward`: The steward next in line after the `epoch steward` on the `steward list` in `epoch E`.
Only becomes active if the `epoch steward` is malicious or fails,
in which case it completes the commitment phase.
If unused in `epoch E`, it automatically becomes the `epoch steward` in `epoch E+1`.
`Steward list`: It is an ordered list that contains the `member id`s of authorized stewards.
Each steward in the list becomes primarily responsible for creating the commit message when its turn arrives,
according to this order, for each epoch.
For example, suppose there are two stewards in the list: `steward A` first and `steward B` second.
`steward A` is responsible for creating the commit message for the first epoch,
and `steward B` for the second.
Since the `epoch steward` is the primary committer for an epoch,
it holds the main responsibility for producing the commit.
However, other stewards MAY also generate a commit within the same epoch to preserve liveness
in case the epoch steward is inactive or slow.
Duplicate commits are not re-applied and only the single valid commit for the epoch is accepted by the group,
as described in the section on filtering proposals against multiple committing.
Therefore, if the epoch steward turns out to be malicious, the `backup steward` takes over the committing.
Lastly, the size of the list is named `sn`, which also gives the epoch interval for steward list determination.
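The rotation rule above can be sketched as simple modular indexing over the `steward list` (an illustrative Python sketch; `sn` and epoch numbering follow the definitions above):

```python
def epoch_steward(steward_list, epoch):
    """Steward primarily responsible for committing in the given epoch."""
    sn = len(steward_list)          # sn: size of the steward list
    return steward_list[epoch % sn]

def backup_steward(steward_list, epoch):
    """Next steward in line; takes over if the epoch steward fails.

    If unused in epoch E, modular rotation makes it the epoch steward
    in epoch E+1, matching the definition above.
    """
    sn = len(steward_list)
    return steward_list[(epoch + 1) % sn]
```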
## Flow
General flow is as follows:
- A steward initializes a group just once, and then sends out Group Announcements (GA) periodically.
- Meanwhile, each `node` creates and sends its `credential`, which includes a `keyPackage`.
- Each `member` creates `voting proposals` and sends them to the MLS group during `epoch E`.
- Meanwhile, the `steward` collects finalized `voting proposals` from the MLS group, converts them into
`MLS proposals`, then sends them with the corresponding `commit messages`.
- Eventually, with the commit messages, all members start the next `epoch E+1`.
## Creating Voting Proposal
A `member` MAY initialize the voting with the proposal payload
which is implemented using [protocol buffers v3](https://protobuf.dev/) as follows:
```protobuf
syntax = "proto3";
message Proposal {
string name = 10; // Proposal name
string payload = 11; // Describes what the proposal is voting for
int32 proposal_id = 12; // Unique identifier of the proposal
bytes proposal_owner = 13; // Public key of the creator
repeated Vote votes = 14; // Vote list in the proposal
int32 expected_voters_count = 15; // Maximum number of distinct voters
int32 round = 16; // Voting round number
int64 timestamp = 17; // Creation time of proposal
int64 expiration_time = 18; // Time interval during which the proposal is active
bool liveness_criteria_yes = 19; // How silent peers' votes are handled
}
```
```protobuf
message Vote {
int32 vote_id = 20; // Unique identifier of the vote
bytes vote_owner = 21; // Voter's public key
int64 timestamp = 22; // Time when the vote was cast
bool vote = 23; // Vote bool value (true/false)
bytes parent_hash = 24; // Hash of previous owner's Vote
bytes received_hash = 25; // Hash of previous received Vote
bytes vote_hash = 26; // Hash of all previously defined fields in Vote
bytes signature = 27; // Signature of vote_hash
}
```
The voting proposal MAY include adding a `node` or removing a `member`.
After the `member` creates the voting proposal,
it is emitted to the network via the MLS `Application message` with a lightweight,
epoch based voting such as [hashgraphlike consensus.](https://github.com/vacp2p/rfc-index/blob/consensus-hashgraph-like/vac/raw/consensus-hashgraphlike.md)
This consensus result MUST be finalized within the epoch as YES or NO.
If the voting result is YES, the voting proposal is converted into
an MLS proposal by the `steward`, followed by a commit message that starts the new epoch.
## Creating welcome message
When an `MLS proposal message` is created by the `steward`,
a `commit message` SHOULD follow to the members,
as in section 12.4 of [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
In order for the new `member` joining the group to synchronize with the current members
who received the `commit message`,
the `steward` sends a welcome message to the node as the new `member`,
as in section 12.4.3.1. [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
## Single steward
A naive way to create decentralized secure group messaging is to have a single transparent `steward`
who only applies the changes resulting from the voting.
This mostly matches the general flow and is specified in the voting proposal and welcome message creation sections.
1. A single `steward` initializes a group once, with group parameters
as in section 8.1, Group Context, in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
2. The `steward` creates a group announcement (GA) according to the previous step and
broadcasts it to the whole network periodically. The GA message is visible to all `nodes` in the network.
3. Each `node` that wants to become a `member` needs to obtain this announcement and create a `credential`
that includes a `keyPackage`, as specified in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) section 10.
4. The `node` sends its `KeyPackage` in plaintext, with its signature, to the current `steward` public key
announced in the welcome topic. This step is crucial for security, ensuring that malicious nodes/stewards
cannot use others' `KeyPackages`.
It also provides flexibility for liveness in multi-steward settings,
allowing more than one steward to obtain `KeyPackages` to commit.
5. The `steward` aggregates all `KeyPackages` and utilizes them to provision group additions for new members,
based on the outcome of the voting process.
6. Any `member` MAY start to create `voting proposals` for adding or removing users,
and present them for voting in the MLS group as application messages.
However, unlimited use of `voting proposals` within the group may be misused by
malicious or overly active members.
Therefore, an application-level constraint can be introduced to limit the number
or frequency of proposals initiated by each member to prevent spam or abuse.
7. Meanwhile, the `steward` collects finalized `voting proposals` within epoch `E`
that have received affirmative votes from members via application messages.
Otherwise, the `steward` discards proposals that did not receive a majority of "YES" votes.
Since voting proposals are transmitted as application messages, omitting them does not affect
the protocol's correctness or consistency.
8. The `steward` converts all approved `voting proposals` into
corresponding `MLS proposals` and `commit message`, and
transmits both in a single operation as in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) section 12.4,
including welcome messages for the new members.
Therefore, the `commit message` ends the previous epoch and creates a new one.
9. The `members` apply the incoming `commit message` after checking the signatures and `voting proposals`,
and synchronize with the upcoming epoch.
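The steward's core duty in steps 7 and 8 can be sketched as follows; the vote-count fields and the simple majority rule here are illustrative assumptions, not part of the wire format:

```python
# Hypothetical sketch of the steward's epoch duty: keep only voting proposals
# that received a majority of "YES" votes; the rest are safely discarded,
# since voting proposals are application messages.
def finalize_epoch_proposals(voting_proposals):
    """voting_proposals: list of dicts with illustrative 'id', 'yes', 'no' fields."""
    approved = [p for p in voting_proposals if p["yes"] > p["no"]]
    # Approved proposals would then be converted into MLS proposals plus a
    # single commit message that ends the current epoch (RFC 9420, 12.4).
    return approved

votes = [
    {"id": "add-alice", "yes": 5, "no": 1},
    {"id": "remove-bob", "yes": 2, "no": 4},
]
assert [p["id"] for p in finalize_epoch_proposals(votes)] == ["add-alice"]
```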
## Multi stewards
Decentralization has already been achieved in the previous section.
However, to improve availability and ensure censorship resistance,
the single steward protocol is extended to a multi steward architecture.
In this design, each epoch is coordinated by a designated steward,
operating under the same protocol as the single steward model.
Thus, the multi steward approach primarily defines how steward roles
rotate across epochs while preserving the underlying structure and logic of the original protocol.
Two variants of the multi steward design are introduced to address different system requirements.
### Consensus Types
Consensus is agnostic to its payload; therefore, it can be used for various purposes.
Note that each message for the consensus of proposals is an `application message`, as in the MLS object section.
It is used in three ways as follows:
1. `Commit proposal`: the proposal instance specified in the Creating Voting Proposal section;
its `Proposal.payload` MUST contain the commit request from `members`.
Any member MAY create this proposal in any epoch, and the `epoch steward` MUST collect and commit the proposals voted YES.
This is the only proposal type common to both single steward and multi steward designs.
2. `Steward election proposal`: This is the process that finalizes the `steward list`,
which sets and orders the stewards responsible for creating commits, with a list size within the range (`sn_min`, `sn_max`).
The validity of the chosen `steward list` ends when the last steward in the list (the one at the final index) completes its commit.
At that point, a new `steward election proposal` MUST be initiated again by any member during the corresponding epoch.
The `Proposal.payload` field MUST represent the ordered identities of the proposed stewards.
Each steward election proposal MUST be verified and finalized through the consensus process
so that members can identify which steward will be responsible in each epoch
and detect any unauthorized steward commits.
3. `Emergency criteria proposal`: If there is a malicious member or steward,
this event MUST be voted on to finalize it.
If this returns YES, the next epoch MUST include the removal of the member or steward.
In a specific case where a steward is removed from the group, causing the total number of stewards to fall below `sn_min`,
it is required to repeat the `steward election proposal`.
`Proposal.payload` MUST consist of the evidence of the dishonesty as described in the Steward violation list,
and the identifier of the malicious member or steward.
This proposal can be created by any member in any epoch.
The order of consensus proposal messages is important for achieving a consistent result.
Therefore, messages MUST be prioritized by type in the following order, from highest to lowest priority:
- `Emergency criteria proposal`
- `Steward election proposal`
- `Commit proposal`
This means that if a higher-priority consensus proposal is present in the network,
lower-priority messages MUST be withheld from transmission until the higher-priority proposals have been finalized.
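The priority rule above can be sketched as follows; the type labels are illustrative shorthand:

```python
# Lower number = higher priority, per the MUST-level ordering above.
PRIORITY = {"emergency_criteria": 0, "steward_election": 1, "commit": 2}

def transmittable(pending):
    """Return only pending proposals of the highest priority present;
    lower-priority messages are withheld until these have been finalized."""
    top = min(PRIORITY[p["type"]] for p in pending)
    return [p for p in pending if PRIORITY[p["type"]] == top]

pending = [{"type": "commit", "id": 1}, {"type": "steward_election", "id": 2}]
assert [p["id"] for p in transmittable(pending)] == [2]
```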
### Steward list creation
The `steward list`, consisting of steward nominees who become actual stewards if the `steward election proposal` is finalized with YES,
is arbitrarily chosen from the `members` and OPTIONALLY adjusted depending on the needs of the implementation.
The `steward list` size, defined by the minimum `sn_min` and maximum `sn_max` bounds,
is determined at the time of group creation.
The `sn_min` requirement is applied only when the total number of members exceeds `sn_min`;
if the number of available members falls below this threshold,
the list size automatically adjusts to include all existing members.
The actual size of the list MAY vary within this range as `sn`, with the minimum value being at least 1.
The index of each slot indicates the epoch, and the value at that index indicates a `member id`.
The next-in-line steward for epoch `E`, at index E, is named the `epoch steward`.
The subsequent steward in epoch `E` is named the `backup steward`.
For example, assume the steward list is (S3, S2, S1). If in the previous epoch the roles were
(`backup steward`: S2, `epoch steward`: S1), then in the next epoch they become
(`backup steward`: S3, `epoch steward`: S2) by shifting.
If the `epoch steward` is honest, the `backup steward` is not involved in the process during the epoch,
and the `backup steward` becomes the `epoch steward` in epoch `E+1`.
If the `epoch steward` is malicious, the `backup steward` takes over the commitment phase in epoch `E`,
and the next steward in the list becomes the `backup steward` for epoch `E`.
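The shifting example above can be illustrated as follows, assuming the list is consumed from its last index toward the first:

```python
# Role rotation for a fixed steward list, e.g. ("S3", "S2", "S1"):
# the epoch steward moves one index toward the front each epoch.
def roles(steward_list, step):
    """step 0 is the first epoch served by this list."""
    i = len(steward_list) - 1 - step           # index of the epoch steward
    return {
        "epoch_steward": steward_list[i],
        "backup_steward": steward_list[i - 1] if i > 0 else None,
    }

assert roles(("S3", "S2", "S1"), 0) == {"epoch_steward": "S1", "backup_steward": "S2"}
assert roles(("S3", "S2", "S1"), 1) == {"epoch_steward": "S2", "backup_steward": "S3"}
```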
Liveness criteria:
Once the active `steward list` has completed its assigned epochs,
members MUST proceed to elect the next set of stewards
(which MAY include some or all of the previous members).
This election is conducted through a type 2 consensus procedure, `steward election proposal`.
A `Steward election proposal` is considered valid only if the resulting `steward list`
is produced through a deterministic process that ensures an unbiased distribution of steward assignments,
since allowing bias could enable a malicious participant to manipulate the list
and retain control within a favored group for multiple epochs.
The list MUST consist of at least `sn_min` members, including retained previous stewards,
sorted according to the ascending value of `SHA256(epoch E || member id || group id)`,
where `epoch E` is the epoch in which the election proposal is initiated,
and `group id` for shuffling the list across the different groups.
Any proposal with a list that does not adhere to this generation method MUST be rejected by all members.
We assume there are no recurring entries in `SHA256(epoch E || member id || group id)`:
since the `member id` values are unique, the SHA256 outputs are unique,
which prevents ties when sorting.
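The deterministic list generation can be sketched as below; the exact byte encoding of the `||` concatenation is an assumption that implementations would need to agree on:

```python
import hashlib

def steward_list(member_ids, epoch, group_id):
    """Sort members by ascending SHA256(epoch E || member id || group id)."""
    def key(member_id):
        return hashlib.sha256(f"{epoch}{member_id}{group_id}".encode()).digest()
    return sorted(member_ids, key=key)

# Every member derives the same list, so a biased proposal is detectable:
a = steward_list(["alice", "bob", "carol"], epoch=7, group_id="g1")
b = steward_list(["carol", "alice", "bob"], epoch=7, group_id="g1")
assert a == b
```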
### Multi steward with big consensuses
In this model, all group modifications, such as adding or removing members,
must be approved through consensus by all participants,
including the steward assigned for `epoch E`.
A configuration with multiple stewards operating under a shared consensus protocol offers
increased decentralization and stronger protection against censorship.
However, this benefit comes with reduced operational efficiency.
The model is therefore best suited for small groups that value
decentralization and censorship resistance more than performance.
To create a multi steward with a big consensus,
the group is initialized with a single steward, as follows:
1. The steward initializes the group with the config file.
This config file MUST contain (`sn_min`,`sn_max`) as the `steward list` size range.
2. The steward adds members in a centralized way until the number of members reaches `sn_min`.
Then, members propose lists of size `sn` by voting proposal,
as a consensus among all members as mentioned in consensus type 2, subject to the check that
the size of the proposed list `sn` is in the interval (`sn_min`, `sn_max`).
Note that if the total number of members is below `sn_min`,
then the steward list size MUST be equal to the total member count.
3. After the voting proposal produces a `steward list`,
group changes are committed as specified in the single steward section,
with one difference: members also check that the committing steward is the `epoch steward` or `backup steward`;
otherwise, anyone can create an `emergency criteria proposal`.
4. If the `epoch steward` violates the change process as described in the Steward violation list section,
one of the members MUST initialize the `emergency criteria proposal` to remove the malicious steward.
Then the `backup steward` fulfills the epoch by committing again correctly.
A large consensus group provides better decentralization, but it requires significant coordination,
which MAY not be suitable for groups with more than 1000 members.
### Multi steward with small consensuses
The small consensus model offers improved efficiency with a trade-off in decentralization.
In this design, group changes require consensus only among the stewards, rather than all members.
Regular members participate by periodically selecting the stewards by `steward election proposal`
but do not take part in commit decision by `commit proposal`.
This structure enables faster coordination since consensus is achieved within a smaller group of stewards.
It is particularly suitable for large user groups, where involving every member in each decision would be impractical.
The flow is similar to the big consensus, including `steward list` finalization by consensus among all members;
the only difference is that commit messages require a `commit proposal` only among the stewards.
## Filtering proposals against multiple commits
Since stewards are allowed to produce a commit even when they are not the designated `epoch steward`,
multiple commits may appear within the same epoch, often reflecting recurring versions of the same proposal.
To ensure a consistent outcome, the valid commit for the epoch SHOULD be selected as the one derived
from the longest proposal chain, ordered by the ascending value of each proposal as `SHA256(proposal)`.
All other cases, such as invalid commits or commits based on proposals that were not approved through voting,
can be easily detected and discarded by the members.
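A minimal sketch of the selection rule, assuming each candidate commit carries the proposal chain it was derived from (represented here as raw bytes):

```python
import hashlib

def canonical_order(proposals):
    """Order proposals within a chain by ascending SHA256(proposal)."""
    return sorted(proposals, key=lambda p: hashlib.sha256(p).digest())

def select_valid_chain(candidate_chains):
    """Pick the longest proposal chain; its commit is the valid one."""
    return canonical_order(max(candidate_chains, key=len))

chains = [[b"add-alice"], [b"add-alice", b"remove-bob"]]
assert len(select_valid_chain(chains)) == 2
```

Tie-breaking between equal-length chains is not specified above; `max` here simply keeps the first one encountered.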
## Steward violation list
A steward's activity is called a violation if the action is one or more of the following:
1. Broken commit: The steward releases a different commit message from the voted `commit proposal`.
This activity is identified by the `members` since the [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) provides the methods
that members can use to identify the broken commit messages that are possible in a few situations,
such as commit and proposal incompatibility. Specifically, the broken commit can arise as follows:
1. The commit belongs to an earlier epoch.
2. The commit message does not correspond to the latest epoch.
3. The commit is incompatible with the previous epoch's `MLS proposal`.
2. Broken MLS proposal: The steward prepares a different `MLS proposal` for the corresponding `voting proposal`.
This activity is identified by the `members` since both `MLS proposal` and `voting proposal` are visible
and can be identified by checking that the hashes of `Proposal.payload` and `MLSProposal.payload` are the same, as in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) section 12.1. Proposals.
3. Censorship and inactivity: The situation where there is a voting proposal that is visible for every member,
and the Steward does not provide an MLS proposal and commit.
This activity is again identified by the `members`, since `voting proposals` are visible to every member in the group;
therefore each member can verify that there is no `MLS proposal` corresponding to `voting proposal`.
## Security Considerations
In this section, the security considerations and the corresponding de-MLS assurances are described.
1. Malicious Steward: A malicious steward can act maliciously,
as in the Steward violation list section.
Therefore, de-MLS enforces that any steward only follows the protocol under the consensus order
and commits only in the absence of an applicable emergency criteria proposal.
2. Malicious Member: A member is marked as malicious
only when it acts by releasing a commit message.
3. Steward list election bias: Although SHA256 is used together with two global variables
to shuffle stewards in a deterministic and verifiable manner,
this approach only minimizes election bias; it does not completely eliminate it.
This design choice is intentional, in order to preserve the efficiency advantages provided by the MLS mechanism.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/)
### References
- [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/)
- [Hashgraphlike Consensus](https://github.com/vacp2p/rfc-index/blob/consensus-hashgraph-like/vac/raw/consensus-hashgraphlike.md)
- [vacp2p/de-mls](https://github.com/vacp2p/de-mls)
# SDS
| Field | Value |
| --- | --- |
| Name | Scalable Data Sync protocol for distributed logs |
| Status | raw |
| Editor | Hanno Cornelius <hanno@status.im> |
| Contributors | Akhil Peddireddy <akhil@status.im> |
## Abstract
This specification introduces the Scalable Data Sync (SDS) protocol
to achieve end-to-end reliability
when consolidating distributed logs in a decentralized manner.
The protocol is designed for a peer-to-peer (p2p) topology
where an append-only log is maintained by each member of a group of nodes
who may individually append new entries to their local log at any time and
is interested in merging new entries from other nodes in real-time or close to real-time
while maintaining a consistent order.
The outcome of the log consolidation procedure is
that all nodes in the group eventually reflect in their own logs
the same entries in the same order.
The protocol aims to scale to very large groups.
## Motivation
A common application that fits this model is a p2p group chat (or group communication),
where the participants act as log nodes
and the group conversation is modelled as the consolidated logs
maintained on each node.
The problem of end-to-end reliability can then be stated as
ensuring that all participants eventually see the same sequence of messages
in the same causal order,
despite the challenges of network latency, message loss,
and scalability present in any communications transport layer.
The rest of this document will assume the terminology of a group communication:
log nodes being the _participants_ in the group chat
and the logged entries being the _messages_ exchanged between participants.
## Design Assumptions
We make the following simplifying assumptions for a proposed reliability protocol:
* **Broadcast routing:**
Messages are broadcast disseminated by the underlying transport.
The selected transport takes care of routing messages
to all participants of the communication.
* **Store nodes:**
There are high-availability caches (a.k.a. Store nodes)
from which missed messages can be retrieved.
These caches maintain the full history of all messages that have been broadcast.
This is an optional element in the protocol design,
but improves scalability by reducing direct interactions between participants.
* **Message ID:**
Each message has a globally unique, immutable ID (or hash).
Messages can be requested from the high-availability caches or
other participants using the corresponding message ID.
* **Participant ID:**
Each participant has a globally unique, immutable ID
visible to other participants in the communication.
* **Sender ID:**
The **Participant ID** of the original sender of a message,
often coupled with a **Message ID**.
## Wire protocol
The keywords “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”,
“SHOULD NOT”, “RECOMMENDED”, “MAY”, and
“OPTIONAL” in this document are to be interpreted as described in [2119](https://www.ietf.org/rfc/rfc2119.txt).
### Message
Messages MUST adhere to the following meta structure:
```protobuf
syntax = "proto3";
message HistoryEntry {
string message_id = 1; // Unique identifier of the SDS message, as defined in `Message`
optional bytes retrieval_hint = 2; // Optional information to help remote parties retrieve this SDS message; For example, A Waku deterministic message hash or routing payload hash
optional string sender_id = 3; // Participant ID of original message sender. Only populated if using optional SDS Repair extension
}
message Message {
string sender_id = 1; // Participant ID of the message sender
string message_id = 2; // Unique identifier of the message
string channel_id = 3; // Identifier of the channel to which the message belongs
optional uint64 lamport_timestamp = 10; // Logical timestamp for causal ordering in channel
repeated HistoryEntry causal_history = 11; // List of preceding message IDs that this message causally depends on. Generally 2 or 3 message IDs are included.
optional bytes bloom_filter = 12; // Bloom filter representing received message IDs in channel
repeated HistoryEntry repair_request = 13; // Capped list of history entries missing from sender's causal history. Only populated if using the optional SDS Repair extension.
optional bytes content = 20; // Actual content of the message
}
```
The sending participant MUST include its own globally unique identifier in the `sender_id` field.
In addition, it MUST include a globally unique identifier for the message in the `message_id` field,
likely based on a message hash.
The `channel_id` field MUST be set to the identifier of the channel of group communication
that is being synchronized.
For simple group communications without individual channels,
the `channel_id` SHOULD be set to `0`.
The `lamport_timestamp`, `causal_history` and
`bloom_filter` fields MUST be set according to the [protocol steps](#protocol-steps)
set out below.
These fields MAY be left unset in the case of [ephemeral messages](#ephemeral-messages).
The message `content` MAY be left empty for [periodic sync messages](#periodic-sync-message),
otherwise it MUST contain the application-level content.
> **_Note:_** Close readers may notice that,
outside of filtering messages originating from the sender itself,
the `sender_id` field is not used for much.
Its importance is expected to increase once a p2p retrieval mechanism is added to SDS,
as is planned for the protocol.
### Participant state
Each participant MUST maintain:
* A Lamport timestamp for each channel of communication,
initialized to current epoch time in millisecond resolution.
The Lamport timestamp is increased as described in the [protocol steps](#protocol-steps)
to maintain a logical ordering of events while staying close to the current epoch time.
This allows the messages from new joiners to be correctly ordered with other recent messages,
without these new participants first having to synchronize past messages to discover the current Lamport timestamp.
* A bloom filter for received message IDs per channel.
The bloom filter SHOULD be rolled over and
recomputed once it reaches a predefined capacity of message IDs.
Furthermore,
it SHOULD be designed to minimize false positives through an optimal selection of
size and hash functions.
* A buffer for unacknowledged outgoing messages
* A buffer for incoming messages with unmet causal dependencies
* A local log (or history) for each channel,
containing all message IDs in the communication channel,
ordered by Lamport timestamp.
Messages in the unacknowledged outgoing buffer can be in one of three states:
1. **Unacknowledged** - there has been no acknowledgement of message receipt
by any participant in the channel
2. **Possibly acknowledged** - there has been ambiguous indication that the message
has been _possibly_ received by at least one participant in the channel
3. **Acknowledged** - there has been sufficient indication that the message
has been received by at least some of the participants in the channel.
This state will also remove the message from the outgoing buffer.
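The per-channel participant state above can be sketched as a simple container; a plain `set` stands in for the bloom filter, which a real implementation would replace with a sized, multi-hash filter:

```python
import time
from dataclasses import dataclass, field

UNACKNOWLEDGED = "unacknowledged"
POSSIBLY_ACKNOWLEDGED = "possibly_acknowledged"

@dataclass
class ChannelState:
    # Lamport timestamp initialized to current epoch time in milliseconds
    lamport_timestamp: int = field(default_factory=lambda: int(time.time() * 1000))
    bloom_filter: set = field(default_factory=set)        # received message IDs
    outgoing_buffer: dict = field(default_factory=dict)   # message_id -> ack state
    incoming_buffer: list = field(default_factory=list)   # unmet causal deps
    local_log: list = field(default_factory=list)         # (lamport, message_id)

state = ChannelState()
assert state.lamport_timestamp > 0 and state.local_log == []
```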
### Protocol Steps
For each channel of communication,
participants MUST follow these protocol steps to populate and interpret
the `lamport_timestamp`, `causal_history` and `bloom_filter` fields.
#### Send Message
Before broadcasting a message:
* the participant MUST set its local Lamport timestamp
to the maximum between the current value + `1`
and the current epoch time in milliseconds.
In other words, the local Lamport timestamp is set to `max(timeNowInMs, current_lamport_timestamp + 1)`.
* the participant MUST include the increased Lamport timestamp in the message's `lamport_timestamp` field.
* the participant MUST determine the preceding few message IDs in the local history
and include these in an ordered list in the `causal_history` field.
The number of message IDs to include in the `causal_history` depends on the application.
We recommend a causal history of two message IDs.
* the participant MAY include a `retrieval_hint` in the `HistoryEntry`
for each message ID in the `causal_history` field.
This is an application-specific field to facilitate retrieval of messages,
e.g. from high-availability caches.
* the participant MUST include the current `bloom_filter`
state in the broadcast message.
After broadcasting a message,
the message MUST be added to the participant's buffer
of unacknowledged outgoing messages.
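The timestamp rule in the first bullet above can be sketched as:

```python
import time

def next_lamport(current_lamport, now_ms=None):
    """max(timeNowInMs, current_lamport + 1), per the Send Message step."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return max(now_ms, current_lamport + 1)

assert next_lamport(100, now_ms=50) == 101     # clock behind: logical step wins
assert next_lamport(100, now_ms=5000) == 5000  # clock ahead: adopt epoch time
```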
#### Receive Message
Upon receiving a message,
* the participant SHOULD ignore the message if it has a `sender_id` matching its own.
* the participant MAY deduplicate the message by comparing its `message_id` to previously received message IDs.
* the participant MUST [review the ACK status](#review-ack-status) of messages
in its unacknowledged outgoing buffer
using the received message's causal history and bloom filter.
* if the message has a populated `content` field,
the participant MUST include the received message ID in its local bloom filter.
* the participant MUST verify that all causal dependencies are met
for the received message.
Dependencies are met if the message IDs in the `causal_history` of the received message
appear in the local history of the receiving participant.
If all dependencies are met and the message has a populated `content` field,
the participant MUST [deliver the message](#deliver-message).
If dependencies are unmet,
the participant MUST add the message to the incoming buffer of messages
with unmet causal dependencies.
#### Deliver Message
Triggered by the [Receive Message](#receive-message) procedure.
If the received message's Lamport timestamp is greater than the participant's
local Lamport timestamp,
the participant MUST update its local Lamport timestamp to match the received message.
The participant MUST insert the message ID into its local log,
based on Lamport timestamp.
If one or more message IDs with the same Lamport timestamp already exists,
the participant MUST follow the [Resolve Conflicts](#resolve-conflicts) procedure.
#### Resolve Conflicts
Triggered by the [Deliver Message](#deliver-message) procedure.
The participant MUST order messages with the same Lamport timestamp
in ascending order of message ID.
If the message ID is implemented as a hash of the message,
this means the message with the lowest hash would precede
other messages with the same Lamport timestamp in the local log.
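Ordering the log by (Lamport timestamp, message ID) tuples realizes this conflict resolution directly:

```python
# Entries with equal Lamport timestamps are ordered by ascending message ID,
# so the message with the lowest ID (e.g. lowest hash) precedes the others.
entries = [(5, "bbb"), (5, "aaa"), (3, "zzz")]
local_log = sorted(entries)   # tuple order = (lamport_timestamp, message_id)
assert local_log == [(3, "zzz"), (5, "aaa"), (5, "bbb")]
```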
#### Review ACK Status
Triggered by the [Receive Message](#receive-message) procedure.
For each message in the unacknowledged outgoing buffer,
based on the received `bloom_filter` and `causal_history`:
* the participant MUST mark all messages in the received `causal_history` as **acknowledged**.
* the participant MUST mark all messages included in the `bloom_filter`
as **possibly acknowledged**.
If a message appears as **possibly acknowledged** in multiple received bloom filters,
the participant MAY mark it as acknowledged based on probabilistic grounds,
taking into account the bloom filter size and hash number.
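A sketch of the ACK review, with a plain `set` standing in for the bloom filter; the probabilistic promotion across multiple received filters is left out:

```python
def review_acks(outgoing_buffer, causal_history, bloom_filter):
    """outgoing_buffer: message_id -> ack state; mutated in place."""
    for message_id in list(outgoing_buffer):
        if message_id in causal_history:
            del outgoing_buffer[message_id]      # acknowledged: drop from buffer
        elif message_id in bloom_filter:         # bloom membership is probabilistic
            outgoing_buffer[message_id] = "possibly_acknowledged"

buffer = {"m1": "unacknowledged", "m2": "unacknowledged"}
review_acks(buffer, causal_history={"m1"}, bloom_filter={"m2"})
assert buffer == {"m2": "possibly_acknowledged"}
```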
#### Periodic Incoming Buffer Sweep
The participant MUST periodically check causal dependencies for each message
in the incoming buffer.
For each message in the incoming buffer:
* the participant MAY attempt to retrieve missing dependencies from the Store node
(high-availability cache) or other peers.
It MAY use the application-specific `retrieval_hint` in the `HistoryEntry` to facilitate retrieval.
* if all dependencies of a message are met,
the participant MUST proceed to [deliver the message](#deliver-message).
If a message's causal dependencies have failed to be met
after a predetermined amount of time,
the participant MAY mark them as **irretrievably lost**.
#### Periodic Outgoing Buffer Sweep
The participant MUST rebroadcast **unacknowledged** outgoing messages
after a set period.
The participant SHOULD use distinct resend periods for **unacknowledged** and
**possibly acknowledged** messages,
prioritizing **unacknowledged** messages.
#### Periodic Sync Message
For each channel of communication,
participants SHOULD periodically send sync messages to maintain state.
These sync messages:
* MUST be sent with empty content
* MUST include a Lamport timestamp increased to `max(timeNowInMs, current_lamport_timestamp + 1)`,
where `timeNowInMs` is the current epoch time in milliseconds.
* MUST include causal history and bloom filter according to regular message rules
* MUST NOT be added to the unacknowledged outgoing buffer
* MUST NOT be included in causal histories of subsequent messages
* MUST NOT be included in bloom filters
* MUST NOT be added to the local log
Since sync messages are not persisted,
they MAY have non-unique message IDs without impacting the protocol.
To avoid network activity bursts in large groups,
a participant MAY choose to only send periodic sync messages
if no other messages have been broadcast in the channel after a random backoff period.
Participants MUST process the causal history and bloom filter of these sync messages
following the same steps as regular messages,
but MUST NOT persist the sync messages themselves.
#### Ephemeral Messages
Participants MAY choose to send short-lived messages for which no synchronization
or reliability is required.
These messages are termed _ephemeral_.
Ephemeral messages SHOULD be sent with `lamport_timestamp`, `causal_history`, and
`bloom_filter` unset.
Ephemeral messages SHOULD NOT be added to the unacknowledged outgoing buffer
after broadcast.
Upon reception,
ephemeral messages SHOULD be delivered immediately without buffering for causal dependencies
or including in the local log.
### SDS Repair (SDS-R)
SDS Repair (SDS-R) is an optional extension module for SDS,
allowing participants in a communication to collectively repair any gaps in causal history (missing messages)
preferably over a limited time window.
Since SDS-R acts as coordinated rebroadcasting of missing messages,
which involves all participants of the communication,
it is most appropriate in a limited use case for repairing relatively recent missed dependencies.
It is not meant to replace mechanisms for long-term consistency,
such as peer-to-peer syncing or the use of a high-availability centralised cache (Store node).
#### SDS-R message fields
SDS-R adds the following fields to SDS messages:
* `sender_id` in `HistoryEntry`:
the original message sender's participant ID.
This is used to determine the group of participants who will respond to a repair request.
* `repair_request` in `Message`:
a capped list of history entries missing for the message sender
and for which it's requesting a repair.
#### SDS-R participant state
SDS-R adds the following to each participant state:
* Outgoing **repair request buffer**:
a list of locally missing `HistoryEntry`s
each mapped to a future request timestamp, `T_req`,
after which this participant will request a repair if at that point the missing dependency has not been repaired yet.
`T_req` is computed as a pseudorandom backoff from the timestamp when the dependency was detected missing.
[Determining `T_req`](#determine-t_req) is described below.
We RECOMMEND that the outgoing repair request buffer be chronologically ordered in ascending order of `T_req`.
* Incoming **repair request buffer**:
a list of locally available `HistoryEntry`s
that were requested for repair by a remote participant
AND for which this participant might be an eligible responder,
each mapped to a future response timestamp, `T_resp`,
after which this participant will rebroadcast the corresponding requested `Message` if at that point no other participant had rebroadcast the `Message`.
`T_resp` is computed as a pseudorandom backoff from the timestamp when the repair was first requested.
[Determining `T_resp`](#determine-t_resp) is described below.
We describe below how a participant can [determine if they're an eligible responder](#determine-response-group) for a specific repair request.
* Augmented local history log:
for each message ID kept in the local log for which the participant could be a repair responder,
the full SDS `Message` must be cached rather than just the message ID,
in case this participant is called upon to rebroadcast the message.
We describe below how a participant can [determine if they're an eligible responder](#determine-response-group) for a specific message.
**_Note:_** The required state can likely be significantly reduced in future by simply requiring that a responding participant should _reconstruct_ the original `Message` when rebroadcasting, rather than the simpler, but heavier,
requirement of caching the entire received `Message` content in local history.
#### SDS-R global state
For a specific channel (that is, within a specific SDS-controlled communication)
the following SDS-R configuration state SHOULD be common for all participants in the conversation:
* `T_min`: the _minimum_ time period to wait before a missing causal entry can be repaired.
We RECOMMEND a value of at least 30 seconds.
* `T_max`: the _maximum_ time period over which missing causal entries can be repaired.
We RECOMMEND a value of between 120 and 600 seconds.
Furthermore, to avoid a broadcast storm with multiple participants responding to a repair request,
participants in a single channel MAY be divided into discrete response groups.
Participants will only respond to a repair request if they are in the response group for that request.
The global `num_response_groups` variable configures the number of response groups for this communication.
Its use is described below.
A reasonable default value for `num_response_groups` is one response group for every `128` participants.
In other words, if the (roughly) expected number of participants is expressed as `num_participants`, then
`num_response_groups = num_participants div 128 + 1`.
This means that if there are fewer than 128 participants in a communication,
they will all belong to the same response group.
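The default sizing rule reads as:

```python
def num_response_groups(num_participants):
    """One response group for every 128 (expected) participants."""
    return num_participants // 128 + 1

assert num_response_groups(100) == 1   # fewer than 128: one shared group
assert num_response_groups(300) == 3
```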
We RECOMMEND that the global state variables `T_min`, `T_max` and `num_response_groups`
be set _statically_ for a specific SDS-R application,
based on expected number of group participants and volume of traffic.
**_Note:_** Future versions of this protocol will recommend dynamic global SDS-R variables,
based on the current number of participants.
#### SDS-R send message
SDS-R adds the following steps when sending a message:
Before broadcasting a message,
* the participant SHOULD populate the `repair_request` field in the message
with _eligible_ entries from the outgoing repair request buffer.
An entry is eligible to be included in a `repair_request`
if its corresponding request timestamp, `T_req`, has expired (in other words,
`T_req <= current_time`).
The maximum number of repair request entries to include is up to the application.
We RECOMMEND that this quota be filled by the eligible entries from the outgoing repair request buffer with the lowest `T_req`.
We RECOMMEND a maximum of 3 entries.
If there are no eligible entries in the buffer,
this optional field MUST be left unset.
#### SDS-R receive message
On receiving a message,
* the participant MUST remove entries matching the received message ID from its _outgoing_ repair request buffer.
This ensures that the participant does not request repairs for dependencies that have now been met.
* the participant MUST remove entries matching the received message ID from its _incoming_ repair request buffer.
This ensures that the participant does not respond to repair requests that another participant has already responded to.
* the participant SHOULD add any unmet causal dependencies to its outgoing repair request buffer against a unique `T_req` timestamp for that entry.
It MUST compute the `T_req` for each such HistoryEntry according to the steps outlined in [_Determine T_req_](#determine-t_req).
* for each item in the `repair_request` field:
* the participant MUST remove entries matching the repair message ID from its own outgoing repair request buffer.
This limits the number of participants that will request a common missing dependency.
* if the participant has the requested `Message` in its local history _and_ is an eligible responder for the repair request,
it SHOULD add the request to its incoming repair request buffer against a unique `T_resp` timestamp for that entry.
It MUST compute the `T_resp` for each such repair request according to the steps outlined in [_Determine T_resp_](#determine-t_resp).
It MUST determine if it's an eligible responder for a repair request according to the steps outlined in [_Determine response group_](#determine-response-group).
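The receive-side steps above can be sketched as one procedure. This is a non-normative illustration: the buffers are represented as dicts keyed by message ID, the `msg` dict shape and the `t_req_fn`/`t_resp_fn`/`in_group_fn` callbacks stand in for the procedures defined in the following sections, and all names are illustrative:

```python
def on_receive(msg, outgoing, incoming, history, t_req_fn, t_resp_fn, in_group_fn):
    mid = msg["message_id"]
    # Dependency now met: stop requesting this message ourselves.
    outgoing.pop(mid, None)
    # Another participant already repaired it: cancel our pending response.
    incoming.pop(mid, None)
    history[mid] = msg
    # Schedule repair requests for still-missing causal dependencies.
    for dep in msg.get("causal_history", []):
        if dep not in history:
            outgoing.setdefault(dep, t_req_fn(dep))
    # Process piggybacked repair requests.
    for req in msg.get("repair_request", []):
        outgoing.pop(req, None)  # limit how many peers request the same message
        if req in history and in_group_fn(req):
            incoming.setdefault(req, t_resp_fn(req))
```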
#### Determine T_req
A participant determines the repair request timestamp, `T_req`,
for a missing `HistoryEntry` as follows:
```text
T_req = current_time + hash(participant_id, message_id) % (T_max - T_min) + T_min
```
where `current_time` is the current timestamp,
`participant_id` is the participant's _own_ participant ID
(not the `sender_id` in the missing `HistoryEntry`),
`message_id` is the missing `HistoryEntry`'s message ID,
and `T_min` and `T_max` are as set out in [SDS-R global state](#sds-r-global-state).
This allows `T_req` to be pseudorandomly and linearly distributed as a backoff of between `T_min` and `T_max` from current time.
> **_Note:_** placing `T_req` values on an exponential backoff curve will likely be more appropriate and is left for a future improvement.
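A minimal sketch of the `T_req` computation follows. SHA-256 is an illustrative choice here; the specification does not mandate a particular `hash` function:

```python
import hashlib
import time

def compute_t_req(participant_id: bytes, message_id: bytes,
                  t_min: int, t_max: int, current_time=None) -> float:
    # T_req = current_time + hash(participant_id, message_id) % (T_max - T_min) + T_min
    if current_time is None:
        current_time = time.time()
    h = int.from_bytes(hashlib.sha256(participant_id + message_id).digest(), "big")
    return current_time + h % (t_max - t_min) + t_min
```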
#### Determine T_resp
A participant determines the repair response timestamp, `T_resp`,
for a `HistoryEntry` that it could repair as follows:
```text
distance = hash(participant_id) XOR hash(sender_id)
T_resp = current_time + distance*hash(message_id) % T_max
```
where `current_time` is the current timestamp,
`participant_id` is the participant's _own_ (local) participant ID,
`sender_id` is the requested `HistoryEntry` sender ID,
`message_id` is the requested `HistoryEntry` message ID,
and `T_max` is as set out in [SDS-R global state](#sds-r-global-state).
We first calculate the logical `distance` between the local `participant_id` and
the original `sender_id`.
If this participant is the original sender, the `distance` will be `0`.
It should then be clear that the original sender will have a response backoff time of `0`,
making it the most likely responder.
The `T_resp` values for other eligible participants will be pseudorandomly and
linearly distributed as a backoff of up to `T_max` from current time.
> **_Note:_** placing `T_resp` values on an exponential backoff curve will likely be more appropriate and
is left for a future improvement.
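A minimal sketch of the `T_resp` computation follows, using SHA-256 as an illustrative `hash` (the specification does not mandate one):

```python
import hashlib
import time

def compute_t_resp(participant_id: bytes, sender_id: bytes, message_id: bytes,
                   t_max: int, current_time=None) -> float:
    # distance = hash(participant_id) XOR hash(sender_id)
    # T_resp = current_time + distance*hash(message_id) % T_max
    if current_time is None:
        current_time = time.time()
    def h(data: bytes) -> int:
        return int.from_bytes(hashlib.sha256(data).digest(), "big")
    distance = h(participant_id) ^ h(sender_id)
    return current_time + (distance * h(message_id)) % t_max
```

Note that when the local participant is the original sender, `distance` is `0` and the backoff collapses to `current_time`, making the original sender the earliest responder.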
#### Determine response group
Given a message with `sender_id` and `message_id`,
a participant with `participant_id` is in the response group for that message if
```text
hash(participant_id, message_id) % num_response_groups == hash(sender_id, message_id) % num_response_groups
```
where `num_response_groups` is as set out in [SDS-R global state](#sds-r-global-state).
This ensures that a participant will always be in the response group for its own published messages.
It also allows participants to determine immediately on first reception of a message or
a history entry if they are in the associated response group.
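The membership check above can be sketched as follows, again using SHA-256 as an illustrative `hash`:

```python
import hashlib

def in_response_group(participant_id: bytes, sender_id: bytes,
                      message_id: bytes, num_response_groups: int) -> bool:
    # hash(participant_id, message_id) % n == hash(sender_id, message_id) % n
    def h(data: bytes) -> int:
        return int.from_bytes(hashlib.sha256(data).digest(), "big")
    return (h(participant_id + message_id) % num_response_groups
            == h(sender_id + message_id) % num_response_groups)
```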
#### SDS-R incoming repair request buffer sweep
An SDS-R participant MUST periodically check whether any requests in the **incoming** repair request buffer are due for a response.
For each item in the buffer,
the participant SHOULD broadcast the corresponding `Message` from local history
if its corresponding response timestamp, `T_resp`, has expired
(in other words, `T_resp <= current_time`).
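The sweep can be sketched as follows. This is a non-normative illustration: `incoming` maps message ID to `T_resp`, and `broadcast` is an assumed send callback:

```python
import time

def sweep_incoming(incoming, history, broadcast, current_time=None):
    if current_time is None:
        current_time = time.time()
    # Collect due entries first to avoid mutating the dict while iterating.
    due = [mid for mid, t_resp in incoming.items() if t_resp <= current_time]
    for mid in due:
        if mid in history:
            broadcast(history[mid])
        del incoming[mid]  # drop the entry once handled
```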
#### SDS-R Periodic Sync Message
If the participant is due to send a periodic sync message,
it SHOULD send the message according to [SDS-R send message](#sds-r-send-message)
if there are any eligible items in the outgoing repair request buffer,
regardless of whether other participants have also recently broadcast a Periodic Sync message.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
@@ -1,3 +0,0 @@
# Waku Informational RFCs
Informational Waku documents covering guidance, examples, and supporting material.
@@ -1,3 +0,0 @@
# Waku Standards - Application
Application-layer specifications built on top of Waku core protocols.
@@ -1,188 +0,0 @@
# 31/WAKU2-ENR
| Field | Value |
| --- | --- |
| Name | Waku v2 usage of ENR |
| Slug | 31 |
| Status | draft |
| Editor | Franck Royer <franck@status.im> |
## Abstract
This specification describes the usage of the ENR (Ethereum Node Records)
format for [10/WAKU2](../10/waku2.md) purposes.
The ENR format is defined in [EIP-778](https://eips.ethereum.org/EIPS/eip-778) [[3]](#references).
This specification is an extension of EIP-778,
ENR used in Waku MUST adhere to both EIP-778 and 31/WAKU2-ENR.
## Motivation
EIP-1459, which makes use of ENR, has been implemented [[1]](#references) [[2]](#references) as a discovery protocol for Waku.
EIP-778 specifies a number of pre-defined keys.
However, the usage of these keys alone does not allow for certain transport capabilities to be encoded,
such as Websocket.
Currently, Waku nodes running in a browser only support websocket transport protocol.
Hence, new ENR keys need to be defined to allow for the encoding of transport protocols other than raw TCP.
### Usage of Multiaddr Format Rationale
One solution would be to define new keys such as `ws` to encode the websocket port of a node.
However, we expect new transport protocols, such as QUIC, to be added over time.
Hence, this would only provide a short-term solution before yet another key would need to be specified.
Moreover, secure websocket involves SSL certificates.
SSL certificates are only valid for a given domain and IP address,
so an ENR containing the following information:
- secure websocket port
- ipv4 fqdn
- ipv4 address
- ipv6 address
would carry some ambiguity: is the certificate securing the websocket port valid for the IPv4 FQDN,
the IPv4 address,
or the IPv6 address?
The [10/WAKU2](../10/waku2.md) protocol family is built on the [libp2p](https://github.com/libp2p/specs) protocol stack.
Hence, it uses [multiaddr](https://github.com/multiformats/multiaddr) to format network addresses.
Directly storing one or several multiaddresses in the ENR would fix the issues listed above:
- multiaddr is self-describing and supports addresses for any network protocol:
no new specification would be needed to support encoding other transport protocols in an ENR.
- multiaddr contains both the host and port information,
allowing the ambiguity previously described to be resolved.
## Wire Format
### `multiaddrs` ENR key
We define a `multiaddrs` key.
- The value MUST be a list of binary encoded multiaddrs, each prefixed by its size.
- The size of each multiaddr MUST be encoded as a big-endian unsigned 16-bit integer (2 bytes).
- The `secp256k1` value MUST be present on the record;
`secp256k1` is defined in [EIP-778](https://eips.ethereum.org/EIPS/eip-778) and
contains the compressed secp256k1 public key.
- The node's peer id SHOULD be deduced from the `secp256k1` value.
- The multiaddresses SHOULD NOT contain a peer id, except for circuit relay addresses.
- For raw TCP & UDP connection details,
[EIP-778](https://eips.ethereum.org/EIPS/eip-778) pre-defined keys SHOULD be used;
The keys `tcp`, `udp`, `ip` (and `tcp6`, `udp6`, `ip6` for IPv6)
are enough to convey all necessary information;
- To save space, `multiaddrs` key SHOULD only be used for connection details that cannot be represented using the [EIP-778](https://eips.ethereum.org/EIPS/eip-778) pre-defined keys.
- The 300 bytes size limit as defined by [EIP-778](https://eips.ethereum.org/EIPS/eip-778) still applies;
in practice, it is possible to encode about 3 multiaddresses in an ENR; more or
fewer could be encoded depending on the size of each multiaddress.
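The length-prefixed encoding described above can be sketched as follows; the function names are illustrative, and the inputs are assumed to already be binary-encoded multiaddrs:

```python
def encode_multiaddrs(multiaddrs):
    # Prefix each binary multiaddr with its size as a big-endian
    # unsigned 16-bit integer, then concatenate.
    out = b""
    for addr in multiaddrs:
        out += len(addr).to_bytes(2, "big") + addr
    return out

def decode_multiaddrs(value):
    # Inverse operation: read a 2-byte length, then that many bytes.
    addrs, i = [], 0
    while i < len(value):
        size = int.from_bytes(value[i:i + 2], "big")
        addrs.append(value[i + 2:i + 2 + size])
        i += 2 + size
    return addrs
```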
### Usage
#### Many connection types
Alice is a Waku node operator; she runs a node that supports inbound connections for the following protocols:
- TCP 10101 on `1.2.3.4`
- UDP 20202 on `1.2.3.4`
- TCP 30303 on `1234:5600:101:1::142`
- UDP 40404 on `1234:5600:101:1::142`
- Secure Websocket on `wss://example.com:443/`
- QUIC on `quic://quic.example.com:443/`
- A circuit relay address `/ip4/1.2.3.4/tcp/55555/p2p/QmRelay/p2p-circuit/p2p/QmAlice`
Alice SHOULD structure the ENR for her node as follows:
| key | value |
| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `tcp` | `10101` |
| `udp` | `20202` |
| `tcp6` | `30303` |
| `udp6` | `40404` |
| `ip` | `1.2.3.4` |
| `ip6` | `1234:5600:101:1::142` |
| `secp256k1` | Alice's compressed secp256k1 public key, 33 bytes |
| `multiaddrs` | `len1 \| /dns4/example.com/tcp/443/wss \| len2 \| /dns4/quic.example.com/tcp/443/quic \| len3 \| /ip4/1.2.3.4/tcp/55555/p2p/QmRelay` |
Where `multiaddrs`:
- `|` is the concatenation operator,
- `len1` is the length of `/dns4/example.com/tcp/443/wss` byte representation,
- `len2` is the length of `/dns4/quic.example.com/tcp/443/quic` byte representation.
- `len3` is the length of `/ip4/1.2.3.4/tcp/55555/p2p/QmRelay` byte representation.
Notice that the `/p2p-circuit` component is not stored; but,
since circuit relay addresses are the only ones containing a `p2p` component,
it is safe to assume that any address containing this component is a circuit relay address.
Decoding this type of multiaddress requires appending the `/p2p-circuit` component.
#### Raw TCP only
Bob is a node operator that runs a node that supports inbound connections for the following protocols:
- TCP 10101 on `1.2.3.4`
Bob SHOULD structure the ENR for his node as follows:
| key | value |
| ----------- | ----------------------------------------------- |
| `tcp` | `10101` |
| `ip` | `1.2.3.4` |
| `secp256k1` | Bob's compressed secp256k1 public key, 33 bytes |
As Bob's node's connection details can be represented with EIP-778's pre-defined keys only,
the `multiaddrs` key is not needed.
### Limitations
The only supported key type is `secp256k1`.
Support for other elliptic curve key types, such as `ed25519`, MAY be added in the future.
### `waku2` ENR key
We define a `waku2` field key:
- The value MUST be an 8-bit flag field,
where bits set to `1` indicate `true` and
bits set to `0` indicate `false` for the relevant flags.
- The flag values already defined are set out below,
with `bit 7` the most significant bit and `bit 0` the least significant bit.
| bit 7 | bit 6 | bit 5 | bit 4 | bit 3 | bit 2 | bit 1 | bit 0 |
| ------- | ------- | ------- | ------- | ----------- | -------- | ------- | ------- |
| `undef` | `undef` | `undef` | `sync` | `lightpush` | `filter` | `store` | `relay` |
- In the scheme above, the flags `sync`, `lightpush`, `filter`, `store` and
`relay` correlate with support for the protocols of the same name.
If a protocol is not supported, the corresponding flag MUST be set to `false`.
Indicating positive support for any specific protocol is OPTIONAL,
though it MAY be required by the relevant application or discovery process.
- Flags marked as `undef` are not yet defined.
These SHOULD be set to `false` by default.
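As a non-normative illustration, the 8-bit flag field can be packed as follows (names are illustrative):

```python
# Bit positions from the table above (bit 0 = least significant).
RELAY, STORE, FILTER, LIGHTPUSH, SYNC = (1 << i for i in range(5))

def encode_waku2(*, relay=False, store=False, filter_=False,
                 lightpush=False, sync=False) -> int:
    # Undefined bits 5-7 stay 0 (false) by default, as required.
    value = 0
    for flag, bit in ((relay, RELAY), (store, STORE), (filter_, FILTER),
                      (lightpush, LIGHTPUSH), (sync, SYNC)):
        if flag:
            value |= bit
    return value
```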
### Key Usage
- A Waku node MAY choose to populate the `waku2` field for enhanced discovery capabilities,
such as indicating supported protocols.
Such a node MAY indicate support for any specific protocol by setting the corresponding flag to `true`.
- Waku nodes that want to participate in [Node Discovery Protocol v5](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/33/discv5.md) [[4]](#references), however,
MUST implement the `waku2` key with at least one flag set to `true`.
- Waku nodes that discover other participants using Discovery v5
MUST filter out participant records that do not implement this field or
do not have at least one flag set to `true`.
- In addition, such nodes MAY choose to filter participants on specific flags
(such as supported protocols),
or further interpret the `waku2` field as required by the application.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References
- [1](../10/waku2.md)
- [2](https://github.com/status-im/nim-waku/pull/690)
- [3](https://github.com/vacp2p/rfc/issues/462#issuecomment-943869940)
- [4](https://eips.ethereum.org/EIPS/eip-778)
- [5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md)
@@ -1,3 +0,0 @@
# Waku Standards - Core
Core Waku protocol specifications, including messaging, peer discovery, and network primitives.
@@ -1,3 +0,0 @@
# Waku Standards - Legacy
Legacy Waku standards retained for reference and historical compatibility.
flake.lock generated
@@ -1,27 +0,0 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1717159533,
"narHash": "sha256-oamiKNfr2MS6yH64rUn99mIZjc45nGJlj9eGth/3Xuw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "a62e6edd6d5e1fa0329b8653c801147986f8d446",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-23.11",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}
@@ -1,22 +0,0 @@
{
description = "infra-docs";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.11";
outputs = { self, nixpkgs }:
let
stableSystems = ["x86_64-linux" "aarch64-linux" "x86_64-darwin" "aarch64-darwin"];
forAllSystems = nixpkgs.lib.genAttrs stableSystems;
pkgsFor = nixpkgs.lib.genAttrs stableSystems (
system: import nixpkgs { inherit system; }
);
in rec {
devShells = forAllSystems (system: {
default = pkgsFor.${system}.mkShellNoCC {
packages = with pkgsFor.${system}.buildPackages; [
openssh git ghp-import mdbook
];
};
});
};
}
@@ -2,5 +2,5 @@
Nomos is building a secure, flexible, and
scalable infrastructure for developers creating applications for the network state.
Published Specifications are currently available here,
[Nomos Specifications](https://nomos-tech.notion.site/project).
To learn more about current Nomos protocols under discussion,
head over to [Nomos Specs](https://github.com/logos-co/nomos-specs).
@@ -1,12 +1,18 @@
# CONSENSUS-CLARO
| Field | Value |
| --- | --- |
| Name | Claro Consensus Protocol |
| Status | deprecated |
| Category | Standards Track |
| Editor | Corey Petty <corey@status.im> |
| Contributors | Álvaro Castro-Castilla, Mark Evenson |
---
title: CONSENSUS-CLARO
name: Claro Consensus Protocol
status: deprecated
category: Standards Track
tags:
- logos/consensus
editor: Corey Petty <corey@status.im>
created: 01-JUL-2022
revised: <2022-08-26 Fri 13:11Z>
uri: <https://rdf.logos.co/protocol/Claro/1/0/0#<2022-08-26%20Fri$2013:11Z>
contributors:
- Álvaro Castro-Castilla
- Mark Evenson
---
## Abstract
@@ -1,11 +1,17 @@
# NOMOSDA-ENCODING
| Field | Value |
| --- | --- |
| Name | NomosDA Encoding Protocol |
| Status | raw |
| Editor | Daniel Sanchez-Quiros <danielsq@status.im> |
| Contributors | Daniel Kashepava <danielkashepava@status.im>, Álvaro Castro-Castilla <alvaro@status.im>, Filip Dimitrijevic <filip@status.im>, Thomas Lavaur <thomaslavaur@status.im>, Mehmet Gonen <mehmet@status.im> |
---
title: NOMOSDA-ENCODING
name: NomosDA Encoding Protocol
status: raw
category:
tags: data-availability
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Daniel Kashepava <danielkashepava@status.im>
- Álvaro Castro-Castilla <alvaro@status.im>
- Filip Dimitrijevic <filip@status.im>
- Thomas Lavaur <thomaslavaur@status.im>
- Mehmet Gonen <mehmet@status.im>
---
## Introduction
@@ -1,11 +1,16 @@
# NOMOS-DA-NETWORK
| Field | Value |
| --- | --- |
| Name | NomosDA Network |
| Status | raw |
| Editor | Daniel Sanchez Quiros <danielsq@status.im> |
| Contributors | Álvaro Castro-Castilla <alvaro@status.im>, Daniel Kashepava <danielkashepava@status.im>, Gusto Bacvinka <augustinas@status.im>, Filip Dimitrijevic <filip@status.im> |
---
title: NOMOS-DA-NETWORK
name: NomosDA Network
status: raw
category:
tags: network, data-availability, da-nodes, executors, sampling
editor: Daniel Sanchez Quiros <danielsq@status.im>
contributors:
- Álvaro Castro-Castilla <alvaro@status.im>
- Daniel Kashepava <danielkashepava@status.im>
- Gusto Bacvinka <augustinas@status.im>
- Filip Dimitrijevic <filip@status.im>
---
## Introduction
@@ -1,12 +1,13 @@
# P2P-HARDWARE-REQUIREMENTS
| Field | Value |
| --- | --- |
| Name | Nomos p2p Network Hardware Requirements Specification |
| Status | raw |
| Category | infrastructure |
| Editor | Daniel Sanchez-Quiros <danielsq@status.im> |
| Contributors | Filip Dimitrijevic <filip@status.im> |
---
title: P2P-HARDWARE-REQUIREMENTS
name: Nomos p2p Network Hardware Requirements Specification
status: raw
category: infrastructure
tags: [hardware, requirements, nodes, validators, services]
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Filip Dimitrijevic <filip@status.im>
---
## Abstract
@@ -1,12 +1,18 @@
# P2P-NAT-SOLUTION
| Field | Value |
| --- | --- |
| Name | Nomos P2P Network NAT Solution Specification |
| Status | raw |
| Category | networking |
| Editor | Antonio Antonino <antonio@status.im> |
| Contributors | Álvaro Castro-Castilla <alvaro@status.im>, Daniel Sanchez-Quiros <danielsq@status.im>, Petar Radovic <petar@status.im>, Gusto Bacvinka <augustinas@status.im>, Youngjoon Lee <youngjoon@status.im>, Filip Dimitrijevic <filip@status.im> |
---
title: P2P-NAT-SOLUTION
name: Nomos P2P Network NAT Solution Specification
status: raw
category: networking
tags: [nat, traversal, autonat, upnp, pcp, nat-pmp]
editor: Antonio Antonino <antonio@status.im>
contributors:
- Álvaro Castro-Castilla <alvaro@status.im>
- Daniel Sanchez-Quiros <danielsq@status.im>
- Petar Radovic <petar@status.im>
- Gusto Bacvinka <augustinas@status.im>
- Youngjoon Lee <youngjoon@status.im>
- Filip Dimitrijevic <filip@status.im>
---
## Abstract
@@ -1,12 +1,18 @@
# P2P-NETWORK-BOOTSTRAPPING
| Field | Value |
| --- | --- |
| Name | Nomos P2P Network Bootstrapping Specification |
| Status | raw |
| Category | networking |
| Editor | Daniel Sanchez-Quiros <danielsq@status.im> |
| Contributors | Álvaro Castro-Castilla <alvaro@status.im>, Petar Radovic <petar@status.im>, Gusto Bacvinka <augustinas@status.im>, Antonio Antonino <antonio@status.im>, Youngjoon Lee <youngjoon@status.im>, Filip Dimitrijevic <filip@status.im> |
---
title: P2P-NETWORK-BOOTSTRAPPING
name: Nomos P2P Network Bootstrapping Specification
status: raw
category: networking
tags: [p2p, networking, bootstrapping, peer-discovery, libp2p]
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Álvaro Castro-Castilla <alvaro@status.im>
- Petar Radovic <petar@status.im>
- Gusto Bacvinka <augustinas@status.im>
- Antonio Antonino <antonio@status.im>
- Youngjoon Lee <youngjoon@status.im>
- Filip Dimitrijevic <filip@status.im>
---
## Introduction
@@ -1,12 +1,13 @@
# NOMOS-P2P-NETWORK
| Field | Value |
| --- | --- |
| Name | Nomos P2P Network Specification |
| Status | draft |
| Category | networking |
| Editor | Daniel Sanchez-Quiros <danielsq@status.im> |
| Contributors | Filip Dimitrijevic <filip@status.im> |
---
title: NOMOS-P2P-NETWORK
name: Nomos P2P Network Specification
status: draft
category: networking
tags: [p2p, networking, libp2p, kademlia, gossipsub, quic]
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Filip Dimitrijevic <filip@status.im>
---
## Abstract
@@ -1,11 +1,19 @@
# NOMOS-SDP
| Field | Value |
| --- | --- |
| Name | Nomos Service Declaration Protocol Specification |
| Status | raw |
| Editor | Marcin Pawlowski <marcin@status.im> |
| Contributors | Mehmet <mehmet@status.im>, Daniel Sanchez Quiros <danielsq@status.im>, Álvaro Castro-Castilla <alvaro@status.im>, Thomas Lavaur <thomaslavaur@status.im>, Filip Dimitrijevic <filip@status.im>, Gusto Bacvinka <augustinas@status.im>, David Rusu <davidrusu@status.im> |
---
title: NOMOS-SDP
name: Nomos Service Declaration Protocol Specification
status: raw
category:
tags: participation, validators, declarations
editor: Marcin Pawlowski <marcin@status.im>
contributors:
- Mehmet <mehmet@status.im>
- Daniel Sanchez Quiros <danielsq@status.im>
- Álvaro Castro-Castilla <alvaro@status.im>
- Thomas Lavaur <thomaslavaur@status.im>
- Filip Dimitrijevic <filip@status.im>
- Gusto Bacvinka <augustinas@status.im>
- David Rusu <davidrusu@status.im>
---
## Introduction
@@ -1,260 +0,0 @@
#!/usr/bin/env python3
import subprocess
from typing import List, Tuple, Optional, Dict
from pathlib import Path
import re
def log(msg: str):
print(f"[INFO] {msg}", flush=True)
def run_git(args: list) -> str:
cmd = ["git"] + args
log("Running: " + " ".join(cmd))
result = subprocess.run(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
encoding="utf-8",
)
if result.returncode != 0:
print("[ERROR] Command failed:", " ".join(cmd))
print(result.stderr)
raise subprocess.CalledProcessError(
result.returncode, cmd, result.stdout, result.stderr
)
return result.stdout.strip()
def get_repo_https_url() -> Optional[str]:
try:
url = run_git(["config", "--get", "remote.origin.url"]).strip()
except subprocess.CalledProcessError:
return None
if url.startswith("git@github.com:"):
path = url[len("git@github.com:"):]
if path.endswith(".git"):
path = path[:-4]
return f"https://github.com/{path}"
if url.startswith("https://github.com/"):
path = url[len("https://github.com/"):]
if path.endswith(".git"):
path = path[:-4]
return f"https://github.com/{path}"
return None
def get_repo_file_path(path: str) -> str:
log(f"Resolving file path via git: {path}")
try:
out = run_git(["ls-files", "--full-name", path])
except subprocess.CalledProcessError:
raise SystemExit(f"[ERROR] {path!r} is not tracked by git")
if not out:
raise SystemExit(f"[ERROR] {path!r} is not tracked by git")
resolved = out.splitlines()[0]
log(f"Resolved path inside repo: {resolved}")
return resolved
def get_file_commits(path: str) -> List[Tuple[str, str, str, str]]:
log(f"Collecting commit history for: {path}")
log_output = run_git([
"log",
"--follow",
"--format=%H%x09%ad%x09%s",
"--date=short",
"--name-only",
"--",
path,
])
if not log_output:
log("No history found.")
return []
commits: List[Tuple[str, str, str, str]] = []
current: Optional[Dict[str, str]] = None
for line in log_output.splitlines():
if not line.strip():
continue
parts = line.split("\t", 2)
# Detect commit line
if len(parts) == 3 and len(parts[0]) >= 7 and all(c in "0123456789abcdef" for c in parts[0].lower()):
if current:
commits.append((current["commit"], current["date"], current["subject"], current.get("path", path)))
current = {"commit": parts[0], "date": parts[1], "subject": parts[2]}
continue
# If we are in a commit block and we see a path, record the first one
if current and "path" not in current:
current["path"] = line.strip()
if current:
commits.append((current["commit"], current["date"], current["subject"], current.get("path", path)))
commits.reverse()
log(f"Found {len(commits)} commits.")
return commits
def build_markdown_history(
repo_url: str,
file_path: str,
commits: List[Tuple[str, str, str, str]],
) -> str:
log("Generating markdown history...")
entries = []
# newest first
for commit, date, subject, path_at_commit in reversed(commits):
blob_url = f"{repo_url}/blob/{commit}/{path_at_commit}"
entries.append((date, commit, subject, blob_url))
lines: List[str] = []
lines.append("## Timeline\n")
for date, commit, subject, blob_url in entries:
lines.append(f"- **{date}** — [`{commit[:7]}`]({blob_url}) — {subject}")
return "\n".join(lines).rstrip() + "\n"
def find_metadata_table_end(lines: List[str]) -> Optional[int]:
header_idx = None
for idx, line in enumerate(lines[:80]):
if line.strip() == "| Field | Value |":
header_idx = idx
break
if header_idx is None:
return None
if header_idx + 1 >= len(lines):
return None
if not lines[header_idx + 1].strip().startswith("|"):
return None
end_idx = header_idx + 2
while end_idx < len(lines) and lines[end_idx].strip().startswith("|"):
end_idx += 1
return end_idx
def inject_timeline(file_path: Path, timeline_md: str) -> bool:
"""
Insert or replace a timeline block near the top of the file.
Returns True if the file was modified.
"""
content = file_path.read_text(encoding="utf-8")
start_marker = "<!-- timeline:start -->"
end_marker = "<!-- timeline:end -->"
block = (
f"{start_marker}\n\n"
f"{timeline_md.strip()}\n\n"
f"{end_marker}\n"
)
if start_marker in content and end_marker in content:
pattern = re.compile(
re.escape(start_marker) + r".*?" + re.escape(end_marker),
re.DOTALL,
)
new_content, count = pattern.subn(block, content, count=1)
if count and new_content != content:
file_path.write_text(new_content, encoding="utf-8")
return True
return False
lines = content.splitlines()
insert_pos = 0
table_end = find_metadata_table_end(lines)
if table_end is not None:
insert_pos = len("\n".join(lines[:table_end]))
else:
for idx, line in enumerate(lines):
if line.startswith("# "):
insert_pos = len("\n".join(lines[: idx + 1]))
break
new_content = content[:insert_pos] + "\n\n" + block + "\n" + content[insert_pos:]
if new_content != content:
file_path.write_text(new_content, encoding="utf-8")
return True
return False
def is_rfc_file(path: Path) -> bool:
try:
text = path.read_text(encoding="utf-8", errors="ignore")
except OSError:
return False
if "# " not in text:
return False
if "| Field | Value |" not in text:
return False
return True
def find_rfc_files(root: Path) -> List[Path]:
candidates: List[Path] = []
for path in root.rglob("*.md"):
if path.name in {"README.md", "SUMMARY.md", "template.md"}:
continue
if is_rfc_file(path):
candidates.append(path)
return sorted(candidates)
def main():
log("Starting history generation")
repo_url = get_repo_https_url()
if not repo_url:
raise SystemExit("[ERROR] Could not determine GitHub repo URL")
log(f"Repo URL: {repo_url}")
root = Path("docs")
files = find_rfc_files(root)
if not files:
raise SystemExit(f"[ERROR] No RFCs found under {root}")
updated = 0
for file_path in files:
repo_file_path = get_repo_file_path(str(file_path))
commits = get_file_commits(repo_file_path)
if not commits:
log(f"[WARN] No history found for {repo_file_path}")
continue
markdown = build_markdown_history(
repo_url=repo_url,
file_path=repo_file_path,
commits=commits,
)
modified = inject_timeline(file_path, markdown)
if modified:
updated += 1
log(f"Timeline injected into {file_path}")
log(f"Timelines updated in {updated} files")
if __name__ == "__main__":
main()
@@ -1,104 +0,0 @@
#!/usr/bin/env python3
"""
Generate a JSON index of RFC metadata for the landing page filters.
Scans the docs/ tree for Markdown files and writes
`docs/rfc-index.json`.
"""
from __future__ import annotations
import json
from pathlib import Path
from typing import Dict, List, Optional
import html
import re
ROOT = Path(__file__).resolve().parent.parent
DOCS = ROOT / "docs"
OUTPUT = DOCS / "rfc-index.json"
EXCLUDE_FILES = {"README.md", "SUMMARY.md"}
EXCLUDE_PARTS = {"previous-versions"}
def parse_meta_from_markdown_table(text: str) -> Optional[Dict[str, str]]:
lines = text.splitlines()
meta: Dict[str, str] = {}
for i in range(len(lines) - 2):
line = lines[i].strip()
next_line = lines[i + 1].strip()
if not (line.startswith('|') and next_line.startswith('|') and '---' in next_line):
continue
# Simple two-column table parsing
j = i + 2
while j < len(lines) and lines[j].strip().startswith('|'):
parts = [p.strip() for p in lines[j].strip().strip('|').split('|')]
if len(parts) >= 2:
key = parts[0].lower()
value = html.unescape(parts[1])
if key and value:
meta[key] = value
j += 1
break
return meta or None
def parse_title_from_h1(text: str) -> Optional[str]:
match = re.search(r"^#\s+(.+)$", text, flags=re.MULTILINE)
if not match:
return None
return match.group(1).strip()
def collect() -> List[Dict[str, str]]:
entries: List[Dict[str, str]] = []
for path in DOCS.rglob("*.md"):
rel = path.relative_to(DOCS)
if rel.name in EXCLUDE_FILES:
continue
if EXCLUDE_PARTS.intersection(rel.parts):
continue
text = path.read_text(encoding="utf-8", errors="ignore")
meta = parse_meta_from_markdown_table(text) or {}
slug = meta.get("slug")
title = meta.get("title") or meta.get("name") or parse_title_from_h1(text) or rel.stem
status = meta.get("status") or "unknown"
category = meta.get("category") or "unspecified"
project = rel.parts[0]
# Skip the template placeholder
if slug == "XX":
continue
# mdBook renders Markdown to .html, keep links consistent
html_path = rel.with_suffix(".html").as_posix()
entries.append(
{
"project": project,
"slug": str(slug) if slug is not None else title,
"title": title,
"status": status,
"category": category,
"path": html_path,
}
)
entries.sort(key=lambda r: (r["project"], r["slug"]))
return entries
def main() -> None:
entries = collect()
OUTPUT.write_text(json.dumps(entries, indent=2), encoding="utf-8")
print(f"Wrote {len(entries)} entries to {OUTPUT}")
if __name__ == "__main__":
main()
@@ -1,362 +0,0 @@
(() => {
function linkMenuTitle() {
const menuTitle = document.querySelector(".menu-title");
if (!menuTitle || menuTitle.dataset.linked === "true") {
return;
}
const existingLink = menuTitle.closest("a");
if (existingLink) {
menuTitle.dataset.linked = "true";
return;
}
const root = (typeof path_to_root !== "undefined" && path_to_root) ? path_to_root : "";
const link = document.createElement("a");
link.href = `${root}index.html`;
link.className = "menu-title-link";
link.setAttribute("aria-label", "Back to home");
const parent = menuTitle.parentNode;
parent.replaceChild(link, menuTitle);
link.appendChild(menuTitle);
menuTitle.dataset.linked = "true";
}
function onReady(fn) {
if (document.readyState === "loading") {
document.addEventListener("DOMContentLoaded", fn, { once: true });
} else {
fn();
}
}
onReady(linkMenuTitle);
onReady(() => {
const printLink = document.querySelector("a[href$='print.html']");
if (!printLink) return;
printLink.addEventListener("click", (event) => {
event.preventDefault();
window.print();
});
});
function getSectionInfo(item) {
const direct = item.querySelector(":scope > ol.section");
if (direct) {
return { section: direct, container: item, isSibling: false };
}
const sibling = item.nextElementSibling;
if (sibling && sibling.tagName === "LI") {
const siblingSection = sibling.querySelector(":scope > ol.section");
if (siblingSection) {
sibling.classList.add("section-container");
return { section: siblingSection, container: sibling, isSibling: true };
}
}
return null;
}
function initSidebarCollapsible(root) {
if (!root) return;
const items = root.querySelectorAll("li.chapter-item");
items.forEach((item) => {
const sectionInfo = getSectionInfo(item);
const link = item.querySelector(":scope > a, :scope > .chapter-link-wrapper > a");
if (!sectionInfo || !link) return;
if (!link.querySelector(".section-toggle")) {
const toggle = document.createElement("span");
toggle.className = "section-toggle";
toggle.setAttribute("role", "button");
toggle.setAttribute("aria-label", "Toggle section");
toggle.addEventListener("click", (event) => {
event.preventDefault();
event.stopPropagation();
item.classList.toggle("collapsed");
});
link.prepend(toggle);
}
if (item.dataset.collapsibleInit !== "true") {
const hasActive = link.classList.contains("active");
const hasActiveInSection = !!sectionInfo.section.querySelector(".active");
item.classList.toggle("collapsed", !(hasActive || hasActiveInSection));
item.dataset.collapsibleInit = "true";
}
});
}
function bindSidebarCollapsible() {
const sidebar = document.querySelector("#mdbook-sidebar .sidebar-scrollbox")
|| document.querySelector("#sidebar .sidebar-scrollbox");
if (sidebar) {
initSidebarCollapsible(sidebar);
}
const iframe = document.querySelector(".sidebar-iframe-outer");
if (iframe) {
const onLoad = () => {
try {
initSidebarCollapsible(iframe.contentDocument);
} catch (e) {
// ignore access errors
}
};
iframe.addEventListener("load", onLoad);
onLoad();
}
}
function observeSidebar() {
const target = document.querySelector("#mdbook-sidebar") || document.querySelector("#sidebar");
if (!target) return;
const observer = new MutationObserver(() => bindSidebarCollapsible());
observer.observe(target, { childList: true, subtree: true });
setTimeout(() => observer.disconnect(), 1500);
}
onReady(() => {
bindSidebarCollapsible();
// toc.js may inject the sidebar after load
setTimeout(bindSidebarCollapsible, 100);
observeSidebar();
});
const searchInput = document.getElementById("rfc-search");
const resultsCount = document.getElementById("results-count");
const tableContainer = document.getElementById("rfc-table-container");
if (!searchInput || !resultsCount || !tableContainer) {
return;
}
let rfcData = [];
const statusOrder = { stable: 0, draft: 1, raw: 2, deprecated: 3, deleted: 4, unknown: 5 };
const statusLabels = {
stable: "Stable",
draft: "Draft",
raw: "Raw",
deprecated: "Deprecated",
deleted: "Deleted",
unknown: "Unknown"
};
const projectLabels = {
vac: "Vac",
waku: "Waku",
status: "Status",
nomos: "Nomos",
codex: "Codex"
};
const headers = [
{ key: "slug", label: "RFC", width: "12%" },
{ key: "title", label: "Title", width: "38%" },
{ key: "project", label: "Project", width: "12%" },
{ key: "status", label: "Status", width: "15%" },
{ key: "category", label: "Category", width: "23%" }
];
let statusFilter = "all";
let projectFilter = "all";
let sortKey = "slug";
let sortDir = "asc";
const table = document.createElement("table");
table.className = "rfc-table";
const thead = document.createElement("thead");
const headRow = document.createElement("tr");
const headerCells = {};
headers.forEach((header) => {
const th = document.createElement("th");
th.textContent = header.label;
th.dataset.sort = header.key;
th.dataset.label = header.label;
if (header.width) {
th.style.width = header.width;
}
th.addEventListener("click", () => {
if (sortKey === header.key) {
sortDir = sortDir === "asc" ? "desc" : "asc";
} else {
sortKey = header.key;
sortDir = "asc";
}
render();
});
headRow.appendChild(th);
headerCells[header.key] = th;
});
thead.appendChild(headRow);
table.appendChild(thead);
const tbody = document.createElement("tbody");
table.appendChild(tbody);
tableContainer.appendChild(table);
function normalizeStatus(status) {
return (status || "unknown").toString().toLowerCase().split("/")[0];
}
function formatStatus(status) {
const key = normalizeStatus(status);
return statusLabels[key] || status;
}
function formatProject(project) {
return projectLabels[project] || project;
}
function formatCategory(category) {
if (!category) return "unspecified";
return category;
}
function updateHeaderIndicators() {
Object.keys(headerCells).forEach((key) => {
const th = headerCells[key];
const label = th.dataset.label || "";
if (key === sortKey) {
th.classList.add("sorted");
th.textContent = `${label} ${sortDir === "asc" ? "^" : "v"}`;
} else {
th.classList.remove("sorted");
th.textContent = label;
}
});
}
function updateResultsCount(count, total) {
if (total === 0) {
resultsCount.textContent = "No RFCs found.";
return;
}
resultsCount.textContent = `Showing ${count} of ${total} RFCs`;
}
function updateChipGroup(containerId, dataAttr, counts, total) {
document.querySelectorAll(`#${containerId} .chip`).forEach((chip) => {
const key = chip.dataset[dataAttr];
const label = chip.dataset.label || chip.textContent;
const count = key === "all" ? total : (counts[key] || 0);
chip.textContent = `${label} (${count})`;
});
}
function updateChipCounts() {
const statusCounts = {};
const projectCounts = {};
rfcData.forEach((item) => {
const statusKey = normalizeStatus(item.status);
statusCounts[statusKey] = (statusCounts[statusKey] || 0) + 1;
projectCounts[item.project] = (projectCounts[item.project] || 0) + 1;
});
updateChipGroup("status-chips", "status", statusCounts, rfcData.length);
updateChipGroup("project-chips", "project", projectCounts, rfcData.length);
}
function compareItems(a, b) {
if (sortKey === "status") {
const aKey = normalizeStatus(a.status);
const bKey = normalizeStatus(b.status);
return (statusOrder[aKey] ?? 99) - (statusOrder[bKey] ?? 99);
}
if (sortKey === "slug") {
const aNum = parseInt(a.slug, 10);
const bNum = parseInt(b.slug, 10);
const aIsNum = !isNaN(aNum);
const bIsNum = !isNaN(bNum);
if (aIsNum && bIsNum) return aNum - bNum;
if (aIsNum && !bIsNum) return -1;
if (!aIsNum && bIsNum) return 1;
}
const aVal = (a[sortKey] || "").toString().toLowerCase();
const bVal = (b[sortKey] || "").toString().toLowerCase();
return aVal.localeCompare(bVal, undefined, { numeric: true, sensitivity: "base" });
}
function sortItems(items) {
const sorted = [...items].sort(compareItems);
if (sortDir === "desc") sorted.reverse();
return sorted;
}
function render() {
const query = (searchInput.value || "").toLowerCase();
const filtered = rfcData.filter((item) => {
const statusOk = statusFilter === "all" || normalizeStatus(item.status) === statusFilter;
const projectOk = projectFilter === "all" || item.project === projectFilter;
const text = `${item.slug} ${item.title} ${item.project} ${item.status} ${item.category}`.toLowerCase();
const textOk = !query || text.includes(query);
return statusOk && projectOk && textOk;
});
const sorted = sortItems(filtered);
updateResultsCount(sorted.length, rfcData.length);
updateHeaderIndicators();
tbody.innerHTML = "";
if (!sorted.length) {
const tr = document.createElement("tr");
tr.innerHTML = "<td colspan=\"5\">No RFCs match your filters.</td>";
tbody.appendChild(tr);
return;
}
sorted.forEach((item) => {
const tr = document.createElement("tr");
tr.innerHTML = `
<td><a href="./${item.path}">${item.slug}</a></td>
<td>${item.title}</td>
<td>${formatProject(item.project)}</td>
<td><span class="badge status-${normalizeStatus(item.status)}">${formatStatus(item.status)}</span></td>
<td>${formatCategory(item.category)}</td>
`;
tbody.appendChild(tr);
});
}
searchInput.addEventListener("input", render);
document.getElementById("status-chips").addEventListener("click", (e) => {
if (!e.target.dataset.status) return;
statusFilter = e.target.dataset.status;
document.querySelectorAll("#status-chips .chip").forEach((chip) => {
chip.classList.toggle("active", chip.dataset.status === statusFilter);
});
render();
});
document.getElementById("project-chips").addEventListener("click", (e) => {
if (!e.target.dataset.project) return;
projectFilter = e.target.dataset.project;
document.querySelectorAll("#project-chips .chip").forEach((chip) => {
chip.classList.toggle("active", chip.dataset.project === projectFilter);
});
render();
});
resultsCount.textContent = "Loading RFC index...";
fetch("./rfc-index.json")
.then((resp) => {
if (!resp.ok) throw new Error(resp.statusText);
return resp.json();
})
.then((data) => {
rfcData = data;
updateChipCounts();
render();
})
.catch((err) => {
console.error(err);
resultsCount.textContent = "Failed to load RFC index.";
});
})();

View File

@@ -1,11 +1,12 @@
# 24/STATUS-CURATION
| Field | Value |
| --- | --- |
| Name | Status Community Directory Curation Voting using Waku v2 |
| Slug | 24 |
| Status | draft |
| Editor | Szymon Szlachtowicz <szymon.s@ethworks.io> |
---
slug: 24
title: 24/STATUS-CURATION
name: Status Community Directory Curation Voting using Waku v2
status: draft
tags: waku-application
description: A voting protocol for SNT holders to submit votes to a smart contract. Voting is immutable, which helps avoid sabotage from malicious peers.
editor: Szymon Szlachtowicz <szymon.s@ethworks.io>
---
## Abstract

View File

@@ -1,11 +1,12 @@
# 28/STATUS-FEATURING
| Field | Value |
| --- | --- |
| Name | Status community featuring using waku v2 |
| Slug | 28 |
| Status | draft |
| Editor | Szymon Szlachtowicz <szymon.s@ethworks.io> |
---
slug: 28
title: 28/STATUS-FEATURING
name: Status community featuring using waku v2
status: draft
tags: waku-application
description: To gain new members, current SNT holders can vote to feature an active Status community to the larger Status audience.
editor: Szymon Szlachtowicz <szymon.s@ethworks.io>
---
## Abstract

View File

@@ -1,13 +1,19 @@
# 55/STATUS-1TO1-CHAT
| Field | Value |
| --- | --- |
| Name | Status 1-to-1 Chat |
| Slug | 55 |
| Status | draft |
| Category | Standards Track |
| Editor | Aaryamann Challani <p1ge0nh8er@proton.me> |
| Contributors | Andrea Piana <andreap@status.im>, Pedro Pombeiro <pedro@status.im>, Corey Petty <corey@status.im>, Oskar Thorén <oskarth@titanproxy.com>, Dean Eigenmann <dean@status.im> |
---
slug: 55
title: 55/STATUS-1TO1-CHAT
name: Status 1-to-1 Chat
status: draft
category: Standards Track
tags: waku-application
description: A chat protocol to send public and private messages to a single recipient by the Status app.
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Andrea Piana <andreap@status.im>
- Pedro Pombeiro <pedro@status.im>
- Corey Petty <corey@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Dean Eigenmann <dean@status.im>
---
## Abstract

View File

@@ -1,13 +1,16 @@
# 56/STATUS-COMMUNITIES
| Field | Value |
| --- | --- |
| Name | Status Communities that run over Waku v2 |
| Slug | 56 |
| Status | draft |
| Category | Standards Track |
| Editor | Aaryamann Challani <p1ge0nh8er@proton.me> |
| Contributors | Andrea Piana <andreap@status.im>, Prem Chaitanya Prathi <prem@waku.org> |
---
slug: 56
title: 56/STATUS-COMMUNITIES
name: Status Communities that run over Waku v2
status: draft
category: Standards Track
tags: waku-application
description: Status Communities allow multiple users to communicate in a discussion space. This is a key feature of the Status application.
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Andrea Piana <andreap@status.im>
- Prem Chaitanya Prathi <prem@waku.org>
---
## Abstract

View File

@@ -1,13 +1,15 @@
# 61/STATUS-Community-History-Service
| Field | Value |
| --- | --- |
| Name | Status Community History Service |
| Slug | 61 |
| Status | draft |
| Category | Standards Track |
| Editor | r4bbit <r4bbit@status.im> |
| Contributors | Sanaz Taheri <sanaz@status.im>, John Lea <john@status.im> |
---
slug: 61
title: 61/STATUS-Community-History-Service
name: Status Community History Service
status: draft
category: Standards Track
description: Explains how new members of a Status community can request historical messages from archive nodes.
editor: r4bbit <r4bbit@status.im>
contributors:
- Sanaz Taheri <sanaz@status.im>
- John Lea <john@status.im>
---
## Abstract

View File

@@ -1,12 +1,16 @@
# 62/STATUS-PAYLOADS
| Field | Value |
| --- | --- |
| Name | Status Message Payloads |
| Slug | 62 |
| Status | draft |
| Editor | r4bbit <r4bbit@status.im> |
| Contributors | Adam Babik <adam@status.im>, Andrea Maria Piana <andreap@status.im>, Oskar Thoren <oskarth@titanproxy.com>, Samuel Hawksby-Robinson <samuel@status.im> |
---
slug: 62
title: 62/STATUS-PAYLOADS
name: Status Message Payloads
status: draft
description: Describes the payload of each message in Status.
editor: r4bbit <r4bbit@status.im>
contributors:
- Adam Babik <adam@status.im>
- Andrea Maria Piana <andreap@status.im>
- Oskar Thoren <oskarth@titanproxy.com>
- Samuel Hawksby-Robinson <samuel@status.im>
---
## Abstract

View File

@@ -1,13 +1,14 @@
# 63/STATUS-Keycard-Usage
| Field | Value |
| --- | --- |
| Name | Status Keycard Usage |
| Slug | 63 |
| Status | draft |
| Category | Standards Track |
| Editor | Aaryamann Challani <p1ge0nh8er@proton.me> |
| Contributors | Jimmy Debe <jimmy@status.im> |
---
slug: 63
title: 63/STATUS-Keycard-Usage
name: Status Keycard Usage
status: draft
category: Standards Track
description: Describes how an application can use the Status Keycard to create, store and transact with different account addresses.
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Jimmy Debe <jimmy@status.im>
---
## Terminology

View File

@@ -1,13 +1,16 @@
# 65/STATUS-ACCOUNT-ADDRESS
| Field | Value |
| --- | --- |
| Name | Status Account Address |
| Slug | 65 |
| Status | draft |
| Category | Standards Track |
| Editor | Aaryamann Challani <p1ge0nh8er@proton.me> |
| Contributors | Corey Petty <corey@status.im>, Oskar Thorén <oskarth@titanproxy.com>, Samuel Hawksby-Robinson <samuel@status.im> |
---
slug: 65
title: 65/STATUS-ACCOUNT-ADDRESS
name: Status Account Address
status: draft
category: Standards Track
description: Details of what a Status account address is and how account addresses are created and used.
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Corey Petty <corey@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Samuel Hawksby-Robinson <samuel@status.im>
---
## Abstract

View File

(image file: 59 KiB before and after)

View File

(image file: 25 KiB before and after)

View File

@@ -1,13 +1,14 @@
# 71/STATUS-PUSH-NOTIFICATION-SERVER
| Field | Value |
| --- | --- |
| Name | Push Notification Server |
| Slug | 71 |
| Status | draft |
| Category | Standards Track |
| Editor | Jimmy Debe <jimmy@status.im> |
| Contributors | Andrea Maria Piana <andreap@status.im> |
---
slug: 71
title: 71/STATUS-PUSH-NOTIFICATION-SERVER
name: Push Notification Server
status: draft
category: Standards Track
description: A set of methods to allow Status clients to use push notification services in mobile environments.
editor: Jimmy Debe <jimmy@status.im>
contributors:
- Andrea Maria Piana <andreap@status.im>
---
## Abstract

View File

@@ -1,11 +1,12 @@
# 3RD-PARTY
| Field | Value |
| --- | --- |
| Name | 3rd party |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Volodymyr Kozieiev <volodymyr@status.im> |
---
title: 3RD-PARTY
name: 3rd party
status: deprecated
description: This specification discusses 3rd party APIs that Status relies on.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Volodymyr Kozieiev <volodymyr@status.im>
---
## Abstract

View File

@@ -1,11 +1,12 @@
# IPFS-gateway-for-Sticker-Pack
| Field | Value |
| --- | --- |
| Name | IPFS gateway for Sticker Pack |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Gheorghe Pinzaru <gheorghe@status.im> |
---
title: IPFS-gateway-for-Sticker-Pack
name: IPFS gateway for Sticker Pack
status: deprecated
description: This specification describes how Status uses the IPFS gateway to store stickers.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Gheorghe Pinzaru <gheorghe@status.im>
---
## Abstract

View File

@@ -1,11 +1,14 @@
# ACCOUNT
| Field | Value |
| --- | --- |
| Name | Account |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Corey Petty <corey@status.im>, Oskar Thorén <oskar@status.im>, Samuel Hawksby-Robinson <samuel@status.im> |
---
title: ACCOUNT
name: Account
status: deprecated
description: This specification explains what a Status account is, and how a node establishes trust.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Corey Petty <corey@status.im>
- Oskar Thorén <oskar@status.im>
- Samuel Hawksby-Robinson <samuel@status.im>
---
## Abstract

View File

@@ -1,11 +1,17 @@
# CLIENT
| Field | Value |
| --- | --- |
| Name | Client |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Adam Babik <adam@status.im>, Andrea Maria Piana <andreap@status.im>, Dean Eigenmann <dean@status.im>, Corey Petty <corey@status.im>, Oskar Thorén <oskar@status.im>, Samuel Hawksby-Robinson <samuel@status.im> |
---
title: CLIENT
name: Client
status: deprecated
description: This specification describes how to write a Status client for communicating with other Status clients.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Andrea Maria Piana <andreap@status.im>
- Dean Eigenmann <dean@status.im>
- Corey Petty <corey@status.im>
- Oskar Thorén <oskar@status.im>
- Samuel Hawksby-Robinson <samuel@status.im>
---
## Abstract

View File

@@ -1,10 +1,11 @@
# Dapp browser API usage
| Field | Value |
| --- | --- |
| Name | Dapp browser API usage |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
---
title: Dapp browser API usage
name: Dapp browser API usage
status: deprecated
description: This document describes requirements that an application must fulfill in order to provide a proper environment for Dapps running inside a browser.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
---
## Abstract

View File

@@ -1,11 +1,12 @@
# EIPS
| Field | Value |
| --- | --- |
| Name | EIPS |
| Status | deprecated |
| Editor | Ricardo Guilherme Schmidt <ricardo3@status.im> |
| Contributors | None |
---
title: EIPS
name: EIPS
status: deprecated
description: Status's relationship with the EIPs
editor: Ricardo Guilherme Schmidt <ricardo3@status.im>
contributors:
-
---
## Abstract

View File

@@ -1,11 +1,12 @@
# ETHEREUM-USAGE
| Field | Value |
| --- | --- |
| Name | Status interactions with the Ethereum blockchain |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Andrea Maria Piana <andreap@status.im> |
---
title: ETHEREUM-USAGE
name: Status interactions with the Ethereum blockchain
status: deprecated
description: All interactions that the Status client has with the Ethereum blockchain.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Andrea Maria Piana <andreap@status.im>
---
## Abstract

View File

@@ -1,11 +1,12 @@
# GROUP-CHAT
| Field | Value |
| --- | --- |
| Name | Group Chat |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Andrea Piana <andreap@status.im> |
---
title: GROUP-CHAT
name: Group Chat
status: deprecated
description: This document describes the group chat protocol used by the Status application.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Andrea Piana <andreap@status.im>
---
## Abstract

View File

(image file: 2.4 KiB before and after)

View File

(image file: 1.1 KiB before and after)

View File

@@ -1,11 +1,12 @@
# Keycard Usage for Wallet and Chat Keys
| Field | Value |
| --- | --- |
| Name | Keycard Usage for Wallet and Chat Keys |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Roman Volosovskyi <roman@status.im> |
---
title: Keycard Usage for Wallet and Chat Keys
name: Keycard Usage for Wallet and Chat Keys
status: deprecated
description: In this specification, we describe how Status communicates with Keycard to create, store, and use multiaccounts.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Roman Volosovskyi <roman@status.im>
---
## Abstract

View File

@@ -1,11 +1,12 @@
# NOTIFICATIONS
| Field | Value |
| --- | --- |
| Name | Notifications |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Eric Dvorsak <eric@status.im> |
---
title: NOTIFICATIONS
name: Notifications
status: deprecated
description: A client should implement local notifications to provide notifications for any event in the app without the privacy cost of depending on third-party services.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Eric Dvorsak <eric@status.im>
---
## Local Notifications

View File

@@ -1,11 +1,14 @@
# PAYLOADS
| Field | Value |
| --- | --- |
| Name | Payloads |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Adam Babik <adam@status.im>, Andrea Maria Piana <andreap@status.im>, Oskar Thorén <oskar@status.im> |
---
title: PAYLOADS
name: Payloads
status: deprecated
description: The payload of messages in Status for chat and chat-related use cases.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Andrea Maria Piana <andreap@status.im>
- Oskar Thorén <oskar@status.im>
---
## Abstract

View File

@@ -1,11 +1,12 @@
# PUSH-NOTIFICATION-SERVER
| Field | Value |
| --- | --- |
| Name | Push notification server |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Andrea Maria Piana <andreap@status.im> |
---
title: PUSH-NOTIFICATION-SERVER
name: Push notification server
status: deprecated
description: Status provides a set of push notification services that can be used to achieve this functionality.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Andrea Maria Piana <andreap@status.im>
---
## Reason

View File

@@ -1,11 +1,16 @@
# SECURE-TRANSPORT
| Field | Value |
| --- | --- |
| Name | Secure Transport |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Andrea Maria Piana <andreap@status.im>, Corey Petty <corey@status.im>, Dean Eigenmann <dean@status.im>, Oskar Thorén <oskar@status.im>, Pedro Pombeiro <pedro@status.im> |
---
title: SECURE-TRANSPORT
name: Secure Transport
status: deprecated
description: This document describes how Status provides a secure channel between two peers, providing confidentiality, integrity, authenticity, and forward secrecy.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Andrea Maria Piana <andreap@status.im>
- Corey Petty <corey@status.im>
- Dean Eigenmann <dean@status.im>
- Oskar Thorén <oskar@status.im>
- Pedro Pombeiro <pedro@status.im>
---
## Abstract

View File

@@ -1,11 +1,14 @@
# WAKU-MAILSERVER
| Field | Value |
| --- | --- |
| Name | Waku Mailserver |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Adam Babik <adam@status.im>, Oskar Thorén <oskar@status.im>, Samuel Hawksby-Robinson <samuel@status.im> |
---
title: WAKU-MAILSERVER
name: Waku Mailserver
status: deprecated
description: Waku Mailserver is a specification that allows messages to be stored permanently and delivered to requesting client nodes, even when the messages are no longer available in the network because their TTL has expired.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Oskar Thorén <oskar@status.im>
- Samuel Hawksby-Robinson <samuel@status.im>
---
## Abstract

View File

@@ -1,11 +1,15 @@
# WAKU-USAGE
| Field | Value |
| --- | --- |
| Name | Waku Usage |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Adam Babik <adam@status.im>, Corey Petty <corey@status.im>, Oskar Thorén <oskar@status.im>, Samuel Hawksby-Robinson <samuel@status.im> |
---
title: WAKU-USAGE
name: Waku Usage
status: deprecated
description: Status uses Waku to provide privacy-preserving routing and messaging on top of devP2P.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Corey Petty <corey@status.im>
- Oskar Thorén <oskar@status.im>
- Samuel Hawksby-Robinson <samuel@status.im>
---
## Abstract

View File

@@ -1,11 +1,13 @@
# WHISPER-MAILSERVER
| Field | Value |
| --- | --- |
| Name | Whisper mailserver |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Adam Babik <adam@status.im>, Oskar Thorén <oskar@status.im> |
---
title: WHISPER-MAILSERVER
name: Whisper mailserver
status: deprecated
description: Whisper Mailserver is a Whisper extension that allows messages to be stored permanently and delivered to clients even after they have expired and are no longer available in the network.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Oskar Thorén <oskar@status.im>
---
## Abstract

View File

@@ -1,11 +1,15 @@
# WHISPER-USAGE
| Field | Value |
| --- | --- |
| Name | Whisper Usage |
| Status | deprecated |
| Editor | Filip Dimitrijevic <filip@status.im> |
| Contributors | Adam Babik <adam@status.im>, Andrea Piana <andreap@status.im>, Corey Petty <corey@status.im>, Oskar Thorén <oskar@status.im> |
---
title: WHISPER-USAGE
name: Whisper Usage
status: deprecated
description: Status uses Whisper to provide privacy-preserving routing and messaging on top of devP2P.
editor: Filip Dimitrijevic <filip@status.im>
contributors:
- Adam Babik <adam@status.im>
- Andrea Piana <andreap@status.im>
- Corey Petty <corey@status.im>
- Oskar Thorén <oskar@status.im>
---
## Abstract

View File

@@ -1,12 +1,14 @@
# STATUS-SIMPLE-SCALING
| Field | Value |
| --- | --- |
| Name | Status Simple Scaling |
| Status | raw |
| Category | Informational |
| Editor | Daniel Kaiser <danielkaiser@status.im> |
| Contributors | Alvaro Revuelta <alrevuelta@status.im> |
---
title: STATUS-SIMPLE-SCALING
name: Status Simple Scaling
status: raw
category: Informational
tags: waku/application
description: Describes how to scale Status Communities and Status 1-to-1 chats using the Waku v2 protocol and components.
editor: Daniel Kaiser <danielkaiser@status.im>
contributors:
- Alvaro Revuelta <alrevuelta@status.im>
---
## Abstract

View File

@@ -1,12 +1,15 @@
# STATUS-PROTOCOLS
---
title: STATUS-PROTOCOLS
name: Status Protocol Stack
status: raw
category: Standards Track
description: Specifies the Status application protocol stack.
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Jimmy Debe <jimmy@status.im>
- Aaryamann Challani <p1ge0nh8er@proton.me>
| Field | Value |
| --- | --- |
| Name | Status Protocol Stack |
| Status | raw |
| Category | Standards Track |
| Editor | Hanno Cornelius <hanno@status.im> |
| Contributors | Jimmy Debe <jimmy@status.im>, Aaryamann Challani <p1ge0nh8er@proton.me> |
---
## Abstract

View File

@@ -1,11 +1,12 @@
# STATUS-MVDS-USAGE
| Field | Value |
| --- | --- |
| Name | MVDS Usage in Status |
| Status | raw |
| Category | Best Current Practice |
| Editor | Kaichao Sun <kaichao@status.im> |
---
title: STATUS-MVDS-USAGE
name: MVDS Usage in Status
status: raw
category: Best Current Practice
description: Defines how the MVDS protocol is used by different message types in Status.
editor: Kaichao Sun <kaichao@status.im>
contributors:
---
## Abstract

View File

@@ -1,12 +1,13 @@
# STATUS-URL-DATA
| Field | Value |
| --- | --- |
| Name | Status URL Data |
| Status | raw |
| Category | Standards Track |
| Editor | Felicio Mununga <felicio@status.im> |
| Contributors | Aaryamann Challani <aaryamann@status.im> |
---
title: STATUS-URL-DATA
name: Status URL Data
status: raw
category: Standards Track
tags:
editor: Felicio Mununga <felicio@status.im>
contributors:
- Aaryamann Challani <aaryamann@status.im>
---
## Abstract

View File

@@ -1,11 +1,12 @@
# STATUS-URL-SCHEME
| Field | Value |
| --- | --- |
| Name | Status URL Scheme |
| Status | raw |
| Category | Standards Track |
| Editor | Felicio Mununga <felicio@status.im> |
---
title: STATUS-URL-SCHEME
name: Status URL Scheme
status: raw
category: Standards Track
tags:
editor: Felicio Mununga <felicio@status.im>
contributors:
---
## Abstract

View File

@@ -1,13 +1,19 @@
# 1/COSS
| Field | Value |
| --- | --- |
| Name | Consensus-Oriented Specification System |
| Slug | 1 |
| Status | draft |
| Category | Best Current Practice |
| Editor | Daniel Kaiser <danielkaiser@status.im> |
| Contributors | Oskar Thoren <oskarth@titanproxy.com>, Pieter Hintjens <ph@imatix.com>, André Rebentisch <andre@openstandards.de>, Alberto Barrionuevo <abarrio@opentia.es>, Chris Puttick <chris.puttick@thehumanjourney.net>, Yurii Rashkovskii <yrashk@gmail.com>, Jimmy Debe <jimmy@status.im> |
---
slug: 1
title: 1/COSS
name: Consensus-Oriented Specification System
status: draft
category: Best Current Practice
editor: Daniel Kaiser <danielkaiser@status.im>
contributors:
- Oskar Thoren <oskarth@titanproxy.com>
- Pieter Hintjens <ph@imatix.com>
- André Rebentisch <andre@openstandards.de>
- Alberto Barrionuevo <abarrio@opentia.es>
- Chris Puttick <chris.puttick@thehumanjourney.net>
- Yurii Rashkovskii <yrashk@gmail.com>
- Jimmy Debe <jimmy@status.im>
---
This document describes a consensus-oriented specification system (COSS)
for building interoperable technical specifications.
@@ -34,7 +40,7 @@ Request For Comments specification process managed by the Vac service department
## License
Copyright (c) 2008-26 the Editor and Contributors.
Copyright (c) 2008-24 the Editor and Contributors.
This Specification is free software;
you can redistribute it and/or

View File

(image file: 42 KiB before and after)

View File

(image file: 14 KiB before and after)

View File

(image file: 24 KiB before and after)

View File

@@ -1,15 +1,16 @@
# 2/MVDS
| Field | Value |
| --- | --- |
| Name | Minimum Viable Data Synchronization |
| Slug | 2 |
| Status | stable |
| Editor | Sanaz Taheri <sanaz@status.im> |
| Contributors | Dean Eigenmann <dean@status.im>, Oskar Thorén <oskarth@titanproxy.com> |
---
slug: 2
title: 2/MVDS
name: Minimum Viable Data Synchronization
status: stable
editor: Sanaz Taheri <sanaz@status.im>
contributors:
- Dean Eigenmann <dean@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
---
In this specification, we describe a minimum viable protocol for
data synchronization inspired by the Bramble Synchronization Protocol ([BSP](https://code.briarproject.org/briar/briar-spec/blob/master/protocols/BSP.md)).
data synchronization inspired by the Bramble Synchronization Protocol[^1].
This protocol is designed to ensure reliable messaging
between peers across an unreliable peer-to-peer (P2P) network where
they may be unreachable or unresponsive.
@@ -186,4 +187,5 @@ Copyright and related rights waived via [CC0](https://creativecommons.org/public
## Footnotes
[^1]: akwizgran et al. [BSP](https://code.briarproject.org/briar/briar-spec/blob/master/protocols/BSP.md). Briar.
[^2]: <https://github.com/vacp2p/mvds>

View File

@@ -1,11 +1,11 @@
# 25/LIBP2P-DNS-DISCOVERY
| Field | Value |
| --- | --- |
| Name | Libp2p Peer Discovery via DNS |
| Slug | 25 |
| Status | deleted |
| Editor | Hanno Cornelius <hanno@status.im> |
---
slug: 25
title: 25/LIBP2P-DNS-DISCOVERY
name: Libp2p Peer Discovery via DNS
status: deleted
editor: Hanno Cornelius <hanno@status.im>
contributors:
---
`25/LIBP2P-DNS-DISCOVERY` specifies a scheme to implement [`libp2p`](https://libp2p.io/)
peer discovery via DNS for Waku v2.

View File

(image file: 17 KiB before and after)

View File

@@ -1,12 +1,12 @@
# 3/REMOTE-LOG
| Field | Value |
| --- | --- |
| Name | Remote log specification |
| Slug | 3 |
| Status | draft |
| Editor | Oskar Thorén <oskarth@titanproxy.com> |
| Contributors | Dean Eigenmann <dean@status.im> |
---
slug: 3
title: 3/REMOTE-LOG
name: Remote log specification
status: draft
editor: Oskar Thorén <oskarth@titanproxy.com>
contributors:
- Dean Eigenmann <dean@status.im>
---
A remote log is a replication of a local log.
This means a node can read data that originally came from a node that is offline.
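The replication idea above can be sketched in a few lines. This is an illustrative toy only, assuming a simple append-only log and an in-memory `Map` standing in for a remote host; it is not the 3/REMOTE-LOG wire format.

```javascript
// Minimal sketch: a "remote log" is a copy of a node's local
// append-only log hosted elsewhere, so readers do not need to
// reach the originating node.
class LocalLog {
  constructor() {
    this.entries = [];
  }
  append(data) {
    this.entries.push(data);
    return this.entries.length - 1; // index of the new entry
  }
}

// Replicate the local log to a remote store, keyed by node id.
function replicate(nodeId, localLog, remoteStore) {
  remoteStore.set(nodeId, [...localLog.entries]);
}

const store = new Map(); // stand-in for a remote host
const log = new LocalLog();
log.append("hello");
log.append("world");
replicate("node-A", log, store);

// Even if node-A is now offline, its data remains readable:
const replica = store.get("node-A");
console.log(replica.join(" ")); // "hello world"
```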

View File

@@ -1,12 +1,17 @@
# 32/RLN-V1
| Field | Value |
| --- | --- |
| Name | Rate Limit Nullifier |
| Slug | 32 |
| Status | draft |
| Editor | Aaryamann Challani <p1ge0nh8er@proton.me> |
| Contributors | Barry Whitehat <barrywhitehat@protonmail.com>, Sanaz Taheri <sanaz@status.im>, Oskar Thorén <oskarth@titanproxy.com>, Onur Kilic <onurkilic1004@gmail.com>, Blagoj Dimovski <blagoj.dimovski@yandex.com>, Rasul Ibragimov <curryrasul@gmail.com> |
---
slug: 32
title: 32/RLN-V1
name: Rate Limit Nullifier
status: draft
editor: Aaryamann Challani <p1ge0nh8er@proton.me>
contributors:
- Barry Whitehat <barrywhitehat@protonmail.com>
- Sanaz Taheri <sanaz@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Onur Kilic <onurkilic1004@gmail.com>
- Blagoj Dimovski <blagoj.dimovski@yandex.com>
- Rasul Ibragimov <curryrasul@gmail.com>
---
## Abstract

View File

@@ -1,12 +1,14 @@
# 4/MVDS-META
| Field | Value |
| --- | --- |
| Name | MVDS Metadata Field |
| Slug | 4 |
| Status | draft |
| Editor | Sanaz Taheri <sanaz@status.im> |
| Contributors | Dean Eigenmann <dean@status.im>, Andrea Maria Piana <andreap@status.im>, Oskar Thorén <oskarth@titanproxy.com> |
---
slug: 4
title: 4/MVDS-META
name: MVDS Metadata Field
status: draft
editor: Sanaz Taheri <sanaz@status.im>
contributors:
- Dean Eigenmann <dean@status.im>
- Andrea Maria Piana <andreap@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
---
In this specification, we describe a method to construct message history that
will aid the consistency guarantees of [2/MVDS](../2/mvds.md).

View File

@@ -1,253 +1,252 @@
# HASHGRAPHLIKE CONSENSUS
| Field | Value |
| --- | --- |
| Name | Hashgraphlike Consensus Protocol |
| Status | raw |
| Category | Standards Track |
| Editor | Ugur Sen [ugur@status.im](mailto:ugur@status.im) |
| Contributors | seemenkina [ekaterina@status.im](mailto:ekaterina@status.im) |
## Abstract
This document specifies a scalable, decentralized, and Byzantine Fault Tolerant (BFT)
consensus mechanism inspired by Hashgraph, designed for binary decision-making in P2P networks.
## Motivation
Consensus is one of the essential components of decentralization.
In particular, in a decentralized group messaging application, consensus is used for
binary decision-making to govern the group.
Therefore, each user contributes to the decision-making process.
Besides achieving decentralization, the consensus mechanism MUST be strong and scalable:
- Under the assumption of at least `2/3` honest users in the network,
each user MUST conclude the same decision.
- Message propagation in the network MUST occur within `O(log n)` rounds,
where `n` is the total number of peers,
in order to preserve the scalability of the messaging application.
## Format Specification
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document
are to be interpreted as described in [2119](https://www.ietf.org/rfc/rfc2119.txt).
## Flow
Any user in the group initializes the consensus by creating a proposal.
Next, the user broadcasts the proposal to the whole network.
Upon receiving the proposal, each user validates it and
adds its vote (yes or no) together with its signature and timestamp.
The user then sends the proposal and vote to a random peer in a P2P setup,
or to a subscribed gossipsub channel if gossip-based messaging is used.
Each receiving user therefore first validates the signatures and then adds its own vote.
Each message sent counts as a round.
After `log(n)` rounds, all users in the network have the other users' votes,
provided at least `2/3` of the users are honest, where honesty means following the protocol.
In general, the voting-based consensus consists of the following phases:
1. Initialization of voting
2. Exchanging votes across the rounds
3. Counting the votes
### Assumptions
- The users in the P2P network can discover other nodes, or they subscribe to the same channel in a gossipsub network.
- We MAY have non-reliable (silent) nodes.
- Proposal owners MUST know the number of voters.
## 1. Initialization of voting
A user initializes the voting with the proposal payload which is
implemented using [protocol buffers v3](https://protobuf.dev/) as follows:
```protobuf
syntax = "proto3";
package vac.voting;
message Proposal {
string name = 10; // Proposal name
string payload = 11; // Proposal description
uint32 proposal_id = 12; // Unique identifier of the proposal
bytes proposal_owner = 13; // Public key of the creator
repeated Vote votes = 14; // Vote list in the proposal
uint32 expected_voters_count = 15; // Maximum number of distinct voters
uint32 round = 16; // Number of Votes
uint64 timestamp = 17; // Creation time of proposal
uint64 expiration_time = 18; // The time interval that the proposal is active.
bool liveness_criteria_yes = 19; // Shows how silent peers' votes are counted
}
message Vote {
uint32 vote_id = 20; // Unique identifier of the vote
bytes vote_owner = 21; // Voter's public key
uint32 proposal_id = 22; // Linking votes and proposals
int64 timestamp = 23; // Time when the vote was cast
bool vote = 24; // Vote bool value (true/false)
bytes parent_hash = 25; // Hash of previous owner's Vote
bytes received_hash = 26; // Hash of previous received Vote
bytes vote_hash = 27; // Hash of all previously defined fields in Vote
bytes signature = 28; // Signature of vote_hash
}
```
To initiate a consensus for a proposal,
a user MUST complete all the fields in the proposal, including attaching its `vote`
and the `payload` that shows the purpose of the proposal.
Notably, `parent_hash` and `received_hash` are empty strings because there is no previous or received hash.
The initialization phase ends when the user who created the proposal sends it
to a random peer in the network or publishes it to the specific channel.
## 2. Exchanging votes across the peers
Once a peer receives the proposal message `P_1` from a 1-1 connection or a gossipsub channel, it performs the following checks:
1. Check the signature of each vote in the proposal; in particular for proposal `P_1`,
verify the signature of `V_1`, where `V_1 = P_1.votes[0]`, against `V_1.signature` and `V_1.vote_owner`.
2. Do `parent_hash` check: If there are repeated votes from the same sender,
check that the hash of the former vote is equal to the `parent_hash` of the later vote.
3. Do `received_hash` check: If there are multiple votes in a proposal, check that the hash of a vote is equal to the `received_hash` of the next one.
4. After successful verification of the signatures and hashes, the receiving peer proceeds to generate `P_2` containing a new vote `V_2` as follows:
4.1. Add its public key as `P_2.vote_owner`.
4.2. Set `timestamp`.
4.3. Set boolean `vote`.
4.4. Define `V_2.parent_hash = 0` if there is no previous peer's vote, otherwise hash of previous owner's vote.
4.5. Set `V_2.received_hash = hash(P_1.votes[0])`.
4.6. Set `proposal_id` for the `vote`.
4.7. Calculate `vote_hash` by hash of all previously defined fields in Vote:
`V_2.vote_hash = hash(vote_id, owner, proposal_id, timestamp, vote, parent_hash, received_hash)`
4.8. Sign `vote_hash` with the private key corresponding to the `vote_owner` public key, then set `V_2.signature`.
5. Create `P_2` by adding `V_2` as follows:
5.1. Assign `P_2.name`, `P_2.proposal_id`, and `P_2.proposal_owner` to be identical to those in `P_1`.
5.2. Add the `V_2` to the `P_2.Votes` list.
5.3. Increase the round by one, namely `P_2.round = P_1.round + 1`.
5.4. Verify that the proposal has not expired by checking that: `P_2.timestamp - current_time < P_1.expiration_time`.
If this does not hold, other peers ignore the message.
After the peer creates the proposal `P_2` with its vote `V_2`,
it sends `P_2` to a random peer in the network or
publishes it to the specific channel.
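The chaining and signing steps above can be sketched as follows. This is a minimal illustration, assuming SHA-256 for the vote hash and a keyed-hash stand-in for the signature scheme; a real implementation would use an asymmetric signature (e.g. Ed25519) verifiable against `vote_owner`:

```python
import hashlib
import hmac

def vote_hash(vote: dict) -> bytes:
    # Step 4.7: hash of all previously defined fields in Vote.
    fields = (vote["vote_id"], vote["vote_owner"], vote["proposal_id"],
              vote["timestamp"], vote["vote"], vote["parent_hash"],
              vote["received_hash"])
    return hashlib.sha256(repr(fields).encode()).digest()

def sign(private_key: bytes, digest: bytes) -> bytes:
    # Stand-in for a real signature scheme; an assumption for illustration only.
    return hmac.new(private_key, digest, hashlib.sha256).digest()

def make_next_vote(prev_received: dict, own_prev_vote, proposal_id: int,
                   owner_pk: bytes, owner_sk: bytes, value: bool,
                   timestamp: int, vote_id: int) -> dict:
    # Steps 4.1-4.8: build V_2 chained to the received vote V_1.
    v = {
        "vote_id": vote_id,
        "vote_owner": owner_pk,
        "proposal_id": proposal_id,
        "timestamp": timestamp,
        "vote": value,
        # 4.4: zero parent hash if this owner has not voted before.
        "parent_hash": vote_hash(own_prev_vote) if own_prev_vote else b"\x00",
        # 4.5: chain to the vote just received.
        "received_hash": vote_hash(prev_received),
    }
    v["vote_hash"] = vote_hash(v)
    v["signature"] = sign(owner_sk, v["vote_hash"])
    return v
```

A receiving peer re-runs `vote_hash` over the chained fields to perform the `parent_hash` and `received_hash` checks of steps 2 and 3.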
## 3. Determining the result
Because consensus depends on meeting a quorum threshold,
each peer MUST verify the accumulated votes to determine whether the necessary conditions have been satisfied.
The voting result is set to YES if a majority among at least `2n/3` distinct peers vote YES.
To verify this, the `findDistinctVoter` method processes the proposal by traversing its `Votes` list to determine the number of unique voters.
If this method returns true, the peer proceeds with strong validation,
which ensures that if any honest peer reaches a decision,
no other honest peer can arrive at a conflicting result.
1. Check each `signature` in the vote as shown in the [Section 2](#2-exchanging-votes-across-the-peers).
2. Check the `parent_hash` chain: if there are multiple votes from the same owner, namely `vote_i` and `vote_i+1` respectively,
the parent hash of `vote_i+1` MUST be the hash of `vote_i`.
3. Check the `received_hash` chain: each received hash of `vote_i+1` MUST equal the hash of `vote_i`.
4. Check the `timestamp` against replay attacks.
In particular, the `timestamp` MUST NOT be older than a determined threshold.
5. Check that the liveness criteria defined in the Liveness section are satisfied.
If a proposal is verified by all the checks,
the `countVote` method counts each YES vote from the list of Votes.
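A minimal sketch of this counting logic, assuming the votes in the list have already passed the signature and hash-chain checks (`find_distinct_voters` and `voting_result` mirror the `findDistinctVoter` and `countVote` methods named above):

```python
def find_distinct_voters(proposal: dict) -> set:
    # Traverse the Votes list and collect the unique voter public keys.
    return {v["vote_owner"] for v in proposal["votes"]}

def voting_result(proposal: dict) -> bool:
    n = proposal["expected_voters_count"]
    voters = find_distinct_voters(proposal)
    # Quorum: at least 2n/3 distinct voters must have participated.
    if 3 * len(voters) < 2 * n:
        raise ValueError("quorum not reached")
    # Keep only the latest vote of each distinct owner, then count YES votes.
    latest = {}
    for v in proposal["votes"]:
        latest[v["vote_owner"]] = v["vote"]
    yes = sum(1 for val in latest.values() if val)
    # Result is YES if a majority of the distinct voters voted YES.
    return yes * 2 > len(latest)
```

Keeping only the latest vote per owner reflects the `parent_hash` chain: an owner's later vote supersedes its earlier ones.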
## 4. Properties
The consensus mechanism satisfies liveness and security properties as follows:
### Liveness
Liveness refers to the ability of the protocol to eventually reach a decision when sufficient honest participation is present.
In this protocol, if `n > 2` and more than `n/2` of the votes among at least `2n/3` distinct peers are YES,
then the consensus result is defined as YES; otherwise, when `n ≤ 2`, unanimous agreement (100% YES votes) is required.
The peer calculates the result locally as shown in the [Section 3](#3-determining-the-result).
From the [hashgraph property](https://hedera.com/learning/hedera-hashgraph/what-is-hashgraph-consensus),
if a node could calculate the result of a proposal,
it implies that no peer can calculate the opposite of the result.
Still, reliability issues can cause some situations where peers cannot receive enough messages,
so they cannot calculate the consensus result.
Rounds are incremented when a peer adds and sends the new proposal.
Collecting votes from `2n/3` distinct peers takes a number of rounds that depends on the transport:
1. `2n/3` rounds in pure P2P networks
2. `2` rounds in gossipsub
Since the message complexity is `O(1)` in the gossipsub channel,
in case the network has reliability issues,
the second round is used by peers that could not receive all the messages in the first round.
If an honest and online peer has received at least one vote but not enough to reach consensus,
it MAY continue to propagate its own vote — and any votes it has received — to support message dissemination.
This process can continue beyond the expected round count,
as long as it remains within the expiration time defined in the proposal.
The expiration time acts as a soft upper bound to ensure that consensus is either reached or aborted within a bounded timeframe.
#### Equality of votes
An equality of votes occurs when, after verifying at least `2n/3` distinct voters and
applying `liveness_criteria_yes`, the number of YES and NO votes is equal.
Handling ties is an application-level decision. The application MUST define a deterministic tie policy:
- RETRY: re-run the vote with a new `proposal_id`, optionally adjusting parameters.
- REJECT: abort the proposal and return the voting result as NO.
The chosen policy SHOULD be communicated consistently to all peers via the proposal's `payload` to ensure convergence on the same outcome.
### Silent Node Management
Silent nodes are nodes that do not participate in the voting with a YES or NO vote.
There are two possible ways of counting the silent peers' votes:
1. **Silent peers count as YES:**
Silent peers' votes are counted as YES if the application prefers that rejection require strong explicit NO support.
2. **Silent peers count as NO:**
Silent peers' votes are counted as NO if the application prefers that acceptance require strong explicit YES support.
The proposal defaults to the first option: `liveness_criteria_yes` is set to true by default, so silent peers' votes are counted as YES.
### Security
This RFC uses cryptographic primitives to prevent
malicious behaviours such as:
- Vote forgery attempt: creating unsigned invalid votes
- Inconsistent voting: a malicious peer submits conflicting votes (e.g., YES to some peers and NO to others)
in different stages of the protocol, violating vote consistency and attempting to undermine consensus.
- Integrity breaking attempt: tampering history by changing previous votes.
- Replay attack: storing the old votes to maliciously use in fresh voting.
## 5. Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/)
## 6. References
- [Hedera Hashgraph](https://hedera.com/learning/hedera-hashgraph/what-is-hashgraph-consensus)
- [Gossip about gossip](https://docs.hedera.com/hedera/core-concepts/hashgraph-consensus-algorithms/gossip-about-gossip)
- [Simple implementation of hashgraph consensus](https://github.com/conanwu777/hashgraph)
---
title: HASHGRAPHLIKE CONSENSUS
name: Hashgraphlike Consensus Protocol
status: raw
category: Standards Track
tags:
editor: Ugur Sen [ugur@status.im](mailto:ugur@status.im)
contributors: seemenkina [ekaterina@status.im](mailto:ekaterina@status.im)
---
## Abstract
This document specifies a scalable, decentralized, and Byzantine Fault Tolerant (BFT)
consensus mechanism inspired by Hashgraph, designed for binary decision-making in P2P networks.
## Motivation
Consensus is one of the essential components of decentralization.
In particular, in a decentralized group messaging application, consensus is used for
binary decision-making to govern the group.
Therefore, each user contributes to the decision-making process.
Besides achieving decentralization, the consensus mechanism MUST be strong and scalable:
- Under the assumption of at least `2/3` honest users in the network,
each user MUST conclude the same decision.
- Message propagation in the network MUST occur within `O(log n)` rounds,
where `n` is the total number of peers,
in order to preserve the scalability of the messaging application.
## Format Specification
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document
are to be interpreted as described in [2119](https://www.ietf.org/rfc/rfc2119.txt).
## Flow
Any user in the group initializes the consensus by creating a proposal.
Next, the user broadcasts the proposal to the whole network.
Upon receiving the proposal, each user validates it and
adds its vote (yes or no) together with its signature and timestamp.
The user then sends the proposal and vote to a random peer in a P2P setup,
or to a subscribed gossipsub channel if gossip-based messaging is used.
Each receiving user therefore first validates the signatures and then adds its own vote.
Each message sent counts as a round.
After `log(n)` rounds, all users in the network have the other users' votes,
provided at least `2/3` of the users are honest, where honesty means following the protocol.
In general, the voting-based consensus consists of the following phases:
1. Initialization of voting
2. Exchanging votes across the rounds
3. Counting the votes
### Assumptions
- The users in the P2P network can discover other nodes, or they subscribe to the same channel in a gossipsub network.
- We MAY have non-reliable (silent) nodes.
- Proposal owners MUST know the number of voters.
## 1. Initialization of voting
A user initializes the voting with the proposal payload which is
implemented using [protocol buffers v3](https://protobuf.dev/) as follows:
```protobuf
syntax = "proto3";
package vac.voting;
message Proposal {
string name = 10; // Proposal name
string payload = 11; // Proposal description
uint32 proposal_id = 12; // Unique identifier of the proposal
bytes proposal_owner = 13; // Public key of the creator
repeated Vote votes = 14; // Vote list in the proposal
uint32 expected_voters_count = 15; // Maximum number of distinct voters
uint32 round = 16; // Number of Votes
uint64 timestamp = 17; // Creation time of proposal
uint64 expiration_time = 18; // The time interval that the proposal is active.
bool liveness_criteria_yes = 19; // Shows how silent peers' votes are counted
}
message Vote {
uint32 vote_id = 20; // Unique identifier of the vote
bytes vote_owner = 21; // Voter's public key
uint32 proposal_id = 22; // Linking votes and proposals
int64 timestamp = 23; // Time when the vote was cast
bool vote = 24; // Vote bool value (true/false)
bytes parent_hash = 25; // Hash of previous owner's Vote
bytes received_hash = 26; // Hash of previous received Vote
bytes vote_hash = 27; // Hash of all previously defined fields in Vote
bytes signature = 28; // Signature of vote_hash
}
```
To initiate a consensus for a proposal,
a user MUST complete all the fields in the proposal, including attaching its `vote`
and the `payload` that shows the purpose of the proposal.
Notably, `parent_hash` and `received_hash` are empty strings because there is no previous or received hash.
The initialization phase ends when the user who created the proposal sends it
to a random peer in the network or publishes it to the specific channel.
## 2. Exchanging votes across the peers
Once a peer receives the proposal message `P_1` from a 1-1 connection or a gossipsub channel, it performs the following checks:
1. Check the signature of each vote in the proposal; in particular for proposal `P_1`,
verify the signature of `V_1`, where `V_1 = P_1.votes[0]`, against `V_1.signature` and `V_1.vote_owner`.
2. Do `parent_hash` check: If there are repeated votes from the same sender,
check that the hash of the former vote is equal to the `parent_hash` of the later vote.
3. Do `received_hash` check: If there are multiple votes in a proposal, check that the hash of a vote is equal to the `received_hash` of the next one.
4. After successful verification of the signatures and hashes, the receiving peer proceeds to generate `P_2` containing a new vote `V_2` as follows:
4.1. Add its public key as `P_2.vote_owner`.
4.2. Set `timestamp`.
4.3. Set boolean `vote`.
4.4. Define `V_2.parent_hash = 0` if there is no previous peer's vote, otherwise hash of previous owner's vote.
4.5. Set `V_2.received_hash = hash(P_1.votes[0])`.
4.6. Set `proposal_id` for the `vote`.
4.7. Calculate `vote_hash` by hash of all previously defined fields in Vote:
`V_2.vote_hash = hash(vote_id, owner, proposal_id, timestamp, vote, parent_hash, received_hash)`
4.8. Sign `vote_hash` with the private key corresponding to the `vote_owner` public key, then set `V_2.signature`.
5. Create `P_2` by adding `V_2` as follows:
5.1. Assign `P_2.name`, `P_2.proposal_id`, and `P_2.proposal_owner` to be identical to those in `P_1`.
5.2. Add the `V_2` to the `P_2.Votes` list.
5.3. Increase the round by one, namely `P_2.round = P_1.round + 1`.
5.4. Verify that the proposal has not expired by checking that: `P_2.timestamp - current_time < P_1.expiration_time`.
If this does not hold, other peers ignore the message.
After the peer creates the proposal `P_2` with its vote `V_2`,
it sends `P_2` to a random peer in the network or
publishes it to the specific channel.
## 3. Determining the result
Because consensus depends on meeting a quorum threshold,
each peer MUST verify the accumulated votes to determine whether the necessary conditions have been satisfied.
The voting result is set to YES if a majority among at least `2n/3` distinct peers vote YES.
To verify this, the `findDistinctVoter` method processes the proposal by traversing its `Votes` list to determine the number of unique voters.
If this method returns true, the peer proceeds with strong validation,
which ensures that if any honest peer reaches a decision,
no other honest peer can arrive at a conflicting result.
1. Check each `signature` in the vote as shown in the [Section 2](#2-exchanging-votes-across-the-peers).
2. Check the `parent_hash` chain: if there are multiple votes from the same owner, namely `vote_i` and `vote_i+1` respectively,
the parent hash of `vote_i+1` MUST be the hash of `vote_i`.
3. Check the `received_hash` chain: each received hash of `vote_i+1` MUST equal the hash of `vote_i`.
4. Check the `timestamp` against replay attacks.
In particular, the `timestamp` MUST NOT be older than a determined threshold.
5. Check that the liveness criteria defined in the Liveness section are satisfied.
If a proposal is verified by all the checks,
the `countVote` method counts each YES vote from the list of Votes.
## 4. Properties
The consensus mechanism satisfies liveness and security properties as follows:
### Liveness
Liveness refers to the ability of the protocol to eventually reach a decision when sufficient honest participation is present.
In this protocol, if `n > 2` and more than `n/2` of the votes among at least `2n/3` distinct peers are YES,
then the consensus result is defined as YES; otherwise, when `n ≤ 2`, unanimous agreement (100% YES votes) is required.
The peer calculates the result locally as shown in the [Section 3](#3-determining-the-result).
From the [hashgraph property](https://hedera.com/learning/hedera-hashgraph/what-is-hashgraph-consensus),
if a node could calculate the result of a proposal,
it implies that no peer can calculate the opposite of the result.
Still, reliability issues can cause some situations where peers cannot receive enough messages,
so they cannot calculate the consensus result.
Rounds are incremented when a peer adds and sends the new proposal.
Collecting votes from `2n/3` distinct peers takes a number of rounds that depends on the transport:
1. `2n/3` rounds in pure P2P networks
2. `2` rounds in gossipsub
Since the message complexity is `O(1)` in the gossipsub channel,
in case the network has reliability issues,
the second round is used by peers that could not receive all the messages in the first round.
If an honest and online peer has received at least one vote but not enough to reach consensus,
it MAY continue to propagate its own vote — and any votes it has received — to support message dissemination.
This process can continue beyond the expected round count,
as long as it remains within the expiration time defined in the proposal.
The expiration time acts as a soft upper bound to ensure that consensus is either reached or aborted within a bounded timeframe.
#### Equality of votes
An equality of votes occurs when, after verifying at least `2n/3` distinct voters and
applying `liveness_criteria_yes`, the number of YES and NO votes is equal.
Handling ties is an application-level decision. The application MUST define a deterministic tie policy:
- RETRY: re-run the vote with a new `proposal_id`, optionally adjusting parameters.
- REJECT: abort the proposal and return the voting result as NO.
The chosen policy SHOULD be communicated consistently to all peers via the proposal's `payload` to ensure convergence on the same outcome.
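A deterministic tie policy carried in the proposal's `payload` can then be applied locally by every peer; the sketch below uses a hypothetical `tie_policy` field (an illustrative assumption) to show how all peers converge on the same outcome:

```python
def resolve_tie(proposal: dict, yes: int, no: int):
    # tie_policy is an assumed application-level field carried in the payload.
    policy = proposal.get("tie_policy", "REJECT")
    if yes != no:
        # No tie: ordinary majority decides.
        return yes > no
    if policy == "REJECT":
        # Abort the proposal: the voting result is NO.
        return False
    # RETRY: signal the application to re-run with a new proposal_id.
    return None
```

Because every peer reads the same policy from the same proposal, ties resolve identically everywhere.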
### Silent Node Management
Silent nodes are nodes that do not participate in the voting with a YES or NO vote.
There are two possible ways of counting the silent peers' votes:
1. **Silent peers count as YES:**
Silent peers' votes are counted as YES if the application prefers that rejection require strong explicit NO support.
2. **Silent peers count as NO:**
Silent peers' votes are counted as NO if the application prefers that acceptance require strong explicit YES support.
The proposal defaults to the first option: `liveness_criteria_yes` is set to true by default, so silent peers' votes are counted as YES.
### Security
This RFC uses cryptographic primitives to prevent
malicious behaviours such as:
- Vote forgery attempt: creating unsigned invalid votes
- Inconsistent voting: a malicious peer submits conflicting votes (e.g., YES to some peers and NO to others)
in different stages of the protocol, violating vote consistency and attempting to undermine consensus.
- Integrity breaking attempt: tampering history by changing previous votes.
- Replay attack: storing the old votes to maliciously use in fresh voting.
## 5. Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/)
## 6. References
- [Hedera Hashgraph](https://hedera.com/learning/hedera-hashgraph/what-is-hashgraph-consensus)
- [Gossip about gossip](https://docs.hedera.com/hedera/core-concepts/hashgraph-consensus-algorithms/gossip-about-gossip)
- [Simple implementation of hashgraph consensus](https://github.com/conanwu777/hashgraph)

View File

@@ -1,11 +1,11 @@
# ETH-DCGKA
| Field | Value |
| --- | --- |
| Name | Decentralized Key and Session Setup for Secure Messaging over Ethereum |
| Status | raw |
| Category | informational |
| Editor | Ramses Fernandez-Valencia <ramses@status.im> |
---
title: ETH-DCGKA
name: Decentralized Key and Session Setup for Secure Messaging over Ethereum
status: raw
category: informational
editor: Ramses Fernandez-Valencia <ramses@status.im>
contributors:
---
## Abstract

View File

@@ -1,11 +1,12 @@
# ETH-SECPM
| Field | Value |
| --- | --- |
| Name | Secure channel setup using Ethereum accounts |
| Status | deleted |
| Category | Standards Track |
| Editor | Ramses Fernandez <ramses@status.im> |
---
title: ETH-SECPM
name: Secure channel setup using Ethereum accounts
status: deleted
category: Standards Track
tags:
editor: Ramses Fernandez <ramses@status.im>
contributors:
---
## NOTE

241
vac/raw/eth-mls-offchain.md Normal file
View File

@@ -0,0 +1,241 @@
---
title: ETH-MLS-OFFCHAIN
name: Secure channel setup using decentralized MLS and Ethereum accounts
status: raw
category: Standards Track
tags:
editor: Ugur Sen [ugur@status.im](mailto:ugur@status.im)
contributors: seemenkina [ekaterina@status.im](mailto:ekaterina@status.im)
---
## Abstract
The following document specifies an Ethereum-authenticated, scalable,
and decentralized secure group messaging application built by
integrating a Messaging Layer Security (MLS) backend.
Decentralization means that each user is a node in a P2P network and
each user has a voice in any change to the group.
This is achieved by integrating a consensus mechanism.
Lastly, this RFC can also be referred to as de-MLS,
decentralized MLS, to emphasize its deviation
from the centralized trust assumptions of traditional MLS deployments.
## Motivation
Group messaging is a fundamental part of digital communication,
yet most existing systems depend on centralized servers,
which introduce risks around privacy, censorship, and unilateral control.
In restrictive settings, servers can be blocked or surveilled;
in more open environments, users still face opaque moderation policies,
data collection, and exclusion from decision-making processes.
To address this, we propose a decentralized, scalable peer-to-peer
group messaging system where each participant runs a node, contributes
to message propagation, and takes part in governance autonomously.
Group membership changes are decided collectively through a lightweight
partially synchronous, fault-tolerant consensus protocol without a centralized identity.
This design enables truly democratic group communication and is well-suited
for use cases like activist collectives, research collaborations, DAOs, support groups,
and decentralized social platforms.
## Format Specification
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document
are to be interpreted as described in [2119](https://www.ietf.org/rfc/rfc2119.txt).
### Assumptions
- The nodes in the P2P network can discover other nodes, or will connect to other nodes when subscribing to the same topic in a gossipsub network.
- We MAY have non-reliable (silent) nodes.
- We MUST have a consensus mechanism that is lightweight, scalable, and finalized within a specific time.
## Roles
The three roles used in de-MLS are as follows:
- `node`: Nodes are members of the network without being part of any secure group messaging session.
- `member`: Members are special nodes in the secure group messaging session who
hold the current group key.
- `steward`: Stewards are special, transparent members of the secure group
messaging session who apply the changes decided by voted proposals.
## MLS Background
de-MLS builds on an MLS backend, so the MLS services and other MLS components
are taken from the original [MLS specification](https://datatracker.ietf.org/doc/rfc9420/), with or without modifications.
### MLS Services
MLS operates with two services: an authentication service (AS) and a delivery service (DS).
The authentication service enables group members to authenticate the credentials presented by other group members.
The delivery service routes MLS messages among the nodes or
members of the protocol in the correct order and
manages the users' `keyPackage`s, where a `keyPackage` is an object
that provides public information about a user.
### MLS Objects
The following section presents the MLS objects and components used in this RFC:
`Epoch`: Fixed time intervals that delimit changes to the group state, defined by members;
see section 3.4 in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
`MLS proposal message`: Members MUST receive the proposal message prior to the
corresponding commit message that initiates a new epoch with key changes,
in order to ensure the intended security properties; see section 12.1 in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
Here, the add and remove proposals are used.
`Application message`: This message type is used for arbitrary encrypted communication between group members.
[MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) restricts this type: while a proposal is pending,
application messages SHOULD be withheld.
Note that, since MLS is server-based, the delay between proposal and commit messages is very small.
`Commit message`: After members receive the proposals regarding group changes,
the committer, who may be any member of the group as specified in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/),
generates the necessary key material for the next epoch, including the appropriate welcome messages
for new joiners and new entropy for removed members. In this RFC, only stewards MUST act as committers.
### de-MLS Objects
`Voting proposal`: Similar to MLS proposals, but processed only if approved through a voting process.
Voting proposals function as application messages in the MLS group,
allowing the steward to collect them without halting the protocol.
## Flow
The general flow is as follows:
- A steward initializes a group just once, and then sends out Group Announcements (GA) periodically.
- Meanwhile, each `node` creates and sends its `credential`, which includes a `keyPackage`.
- Each `member` creates `voting proposals` and sends them to the MLS group during epoch E.
- Meanwhile, the `steward` collects finalized `voting proposals` from the MLS group, converts them into
`MLS proposals`, and sends them with the corresponding `commit messages`.
- Eventually, with the commit messages, all members start the next epoch E+1.
## Creating Voting Proposal
A `member` MAY initialize the voting with the proposal payload,
which is implemented using [protocol buffers v3](https://protobuf.dev/) as follows:
```protobuf
syntax = "proto3";
message Proposal {
string name = 10; // Proposal name
string payload = 11; // Describes what the vote is for
int32 proposal_id = 12; // Unique identifier of the proposal
bytes proposal_owner = 13; // Public key of the creator
repeated Vote votes = 14; // Vote list in the proposal
int32 expected_voters_count = 15; // Maximum number of distinct voters
int32 round = 16; // Number of Votes
int64 timestamp = 17; // Creation time of proposal
int64 expiration_time = 18; // Time interval that the proposal is active
bool liveness_criteria_yes = 19; // Shows how silent peers' votes are counted
}
```
```protobuf
message Vote {
int32 vote_id = 20; // Unique identifier of the vote
bytes vote_owner = 21; // Voter's public key
int64 timestamp = 22; // Time when the vote was cast
bool vote = 23; // Vote bool value (true/false)
bytes parent_hash = 24; // Hash of previous owner's Vote
bytes received_hash = 25; // Hash of previous received Vote
bytes vote_hash = 26; // Hash of all previously defined fields in Vote
bytes signature = 27; // Signature of vote_hash
}
```
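The `parent_hash`, `received_hash`, and `vote_hash` fields link votes into a hash chain. A minimal sketch of how `vote_hash` might be computed follows; the field encoding, hash function choice, and omission of the signing step are assumptions, not mandated by this specification.

```python
import hashlib

def vote_hash(vote_id, vote_owner, timestamp, vote, parent_hash, received_hash):
    # Hash all previously defined Vote fields in a fixed order.
    h = hashlib.sha256()
    h.update(vote_id.to_bytes(4, "big"))
    h.update(vote_owner)
    h.update(timestamp.to_bytes(8, "big"))
    h.update(b"\x01" if vote else b"\x00")
    h.update(parent_hash)
    h.update(received_hash)
    return h.digest()

GENESIS = b"\x00" * 32  # placeholder for a vote with no predecessor

h1 = vote_hash(1, b"alice-pk", 1700000000, True, GENESIS, GENESIS)
h2 = vote_hash(2, b"alice-pk", 1700000005, True, h1, GENESIS)  # chains to h1
```

The `signature` field would then be computed over `vote_hash`, binding the voter's key to the whole chain prefix.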
The voting proposal MAY include adding a `node` or removing a `member`.
After the `member` creates the voting proposal,
it is emitted to the network via an MLS `Application message` and decided with a lightweight,
epoch-based voting scheme such as [hashgraph-like consensus](https://github.com/vacp2p/rfc-index/blob/consensus-hashgraph-like/vac/raw/consensus-hashgraphlike.md).
The consensus result MUST be finalized within the epoch as YES or NO.
If the result is YES, the voting proposal is converted into
an MLS proposal by the `steward`, followed by a commit message that starts the new epoch.
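One way to express the epoch-local YES/NO finalization is a strict majority over distinct voters. This sketch is an assumption about the tallying rule; the normative behaviour is defined by the referenced consensus protocol.

```python
def finalize(votes, expected_voters_count):
    # votes: mapping vote_owner -> bool, one vote per distinct owner.
    yes = sum(1 for v in votes.values() if v)
    # Finalize YES only if a strict majority of expected voters voted YES.
    return yes * 2 > expected_voters_count

# Two of three expected voters voted YES -> finalized as YES.
result = finalize({b"a": True, b"b": True, b"c": False}, expected_voters_count=3)
```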
## Creating welcome message
When an MLS `proposal message` is created by the `steward`,
a `commit message` SHOULD follow,
as in section 12.4 of [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
In order for a new `member` joining the group to synchronize with the current members
who received the `commit message`,
the `steward` sends a welcome message to the joining node,
as in section 12.4.3.1 of [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
## Single steward
The naive way to build decentralized secure group messaging is to have a single transparent `steward`
who only applies the changes resulting from the voting.
This largely follows the general flow and the voting proposal and welcome message creation sections above.
1. A single `steward` initializes the group once, with the group parameters
given in section 8.1 (Group Context) of [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
2. The `steward` creates a group announcement (GA) according to the previous step and
broadcasts it periodically to the entire network. The GA message is visible to all `nodes` in the network.
3. Each `node` that wants to become a member obtains this announcement and creates a `credential`
including a `keyPackage`, as specified in section 10 of [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
4. The `steward` aggregates all `KeyPackages` and utilizes them to provision group additions for new members,
based on the outcome of the voting process.
5. Any `member` may create `voting proposals` for adding or removing users
and present them for voting in the MLS group as application messages.
However, unlimited use of `voting proposals` within the group may be misused by
malicious or overly active members.
Therefore, an application-level constraint can be introduced to limit the number
or frequency of proposals initiated by each member to prevent spam or abuse.
6. Meanwhile, the `steward` collects, within epoch `E`, the finalized `voting proposals`
that have received affirmative votes from members via application messages.
The `steward` discards proposals that did not receive a majority of "YES" votes.
Since voting proposals are transmitted as application messages, omitting them does not affect
the protocol's correctness or consistency.
7. The `steward` converts all approved `voting proposals` into the
corresponding `MLS proposals` and a `commit message`, and
transmits both in a single operation as in section 12.4 of [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/),
including welcome messages for the new members. The `commit message` thus ends the previous epoch and starts the new one.
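The application-level constraint mentioned in step 5 can be sketched as a per-member proposal budget that resets each epoch. The limit value and the names below are hypothetical, not part of this specification.

```python
from collections import defaultdict

MAX_PROPOSALS_PER_EPOCH = 2  # hypothetical application-level limit

class ProposalLimiter:
    def __init__(self, limit=MAX_PROPOSALS_PER_EPOCH):
        self.limit = limit
        self.counts = defaultdict(int)

    def allow(self, member_pk):
        # Accept a new voting proposal from member_pk only while under budget.
        if self.counts[member_pk] >= self.limit:
            return False
        self.counts[member_pk] += 1
        return True

    def new_epoch(self):
        # Budgets reset when a commit message starts the next epoch.
        self.counts.clear()

limiter = ProposalLimiter()
```

Such a limiter would sit at the application layer, in front of the MLS group, so rejected proposals never reach the voting stage.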
## Multi stewards
Decentralization has already been achieved in the previous section.
However, to improve availability and ensure censorship resistance,
the single-steward protocol is extended to a multi-steward architecture.
In this design, each epoch is coordinated by a designated steward,
operating under the same protocol as the single-steward model.
Thus, the multi-steward approach primarily defines how steward roles
rotate across epochs while preserving the underlying structure and logic of the original protocol.
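The rotation of the steward role across epochs can be as simple as a deterministic round-robin over an ordered member set, so that every member computes the same steward locally. This is an illustrative assumption; the actual selection rule is left to the application.

```python
def steward_for_epoch(members, epoch):
    # members: stable, ordered list of member public keys shared by the group.
    # Deterministic, so every member derives the same steward for each epoch.
    return members[epoch % len(members)]

members = [b"alice-pk", b"bob-pk", b"carol-pk"]
```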
Two variants of the multi-steward design are introduced to address different system requirements.
### Multi steward with single consensus
In this model, all group modifications, such as adding or removing members,
must be approved through consensus by all participants,
including the steward assigned for epoch `E`.
A configuration with multiple stewards operating under a shared consensus protocol offers
increased decentralization and stronger protection against censorship.
However, this benefit comes with reduced operational efficiency.
The model is therefore best suited for small groups that value
decentralization and censorship resistance more than performance.
### Multi steward with two consensuses
The two-consensus model offers improved efficiency with a trade-off in decentralization.
In this design, group changes require consensus only among the stewards, rather than all members.
Regular members participate by periodically selecting the stewards but do not take part in each decision.
This structure enables faster coordination since consensus is achieved within a smaller group of stewards.
It is particularly suitable for large user groups, where involving every member in each decision would be impractical.
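In this two-consensus variant, a group change needs agreement only among the stewards. A minimal quorum check might look like the following; the greater-than-two-thirds threshold is an assumed choice, not mandated by this specification.

```python
def steward_quorum_reached(approvals, stewards):
    # approvals: set of steward public keys that approved the change.
    # Require agreement from strictly more than two thirds of stewards
    # (assumed threshold).
    return 3 * len(approvals & set(stewards)) > 2 * len(stewards)

stewards = [b"s1", b"s2", b"s3"]
```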
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
### References
- [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/)
- [Hashgraphlike Consensus](https://github.com/vacp2p/rfc-index/blob/consensus-hashgraph-like/vac/raw/consensus-hashgraphlike.md)
- [vacp2p/de-mls](https://github.com/vacp2p/de-mls)
@@ -1,12 +1,12 @@
# ETH-MLS-ONCHAIN
| Field | Value |
| --- | --- |
| Name | Secure channel setup using decentralized MLS and Ethereum accounts |
| Status | raw |
| Category | Standards Track |
| Editor | Ramses Fernandez <ramses@status.im> |
| Contributors | Aaryamann Challani <aaryamann@status.im>, Ekaterina Broslavskaya <ekaterina@status.im>, Ugur Sen <ugur@status.im>, Ksr <ksr@status.im> |
---
title: ETH-MLS-ONCHAIN
name: Secure channel setup using decentralized MLS and Ethereum accounts
status: raw
category: Standards Track
tags:
editor: Ramses Fernandez <ramses@status.im>
contributors: Aaryamann Challani <aaryamann@status.im>, Ekaterina Broslavskaya <ekaterina@status.im>, Ugur Sen <ugur@status.im>, Ksr <ksr@status.im>
---
## Motivation
@@ -1,11 +1,12 @@
# GOSSIPSUB-TOR-PUSH
| Field | Value |
| --- | --- |
| Name | Gossipsub Tor Push |
| Status | raw |
| Category | Standards Track |
| Editor | Daniel Kaiser <danielkaiser@status.im> |
---
title: GOSSIPSUB-TOR-PUSH
name: Gossipsub Tor Push
status: raw
category: Standards Track
tags:
editor: Daniel Kaiser <danielkaiser@status.im>
contributors:
---
## Abstract