Mirror of https://github.com/vacp2p/rfc-index.git (synced 2026-01-09 23:58:02 -05:00)

Compare commits: codex-slot...status-go/ (12 commits)

| SHA1 |
|---|
| d9fa3a9a8c |
| 6b245e2d8e |
| 45bf781d43 |
| b1da70386e |
| f051117d37 |
| 8c428fb388 |
| e23da1d785 |
| b5fc4cbfcd |
| 376ad331d4 |
| 3505da6bd6 |
| 3b968ccce3 |
| 536d31b5b7 |
@@ -257,8 +257,6 @@ message CommunityCancelRequestToJoin {

  bytes community_id = 4;
  // The display name of the requester
  string display_name = 5;
}

message CommunityRequestToJoinResponse {

@@ -28,7 +28,7 @@ This specification describes how **Control Nodes**

(which are specific nodes in Status communities)
archive historical message data of their communities,
beyond the time range limit provided by Store Nodes using
the [Codex](https://codex.storage) protocol.
It also describes how the archives are distributed to community members via
the Status network,
so they can fetch them and gain access to a complete message history.

@@ -50,9 +50,8 @@ while others operate in the Status communities layer):

| Community member | A Status user that is part of a Status community, not owning the private key of the community |
| Community member node | A Status node with message archive capabilities enabled, run by a community member |
| Live messages | Waku messages received through the Waku network |
| Codex node | A program implementing the [Codex](https://codex.storage) protocol |
| CID | A content identifier, uniquely identifies a file that can be downloaded by Codex nodes |

## Requirements / Assumptions

@@ -101,18 +100,14 @@ this channel is not visible in the user interface.

4. Community owner invites community members.
5. Control node receives messages published in channels and
stores them into a local database.
6. Every 7 days, the control node exports and
compresses the last 7 days' worth of messages from the database and
creates a message archive file.
7. It uploads the message archive file to a Codex node, producing a CID.
8. It updates the [message archive index](#wakumessagearchiveindex) by adding the new CID
and its metadata, and uploads it to a Codex node as well, producing a CID.
9. Control node sends the CID created in the previous step to community members via
the special channel created in step 3 through the Waku network.
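
The sketch below illustrates steps 6 through 9 as one periodic routine. It is a minimal illustration in Go; the `Database`, `CodexClient`, `Publisher`, and `Index` interfaces and the `compress` helper are hypothetical stand-ins for this sketch, not status-go or Codex APIs.

```go
package archive

import "time"

// Hypothetical collaborators; real status-go interfaces differ.
type Database interface {
	ExportArchive(from, to time.Time) ([]byte, error) // protobuf-encoded WakuMessageArchive
}

type CodexClient interface {
	Upload(data []byte) (string, error) // stores a file, returns its CID
}

type Publisher interface {
	PublishArchiveIndexCID(cid string) error // posts to the special channel
}

type Index interface {
	Add(cid string, from, to time.Time) // extend the WakuMessageArchiveIndex
	Encode() []byte                     // protobuf-encode the index
}

// runArchiveCycle sketches steps 6-9 for one 7-day period.
func runArchiveCycle(db Database, codex CodexClient, waku Publisher, index Index,
	compress func([]byte) []byte) error {
	to := time.Now()
	from := to.Add(-7 * 24 * time.Hour)

	raw, err := db.ExportArchive(from, to) // step 6: export the last 7 days
	if err != nil {
		return err
	}
	archiveCID, err := codex.Upload(compress(raw)) // step 7: archive file -> CID
	if err != nil {
		return err
	}
	index.Add(archiveCID, from, to)               // step 8: add CID and metadata...
	indexCID, err := codex.Upload(index.Encode()) // ...and upload the index -> CID
	if err != nil {
		return err
	}
	return waku.PublishArchiveIndexCID(indexCID) // step 9: announce the index CID
}
```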

### Serving archives for missed messages

@@ -125,8 +120,8 @@ it MUST go through the following process:

for the missed time range for all channels in their community
3. All missed messages are stored into the control node's local message database
4. If 7 or more days have elapsed since the last message history archive was created,
the control node will perform steps 6 through 9
of [Serving community history archives](#serving-community-history-archives)
for every 7 days' worth of messages in the missed time range
(e.g. if the node was offline for 30 days, it will create 4 message history archives)
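
A sketch of how a missed range maps to archives: the example in step 4 (30 days offline, 4 archives) implies one archive per complete 7-day window in the missed range. The helper below is illustrative only.

```go
package archive

import "time"

// archiveWindows splits a missed time range into complete 7-day windows,
// one archive per window (30 days offline -> 4 archives, matching the
// example in step 4).
func archiveWindows(from, to time.Time) [][2]time.Time {
	const week = 7 * 24 * time.Hour
	var windows [][2]time.Time
	for start := from; !start.Add(week).After(to); start = start.Add(week) {
		windows = append(windows, [2]time.Time{start, start.Add(week)})
	}
	return windows
}
```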

@@ -144,13 +139,13 @@ message archive metadata exchange provided by the community

including the special channel from store nodes
4. Member node receives Waku message
([14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md))
that contains the CID of the message archive index file from the special channel
5. Member node extracts the CID from the Waku message and
uses a Codex node to download it
6. Member node interprets the
[message archive index](#message-history-archive-index) file and
determines the CIDs of missing message archives
7. Member node uses a Codex node to download the missing message archive files
8. Member node unpacks and
decompresses message archive data to then hydrate its local database,
deleting any messages,
@@ -162,13 +157,13 @@ as covered by the message history archive
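
The member-side flow (steps 4 through 8 above) can be summarized in one handler. This is a hedged sketch: `Codex`, `Store`, `IndexEntry`, and `decodeIndex` are assumed placeholder types for illustration, not status-go APIs.

```go
package member

// Hypothetical collaborators illustrating steps 4-8; real status-go types differ.
type Codex interface {
	Download(cid string) ([]byte, error)
}

type Store interface {
	HasArchive(cid string) bool
	Hydrate(archive []byte) error // unpack, decompress, insert messages
}

// IndexEntry is a stand-in for WakuMessageArchiveIndexMetadata.
type IndexEntry struct{ CID string }

// decodeIndex is assumed to parse a protobuf-encoded WakuMessageArchiveIndex.
var decodeIndex func(raw []byte) ([]IndexEntry, error)

func onArchiveIndexCID(codex Codex, store Store, indexCID string) error {
	raw, err := codex.Download(indexCID) // step 5: fetch the index via Codex
	if err != nil {
		return err
	}
	entries, err := decodeIndex(raw) // step 6: interpret the index
	if err != nil {
		return err
	}
	for _, e := range entries {
		if store.HasArchive(e.CID) { // step 6: skip archives already present
			continue
		}
		data, err := codex.Download(e.CID) // step 7: download missing archives
		if err != nil {
			return err
		}
		if err := store.Hydrate(data); err != nil { // step 8: hydrate local database
			return err
		}
	}
	return nil
}
```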

For archival data serving, the control node MUST store live messages as [14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md).
This is in addition to their database of application messages.
This is required to provide confidentiality, authenticity,
and integrity of message data distributed via Codex, and
later validated by community members when they unpack message history archives.

Control nodes SHOULD remove those messages from their local databases
once they are older than 30 days and
after they have been turned into message archives and
distributed to the Codex network.

### Exporting messages for bundling

@@ -218,13 +213,6 @@ The `contentTopic` field MUST contain a list of all community channel topics.

The `messages` field MUST contain all messages that belong in the archive
given its `from`, `to` and `contentTopic` fields.

```protobuf
syntax = "proto3";

message WakuMessageArchive {
  uint32 version = 1;
  WakuMessageArchiveMetadata metadata = 2;
  repeated WakuMessage messages = 3; // `WakuMessage` is provided by 14/WAKU2-MESSAGE
}
```

@@ -249,8 +236,7 @@ Control nodes MUST provide message archives for the entire community history.

The entire history consists of a set of `WakuMessageArchive`s
where each archive contains a subset of historical `WakuMessage`s
for a time range of seven days.
Each `WakuMessageArchive` is an individual file.

Control nodes MUST create a message history archive index
(`WakuMessageArchiveIndex`) with metadata that allows receiving nodes

@@ -263,10 +249,7 @@ the `WakuMessageArchiveIndexMetadata` derived from a 7-day archive and

the value is an instance of that `WakuMessageArchiveIndexMetadata`
corresponding to that archive.

The `cid` is the Codex CID by which the message archive can be retrieved.

```protobuf
syntax = "proto3";

message WakuMessageArchiveIndexMetadata {
  uint32 version = 1;
  WakuMessageArchiveMetadata metadata = 2;
  string cid = 3;
}

message WakuMessageArchiveIndex {
  map<string, WakuMessageArchiveIndexMetadata> archives = 1;
}
```
The control node MUST update the `WakuMessageArchiveIndex`
every time it creates one or
more `WakuMessageArchive`s, and upload it to Codex.
The resulting CID from the upload operation MUST be sent to the special community channel.
For every created `WakuMessageArchive`,
there MUST be a `WakuMessageArchiveIndexMetadata` entry in the `archives` field of the `WakuMessageArchiveIndex`.

## Creating message archives

Control nodes MUST create each message history
archive file, and their index files, in a dedicated location on the file system.

Control nodes SHOULD store these files in a dedicated folder that is identifiable
via the community id.

### Seeding message history archives

The control node MUST ensure that the
[generated archive files](#creating-message-archives) are stored in their Codex node.
The individual archive files MUST be stored indefinitely.
Only the most recent archive index file MUST be stored.

The control node SHOULD delete CIDs for older message history archive index files.
Only one archive index file per community SHOULD be stored in the Codex node at a time.

### Message archive distribution

Message archives are available via the Codex network as they are being
[seeded by the control node](#seeding-message-history-archives).
Other community member nodes will download the message archives
from the Codex network once they receive a CID
that contains a message archive index.

The control node MUST send CIDs for message archive index files to a special community channel.
The topic of that special channel has the following format:

@@ -429,18 +311,18 @@ Only the control node MAY post to the special channel.

Other messages on this specified channel MUST be ignored by clients.
Community members MUST NOT have permission to send messages to the special channel.
However, community member nodes MUST subscribe to the special channel
to receive Waku messages containing CIDs for message archives.

### Canonical message histories

Only control nodes are allowed to distribute messages with CIDs via
the special channel for CID exchange.
Community members MUST NOT be allowed to post any messages to the special channel.

Status nodes MUST ensure that any message
that isn't signed by the control node in the special channel is ignored.

Since the CIDs are created from the control node's database
(and previously distributed archives),
the message history provided by the control node becomes the canonical message history
and single source of truth for the community.

@@ -456,13 +338,13 @@ even if it already existed in a community member node's database.

Generally, fetching message history archives is a three-step process:

1. Receive the [message archive index](#message-history-archive-index)
CID as described in [Message archive distribution],
then download the `index` file from Codex and determine which message archives to download
2. Download individual archives

Community member nodes subscribe to the special channel
that control nodes publish CIDs for message history archives to.
There are two scenarios in which member nodes can receive such a CID message
from the special channel:

1. The member node receives it via live messages, by listening to the special channel

@@ -473,10 +355,10 @@ from store nodes (this is the case when a new community member joins a community)

When member nodes receive a message with a `CommunityMessageHistoryArchive`
([62/STATUS-PAYLOADS](../62/payloads.md)) from the aforementioned channel,
they MUST extract the `cid` and
pass it to their underlying Codex node
so they can fetch the latest message history archive index file,
which is the `index` file used to access individual message history archive files (see [Creating message archives](#creating-message-archives)).

Due to the nature of distributed systems,
there's no guarantee that a received message is the "last" message.

@@ -485,7 +367,7 @@ when member nodes request historical messages from store nodes.

Therefore, member nodes MUST wait for 20 seconds
after receiving the last `CommunityMessageArchive`
before they start extracting the CID to fetch the latest archive index.
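
A simple way to implement this settling rule is a resettable timer; the sketch below is illustrative, with a hypothetical `fetch` callback rather than a status-go API.

```go
package member

import "time"

// indexDebouncer models the 20-second settling window: every incoming
// CommunityMessageArchive resets the timer, and only the CID that is
// still the latest after 20 quiet seconds is fetched.
type indexDebouncer struct {
	timer *time.Timer
}

func (d *indexDebouncer) onArchiveMessage(cid string, fetch func(cid string)) {
	if d.timer != nil {
		d.timer.Stop() // a newer message arrived; drop the pending fetch
	}
	d.timer = time.AfterFunc(20*time.Second, func() { fetch(cid) })
}
```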

Once a message history archive index is downloaded and
parsed back into `WakuMessageArchiveIndex`,
@@ -506,18 +388,18 @@ to download individual archives.

Community member nodes MUST choose one of the following options:

1. **Download all archives** - Request and
download all CIDs provided by the index file
(this is the case for new community member nodes
that haven't downloaded any archives yet)
2. **Download only the latest archive** -
Request and download only the latest CID in the `WakuMessageArchiveIndexMetadata` list
(this is the case for any member node
that has already downloaded all previous history and
is now interested in only the latest archive)
3. **Download specific archives** -
Look into the `from` and
`to` fields of every `WakuMessageArchiveIndexMetadata` and
determine the CIDs for archives of a specific time range
(this can be the case for member nodes that have recently joined the network and
are only interested in a subset of the complete history)
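
Option 3 amounts to an overlap test between each archive's `from`/`to` range and the wanted window. A sketch, with `Meta` standing in for the decoded `WakuMessageArchiveIndexMetadata`:

```go
package member

// Meta is a stand-in for the decoded WakuMessageArchiveIndexMetadata.
type Meta struct {
	From, To int64 // time range covered by the archive (unix seconds)
	CID      string
}

// cidsForRange selects the archives whose time range overlaps the wanted
// window (option 3); options 1 and 2 are the degenerate cases of taking
// all entries or only the newest one.
func cidsForRange(index []Meta, from, to int64) []string {
	var cids []string
	for _, m := range index {
		if m.From < to && m.To > from { // archive overlaps the wanted range
			cids = append(cids, m.CID)
		}
	}
	return cids
}
```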

@@ -535,7 +417,7 @@ Community member nodes MUST ignore the expiration state of each archive message

## Considerations

The following are things to be considered when implementing this specification.

## Control node honesty

@@ -563,12 +445,12 @@ pass it to other users so they become control nodes as well.

This means it's possible for multiple control nodes to exist.

This might conflict with the assumption that the control node
serves as a single source of truth.
Multiple control nodes can have different message histories.

Not only will multiple control nodes
multiply the amount of archive index messages being distributed to the network,
they might also contain different sets of CIDs and their corresponding hashes.

Even if just a single message is missing in one of the histories,
the hashes presented in archive indices will look completely different,

@@ -583,14 +465,12 @@ Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/)

## References

- [13/WAKU2-STORE](../../waku/standards/core/13/store.md)
- [10/WAKU2](../../waku/standards/core/10/waku2.md)
- [11/WAKU2-RELAY](../../waku/standards/core/11/relay.md)
- [forum discussion](https://forum.vac.dev/t/status-communities-protocol-and-product-point-of-view/114)
- [org channels](https://github.com/status-im/specs/pull/151)
- [UI feature spec](https://github.com/status-im/feature-specs/pull/36)
- [org channels spec](../56/communities.md)
- [14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md)
- [62/STATUS-PAYLOADS](../62/payloads.md)
- [Codex](https://codex.storage)

@@ -703,7 +703,7 @@ message CommunityDescription {

  map<string,CommunityChat> chats = 6;
  repeated string ban_list = 7;
  map<string,CommunityCategory> categories = 8;
  uint64 archive_clock = 9;
  CommunityAdminSettings admin_settings = 10;
  string intro_message = 11;
  string outro_message = 12;

@@ -890,15 +890,15 @@ Payload

| 5 | message_type | `MessageType` | The type of message |
| 6 | deleted_by | `string` | The public key of the user who deleted the message |

### CommunityMessageArchive

A `CommunityMessageArchive` contains a CID for a community's message archive,
created using [61/STATUS-Community-History-Archives](../61/community-history-service.md).

```protobuf
message CommunityMessageArchive {
  uint64 clock = 1;
  string cid = 2;
}
```

@@ -907,7 +907,7 @@ Payload

| Field | Name | Type | Description |
| ----- | ---- | ---- | ---- |
| 1 | clock | `uint64` | Clock value of the message |
| 2 | cid | `string` | The Codex CID of the community archive index file |

### AcceptContactRequest

@@ -959,7 +959,7 @@ message CommunityRequestToJoinResponse {

```protobuf
  bool accepted = 3;
  bytes grant = 4;
  bytes community_id = 5;
  string cid = 6;
}
```

@@ -972,7 +972,7 @@ Payload

| 3 | accepted | `bool` | Whether the request was accepted |
| 4 | grant | `bytes` | The grant |
| 5 | community_id | `bytes` | The id of the community |
| 6 | cid | `string` | The latest Codex CID of the community's archive index file |

### CommunityRequestToLeave

@@ -75,7 +75,7 @@ These are the three main types of chats in Status.

| ApplicationMetadataMessage_SYNC_BOOKMARK | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_CLEAR_HISTORY | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_SETTING | Yes | Yes | Pair |
| ApplicationMetadataMessage_COMMUNITY_MESSAGE_ARCHIVE_INDEX | No | No | CommunityChat |
| ApplicationMetadataMessage_SYNC_PROFILE_PICTURES | Yes | Yes | Pair |
| ApplicationMetadataMessage_SYNC_ACCOUNT | Yes | Yes | Pair |
| ApplicationMetadataMessage_ACCEPT_CONTACT_REQUEST | Yes | Yes | OneToOne |

vac/raw/consensus-hashgraphlike.md (new file, 252 lines)
@@ -0,0 +1,252 @@
---
title: HASHGRAPHLIKE CONSENSUS
name: Hashgraphlike Consensus Protocol
status: raw
category: Standards Track
tags:
editor: Ugur Sen [ugur@status.im](mailto:ugur@status.im)
contributors: seemenkina [ekaterina@status.im](mailto:ekaterina@status.im)
---

## Abstract

This document specifies a scalable, decentralized, and Byzantine Fault Tolerant (BFT)
consensus mechanism inspired by Hashgraph, designed for binary decision-making in P2P networks.

## Motivation

Consensus is one of the essential components of decentralization.
In particular, in a decentralized group messaging application it is used for
binary decision-making to govern the group.
Therefore, each user contributes to the decision-making process.
Besides achieving decentralization, the consensus mechanism MUST be strong and scalable:

- Under the assumption of at least `2/3` honest users in the network,
each user MUST conclude the same decision.
- Message propagation in the network MUST occur within `O(log n)` rounds,
where `n` is the total number of peers,
in order to preserve the scalability of the messaging application.

## Format Specification

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document
are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).

## Flow

Any user in the group initializes the consensus by creating a proposal.
Next, the user broadcasts the proposal to the whole network.
Upon receiving the proposal, each user validates it,
then adds its vote as yes or no, together with its signature and timestamp.
The user then sends the proposal and vote to a random peer in a P2P setup,
or to a subscribed gossipsub channel if gossip-based messaging is used.
Each receiving user therefore first validates the signatures and then adds its new vote.
Each sent message counts as a round.
After `log(n)` rounds, all users in the network have the others' votes,
provided at least `2/3` of the users are honest, where honesty means following the protocol.

In general, the voting-based consensus consists of the following phases:

1. Initialization of voting
2. Exchanging votes across the rounds
3. Counting the votes

### Assumptions

- The users in the P2P network can discover the nodes, or they are subscribed to the same channel in a gossipsub.
- We MAY have non-reliable (silent) nodes.
- Proposal owners MUST know the number of voters.

## 1. Initialization of voting

A user initializes the voting with the proposal payload, which is
implemented using [protocol buffers v3](https://protobuf.dev/) as follows:

```protobuf
syntax = "proto3";

package vac.voting;

message Proposal {
  string name = 10;                  // Proposal name
  string payload = 11;               // Proposal description
  uint32 proposal_id = 12;           // Unique identifier of the proposal
  bytes proposal_owner = 13;         // Public key of the creator
  repeated Vote votes = 14;          // Vote list in the proposal
  uint32 expected_voters_count = 15; // Maximum number of distinct voters
  uint32 round = 16;                 // Number of Votes
  uint64 timestamp = 17;             // Creation time of proposal
  uint64 expiration_time = 18;       // The time interval that the proposal is active
  bool liveness_criteria_yes = 19;   // Shows how to count the silent peers' votes
}

message Vote {
  uint32 vote_id = 20;      // Unique identifier of the vote
  bytes vote_owner = 21;    // Voter's public key
  uint32 proposal_id = 22;  // Links votes and proposals
  int64 timestamp = 23;     // Time when the vote was cast
  bool vote = 24;           // Vote bool value (true/false)
  bytes parent_hash = 25;   // Hash of the same owner's previous Vote
  bytes received_hash = 26; // Hash of the previously received Vote
  bytes vote_hash = 27;     // Hash of all previously defined fields in Vote
  bytes signature = 28;     // Signature of vote_hash
}
```

To initiate a consensus for a proposal,
a user MUST complete all the fields in the proposal, including attaching its `vote`
and the `payload` that shows the purpose of the proposal.
Notably, `parent_hash` and `received_hash` are empty because there is no previous or received hash.
The initialization phase then ends when the user who created the proposal sends it
to a random peer from the network, or sends the proposal to the specific channel.
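
As a concrete illustration of the initial vote's `vote_hash` and `signature`, here is a hedged Go sketch. The field encoding and the choice of SHA-256 and Ed25519 are assumptions made for illustration; the RFC does not fix a hash-input encoding or a signature scheme.

```go
package voting

import (
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/binary"
	"time"
)

// Vote mirrors the protobuf message above.
type Vote struct {
	VoteID       uint32
	VoteOwner    []byte
	ProposalID   uint32
	Timestamp    int64
	Vote         bool
	ParentHash   []byte // empty for the proposal owner's first vote
	ReceivedHash []byte // empty for the proposal owner's first vote
	VoteHash     []byte
	Signature    []byte
}

// hashVote hashes all fields defined before vote_hash (assumed encoding).
func hashVote(v Vote) []byte {
	h := sha256.New()
	binary.Write(h, binary.BigEndian, v.VoteID)
	h.Write(v.VoteOwner)
	binary.Write(h, binary.BigEndian, v.ProposalID)
	binary.Write(h, binary.BigEndian, v.Timestamp)
	if v.Vote {
		h.Write([]byte{1})
	} else {
		h.Write([]byte{0})
	}
	h.Write(v.ParentHash)
	h.Write(v.ReceivedHash)
	return h.Sum(nil)
}

// initialVote builds and signs the proposal owner's first vote.
func initialVote(priv ed25519.PrivateKey, voteID, proposalID uint32, choice bool) Vote {
	v := Vote{
		VoteID:     voteID,
		VoteOwner:  []byte(priv.Public().(ed25519.PublicKey)),
		ProposalID: proposalID,
		Timestamp:  time.Now().Unix(),
		Vote:       choice,
	}
	v.VoteHash = hashVote(v)
	v.Signature = ed25519.Sign(priv, v.VoteHash)
	return v
}
```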

## 2. Exchanging votes across the peers

Once a peer receives the proposal message `P_1` from a 1-1 or a gossipsub channel, it does the following checks:

1. Check the signature of each vote in the proposal; in particular for proposal `P_1`,
verify the signature of `V_1`, where `V_1 = P_1.votes[0]`, with `V_1.signature` and `V_1.vote_owner`.
2. Do the `parent_hash` check: if there are repeated votes from the same sender,
check that the hash of the former vote is equal to the `parent_hash` of the later vote.
3. Do the `received_hash` check: if there are multiple votes in a proposal, check that the hash of a vote is equal to the `received_hash` of the next one.
4. After successful verification of the signature and hashes, the receiving peer proceeds to generate `P_2` containing a new vote `V_2` as follows:

4.1. Add its public key as `V_2.vote_owner`.

4.2. Set `timestamp`.

4.3. Set the boolean `vote`.

4.4. Define `V_2.parent_hash = 0` if there is no previous vote by this peer, otherwise the hash of this peer's previous vote.

4.5. Set `V_2.received_hash = hash(P_1.votes[0])`.

4.6. Set `proposal_id` for the `vote`.

4.7. Calculate `vote_hash` as the hash of all previously defined fields in Vote:
`V_2.vote_hash = hash(vote_id, owner, proposal_id, timestamp, vote, parent_hash, received_hash)`

4.8. Sign `vote_hash` with the private key corresponding to the public key in `vote_owner`, then add the result as `V_2.signature`.

5. Create `P_2` by adding `V_2` as follows:

5.1. Assign `P_2.name`, `P_2.proposal_id`, and `P_2.proposal_owner` to be identical to those in `P_1`.

5.2. Add `V_2` to the `P_2.votes` list.

5.3. Increase the round by one, namely `P_2.round = P_1.round + 1`.

5.4. Verify that the proposal has not expired by checking that `current_time - P_1.timestamp < P_1.expiration_time`.
If this does not hold, other peers ignore the message.

After the peer creates the proposal `P_2` with its vote `V_2`,
it sends it to a random peer from the network, or sends the proposal to the specific channel.
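
Checks 1 through 3 can be folded into one pass over the proposal's vote list. A hedged Go sketch, reusing the assumed `Vote` struct and `hashVote` encoding from the earlier example (Ed25519 and the field layout remain illustrative assumptions):

```go
package voting

import (
	"bytes"
	"crypto/ed25519"
)

// verifyProposalVotes applies checks 1-3 to a received proposal's votes.
func verifyProposalVotes(votes []Vote) bool {
	lastByOwner := map[string][]byte{} // owner -> hash of their latest vote
	for i, v := range votes {
		// Check 1: vote_hash must be the hash of the vote's fields, and the
		// signature over vote_hash must verify against vote_owner.
		if !bytes.Equal(v.VoteHash, hashVote(v)) ||
			!ed25519.Verify(ed25519.PublicKey(v.VoteOwner), v.VoteHash, v.Signature) {
			return false
		}
		// Check 2: repeated votes by one owner must chain via parent_hash.
		if prev, ok := lastByOwner[string(v.VoteOwner)]; ok && !bytes.Equal(v.ParentHash, prev) {
			return false
		}
		lastByOwner[string(v.VoteOwner)] = v.VoteHash
		// Check 3: each vote must reference the previously received vote.
		if i > 0 && !bytes.Equal(v.ReceivedHash, votes[i-1].VoteHash) {
			return false
		}
	}
	return true
}
```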

## 3. Determining the result

Because consensus depends on meeting a quorum threshold,
each peer MUST verify the accumulated votes to determine whether the necessary conditions have been satisfied.
The voting result is set to YES if the majority of the `2n/3` distinct peers vote YES.

To verify, the `findDistinctVoter` method processes the proposal by traversing its `Votes` list to determine the number of unique voters.

If this method returns true, the peer proceeds with strong validation,
which ensures that if any honest peer reaches a decision,
no other honest peer can arrive at a conflicting result.

1. Check each `signature` in the votes as shown in [Section 2](#2-exchanging-votes-across-the-peers).

2. Check the `parent_hash` chain: if there are multiple votes from the same owner, namely `vote_i` and `vote_i+1` respectively,
the parent hash of `vote_i+1` should be the hash of `vote_i`.

3. Check the `received_hash` chain: each received hash of `vote_i+1` should be equal to the hash of `vote_i`.

4. Check the `timestamp` against replay attacks.
In particular, the `timestamp` cannot be older than a determined threshold.

5. Check that the liveness criteria defined in the Liveness section are satisfied.

If a proposal passes all the checks,
the `countVote` method counts each YES vote from the list of votes.
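
The quorum test and `countVote` can then be sketched as follows; `decide` returns whether a result can be computed yet and, if so, the YES/NO outcome. The majority rule follows the Liveness section below (more than `n/2` YES after folding in silent peers), and the helper is illustrative, not a reference implementation:

```go
package voting

// decide checks the 2n/3 distinct-voter quorum (findDistinctVoter) and
// then counts YES votes (countVote), folding in silent peers according
// to liveness_criteria_yes.
func decide(votes []Vote, n int, livenessYes bool) (decided, result bool) {
	last := map[string]bool{} // latest vote per distinct owner
	for _, v := range votes {
		last[string(v.VoteOwner)] = v.Vote
	}
	if 3*len(last) < 2*n {
		return false, false // fewer than 2n/3 distinct voters so far
	}
	yes := 0
	for _, votedYes := range last {
		if votedYes {
			yes++
		}
	}
	if livenessYes {
		yes += n - len(last) // silent peers counted as YES by default
	}
	return true, 2*yes > n // YES iff more than n/2 of all n votes are YES
}
```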

## 4. Properties

The consensus mechanism satisfies liveness and security properties as follows:

### Liveness

Liveness refers to the ability of the protocol to eventually reach a decision when sufficient honest participation is present.
In this protocol, if `n > 2` and more than `n/2` of the votes among at least `2n/3` distinct peers are YES,
then the consensus result is defined as YES; otherwise, when `n ≤ 2`, unanimous agreement (100% YES votes) is required.

The peer calculates the result locally as shown in [Section 3](#3-determining-the-result).
From the [hashgraph property](https://hedera.com/learning/hedera-hashgraph/what-is-hashgraph-consensus),
if a node could calculate the result of a proposal,
it implies that no peer can calculate the opposite result.
Still, reliability issues can cause situations where peers cannot receive enough messages,
so they cannot calculate the consensus result.

Rounds are incremented when a peer adds its vote and sends the new proposal.
Collecting the votes of `2n/3` distinct peers is achieved in two ways:

1. `2n/3` rounds in pure P2P networks
2. `2` rounds in gossipsub

Since the message complexity is `O(1)` in the gossipsub channel,
in case the network has reliability issues,
the second round is used for the peers that cannot receive all the messages from the first round.

If an honest and online peer has received at least one vote but not enough to reach consensus,
it MAY continue to propagate its own vote (and any votes it has received) to support message dissemination.
This process can continue beyond the expected round count,
as long as it remains within the expiration time defined in the proposal.
The expiration time acts as a soft upper bound to ensure that consensus is either reached or aborted within a bounded timeframe.

#### Equality of votes

An equality of votes occurs when, after verifying at least `2n/3` distinct voters and
applying `liveness_criteria_yes`, the number of YES and NO votes is equal.

Handling ties is an application-level decision. The application MUST define a deterministic tie policy:

- RETRY: re-run the vote with a new `proposal_id`, optionally adjusting parameters.
- REJECT: abort the proposal and return the voting result as NO.

The chosen policy SHOULD be made consistent for all peers via the proposal's `payload` to ensure convergence on the same outcome.

### Silent Node Management

Silent nodes are nodes that do not participate in the voting as YES or NO.
There are two possible ways of counting the silent peers' votes:

1. **Silent peers mean YES:**
Silent peers are counted as YES votes if the application prefers that rejection requires explicit NO votes.
2. **Silent peers mean NO:**
Silent peers are counted as NO votes if the application prefers that acceptance requires explicit YES votes.

The proposal defaults to the first option, which means silent peers' votes are counted as YES; namely, `liveness_criteria_yes` is set to true by default.

### Security

This RFC uses cryptographic primitives to prevent
malicious behaviours such as:

- Vote forgery attempt: creating unsigned or invalid votes.
- Inconsistent voting: a malicious peer submits conflicting votes (e.g., YES to some peers and NO to others)
in different stages of the protocol, violating vote consistency and attempting to undermine consensus.
- Integrity-breaking attempt: tampering with history by changing previous votes.
- Replay attack: storing old votes to maliciously reuse them in fresh voting.

## 5. Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/)

## 6. References

- [Hedera Hashgraph](https://hedera.com/learning/hedera-hashgraph/what-is-hashgraph-consensus)
- [Gossip about gossip](https://docs.hedera.com/hedera/core-concepts/hashgraph-consensus-algorithms/gossip-about-gossip)
- [Simple implementation of hashgraph consensus](https://github.com/conanwu777/hashgraph)

vac/raw/eth-mls-offchain.md (new file, 241 lines)
@@ -0,0 +1,241 @@
---
title: ETH-MLS-OFFCHAIN
name: Secure channel setup using decentralized MLS and Ethereum accounts
status: raw
category: Standards Track
tags:
editor: Ugur Sen [ugur@status.im](mailto:ugur@status.im)
contributors: seemenkina [ekaterina@status.im](mailto:ekaterina@status.im)
---

## Abstract

The following document specifies an Ethereum-authenticated, scalable,
and decentralized secure group messaging application built by
integrating a Message Layer Security (MLS) backend.
Decentralization here means that each user is a node in a P2P network and
each user has a voice in any changes to the group.
This is achieved by integrating a consensus mechanism.
Lastly, this RFC can also be referred to as de-MLS,
decentralized MLS, to emphasize its deviation
from the centralized trust assumptions of traditional MLS deployments.

## Motivation

Group messaging is a fundamental part of digital communication,
yet most existing systems depend on centralized servers,
which introduce risks around privacy, censorship, and unilateral control.
In restrictive settings, servers can be blocked or surveilled;
in more open environments, users still face opaque moderation policies,
data collection, and exclusion from decision-making processes.
To address this, we propose a decentralized, scalable peer-to-peer
group messaging system where each participant runs a node, contributes
to message propagation, and takes part in governance autonomously.
Group membership changes are decided collectively through a lightweight,
partially synchronous, fault-tolerant consensus protocol without a centralized identity.
This design enables truly democratic group communication and is well-suited
for use cases like activist collectives, research collaborations, DAOs, support groups,
and decentralized social platforms.

## Format Specification

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”,
“SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document
are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).

### Assumptions

- The nodes in the P2P network can discover other nodes, or will connect to other nodes when subscribing to the same topic in a gossipsub.
- We MAY have non-reliable (silent) nodes.
- We MUST have a consensus that is lightweight, scalable, and finalized in a specific time.

## Roles

The three roles used in de-MLS are as follows:

- `node`: Nodes are members of the network without being in any secure group messaging.
- `member`: Members are special nodes in the secure group messaging who
hold the current group key of the secure group messaging.
- `steward`: Stewards are special and transparent members in the secure group
messaging who organize the changes upon the voted proposals.

## MLS Background

de-MLS builds on an MLS backend, so the MLS services and other MLS components
are taken from the original [MLS specification](https://datatracker.ietf.org/doc/rfc9420/), with or without modifications.

### MLS Services

MLS is operated through two services: the authentication service (AS) and the delivery service (DS).
The authentication service enables group members to authenticate the credentials presented by other group members.
The delivery service routes MLS messages among the nodes or
members in the protocol in the correct order and
manages the `keyPackage`s of the users, where a `keyPackage` is an object
that provides some public information about a user.

### MLS Objects

The following section presents the MLS objects and components used in this RFC:

`Epoch`: Fixed time intervals that change the state defined by members;
section 3.4 in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).

`MLS proposal message`: Members MUST receive the proposal message prior to the
corresponding commit message that initiates a new epoch with key changes,
in order to ensure the intended security properties; section 12.1 in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
Here, the add and remove proposals are used.

`Application message`: This message type is used for arbitrary encrypted communication between group members.
This is restricted by [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/): if there is a pending proposal,
the application message should be cut.
Note that since MLS is based on servers, this delay between proposal and commit messages is very small.

`Commit message`: After members receive the proposals regarding group changes,
the committer, who may be any member of the group, as specified in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/),
generates the necessary key material for the next epoch, including the appropriate welcome messages
for new joiners and new entropy for removed members. In this RFC, committers MUST be stewards only.

### de-MLS Objects

`Voting proposal`: Similar to MLS proposals, but processed only if approved through a voting process.
Voting proposals function as application messages in the MLS group,
allowing the steward to collect them without halting the protocol.

## Flow

The general flow is as follows:

- A steward initializes a group just once, and then sends out Group Announcements (GA) periodically.
- Meanwhile, each `node` creates and sends its `credential`, which includes a `keyPackage`.
- Each `member` creates `voting proposals` and sends them to the MLS group during epoch E.
- Meanwhile, the `steward` collects finalized `voting proposals` from the MLS group and converts them into
`MLS proposals`, then sends them with the corresponding `commit messages`.
- Eventually, with the commit messages, all members start the next epoch E+1.

## Creating Voting Proposal

A `member` MAY initialize the voting with the proposal payload,
which is implemented using [protocol buffers v3](https://protobuf.dev/) as follows:

```protobuf
syntax = "proto3";

message Proposal {
  string name = 10;                 // Proposal name
  string payload = 11;              // Describes what the voting is for
  int32 proposal_id = 12;           // Unique identifier of the proposal
  bytes proposal_owner = 13;        // Public key of the creator
  repeated Vote votes = 14;         // Vote list in the proposal
  int32 expected_voters_count = 15; // Maximum number of distinct voters
  int32 round = 16;                 // Number of Votes
  int64 timestamp = 17;             // Creation time of proposal
  int64 expiration_time = 18;       // Time interval that the proposal is active
  bool liveness_criteria_yes = 19;  // Shows how to count the silent peers' votes
}

message Vote {
  int32 vote_id = 20;       // Unique identifier of the vote
  bytes vote_owner = 21;    // Voter's public key
  int64 timestamp = 22;     // Time when the vote was cast
  bool vote = 23;           // Vote bool value (true/false)
  bytes parent_hash = 24;   // Hash of previous owner's Vote
  bytes received_hash = 25; // Hash of previous received Vote
  bytes vote_hash = 26;     // Hash of all previously defined fields in Vote
  bytes signature = 27;     // Signature of vote_hash
}
```

The voting proposal MAY include adding a `node` or removing a `member`.
After the `member` creates the voting proposal,
it is emitted to the network via an MLS `Application message` using a lightweight,
epoch-based voting mechanism such as [hashgraphlike consensus](https://github.com/vacp2p/rfc-index/blob/consensus-hashgraph-like/vac/raw/consensus-hashgraphlike.md).
The consensus result MUST be finalized within the epoch as YES or NO.

If the voting result is YES, the voting proposal will be converted into
an MLS proposal by the `steward`, followed by a commit message that starts the new epoch.

## Creating welcome message

When an `MLS proposal message` is created by the `steward`,
a `commit message` SHOULD follow,
as in section 12.4 of [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/), to the members.
In order for the new `member` joining the group to synchronize with the current members
who received the `commit message`,
the `steward` sends a welcome message to the node as the new `member`,
as in section 12.4.3.1 of [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).

## Single steward

The naive way to create decentralized secure group messaging is to have a single transparent `steward`
who only applies the changes resulting from the voting.

This mostly follows the general flow, as specified in the voting proposal and welcome message creation sections.

1. A single `steward` initializes a group once, with group parameters
as in section 8.1, Group Context, in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/).
2. The `steward` creates a group announcement (GA) according to the previous step and
broadcasts it to the whole network periodically. The GA message is visible in the network to all `nodes`.
3. Each `node` that wants to become a member needs to obtain this announcement and create a `credential`
that includes a `keyPackage`, as specified in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) section 10.
4. The `steward` aggregates all `keyPackage`s and utilizes them to provision group additions for new members,
based on the outcome of the voting process.
5. Any `member` can start to create `voting proposals` for adding or removing users,
and present them for voting in the MLS group as application messages.
However, unlimited use of `voting proposals` within the group may be misused by
malicious or overly active members.
Therefore, an application-level constraint can be introduced to limit the number
or frequency of proposals initiated by each member to prevent spam or abuse.
6. Meanwhile, the `steward` collects the `voting proposals` finalized within epoch `E`
that have received affirmative votes from members via application messages.
Conversely, the `steward` discards proposals that did not receive a majority of "YES" votes.
Since voting proposals are transmitted as application messages, omitting them does not affect
the protocol's correctness or consistency.
7. The `steward` converts all approved `voting proposals` into
corresponding `MLS proposals` and a `commit message`, and
transmits both in a single operation as in [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/) section 12.4,
including welcome messages for the new members. The `commit message` thereby ends the previous epoch and creates a new one.
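
Steps 6 and 7 can be pictured as a per-epoch routine run by the steward. This is only a sketch: `Group`, its methods, and `VotingProposal` are hypothetical wrappers invented here for illustration, not RFC 9420 or de-mls API names.

```go
package demls

// VotingProposal is a hypothetical, already-finalized voting proposal.
type VotingProposal struct {
	Approved bool
	AddNode  []byte // key package of a node to add, if any
	Remove   []byte // identity of a member to remove, if any
}

// Group is a hypothetical wrapper over an MLS implementation.
type Group interface {
	ProposeAdd(keyPackage []byte) error
	ProposeRemove(member []byte) error
	CommitPending() (welcome []byte, err error) // ends epoch E, starts E+1
}

// finalizeEpoch converts approved voting proposals into MLS proposals
// and issues one commit (steps 6-7 of the single-steward flow).
func finalizeEpoch(g Group, proposals []VotingProposal) ([]byte, error) {
	for _, p := range proposals {
		if !p.Approved {
			continue // discarded: voting proposals are application messages, so skipping is safe
		}
		if p.AddNode != nil {
			if err := g.ProposeAdd(p.AddNode); err != nil {
				return nil, err
			}
		}
		if p.Remove != nil {
			if err := g.ProposeRemove(p.Remove); err != nil {
				return nil, err
			}
		}
	}
	// One commit (plus welcome messages for new joiners) applies everything.
	return g.CommitPending()
}
```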

## Multi stewards

Decentralization has already been achieved in the previous section.
However, to improve availability and ensure censorship resistance,
the single-steward protocol is extended to a multi-steward architecture.
In this design, each epoch is coordinated by a designated steward,
operating under the same protocol as the single-steward model.
Thus, the multi-steward approach primarily defines how steward roles
rotate across epochs while preserving the underlying structure and logic of the original protocol.
Two variants of the multi-steward design are introduced to address different system requirements.

### Multi steward with single consensus

In this model, all group modifications, such as adding or removing members,
must be approved through consensus by all participants,
including the steward assigned for epoch `E`.
A configuration with multiple stewards operating under a shared consensus protocol offers
increased decentralization and stronger protection against censorship.
However, this benefit comes with reduced operational efficiency.
The model is therefore best suited for small groups that value
decentralization and censorship resistance more than performance.

### Multi steward with two consensuses

The two-consensus model offers improved efficiency with a trade-off in decentralization.
In this design, group changes require consensus only among the stewards, rather than all members.
Regular members participate by periodically selecting the stewards but do not take part in each decision.
This structure enables faster coordination since consensus is achieved within a smaller group of stewards.
It is particularly suitable for large user groups, where involving every member in each decision would be impractical.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/)

### References

- [MLS RFC 9420](https://datatracker.ietf.org/doc/rfc9420/)
- [Hashgraphlike Consensus](https://github.com/vacp2p/rfc-index/blob/consensus-hashgraph-like/vac/raw/consensus-hashgraphlike.md)
- [vacp2p/de-mls](https://github.com/vacp2p/de-mls)

@@ -55,6 +55,9 @@ but improves scalability by reducing direct interactions between participants.

Each message has a globally unique, immutable ID (or hash).
Messages can be requested from the high-availability caches or
other participants using the corresponding message ID.
* **Participant ID:**
Each participant has a globally unique, immutable ID,
visible to other participants in the communication.

## Wire protocol

@@ -75,7 +78,7 @@ message HistoryEntry {

```protobuf
}

message Message {
  string sender_id = 1;                  // Participant ID of the message sender
  string message_id = 2;                 // Unique identifier of the message
  string channel_id = 3;                 // Identifier of the channel to which the message belongs
  optional int32 lamport_timestamp = 10; // Logical timestamp for causal ordering in channel
  // ... remaining fields elided in this diff view ...
}
```

The sending participant MUST include its own globally unique identifier in the `sender_id` field.
In addition, it MUST include a globally unique identifier for the message in the `message_id` field,
likely based on a message hash.
The `channel_id` field MUST be set to the identifier of the channel of group communication
that is being synchronized.

@@ -98,12 +102,16 @@ These fields MAY be left unset in the case of [ephemeral messages](#ephemeral-messages)

The message `content` MAY be left empty for [periodic sync messages](#periodic-sync-message),
otherwise it MUST contain the application-level content.

> **_Note:_** Close readers may notice that, outside of filtering messages originating from the sender itself,
> the `sender_id` field is not used for much.
> Its importance is expected to increase once a p2p retrieval mechanism is added to SDS, as is planned for the protocol.

### Participant state

Each participant MUST maintain:

* A Lamport timestamp for each channel of communication,
initialized to current epoch time in millisecond resolution.
* A bloom filter for received message IDs per channel.
The bloom filter SHOULD be rolled over and
recomputed once it reaches a predefined capacity of message IDs.

@@ -157,6 +165,8 @@ of unacknowledged outgoing messages.

Upon receiving a message,

* the participant SHOULD ignore the message if it has a `sender_id` matching its own.
* the participant MAY deduplicate the message by comparing its `message_id` to previously received message IDs.
* the participant MUST [review the ACK status](#review-ack-status) of messages
in its unacknowledged outgoing buffer
using the received message's causal history and bloom filter.
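
The three receive rules above map onto a small handler. A hedged sketch follows; the `Participant` fields and the empty `reviewACK` body are placeholders for illustration, not the SDS reference implementation:

```go
package sds

// Message carries the two fields used by the receive rules.
type Message struct {
	SenderID  string
	MessageID string
}

type Participant struct {
	ID       string
	seen     map[string]bool    // previously received message IDs
	outgoing map[string]Message // unacknowledged outgoing messages
}

func (p *Participant) OnReceive(m Message) {
	if m.SenderID == p.ID {
		return // SHOULD ignore messages with our own sender_id
	}
	if p.seen[m.MessageID] {
		return // MAY deduplicate against previously received message IDs
	}
	p.seen[m.MessageID] = true
	p.reviewACK(m) // MUST review ACK status of the unacknowledged buffer
}

func (p *Participant) reviewACK(m Message) {
	// Placeholder: a real implementation checks the received message's
	// causal history and bloom filter against p.outgoing.
}
```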