Mirror of https://github.com/vacp2p/rfc-index.git, synced 2026-01-08 23:28:15 -05:00

Merge branch 'main' into logos/raw/logos-protocol-modules-raw

@@ -2,5 +2,5 @@

Nomos is building a secure, flexible, and
scalable infrastructure for developers creating applications for the network state.
To learn more about the Nomos protocols currently under discussion,
head over to [Nomos Specs](https://github.com/logos-co/nomos-specs).
Published specifications are currently available at
[Nomos Specifications](https://nomos-tech.notion.site/project).

nomos/raw/nomosda-encoding.md (new file, 170 lines)
@@ -0,0 +1,170 @@
---
title: NOMOSDA-ENCODING
name: NomosDA Encoding Protocol
status: raw
category:
tags: data-availability
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Daniel Kashepava <danielkashepava@status.im>
- Álvaro Castro-Castilla <alvaro@status.im>
- Filip Dimitrijevic <filip@status.im>
- Thomas Lavaur <thomaslavaur@status.im>
- Mehmet Gonen <mehmet@status.im>
---

## Introduction

This document describes the encoding and verification processes of NomosDA, which is the data availability (DA) solution used by the Nomos blockchain. NomosDA provides an assurance that all data from Nomos blobs is accessible and verifiable by every network participant.

This document presents an implementation specification describing how:

- Encoders encode blobs they want to upload to the Data Availability layer.
- Other nodes verify blobs that were already uploaded to DA.

## Definitions

- **Encoder**: An encoder is any actor who performs the encoding process described in this document. This involves committing to the data, generating proofs, and submitting the result to the DA layer.

  In the Nomos architecture, the rollup sequencer typically acts as the encoder, but the role is not exclusive: any actor in the DA layer can also act as an encoder.
- **Verifier**: Verifies its portion of the distributed blob data as per the verification protocol. In the Nomos architecture, the DA nodes act as the verifiers.

## Overview

In the encoding stage, the encoder takes the DA parameters and the padded blob data and creates an initial matrix of data chunks. This matrix is expanded using Reed-Solomon coding, and various commitments and proofs are created for the data.

When a verifier receives a sample, it verifies the data it receives from the encoder and broadcasts the information if the data is verified. Finally, the verifier stores the sample data for the required length of time.

## Construction

The encoder and verifier use the [NomosDA cryptographic protocol](https://www.notion.so/NomosDA-Cryptographic-Protocol-1fd261aa09df816fa97ac81304732e77?pvs=21) to carry out their respective functions. These functions are implemented as abstracted and configurable software entities that allow the original data to be encoded and verified via high-level operations.

### Glossary

| Name | Description | Representation |
| --- | --- | --- |
| `Commitment` | Commitment as per the [NomosDA Cryptographic Protocol](https://www.notion.so/NomosDA-Cryptographic-Protocol-1fd261aa09df816fa97ac81304732e77?pvs=21) | `bytes` |
| `Proof` | Proof as per the [NomosDA Cryptographic Protocol](https://www.notion.so/NomosDA-Cryptographic-Protocol-1fd261aa09df816fa97ac81304732e77?pvs=21) | `bytes` |
| `ChunksMatrix` | Matrix of chunked data. Each chunk is **31 bytes**. Row and column sizes depend on the encoding requirements. | `List[List[bytes]]` |

### Encoder

An encoder takes a set of parameters and the blob data, and creates a matrix of chunks that it uses to compute the necessary cryptographic data. It produces the set of Reed-Solomon (RS) encoded data, the commitments, and the proofs that are needed prior to [dispersal](https://www.notion.so/NomosDA-Dispersal-1fd261aa09df815288c9caf45ed72c95?pvs=21).

```mermaid
flowchart LR
    A[DaEncoderParams] -->|Input| B(Encoder)
    I[31bytes-padded-input] -->|Input| B
    B -->|Creates| D[Chunks matrix]
    D --> |Input| C[NomosDA encoding]
    C --> E{Encoded data📄}
```

#### Encoding Process

The encoder executes the encoding process as follows:

1. The encoder takes the following input parameters:

```python
usize = int  # `usize` maps to Python's int, as noted in the table below


class DAEncoderParams:
    column_count: usize
    bytes_per_field_element: usize
```

| Name | Description | Representation |
| --- | --- | --- |
| `column_count` | The number of subnets available for dispersal in the system | `usize`, `int` in Python |
| `bytes_per_field_element` | The number of bytes per data chunk. This is set to 31 bytes. Each chunk has 31 bytes rather than 32 to ensure that the chunk value does not exceed the maximum value on the [BLS12-381 elliptic curve](https://electriccoin.co/blog/new-snark-curve/). | `usize`, `int` in Python |

2. The encoder also takes the blob data to be encoded, which must be of a size that is a multiple of `bytes_per_field_element` bytes. Clients are responsible for padding the data so it fits this constraint.
3. The encoder splits the data into `bytes_per_field_element`-sized chunks and arranges these chunks into rows and columns, creating a matrix (a minimal illustrative sketch follows this list).
a. The number of columns in the matrix must match the `column_count` parameter, taking into account the `rs_expansion_factor` (currently fixed at 2).
i. This means that the size in bytes of each row of this matrix is `(bytes_per_field_element * column_count) / rs_expansion_factor`.
b. The number of rows depends on the size of the data.
4. The data is encoded as per [the cryptographic details](https://www.notion.so/NomosDA-Cryptographic-Protocol-1fd261aa09df816fa97ac81304732e77?pvs=21).
5. The encoder provides the encoded data set:

| Name | Description | Representation |
| --- | --- | --- |
| `data` | Original data | `bytes` |
| `chunked_data` | Matrix before RS expansion | `ChunksMatrix` |
| `extended_matrix` | Matrix after RS expansion | `ChunksMatrix` |
| `row_commitments` | Commitments for each matrix row | `List[Commitment]` |
| `combined_column_proofs` | Proofs for each matrix column | `List[Proof]` |

```python
class EncodedData:
    data: bytes
    chunked_data: ChunksMatrix
    extended_matrix: ChunksMatrix
    row_commitments: List[Commitment]
    combined_column_proofs: List[Proof]
```

#### Encoder Limits

NomosDA does not impose a fixed limit on blob size at the encoding level. However, protocols that involve resource-intensive operations must include upper bounds to prevent abuse. In the case of NomosDA, blob size limits are expected to be enforced as part of the protocol's broader responsibility for resource management and fairness.

Larger blobs naturally result in higher computational and bandwidth costs, particularly for the encoder, who must compute a proof for each column. Without size limits, malicious clients could exploit the system by attempting to stream unbounded data to DA nodes. Since payment is provided before blob dispersal, DA nodes are protected from performing unnecessary work. This enables the protocol to safely accept very large blobs, as the primary computational cost falls on the encoder. The protocol can accommodate generous blob sizes in practice, while rejecting only absurdly large blobs, such as those exceeding 1 GB, to prevent denial-of-service attacks and ensure network stability.

To mitigate this, the protocol defines acceptable blob size limits, and DA implementations enforce local mitigation strategies, such as flagging or blacklisting clients that violate these constraints.

### Verifier

A verifier checks the proper encoding of data blobs it receives. A verifier executes the verification process as follows:

1. The verifier receives a `DAShare` with the required verification data:

| Name | Description | Representation |
| --- | --- | --- |
| `column` | Column chunks (31 bytes each) from the encoded matrix | `List[bytes]` |
| `column_idx` | Column id (`0..2047`). It is directly related to the `subnetworks` in the [network specification](https://www.notion.so/NomosDA-Network-Specification-1fd261aa09df81188e76cb083791252d?pvs=21). | `u16`, unsigned 16-bit integer. `int` in Python |
| `combined_column_proof` | Proof of the random linear combination of the column elements. | `Proof` |
| `row_commitments` | Commitments for each matrix row | `List[Commitment]` |
| `blob_id` | Computed as the hash (**blake2b**) of `row_commitments` | `bytes` |

2. Upon receiving the above data, the verifier checks the column data as per the [cryptographic details](https://www.notion.so/NomosDA-Cryptographic-Protocol-1fd261aa09df816fa97ac81304732e77?pvs=21). If the verification is successful, the node triggers the [replication protocol](https://www.notion.so/NomosDA-Subnetwork-Replication-1fd261aa09df811d93f8c6280136bfbb?pvs=21) and stores the blob.

```python
from hashlib import blake2b
from typing import List

# Byte-level aliases assumed here so the snippet is self-contained; their
# semantics are given in the glossary and the table above.
Column = List[bytes]
Commitment = bytes
Proof = bytes
BlobId = bytes
u16 = int


class DAShare:
    column: Column
    column_idx: u16
    combined_column_proof: Proof
    row_commitments: List[Commitment]

    def blob_id(self) -> BlobId:
        hasher = blake2b(digest_size=32)
        for c in self.row_commitments:
            hasher.update(bytes(c))
        return hasher.digest()
```
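
The sketch below illustrates the verify-then-replicate-and-store flow of step 2. The functions `verify_column`, `trigger_replication`, and `store_share` are hypothetical stand-ins (stubbed so the sketch runs); the actual checks and protocols are defined in the NomosDA Cryptographic Protocol and the replication specification linked above.

```python
def verify_column(column, column_idx, proof, row_commitments) -> bool:
    # Placeholder: the real check is defined in the NomosDA cryptographic
    # protocol. This stub accepts everything so the sketch is executable.
    return True


def trigger_replication(blob_id: bytes, share: "DAShare") -> None:
    pass  # placeholder for the subnetwork replication protocol


def store_share(blob_id: bytes, share: "DAShare") -> None:
    pass  # placeholder for local storage for the required retention period


def handle_share(share: "DAShare") -> bool:
    blob_id = share.blob_id()  # 32-byte blake2b digest over the row commitments
    if not verify_column(share.column, share.column_idx,
                         share.combined_column_proof, share.row_commitments):
        return False  # invalid encoding: the share is discarded
    trigger_replication(blob_id, share)
    store_share(blob_id, share)
    return True
```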

### Verification Logic

```mermaid
sequenceDiagram
    participant N as Node
    participant S as Subnetwork Column N
    loop For each incoming blob column
        N-->>N: If blob is valid
        N-->>S: Replication
        N->>N: Stores blob
    end
```

## Details

The encoder and verifier processes described above make use of a variety of cryptographic functions to facilitate the correct verification of column data by verifiers. These functions rely on primitives such as polynomial commitments and Reed-Solomon erasure codes, the details of which are outside the scope of this document. These details, as well as introductions to the cryptographic primitives being used, can be found in the NomosDA Cryptographic Protocol:

[NomosDA Cryptographic Protocol](https://www.notion.so/NomosDA-Cryptographic-Protocol-1fd261aa09df816fa97ac81304732e77?pvs=21)

## References

- Encoder Specification: [GitHub/encoder.py](https://github.com/logos-co/nomos-specs/blob/master/da/encoder.py)
- Verifier Specification: [GitHub/verifier.py](https://github.com/logos-co/nomos-specs/blob/master/da/verifier.py)
- Cryptographic protocol: [NomosDA Cryptographic Protocol](https://www.notion.so/NomosDA-Cryptographic-Protocol-1fd261aa09df816fa97ac81304732e77?pvs=21)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

nomos/raw/nomosda-network.md (new file, 255 lines)
@@ -0,0 +1,255 @@
---
title: NOMOS-DA-NETWORK
name: NomosDA Network
status: raw
category:
tags: network, data-availability, da-nodes, executors, sampling
editor: Daniel Sanchez Quiros <danielsq@status.im>
contributors:
- Álvaro Castro-Castilla <alvaro@status.im>
- Daniel Kashepava <danielkashepava@status.im>
- Gusto Bacvinka <augustinas@status.im>
- Filip Dimitrijevic <filip@status.im>
---

## Introduction

NomosDA is the protocol that provides scalable data availability within the Nomos network.
This document delineates the protocol's structure at the network level,
identifies participants,
and describes the interactions among its components.
Please note that this document does not delve into the cryptographic aspects of the design.
For comprehensive details on the cryptographic operations,
a detailed specification is a work in progress.

## Objectives

NomosDA was created to ensure that data from Nomos zones is distributed, verifiable, immutable, and accessible.
At the same time, it is optimised for the following properties:

- **Decentralization**: NomosDA’s data availability guarantees must be achieved with minimal trust assumptions
and minimal reliance on centralised actors. Therefore,
permissioned DA schemes involving a Data Availability Committee (DAC) had to be avoided in the design.
Schemes that require some nodes to download the entire blob data were also off the list
due to the disproportionate role played by these “supernodes”.

- **Scalability**: NomosDA is intended to be a bandwidth-scalable protocol, ensuring that its functions are maintained as the Nomos network grows. Therefore, NomosDA was designed to minimise the amount of data sent to participants, reducing the communication bottleneck and allowing more parties to participate in the DA process.

To achieve the above properties, NomosDA splits up zone data and
distributes it among network participants,
with cryptographic techniques used to verify the data’s integrity.
A major feature of this design is that parties who wish to receive an assurance of data availability
can do so very quickly and with minimal hardware requirements.
However, this comes at the cost of additional complexity and resources required by more integral participants.

## Requirements

In order to ensure that the above objectives are met,
the NomosDA network requires a group of participants
that undertake a greater burden in terms of active involvement in the protocol.
Recognising that not all node operators can do so,
NomosDA assigns different roles to different kinds of participants,
depending on their ability and willingness to contribute more computing power
and bandwidth to the protocol.
It was therefore necessary for NomosDA to be implemented as an opt-in Service Network.

Because the NomosDA network has an arbitrary number of participants,
and the data is split into a fixed number of portions (see the [Encoding & Verification Specification](https://www.notion.so/NomosDA-Encoding-Verification-4d8ca269e96d4fdcb05abc70426c5e7c)),
it was necessary to define exactly how each portion is assigned to a participant who will receive and verify it.
This assignment algorithm must also be flexible enough to ensure smooth operation in a variety of scenarios,
including where there are more or fewer participants than the number of portions.

## Overview

### Network Participants

The NomosDA network includes three categories of participants:

- **Executors**: Tasked with the encoding and dispersal of data blobs.
- **DA Nodes**: Receive and verify the encoded data,
subsequently storing it temporarily for further network validation through sampling.
- **Light Nodes**: Employ sampling to ascertain data availability.

### Network Distribution

The NomosDA network is segmented into `num_subnets` subnetworks.
These subnetworks represent subsets of peers from the overarching network,
each responsible for a distinct portion of the distributed encoded data.
Peers in the network may engage in one or multiple subnetworks,
contingent upon network size and participant count.

### Sub-protocols

The NomosDA protocol consists of the following sub-protocols:

- **Dispersal**: Describes how executors distribute encoded data blobs to subnetworks.
[NomosDA Dispersal](https://www.notion.so/NomosDA-Dispersal-1818f96fb65c805ca257cb14798f24d4?pvs=21)
- **Replication**: Defines how DA nodes distribute encoded data blobs within subnetworks.
[NomosDA Subnetwork Replication](https://www.notion.so/NomosDA-Subnetwork-Replication-1818f96fb65c80119fa0e958a087cc2b?pvs=21)
- **Sampling**: Used by sampling clients (e.g., light clients) to verify the availability of previously dispersed
and replicated data.
[NomosDA Sampling](https://www.notion.so/NomosDA-Sampling-1538f96fb65c8031a44cf7305d271779?pvs=21)
- **Reconstruction**: Describes gathering and decoding dispersed data back into its original form.
[NomosDA Reconstruction](https://www.notion.so/NomosDA-Reconstruction-1828f96fb65c80b2bbb9f4c5a0cf26a5?pvs=21)
- **Indexing**: Tracks and exposes blob metadata on-chain.
[NomosDA Indexing](https://www.notion.so/NomosDA-Indexing-1bb8f96fb65c8044b635da9df20c2411?pvs=21)

## Construction

### NomosDA Network Registration

Entities wishing to participate in NomosDA must declare their role via [SDP](https://www.notion.so/Final-Draft-Validator-Role-Protocol-17b8f96fb65c80c69c2ef55e22e29506) (Service Declaration Protocol).
Once declared, they're accounted for in the subnetwork construction.

This enables participation in:

- Dispersal (as executor)
- Replication & sampling (as DA node)
- Sampling (as light node)

### Subnetwork Assignment

The NomosDA network comprises `num_subnets` subnetworks,
which are virtual in nature.
A subnetwork is a subset of peers grouped together so that nodes know whom they should connect to;
subnetworks serve as groupings of peers tasked with executing the dispersal and replication sub-protocols.
In each subnetwork, participants establish a fully connected overlay,
with all nodes maintaining permanent connections to peers within the same subnetwork
for the lifetime of the SDP set.
Nodes refer to the nodes in the Data Availability SDP set to ascertain their connectivity requirements across subnetworks.

#### Assignment Algorithm

The concrete distribution algorithm is described in the following specification:
[DA Subnetwork Assignation](https://www.notion.so/DA-Subnetwork-Assignation-217261aa09df80fc8bb9cf46092741ce)

## Executor Connections

Each executor maintains a connection with one peer per subnetwork,
necessitating at least `num_subnets` stable and healthy connections.
Executors are expected to allocate adequate resources to sustain these connections.
An example algorithm for peer selection would be:

```python
from typing import Sequence, Set

PeerId = str  # placeholder alias; in practice an opaque libp2p peer identifier


def select_peers(
    subnetworks: Sequence[Set[PeerId]],
    filtered_subnetworks: Set[int],
    filtered_peers: Set[PeerId]
) -> Set[PeerId]:
    result = set()
    for i, subnetwork in enumerate(subnetworks):
        available_peers = subnetwork - filtered_peers
        if i not in filtered_subnetworks and available_peers:
            result.add(next(iter(available_peers)))
    return result
```
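
For illustration, the following snippet exercises `select_peers` with three toy subnetworks; the string peer identifiers are placeholders, not real peer IDs.

```python
subnetworks = [{"peer-a", "peer-b"}, {"peer-c"}, {"peer-d", "peer-e"}]
selected = select_peers(
    subnetworks,
    filtered_subnetworks={1},     # skip subnetwork 1 entirely
    filtered_peers={"peer-d"},    # never dial peer-d
)
# One peer is chosen per remaining subnetwork, e.g. {"peer-a", "peer-e"};
# the exact choice within a subnetwork is arbitrary.
assert len(selected) == 2
```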

## NomosDA Protocol Steps

### Dispersal

1. The NomosDA protocol is initiated by executors
who perform data encoding as outlined in the [Encoding Specification](https://www.notion.so/NomosDA-Encoding-Verification-4d8ca269e96d4fdcb05abc70426c5e7c).
2. Executors prepare and distribute each encoded data portion
to its designated subnetwork (from `0` to `num_subnets - 1`).
3. Executors might opt to perform sampling to confirm successful dispersal.
4. Post-dispersal, executors publish the dispersed `blob_id` and metadata to the mempool. <!-- TODO: add link to dispersal document-->

### Replication

DA nodes receive columns from dispersal or replication
and validate the data encoding.
Upon successful validation,
they replicate the validated column to connected peers within their subnetwork.
Replication occurs once per blob; subsequent validations of the same blob are discarded.
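
A minimal sketch of the once-per-blob replication rule follows. The helper `broadcast_to_subnetwork` is a hypothetical placeholder (stubbed here) for sending the validated share to the node's subnetwork peers; it is not defined by this document.

```python
replicated_blobs: set = set()  # blob_ids already replicated by this node


def broadcast_to_subnetwork(share) -> None:
    pass  # placeholder for the subnetwork replication send


def on_validated_share(blob_id: bytes, share) -> None:
    if blob_id in replicated_blobs:
        return  # already replicated: subsequent validations are discarded
    replicated_blobs.add(blob_id)
    broadcast_to_subnetwork(share)
```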

### Sampling

1. Sampling is [invoked based on the node's current role](https://www.notion.so/1538f96fb65c8031a44cf7305d271779?pvs=25#15e8f96fb65c8006b9d7f12ffdd9a159).
2. The node selects `sample_size` random subnetworks
and queries each for the availability of the corresponding column for the sampled blob. Sampling is deemed successful only if all queried subnetworks respond affirmatively (see the sketch after the diagram below).

- If `num_subnets` is 2048, `sample_size` is [20 as per the sampling research](https://www.notion.so/1708f96fb65c80a08c97d728cb8476c3?pvs=25#1708f96fb65c80bab6f9c6a946940078)

```mermaid
sequenceDiagram
    SamplingClient ->> DANode_1: Request
    DANode_1 -->> SamplingClient: Response
    SamplingClient ->> DANode_2: Request
    DANode_2 -->> SamplingClient: Response
    SamplingClient ->> DANode_n: Request
    DANode_n -->> SamplingClient: Response
```
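
The following sketch (not normative) illustrates the sampling rule: select `sample_size` distinct subnetworks at random and succeed only if every queried subnetwork responds affirmatively. `query_subnetwork` is a stand-in for the request/response exchange shown in the diagram above.

```python
import random


def query_subnetwork(subnet_idx: int, blob_id: bytes) -> bool:
    # Placeholder: ask a peer of subnetwork `subnet_idx` whether it holds the
    # corresponding column of `blob_id`. Stubbed so the sketch is executable.
    return True


def sample_blob(blob_id: bytes, num_subnets: int = 2048, sample_size: int = 20) -> bool:
    chosen = random.sample(range(num_subnets), sample_size)
    return all(query_subnetwork(i, blob_id) for i in chosen)
```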

### Network Schematics

The overall network and protocol interactions are represented by the following diagram:

```mermaid
flowchart TD
    subgraph Replication
        subgraph Subnetwork_N
            N10 -->|Replicate| N20
            N20 -->|Replicate| N30
            N30 -->|Replicate| N10
        end
        subgraph ...
        end
        subgraph Subnetwork_0
            N1 -->|Replicate| N2
            N2 -->|Replicate| N3
            N3 -->|Replicate| N1
        end
    end
    subgraph Sampling
        N9 -->|Sample 0| N2
        N9 -->|Sample S| N20
    end
    subgraph Dispersal
        Executor -->|Disperse| N1
        Executor -->|Disperse| N10
    end
```

## Details

### Network specifics

The NomosDA network is engineered for connection efficiency.
Executors manage numerous open connections,
utilizing their resource capabilities.
DA nodes, with their resource constraints,
are designed to maximize connection reuse.

NomosDA uses [multiplexed](https://docs.libp2p.io/concepts/transports/quic/#quic-native-multiplexing) streams over [QUIC](https://docs.libp2p.io/concepts/transports/quic/) connections.
For each sub-protocol, a stream protocol ID is defined to negotiate the protocol,
triggering the specific protocol once established:

- Dispersal: `/nomos/da/{version}/dispersal`
- Replication: `/nomos/da/{version}/replication`
- Sampling: `/nomos/da/{version}/sampling`

Through these multiplexed streams,
DA nodes can utilize the same connection for all sub-protocols.
This, combined with virtual subnetworks (membership sets),
ensures the overlay node distribution is scalable for networks of any size.

## References

- [Encoding Specification](https://www.notion.so/NomosDA-Encoding-Verification-4d8ca269e96d4fdcb05abc70426c5e7c)
- [Encoding & Verification Specification](https://www.notion.so/NomosDA-Encoding-Verification-4d8ca269e96d4fdcb05abc70426c5e7c)
- [NomosDA Dispersal](https://www.notion.so/NomosDA-Dispersal-1818f96fb65c805ca257cb14798f24d4?pvs=21)
- [NomosDA Subnetwork Replication](https://www.notion.so/NomosDA-Subnetwork-Replication-1818f96fb65c80119fa0e958a087cc2b?pvs=21)
- [DA Subnetwork Assignation](https://www.notion.so/DA-Subnetwork-Assignation-217261aa09df80fc8bb9cf46092741ce)
- [NomosDA Sampling](https://www.notion.so/NomosDA-Sampling-1538f96fb65c8031a44cf7305d271779?pvs=21)
- [NomosDA Reconstruction](https://www.notion.so/NomosDA-Reconstruction-1828f96fb65c80b2bbb9f4c5a0cf26a5?pvs=21)
- [NomosDA Indexing](https://www.notion.so/NomosDA-Indexing-1bb8f96fb65c8044b635da9df20c2411?pvs=21)
- [SDP](https://www.notion.so/Final-Draft-Validator-Role-Protocol-17b8f96fb65c80c69c2ef55e22e29506)
- [invoked based on the node's current role](https://www.notion.so/1538f96fb65c8031a44cf7305d271779?pvs=25#15e8f96fb65c8006b9d7f12ffdd9a159)
- [20 as per the sampling research](https://www.notion.so/1708f96fb65c80a08c97d728cb8476c3?pvs=25#1708f96fb65c80bab6f9c6a946940078)
- [multiplexed](https://docs.libp2p.io/concepts/transports/quic/#quic-native-multiplexing)
- [QUIC](https://docs.libp2p.io/concepts/transports/quic/)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

nomos/raw/p2p-hardware-requirements.md (new file, 236 lines)
@@ -0,0 +1,236 @@
---
title: P2P-HARDWARE-REQUIREMENTS
name: Nomos p2p Network Hardware Requirements Specification
status: raw
category: infrastructure
tags: [hardware, requirements, nodes, validators, services]
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Filip Dimitrijevic <filip@status.im>
---

## Abstract

This specification defines the hardware requirements for running various types of Nomos blockchain nodes. Hardware needs vary significantly based on the node's role, from lightweight verification nodes to high-performance Zone Executors. The requirements are designed to support diverse participation levels while ensuring network security and performance.

## Motivation

The Nomos network is designed to be inclusive and accessible across a wide range of hardware configurations. By defining clear hardware requirements for different node types, we enable:

1. **Inclusive Participation**: Allow users with limited resources to participate as Light Nodes
2. **Scalable Infrastructure**: Support varying levels of network participation based on available resources
3. **Performance Optimization**: Ensure adequate resources for computationally intensive operations
4. **Network Security**: Maintain network integrity through properly resourced validator nodes
5. **Service Quality**: Define requirements for optional services that enhance network functionality

**Important Notice**: These hardware requirements are preliminary and subject to revision based on implementation testing and real-world network performance data.

## Specification

### Node Types Overview

Hardware requirements vary based on the node's role and services:

- **Light Node**: Minimal verification with minimal resources
- **Basic Bedrock Node**: Standard validation participation
- **Service Nodes**: Enhanced capabilities for optional network services

### Light Node

Light Nodes provide network verification with minimal resource requirements, suitable for resource-constrained environments.

**Target Use Cases:**

- Mobile devices and smartphones
- Single-board computers (Raspberry Pi, etc.)
- IoT devices with network connectivity
- Users with limited hardware resources

**Hardware Requirements:**

| Component | Specification |
|-----------|---------------|
| **CPU** | Low-power processor (smartphone/SBC capable) |
| **Memory (RAM)** | 512 MB |
| **Storage** | Minimal (few GB) |
| **Network** | Reliable connection, 1 Mbps free bandwidth |

### Basic Bedrock Node (Validator)

Basic validators participate in Bedrock consensus using typical consumer hardware.

**Target Use Cases:**

- Individual validators on consumer hardware
- Small-scale validation operations
- Entry-level network participation

**Hardware Requirements:**

| Component | Specification |
|-----------|---------------|
| **CPU** | 2 cores, 2 GHz modern multi-core processor |
| **Memory (RAM)** | 1 GB minimum |
| **Storage** | SSD with 100+ GB free space, expandable |
| **Network** | Reliable connection, 1 Mbps free bandwidth |

### Service-Specific Requirements

Nodes can optionally run additional Bedrock Services that require enhanced resources beyond basic validation.

#### Data Availability (DA) Service

DA Service nodes store and serve data shares for the network's data availability layer.

**Service Role:**

- Store blockchain data and blob data long-term
- Serve data shares to requesting nodes
- Maintain high availability for data retrieval

**Additional Requirements:**

| Component | Specification | Rationale |
|-----------|---------------|-----------|
| **CPU** | Same as Basic Bedrock Node | Standard processing needs |
| **Memory (RAM)** | Same as Basic Bedrock Node | Standard memory needs |
| **Storage** | **Fast SSD, 500+ GB free** | Long-term chain and blob storage |
| **Network** | **High bandwidth (10+ Mbps)** | Concurrent data serving |
| **Connectivity** | **Stable, accessible external IP** | Direct peer connections |

**Network Requirements:**

- Capacity to handle multiple concurrent connections
- Stable external IP address for direct peer access
- Low latency for efficient data serving

#### Blend Protocol Service

Blend Protocol nodes provide anonymous message routing capabilities.

**Service Role:**

- Route messages anonymously through the network
- Provide timing obfuscation for privacy
- Maintain multiple concurrent connections

**Additional Requirements:**

| Component | Specification | Rationale |
|-----------|---------------|-----------|
| **CPU** | Same as Basic Bedrock Node | Standard processing needs |
| **Memory (RAM)** | Same as Basic Bedrock Node | Standard memory needs |
| **Storage** | Same as Basic Bedrock Node | Standard storage needs |
| **Network** | **Stable connection (10+ Mbps)** | Multiple concurrent connections |
| **Connectivity** | **Stable, accessible external IP** | Direct peer connections |

**Network Requirements:**

- Low-latency connection for effective message blending
- Stable connection for timing obfuscation
- Capability to handle multiple simultaneous connections

#### Executor Network Service

Zone Executors perform the most computationally intensive work in the network.

**Service Role:**

- Execute Zone state transitions
- Generate zero-knowledge proofs
- Process complex computational workloads

**Critical Performance Note**: Zone Executors perform the heaviest computational work in the network. High-performance hardware is crucial for effective participation and may provide competitive advantages in execution markets.

**Hardware Requirements:**

| Component | Specification | Rationale |
|-----------|---------------|-----------|
| **CPU** | **Very high-performance multi-core processor** | Zone logic execution and ZK proving |
| **Memory (RAM)** | **32+ GB strongly recommended** | Complex Zone execution requirements |
| **Storage** | Same as Basic Bedrock Node | Standard storage needs |
| **GPU** | **Highly recommended/often necessary** | Efficient ZK proof generation |
| **Network** | **High bandwidth (10+ Mbps)** | Data dispersal and high connection load |

**GPU Requirements:**

- **NVIDIA**: CUDA-enabled GPU (RTX 3090 or equivalent recommended)
- **Apple**: Metal-compatible Apple Silicon
- **Performance Impact**: Strong GPU significantly reduces proving time

**Network Requirements:**

- Support for **2048+ direct UDP connections** to DA Nodes (for blob publishing)
- High bandwidth for data dispersal operations
- Stable connection for continuous operation

*Note: DA Nodes utilizing [libp2p](https://docs.libp2p.io/) connections need sufficient capacity to receive and serve data shares over many connections.*

## Implementation Requirements

### Minimum Requirements

All Nomos nodes MUST meet:

1. **Basic connectivity** to the Nomos network via [libp2p](https://docs.libp2p.io/)
2. **Adequate storage** for their designated role
3. **Sufficient processing power** for their service level
4. **Reliable network connection** with appropriate bandwidth for [QUIC](https://docs.libp2p.io/concepts/transports/quic/) transport

### Optional Enhancements

Node operators MAY implement:

- Hardware redundancy for critical services
- Enhanced cooling for high-performance configurations
- Dedicated network connections for service nodes utilizing [libp2p](https://docs.libp2p.io/) protocols
- Backup power systems for continuous operation

### Resource Scaling

Requirements may vary based on:

- **Network Load**: Higher network activity increases resource demands
- **Zone Complexity**: More complex Zones require additional computational resources
- **Service Combinations**: Running multiple services simultaneously increases requirements
- **Geographic Location**: Network latency affects optimal performance requirements

## Security Considerations

### Hardware Security

1. **Secure Storage**: Use encrypted storage for sensitive node data
2. **Network Security**: Implement proper firewall configurations
3. **Physical Security**: Secure physical access to node hardware
4. **Backup Strategies**: Maintain secure backups of critical data

### Performance Security

1. **Resource Monitoring**: Monitor resource usage to detect anomalies
2. **Redundancy**: Plan for hardware failures in critical services
3. **Isolation**: Consider containerization or virtualization for service isolation
4. **Update Management**: Maintain secure update procedures for hardware drivers

## Performance Characteristics

### Scalability

- **Light Nodes**: Minimal resource footprint, high scalability
- **Validators**: Moderate resource usage, network-dependent scaling
- **Service Nodes**: High resource usage, specialized scaling requirements

### Resource Efficiency

- **CPU Usage**: Optimized algorithms for different hardware tiers
- **Memory Usage**: Efficient data structures for constrained environments
- **Storage Usage**: Configurable retention policies and compression
- **Network Usage**: Adaptive bandwidth utilization based on [libp2p](https://docs.libp2p.io/) capacity and [QUIC](https://docs.libp2p.io/concepts/transports/quic/) connection efficiency

## References

1. [libp2p protocol](https://docs.libp2p.io/)
2. [QUIC protocol](https://docs.libp2p.io/concepts/transports/quic/)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

nomos/raw/p2p-nat-solution.md (new file, 377 lines)
@@ -0,0 +1,377 @@
---
title: P2P-NAT-SOLUTION
name: Nomos P2P Network NAT Solution Specification
status: raw
category: networking
tags: [nat, traversal, autonat, upnp, pcp, nat-pmp]
editor: Antonio Antonino <antonio@status.im>
contributors:
- Álvaro Castro-Castilla <alvaro@status.im>
- Daniel Sanchez-Quiros <danielsq@status.im>
- Petar Radovic <petar@status.im>
- Gusto Bacvinka <augustinas@status.im>
- Youngjoon Lee <youngjoon@status.im>
- Filip Dimitrijevic <filip@status.im>
---

## Abstract

This specification defines a comprehensive NAT (Network Address Translation) traversal solution for the Nomos P2P network. The solution enables nodes to automatically determine their NAT status and establish both outbound and inbound connections regardless of network configuration. The strategy combines [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md), dynamic port mapping protocols, and continuous verification to maximize public reachability while maintaining decentralized operation.

## Motivation

Network Address Translation presents a critical challenge for Nomos participants, particularly those operating on consumer hardware without technical expertise. The Nomos network requires a NAT traversal solution that:

1. **Automatic Operation**: Works out-of-the-box without user configuration
2. **Inclusive Participation**: Enables nodes on consumer hardware to participate effectively
3. **Decentralized Approach**: Leverages the existing Nomos P2P network rather than centralized services
4. **Progressive Fallback**: Escalates through increasingly complex protocols as needed
5. **Dynamic Adaptation**: Handles changing network environments and configurations

The solution must ensure that nodes can both establish outbound connections and accept inbound connections from other peers, maintaining network connectivity across diverse NAT configurations.

## Specification

### Terminology

- **Public Node**: A node that is publicly reachable via a public IP address or valid port mapping
- **Private Node**: A node that is not publicly reachable due to NAT/firewall restrictions
- **Dialing**: The process of establishing a connection using the [libp2p protocol](https://docs.libp2p.io/) stack
- **NAT Status**: Whether a node is publicly reachable or hidden behind NAT

### Key Design Principles

#### Optional Configuration

The NAT traversal strategy must work out-of-the-box whenever possible. Users who do not want to engage in configuration should only need to install the node software package. However, users requiring full control must be able to configure every aspect of the strategy.

#### Decentralized Operation

The solution leverages the existing Nomos P2P network for coordination rather than relying on centralized third-party services. This maintains the decentralized nature of the network while providing necessary NAT traversal capabilities.

#### Progressive Fallback

The protocol begins with lightweight checks and escalates through more complex and resource-intensive protocols. Failure at any step moves the protocol to the next stage in the strategy, ensuring maximum compatibility across network configurations.

#### Dynamic Network Environment

Unless explicitly configured for static addresses, each node's public or private status is assumed to be dynamic. A once publicly-reachable node can become unreachable and vice versa, requiring continuous monitoring and adaptation.

### Node Discovery Considerations

The Nomos public network encourages participation from a large number of nodes, many deployed through simple installation procedures. Some nodes will not achieve Public status, but the discovery protocol must track these peers and allow other nodes to discover them. This prevents network partitioning and ensures Private nodes remain accessible to other participants.

### NAT Traversal Protocol

#### Protocol Requirements

**Each node MUST:**

- Run an [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) client, except for nodes statically configured as Public
- Use the [Identify protocol](https://github.com/libp2p/specs/blob/master/identify/README.md) to advertise support for:
  - `/nomos/autonat/2/dial-request` for the main network
  - `/nomos-testnet/autonat/2/dial-request` for the public testnet
  - `/nomos/autonat/2/dial-back` and `/nomos-testnet/autonat/2/dial-back` respectively

#### NAT State Machine

The NAT traversal process follows a multi-phase state machine:

```mermaid
graph TD
    Start@{shape: circle, label: "Start"} -->|Preconfigured public IP or port mapping| StaticPublic[Statically configured as<br/>**Public**]
    subgraph Phase0 [Phase 0]
        Start -->|Default configuration| Boot
    end
    subgraph Phase1 [Phase 1]
        Boot[Bootstrap and discover AutoNAT servers] --> Inspect
        Inspect[Inspect own IP addresses] -->|At least 1 IP address in the public range| ConfirmPublic[AutoNAT]
    end
    subgraph Phase2 [Phase 2]
        Inspect -->|No IP addresses in the public range| MapPorts[Port Mapping Client<br/>UPnP/NAT-PMP/PCP]
        MapPorts -->|Successful port map| ConfirmMapPorts[AutoNAT]
    end
    ConfirmPublic -->|Node's IP address reachable by AutoNAT server| Public[**Public** Node]
    ConfirmPublic -->|Node's IP address not reachable by AutoNAT server or Timeout| MapPorts
    ConfirmMapPorts -->|Mapped IP address and port reachable by AutoNAT server| Public
    ConfirmMapPorts -->|Mapped IP address and port not reachable by AutoNAT server or Timeout| Private
    MapPorts -->|Failure or Timeout| Private[**Private** Node]
    subgraph Phase3 [Phase 3]
        Public --> Monitor
        Private --> Monitor
    end
    Monitor[Network Monitoring] -->|Restart| Inspect
```

### Phase Implementation

#### Phase 0: Bootstrapping and Identifying Public Nodes

If the node is statically configured by the operator to be Public, the procedure stops here.

The node utilizes bootstrapping and discovery mechanisms to find other Public nodes. The [Identify protocol](https://github.com/libp2p/specs/blob/master/identify/README.md) confirms which detected Public nodes support [AutoNAT v2](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md).

#### Phase 1: NAT Detection

The node starts an [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) client and inspects its own addresses. For each public IP address, the node verifies public reachability via [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md). If any public IP addresses are confirmed, the node assumes Public status and moves to Phase 3. Otherwise, it continues to Phase 2.
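
A minimal sketch of the own-address inspection, using Python's standard `ipaddress` module, is shown below. It only decides whether Phase 2 is needed; gathering the node's local addresses and the subsequent AutoNAT reachability confirmation are out of scope here.

```python
import ipaddress
from typing import Iterable


def has_public_address(local_addresses: Iterable[str]) -> bool:
    """Return True if at least one address is in the public (globally routable) range."""
    return any(ipaddress.ip_address(addr).is_global for addr in local_addresses)


# Example: a node with only private (RFC 1918) addresses proceeds to Phase 2.
assert has_public_address(["10.0.0.5", "192.168.1.10"]) is False
assert has_public_address(["192.168.1.10", "8.8.8.8"]) is True  # 8.8.8.8 used purely as an example
```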

#### Phase 2: Automated Port Mapping

The node attempts to secure a port mapping on the default gateway using:

- **[PCP](https://datatracker.ietf.org/doc/html/rfc6887)** (Port Control Protocol) - Most reliable
- **[NAT-PMP](https://datatracker.ietf.org/doc/html/rfc6886)** (NAT Port Mapping Protocol) - Second most reliable
- **[UPnP-IGD](https://datatracker.ietf.org/doc/html/rfc6970)** (Universal Plug and Play Internet Gateway Device) - Most widely deployed

**Port Mapping Algorithm:**

```python
# Helper functions (get_local_ipv4_address, probe_pcp, try_pcp_mapping, etc.)
# are abstract placeholders for the platform-specific protocol clients.
def try_port_mapping():
    # Step 1: Get the local IPv4 address
    local_ip = get_local_ipv4_address()

    # Step 2: Get the default gateway IPv4 address
    gateway_ip = get_default_gateway_address()

    # Step 3: Abort if local or gateway IP could not be determined
    if not local_ip or not gateway_ip:
        return "Mapping failed: Unable to get local or gateway IPv4"

    # Step 4: Probe the gateway for protocol support
    supports_pcp = probe_pcp(gateway_ip)
    supports_nat_pmp = probe_nat_pmp(gateway_ip)
    supports_upnp = probe_upnp(gateway_ip)  # Optional for logging

    # Steps 5-9: Try protocols in order of reliability
    # PCP (most reliable) -> NAT-PMP -> UPnP -> fallback attempts

    protocols = [
        (supports_pcp, try_pcp_mapping),
        (supports_nat_pmp, try_nat_pmp_mapping),
        (True, try_upnp_mapping),  # Always try UPnP
        (not supports_pcp, try_pcp_mapping),  # Fallback
        (not supports_nat_pmp, try_nat_pmp_mapping)  # Last resort
    ]

    for supported, mapping_func in protocols:
        if supported:
            mapping = mapping_func(local_ip, gateway_ip)
            if mapping:
                return mapping

    return "Mapping failed: No protocol succeeded"
```

If mapping succeeds, the node uses [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) to confirm public reachability. Upon confirmation, the node assumes Public status. Otherwise, it assumes Private status.

**Port Mapping Sequence:**

```mermaid
sequenceDiagram
    box Node
        participant AutoNAT Client
        participant NAT State Machine
        participant Port Mapping Client
    end
    participant Router

    alt Mapping is successful
        Note left of AutoNAT Client: Phase 2
        Port Mapping Client ->> +Router: Requests new mapping
        Router ->> Port Mapping Client: Confirms new mapping
        Port Mapping Client ->> NAT State Machine: Mapping secured
        NAT State Machine ->> AutoNAT Client: Requests confirmation<br/>that mapped address<br/>is publicly reachable

        alt Node asserts Public status
            AutoNAT Client ->> NAT State Machine: Mapped address<br/>is publicly reachable
            Note left of AutoNAT Client: Phase 3<br/>Network Monitoring
        else Node asserts Private status
            AutoNAT Client ->> NAT State Machine: Mapped address<br/>is not publicly reachable
            Note left of AutoNAT Client: Phase 3<br/>Network Monitoring
        end
    else Mapping fails, node asserts Private status
        Note left of AutoNAT Client: Phase 2
        Port Mapping Client ->> Router: Requests new mapping
        Router ->> Port Mapping Client: Refuses new mapping or Timeout
        Port Mapping Client ->> NAT State Machine: Mapping failed
        Note left of AutoNAT Client: Phase 3<br/>Network Monitoring
    end
```

#### Phase 3: Network Monitoring

Unless explicitly configured, nodes must monitor their network status and restart from Phase 1 when changes are detected.

**Public Node Monitoring:**

A Public node must restart when:

- The [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) client no longer confirms public reachability
- A previously successful port mapping is lost or its refresh fails

**Private Node Monitoring:**

A Private node must restart when:

- It gains a new public IP address
- Port mapping is likely to succeed (gateway change, sufficient time passed)

**Network Monitoring Sequence:**

```mermaid
sequenceDiagram
    participant AutoNAT Server
    box Node
        participant AutoNAT Client
        participant NAT State Machine
        participant Port Mapping Client
    end
    participant Router

    Note left of AutoNAT Server: Phase 3<br/>Network Monitoring
    par Refresh mapping and monitor changes
        loop periodically refreshes mapping
            Port Mapping Client ->> Router: Requests refresh
            Router ->> Port Mapping Client: Confirms mapping refresh
        end
        break Mapping is lost, the node loses Public status
            Router ->> Port Mapping Client: Refresh failed or mapping dropped
            Port Mapping Client ->> NAT State Machine: Mapping lost
            NAT State Machine ->> NAT State Machine: Restart
        end
    and Monitor public reachability of mapped addresses
        loop periodically checks public reachability
            AutoNAT Client ->> AutoNAT Server: Requests dialback
            AutoNAT Server ->> AutoNAT Client: Dialback successful
        end
        break
            AutoNAT Server ->> AutoNAT Client: Dialback failed or Timeout
            AutoNAT Client ->> NAT State Machine: Public reachability lost
            NAT State Machine ->> NAT State Machine: Restart
        end
    end
    Note left of AutoNAT Server: Phase 1
```

### Public Node Responsibilities

**A Public node MUST:**

- Run an [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) server
- Listen on and advertise via [Identify protocol](https://github.com/libp2p/specs/blob/master/identify/README.md) its publicly reachable [multiaddresses](https://github.com/libp2p/specs/blob/master/addressing/README.md):

  `/{public_peer_ip}/udp/{port}/quic-v1/p2p/{public_peer_id}`

- Periodically renew port mappings according to protocol recommendations
- Maintain high availability for [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) services

### Peer Dialing

Other peers can always dial a Public peer using its publicly reachable [multiaddresses](https://github.com/libp2p/specs/blob/master/addressing/README.md):

`/{public_peer_ip}/udp/{port}/quic-v1/p2p/{public_peer_id}`

## Implementation Requirements

### Mandatory Components

All Nomos nodes MUST implement:

1. **[AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) client** for NAT status detection
2. **Port mapping clients** for [PCP](https://datatracker.ietf.org/doc/html/rfc6887), [NAT-PMP](https://datatracker.ietf.org/doc/html/rfc6886), and [UPnP-IGD](https://datatracker.ietf.org/doc/html/rfc6970)
3. **[Identify protocol](https://github.com/libp2p/specs/blob/master/identify/README.md)** for capability advertisement
4. **Network monitoring** for status change detection

### Optional Enhancements

Nodes MAY implement:

- Custom port mapping retry strategies
- Enhanced network change detection
- Advanced [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) server load balancing
- Backup connectivity mechanisms

### Configuration Parameters

#### [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) Configuration

```yaml
autonat:
  client:
    dial_timeout: 15s
    max_peer_addresses: 16
    throttle_global_limit: 30
    throttle_peer_limit: 3
  server:
    dial_timeout: 30s
    max_peer_addresses: 16
    throttle_global_limit: 30
    throttle_peer_limit: 3
```

#### Port Mapping Configuration

```yaml
port_mapping:
  pcp:
    timeout: 30s
    lifetime: 7200s # 2 hours
    retry_interval: 300s
  nat_pmp:
    timeout: 30s
    lifetime: 7200s
    retry_interval: 300s
  upnp:
    timeout: 30s
    lease_duration: 7200s
    retry_interval: 300s
```

## Security Considerations

### NAT Traversal Security

1. **Port Mapping Validation**: Verify that requested port mappings are actually created
2. **[AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) Server Trust**: Implement peer reputation for [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) servers
3. **Gateway Communication**: Secure communication with NAT devices
4. **Address Validation**: Validate public addresses before advertisement

### Privacy Considerations

1. **IP Address Exposure**: Public nodes necessarily expose IP addresses
2. **Traffic Analysis**: Monitor for patterns that could reveal node behavior
3. **Gateway Information**: Minimize exposure of internal network topology

### Denial of Service Protection

1. **[AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) Rate Limiting**: Implement request throttling for [AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) services
2. **Port Mapping Abuse**: Prevent excessive port mapping requests
3. **Resource Exhaustion**: Limit concurrent NAT traversal attempts

## Performance Characteristics

### Scalability

- **[AutoNAT](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md) Server Load**: Distributed across Public nodes
- **Port Mapping Overhead**: Minimal ongoing resource usage
- **Network Monitoring**: Efficient periodic checks

### Reliability

- **Fallback Mechanisms**: Multiple protocols ensure high success rates
- **Continuous Monitoring**: Automatic recovery from connectivity loss
- **Protocol Redundancy**: Multiple port mapping protocols increase reliability

## References

1. [Multiaddress spec](https://github.com/libp2p/specs/blob/master/addressing/README.md)
2. [Identify protocol spec](https://github.com/libp2p/specs/blob/master/identify/README.md)
3. [AutoNAT v2 protocol spec](https://github.com/libp2p/specs/blob/master/autonat/autonat-v2.md)
4. [Circuit Relay v2 protocol spec](https://github.com/libp2p/specs/blob/master/relay/circuit-v2.md)
5. [PCP - RFC 6887](https://datatracker.ietf.org/doc/html/rfc6887)
6. [NAT-PMP - RFC 6886](https://datatracker.ietf.org/doc/html/rfc6886)
7. [UPnP IGD - RFC 6970](https://datatracker.ietf.org/doc/html/rfc6970)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

nomos/raw/p2p-network-bootstrapping.md (new file, 185 lines)
@@ -0,0 +1,185 @@
---
title: P2P-NETWORK-BOOTSTRAPPING
name: Nomos P2P Network Bootstrapping Specification
status: raw
category: networking
tags: [p2p, networking, bootstrapping, peer-discovery, libp2p]
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Álvaro Castro-Castilla <alvaro@status.im>
- Petar Radovic <petar@status.im>
- Gusto Bacvinka <augustinas@status.im>
- Antonio Antonino <antonio@status.im>
- Youngjoon Lee <youngjoon@status.im>
- Filip Dimitrijevic <filip@status.im>
---

## Introduction

Nomos network bootstrapping is the process by which a new node discovers peers and synchronizes with the existing decentralized network. It ensures that a node can:

1. **Discover Peers** – Find other active nodes in the network.
2. **Establish Connections** – Securely connect to trusted peers.
3. **Negotiate (libp2p) Protocols** – Ensure that other peers operate the same protocols that the node needs.

## Overview

The Nomos P2P network bootstrapping strategy relies on a designated subset of **bootstrap nodes** to facilitate secure and efficient node onboarding. These nodes serve as the initial entry points for new network participants.

### Key Design Principles

#### Trusted Bootstrap Nodes

A curated set of publicly announced and highly available nodes ensures reliability during initial peer discovery. These nodes are configured with elevated connection limits to handle a high volume of incoming bootstrapping requests from new participants.

#### Node Configuration & Onboarding

New node operators must explicitly configure their instances with the addresses of bootstrap nodes. This configuration may be preloaded or dynamically fetched from a trusted source to minimize manual setup.

#### Network Integration

Upon initialization, the node establishes connections with the bootstrap nodes and begins participating in Nomos networking protocols. Through these connections, the node discovers additional peers, synchronizes with the network state, and engages in protocol-specific communication (e.g., consensus, block propagation).

### Security & Decentralization Considerations

**Trust Minimization**: While bootstrap nodes provide initial connectivity, the network rapidly transitions to decentralized peer discovery to prevent over-reliance on any single entity.

**Authenticated Announcements**: The identities and addresses of bootstrap nodes are publicly verifiable to mitigate impersonation attacks. From the [libp2p documentation](https://docs.libp2p.io/concepts/transports/quic/#quic-in-libp2p):

> To authenticate each others' peer IDs, peers encode their peer ID into a self-signed certificate, which they sign using their host's private key.

**Dynamic Peer Management**: After bootstrapping, nodes continuously refine their peer lists to maintain a resilient and distributed network topology.

This approach ensures **rapid, secure, and scalable** network participation while preserving the decentralized ethos of the Nomos protocol.

## Protocol

### Protocol Overview

The bootstrapping protocol follows libp2p conventions for peer discovery and connection establishment. Implementation details are handled by the underlying libp2p stack with Nomos-specific configuration parameters.

### Bootstrapping Process

#### Step-by-Step bootstrapping process

1. **Node Initial Configuration**: New nodes load pre-configured bootstrap node addresses. Addresses may be `IP`- or `DNS`-based, embedded in a compatible [libp2p PeerId multiaddress](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-ids-in-multiaddrs). Node operators may choose to advertise more than one address; how they do so is out of the scope of this protocol. For example (see also the sketch after the diagram below):

   `/ip4/198.51.100.0/udp/4242/p2p/QmYyQSo1c1Ym7orWxLYvCrM2EmxFTANf8wXmmE7DWjhx5N` or

   `/dns/foo.bar.net/udp/4242/p2p/QmYyQSo1c1Ym7orWxLYvCrM2EmxFTANf8wXmmE7DWjhx5N`

2. **Secure Connection**: The node establishes connections to the bootstrap nodes' announced addresses and verifies network identity and protocol compatibility.

3. **Peer Discovery**: The node requests and receives validated peer lists from bootstrap nodes. Each entry includes connectivity details as per the peer discovery protocol that engages after the initial connection.

4. **Network Integration**: The node iteratively connects to discovered peers, gradually building its peer connections.

5. **Protocol Engagement**: The node establishes the required protocol channels (gossip/consensus/sync) and begins participating in network operations.

6. **Ongoing Maintenance**: The node continuously evaluates and refreshes peer connections, ideally dropping the connection to the bootstrap node itself. Bootstrap nodes may choose to close the connection on their side to keep high availability for other nodes.

```mermaid
sequenceDiagram
    participant Nomos Network
    participant Node
    participant Bootstrap Node

    Node->>Node: Fetches bootstrapping addresses

    loop Interacts with bootstrap node
        Node->>+Bootstrap Node: Connects
        Bootstrap Node->>-Node: Sends discovered peers information
    end

    loop Connects to Network participants
        Node->>Nomos Network: Engages in connections
        Node->>Nomos Network: Negotiates protocols
    end

    loop Ongoing maintenance
        Node-->>Nomos Network: Evaluates peer connections
        alt Bootstrap connection no longer needed
            Node-->>Bootstrap Node: Disconnects
        else Bootstrap enforces disconnection
            Bootstrap Node-->>Node: Disconnects
        end
    end
```
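
As a complement to step 1 above, the following sketch shows how a node might sanity-check configured bootstrap addresses before dialing them. This is a minimal illustration in Python, assuming only the address shapes given above; the helper name `validate_bootstrap_multiaddr` is hypothetical, and a production implementation would rely on a proper multiaddr parser (for instance, the libp2p stack itself).

```python
# Minimal, illustrative validation of a bootstrap multiaddress string of the
# form shown in step 1 (/ip4/.../udp/<port>/p2p/<PeerId> or /dns/...).
# Helper names are hypothetical; a real node would use a multiaddr library.
import ipaddress

def validate_bootstrap_multiaddr(addr: str) -> bool:
    parts = addr.strip("/").split("/")
    # Expected layout: <net>/<value>/udp/<port>/p2p/<peer_id>
    if len(parts) != 6:
        return False
    net, value, transport, port, p2p, peer_id = parts
    if net not in ("ip4", "ip6", "dns", "dns4", "dns6"):
        return False
    if net in ("ip4", "ip6"):
        try:
            ipaddress.ip_address(value)
        except ValueError:
            return False
    if transport != "udp":  # QUIC runs over UDP
        return False
    if not port.isdigit() or not (0 < int(port) < 65536):
        return False
    return p2p == "p2p" and len(peer_id) > 0

bootstrap_addrs = [
    "/ip4/198.51.100.0/udp/4242/p2p/QmYyQSo1c1Ym7orWxLYvCrM2EmxFTANf8wXmmE7DWjhx5N",
    "/dns/foo.bar.net/udp/4242/p2p/QmYyQSo1c1Ym7orWxLYvCrM2EmxFTANf8wXmmE7DWjhx5N",
]
assert all(validate_bootstrap_multiaddr(a) for a in bootstrap_addrs)
```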

## Implementation Details

The bootstrapping process for the Nomos p2p network uses the **QUIC** transport as specified in the Nomos network specification.

Bootstrapping is separate from the network's peer discovery protocol. It assumes that a peer discovery protocol engages as soon as the connection with the bootstrap node is established. Since the Nomos p2p network currently uses `kademlia` as its first-approach discovery protocol, this assumption is satisfied.

### Bootstrap Node Requirements

Bootstrap nodes MUST fulfill the following requirements:

- **High Availability**: Maintain uptime of 99.5% or higher
- **Connection Capacity**: Support a minimum of 1000 concurrent connections
- **Geographic Distribution**: Deploy across multiple regions
- **Protocol Compatibility**: Support all required Nomos network protocols
- **Security**: Implement proper authentication and rate limiting

### Network Configuration

Bootstrap node addresses are distributed through:

- **Hardcoded addresses** in node software releases
- **DNS seeds** for dynamic address resolution
- **Community-maintained lists** with cryptographic verification

## Security Considerations

### Trust Model

Bootstrap nodes operate under a **minimal trust model**:

- Nodes verify peer identities through cryptographic authentication
- Bootstrap connections are temporary and replaced by organic peer discovery
- No single bootstrap node can control network participation

### Attack Mitigation

**Sybil Attack Protection**: Bootstrap nodes implement connection limits and peer verification to prevent malicious flooding.

**Eclipse Attack Prevention**: Nodes connect to multiple bootstrap nodes and rapidly diversify their peer connections.

**Denial of Service Resistance**: Rate limiting and connection throttling protect bootstrap nodes from resource exhaustion attacks.

## Performance Characteristics

### Bootstrapping Metrics

- **Initial Connection Time**: Target < 30 seconds to first bootstrap node
- **Peer Discovery Duration**: Discover a minimum viable peer set within 2 minutes
- **Network Integration**: Full protocol engagement within 5 minutes

### Resource Requirements

#### Bootstrap Nodes

- Memory: Minimum 4GB RAM
- Bandwidth: 100 Mbps sustained
- Storage: 50GB available space

#### Regular Nodes

- Memory: 512MB for the bootstrapping process
- Bandwidth: 10 Mbps during initial sync
- Storage: Minimal requirements

## References

- P2P Network Specification (internal document)
- [libp2p QUIC Transport](https://docs.libp2p.io/concepts/transports/quic/)
- [libp2p Peer IDs and Addressing](https://docs.libp2p.io/concepts/fundamentals/peers/)
- [Ethereum bootnodes](https://ethereum.org/en/developers/docs/nodes-and-clients/bootnodes/)
- [Bitcoin peer discovery](https://developer.bitcoin.org/devguide/p2p_network.html#peer-discovery)
- [Cardano nodes connectivity](https://docs.cardano.org/stake-pool-operators/node-connectivity)
- [Cardano peer sharing](https://www.coincashew.com/coins/overview-ada/guide-how-to-build-a-haskell-stakepool-node/part-v-tips/implementing-peer-sharing)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
307
nomos/raw/p2p-network.md
Normal file
@@ -0,0 +1,307 @@

---
title: NOMOS-P2P-NETWORK
name: Nomos P2P Network Specification
status: draft
category: networking
tags: [p2p, networking, libp2p, kademlia, gossipsub, quic]
editor: Daniel Sanchez-Quiros <danielsq@status.im>
contributors:
- Filip Dimitrijevic <filip@status.im>
---

## Abstract

This specification defines the peer-to-peer (P2P) network layer for Nomos blockchain nodes. The network serves as the communication infrastructure enabling transaction dissemination through the mempool and block propagation. The specification leverages established libp2p protocols to ensure robust, scalable performance with low bandwidth requirements and minimal latency, while maintaining accessibility for diverse hardware configurations and network environments.

## Motivation

The Nomos blockchain requires a reliable, scalable P2P network that can:

1. **Support diverse hardware**: From laptops to dedicated servers across various operating systems and geographic locations
2. **Enable inclusive participation**: Allow non-technical users to operate nodes with minimal configuration
3. **Maintain connectivity**: Ensure nodes remain reachable even with limited connectivity or behind NAT/routers
4. **Scale efficiently**: Support large-scale networks (10k+ nodes) with eventual consistency
5. **Provide low-latency communication**: Enable efficient transaction and block propagation

## Specification

### Network Architecture Overview

The Nomos P2P network addresses three critical challenges:

- **Peer Connectivity**: Mechanisms for peers to join and connect to the network
- **Peer Discovery**: Enabling peers to locate and identify network participants
- **Message Transmission**: Facilitating efficient message exchange across the network

### Transport Protocol

#### QUIC Protocol Transport

The Nomos network employs the **[QUIC protocol](https://docs.libp2p.io/concepts/transports/quic/)** as the primary transport protocol, leveraging the [libp2p](https://docs.libp2p.io/) implementation.

**Rationale for [QUIC](https://docs.libp2p.io/concepts/transports/quic/):**

- Rapid connection establishment
- Enhanced NAT traversal capabilities (UDP-based)
- Built-in multiplexing simplifies configuration
- Production-tested reliability

### Peer Discovery

#### Kademlia DHT

The network utilizes libp2p's Kademlia Distributed Hash Table (DHT) for peer discovery.

**Protocol Identifiers:**

- **Mainnet**: `/nomos/kad/1.0.0`
- **Testnet**: `/nomos-testnet/kad/1.0.0`

**Features:**

- Proximity-based peer discovery heuristics
- Distributed peer routing table
- Resilient to network partitions
- Automatic peer replacement

#### Identify Protocol

The Identify protocol complements Kademlia by enabling peer information exchange.

**Protocol Identifiers:**

- **Mainnet**: `/nomos/identify/1.0.0`
- **Testnet**: `/nomos-testnet/identify/1.0.0`

**Capabilities:**

- Protocol support advertisement
- Peer capability negotiation
- Network interoperability enhancement

#### Future Considerations

The current Kademlia implementation is acknowledged as interim. Future improvements target:

- Lightweight design without full DHT overhead
- Highly scalable eventual consistency
- Support for 10k+ nodes with minimal resource usage

### NAT Traversal

The network implements comprehensive NAT traversal solutions to ensure connectivity across diverse network configurations.

**Objectives:**

- Configuration-free peer connections
- Support for users with varying technical expertise
- Enable nodes on standard consumer hardware

**Implementation:**

- Tailored solutions based on user network configuration
- Automatic NAT type detection and adaptation
- Fallback mechanisms for challenging network environments

*Note: Detailed NAT traversal specifications are maintained in a separate document.*

### Message Dissemination

#### Gossipsub Protocol

Nomos employs **gossipsub** for reliable message propagation across the network.

**Integration:**

- Seamless integration with Kademlia peer discovery
- Automatic peer list updates
- Efficient message routing and delivery

#### Topic Configuration

**Mempool Dissemination:**

- **Mainnet**: `/nomos/mempool/0.1.0`
- **Testnet**: `/nomos-testnet/mempool/0.1.0`

**Block Propagation:**

- **Mainnet**: `/nomos/cryptarchia/0.1.0`
- **Testnet**: `/nomos-testnet/cryptarchia/0.1.0`
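
To make the mainnet/testnet split explicit, the sketch below derives the protocol identifiers and gossipsub topics listed above from a single network prefix. Only the resulting strings are normative; the function and constant names are illustrative.

```python
# Illustrative helper deriving the protocol identifiers and gossipsub topics
# listed above from a network name. Names are hypothetical; only the
# resulting strings come from the specification.
NETWORK_PREFIXES = {"mainnet": "/nomos", "testnet": "/nomos-testnet"}

def protocol_ids(network: str) -> dict[str, str]:
    prefix = NETWORK_PREFIXES[network]
    return {
        "kademlia": f"{prefix}/kad/1.0.0",
        "identify": f"{prefix}/identify/1.0.0",
        "mempool_topic": f"{prefix}/mempool/0.1.0",
        "block_topic": f"{prefix}/cryptarchia/0.1.0",
    }

assert protocol_ids("testnet")["kademlia"] == "/nomos-testnet/kad/1.0.0"
assert protocol_ids("mainnet")["block_topic"] == "/nomos/cryptarchia/0.1.0"
```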

#### Network Parameters

**Peering Degree:**

- **Minimum recommended**: 8 peers
- **Rationale**: Ensures redundancy and efficient propagation
- **Configurable**: Nodes may adjust based on resources and requirements

### Bootstrapping

#### Initial Network Entry

New nodes connect to the network through designated bootstrap nodes.

**Process:**

1. Connect to known bootstrap nodes
2. Obtain initial peer list through Kademlia
3. Establish gossipsub connections
4. Begin participating in network protocols

**Bootstrap Node Requirements:**

- High availability and reliability
- Geographic distribution
- Version compatibility maintenance

### Message Encoding

All network messages follow the Nomos Wire Format specification for consistent encoding and decoding across implementations.

**Key Properties:**

- Deterministic serialization
- Efficient binary encoding
- Forward/backward compatibility support
- Cross-platform consistency

*Note: Detailed wire format specifications are maintained in a separate document.*

## Implementation Requirements

### Mandatory Protocols

All Nomos nodes MUST implement:

1. **Kademlia DHT** for peer discovery
2. **Identify protocol** for peer information exchange
3. **Gossipsub** for message dissemination

### Optional Enhancements

Nodes MAY implement:

- Advanced NAT traversal techniques
- Custom peering strategies
- Enhanced message routing optimizations

### Network Versioning

Protocol versions follow semantic versioning:

- **Major version**: Breaking protocol changes
- **Minor version**: Backward-compatible enhancements
- **Patch version**: Bug fixes and optimizations
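
As a hedged illustration of how a node might apply this versioning scheme when negotiating protocols, two versions can be treated as compatible when their major components match, since only major bumps signal breaking changes. The helper below is a sketch under that assumption, not a normative rule.

```python
# Hedged sketch of a compatibility check under semantic versioning:
# versions are compatible when the major component matches, since only
# major bumps signal breaking protocol changes. Helper names are illustrative.
def parse_version(version: str) -> tuple[int, int, int]:
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def compatible(local: str, remote: str) -> bool:
    return parse_version(local)[0] == parse_version(remote)[0]

assert compatible("1.0.0", "1.2.3")      # minor/patch differences are fine
assert not compatible("1.0.0", "2.0.0")  # major bump breaks compatibility
```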

## Configuration Parameters

### Implementation Note

**Current Status**: The Nomos P2P network implementation uses hardcoded libp2p protocol parameters for optimal performance and reliability. While the node configuration file (`config.yaml`) contains network-related settings, the core libp2p protocol parameters (Kademlia DHT, Identify, and Gossipsub) are embedded in the source code.

### Node Configuration

The following network parameters are configurable via `config.yaml`:

#### Network Backend Settings

```yaml
network:
  backend:
    host: 0.0.0.0
    port: 3000
    node_key: <node_private_key>
    initial_peers: []
```
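
A minimal sketch of how an operator-facing tool might load and sanity-check these settings is shown below. It assumes PyYAML is available and that the key layout mirrors the example above; nothing beyond that is implied.

```python
# Minimal sketch: load the network backend settings shown above and check
# that the required keys are present. Assumes PyYAML; illustrative only.
import yaml

REQUIRED_KEYS = {"host", "port", "node_key", "initial_peers"}

def load_backend_config(path: str) -> dict:
    with open(path) as f:
        config = yaml.safe_load(f)
    backend = config["network"]["backend"]
    missing = REQUIRED_KEYS - backend.keys()
    if missing:
        raise ValueError(f"missing backend settings: {sorted(missing)}")
    if not isinstance(backend["initial_peers"], list):
        raise ValueError("initial_peers must be a list of multiaddresses")
    return backend

# Example usage: backend = load_backend_config("config.yaml")
```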

#### Protocol-Specific Topics

**Mempool Dissemination:**

- **Mainnet**: `/nomos/mempool/0.1.0`
- **Testnet**: `/nomos-testnet/mempool/0.1.0`

**Block Propagation:**

- **Mainnet**: `/nomos/cryptarchia/0.1.0`
- **Testnet**: `/nomos-testnet/cryptarchia/0.1.0`

### Hardcoded Protocol Parameters

The following libp2p protocol parameters are currently hardcoded in the implementation:

#### Peer Discovery Parameters

- **Protocol identifiers** for the Kademlia DHT and Identify protocols
- **DHT routing table** configuration and query timeouts
- **Peer discovery intervals** and connection management

#### Message Dissemination Parameters

- **Gossipsub mesh parameters** (peer degree, heartbeat intervals)
- **Message validation** and caching settings
- **Topic subscription** and fanout management

#### Rationale for Hardcoded Parameters

1. **Network Stability**: Prevents misconfigurations that could fragment the network
2. **Performance Optimization**: Parameters are tuned for the target network size and latency requirements
3. **Security**: Reduces the attack surface by limiting configurable network parameters
4. **Simplicity**: Eliminates the need for operators to understand complex P2P tuning

## Security Considerations

### Network-Level Security

1. **Peer Authentication**: Utilize libp2p's built-in peer identity verification
2. **Message Validation**: Implement application-layer message validation
3. **Rate Limiting**: Protect against spam and DoS attacks
4. **Blacklisting**: Mechanism for excluding malicious peers

### Privacy Considerations

1. **Traffic Analysis**: Gossipsub provides some resistance to traffic analysis
2. **Metadata Leakage**: Minimize identifiable information in protocol messages
3. **Connection Patterns**: Randomize connection timing and patterns

### Denial of Service Protection

1. **Resource Limits**: Impose limits on connections and message rates
2. **Peer Scoring**: Implement reputation-based peer management
3. **Circuit Breakers**: Automatic protection against resource exhaustion

### Node Configuration Example

[Nomos Node Configuration](https://github.com/logos-co/nomos/blob/master/nodes/nomos-node/config.yaml) is an example node configuration.

## Performance Characteristics

### Scalability

- **Target Network Size**: 10,000+ nodes
- **Message Latency**: Sub-second for critical messages
- **Bandwidth Efficiency**: Optimized for limited bandwidth environments

### Resource Requirements

- **Memory Usage**: Minimal DHT routing table overhead
- **CPU Usage**: Efficient cryptographic operations
- **Network Bandwidth**: Adaptive based on node role and capacity

## References

Original working document, from the Nomos Notion: [P2P Network Specification](https://nomos-tech.notion.site/P2P-Network-Specification-206261aa09df81db8100d5f410e39d75).

1. [libp2p Specifications](https://docs.libp2p.io/)
2. [QUIC Protocol Specification](https://docs.libp2p.io/concepts/transports/quic/)
3. [Kademlia DHT](https://docs.libp2p.io/concepts/discovery-routing/kaddht/)
4. [Gossipsub Protocol](https://github.com/libp2p/specs/tree/master/pubsub/gossipsub)
5. [Identify Protocol](https://github.com/libp2p/specs/blob/master/identify/README.md)
6. [Nomos Implementation](https://github.com/logos-co/nomos) - Reference implementation and source code
7. [Nomos Node Configuration](https://github.com/logos-co/nomos/blob/master/nodes/nomos-node/config.yaml) - Example node configuration

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
345
nomos/raw/sdp.md
Normal file
@@ -0,0 +1,345 @@

---
title: NOMOS-SDP
name: Nomos Service Declaration Protocol Specification
status: raw
category:
tags: participation, validators, declarations
editor: Marcin Pawlowski <marcin@status.im>
contributors:
- Mehmet <mehmet@status.im>
- Daniel Sanchez Quiros <danielsq@status.im>
- Álvaro Castro-Castilla <alvaro@status.im>
- Thomas Lavaur <thomaslavaur@status.im>
- Filip Dimitrijevic <filip@status.im>
- Gusto Bacvinka <augustinas@status.im>
- David Rusu <davidrusu@status.im>
---

## Introduction

This document defines a mechanism enabling validators to declare their participation in specific protocols that require a known and agreed-upon list of participants. Examples of such protocols are Data Availability and the Blend Network. We create a single repository of identifiers which is used to establish secure communication between validators and provide services. Before being admitted to the repository, the validator proves that it locked at least a minimum stake.

## Requirements

The requirements for the protocol are defined as follows:

- A declaration must be backed by a confirmation that the sender of the declaration owns a certain value of the stake.
- A declaration is valid until it is withdrawn or is not used for a service-specific amount of time.

## Overview

The SDP enables nodes to declare their eligibility to serve a specific service in the system, and to withdraw their declarations.

### Protocol Actions

The protocol defines the following actions:

- **Declare**: A node sends a declaration that confirms its willingness to provide a specific service, which is confirmed by locking a threshold of stake.
- **Active**: A node marks that its participation in the protocol is active according to the service-specific activity logic. This action enables the protocol to monitor the node's activity and serves as a non-intrusive differentiator of node activity. Excluding inactive nodes from the set of active nodes is crucial, as it enhances the stability of services.
- **Withdraw**: A node withdraws its declaration and stops providing a service.

The logic of the protocol is straightforward:

1. A node sends a declaration message for a specific service and proves it has a minimum stake.
2. The declaration is registered on the ledger, and the node can commence its service according to the service-specific logic.
3. After a service-specific service-providing time, the node confirms its activity.
4. The node must confirm its activity with a service-specific minimum frequency; otherwise, its declaration is inactive.
5. After the service-specific locking period, the node can send a withdrawal message, and its declaration is removed from the ledger, which means that the node will no longer provide the service.

💡 Protocol messages are subject to finality: a message becomes part of the immutable ledger only after a delay defined by the consensus.

## Construction

In this section, we present the main constructions of the protocol. First, we start with data definitions. Second, we describe the protocol actions. Finally, we present the part of the Bedrock Mantle design responsible for storing and processing SDP-related messages and data.

### Data

In this section, we discuss and define data types, messages, and their storage.

#### Service Types

We define the following services which can be used for service declaration:

- `BN`: for the Blend Network service.
- `DA`: for the Data Availability service.

```python
class ServiceType(Enum):
    BN = "BN"  # Blend Network
    DA = "DA"  # Data Availability
```

A declaration can be generated for any of the services above. Any declaration that is not for one of the above must be rejected. The number of services might grow in the future.

#### Minimum Stake

The minimum stake is a global value that defines the minimum stake a node must have to perform any service.

`MinStake` is a structure that holds the value of the stake, `stake_threshold`, and the block number it was set at, `timestamp`.

```python
class MinStake:
    stake_threshold: StakeThreshold
    timestamp: BlockNumber
```

The `stake_thresholds` structure aggregates all defined `MinStake` values.

```python
stake_thresholds: list[MinStake]
```

For more information on how the minimum stake is calculated, please refer to the Nomos documentation.
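
As a hedged illustration of how the stake requirement could be checked against this structure, the sketch below picks the most recent `MinStake` at or before a given block height and compares it with the locked stake. The selection rule and all helper names are assumptions made for this sketch.

```python
# Illustrative check of the stake requirement against the stake_thresholds
# structure above. Picking the most recent MinStake at or before the given
# block height is an assumption made for this sketch.
from dataclasses import dataclass

@dataclass
class MinStake:
    stake_threshold: int  # StakeThreshold, simplified to an int here
    timestamp: int        # BlockNumber

def meets_min_stake(locked_stake: int, block: int, stake_thresholds: list[MinStake]) -> bool:
    applicable = [m for m in stake_thresholds if m.timestamp <= block]
    if not applicable:
        return False
    current = max(applicable, key=lambda m: m.timestamp)
    return locked_stake >= current.stake_threshold

thresholds = [MinStake(1_000, timestamp=0), MinStake(2_000, timestamp=500)]
assert meets_min_stake(1_500, block=100, stake_thresholds=thresholds)
assert not meets_min_stake(1_500, block=600, stake_thresholds=thresholds)
```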

#### Service Parameters

The service parameters structure defines the parameter set necessary for correctly handling the interaction between the protocol and services. Each of the service types defined above must be mapped to a set of the following parameters (a sketch of the session-to-block conversions follows the definitions below):

- `session_length` defines the session length expressed as the number of blocks; the sessions are counted from block `timestamp`.
- `lock_period` defines the minimum time (as a number of sessions) during which the declaration cannot be withdrawn; this period must include the time necessary for finalizing the declaration (which might be implicit) and for providing the service for at least a single session. It can be expressed as a number of blocks by multiplying its value by the `session_length`.
- `inactivity_period` defines the maximum time (as a number of sessions) within which an activation message must be sent; otherwise, the declaration is considered inactive. It can be expressed as a number of blocks by multiplying its value by the `session_length`.
- `retention_period` defines the time (as a number of sessions) after which the declaration can be safely deleted by the Garbage Collection mechanism. It can be expressed as a number of blocks by multiplying its value by the `session_length`.
- `timestamp` defines the block number at which the parameter was set.

```python
class ServiceParameters:
    session_length: NumberOfBlocks
    lock_period: NumberOfSessions
    inactivity_period: NumberOfSessions
    retention_period: NumberOfSessions
    timestamp: BlockNumber
```

The `parameters` structure aggregates all defined `ServiceParameters` values.

```python
parameters: list[ServiceParameters]
```
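
The following sketch applies the session-to-block conversion described above to decide whether a declaration may be withdrawn. It is illustrative only; the helper names are assumptions, and the comparison is consistent with (but not a substitute for) the Withdraw action defined later in this document.

```python
# Illustrative session-to-block conversion and withdrawal-eligibility check,
# following the parameter definitions above. Helper names are not normative.
def sessions_to_blocks(sessions: int, session_length: int) -> int:
    return sessions * session_length

def can_withdraw(created: int, current_block: int, lock_period: int, session_length: int) -> bool:
    # Withdrawal is allowed only once the lock period (in blocks, counted
    # from the block that contained the declaration) has fully elapsed.
    lock_blocks = sessions_to_blocks(lock_period, session_length)
    return current_block >= created + lock_blocks

# Example: session_length = 100 blocks, lock_period = 2 sessions.
assert not can_withdraw(created=1_000, current_block=1_150, lock_period=2, session_length=100)
assert can_withdraw(created=1_000, current_block=1_200, lock_period=2, session_length=100)
```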

#### Identifiers

We define the following set of identifiers, which are used for service-specific cryptographic operations:

- `provider_id`: used to sign the SDP messages and to establish secure links between validators; it is an `Ed25519PublicKey`.
- `zk_id`: used for zero-knowledge operations performed by the validator, including rewarding ([Zero Knowledge Signature Scheme (ZkSignature)](https://www.notion.so/Zero-Knowledge-Signature-Scheme-ZkSignature-21c261aa09df8119bfb2dc74a3430df6?pvs=21)).

#### Locators

A `Locator` is the address of a validator, which is used to establish secure communication between validators. It follows the [multiaddr addressing scheme from libp2p](https://docs.libp2p.io/concepts/fundamentals/addressing/), but it must contain only the location part and must not contain the node identity (`peer_id`).

The `provider_id` must be used as the node identity. Therefore, the `Locator` must be completed by appending the `provider_id` to it, which makes the `Locator` usable in the context of libp2p.

The length of the `Locator` is restricted to 329 characters.

The syntax of every `Locator` entry must be validated.

**A common formatting of every `Locator` must be applied to keep it unambiguous, so that deterministic ID generation works consistently.** At a minimum, the `Locator` must contain only lower-case letters, and every part of the address must be explicit (no implicit defaults).
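
A hedged sketch of such normalization and validation is shown below. The lower-casing, the 329-character limit, and the absence of a `/p2p/` component come from the rules above; the helper names and the rejection of empty components are illustrative assumptions.

```python
# Illustrative Locator normalization and validation per the rules above:
# location-only multiaddr (no /p2p/ identity), lower-case, explicit parts,
# and at most 329 characters. Helper names are not normative.
MAX_LOCATOR_LENGTH = 329

def normalize_locator(locator: str) -> str:
    normalized = locator.strip().lower()
    if len(normalized) > MAX_LOCATOR_LENGTH:
        raise ValueError("locator exceeds 329 characters")
    parts = normalized.strip("/").split("/")
    if not normalized.startswith("/") or any(part == "" for part in parts):
        raise ValueError("every part of the address must be explicit")
    if "p2p" in parts:
        raise ValueError("locator must not embed the node identity (peer_id)")
    return normalized

assert normalize_locator("/IP4/198.51.100.7/UDP/4242") == "/ip4/198.51.100.7/udp/4242"
```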

#### Declaration Message

The construction of the declaration message is as follows:

```python
class DeclarationMessage:
    service_type: ServiceType
    locators: list[Locator]
    provider_id: Ed25519PublicKey
    zk_id: ZkPublicKey
```

The length of the `locators` list must be limited to reduce the potential for abuse. Therefore, the list cannot be longer than 8 entries.

The message must be signed by the `provider_id` key to prove ownership of the key that is used for network-level authentication of the validator. The message is also signed by the `zk_id` key (by default, all Mantle transactions are signed with the `zk_id` key).

#### Declaration Storage

Only valid declaration messages can be stored on the ledger. We define the `DeclarationInfo` as follows:

```python
class DeclarationInfo:
    service: ServiceType
    provider_id: Ed25519PublicKey
    zk_id: ZkPublicKey
    locators: list[Locator]
    created: BlockNumber
    active: BlockNumber
    withdrawn: BlockNumber
    nonce: Nonce
```

Where:

- `service` defines the service type of the declaration;
- `provider_id` is an `Ed25519PublicKey` used by the validator to sign the message;
- `zk_id` is used for zero-knowledge operations performed by the validator, including rewarding ([Zero Knowledge Signature Scheme (ZkSignature)](https://www.notion.so/Zero-Knowledge-Signature-Scheme-ZkSignature-21c261aa09df8119bfb2dc74a3430df6?pvs=21));
- `locators` is a copy of the `locators` from the `DeclarationMessage`;
- `created` refers to the block number of the block that contained the declaration;
- `active` refers to the latest block number for which an active message was sent (it is set to `created` by default);
- `withdrawn` refers to the block number at which the service declaration was withdrawn (it is set to 0 by default);
- `nonce` must be set to 0 for the declaration message and must increase monotonically with every message sent for the `declaration_id`.

We also define the `declaration_id` (of a `DeclarationId` type), which is the unique identifier of a `DeclarationInfo`, calculated as a hash of the concatenation of `service`, `provider_id`, `zk_id`, and `locators`. The hash function is `blake2b`, using 256 bits of the output.

```python
declaration_id = Hash(service||provider_id||zk_id||locators)
```

The `declaration_id` is not stored as part of the `DeclarationInfo`, but it is used to index it.
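
A minimal sketch of this identifier computation is given below, assuming a straightforward byte concatenation of the serialized fields in the order shown in the formula. The exact serialization of each field is not specified here, so the encoding chosen in the sketch (UTF-8 for strings, raw bytes for keys) is only illustrative.

```python
# Illustrative declaration_id computation: blake2b with a 256-bit (32-byte)
# digest over service || provider_id || zk_id || locators. The field
# serialization (UTF-8 for strings, raw bytes for keys) is an assumption.
from hashlib import blake2b

def declaration_id(service: str, provider_id: bytes, zk_id: bytes, locators: list[str]) -> bytes:
    hasher = blake2b(digest_size=32)  # 256 bits of output
    hasher.update(service.encode("utf-8"))
    hasher.update(provider_id)
    hasher.update(zk_id)
    for locator in locators:
        hasher.update(locator.encode("utf-8"))
    return hasher.digest()

example_id = declaration_id("DA", b"\x01" * 32, b"\x02" * 32, ["/ip4/198.51.100.7/udp/4242"])
assert len(example_id) == 32
```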

All `DeclarationInfo` references are stored in the `declarations` store and are indexed by `declaration_id`.

```python
declarations: list[declaration_id]
```

#### Active Message

The construction of the active message is as follows:

```python
class ActiveMessage:
    declaration_id: DeclarationId
    nonce: Nonce
    metadata: Metadata
```

where `metadata` carries service-specific node-activeness metadata.

The message must be signed by the `zk_id` key associated with the `declaration_id`.

The `nonce` must increase monotonically with every message sent for the `declaration_id`.

#### Withdraw Message

The construction of the withdraw message is as follows:

```python
class WithdrawMessage:
    declaration_id: DeclarationId
    nonce: Nonce
```

The message must be signed by the `zk_id` key from the `declaration_id`.

The `nonce` must increase monotonically with every message sent for the `declaration_id`.

#### Indexing

Every event must be correctly indexed to enable lighter synchronization of the changes. Therefore, we index every `declaration_id` according to `EventType`, `ServiceType`, and `Timestamp`, where `EventType = { "created", "active", "withdrawn" }` follows the type of the message.

```python
events = {
    event_type: {
        service_type: {
            timestamp: {
                declarations: list[declaration_id]
            }
        }
    }
}
```

### Protocol

#### Declare

The Declare action associates a validator with a service it wants to provide. It requires sending a valid `DeclarationMessage` (as defined in Declaration Message), which is then processed (as defined below) and stored (as defined in Declaration Storage).

The declaration message is considered valid when all of the following are met (see the sketch after this list):

- The sender meets the stake requirements.
- The `declaration_id` is unique.
- The sender knows the secret behind the `provider_id` identifier.
- The `locators` list is not longer than 8 entries.
- The `nonce` is increasing monotonically.

If all of the above conditions are fulfilled, then the message is stored on the ledger; otherwise, the message is discarded.
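
The sketch below strings these checks together. It is illustrative only: the predicates for the stake proof, `provider_id` key ownership, and nonce tracking are placeholders for ledger-side logic that this document does not define.

```python
# Illustrative validation of a DeclarationMessage against the conditions above.
# The boolean arguments stand in for ledger-side checks (stake proof, proof of
# provider_id ownership, nonce tracking) that are out of scope here.
from types import SimpleNamespace

MAX_LOCATORS = 8

def declaration_is_valid(
    message,                      # DeclarationMessage
    decl_id: bytes,               # declaration_id derived from the message
    known_declarations: set,
    meets_stake: bool,
    owns_provider_key: bool,
    nonce_is_monotonic: bool,
) -> bool:
    return (
        meets_stake
        and decl_id not in known_declarations      # declaration_id is unique
        and owns_provider_key                      # knows the provider_id secret
        and len(message.locators) <= MAX_LOCATORS  # at most 8 locators
        and nonce_is_monotonic
    )

msg = SimpleNamespace(locators=["/ip4/198.51.100.7/udp/4242"])
assert declaration_is_valid(msg, b"\x00" * 32, set(), True, True, True)
```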

#### Active

The Active action enables marking the provider as actively providing a service. It requires sending a valid `ActiveMessage` (as defined in Active Message), which is relayed to the service-specific node activity logic (as indicated by the service type in Common SDP Structures).

The Active action updates the `active` value of the `DeclarationInfo`, which means that it also reactivates inactive (but not expired) providers.

The SDP active action logic is as follows (see the sketch after these steps):

1. A node sends an `ActiveMessage` transaction.
2. The `ActiveMessage` is verified by the SDP logic:
   a. The `declaration_id` returns an existing `DeclarationInfo`.
   b. The transaction containing the `ActiveMessage` is signed by the `zk_id`.
   c. The `withdrawn` field of the `DeclarationInfo` is set to zero.
   d. The `nonce` is increasing monotonically.
3. If any of these conditions fail, discard the message and stop processing.
4. The message is processed by the service-specific activity logic alongside the `active` value, which indicates the period since the last active message was sent. The `active` value comes from the `DeclarationInfo`.
5. If the service-specific activity logic approves the node's active message, then the `active` field of the `DeclarationInfo` is set to the current block height.
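
A hedged sketch of steps 2–5 follows. The `DeclarationInfo` lookup, signature check, and service-specific activity hook are represented by simple placeholders, since their concrete forms are defined elsewhere; storing the latest nonce back into the `DeclarationInfo` is an assumption consistent with the storage structure above.

```python
# Illustrative processing of an ActiveMessage (steps 2-5 above). Signature
# verification and the service-specific activity logic are passed in as
# placeholder callables; only the control flow mirrors the specification.
def process_active_message(
    message,                    # ActiveMessage
    declarations: dict,         # declaration_id -> DeclarationInfo
    signature_is_valid,         # callable(message) -> bool
    activity_logic_approves,    # callable(message, last_active_block) -> bool
    current_block: int,
) -> bool:
    info = declarations.get(message.declaration_id)
    if info is None:                      # 2a: declaration must exist
        return False
    if not signature_is_valid(message):   # 2b: signed by the zk_id
        return False
    if info.withdrawn != 0:               # 2c: not already withdrawn
        return False
    if message.nonce <= info.nonce:       # 2d: nonce strictly increasing
        return False
    if not activity_logic_approves(message, info.active):  # step 4
        return False
    info.active = current_block           # step 5
    info.nonce = message.nonce
    return True
```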

#### Withdraw

The Withdraw action enables the withdrawal of a service declaration. It requires sending a valid `WithdrawMessage` (as defined in Withdraw Message). The withdrawal cannot happen before the end of the locking period, which is counted in blocks since `created`. This lock period is stored as `lock_period` in the Service Parameters.

The logic of the withdraw action is:

1. A node sends a `WithdrawMessage` transaction.
2. The `WithdrawMessage` is verified by the SDP logic:
   a. The `declaration_id` returns an existing `DeclarationInfo`.
   b. The transaction containing the `WithdrawMessage` is signed by the `zk_id`.
   c. The `withdrawn` field of the `DeclarationInfo` is set to zero.
   d. The `nonce` is increasing monotonically.
3. If any of the above is not correct, then discard the message and stop.
4. Set the `withdrawn` field of the `DeclarationInfo` to the current block height.
5. Unlock the stake.

#### Garbage Collection

The protocol requires a garbage collection mechanism that periodically removes unused `DeclarationInfo` entries.

The logic of garbage collection is as follows (see the sketch after these conditions):

For every `DeclarationInfo` in the `declarations` set, remove the entry if either:

1. The entry is past the retention period: `withdrawn + retention_period < current_block_height`.
2. The entry is inactive beyond the inactivity and retention periods: `active + inactivity_period + retention_period < current_block_height`.
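
The predicate below mirrors these two conditions directly. It assumes the periods have already been converted to blocks (by multiplying by `session_length`, as described in Service Parameters), and the `withdrawn != 0` guard is an assumption added because `withdrawn` defaults to 0 for entries that were never withdrawn.

```python
# Illustrative garbage-collection predicate mirroring conditions 1 and 2 above.
# retention_blocks and inactivity_blocks are assumed to already be expressed
# in blocks (sessions multiplied by session_length).
def should_collect(info, current_block_height: int, retention_blocks: int, inactivity_blocks: int) -> bool:
    past_retention = (
        info.withdrawn != 0  # guard is an assumption: withdrawn defaults to 0
        and info.withdrawn + retention_blocks < current_block_height
    )
    inactive_too_long = (
        info.active + inactivity_blocks + retention_blocks < current_block_height
    )
    return past_retention or inactive_too_long
```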

#### Query

The protocol must enable querying the ledger in at least the following manner:

- `GetAllProviderId(timestamp)`: returns all `provider_id`s associated with the `timestamp`.
- `GetAllProviderIdSince(timestamp)`: returns all `provider_id`s since the `timestamp`.
- `GetAllDeclarationInfo(timestamp)`: returns all `DeclarationInfo` entries associated with the `timestamp`.
- `GetAllDeclarationInfoSince(timestamp)`: returns all `DeclarationInfo` entries since the `timestamp`.
- `GetDeclarationInfo(provider_id)`: returns the `DeclarationInfo` entry identified by the `provider_id`.
- `GetDeclarationInfo(declaration_id)`: returns the `DeclarationInfo` entry identified by the `declaration_id`.
- `GetAllServiceParameters(timestamp)`: returns all entries of the `ServiceParameters` store for the requested `timestamp`.
- `GetAllServiceParametersSince(timestamp)`: returns all entries of the `ServiceParameters` store since the requested `timestamp`.
- `GetServiceParameters(service_type, timestamp)`: returns the service parameter entry from the `ServiceParameters` store of a `service_type` for a specified `timestamp`.
- `GetMinStake(timestamp)`: returns the `MinStake` structure at the requested `timestamp`.
- `GetMinStakeSince(timestamp)`: returns a set of `MinStake` structures since the requested `timestamp`.

The query must return an error if the retention period for the declaration has passed and the requested information is not available.

The list of queries may be extended.

Every query must return information for a finalized state only.

### Mantle and ZK Proof

For more information about the Mantle and ZK proofs, please refer to the [Mantle Specification](https://www.notion.so/Mantle-Specification-21c261aa09df810c8820fab1d78b53d9?pvs=21).

## Appendix

### Future Improvements

Refer to the [Mantle Specification](https://www.notion.so/Mantle-Specification-21c261aa09df810c8820fab1d78b53d9?pvs=21) for a list of potential improvements to the protocol.

## References

- Mantle and ZK Proof: [Mantle Specification](https://www.notion.so/Mantle-Specification-21c261aa09df810c8820fab1d78b53d9?pvs=21)
- Ed25519 Digital Signatures: [RFC 8032](https://datatracker.ietf.org/doc/html/rfc8032)
- BLAKE2b Cryptographic Hash: [RFC 7693](https://datatracker.ietf.org/doc/html/rfc7693)
- libp2p Multiaddr: [Addressing Specification](https://docs.libp2p.io/concepts/fundamentals/addressing/)
- Zero Knowledge Signatures: [ZkSignature Scheme](https://www.notion.so/Zero-Knowledge-Signature-Scheme-ZkSignature-21c261aa09df8119bfb2dc74a3430df6?pvs=21)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
@@ -81,7 +81,7 @@ message Message {
  string sender_id = 1; // Participant ID of the message sender
  string message_id = 2; // Unique identifier of the message
  string channel_id = 3; // Identifier of the channel to which the message belongs
  optional int32 lamport_timestamp = 10; // Logical timestamp for causal ordering in channel
  optional uint64 lamport_timestamp = 10; // Logical timestamp for causal ordering in channel
  repeated HistoryEntry causal_history = 11; // List of preceding message IDs that this message causally depends on. Generally 2 or 3 message IDs are included.
  optional bytes bloom_filter = 12; // Bloom filter representing received message IDs in channel
  optional bytes content = 20; // Actual content of the message
@@ -112,6 +112,10 @@ Each participant MUST maintain:

* A Lamport timestamp for each channel of communication,
  initialized to current epoch time in millisecond resolution.
  The Lamport timestamp is increased as described in the [protocol steps](#protocol-steps)
  to maintain a logical ordering of events while staying close to the current epoch time.
  This allows the messages from new joiners to be correctly ordered with other recent messages,
  without these new participants first having to synchronize past messages to discover the current Lamport timestamp.
* A bloom filter for received message IDs per channel.
  The bloom filter SHOULD be rolled over and
  recomputed once it reaches a predefined capacity of message IDs.
@@ -144,8 +148,11 @@ the `lamport_timestamp`, `causal_history` and `bloom_filter` fields.

Before broadcasting a message:

* the participant MUST increase its local Lamport timestamp by `1` and
  include this in the `lamport_timestamp` field.
* the participant MUST set its local Lamport timestamp
  to the maximum between the current value + `1`
  and the current epoch time in milliseconds.
  In other words the local Lamport timestamp is set to `max(timeNowInMs, current_lamport_timestamp + 1)`.
* the participant MUST include the increased Lamport timestamp in the message's `lamport_timestamp` field.
* the participant MUST determine the preceding few message IDs in the local history
  and include these in an ordered list in the `causal_history` field.
  The number of message IDs to include in the `causal_history` depends on the application.
@@ -250,7 +257,8 @@ participants SHOULD periodically send sync messages to maintain state.
These sync messages:

* MUST be sent with empty content
* MUST include an incremented Lamport timestamp
* MUST include a Lamport timestamp increased to `max(timeNowInMs, current_lamport_timestamp + 1)`,
  where `timeNowInMs` is the current epoch time in milliseconds.
* MUST include causal history and bloom filter according to regular message rules
* MUST NOT be added to the unacknowledged outgoing buffer
* MUST NOT be included in causal histories of subsequent messages
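
For clarity, the updated timestamp rule that appears in both hunks above can be expressed as a short sketch; the function name is illustrative, and `time.time()` merely stands in for the participant's clock.

```python
# Illustrative sketch of the updated Lamport timestamp rule from the diff
# above: max(timeNowInMs, current_lamport_timestamp + 1). Names are not
# normative and time.time() only approximates the participant's clock.
import time

def next_lamport_timestamp(current_lamport_timestamp: int) -> int:
    time_now_in_ms = int(time.time() * 1000)
    return max(time_now_in_ms, current_lamport_timestamp + 1)

# A participant far ahead of wall-clock time keeps incrementing by 1,
# while a fresh joiner starts near the current epoch time in milliseconds.
assert next_lamport_timestamp(2**62) == 2**62 + 1
```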