diff --git a/404.html b/404.html index 1ee07804..6ca3ed86 100644 --- a/404.html +++ b/404.html
\ No newline at end of file diff --git a/rlog/atom.xml b/rlog/atom.xml index 7fd4919a..5829dc72 100644 --- a/rlog/atom.xml +++ b/rlog/atom.xml @@ -2,11 +2,367 @@ https://vac.dev/rlog Vac Research Blog - 2025-07-25T18:30:00.000Z + 2025-08-12T12:00:00.000Z https://github.com/jpmonette/feed Vac Research Blog https://vac.dev/theme/image/favicon.ico + + <![CDATA[Scaling libp2p GossipSub for Large Messages: An Evaluation of Performance Improvement Proposals]]> + https://vac.dev/rlog/gsub-perf-imp-comparison + + 2025-08-12T12:00:00.000Z + + The original GossipSub design emphasizes robustness, with less focus on message sizes. +However, emerging use cases, such as Ethereum's EIP-4844 and +data availability sampling (DAS), require rapid propagation of high data volumes, +often in the form of large messages. +Many ongoing research efforts attempt to find solutions +for efficiently forwarding large messages through the GossipSub network. +This post provides a concise overview and performance evaluation of some research initiatives +aimed at improving GossipSub to meet the performance needs of modern P2P networks.

+ +

Overview

+

For each subscribed topic, GossipSub nodes maintain a full-message mesh of peers with a target degree D, +bounded by lower and upper thresholds D_{low} and D_{high}, respectively. +In parallel, a metadata-only (gossip) mesh includes additional K peers for metadata exchange. +All messages flow through the full-message mesh, and metadata flows through the gossip mesh.
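As a minimal illustration of these mesh parameters, the sketch below shows how a heartbeat-style maintenance step could keep the full-message mesh within its bounds. This is a simplified sketch, not the nim-libp2p implementation; the function and peer names are hypothetical.

```python
import random

D, D_LOW, D_HIGH = 8, 6, 12  # target degree and its lower/upper thresholds

def maintain_mesh(mesh, candidates):
    """Hypothetical heartbeat step: GRAFT peers when the mesh falls below
    D_low, PRUNE back to D when it exceeds D_high."""
    mesh = set(mesh)
    if len(mesh) < D_LOW:
        pool = sorted(candidates - mesh)
        mesh |= set(random.sample(pool, min(D - len(mesh), len(pool))))
    elif len(mesh) > D_HIGH:
        mesh = set(random.sample(sorted(mesh), D))
    return mesh

mesh = maintain_mesh({"p1", "p2"}, {f"p{i}" for i in range(1, 20)})
assert D_LOW <= len(mesh) <= D_HIGH
```

The gossip mesh is maintained separately; here only the full-message mesh bookkeeping is shown.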

+

The metadata contains IHAVE messages, announcing IDs of seen messages. +Upon receiving an IHAVE announcement about an unseen message ID, +a receiver responds with an IWANT request to retrieve the announced message. +IHAVE announcements serve multiple purposes:

+
  1. Offer additional resilience in situations involving non-conforming peers or network partitions.

  2. Speed up message propagation by allowing far-off peers to fetch overdue messages.

Since GossipSub v1.1 [1], +replying to IWANT requests is optional, to safeguard honest peers from adversaries. +This change encourages peers to make redundant IWANT requests for the same message. +While this arrangement works well for small messages, it can be inefficient for larger ones. +This inefficiency arises from an increase in duplicates +and a higher number of IWANT requests due to longer transmission times for large messages. +As a result, we observe a significant rise in bandwidth utilization and message dissemination times across the network.

+

IDONTWANT messages in GossipSub v1.2 [2] help reduce some duplicates. +However, further efforts are necessary to mitigate this problem +[3, +4]. +That is why many recent works focus on improving GossipSub's performance when handling large messages +[5, +6, +7, +8, +9, +10, +11, +12].

+

In this post, we evaluate the push-pull phase transition (PPPT) mechanism +[10], +the GossipSub v1.4 proposal [11, +5], +and the GossipSub v2.0 proposal [12] +against GossipSub v1.2. +For this purpose, we implement minimal proof-of-concept (PoC) versions of these protocols in nim-libp2p +[13] +and use the Shadow simulator to provide performance evaluation results.

+

We begin by discussing the issues of message dissemination times and duplicate messages in GossipSub. +Next, we provide a brief overview of the proposals under review, +followed by the experiments and findings from our performance evaluations.

+

Message Transfer Time and Duplicates

+

Assuming uniform link characteristics, message dissemination to the full-message mesh concludes in +\tau_D \approx (D \times \tau_{tx}) + \tau_p time, where \tau_{tx} = \frac{S}{R}, +with S, R, and \tau_p being the message size, data rate, and link latency, respectively.

+

This simplifies network-wide dissemination time to \tau_N \approx \tau_D \times h, +with h indicating the number of hops along the longest path. +This implies that a tenfold increase in message size results in an eightyfold rise in D \times \tau_{tx} for a mesh with D = 8, +which accumulates across h hops. +This leads to two fundamental problems:

+
  1. A longer contention interval (\tau_D) increases the chance that peers receive the same message from multiple mesh members during that interval, leading to redundant transmissions and more duplicates. Reducing D inadvertently increases h, potentially slowing propagation overall.

  2. Peers are unaware of ongoing message receptions and may generate redundant IWANT requests for messages they are already receiving. Most of these requests are served by early message receivers, which increases message dissemination time by adding workload at peers situated along the optimal path.
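The timing model above can be sanity-checked with quick arithmetic. The numbers below are illustrative (100 Mbps links, 50 ms latency, 5 hops), not the simulation parameters used later in this post:

```python
# Quick arithmetic for the model tau_D = D*tau_tx + tau_p and tau_N = tau_D*h.
# All numbers are illustrative, not the simulation parameters.
D = 8                # mesh degree
R = 100e6 / 8        # 100 Mbps link rate, in bytes per second
tau_p = 0.05         # 50 ms link latency
h = 5                # hops along the longest path

def tau_N(S):
    """Network-wide dissemination time for a message of S bytes."""
    tau_tx = S / R                 # single-transmission time
    tau_D = D * tau_tx + tau_p     # time to cover the full-message mesh
    return tau_D * h

print(tau_N(100e3), tau_N(1000e3))   # ~0.57 s for 100 KB vs ~3.45 s for 1 MB
```

Because tau_tx scales linearly with S and is multiplied by both D and h, large messages quickly dominate the dissemination time.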

Turning to duplicates: a network comprising N peers, +each with a degree D, has a total of \frac{N \times D}{2} edges (links), +as every link connects two peers. +Assuming that a message traverses every link exactly once, +we get at least \frac{N \times D}{2} transmissions.

+

Only N-1 transmissions are necessary for delivering a message to all peers. +As a result, we get \frac{N \times D}{2} - (N-1) duplicates in the network. +We can simplify the average duplicates received by a single peer to \bar{d}_{min} \approx \frac{D}{2} - 1. +Here, \bar{d}_{min} represents the lower bound on average duplicates +because we assume that the send and receive operations are mutually exclusive. +This assumption requires that message transmission times (and link latencies) +are so small that no two peers simultaneously transmit the same message to each other.

+

However, a large message can noticeably increase the contention interval, +which increases the likelihood that many peers will simultaneously transmit the same message to each other.

+

Authors in [14] +explore the upper bound (\bar{d}_{max}) on duplicates. +They argue that a node can forward a received message to a maximum of D-1 peers, +while the original publisher sends it to D peers. +As a result, we get N(D-1)+1 transmissions in the network. +Only N-1 transmissions are necessary to deliver a message to all peers. +The remaining N(D-1)+1-(N-1) transmissions are duplicates, +which simplifies the upper bound on average duplicates received by each peer to \bar{d}_{max} \approx D-2. +This rise indicates that larger messages can lead to more duplicates due to longer contention intervals. +It is essential to highlight that the impact of IWANT/IDONTWANT messages is not considered in the \bar{d} computations above.
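The two bounds can be reproduced directly from the counting argument above; the code below evaluates them for an illustrative network size:

```python
# The duplicate bounds derived above, as code: per-peer averages for a
# network of N peers with mesh degree D (IWANT/IDONTWANT traffic ignored).
def duplicate_bounds(N, D):
    lower_total = N * D / 2 - (N - 1)        # each of N*D/2 links used once
    upper_total = N * (D - 1) + 1 - (N - 1)  # every peer forwards to D-1 peers
    return lower_total / N, upper_total / N

d_min, d_max = duplicate_bounds(N=1500, D=8)
print(d_min, d_max)   # close to D/2 - 1 = 3 and D - 2 = 6
```

For large N the per-peer averages converge to the simplified forms \bar{d}_{min} \approx D/2 - 1 and \bar{d}_{max} \approx D - 2 quoted in the text.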

+

Protocols Considered

+

Push-Pull Phase Transition (PPPT)

+

In PPPT, the authors argue that most redundant transmissions occur during the later stages of message propagation. +As a message traverses the network, +the peers forwarding it should gradually reduce the number of mesh members that directly receive +the message (push) and send immediate IHAVE announcements to the remaining peers in the mesh. +The remaining mesh members can fetch any missing messages using IWANT requests (pull). +The authors also suggest two strategies to estimate the message propagation stage:

+
  1. Include a hop count in the message header to identify the number of hops traversed by that message. When a peer forwards a received message, it performs a pull operation for a subset of mesh members that equals the specified hop count and a push operation for the remaining mesh members.

  2. Infer the message propagation stage from the number of received IHAVE announcements and duplicates for that message, and use this information to balance pull-based and push-based message forwarding.

The authors suggest that instead of simultaneously pushing a message to the selected peers, +sequentially initiating transmission to each peer after a short delay +increases the likelihood of receiving a higher number of IDONTWANT announcements in time.

+

Key Considerations and PoC Implementation

+

The use of hop count is a more effective and straightforward method for identifying the message propagation stage +than relying on the duplicate count. +However, this approach may compromise the publisher's anonymity and reveal information about the publisher and its mesh members. +Additional due diligence may be needed to address these issues.

+

In the PoC implementation [15], +we use hop count to determine the message propagation stage. +When forwarding a message, every peer selects a subset of mesh members +equal to the advertised hop count for pull operation and +forwards the message to the remaining mesh members. +If the advertised hop count exceeds the number of mesh members chosen for message forwarding, +the sender relays the message to all selected peers.
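The hop-count-based split can be sketched as follows. This is a minimal illustration, not the nim-libp2p PoC code; names are hypothetical, and the edge case where the hop count exceeds the mesh size is simplified here (the PoC handles it slightly differently, as described above):

```python
import random

# Sketch of the hop-count-based push/pull split described above.
def split_push_pull(mesh_peers, hop_count):
    """Pull set grows with the message's hop count; once it covers the
    whole mesh, the message is no longer pushed at all."""
    peers = list(mesh_peers)
    random.shuffle(peers)
    n_pull = min(hop_count, len(peers))
    pull_set = peers[:n_pull]   # get an immediate IHAVE, may reply with IWANT
    push_set = peers[n_pull:]   # receive the full message with hop_count + 1
    return push_set, pull_set

push, pull = split_push_pull([f"p{i}" for i in range(8)], hop_count=3)
assert len(push) == 5 and len(pull) == 3
```

Near the publisher (low hop count) almost everything is pushed; deep in the network the forwarding becomes almost entirely pull-based, which is where most redundant transmissions would otherwise occur.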

+

GossipSub v1.4 Proposal

+

The GossipSub v1.4 proposal treats long large-message transfer times (\tau_D) as contention intervals and +argues that most duplicates occur during these intervals for two fundamental reasons:

+
  1. Peers are unaware of ongoing message receptions and may generate redundant IWANT requests for messages they are already receiving.

  2. Peers can send IDONTWANT announcements only after receiving the entire message. However, a large contention interval increases the likelihood that many redundant transmissions will start before IDONTWANT messages are issued.

The GossipSub v1.4 proposal eliminates the contention interval with help from two new control messages: +PREAMBLE and IMRECEIVING. +A PREAMBLE precedes every large message transmission. +Upon receiving a PREAMBLE, a peer learns which message it is receiving and performs two actions:

+
  1. Notify mesh members about the ongoing message reception using an IMRECEIVING announcement. On receiving an IMRECEIVING announcement from a peer, mesh members defer sending the announced message to that peer.

  2. Defer IWANT requests for messages that are currently being received. Peers also limit the outstanding IWANT requests for any message to one.

Key Considerations and PoC Implementation

+

The use of PREAMBLE/IMRECEIVING addresses the limitation of IDONTWANT messages. +For instance, consider a peer X that begins receiving a message m at time t_1. +It can transmit IDONTWANT only after receiving m, i.e., at time t_1 + \tau_D. +Therefore, it cannot cancel any duplicate receptions of m that start before t_1 + \tau_D + \tau_p. +In contrast, IMRECEIVING announcements for m start at t_1 + \Delta, +where \Delta denotes the PREAMBLE processing time and satisfies \Delta \ll \tau_D. +As a result, peer X can eliminate all duplicate receptions of m that start after t_1 + \Delta + \tau_p, +which noticeably reduces duplicates.
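Plugging illustrative numbers into this timing argument shows how much the cancellation window shrinks (the message size, link rate, and \Delta below are assumptions, not values from the proposal):

```python
# Cancellation windows implied by the timing analysis above,
# with illustrative values (1 MB message, 100 Mbps link, 50 ms latency).
S, R = 1e6, 12.5e6   # message size (bytes), link rate (bytes/s)
tau_D = S / R        # 0.08 s to receive the full message
tau_p = 0.05         # link latency
delta = 0.001        # assumed PREAMBLE processing time, delta << tau_D

# Duplicates starting before these instants (relative to t_1) cannot be cancelled.
idontwant_window = tau_D + tau_p     # IDONTWANT-only: 0.13 s
imreceiving_window = delta + tau_p   # with PREAMBLE/IMRECEIVING: 0.051 s
print(idontwant_window, imreceiving_window)
```

With these numbers, the window during which duplicates can still be triggered drops from 130 ms to roughly the link latency itself, and the gap widens further as messages grow.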

+

The use of PREAMBLE also allows deferring IWANT requests for messages a peer is already receiving, +which can also improve message dissemination time by +reducing the workload on peers along the optimal message forwarding path.

+

It is worth mentioning that a malicious peer can exploit this approach by sending a PREAMBLE and +never completing (or deliberately delaying) the promised message transfer. +The optional safety strategy in the GossipSub v1.4 proposal suggests using a peer score threshold +for PREAMBLE processing and a behavior penalty for broken promises. +A timeout strategy helps recover such messages.

+

It is essential to mention that the sending and processing of PREAMBLE and IMRECEIVING messages are optional. +This flexibility allows for the use of custom safety strategies in various implementations. +For example, the ongoing production-grade implementation of GossipSub v1.4 in nim-libp2p +allows peers to ignore PREAMBLEs unless they come from mesh members with higher data rates +(bandwidth estimation becomes trivial with PREAMBLEs) and good peer scores. +This implementation also lets peers choose between a push or pull strategy for handling broken promises.

+

For the performance evaluations in this post, we utilize the PoC implementation of GossipSub v1.4 +[16]. +A complete, production-grade version is currently undergoing testing and validation +[17].

+

GossipSub v2.0 Proposal

+

GossipSub v2.0 introduces a hybrid method for message dissemination +that combines both push and pull strategies through two new control messages: IANNOUNCE and INEED. +These messages are analogous to IHAVE and IWANT messages, respectively. +However, IANNOUNCE messages are issued to the mesh members +immediately after validating a received message without waiting for the heartbeat interval. +Similarly, INEED requests are made exclusively to mesh members, +and a peer generates only one INEED request for a received message.

+

The balance between push and pull approaches is determined by the D_{announce} parameter. +On receiving a message, a peer forwards it to D - D_{announce} mesh members and +sends IANNOUNCE messages to the remaining mesh peers. +On receiving an IANNOUNCE for an unseen message, +a peer can request it using an INEED message.

+

Key Considerations and PoC Implementation

+

The D_{announce} parameter governs the balance between push and pull operations. +Setting D_{announce} = D results in a pull-only operation, +which can eliminate duplicates at the cost of increased message dissemination time. +In contrast, setting D_{announce} to zero reverts to standard GossipSub v1.2 operation. +The authors suggest setting D_{announce} = D-1 to moderately decrease dissemination time +while incurring only a small number of duplicate transmissions.

+

It is important to note that malicious peers can exploit this approach +by delaying or entirely omitting responses to INEED requests. +Similarly, sending INEED requests to suboptimal or overwhelmed peers can further increase message dissemination time. +The authors propose using a timeout strategy and negative peer scoring to address these issues. +If a message transfer does not complete within the specified interval, +the receiver decreases the sender's peer score +and issues a new INEED request to an alternative mesh member.

+

For the performance evaluations in this post, +we utilize the PoC implementation of GossipSub v2.0 +[18] from nim-libp2p. +We set D_{announce} = D-1 and allow any peer to send a single IWANT request for a message +only if it has not previously sent an INEED request for the same message.
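The forwarding split used in this configuration can be sketched as follows; this is an illustrative outline, not the nim-libp2p PoC code, and the helper names are hypothetical:

```python
import random

# Sketch of the GossipSub v2.0 forwarding split with D_announce = D - 1,
# as used in the PoC evaluation.
D, D_ANNOUNCE = 8, 7

def forward_v2(mesh_peers):
    """Eagerly push the full message to D - D_announce peers; send
    IANNOUNCE to the rest, who may reply with a single INEED request."""
    peers = random.sample(list(mesh_peers), min(D, len(mesh_peers)))
    push = peers[: D - D_ANNOUNCE]
    announce = peers[D - D_ANNOUNCE :]
    return push, announce

push, announce = forward_v2([f"p{i}" for i in range(8)])
assert len(push) == 1 and len(announce) == 7
```

With D_announce = D - 1, each hop pushes the message eagerly to exactly one peer, so propagation is mostly announcement-driven and duplicates stay close to zero.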

+

Experiments

+

We conducted a series of experiments under various configurations to evaluate the performance of +the GossipSub v1.4 proposal, the PPPT approach, and the GossipSub v2.0 proposal +against the baseline GossipSub v1.2 protocol. +To support these evaluations, we extended the nim-libp2p implementation to include minimal PoC implementations of the considered protocols +[16, +15, +18]
+and used the Shadow simulator [19] +to carry out performance evaluations.

+

For GossipSub v1.4 and PPPT, we also report results from delayed forwarding, +where peers introduce a short delay before relaying a message to each mesh member. +This delay helps reduce the number of redundant transmissions +by increasing the likelihood of receiving a higher number of IDONTWANT notifications in time.

+

We evaluate performance using network-wide message dissemination time (latency), +network-wide bandwidth utilization (bandwidth), +and the average number of duplicates received by a peer for every transmitted message. +We also report the average number of IWANT requests transmitted by a peer for a single message.

+

For each experiment, we transmit multiple messages in the network. +We average the network-wide dissemination time for these messages to report latency. +Bandwidth refers to the total volume of traffic in the network, +encompassing control messages and data transmissions (including duplicates and IWANT replies). +A peer usually receives multiple copies of any transmitted message; +excluding the first received copy, all others are duplicates. +We compute the average duplicates received by a peer as \frac{1}{NM} \sum_{j=1}^{M} \sum_{i=1}^{N} d_{i,j}, +where N and M denote the number of peers and the number of transmitted messages, respectively, +and d_{i,j} represents the number of duplicates received by peer i for message j. +A similar mechanism computes average IWANT requests.
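The averaging formula can be stated compactly in code; the matrix below is synthetic, not simulation output:

```python
# d_bar = (1 / (N * M)) * sum over peers i and messages j of d[i][j].
def average_duplicates(d):
    """d[i][j] = duplicates received by peer i for message j."""
    N, M = len(d), len(d[0])
    return sum(sum(row) for row in d) / (N * M)

d = [[2, 3],    # synthetic counts for 3 peers and 2 messages
     [1, 2],
     [4, 1]]
print(average_duplicates(d))   # 13 / 6
```

The same double sum, applied to per-message IWANT counts instead of duplicate counts, yields the average IWANT metric reported below.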

+

Three simulation scenarios are considered:

+
  • Scenario 1: The number of publishers and message size are kept constant while the network size gradually increases.

  • Scenario 2: The number of publishers and the network size remain constant while the message size gradually increases.

  • Scenario 3: The number of nodes and message size remain constant while the number of publishers gradually increases.

In all experiments, we transmit multiple messages such that +every publisher sends exactly one message to the network. +After a publisher transmits a message, +each subsequent publisher waits for a specified interval (inter-message delay) before sending the next message.

+

Rotating publishers ensures that every message traverses a different path, +which helps achieve fair performance evaluation. +On the other hand, changing inter-message delays allows for creating varying traffic patterns. +A shorter inter-message delay implies more messages can be in transit simultaneously, +which helps evaluate performance against large message counts. +A longer delay ensures every message is fully disseminated before introducing a new message. +Similarly, increasing message size stresses the network. +As a result, we evaluate performance across a broader range of use cases.

+

The simulation details are presented in the tables below. +The experiments are conducted using the Shadow simulator. +Peer bandwidths and link latencies are uniformly assigned from five variations +in the ranges 40-200 Mbps and 40-130 ms, respectively.

+

Table 1: Simulation Scenarios.

+
| Experiment | No. of Nodes | No. of Publishers | Message Size (KB) | Inter-Message Delay (ms) |
| --- | --- | --- | --- | --- |
| Scenario 1 | 3000, 6000, 9000, 12000 | 7 | 150 | 10000 |
| Scenario 2 | 1500 | 10 | 200, 600, 1000, 1400, 1800 | 10000 |
| Scenario 3 | 1500 | 25, 50, 75, 100, 125 | 50 | 50 |
+

Table 2: Simulation Parameters.

+
| Parameter | Value | Parameter | Value |
| --- | --- | --- | --- |
| D | 8 | D_{low} | 6 |
| D_{lazy} | 6 | D_{high} | 12 |
| Gossip factor | 0.05 | Muxer | yamux |
| Heartbeat interval | 1000 ms | Floodpublish | false |
| Peer Bandwidth | 40-200 Mbps | Link Latency | 40-130 ms |
| D_{announce} (GossipSub v2.0) | 7 | Forward delay in PPPT/v1.4 with delay | 35 ms |
+

Results

+

Scenario 1: Increasing Network Size

+
[Figures: Bandwidth | Latency | Average Duplicates | Average IWANT Requests]
+

Scenario 2: Increasing Message Size

+
[Figures: Bandwidth | Latency | Average Duplicates | Average IWANT Requests]
+

Scenario 3: Increasing Number of Publishers

+
[Figures: Bandwidth | Latency | Average Duplicates | Average IWANT Requests]
+

Findings

+
  • The number of IWANT requests increases with the message size. Limiting ongoing IWANT requests for each message to one can be beneficial. Additionally, message PREAMBLEs can help eliminate IWANT requests for messages that are already being received.

  • Pull-based approaches can substantially reduce bandwidth utilization, but may result in much longer message dissemination times. However, these approaches can achieve simultaneous propagation of multiple messages by implicitly rotating among outgoing messages. As a result, increasing the number of messages yields similar dissemination times.

  • Transitioning from push to pull operation during the later stages of message propagation can reduce bandwidth consumption without compromising latency. However, determining the propagation stage is challenging: methods like hop counts may compromise anonymity, while IHAVE announcements can be misleading. For instance, with large messages, peers may receive IHAVE announcements much earlier than the actual message spreads through the network.

  • Push-based approaches achieve the fastest message dissemination but often produce a higher number of duplicate messages. Employing mechanisms like PREAMBLE/IMRECEIVING messages for guided elimination of duplicates can significantly reduce bandwidth consumption. This reduction not only minimizes redundant transmissions but also decreases the overall message dissemination time by lessening the workload on peers located along optimal message forwarding paths.

Please feel free to join the discussion and leave feedback regarding this post in the +VAC forum.

+

References

+

[1] GossipSub v1.1 Specifications

+

[2] GossipSub v1.2 Specifications

+

[3] Number of Duplicate Messages in Ethereum’s Gossipsub Network

+

[4] Impact of IDONTWANT in the Number of Duplicates

+

[5] PREAMBLE and IMRECEIVING for Improved Large Message Handling

+

[6] Staggering and Fragmentation for Improved Large Message Handling

+

[7] GossipSub for Big Messages

+

[8] FullDAS: Towards Massive Scalability with 32MB Blocks and Beyond

+

[9] Choke Extension for GossipSub

+

[10] PPPT: Fighting the GossipSub Overhead with Push-Pull Phase Transition

+

[11] GossipSub v1.4 Specifications Proposal

+

[12] GossipSub v2.0 Specifications Proposal

+

[13] Libp2p Implementation in nim

+

[14] The Streamr Network: Performance and Scalability

+

[15] PPPT: PoC Implementation in nim-libp2p

+

[16] GossipSub v1.4: PoC Implementation in nim-libp2p

+

[17] GossipSub v1.4: Production-Grade Implementation in nim-libp2p

+

[18] GossipSub v2.0: PoC Implementation in nim-libp2p

+

[19] nim-libp2p GossipSub Test Node

]]>
+ + Umar Farooq + +
<![CDATA[Zerokit optimizations: A performance journey]]> https://vac.dev/rlog/2025-zerokit-perf @@ -5373,159 +5729,4 @@ to devices with limited resources.

Franck
- - <![CDATA[Waku for All Decentralized Applications and Infrastructures]]> - https://vac.dev/rlog/waku-for-all - - 2022-11-08T00:00:00.000Z - - Waku is an open communication protocol and network. Decentralized apps and infrastructure can use Waku for their -communication needs. It is designed to enable dApps and decentralized infrastructure projects to have secure, private, -scalable communication. Waku is available in several languages and platforms, from Web to mobile to desktop to cloud. -Initially, we pushed Waku adoption in the Web ecosystem; along the way, we learned that Waku is usable in a variety of complex applications -and infrastructure projects. We have prioritized our efforts to make Waku usable on various platforms and environments.

- -

Background

-

We have built Waku to be the communication layer for Web3. Waku is a collection of protocols to choose from for your -messaging needs. It enables secure, censorship-resistant, privacy-preserving, spam-protected communication for its users. -It is designed to run on any device, from mobile to the cloud.

-

Waku is available on many systems and environments and used by several applications and SDKs for decentralized communications.

-

This involved research efforts in various domains: conversational security, protocol incentivization, zero-knowledge, -etc.

-

Waku uses novel technologies. Hence, we knew that early dogfooding of Waku was necessary. Even if research -was still in progress [1]. Thus, as soon as Waku protocols and software were usable, we started to push -for the adoption of Waku. This started back in 2021.

-

Waku is the communication component of the Web3 trifecta. This trifecta was Ethereum (contracts), Swarm -(storage) and Whisper (communication). Hence, it made sense to first target dApps which already use one of the pillars: -Ethereum.

-

As most dApps are web apps, we started the development of js-waku for the browser.

-

Once ready, we reached out to dApps to integrate Waku, added prizes to hackathons -and gave talks.

-

We also assumed we would see patterns in the usage of Waku, that we would facilitate with the help of -SDKs.

-

Finally, we created several web apps: -examples -and PoCs.

-

By discussing with Waku users and watching it being used, we learned a few facts:

-
    -
  1. The potential use cases for Waku are varied and many:
  2. -
- -
    -
  1. Many projects are interested in having an embedded chat in their dApp,
  2. -
  3. There are complex applications that need Waku as a solution. Taking RAILGUN as an example:
  4. -
-
    -
  • Web wallet
  • -
  • + React Native mobile wallet
  • -
  • + NodeJS node/backend.
  • -
-

(1) means that it is not that easy to create SDKs for common use cases.

-

(2) was a clear candidate for an SDK. Yet, building a chat app is a complex task. Hence, the Status app team tackled -this in the form of Status Web.

-

Finally, (3) was the most important lesson. We learned that multi-tier applications need Waku for decentralized and -censorship-resistant communications. For these projects, js-waku is simply not enough. They need Waku to work in their -Golang backend, Unity desktop game and React Native mobile app.

-

We understood that we should see the whole Waku software suite -(js-waku, -nwaku, -go-waku, -waku-react-native, -etc) as an asset for its success. -That we should not limit outreach, marketing, documentation efforts to the web, but target all platforms.

-

From a market perspective, we identified several actors:

-
    -
  • platforms: Projects that uses Waku to handle communication,
  • -
  • operators: Operators run Waku nodes and are incentivized to do so,
  • -
  • developers: Developers are usually part of a platforms or solo hackers learning Web3,
  • -
  • contributors: Developers and researchers with interests in decentralization, privacy, censorship-resistance, -zero-knowledge, etc.
  • -
-

Waku for All Decentralized Applications and Infrastructures

-

In 2022, we shifted our focus to make the various Waku implementations usable and used.

-

We made Waku multi-plaform.

-

We shifted Waku positioning to leverage all Waku implementations and better serve the user's needs:

- -

We are consolidating the documentation for all implementations on a single website (work in progress) -to improve developer experience.

-

This year, we also started the operator outreach effort to push for users to run their own Waku nodes. We have -recently concluded our first operator trial run. -Nwaku's documentation, stability and performance has improved. It is now easier to -run your own Waku node.

-

Today, operator wannabes most likely run their own nodes to support or use the Waku network. -We are dogfooding -Waku RLN, our novel economic spam protection protocol, -and looking at incentivizing the Waku Store protocol. -This way, we are adding reasons to run your own Waku node.

-

For those who were following us in 2021, know that we are retiring the Waku Connect branding in favour of the Waku -branding.

-

Waku for Your Project

-

As discussed, Waku is now available on various platforms. The question remains: How can Waku benefit your project?

-

Here are a couple of use cases we recently investigated:

-

Layer-2 Decentralization

-

Most roll-ups ([2], [3]) use a centralized sequencer or equivalent. Running several sequencers is not as straightforward as running several execution nodes. -Waku can help:

-
    -
  • Provide a neutral marketplace for a mempool: If sequencers compete for L2 tx fees, they may not be incentivized to -share transactions with other sequencers. Waku nodes can act as a neutral network to enable all sequences to access -transactions.
  • -
  • Enable censorship-resistant wallet<>L2 communication,
  • -
  • Provide rate limiting mechanism for spam protection: Using RLN to prevent DDOS.
  • -
-

Device pairing and communication

-

With Waku Device Pairing, a user can setup a secure encrypted communication channel -between their devices. As this channel would operate over Waku, it would be censorship-resistant and privacy preserving. -These two devices could be:

-
    -
  • Ethereum node and mobile phone to access a remote admin panel,
  • -
  • Alice's phone and Bob's phone for any kind of secure communication,
  • -
  • Mobile wallet and desktop/browser dApp for transaction and signature exchange.
  • -
-

Check js-waku#950 for the latest update on this.

-

Get Involved

-

Developer? Grab any of the Waku implementations and integrate it in your app: https://waku.org/platform.

-

Researcher? See https://vac.dev/contribute to participate in Waku research.


Tech-savvy? Try to run your own node: https://waku.org/operator.


Otherwise, play around with the various web examples.


If you want to help, we are hiring!


Moving Forward


What you can expect next:


References

  • [1] Waku is modular; it is a suite of protocols; hence, some Waku protocols may be mature while new protocols are still being designed. This means that research is ongoing while Waku is already used in production.
  • [2] The Optimism Foundation runs the only block producer on the Optimism network.
  • [3] The top 10 L2s are documented as having a centralized operator.
Franck

Authors

  • Circe

    1
  • Dean

    3
  • Franck

    3
  • Hanno Cornelius

    2
  • Daniel

    2
  • Oskar

    12
  • Richard

    1
  • s1fr0

    1
  • Sanaz

    1
  • Moudy

    4
  • Aaryamann

    3
  • Umar Farooq

    3
  • Marvin

    3
  • Aleksei Vambol

    1
  • BenPH

    1
  • Ivan

    1
  • Gabriel

    1
  • Vac

    1

Scaling libp2p GossipSub for Large Messages: An Evaluation of Performance Improvement Proposals

by
16 min read

The original GossipSub design emphasizes robustness, with less focus on message sizes. However, emerging use cases, such as Ethereum's EIP-4844 and data availability sampling (DAS), require rapid propagation of high data volumes, often in the form of large messages. Many ongoing research efforts attempt to find solutions for efficiently forwarding large messages through the GossipSub network. This post provides a concise overview and performance evaluation of some research initiatives aimed at improving GossipSub to meet the performance needs of modern P2P networks.


Overview

For each subscribed topic, GossipSub nodes maintain a full-message mesh of peers with a target degree $D$, bounded by lower and upper thresholds $D_{low}$ and $D_{high}$, respectively. In parallel, a metadata-only (gossip) mesh includes an additional $K$ peers for metadata exchange. All messages flow through the full-message mesh, and metadata flows through the gossip mesh.

The metadata contains IHAVE messages, announcing IDs of seen messages. Upon receiving an IHAVE announcement about an unseen message ID, a receiver responds with an IWANT request to retrieve the announced message. IHAVE announcements serve multiple purposes:

  1. Offer additional resilience in situations involving non-conforming peers or network partitions.
  2. Speed up message propagation by allowing far-off peers to fetch overdue messages.

Since GossipSub v1.1 [1], replying to IWANT requests is optional, to safeguard honest peers from adversaries. This change encourages peers to make redundant IWANT requests for the same message. While this arrangement works well for small messages, it can be inefficient for bigger ones. This inefficiency arises from an increase in duplicates and a higher number of IWANT requests due to longer transmission times for large messages. As a result, we observe a significant rise in bandwidth utilization and message dissemination times across the network.

IDONTWANT messages in GossipSub v1.2 [2] help reduce some duplicates. However, further efforts are necessary to mitigate this problem [3, 4]. That is why many recent works focus on improving GossipSub's performance when handling large messages [5, 6, 7, 8, 9, 10, 11, 12].

In this post, we evaluate the push-pull phase transition (PPPT) mechanism [10], the GossipSub v1.4 proposal [11, 5], and the GossipSub v2.0 proposal [12] against GossipSub v1.2. For this purpose, we implement minimal proof-of-concept (PoC) versions of these protocols in nim-libp2p [13] and use the Shadow simulator to provide performance evaluation results.

We begin by discussing the issues of message dissemination times and duplicate messages in GossipSub. Next, we provide a brief overview of the proposals under review, followed by the experiments and findings from our performance evaluations.


Message Transfer Time and Duplicates

Assuming uniform link characteristics, message dissemination to the full-message mesh concludes in $\tau_D \approx (D \times \tau_{tx}) + \tau_p$ time, where $\tau_{tx} = \frac{S}{R}$, with $S$, $R$, and $\tau_p$ being the message size, data rate, and link latency, respectively.

This simplifies network-wide dissemination time to $\tau_{N} \approx \tau_D \times h$, with $h$ indicating the number of hops along the longest path. This implies that a tenfold increase in message size results in an eightyfold rise in $D \times \tau_{tx}$ (relative to the original single-copy transmission time $\tau_{tx}$) for a mesh with $D = 8$, which accumulates across $h$ hops. This leads to two fundamental problems:

  1. A longer contention interval ($\tau_D$) increases the chance that peers receive the same message from multiple mesh members during that interval, leading to redundant transmissions and more duplicates. Reducing $D$ inadvertently increases $h$, potentially slowing propagation overall.
  2. Peers are unaware of ongoing message receptions and may generate redundant IWANT requests for the messages they are already receiving. Most of these requests are entertained by early message receivers, which increases message dissemination time by increasing the workload at peers situated along the optimal path.
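The transfer-time arithmetic above can be checked with a short sketch (the data rate, latency, and message size below are illustrative values of our own, not the simulation parameters):

```python
# Transfer-time model: tau_D ~ D * tau_tx + tau_p, with tau_tx = S / R.
D = 8                  # mesh degree
R = 100e6 / 8          # data rate R: 100 Mbps, in bytes per second
tau_p = 0.05           # link latency, seconds

def tau_tx(S):
    """Single-copy transmission time for an S-byte message."""
    return S / R

def tau_D(S):
    """Time for one peer to serve its whole mesh, one hop."""
    return D * tau_tx(S) + tau_p

S = 150_000            # 150 KB message
# A tenfold larger message makes D * tau_tx eighty times the original
# single-copy transmission time: 8 * 10 = 80.
ratio = (D * tau_tx(10 * S)) / tau_tx(S)
print(round(ratio, 3))  # 80.0
```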

Talking about duplicates: a network comprising $N$ peers, each with a degree $D$, has a total of $\frac{N \times D}{2}$ edges (links), as every link connects two peers. Assuming that a message traverses every link exactly once, we get at least $\frac{N \times D}{2}$ transmissions.

Only $N-1$ transmissions are necessary for delivering a message to all peers. As a result, we get $\frac{N \times D}{2} - (N-1)$ duplicates in the network. We can simplify the average number of duplicates received by a single peer to $\bar{d}_{min} \approx \frac{D}{2} - 1$. Here, $\bar{d}_{min}$ represents the lower bound on average duplicates because we assume that the send and receive operations are mutually exclusive. This assumption requires that message transmission times (and link latencies) are so small that no two peers simultaneously transmit the same message to each other.

However, a large message can noticeably increase the contention interval, which increases the likelihood that many peers will simultaneously transmit the same message to each other.

The authors in [14] explore the upper bound ($\bar{d}_{max}$) on duplicates. They argue that a node can forward a received message to a maximum of $D-1$ peers, while the original publisher sends it to $D$ peers. As a result, we get $N(D-1)+1$ transmissions in the network. Only $N-1$ transmissions are necessary to deliver a message to all peers. The remaining $N(D-1)+1-(N-1)$ transmissions are duplicates, which simplifies the upper bound on average duplicates received by each peer to $\bar{d}_{max} \approx D-2$. This rise indicates that larger messages can lead to more duplicates due to longer contention intervals. It is essential to highlight that the impact of IWANT/IDONTWANT messages is not considered in the $\bar{d}$ computations above.
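A quick numeric check (our own illustrative sketch) confirms that the exact per-peer counts match the simplified bounds $\bar{d}_{min} \approx \frac{D}{2}-1$ and $\bar{d}_{max} \approx D-2$:

```python
# Per-peer duplicate bounds from the transmission counts derived above.
def dup_lower_bound(N, D):
    # N*D/2 total transmissions, N-1 useful ones -> ~ D/2 - 1 per peer
    return (N * D / 2 - (N - 1)) / N

def dup_upper_bound(N, D):
    # N*(D-1)+1 total transmissions, N-1 useful ones -> ~ D - 2 per peer
    return (N * (D - 1) + 1 - (N - 1)) / N

N, D = 10_000, 8
print(dup_lower_bound(N, D))  # ~3, i.e. D/2 - 1
print(dup_upper_bound(N, D))  # ~6, i.e. D - 2
```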


Protocols Considered


Push-Pull Phase Transition (PPPT)

In PPPT, the authors argue that most redundant transmissions occur during the later stages of message propagation. As a message traverses the network, the peers forwarding it should gradually reduce the number of mesh members that directly receive the message (push) and send immediate IHAVE announcements to the remaining peers in the mesh. The remaining mesh members can fetch any missing messages using IWANT requests (pull). The authors also suggest two strategies to estimate the message propagation stage:

  1. Include a hop count in the message header to identify the number of hops traversed by that message. When a peer forwards a received message, it performs a pull operation for a subset of mesh members that equals the specified hop count and a push operation for the remaining mesh members.
  2. Infer the message propagation stage by looking at the number of received IHAVE announcements and duplicates for that message, and use this information to choose a balance between pull-based and push-based message forwarding.

The authors suggest that instead of simultaneously pushing a message to the selected peers, sequentially initiating transmission to each peer after a short delay enhances the likelihood of timely receiving a higher number of IDONTWANT requests.


Key Considerations and PoC Implementation

The use of hop count is a more effective and straightforward method for identifying the message propagation stage than relying on the duplicate count. However, this approach may compromise the publisher's anonymity and reveal information about the publisher and its mesh members. Additional due diligence may be needed to address these issues.

In the PoC implementation [15], we use hop count to determine the message propagation stage. When forwarding a message, every peer selects a subset of mesh members equal to the advertised hop count for the pull operation and forwards the message to the remaining mesh members. If the advertised hop count exceeds the number of mesh members chosen for message forwarding, the sender relays the message to all selected peers.
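The hop-count split can be sketched as follows (illustrative Python, not the nim-libp2p PoC itself; `send` and `send_ihave` are assumed callbacks):

```python
import random

# PPPT-style forwarding: the further a message has travelled (hop count),
# the more mesh members get an immediate IHAVE announcement (pull)
# instead of the full message (push).
def forward_pppt(msg, mesh, send, send_ihave):
    peers = [p for p in mesh if p != msg["sender"]]
    random.shuffle(peers)
    n_pull = min(msg["hop_count"], len(peers))   # pull set grows per hop
    pull_peers, push_peers = peers[:n_pull], peers[n_pull:]
    fwd = dict(msg, hop_count=msg["hop_count"] + 1)
    for p in push_peers:
        send(p, fwd)               # push: relay the full message
    for p in pull_peers:
        send_ihave(p, msg["id"])   # pull: announce; peer may send IWANT
    return push_peers, pull_peers
```

At hop 0 (the publisher) everything is pushed; deep in the network, most mesh members only receive an announcement.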


GossipSub v1.4 Proposal

The GossipSub v1.4 proposal considers longer large-message transfer times ($\tau_D$) as contention intervals and argues that most duplicates occur during these intervals for two fundamental reasons:

  1. Peers are unaware of ongoing message receptions and may generate redundant IWANT requests for messages they are already receiving.
  2. Peers can send IDONTWANT announcements only after receiving the entire message. However, a large contention interval increases the likelihood that many redundant transmissions will start before IDONTWANT messages are issued.

The GossipSub v1.4 proposal eliminates the contention interval with help from two new control messages: PREAMBLE and IMRECEIVING. A PREAMBLE precedes every large message transmission. Upon receiving a preamble, a peer learns about the message it is receiving and performs two actions:

  1. Notify mesh members about the ongoing message reception using an IMRECEIVING announcement. On receiving an IMRECEIVING announcement from a peer, mesh members defer sending the announced message to that peer.
  2. Defer IWANT requests for messages that are currently being received. Peers also limit the outstanding IWANT requests for any message to one.
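The receiver-side logic can be sketched as follows (illustrative Python; the class and callback names are ours, not the proposal's wire format):

```python
# GossipSub v1.4-style receiver: on a PREAMBLE, announce the ongoing
# reception to the mesh and stop requesting the same message elsewhere.
class Receiver:
    def __init__(self, mesh, send):
        self.mesh = mesh               # full-message mesh members
        self.send = send               # send(peer, msg_type, msg_id)
        self.receiving = set()         # message IDs currently arriving
        self.outstanding_iwant = set()

    def on_preamble(self, msg_id, sender):
        self.receiving.add(msg_id)
        for p in self.mesh:
            if p != sender:
                self.send(p, "IMRECEIVING", msg_id)

    def maybe_iwant(self, msg_id, holder):
        # Defer IWANT for messages already being received, and keep at
        # most one outstanding IWANT per message.
        if msg_id in self.receiving or msg_id in self.outstanding_iwant:
            return False
        self.outstanding_iwant.add(msg_id)
        self.send(holder, "IWANT", msg_id)
        return True
```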

Key Considerations and PoC Implementation

The use of PREAMBLE/IMRECEIVING addresses the limitation of IDONTWANT messages. For instance, consider a peer $X$ that begins receiving a message $m$ at time $t_1$. It can transmit IDONTWANT only after receiving $m$, i.e., at time $t_1+\tau_D$. Therefore, it cannot cancel any duplicate receptions of $m$ that start before $t_1 + \tau_D + \tau_p$. In contrast, IMRECEIVING announcements for $m$ start at $t_1 + \Delta$, where $\Delta$ denotes the PREAMBLE processing time and satisfies $\Delta \ll \tau_D$. As a result, peer $X$ can eliminate all duplicate receptions of $m$ that start after $t_1 + \Delta + \tau_p$, which noticeably reduces duplicates.
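Plugging in illustrative numbers (our own, chosen only to show the orders of magnitude) makes the difference in cancellation windows concrete:

```python
# Window after t1 during which duplicate sends of m cannot be cancelled:
# IDONTWANT must wait for the full message; IMRECEIVING only for the preamble.
tau_D = 1.0      # large-message transfer time, seconds (illustrative)
tau_p = 0.05     # link latency, seconds
delta = 0.001    # PREAMBLE processing time, seconds (delta << tau_D)

idontwant_window = tau_D + tau_p     # duplicates starting earlier get through
imreceiving_window = delta + tau_p   # roughly twenty times shorter here

print(idontwant_window, imreceiving_window)
```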

The use of PREAMBLE also allows deferring IWANT requests for messages a peer is already receiving, which can also improve message dissemination time by reducing the workload on peers along the optimal message forwarding path.

It is worth mentioning that a malicious peer can exploit this approach by sending a PREAMBLE and never completing (or deliberately delaying) the promised message transfer. The optional safety strategy in the GossipSub v1.4 proposal suggests using a peer score threshold for PREAMBLE processing and a behavior penalty for broken promises. A timeout strategy helps recover such messages.

It is essential to mention that the sending and processing of PREAMBLE and IMRECEIVING messages is optional. This flexibility allows for the use of custom safety strategies in various implementations. For example, the ongoing production-grade implementation of GossipSub v1.4 in nim-libp2p allows peers to ignore PREAMBLEs unless they come from mesh members with higher data rates (bandwidth estimation becomes trivial with PREAMBLEs) and good peer scores. This implementation also lets peers choose between a push or pull strategy for handling broken promises.

For the performance evaluations in this post, we utilize the PoC implementation of GossipSub v1.4 [16]. A complete, production-grade version is currently undergoing testing and validation [17].


GossipSub v2.0 Proposal

GossipSub v2.0 introduces a hybrid method for message dissemination that combines both push and pull strategies through two new control messages: IANNOUNCE and INEED. These messages are analogous to IHAVE and IWANT messages, respectively. However, IANNOUNCE messages are issued to the mesh members immediately after validating a received message, without waiting for the heartbeat interval. Similarly, INEED requests are made exclusively to mesh members, and a peer generates only one INEED request for a received message.

The balance between the push and pull approaches is determined by the $D_{announce}$ parameter. On receiving a message, a peer forwards it to $D - D_{announce}$ mesh members and sends IANNOUNCE messages to the remaining mesh peers. On receiving an IANNOUNCE for an unseen message, a peer can request it using an INEED message.
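In sketch form (illustrative Python; function and callback names are ours, not the proposal's):

```python
import random

# GossipSub v2.0-style hybrid forwarding: push the full message to
# D - D_announce mesh members and send IANNOUNCE to the rest, who may
# pull it with a single INEED request.
def forward_v2(msg_id, payload, mesh, d_announce, send, send_iannounce):
    peers = list(mesh)
    random.shuffle(peers)
    n_push = max(len(peers) - d_announce, 0)
    for p in peers[:n_push]:
        send(p, msg_id, payload)       # push the full message
    for p in peers[n_push:]:
        send_iannounce(p, msg_id)      # announce only

def on_iannounce(msg_id, sender, seen, requested, send_ineed):
    # Request an announced message at most once, and only if unseen.
    if msg_id in seen or msg_id in requested:
        return False
    requested.add(msg_id)
    send_ineed(sender, msg_id)
    return True
```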


Key Considerations and PoC Implementation

The $D_{announce}$ parameter governs the balance between push and pull operations. Setting $D_{announce} = D$ results in a pull-only operation, which can eliminate duplicates at the cost of increased message dissemination time. In contrast, setting $D_{announce}$ to zero reverts to standard GossipSub v1.2 operation. The authors suggest setting $D_{announce} = D-1$ to moderately decrease dissemination time while incurring only a small number of duplicate transmissions.

It is important to note that malicious peers can exploit this approach by delaying or entirely omitting responses to INEED requests. Similarly, sending INEED requests to suboptimal or overwhelmed peers can further increase message dissemination time. The authors propose using a timeout strategy and negative peer scoring to address these issues. If a message transfer does not complete within the specified interval, the receiver decreases the sender's peer score and issues a new INEED request to an alternative mesh member.

For the performance evaluations in this post, we utilize the PoC implementation of GossipSub v2.0 [18] from nim-libp2p. We set $D_{announce} = D-1$ and allow any peer to send a single IWANT request for a message, only if it has not previously sent an INEED request for the same message.


Experiments

We conducted a series of experiments under various configurations to evaluate the performance of the GossipSub v1.4 proposal, the PPPT approach, and the GossipSub v2.0 proposal against the baseline GossipSub v1.2 protocol. To support these evaluations, we extended the nim-libp2p implementation to include minimal PoC implementations of the considered protocols [16, 15, 18] and used the Shadow simulator [19] to carry out performance evaluations.

For GossipSub v1.4 and PPPT, we also report results from delayed forwarding, where peers introduce a short delay before relaying a message to every mesh member. This delay helps reduce the number of redundant transmissions by increasing the likelihood of timely receiving a higher number of IDONTWANT notifications.

We evaluate performance using network-wide message dissemination time (latency), network-wide bandwidth utilization (bandwidth), and the average number of duplicates received by a peer for every transmitted message. We also report the average number of IWANT requests transmitted by a peer for a single message.

For each experiment, we transmit multiple messages in the network. We average the network-wide dissemination time for these messages to report latency. Bandwidth refers to the total volume of traffic in the network, encompassing control messages and data transmissions (including duplicates and IWANT replies). A peer usually receives multiple copies of any transmitted message. Excluding the first received copy, all others are duplicates. We compute the average number of duplicates received by a peer as $\frac{1}{N M} \sum_{j=1}^{M} \sum_{i=1}^{N} d_{i,j}$, where $N$ and $M$ denote the number of peers and the number of transmitted messages, respectively, and $d_{i,j}$ represents the number of duplicates received by peer $i$ for message $j$. A similar mechanism computes average IWANT requests.
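The averaging used for the duplicate metric is straightforward; a toy example (with our own illustrative numbers):

```python
# Average duplicates per peer per message: (1/(N*M)) * sum_{i,j} d[i][j],
# where d[i][j] is the number of duplicate copies of message j at peer i.
def avg_duplicates(d):
    N, M = len(d), len(d[0])   # peers, messages
    return sum(d[i][j] for i in range(N) for j in range(M)) / (N * M)

# Toy example: 3 peers, 2 messages.
d = [[2, 3],
     [1, 0],
     [4, 2]]
print(avg_duplicates(d))  # 12 / 6 = 2.0
```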


Three simulation scenarios are considered:

  • Scenario 1: The number of publishers and message size are kept constant while the network size gradually increases.
  • Scenario 2: The number of publishers and the network size remain constant while the message size gradually increases.
  • Scenario 3: The number of nodes and message size remain constant while the number of publishers gradually increases.

In all experiments, we transmit multiple messages such that every publisher sends exactly one message to the network. After a publisher transmits a message, each subsequent publisher waits for a specified interval (the inter-message delay) before sending the next message.

Rotating publishers ensures that every message traverses a different path, which helps achieve a fair performance evaluation. On the other hand, changing inter-message delays allows for creating varying traffic patterns. A shorter inter-message delay implies more messages can be in transit simultaneously, which helps evaluate performance against large message counts. A longer delay ensures every message is fully disseminated before a new message is introduced. Similarly, increasing the message size stresses the network. As a result, we evaluate performance across a broader range of use cases.

The simulation details are presented in the tables below. The experiments are conducted using the Shadow simulator. We uniformly set peer bandwidths between 40-200 Mbps and link latencies between 40-130 milliseconds, in five variations.


Table 1: Simulation Scenarios.

| Experiment | No. of Nodes | No. of Publishers | Message Size (KB) | Inter-Message Delay (ms) |
| Scenario 1 | 3000, 6000, 9000, 12000 | 7 | 150 | 10000 |
| Scenario 2 | 1500 | 10 | 200, 600, 1000, 1400, 1800 | 10000 |
| Scenario 3 | 1500 | 25, 50, 75, 100, 125 | 50 | 50 |

Table 2: Simulation Parameters.

| Parameter | Value | Parameter | Value |
| $D$ | 8 | $D_{low}$ | 6 |
| $D_{lazy}$ | 6 | $D_{high}$ | 12 |
| Gossip factor | 0.05 | Muxer | yamux |
| Heartbeat interval | 1000 ms | Floodpublish | false |
| Peer bandwidth | 40-200 Mbps | Link latency | 40-130 ms |
| $D_{announce}$ (GossipSub v2.0) | 7 | Forward delay in PPPT/v1.4 with delay | 35 ms |

Results


Scenario 1: Increasing Network Size

[Figures: Bandwidth, Latency, Average Duplicates, and Average IWANT Requests vs. network size]

Scenario 2: Increasing Message Size

[Figures: Bandwidth, Latency, Average Duplicates, and Average IWANT Requests vs. message size]

Scenario 3: Increasing Number of Publishers

[Figures: Bandwidth, Latency, Average Duplicates, and Average IWANT Requests vs. number of publishers]

Findings

  • The number of IWANT requests increases with the message size. Limiting ongoing IWANT requests for each message to one can be beneficial. Additionally, the use of message PREAMBLEs can help eliminate IWANT requests for messages that are already being received.
  • Pull-based approaches can substantially reduce bandwidth utilization, but may result in much longer message dissemination times. However, these approaches can achieve simultaneous propagation of multiple messages by implicitly rotating among outgoing messages. As a result, increasing the number of messages yields similar dissemination times.
  • Transitioning from push to pull operation during the later stages of message propagation can reduce bandwidth consumption without compromising latency. However, determining the propagation stage is challenging. Methods like hop counts may compromise anonymity, while using IHAVE announcements can be misleading. For instance, in the case of large messages, peers may receive IHAVE announcements much earlier than the actual message spreads through the network.
  • Push-based approaches achieve the fastest message dissemination but often produce a higher number of duplicate messages. Employing mechanisms like PREAMBLE/IMRECEIVING messages for the guided elimination of duplicate messages can significantly reduce bandwidth consumption. This reduction not only minimizes redundant transmissions but also decreases the overall message dissemination time by lessening the workload on peers located along optimal message forwarding paths.

Please feel free to join the discussion and leave feedback regarding this post in the VAC forum.


References

[1] GossipSub v1.1 Specifications

[2] GossipSub v1.2 Specifications

[3] Number of Duplicate Messages in Ethereum's Gossipsub Network

[4] Impact of IDONTWANT in the Number of Duplicates

[5] PREAMBLE and IMRECEIVING for Improved Large Message Handling

[6] Staggering and Fragmentation for Improved Large Message Handling

[7] GossipSub for Big Messages

[8] FullDAS: Towards Massive Scalability with 32MB Blocks and Beyond

[9] Choke Extension for GossipSub

[10] PPPT: Fighting the GossipSub Overhead with Push-Pull Phase Transition

[11] GossipSub v1.4 Specifications Proposal

[12] GossipSub v2.0 Specifications Proposal

[13] Libp2p Implementation in Nim

[14] The Streamr Network: Performance and Scalability

[15] PPPT: PoC Implementation in nim-libp2p

[16] GossipSub v1.4: PoC Implementation in nim-libp2p

[17] GossipSub v1.4: Production-Grade Implementation in nim-libp2p

[18] GossipSub v2.0: PoC Implementation in nim-libp2p

[19] nim-libp2p GossipSub Test Node


Research Blog




Zerokit optimizations: A performance journey

by
10 min read

Zerokit is a toolkit providing powerful zero-knowledge utilities, including a means to answer the question "How do you prevent spam when every message is anonymous?". Its use of the Merkle hash @@ -33,6 +40,6 @@ place where Zerokit comes through, fast and efficient, ready to face the world.

Nim in Logos - 1st Edition

by
7 min read

Welcome to the first edition of Nim in Logos — a newsletter covering major Nim features from Logos' perspective.

If you have comments or suggestions, feel free to reach out to the authors directly or start a thread in the Logos Discord server.



Waku for All Decentralized Applications and Infrastructures

by
7 min read

Waku is an open communication protocol and network. Decentralized apps and infrastructure can use Waku for their communication needs. It is designed to enable dApps and decentralized infrastructure projects to have secure, private, scalable communication. Waku is available in several languages and platforms, from Web to mobile to desktop to cloud. While we initially pushed Waku adoption in the Web ecosystem, we learned that Waku is usable in a variety of complex applications and infrastructure projects, and we have prioritized making Waku usable on various platforms and environments.


Building Privacy-Protecting Infrastructure

by
19 min read

What is privacy-protecting infrastructure? Why do we need it, and how can we build it? We'll look at Waku, the communication layer for Web3, and see how it uses ZKPs to incentivize and protect the Waku network. We'll also look at Zerokit, a library that makes it easier to use ZKPs in different environments. After reading this, I hope you'll better understand the importance of privacy-protecting infrastructure and how we can build it.

Introducing nwaku

by
11 min read

Introducing nwaku, a Nim-based Waku v2 client, including a summary of recent developments and preview of current and future focus areas.

Privacy-preserving p2p economic spam protection in Waku v2

by
21 min read

This post gives you an overview of how spam protection can be achieved in Waku Relay through rate-limiting nullifiers. We cover a summary of spam-protection methods in centralized and p2p systems, followed by an overview and details of the economic spam-protection method. Open issues and future steps are discussed at the end.
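The core of the rate-limiting-nullifier idea can be shown with toy field arithmetic: each message a member sends in an epoch reveals one point on a secret line, so one message leaks nothing, while a second message in the same epoch yields two points, exposing the secret and making the spammer's stake slashable. This is a simplified sketch, not the actual RLN construction (the real scheme uses a different field, hashed share derivation, and a ZK circuit):

```python
# Toy RLN-style secret sharing: points on y = slope*x + secret (mod P).
P = 2**61 - 1  # a Mersenne prime standing in for the real field modulus

def share(secret, slope, x):
    """One message reveals one point on the member's epoch line."""
    return x, (slope * x + secret) % P

def recover(p1, p2):
    """Two points from the same epoch determine the line -> secret."""
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) * pow(x2 - x1, -1, P) % P
    return (y1 - slope * x1) % P

secret, slope = 123456789, 987654321  # slope is per-epoch in real RLN
m1 = share(secret, slope, x=11)       # first message in epoch: safe
m2 = share(secret, slope, x=42)       # second message in same epoch
assert recover(m1, m2) == secret      # secret exposed -> stake slashable
```

The economic deterrent is exactly this algebra: spamming within an epoch is equivalent to publishing your own secret key.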


Waku v2 Update

by
8 min read

A research log. Read on to find out what is going on with Waku v2, a messaging protocol. What has been happening? What is coming up next?

From Kademlia to Discv5

by
9 min read

A quick history of discovery in peer-to-peer networks, along with a look into discv4 and discv5, detailing what they are, how they work and where they differ.
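The metric that ties this history together is Kademlia's XOR distance between node IDs, which discv4 and discv5 inherit for routing-table bucketing: peers are grouped by the position of the highest bit in which their ID differs from ours. A small sketch, with IDs shown as small integers for readability:

```python
# Kademlia-style XOR distance and log-distance bucketing, the metric
# discv4/discv5 inherit for their routing tables.

def xor_distance(a, b):
    """Distance between two node IDs is their bitwise XOR."""
    return a ^ b

def bucket_index(self_id, peer_id):
    """Index of the highest differing bit; -1 means identical IDs."""
    d = xor_distance(self_id, peer_id)
    return d.bit_length() - 1 if d else -1

assert xor_distance(0b1010, 0b0011) == 0b1001
assert bucket_index(0b1010, 0b0011) == 3  # bit 3 is the highest to differ
```

Because XOR is symmetric and satisfies the triangle inequality, lookups can halve the remaining distance at each hop, which is what gives both Kademlia and discv5 their logarithmic lookup cost.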

Waku Update

by
6 min read

A research log. What's the current state of Waku? How many users does it support? What are the bottlenecks? What's next?


Vac - A Rough Overview

by
6 min read

Vac is a modular peer-to-peer messaging stack, with a focus on secure messaging. Overview of terms, stack and open problems.
