Merge branch 'master' of github.com:darkrenaissance/darkfi into dao_demo

This commit is contained in:
lunar-mining
2022-09-08 08:38:57 +02:00
33 changed files with 729 additions and 238 deletions

View File

@@ -54,14 +54,14 @@ fix: token_lists zkas $(PROOFS_BIN)
clippy: token_lists zkas $(PROOFS_BIN)
RUSTFLAGS="$(RUSTFLAGS)" $(CARGO) clippy --release --all-features --all
rustdoc: token_lists
rustdoc: token_lists zkas
RUSTFLAGS="$(RUSTFLAGS)" $(CARGO) doc --release --workspace --all-features \
--no-deps --document-private-items
test: token_lists zkas $(PROOFS_BIN) test-tx
RUSTFLAGS="$(RUSTFLAGS)" $(CARGO) test --release --all-features --all
test-tx:
test-tx: zkas
RUSTFLAGS="$(RUSTFLAGS)" $(CARGO) run --release --features=node,zkas --example tx
clean:

View File

@@ -5,9 +5,10 @@
[![Manifesto - unsystem](https://img.shields.io/badge/Manifesto-unsystem-informational?logo=minutemailer&logoColor=white&style=flat-square)](https://dark.fi/manifesto.html)
[![Book - mdbook](https://img.shields.io/badge/Book-mdbook-orange?logo=gitbook&logoColor=white&style=flat-square)](https://darkrenaissance.github.io/darkfi)
## Connect to darkfi IRC
## Connect to DarkFi IRC
Follow [installation instructions](https://darkrenaissance.github.io/darkfi/misc/ircd.html#installation) for the p2p IRC daemon.
Follow the [installation instructions](https://darkrenaissance.github.io/darkfi/misc/ircd.html#installation)
for the P2P IRC daemon.
## Build
@@ -30,8 +31,8 @@ The following dependencies are also required:
| freetype2 libs | libfreetype6-dev |
| expat xml lib | libexpat1-dev |
Users of Debian-based systems (e.g. Ubuntu) can simply run the following
to install the required dependencies:
Users of Debian-based systems (e.g. Ubuntu) can simply run the
following to install the required dependencies:
```shell
# apt-get update
@@ -40,20 +41,21 @@ to install the required dependencies:
libexpat1-dev
```
Alternatively, users can chose one of the automated scripts
under `contrib` folder by executing:
Alternatively, users can try using the automated script under the
`contrib` folder by executing:
```shell
% bash contrib/*_setup.sh
% sh contrib/dependency_setup.sh
```
The following setup script are provided:
* **mac_setup.sh**: installation using brew (brew will be installed if not present).
* **void_setup.sh**: Xbps dependencies for Void Linux.
The script will try to recognize which system you are running,
and install dependencies accordingly. In case it does not find your
package manager, please consider adding support for it into the script
and sending a patch.
To build the necessary binaries, we can just clone the repo, and use the
provided Makefile to build the project. This will download the trusted
setup params, and compile the source code.
To build the necessary binaries, we can just clone the repo, and use
the provided Makefile to build the project. This will download the
trusted setup params, and compile the source code.
```shell
% git clone https://github.com/darkrenaissance/darkfi

bin/dao/daod/demo-spec.md Normal file (251 lines added)
View File

@@ -0,0 +1,251 @@
---
title: DAO demo architecture
author: jstark
---
This document outlines a simple demo to showcase the smart contract
schema underlying the initial DAO MVP. We have tried to mimic the
basic DarkFi architecture while remaining as simple as possible.
We do not have a blockchain, p2p network, or encrypted wallet database
in this highly simplified demo. It is just a local network of 4 nodes
and a relayer. The values are all stored in memory.
# Layers
**bin**
Located in darkfi/bin/dao.
* **daod/**: receives rpc requests and operates a `client`.
* **dao-cli/**: command-line interface that receives input and sends rpc requests.
* **relayerd/**: receives transactions on TCP and relays them to all nodes.
**src**
Located in darkfi/bin/dao/daod.
* **contract/**: source code for dao and money contracts.
* **util/**: demo-wide utilities.
* **state/**: stores dao and money states.
* **tx/**: underlying types required by transactions, function calls and call data.
* **node/**: a dao full node.
**node**
A dao node containing `client` and `wallet` submodules.
* **client/**: operates a wallet and performs state transition and validate methods.
* **wallet/**: owns and operates secret values.
**proof**
Located in darkfi/bin/dao/proof. Contains the zk proofs.
# Command Flow
The following assumes that a user has already compiled the zk contracts
by running `make`.
This requires a `Makefile` as follows:
```
ZK_SRC_FILES := $(wildcard proof/*.zk)
ZK_BIN_FILES := $(patsubst proof/%.zk, proof/%.zk.bin, $(ZK_SRC_FILES))
daod: $(ZK_BIN_FILES)
cargo run --release
proof/%.zk.bin: proof/%.zk
zkas $<
```
We will also need to write a shell script that opens 9 terminals and
runs the following:
*Terminal 1:* relayerd.
*Terminal 2-5:* 4 instances of daod.
*Terminal 6-9:* 4 instances of dao-cli.
`relayerd` and `daod` should be sent to the background, so the demo will
consist visually of 4 terminals running dao-cli.
## Start relayer
1. `relayerd` starts a listener for all TCP ports specified in config.
## Initialize DAO
Note: this happens automatically on the first run of `daod`.
1. `daod`: starts a listener on the relayer TCP port.
2. `daod`: creates a client and calls `client.init()`.
3. `client`: creates a money wallet.
4. `money-wallet`: generates cashier, faucet keys.
5. `client`: gets public keys from wallet and calls `state.new()`.
6. `state`: creates ZkContractTable, StateRegistry.
7. `state`: loads all zk binaries and saves them in ZkContractTable.
8. `state`: creates a new money/dao state and registers them in StateRegistry.
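A minimal Rust sketch of steps 6-8 follows, assuming simplified stand-ins
for `ZkContractTable` and `StateRegistry` (the real daod types, names and
signatures will differ):

```rust
use std::any::Any;
use std::collections::HashMap;

/// Step 7: maps a proof name to its compiled zkas bincode.
struct ZkContractTable {
    bincodes: HashMap<String, Vec<u8>>,
}

impl ZkContractTable {
    fn load_zk_binary(&mut self, name: &str, path: &str) -> std::io::Result<()> {
        self.bincodes.insert(name.to_string(), std::fs::read(path)?);
        Ok(())
    }
}

/// Step 8: each contract registers its state under a contract id.
struct StateRegistry {
    states: HashMap<String, Box<dyn Any>>,
}

impl StateRegistry {
    fn register(&mut self, contract_id: &str, state: Box<dyn Any>) {
        self.states.insert(contract_id.to_string(), state);
    }
}

struct MoneyState; // placeholder for the money contract state
struct DaoState; // placeholder for the dao contract state

fn init() -> std::io::Result<()> {
    let mut zk_bins = ZkContractTable { bincodes: HashMap::new() };
    // "proof/dao-mint.zk.bin" is a hypothetical example path.
    zk_bins.load_zk_binary("dao-mint", "proof/dao-mint.zk.bin")?;

    let mut states = StateRegistry { states: HashMap::new() };
    states.register("money", Box::new(MoneyState));
    states.register("dao", Box::new(DaoState));
    Ok(())
}
```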
## Stage 1: create DAO
1. `dao-cli`: sends a `create()` rpc request to daod.
2. `daod`: receives rpc request and calls `client.create()`.
3. `client`: creates a dao wallet.
4. `dao-wallet`: specifies the dao params.
5. `dao-wallet`: creates a dao keypair, bulla blind, and signature secret.
**build sequence.**
Note: Builders differ according to the FuncCall, but the basic sequence
is the same (see the sketch at the end of this stage).
6. `dao-wallet`: build: creates dao_contract::mint::wallet::Builder.
7. `dao-wallet`: generates a FuncCall from builder.build().
8. `dao-wallet`: adds FuncCall to a vector.
9. `dao-wallet`: signs the vector of FuncCalls.
**send sequence.**
10. `dao-wallet`: creates a Transaction.
11. `dao-wallet`: sends the Transaction to the relayer.
12. `relayer`: receives a Transaction on one of its connections.
13. `relayer`: relays the Transaction to all connected nodes.
**recv sequence.**
14. `daod`: receives a Transaction on its relayerd listener.
15. `daod`: sends the Transaction to the client.
**validate sequence.**
16. `client`: validate: creates an empty vector of updates.
17. `client`: loops through all FuncCalls in the Transaction.
18. `client`: runs a match statement on the FUNC_ID.
19. `client`: finds mint FUNC_ID and runs a state transition function.
20. `client`: pushes the result to `Vec<Update>`.
21. `client`: outside the loop, atomically applies all updates.
22. `client`: calls zk_verify() on the Transaction.
23. `client`: verifies signatures.
------------------------------------------------------------------------
24. `client`: sends Transaction to the relayer.
25. `relayer`: receives Transaction and relays.
* TODO: `dao-wallet`: waits until the Transaction is confirmed. (how?)
27. `dao-wallet`: looks up the dao state and calls witness().
28. `dao-wallet`: gets the dao bulla from the Transaction.
29. `dao-cli`: prints "Created DAO {}".
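Below is a hedged sketch of the build and validate sequences above, with
simplified stand-ins for FuncCall, Transaction and Update (the real daod
types also carry zk proofs and typed call data):

```rust
type FuncId = u64;
const DAO_MINT_FUNC_ID: FuncId = 1; // stand-in constant

struct FuncCall {
    func_id: FuncId,
    call_data: Vec<u8>,
}

struct Transaction {
    func_calls: Vec<FuncCall>,
    // signatures over the serialized func_calls (step 9)
    signatures: Vec<Vec<u8>>,
}

/// An update produced by a successful state transition.
enum Update {
    DaoMint(Vec<u8>),
}

fn dao_mint_state_transition(call: &FuncCall) -> Result<Update, String> {
    // The real function checks the call data against the dao state.
    Ok(Update::DaoMint(call.call_data.clone()))
}

/// Steps 16-23: collect one Update per FuncCall, then apply them all
/// at once so a failing call leaves every state untouched.
fn validate(tx: &Transaction) -> Result<(), String> {
    let mut updates = Vec::new();
    for call in &tx.func_calls {
        match call.func_id {
            DAO_MINT_FUNC_ID => updates.push(dao_mint_state_transition(call)?),
            _ => return Err("unknown FUNC_ID".into()),
        }
    }
    for _update in updates {
        // atomically apply each update to the StateRegistry
    }
    // then: tx.zk_verify(&zk_bins) and tx.verify_sigs(), as in the
    // demo() diff further down.
    Ok(())
}
```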
## Stage 2: fund DAO
* TODO: for the demo it might be better to call mint() first and then
fund(), passing the values into fund()
Here we are creating a treasury token and sending it to the DAO.
1. `dao-cli`: sends a `fund()` rpc request to daod.
2. `daod`: receives rpc request and calls `client.fund()`.
3. `client`: creates a treasury token with a random token ID and supply.
Note: dao-wallet must manually track coins to retrieve coins belonging
to its private key.
4. `dao-wallet`: looks up the money state and calls state.wallet_cache.track().
5. `money-wallet`: sets the spend hook to dao_contract::exec::FUNC_ID.
6. `money-wallet`: sets user_data to the dao_bulla.
* TODO: how does it get the dao_bulla? Must be stored somewhere.
7. `money-wallet`: specifies the dao public key and treasury token in BuilderOutputInfo.
8. `money-wallet`: runs the build sequence for money::transfer.
9. `money-wallet`: creates a Transaction and sends it.
10. `relayer`: receives Transaction and relays.
11. `daod`: receives a Transaction and sends to client.
12. `client`: runs the validate sequence.
Note: here we get all coins associated with the private key.
13. `dao-wallet`: looks up the state and calls WalletCache.get_received().
14. `dao-wallet`: checks the coin is valid by recreating the Coin (see the sketch after this list).
15. `daod`: sends the token ID and balance to dao-cli.
16. `dao-cli`: displays the data using a pretty table.
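A tiny sketch of the step-14 check: the wallet accepts a coin only if
recreating it from the decrypted note values matches. DefaultHasher is a
stand-in, and this field list is an assumption; the demo derives the Coin
with a zk-friendly commitment:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in commitment over (pubkey, value, token_id, blind).
fn recreate_coin(pubkey: &[u8; 32], value: u64, token_id: u64, blind: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (pubkey, value, token_id, blind).hash(&mut h);
    h.finish()
}

fn coin_is_valid(received: u64, pubkey: &[u8; 32], value: u64, token_id: u64, blind: u64) -> bool {
    recreate_coin(pubkey, value, token_id, blind) == received
}
```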
## Stage 3: airdrop
1. `dao-cli`: calls keygen()
2. `daod`: calls client.keygen().
3. `daod`: calls money-wallet.keygen().
4. `money-wallet`: creates a new keypair.
5. `money-wallet`: looks up the money_contract State and calls WalletCache.track().
6. `money-wallet`: returns the public key.
7. `dao-cli`: prints the public key.
Note: do this 3 times to generate 3 public keys for the different daod instances.
8. `dao-cli`: calls mint()
9. `daod`: calls client.mint().
10. `client`: creates a governance token with a random token ID and supply.
11. `dao-cli`: prints "created token {} with supply {}"
12. `dao-cli`: calls airdrop() and passes a value and a pubkey.
13. `dao-wallet`: runs the build sequence for money::transfer.
14. `dao-wallet`: creates a Transaction and sends it.
15. `relayer`: receives Transaction and relays.
16. `daod`: receives a Transaction and sends to client.
17. `client`: runs the validate sequence.
18. `money-wallet`: state.wallet_cache.get_received()
19. `money-wallet`: checks the coin is valid by recreating the Coin.
20. `daod`: sends the token ID and balance to dao-cli.
21. `dao-cli`: prints "received coin {} with value {}".
* TODO: money-wallet must keep track of Coins and have a flag for whether or not they are spent (see the sketch below).
* A HashMap of <Coin, bool>?
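A hedged sketch for the TODO above, assuming Coin can be made Eq + Hash
(the real type may need a wrapper, as noted for SecretKey in
wallet_cache.rs below):

```rust
use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash)]
struct Coin([u8; 32]); // simplified stand-in

#[derive(Default)]
struct CoinTracker {
    coins: HashMap<Coin, bool>, // coin -> spent?
}

impl CoinTracker {
    fn receive(&mut self, coin: Coin) {
        self.coins.insert(coin, false);
    }

    fn mark_spent(&mut self, coin: &Coin) {
        if let Some(spent) = self.coins.get_mut(coin) {
            *spent = true;
        }
    }

    /// Coins still available for building the next transfer.
    fn unspent(&self) -> impl Iterator<Item = &Coin> {
        self.coins.iter().filter(|(_, spent)| !**spent).map(|(c, _)| c)
    }
}
```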
## Stage 4: create proposal
* TODO: maybe for the demo we should just hardcode a user/proposal recipient.
1. `dao-cli`: calls propose() and enters a user pubkey and an amount.
2. `dao-wallet`: runs the build sequence for dao_contract::propose.
3. `dao-wallet`: specifies the user pubkey, amount and token ID in the Proposal.
4. `dao-cli`: prints "Created proposal to send {} xDRK to {}".
5. `dao-wallet`: creates a Transaction and sends it.
6. `relayer`: receives the Transaction and relays it.
7. `daod`: receives a Transaction and sends it to the client.
8. `client`: runs the validate sequence.
* TODO: how does everyone have access to the DAO private key?
9. `dao-wallet`: reads the received proposal and tries to decrypt the Note.
10. `dao-wallet`: sends the decrypted values to daod.
11. `dao-cli`: prints "Proposal is now active".
## Stage 5: vote
1. `dao-cli`: calls vote() and enters a vote option (yes or no) and an amount.
2. `daod`: calls client.vote().
3. `money-wallet`: gets money_leaf_position and money_merkle_path.
4. `money-wallet`: runs the build sequence for dao_contract::vote.
5. `money-wallet`: specifies dao_keypair in the vote_keypair field.
* TODO: this implies that money-wallet is able to access private values in dao-wallet.
6. `money-wallet`: signs and sends.
7. `relayer`: receives Transaction and relays.
8. `daod`: receives a Transaction and sends to client.
9. `client`: runs the validate sequence.
10. `dao-wallet`: tries to decrypt the Vote.
11. `dao-cli`: prints "Received vote {} value {}"
Note: repeat 3 times with different values and vote options.
* TODO: ignore section re: vote commitments?
* TODO: determine the outcome as yes_votes_value / all_votes_value,
e.g. when the quorum is reached, print "Quorum reached! Outcome {}",
or just hardcode it for a fixed number of voters (see the sketch below).
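A small sketch of that outcome check: tally value-weighted votes and
report once enough total value has voted. The quorum and approval-ratio
parameters are illustrative assumptions:

```rust
#[derive(Default)]
struct Tally {
    yes_value: u64,
    all_value: u64,
}

impl Tally {
    fn add_vote(&mut self, yes: bool, value: u64) {
        self.all_value += value;
        if yes {
            self.yes_value += value;
        }
    }

    /// None until the quorum (minimum total voting value) is reached.
    fn outcome(&self, quorum: u64, approval_ratio: f64) -> Option<bool> {
        if self.all_value < quorum {
            return None
        }
        Some(self.yes_value as f64 / self.all_value as f64 >= approval_ratio)
    }
}
```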
## Stage 6: Executing the proposal
1. `dao-cli`: calls exec()
* TODO: how does dao have access to user data?
2. `dao-wallet`: gets money_leaf_position and money_merkle_path.
3. `dao-wallet`: specifies the user_keypair and proposal amount in the 1st output.
4. `dao-wallet`: specifies the change in the 2nd output.
5. `dao-wallet`: runs the build sequence for money_contract::transfer.
6. `dao-wallet`: runs the build sequence for dao_contract::exec.
7. `dao-wallet`: signs the transaction and sends it.
8. `relayer`: receives Transaction and relays.
9. `daod`: receives a Transaction and sends to client.
10. `client`: runs the validate sequence.

View File

@@ -21,6 +21,7 @@ pub struct OwnCoin {
pub struct WalletCache {
// Normally this would be a HashMap, but SecretKey is not Hash-able
// TODO: This can be HashableBase
cache: Vec<(SecretKey, Vec<OwnCoin>)>,
}
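A minimal sketch of how a Vec-backed cache like the one above can work
until SecretKey is hashable: every lookup is a linear scan over the
(key, coins) pairs. The types here are simplified stand-ins, not the
daod API:

```rust
#[derive(Clone, PartialEq)]
struct SecretKey([u8; 32]); // stand-in
#[derive(Clone)]
struct OwnCoin; // stand-in

struct WalletCache {
    cache: Vec<(SecretKey, Vec<OwnCoin>)>,
}

impl WalletCache {
    /// Start tracking coins for this key.
    fn track(&mut self, secret: SecretKey) {
        if !self.cache.iter().any(|(k, _)| *k == secret) {
            self.cache.push((secret, Vec::new()));
        }
    }

    /// Drain the coins received for this key.
    fn get_received(&mut self, secret: &SecretKey) -> Vec<OwnCoin> {
        for (k, coins) in self.cache.iter_mut() {
            if k == secret {
                return std::mem::take(coins)
            }
        }
        Vec::new()
    }
}
```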

View File

@@ -433,6 +433,7 @@ pub async fn demo() -> Result<()> {
}
tx.zk_verify(&zk_bins);
tx.verify_sigs();
// Wallet stuff

View File

@@ -11,12 +11,12 @@ pub enum RpcError {
InvalidKeypair = -32106,
UnknownSlot = -32107,
TxBuildFail = -32108,
NetworkNameError = -32109,
// NetworkNameError = -32109,
ParseError = -32110,
TxBroadcastFail = -32111,
NotYetSynced = -32112,
InvalidAddressParam = -32113,
InvalidAmountParam = -32114,
// InvalidAmountParam = -32114,
DecryptionFailed = -32115,
}
@@ -30,12 +30,12 @@ fn to_tuple(e: RpcError) -> (i64, String) {
RpcError::InvalidKeypair => "Invalid keypair",
RpcError::UnknownSlot => "Did not find slot",
RpcError::TxBuildFail => "Failed building transaction",
RpcError::NetworkNameError => "Unknown network name",
// RpcError::NetworkNameError => "Unknown network name",
RpcError::ParseError => "Parse error",
RpcError::TxBroadcastFail => "Failed broadcasting transaction",
RpcError::NotYetSynced => "Blockchain not yet synced",
RpcError::InvalidAddressParam => "Invalid address parameter",
RpcError::InvalidAmountParam => "invalid amount parameter",
// RpcError::InvalidAmountParam => "invalid amount parameter",
RpcError::DecryptionFailed => "Decryption failed",
};

View File

@@ -50,8 +50,9 @@ enum ArgsSubCommand {
fn print_patches(value: &Vec<serde_json::Value>) {
for res in value {
let res = res.as_array().unwrap();
let (title, changes) = (res[0].as_str().unwrap(), res[1].as_str().unwrap());
println!("FILE: {}", title);
let res: Vec<&str> = res.iter().map(|r| r.as_str().unwrap()).collect();
let (title, workspace, changes) = (res[0], res[1], res[2]);
println!("WORKSPACE: {} FILE: {}", workspace, title);
println!("{}", changes);
println!("----------------------------------");
}

View File

@@ -7,8 +7,8 @@
## Sets Author name
# author="name"
## Secret Key To Encrypt/Decrypt Patches
#secret = "86MGNN31r3VxT4ULMmhQnMtV8pDnod339KwHwHCfabG2"
## Workspaces
# workspaces = ["darkfi:86MGNN31r3VxT4ULMmhQnMtV8pDnod339KwHwHCfabG2"]
## Raft net settings
[net]

View File

@@ -11,9 +11,11 @@ use darkfi::{
Error,
};
use crate::Patch;
pub struct JsonRpcInterface {
sender: async_channel::Sender<(String, bool, Vec<String>)>,
receiver: async_channel::Receiver<Vec<Vec<(String, String)>>>,
receiver: async_channel::Receiver<Vec<Vec<Patch>>>,
}
#[async_trait]
@@ -36,10 +38,25 @@ impl RequestHandler for JsonRpcInterface {
}
}
fn patch_to_tuple(p: &Patch, colorize: bool) -> (String, String, String) {
(p.path.to_owned(), p.workspace.to_owned(), if colorize { p.colorize() } else { p.to_string() })
}
fn printable_patches(
patches: Vec<Vec<Patch>>,
colorize: bool,
) -> Vec<Vec<(String, String, String)>> {
let mut response = vec![];
for ps in patches {
response.push(ps.iter().map(|p| patch_to_tuple(p, colorize)).collect())
}
response
}
impl JsonRpcInterface {
pub fn new(
sender: async_channel::Sender<(String, bool, Vec<String>)>,
receiver: async_channel::Receiver<Vec<Vec<(String, String)>>>,
receiver: async_channel::Receiver<Vec<Vec<Patch>>>,
) -> Self {
Self { sender, receiver }
}
@@ -60,6 +77,7 @@ impl JsonRpcInterface {
}
let response = self.receiver.recv().await.unwrap();
let response = printable_patches(response, true);
JsonResponse::new(json!(response), id).into()
}
@@ -80,6 +98,7 @@ impl JsonRpcInterface {
}
let response = self.receiver.recv().await.unwrap();
let response = printable_patches(response, false);
JsonResponse::new(json!(response), id).into()
}

View File

@@ -65,11 +65,11 @@ pub struct Args {
#[structopt(long, default_value = "NONE")]
pub author: String,
/// Secret Key To Encrypt/Decrypt Patches
#[structopt(long, default_value = "")]
pub secret: String,
#[structopt(long)]
pub workspaces: Vec<String>,
/// Generate A New Secret Key
#[structopt(long)]
pub keygen: bool,
pub generate: bool,
/// Clean all the local data in docs path
/// (BE CAREFUL) Check the docs path in the config file before running this
#[structopt(long)]
@@ -90,6 +90,28 @@ pub struct EncryptedPatch {
payload: Vec<u8>,
}
fn get_workspaces(settings: &Args, docs_path: &Path) -> Result<FxHashMap<String, SalsaBox>> {
let mut workspaces = FxHashMap::default();
for workspace in settings.workspaces.iter() {
let workspace: Vec<&str> = workspace.split(':').collect();
let (workspace, secret) = (workspace[0], workspace[1]);
let bytes: [u8; 32] = bs58::decode(secret)
.into_vec()?
.try_into()
.map_err(|_| Error::ParseFailed("Parse secret key failed"))?;
let secret = crypto_box::SecretKey::from(bytes);
let public = secret.public_key();
let salsa_box = crypto_box::SalsaBox::new(&public, &secret);
workspaces.insert(workspace.to_string(), salsa_box);
create_dir_all(docs_path.join(workspace))?;
}
Ok(workspaces)
}
fn encrypt_patch(
patch: &Patch,
salsa_box: &SalsaBox,
@@ -126,9 +148,9 @@ fn str_to_chars(s: &str) -> Vec<&str> {
s.graphemes(true).collect::<Vec<&str>>()
}
fn path_to_id(path: &str) -> String {
fn path_to_id(path: &str, workspace: &str) -> String {
let mut hasher = sha2::Sha256::new();
hasher.update(path);
hasher.update(&format!("{}{}", path, workspace));
bs58::encode(hex::encode(hasher.finalize())).into_string()
}
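As a quick usage note (assuming the function above is in scope), the new
workspace argument means the same path hashes to a distinct id per
workspace, so documents no longer collide across workspaces:

```rust
#[test]
fn workspace_changes_id() {
    // The same file in two different workspaces gets different ids.
    assert_ne!(path_to_id("index.md", "darkfi"), path_to_id("index.md", "other"));
}
```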
@@ -177,11 +199,11 @@ struct Darkwiki {
settings: DarkWikiSettings,
#[allow(clippy::type_complexity)]
rpc: (
async_channel::Sender<Vec<Vec<(String, String)>>>,
async_channel::Sender<Vec<Vec<Patch>>>,
async_channel::Receiver<(String, bool, Vec<String>)>,
),
raft: (async_channel::Sender<EncryptedPatch>, async_channel::Receiver<EncryptedPatch>),
salsa_box: SalsaBox,
workspaces: FxHashMap<String, SalsaBox>,
}
impl Darkwiki {
@@ -202,17 +224,19 @@ impl Darkwiki {
}
}
patch = self.raft.1.recv().fuse() => {
self.on_receive_patch(&patch?)?;
for (workspace, salsa_box) in self.workspaces.iter() {
if let Ok(patch) = decrypt_patch(&patch.clone()?, &salsa_box) {
info!("[{}] Receive a {:?}", workspace, patch);
self.on_receive_patch(&patch)?;
}
}
}
}
}
}
fn on_receive_patch(&self, received_patch: &EncryptedPatch) -> Result<()> {
let received_patch = decrypt_patch(received_patch, &self.salsa_box)?;
info!("Receive a {:?}", received_patch);
fn on_receive_patch(&self, received_patch: &Patch) -> Result<()> {
let sync_id_path = self.settings.datastore_path.join("sync").join(&received_patch.id);
let local_id_path = self.settings.datastore_path.join("local").join(&received_patch.id);
@@ -233,7 +257,7 @@ impl Darkwiki {
}
sync_patch.timestamp = received_patch.timestamp;
sync_patch.author = received_patch.author;
sync_patch.author = received_patch.author.clone();
save_json_file::<Patch>(&sync_id_path, &sync_patch)?;
} else if !received_patch.base.is_empty() {
save_json_file::<Patch>(&sync_id_path, &received_patch)?;
@@ -248,43 +272,62 @@ impl Darkwiki {
files: Vec<String>,
rng: &mut OsRng,
) -> Result<()> {
let (patches, local, sync, merge) = self.update(dry, files)?;
let mut local: Vec<Patch> = vec![];
let mut sync: Vec<Patch> = vec![];
let mut merge: Vec<Patch> = vec![];
if !dry {
for patch in patches {
info!("Send a {:?}", patch);
let encrypt_patch = encrypt_patch(&patch, &self.salsa_box, rng)?;
self.raft.0.send(encrypt_patch).await?;
for (workspace, salsa_box) in self.workspaces.iter() {
let (patches, l, s, m) = self.update(
dry,
&self.settings.docs_path.join(workspace),
files.clone(),
workspace,
)?;
local.extend(l);
sync.extend(s);
merge.extend(m);
if !dry {
for patch in patches {
info!("Send a {:?}", patch);
let encrypt_patch = encrypt_patch(&patch, &salsa_box, rng)?;
self.raft.0.send(encrypt_patch).await?;
}
}
}
let local: Vec<(String, String)> =
local.iter().map(|p| (p.path.to_owned(), p.colorize())).collect();
let sync: Vec<(String, String)> =
sync.iter().map(|p| (p.path.to_owned(), p.colorize())).collect();
let merge: Vec<(String, String)> =
merge.iter().map(|p| (p.path.to_owned(), p.colorize())).collect();
self.rpc.0.send(vec![local, sync, merge]).await?;
Ok(())
}
async fn on_receive_restore(&self, dry: bool, files_name: Vec<String>) -> Result<()> {
let patches = self.restore(dry, files_name)?;
let patches: Vec<(String, String)> =
patches.iter().map(|p| (p.path.to_owned(), p.to_string())).collect();
let mut patches: Vec<Patch> = vec![];
for (workspace, _) in self.workspaces.iter() {
let ps = self.restore(
dry,
&self.settings.docs_path.join(workspace),
&files_name,
workspace,
)?;
patches.extend(ps);
}
self.rpc.0.send(vec![patches]).await?;
Ok(())
}
fn restore(&self, dry: bool, files_name: Vec<String>) -> Result<Vec<Patch>> {
fn restore(
&self,
dry: bool,
docs_path: &Path,
files_name: &[String],
workspace: &str,
) -> Result<Vec<Patch>> {
let local_path = self.settings.datastore_path.join("local");
let docs_path = self.settings.docs_path.clone();
let mut patches = vec![];
@@ -294,6 +337,10 @@ impl Darkwiki {
let file_path = local_path.join(&file_id);
let local_patch: Patch = load_json_file(&file_path)?;
if local_patch.workspace != workspace {
continue
}
if !files_name.is_empty() && !files_name.contains(&local_patch.path.to_string()) {
continue
}
@@ -305,7 +352,7 @@ impl Darkwiki {
}
if !dry {
self.save_doc(&local_patch.path, &local_patch.to_string())?;
self.save_doc(&local_patch.path, &local_patch.to_string(), workspace)?;
}
patches.push(local_patch);
@@ -314,7 +361,13 @@ impl Darkwiki {
Ok(patches)
}
fn update(&self, dry: bool, files_name: Vec<String>) -> Result<Patches> {
fn update(
&self,
dry: bool,
docs_path: &Path,
files_name: Vec<String>,
workspace: &str,
) -> Result<Patches> {
let mut patches: Vec<Patch> = vec![];
let mut local_patches: Vec<Patch> = vec![];
let mut sync_patches: Vec<Patch> = vec![];
@@ -322,7 +375,6 @@ impl Darkwiki {
let local_path = self.settings.datastore_path.join("local");
let sync_path = self.settings.datastore_path.join("sync");
let docs_path = self.settings.docs_path.clone();
// save and compare docs in darkwiki and local dirs
// then merged with sync patches if any received
@@ -342,10 +394,10 @@ impl Darkwiki {
continue
}
let doc_id = path_to_id(doc_path);
let doc_id = path_to_id(doc_path, workspace);
// create new patch
let mut new_patch = Patch::new(doc_path, &doc_id, &self.settings.author);
let mut new_patch = Patch::new(doc_path, &doc_id, &self.settings.author, workspace);
// check for any changes found with local doc and darkwiki doc
if let Ok(local_patch) = load_json_file::<Patch>(&local_path.join(&doc_id)) {
@@ -381,7 +433,7 @@ impl Darkwiki {
let sync_patch_t = new_patch.transform(&sync_patch);
new_patch = new_patch.merge(&sync_patch_t);
if !dry {
self.save_doc(doc_path, &new_patch.to_string())?;
self.save_doc(doc_path, &new_patch.to_string(), workspace)?;
}
merge_patches.push(new_patch.clone());
}
@@ -410,6 +462,10 @@ impl Darkwiki {
let file_path = sync_path.join(&file_id);
let sync_patch: Patch = load_json_file(&file_path)?;
if sync_patch.workspace != workspace {
continue
}
if is_delete_patch(&sync_patch) {
if local_path.join(&sync_patch.id).exists() {
sync_patches.push(sync_patch.clone());
@@ -435,7 +491,7 @@ impl Darkwiki {
}
if !dry {
self.save_doc(&sync_patch.path, &sync_patch.to_string())?;
self.save_doc(&sync_patch.path, &sync_patch.to_string(), workspace)?;
save_json_file(&local_path.join(file_id), &sync_patch)?;
}
@@ -451,13 +507,21 @@ impl Darkwiki {
let file_path = local_path.join(&file_id);
let local_patch: Patch = load_json_file(&file_path)?;
if local_patch.workspace != workspace {
continue
}
if !files_name.is_empty() && !files_name.contains(&local_patch.path.to_string()) {
continue
}
if !docs_path.join(&local_patch.path).exists() {
let mut new_patch =
Patch::new(&local_patch.path, &local_patch.id, &self.settings.author);
let mut new_patch = Patch::new(
&local_patch.path,
&local_patch.id,
&self.settings.author,
&local_patch.workspace,
);
new_patch.add_op(&OpMethod::Delete(local_patch.to_string().len() as u64));
patches.push(new_patch.clone());
@@ -473,8 +537,8 @@ impl Darkwiki {
Ok((patches, local_patches, sync_patches, merge_patches))
}
fn save_doc(&self, path: &str, edit: &str) -> Result<()> {
let path = self.settings.docs_path.join(path);
fn save_doc(&self, path: &str, edit: &str, workspace: &str) -> Result<()> {
let path = self.settings.docs_path.join(workspace).join(path);
if let Some(p) = path.parent() {
if !p.exists() && !p.to_str().unwrap().is_empty() {
create_dir_all(p)?;
@@ -492,7 +556,7 @@ async fn realmain(settings: Args, executor: Arc<Executor<'_>>) -> Result<()> {
if settings.refresh {
println!("Removing local docs in: {:?} (yes/no)? ", docs_path);
let mut confirm = String::new();
stdin().read_line(&mut confirm).ok().expect("Failed to read line");
stdin().read_line(&mut confirm).expect("Failed to read line");
let confirm = confirm.to_lowercase();
let confirm = confirm.trim();
@@ -512,25 +576,43 @@ async fn realmain(settings: Args, executor: Arc<Executor<'_>>) -> Result<()> {
create_dir_all(datastore_path.join("local"))?;
create_dir_all(datastore_path.join("sync"))?;
if settings.keygen {
info!("Generating a new secret key");
let mut rng = crypto_box::rand_core::OsRng;
let secret_key = SecretKey::generate(&mut rng);
let encoded = bs58::encode(secret_key.as_bytes());
println!("Secret key: {}", encoded.into_string());
if settings.generate {
println!("Generating a new workspace");
loop {
println!("Name for the new workspace: ");
let mut workspace = String::new();
stdin().read_line(&mut workspace).expect("Failed to read line");
let workspace = workspace.to_lowercase();
let workspace = workspace.trim();
if workspace.is_empty() || workspace.len() < 3 {
error!("Wrong workspace, try again");
continue
}
let mut rng = crypto_box::rand_core::OsRng;
let secret_key = SecretKey::generate(&mut rng);
let encoded = bs58::encode(secret_key.as_bytes());
create_dir_all(docs_path.join(workspace))?;
println!("workspace: {}:{}", workspace, encoded.into_string());
println!("Please add it to the config file.");
break
}
return Ok(())
}
let bytes: [u8; 32] = bs58::decode(settings.secret)
.into_vec()?
.try_into()
.map_err(|_| Error::ParseFailed("Parse secret key failed"))?;
let secret = crypto_box::SecretKey::from(bytes);
let public = secret.public_key();
let salsa_box = crypto_box::SalsaBox::new(&public, &secret);
let workspaces = get_workspaces(&settings, &docs_path)?;
if workspaces.is_empty() {
error!("Please add at least on workspace to the config file.");
println!("Run `$ darkwikid --generate` to generate new workspace.");
return Ok(())
}
let (rpc_sx, rpc_rv) = async_channel::unbounded::<(String, bool, Vec<String>)>();
let (notify_sx, notify_rv) = async_channel::unbounded::<Vec<Vec<(String, String)>>>();
let (notify_sx, notify_rv) = async_channel::unbounded::<Vec<Vec<Patch>>>();
//
// RPC
@@ -575,8 +657,6 @@ async fn realmain(settings: Args, executor: Arc<Executor<'_>>) -> Result<()> {
executor.spawn(p2p.clone().run(executor.clone())).detach();
p2p.clone().wait_for_outbound(executor.clone()).await?;
//
// Darkwiki start
//
@@ -590,7 +670,7 @@ async fn realmain(settings: Args, executor: Arc<Executor<'_>>) -> Result<()> {
settings: darkwiki_settings,
raft: (raft_sx, raft_rv),
rpc: (notify_sx, rpc_rv),
salsa_box,
workspaces,
};
darkwiki.start().await.unwrap_or(());
})

View File

@@ -27,6 +27,7 @@ pub struct Patch {
pub id: String,
pub base: String,
pub timestamp: Timestamp,
pub workspace: String,
ops: OpMethods,
}
@@ -66,12 +67,13 @@ impl std::string::ToString for Patch {
}
impl Patch {
pub fn new(path: &str, id: &str, author: &str) -> Self {
pub fn new(path: &str, id: &str, author: &str, workspace: &str) -> Self {
Self {
path: path.to_string(),
id: id.to_string(),
ops: OpMethods(vec![]),
base: String::new(),
workspace: workspace.to_string(),
author: author.to_string(),
timestamp: Timestamp::current_time(),
}
@@ -146,7 +148,7 @@ impl Patch {
//
// TODO need more work to get better performance with iterators
pub fn transform(&self, other: &Self) -> Self {
let mut new_patch = Self::new(&self.path, &self.id, &self.author);
let mut new_patch = Self::new(&self.path, &self.id, &self.author, "");
new_patch.base = self.base.clone();
let mut ops1 = self.ops.0.iter().cloned();
@@ -253,7 +255,7 @@ impl Patch {
let mut ops1 = ops1.iter().cloned();
let mut ops2 = other.ops.0.iter().cloned();
let mut new_patch = Self::new(&self.path, &self.id, &self.author);
let mut new_patch = Self::new(&self.path, &self.id, &self.author, "");
new_patch.base = self.base.clone();
let mut op1 = ops1.next();
@@ -468,7 +470,7 @@ mod tests {
#[test]
fn test_to_string() {
let mut patch = Patch::new("", &gen_id(30), "");
let mut patch = Patch::new("", &gen_id(30), "", "");
patch.base = "text example\n hello".to_string();
patch.retain(14);
patch.delete(5);
@@ -479,7 +481,7 @@ mod tests {
#[test]
fn test_merge() {
let mut patch_init = Patch::new("", &gen_id(30), "");
let mut patch_init = Patch::new("", &gen_id(30), "", "");
let base = "text example\n hello";
patch_init.base = base.to_string();
@@ -517,7 +519,7 @@ mod tests {
#[test]
fn test_transform() {
let mut patch_init = Patch::new("", &gen_id(30), "");
let mut patch_init = Patch::new("", &gen_id(30), "", "");
let base = "text example\n hello";
patch_init.base = base.to_string();
@@ -555,7 +557,7 @@ mod tests {
#[test]
fn test_transform2() {
let mut patch_init = Patch::new("", &gen_id(30), "");
let mut patch_init = Patch::new("", &gen_id(30), "", "");
let base = "#hello\n hello";
patch_init.base = base.to_string();
@@ -587,7 +589,7 @@ mod tests {
assert_eq!(op_method, op_method_deser);
// serialize & deserialize Patch
let mut patch = Patch::new("", &gen_id(30), "");
let mut patch = Patch::new("", &gen_id(30), "", "");
patch.insert("hello");
patch.delete(2);

View File

@@ -1,5 +1,5 @@
use async_std::sync::Arc;
use std::{fs::File, io, io::Read, path::PathBuf};
use std::{fs::File, io, io::Read};
use clap::Parser;
use darkfi::util::{
@@ -114,7 +114,7 @@ async fn main() -> DnetViewResult<()> {
let log_level = get_log_level(args.verbose.into());
let log_config = get_log_config();
let log_file_path = PathBuf::from(expand_path(&args.log_path)?);
let log_file_path = expand_path(&args.log_path)?;
if let Some(parent) = log_file_path.parent() {
std::fs::create_dir_all(parent)?;
};

View File

@@ -75,7 +75,7 @@ impl DataParser {
// Parse response
match response {
Ok(reply) => {
if !reply.as_object().is_some() || reply.as_object().unwrap().is_empty() {
if reply.as_object().is_none() || reply.as_object().unwrap().is_empty() {
return Err(DnetViewError::EmptyRpcReply)
}

View File

@@ -1,7 +1,7 @@
use std::{process::exit, str::FromStr, time::Instant};
use clap::{Parser, Subcommand};
use prettytable::{cell, format, row, Table};
use prettytable::{format, row, Table};
use serde_json::json;
use simplelog::{ColorChoice, TermLogger, TerminalMode};

View File

@@ -43,7 +43,7 @@ outbound_connections=5
seeds = ["tls://lilith0.dark.fi:25551", "tls://lilith1.dark.fi:25551"]
# Preferred transports for outbound connections
#transports = ["tls", "tcp"]
#outbound_transports = ["tls", "tcp"]
## Only used for debugging. Compromises privacy when set.
#node_id = "foo"

View File

@@ -1,8 +1,10 @@
use async_std::sync::{Arc, Mutex};
use std::{cmp::Ordering, collections::VecDeque};
use std::{
cmp::Ordering,
collections::{BTreeMap, VecDeque},
};
use chrono::Utc;
use fxhash::FxHashMap;
use ripemd::{Digest, Ripemd160};
use crate::Privmsg;
@@ -31,20 +33,33 @@ pub fn create_buffers() -> Buffers {
}
#[derive(Clone)]
pub struct UMsgs(pub FxHashMap<String, Privmsg>);
pub struct UMsgs {
pub msgs: BTreeMap<String, Privmsg>,
capacity: usize,
}
impl UMsgs {
pub fn new() -> Self {
Self(FxHashMap::default())
Self { msgs: BTreeMap::new(), capacity: SIZE_OF_MSGS_BUFFER }
}
pub fn insert(&mut self, msg: &Privmsg) -> String {
let mut hasher = Ripemd160::new();
hasher.update(msg.to_string());
let key = hex::encode(hasher.finalize());
self.0.insert(key.clone(), msg.clone());
if self.msgs.len() == self.capacity {
self.pop_front();
}
self.msgs.insert(key.clone(), msg.clone());
key
}
fn pop_front(&mut self) {
// BTreeMap iterates in ascending key order, so next() yields the
// smallest key; next_back() would remove the last entry instead.
let first_key = self.msgs.iter().next().unwrap().0.clone();
self.msgs.remove(&first_key);
}
}
impl Default for UMsgs {
@@ -126,6 +141,14 @@ impl PrivmsgsBuffer {
self.buffer.iter()
}
pub fn len(&self) -> usize {
self.buffer.len()
}
pub fn is_empty(&self) -> bool {
self.len() == 0
}
pub fn last_term(&self) -> u64 {
match self.buffer.len() {
0 => 0,

View File

@@ -157,8 +157,6 @@ async fn realmain(settings: Args, executor: Arc<Executor<'_>>) -> Result<()> {
let executor_cloned = executor.clone();
executor_cloned.spawn(p2p.clone().run(executor.clone())).detach();
p2p.clone().wait_for_outbound(executor.clone()).await?;
//
// RPC interface
//

View File

@@ -113,7 +113,7 @@ impl ProtocolPrivmsg {
let mut inv_requested = vec![];
for inv_object in inv.invs.iter() {
let msgs = &mut self.buffers.unread_msgs.lock().await.0;
let msgs = &mut self.buffers.unread_msgs.lock().await.msgs;
if let Some(msg) = msgs.get_mut(&inv_object.0) {
msg.read_confirms += 1;
} else {
@@ -164,7 +164,7 @@ impl ProtocolPrivmsg {
let getdata = self.getdata_sub.receive().await?;
let getdata = (*getdata).to_owned();
let msgs = &self.buffers.unread_msgs.lock().await.0;
let msgs = &self.buffers.unread_msgs.lock().await.msgs;
for inv in getdata.invs {
if let Some(msg) = msgs.get(&inv.0) {
self.channel.send(msg.clone()).await?;
@@ -178,7 +178,7 @@ impl ProtocolPrivmsg {
}
async fn update_unread_msgs(&self) -> Result<()> {
let msgs = &mut self.buffers.unread_msgs.lock().await.0;
let msgs = &mut self.buffers.unread_msgs.lock().await.msgs;
for (hash, msg) in msgs.clone() {
if msg.timestamp + UNREAD_MSG_EXPIRE_TIME < Utc::now().timestamp() {
msgs.remove(&hash);
@@ -203,7 +203,7 @@ impl ProtocolPrivmsg {
self.update_unread_msgs().await?;
for msg in self.buffers.unread_msgs.lock().await.0.values() {
for msg in self.buffers.unread_msgs.lock().await.msgs.values() {
self.channel.send(msg.clone()).await?;
}
Ok(())

View File

@@ -1,6 +1,6 @@
use crypto_box::SalsaBox;
use fxhash::FxHashMap;
use log::info;
use log::{info, warn};
use serde::Deserialize;
use structopt::StructOpt;
use structopt_toml::StructOptToml;
@@ -135,6 +135,7 @@ fn parse_priv_key(data: &str) -> Result<String> {
pk = prv_key.0.into();
}
info!("Found secret key in config, noted it down.");
Ok(pk)
}
@@ -148,9 +149,12 @@ fn parse_priv_key(data: &str) -> Result<String> {
pub fn parse_configured_contacts(data: &str) -> Result<FxHashMap<String, ContactInfo>> {
let mut ret = FxHashMap::default();
let map = match toml::from_str(data)? {
Value::Table(m) => m,
_ => return Ok(ret),
let map = match toml::from_str(data) {
Ok(Value::Table(m)) => m,
_ => {
warn!("Invalid TOML string passed as argument to parse_configured_contacts()");
return Ok(ret)
}
};
if !map.contains_key("contact") {
@@ -158,34 +162,82 @@ pub fn parse_configured_contacts(data: &str) -> Result<FxHashMap<String, Contact
}
if !map["contact"].is_table() {
warn!("TOML configuration contains a \"contact\" field, but it is not a table.");
return Ok(ret)
}
let contacts = map["contact"].as_table().unwrap();
// Our secret key for NaCl boxes.
let found_priv = match parse_priv_key(data) {
Ok(v) => v,
Err(_) => {
info!("Did not found private key in config, skipping contact configuration.");
return Ok(ret)
}
};
let bytes: [u8; 32] = match bs58::decode(found_priv).into_vec() {
Ok(v) => {
if v.len() != 32 {
warn!("Decoded base58 secret key string is not 32 bytes");
warn!("Skipping private contact configuration");
return Ok(ret)
}
v.try_into().unwrap()
}
Err(e) => {
warn!("Failed to decode base58 secret key from string: {}", e);
warn!("Skipping private contact configuration");
return Ok(ret)
}
};
let secret = crypto_box::SecretKey::from(bytes);
for cnt in contacts {
info!("Found configuration for contact {}", cnt.0);
let mut contact_info = ContactInfo::new()?;
if !cnt.1.is_table() && !cnt.1.as_table().unwrap().contains_key("contact_pubkey") {
if !cnt.1.is_table() {
warn!("Config for contact {} isn't a TOML table", cnt.0);
continue
}
let table = cnt.1.as_table().unwrap();
if table.is_empty() {
warn!("Configuration for contact {} is empty.", cnt.0);
continue
}
// Build the NaCl box
//// public_key
let pubkey = cnt.1["contact_pubkey"].as_str().unwrap();
let bytes: [u8; 32] = bs58::decode(pubkey).into_vec()?.try_into().unwrap();
if !table.contains_key("contact_pubkey") || !table["contact_pubkey"].is_str() {
warn!("Contact {} doesn't have `contact_pubkey` set or is not a string.", cnt.0);
continue
}
let pub_str = table["contact_pubkey"].as_str().unwrap();
let bytes: [u8; 32] = match bs58::decode(pub_str).into_vec() {
Ok(v) => {
if v.len() != 32 {
warn!("Decoded base58 string is not 32 bytes");
continue
}
v.try_into().unwrap()
}
Err(e) => {
warn!("Failed to decode base58 pubkey from string: {}", e);
continue
}
};
let public = crypto_box::PublicKey::from(bytes);
//// private_key
let bytes: [u8; 32] = bs58::decode(parse_priv_key(data)?).into_vec()?.try_into().unwrap();
let secret = crypto_box::SecretKey::from(bytes);
contact_info.salt_box = Some(SalsaBox::new(&public, &secret));
ret.insert(cnt.0.to_string(), contact_info);
info!("Instantiated NaCl box for contact {}", cnt.0);
}
Ok(ret)
}

View File

@@ -149,7 +149,7 @@ async fn realmain(settings: Args, executor: Arc<Executor<'_>>) -> Result<()> {
if settings.refresh {
println!("Removing local data in: {:?} (yes/no)? ", datastore_path);
let mut confirm = String::new();
stdin().read_line(&mut confirm).ok().expect("Failed to read line");
stdin().read_line(&mut confirm).expect("Failed to read line");
let confirm = confirm.to_lowercase();
let confirm = confirm.trim();
@@ -221,8 +221,6 @@ async fn realmain(settings: Args, executor: Arc<Executor<'_>>) -> Result<()> {
executor.spawn(p2p.clone().run(executor.clone())).detach();
p2p.clone().wait_for_outbound(executor.clone()).await?;
//
// RPC interface
//

View File

@@ -0,0 +1,79 @@
#!/bin/sh
set -e
if [ "$UID" != 0 ]; then
SUDO="$(command -v sudo)"
else
SUDO=""
fi
brew_sh="https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh"
setup_mac() {
if ! command -v brew >/dev/null; then
echo "brew not found, installing..." >&2
bash -c "$(curl -fL "${brew_sh}")" || return 1
fi
for i in cmake gcc jq pkgconf llvm@13 freetype expat; do
echo "Installing $i with brew..." >&2
brew install "$i" || return 1
done
}
setup_apt() {
APTGET="$SUDO $1"
$APTGET update || return 1
$APTGET install -y build-essential cmake jq wget pkg-config \
clang libclang-dev llvm-dev libudev-dev libfreetype6-dev \
libexpat1-dev || return 1
}
setup_xbps() {
XBPS="$SUDO $1"
$XBPS -S base-devel cmake wget expat-devel freetype-devel \
fontconfig-devel jq openssl-devel clang libclang llvm \
libllvm12 libgudev-devel
}
case "$(uname -s)" in
Linux)
if command -v apt >/dev/null; then
echo "Setting up for apt" >&2
setup_apt "$(command -v apt)" || exit 1
echo "Dependencies installed!" >&2
exit 0
fi
if command -v apt-get >/dev/null; then
echo "Setting up for apt-get" >&2
setup_apt "$(command -v apt-get)" || exit 1
echo "Dependencies installed!" >&2
exit 0
fi
if command -v xbps-install >/dev/null; then
echo "Setting up for xbps" >&2
setup_xbps "$(command -v xbps-install)" || exit 1
echo "Dependencies installed!" >&2
exit 0
fi
echo "Error: Could not recognize your package manager." >&2
exit 1
;;
Darwin)
echo "Setting up for OSX" >&2
setup_mac || exit 1
echo "Dependencies installed!" >&2
exit 0
;;
*|"")
echo "Unsupported OS, sorry." >&2
exit 1
;;
esac

View File

@@ -1,13 +0,0 @@
#!/bin/bash
if ! command -v brew &> /dev/null ; then
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
fi
brew install cmake
brew install gcc
brew install jq
brew install pkgconf
brew install llvm@13
brew install freetype
brew install expat

View File

@@ -1,4 +1,6 @@
#!/usr/bin/env python3
# Update the version in the toplevel Cargo.toml for DarkFi, and then run this
# script to update all the other Cargo.toml files.
import subprocess
from os import chdir

View File

@@ -1,16 +0,0 @@
#!/bin/bash
sudo xbps-install -S \
base-devel \
cmake \
wget \
expat-devel \
freetype-devel \
fontconfig-devel \
jq \
openssl-devel \
clang \
libclang \
llvm \
libllvm12 \
libgudev-devel \

View File

@@ -65,7 +65,7 @@ on how to install it on your computer.
Once installed, we can configure a new server which will represent our
`ircd` instance. First, start weechat, and in its window - run the
following commands (there is an assumption that `irc_listen` in the
`ircd` config file is set to `127.0.0.1:11066`):
`ircd` config file is set to `127.0.0.1:6667`):
```
/server add darkfi localhost/6667 -autoconnect

View File

@@ -286,10 +286,10 @@ impl Channel {
if Self::is_eof_error(err.clone()) {
info!("Inbound connection {} disconnected", self.address());
} else {
error!("Read error on channel: {}", err);
error!("Read error on channel {}: {}", self.address(), err);
}
debug!(target: "net",
"Channel::receive_loop() stopping channel {:?}",
"Channel::receive_loop() stopping channel {}",
self.address()
);
self.stop().await;

View File

@@ -253,7 +253,7 @@ impl P2p {
// Wait for the process for each of the provided addresses, excluding our own inbound addresses
async fn outbound_addr_loop(
self_inbound_addr: &Vec<Url>,
self_inbound_addr: &[Url],
timeout: u64,
stop_sub: Subscription<()>,
addrs: &Vec<Url>,

View File

@@ -15,7 +15,7 @@ use super::{
};
const SEND_ADDR_SLEEP_SECONDS: u64 = 900;
const LOCALNET: [&'static str; 5] = ["localhost", "0.0.0.0", "[::]", "127.0.0.1", "[::1]"];
const LOCALNET: [&str; 5] = ["localhost", "0.0.0.0", "[::]", "127.0.0.1", "[::1]"];
/// Defines address and get-address messages.
pub struct ProtocolAddress {
@@ -67,10 +67,9 @@ impl ProtocolAddress {
loop {
let addrs_msg = self.addrs_sub.receive().await?;
debug!(target: "net", "ProtocolAddress::handle_receive_addrs() received {} addrs", addrs_msg.addrs.len());
let mut filtered = vec![];
for (i, addr) in addrs_msg.addrs.iter().enumerate() {
debug!(target: "net", " addr[{}]: {}", i, addr);
if !self.settings.localnet {
let addrs = if !self.settings.localnet {
let mut filtered = vec![];
for addr in &addrs_msg.addrs {
match addr.host_str() {
Some(host_str) => {
if LOCALNET.contains(&host_str) {
@@ -83,10 +82,13 @@ impl ProtocolAddress {
continue
}
}
filtered.push(addr.clone());
}
filtered.push(addr.clone());
}
self.hosts.store(filtered).await;
filtered
} else {
addrs_msg.addrs.clone()
};
self.hosts.store(addrs).await;
}
}

View File

@@ -219,8 +219,8 @@ impl ProgramState for State {
if let Ok(mr) = self.merkle_roots.contains(merkle_root) {
return mr
}
// FIXME: An error here means a db issue
false
panic!("RootStore db corruption, could not check merkle_roots.contains()");
}
fn nullifier_exists(&self, nullifier: &Nullifier) -> bool {
@@ -228,8 +228,8 @@ impl ProgramState for State {
if let Ok(nf) = self.nullifiers.contains(nullifier) {
return nf
}
// FIXME: An error here means a db issue
false
panic!("NullifierStore db corruption, could not check nullifiers.contains()");
}
fn mint_vk(&self) -> &VerifyingKey {

View File

@@ -1,6 +1,6 @@
use async_std::{
sync::{Arc, Mutex},
task,
task::sleep,
};
use std::time::Duration;
@@ -14,7 +14,7 @@ use rand::{rngs::OsRng, Rng, RngCore};
use crate::{
net,
util::{
self, gen_id,
gen_id,
serial::{deserialize, serialize, Decodable, Encodable},
},
Error, Result,
@@ -29,9 +29,9 @@ use super::{
prune_map, DataStore, RaftSettings,
};
async fn send_node_id_loop(sender: async_channel::Sender<()>, timeout: i64) -> Result<()> {
async fn send_loop(sender: async_channel::Sender<()>, timeout: Duration) -> Result<()> {
loop {
util::sleep(timeout as u64).await;
sleep(timeout).await;
sender.send(()).await?;
}
}
@@ -52,6 +52,8 @@ pub struct Raft<T> {
pub(super) last_term: u64,
pub(super) last_heartbeat: i64,
p2p_sender: Sender,
msgs_channel: Channel<T>,
@@ -61,7 +63,7 @@ pub struct Raft<T> {
seen_msgs: Arc<Mutex<FxHashMap<String, i64>>>,
settings: RaftSettings,
pub(super) settings: RaftSettings,
pending_msgs: Vec<T>,
}
@@ -104,6 +106,7 @@ impl<T: Decodable + Encodable + Clone> Raft<T> {
acked_length: MapLength(FxHashMap::default()),
nodes: Arc::new(Mutex::new(FxHashMap::default())),
last_term: 0,
last_heartbeat: Utc::now().timestamp(),
p2p_sender,
msgs_channel,
commits_channel,
@@ -126,45 +129,39 @@ impl<T: Decodable + Encodable + Clone> Raft<T> {
) -> Result<()> {
let p2p_send_task = executor.spawn(p2p_send_loop(self.p2p_sender.1.clone(), p2p.clone()));
let prune_seen_messages_task = executor.spawn(prune_map::<String>(
self.seen_msgs.clone(),
self.settings.prun_messages_duration,
));
let prune_seen_messages_task = executor
.spawn(prune_map::<String>(self.seen_msgs.clone(), self.settings.prun_duration));
let prune_nodes_id_task = executor
.spawn(prune_map::<NodeId>(self.nodes.clone(), self.settings.prun_nodes_ids_duration));
let (node_id_sx, node_id_rv) = async_channel::unbounded::<()>();
let send_node_id_loop_task =
executor.spawn(send_node_id_loop(node_id_sx, self.settings.node_id_timeout));
let prune_nodes_id_task =
executor.spawn(prune_map::<NodeId>(self.nodes.clone(), self.settings.prun_duration));
let mut rng = rand::thread_rng();
let (id_sx, id_rv) = async_channel::unbounded::<()>();
let (heartbeat_sx, heartbeat_rv) = async_channel::unbounded::<()>();
let (timeout_sx, timeout_rv) = async_channel::unbounded::<()>();
let id_timeout = Duration::from_secs(self.settings.id_timeout);
let send_id_task = executor.spawn(send_loop(id_sx, id_timeout));
let heartbeat_timeout = Duration::from_millis(self.settings.heartbeat_timeout);
let send_heartbeat_task = executor.spawn(send_loop(heartbeat_sx, heartbeat_timeout));
let timeout =
Duration::from_secs(rng.gen_range(0..self.settings.timeout) + self.settings.timeout);
let send_timeout_task = executor.spawn(send_loop(timeout_sx, timeout));
let broadcast_msg_rv = self.msgs_channel.1.clone();
loop {
let timeout = if self.role == Role::Leader {
self.settings.heartbeat_timeout
} else {
rng.gen_range(0..self.settings.timeout) + self.settings.timeout
};
let timeout = Duration::from_millis(timeout);
let mut result: Result<()>;
select! {
m = p2p_recv_channel.recv().fuse() => result = self.handle_method(m?).await,
m = broadcast_msg_rv.recv().fuse() => result = self.broadcast_msg(&m?,None).await,
_ = node_id_rv.recv().fuse() => result = self.send_node_id_msg().await,
_ = task::sleep(timeout).fuse() => {
result = if self.role == Role::Leader {
self.send_heartbeat().await
}else {
self.send_vote_request().await
};
},
let mut result = select! {
m = p2p_recv_channel.recv().fuse() => self.handle_method(m?).await,
m = broadcast_msg_rv.recv().fuse() => self.broadcast_msg(&m?,None).await,
_ = id_rv.recv().fuse() => self.send_id_msg().await,
_ = heartbeat_rv.recv().fuse() => self.send_heartbeat().await,
_ = timeout_rv.recv().fuse() => self.send_vote_request().await,
_ = stop_signal.recv().fuse() => break,
}
};
// send pending messages
if !self.pending_msgs.is_empty() && self.role != Role::Candidate {
@@ -175,9 +172,8 @@ impl<T: Decodable + Encodable + Clone> Raft<T> {
self.pending_msgs = vec![];
}
match result {
Ok(_) => {}
Err(e) => warn!(target: "raft", "warn: {}", e),
if let Err(e) = result {
warn!(target: "raft", "warn: {}", e);
}
}
@@ -185,7 +181,9 @@ impl<T: Decodable + Encodable + Clone> Raft<T> {
p2p_send_task.cancel().await;
prune_seen_messages_task.cancel().await;
prune_nodes_id_task.cancel().await;
send_node_id_loop_task.cancel().await;
send_id_task.cancel().await;
send_heartbeat_task.cancel().await;
send_timeout_task.cancel().await;
self.datastore.flush().await?;
Ok(())
}
@@ -213,9 +211,9 @@ impl<T: Decodable + Encodable + Clone> Raft<T> {
self.id.clone()
}
async fn send_node_id_msg(&self) -> Result<()> {
let node_id_msg = serialize(&NodeIdMsg { id: self.id.clone() });
self.send(None, &node_id_msg, NetMsgMethod::NodeIdMsg, None).await?;
async fn send_id_msg(&self) -> Result<()> {
let id_msg = serialize(&NodeIdMsg { id: self.id.clone() });
self.send(None, &id_msg, NetMsgMethod::NodeIdMsg, None).await?;
Ok(())
}
@@ -238,7 +236,6 @@ impl<T: Decodable + Encodable + Clone> Raft<T> {
.await?;
}
Role::Candidate => {
warn!("The role is Candidate, add the msg to pending_msgs");
self.pending_msgs.push(msg.clone());
}
}
@@ -255,6 +252,7 @@ impl<T: Decodable + Encodable + Clone> Raft<T> {
self.receive_log_response(lr).await?;
}
NetMsgMethod::LogRequest => {
self.last_heartbeat = Utc::now().timestamp();
let lr: LogRequest = deserialize(&msg.payload)?;
self.receive_log_request(lr).await?;
}

View File

@@ -1,3 +1,4 @@
use chrono::Utc;
use log::info;
use crate::{
@@ -12,7 +13,15 @@ use super::{
impl<T: Decodable + Encodable + Clone> Raft<T> {
pub(super) async fn send_vote_request(&mut self) -> Result<()> {
let self_id = self.id();
if self.role == Role::Leader {
return Ok(())
}
let last_heartbeat_duration = Utc::now().timestamp() - self.last_heartbeat;
if last_heartbeat_duration < self.settings.timeout as i64 {
return Ok(())
}
self.set_current_term(&(self.current_term()? + 1))?;
@@ -21,14 +30,13 @@ impl<T: Decodable + Encodable + Clone> Raft<T> {
self.role = Role::Candidate;
}
self.set_voted_for(&Some(self_id.clone()))?;
self.votes_received = vec![];
self.votes_received.push(self_id.clone());
self.set_voted_for(&Some(self.id()))?;
self.votes_received = vec![self.id()];
self.reset_last_term()?;
let request = VoteRequest {
node_id: self_id,
node_id: self.id(),
current_term: self.current_term()?,
log_length: self.logs_len(),
last_term: self.last_term,
@@ -40,6 +48,10 @@ impl<T: Decodable + Encodable + Clone> Raft<T> {
pub(super) async fn receive_vote_response(&mut self, vr: VoteResponse) -> Result<()> {
if self.role == Role::Candidate && vr.current_term == self.current_term()? && vr.ok {
if self.votes_received.contains(&vr.node_id) {
return Ok(())
}
self.votes_received.push(vr.node_id);
let nodes = self.nodes.lock().await;

View File

@@ -12,6 +12,10 @@ use super::{
impl<T: Decodable + Encodable + Clone> Raft<T> {
pub(super) async fn send_heartbeat(&mut self) -> Result<()> {
if self.role != Role::Leader {
return Ok(())
}
let nodes = self.nodes.lock().await;
let nodes_cloned = nodes.clone();
drop(nodes);

View File

@@ -2,23 +2,19 @@ use std::path::PathBuf;
#[derive(Clone, Debug)]
pub struct RaftSettings {
//
// Milliseconds
//
// how often the leader sends heartbeats; in milliseconds
pub heartbeat_timeout: u64,
// the duration for electing new leader; in seconds
pub timeout: u64,
//
// Seconds
//
pub prun_messages_duration: i64,
pub prun_nodes_ids_duration: i64,
// must be greater than (timeout * 2)
pub node_id_timeout: i64,
// the duration for sending id to other nodes; in seconds
pub id_timeout: u64,
// the interval used to clean up hashmaps; in seconds
pub prun_duration: i64,
//
// Datastore path
//
pub datastore_path: PathBuf,
}
@@ -26,10 +22,9 @@ impl Default for RaftSettings {
fn default() -> Self {
Self {
heartbeat_timeout: 500,
timeout: 7000,
prun_messages_duration: 120,
prun_nodes_ids_duration: 120,
node_id_timeout: 16,
timeout: 6,
id_timeout: 12,
prun_duration: 240,
datastore_path: PathBuf::from(""),
}
}
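A small sketch of how these fields are consumed, matching the Duration
conversions in the raft mod.rs diff above (the exact call sites may
differ):

```rust
use std::time::Duration;

// heartbeat_timeout is in milliseconds while timeout and id_timeout
// are in seconds, so the conversions differ; mixing them up would
// silently break the election timing.
fn timeouts(settings: &RaftSettings) -> (Duration, Duration, Duration) {
    (
        Duration::from_millis(settings.heartbeat_timeout),
        Duration::from_secs(settings.timeout),
        Duration::from_secs(settings.id_timeout),
    )
}
```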