Mirror of https://github.com/vacp2p/de-mls.git, synced 2026-01-09 05:27:59 -05:00
chore: implement consensus mechanism (#43)
* chore: implement consensus mechanism
  - Updated `Cargo.lock`
  - Refactored `Cargo.toml`
  - Enhanced `action_handlers.rs` to introduce a ban request feature, allowing users to send ban requests through Waku.
  - Implemented a new consensus module to manage proposal and voting processes, including state transitions for steward epochs.
  - Updated error handling in `error.rs` to accommodate new consensus-related errors.
  - Refactored `group.rs` and `user_actor.rs` to integrate the new consensus logic and improve state management.
  - Added tests for the consensus mechanism to ensure reliability and correctness of the voting process.

* chore: update dependencies and refactor code for clarity

* refactor: update voting mechanism and clean up user actor logic
  - Changed the return type of `complete_voting_for_steward` to return a vector of messages instead of a boolean.
  - Removed unused request types and their implementations related to proposal handling.
  - Updated the `handle_steward_flow_per_epoch` function to reflect changes in the voting process, and improved logging.
  - Refactored tests to align with the new voting mechanism and ensure proper message handling.
  - Enhanced consensus logic to better handle vote counting and state transitions.

* feat: implement real voting for all users
  - Improved logging messages for clarity during WebSocket message handling.
  - Added new serverless functions and shims for better integration with the frontend.
  - Introduced new manifest files for server configuration and routing.
  - Implemented the initial setup for handling user votes and proposals in the consensus mechanism.
  - Updated error handling to accommodate new user vote actions and related messages.

* consensus: update test

* Enhance steward state management and consensus mechanism
  - Added detailed documentation in the README for steward state management, including state definitions, transitions, and flow scenarios.
  - Updated `WakuNode` connection logic to include a timeout for peer connections.
  - Refactored message handling in `action_handlers.rs` to utilize the new `AppMessage` structures for improved clarity.
  - Enhanced error handling in `error.rs` to cover new scenarios related to consensus and state transitions.
  - Updated tests to reflect changes in the consensus mechanism and steward flow, ensuring robustness and reliability.
  - Improved state machine logic to handle edge cases and guarantee proper state transitions during steward epochs.

* Refactor
  - Updated the `WakuMessageToSend` constructor to accept a slice for `app_id`, improving memory efficiency.
  - Enhanced error handling in the `Group` struct to provide more descriptive error messages for invalid states.
  - Added detailed documentation for new methods in the `Group` and `User` structs, clarifying their functionality and usage.
  - Refactored state machine logic to ensure proper transitions during steward epochs and voting processes.
  - Improved test coverage for group state management and message processing, ensuring robustness in various scenarios.

* Update README for improved clarity and formatting

* fix: fix lint issues and update test flow

* test: update user test
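The commit message notes that `complete_voting_for_steward` now returns a vector of messages rather than a boolean. A hypothetical sketch of that shape, with an assumed simple-majority rule (the names and the rule are illustrative, not the actual de-mls code):

```rust
// Illustrative vote tally; names and the majority rule are assumptions.
struct VoteTally {
    yes: usize,
    no: usize,
    participants: usize,
}

impl VoteTally {
    fn new(participants: usize) -> Self {
        Self { yes: 0, no: 0, participants }
    }

    fn record(&mut self, vote: bool) {
        if vote { self.yes += 1 } else { self.no += 1 }
    }

    // Mirrors the changed return type: the steward gets back the messages
    // to broadcast; an empty vector means the proposals were rejected.
    fn complete_voting_for_steward(&self) -> Vec<String> {
        if self.yes * 2 > self.participants {
            vec![
                "VOTING_PROPOSAL: result=accepted".to_string(),
                "PROPOSAL: apply batched proposals".to_string(),
            ]
        } else {
            Vec::new()
        }
    }
}
```

Returning the messages directly lets the caller publish them through Waku without a second lookup of the voting outcome.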
This commit is contained in:
committed by GitHub
parent e37a4b435f
commit 4ea1136012
481 Cargo.lock (generated)
File diff suppressed because it is too large
@@ -46,7 +46,6 @@ waku-sys = { git = "https://github.com/waku-org/waku-rust-bindings.git", branch
|
||||
rand = "0.8.5"
|
||||
serde_json = "1.0"
|
||||
serde = { version = "1.0.163", features = ["derive"] }
|
||||
tls_codec = "0.3.0"
|
||||
chrono = "0.4"
|
||||
sha2 = "0.10.8"
|
||||
|
||||
|
||||
45 README.md
@@ -2,7 +2,8 @@

Decentralized MLS PoC using a smart contract for group coordination

-> Note: The frontend implementation is based on [chatr](https://github.com/0xLaurens/chatr), a real-time chat application built with Rust and SvelteKit
+> Note: The frontend implementation is based on [chatr](https://github.com/0xLaurens/chatr),
+> a real-time chat application built with Rust and SvelteKit

## Run Test Waku Node

@@ -44,6 +45,48 @@ Run from the root directory
RUST_BACKTRACE=full RUST_LOG=info NODE_PORT=60001 PEER_ADDRESSES=/ip4/x.x.x.x/tcp/60000/p2p/xxxx...xxxx,/ip4/y.y.y.y/tcp/60000/p2p/yyyy...yyyy cargo run -- --nocapture
```

## Steward State Management

The system implements a robust state machine for managing steward epochs with the following states:

### States

- **Working**: Normal operation where all users can send any message type freely
- **Waiting**: Steward epoch active; only the steward can send BATCH_PROPOSALS_MESSAGE
- **Voting**: Consensus voting phase with only voting-related messages:
  - Everyone: VOTE, USER_VOTE
  - Steward only: VOTING_PROPOSAL, PROPOSAL
  - All other messages are blocked during voting

### State Transitions

```text
Working --start_steward_epoch()--> Waiting (if proposals exist)
Working --start_steward_epoch()--> Working (if no proposals - no state change)
Waiting --start_voting()---------> Voting
Waiting --no_proposals_found()---> Working (edge case: proposals disappear)
Voting --complete_voting(YES)----> Waiting --apply_proposals()--> Working
Voting --complete_voting(NO)-----> Working
```
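The transition diagram in the README can be encoded as a small pure transition function; the enum and event names below are illustrative, not the actual de-mls types:

```rust
// Illustrative encoding of the README's steward state diagram.
#[derive(Debug, Clone, Copy, PartialEq)]
enum State {
    Working,
    Waiting,
    Voting,
}

#[derive(Debug, Clone, Copy)]
enum Event {
    StartStewardEpoch { has_proposals: bool },
    StartVoting,
    NoProposalsFound,
    CompleteVoting { accepted: bool },
    ApplyProposals,
}

fn transition(state: State, event: Event) -> State {
    use Event::*;
    use State::*;
    match (state, event) {
        // Working -> Waiting only when the steward has proposals to batch.
        (Working, StartStewardEpoch { has_proposals: true }) => Waiting,
        (Working, StartStewardEpoch { has_proposals: false }) => Working,
        (Waiting, StartVoting) => Voting,
        // Edge case: proposals disappeared while waiting.
        (Waiting, NoProposalsFound) => Working,
        (Voting, CompleteVoting { accepted: true }) => Waiting,
        (Voting, CompleteVoting { accepted: false }) => Working,
        (Waiting, ApplyProposals) => Working,
        // Any other combination leaves the state unchanged.
        (s, _) => s,
    }
}
```

Because every arm either moves toward `Working` or leaves the state unchanged, the "no stuck states" guarantee below is easy to check exhaustively.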
### Steward Flow Scenarios

1. **No Proposals**: Steward stays in the Working state throughout the epoch
2. **Successful Vote**:
   - **Steward**: Working → Waiting → Voting → Waiting → Working
   - **Non-Steward**: Working → Waiting → Voting → Working
3. **Failed Vote**:
   - **Steward**: Working → Waiting → Voting → Working
   - **Non-Steward**: Working → Waiting → Voting → Working
4. **Edge Case**: Working → Waiting → Working (if proposals disappear during voting)

### Guarantees

- The steward always returns to the Working state after epoch completion
- No infinite loops or stuck states
- All edge cases properly handled
- Robust error handling with detailed logging

### Example of banning a user

In the chat message box, run the ban command. Note that the user wallet address must be given without the `0x` prefix.
@@ -108,7 +108,7 @@ impl WakuNode<Running> {
        for peer_address in peer_addresses {
            info!("Connecting to peer: {peer_address:?}");
            self.node
-                .connect(&peer_address, None)
+                .connect(&peer_address, Some(Duration::from_secs(10)))
                .await
                .map_err(|e| DeliveryServiceError::WakuConnectPeerError(e.to_string()))?;
            info!("Connected to peer: {peer_address:?}");
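The hunk above replaces an unbounded dial (`None`) with a 10-second timeout, so a dead peer cannot stall startup indefinitely. A minimal std-only analogue of the same idea, bounding a wait with a deadline via `recv_timeout` (the names are illustrative, not the waku-rust-bindings API):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Wait for a "connection" result, but only up to `deadline`.
fn connect_with_timeout(
    peer_delay: Duration,
    deadline: Duration,
) -> Result<&'static str, &'static str> {
    let (tx, rx) = mpsc::channel();
    // Stand-in for the dial: the "peer" answers after `peer_delay`.
    thread::spawn(move || {
        thread::sleep(peer_delay);
        let _ = tx.send("connected");
    });
    rx.recv_timeout(deadline)
        .map_err(|_| "WakuConnectPeerError: timed out")
}
```

The error path maps onto `DeliveryServiceError::WakuConnectPeerError` in the diff, so callers fail fast with a descriptive error instead of hanging.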
@@ -148,12 +148,12 @@ impl WakuMessageToSend {
    /// - subtopic: The subtopic to send the message to
    /// - group_id: The group to send the message to
    /// - app_id: A unique identifier for the application sending the message, used to filter out its own messages
-    pub fn new(msg: Vec<u8>, subtopic: &str, group_id: String, app_id: Vec<u8>) -> Self {
+    pub fn new(msg: Vec<u8>, subtopic: &str, group_id: &str, app_id: &[u8]) -> Self {
        Self {
            msg,
            subtopic: subtopic.to_string(),
-            group_id,
-            app_id,
+            group_id: group_id.to_string(),
+            app_id: app_id.to_vec(),
        }
    }
    /// Build a WakuMessage from the message to send
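The point of borrowing `group_id: &str` and `app_id: &[u8]` is that call sites can reuse the same values for many messages without caller-side clones; the constructor copies exactly once, internally. A self-contained sketch mirroring the diff's types:

```rust
// Mirrors the struct and the new borrowing constructor from the diff.
struct WakuMessageToSend {
    msg: Vec<u8>,
    subtopic: String,
    group_id: String,
    app_id: Vec<u8>,
}

impl WakuMessageToSend {
    fn new(msg: Vec<u8>, subtopic: &str, group_id: &str, app_id: &[u8]) -> Self {
        Self {
            msg,
            subtopic: subtopic.to_string(),
            // The single copy happens here, not at every call site.
            group_id: group_id.to_string(),
            app_id: app_id.to_vec(),
        }
    }
}
```

With the old `String`/`Vec<u8>` parameters, the test below would have needed `group_name.clone()` and `uuid.clone()` at every call, which is exactly what the updated test hunk removes.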
@@ -178,10 +178,15 @@ pub async fn run_waku_node(
    node_port: String,
    peer_addresses: Option<Vec<Multiaddr>>,
    waku_sender: Sender<WakuMessage>,
-    reciever: &mut Receiver<WakuMessageToSend>,
+    receiver: &mut Receiver<WakuMessageToSend>,
) -> Result<(), DeliveryServiceError> {
    info!("Initializing waku node");
-    let waku_node_init = WakuNode::new(node_port.parse::<usize>().unwrap()).await?;
+    let waku_node_init = WakuNode::new(
+        node_port
+            .parse::<usize>()
+            .expect("Failed to parse node port"),
+    )
+    .await?;
    let waku_node = waku_node_init.start(waku_sender).await?;
    info!("Waku node started");
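The hunk swaps `unwrap()` for `expect(...)`: a malformed `NODE_PORT` now panics with a descriptive message instead of an anonymous one. Isolated, the pattern is just:

```rust
// A bad NODE_PORT fails with "Failed to parse node port" rather than a
// bare `Option::unwrap`-style panic, which is easier to diagnose in logs.
fn parse_node_port(raw: &str) -> usize {
    raw.parse::<usize>().expect("Failed to parse node port")
}
```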
@@ -191,7 +196,7 @@ pub async fn run_waku_node(
    }

    info!("Waiting for message to send to waku");
-    while let Some(msg) = reciever.recv().await {
+    while let Some(msg) = receiver.recv().await {
        info!("Received message to send to waku");
        let id = waku_node.send_message(msg).await?;
        info!("Successfully publish message with id: {id:?}");
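Besides fixing the `reciever` typo, this is the standard channel-drain pattern: the loop runs until every sender handle is dropped, then exits cleanly. A std-only analogue of the async `while let Some(msg) = receiver.recv().await` loop (the tokio version returns `None` where std's returns `Err`):

```rust
use std::sync::mpsc;

// Drain messages until all senders are dropped; stand-in for the
// `waku_node.send_message(msg).await?` publish loop in the diff.
fn publish_loop(receiver: mpsc::Receiver<String>) -> Vec<String> {
    let mut published = Vec::new();
    while let Ok(msg) = receiver.recv() {
        published.push(msg);
    }
    published
}
```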
@@ -45,7 +45,7 @@ impl Message<WakuMessage> for Application {
#[tokio::test(flavor = "multi_thread")]
async fn test_waku_client() {
    env_logger::init();
-    let group_name = "new_group".to_string();
+    let group_name = "new_group";
    let mut pubsub = PubSub::<WakuMessage>::new();

    let (sender, _) = channel::<WakuMessage>(100);
@@ -98,8 +98,8 @@ async fn test_waku_client() {
        .send_message(WakuMessageToSend::new(
            "test_message_1".as_bytes().to_vec(),
            APP_MSG_SUBTOPIC,
-            group_name.clone(),
-            uuid.clone(),
+            group_name,
+            &uuid,
        ))
        .await;
    info!("res: {:?}", res);
1 frontend/.netlify/functions-internal/render.json (Normal file)
@@ -0,0 +1 @@
{"config":{"nodeModuleFormat":"esm"},"version":1}
37 frontend/.netlify/functions-internal/render.mjs (Normal file)
@@ -0,0 +1,37 @@
import { init } from '../serverless.js';

export const handler = init({
  appDir: "_app",
  appPath: "_app",
  assets: new Set(["favicon.png"]),
  mimeTypes: {".png":"image/png"},
  _: {
    client: {"start":{"file":"_app/immutable/entry/start.530cd74f.js","imports":["_app/immutable/entry/start.530cd74f.js","_app/immutable/chunks/index.b5cfe40e.js","_app/immutable/chunks/singletons.995fdd8e.js","_app/immutable/chunks/index.0a9737cc.js"],"stylesheets":[],"fonts":[]},"app":{"file":"_app/immutable/entry/app.e73d9bc3.js","imports":["_app/immutable/entry/app.e73d9bc3.js","_app/immutable/chunks/index.b5cfe40e.js"],"stylesheets":[],"fonts":[]}},
    nodes: [
      () => import('../server/nodes/0.js'),
      () => import('../server/nodes/1.js'),
      () => import('../server/nodes/2.js'),
      () => import('../server/nodes/3.js')
    ],
    routes: [
      {
        id: "/",
        pattern: /^\/$/,
        params: [],
        page: { layouts: [0], errors: [1], leaf: 2 },
        endpoint: null
      },
      {
        id: "/chat",
        pattern: /^\/chat\/?$/,
        params: [],
        page: { layouts: [0], errors: [1], leaf: 3 },
        endpoint: null
      }
    ],
    matchers: async () => {
      return { };
    }
  }
});
@@ -0,0 +1 @@
div.svelte-lzwg39{width:20px;opacity:0;height:20px;border-radius:10px;background:var(--primary, #61d345);position:relative;transform:rotate(45deg);animation:svelte-lzwg39-circleAnimation 0.3s cubic-bezier(0.175, 0.885, 0.32, 1.275) forwards;animation-delay:100ms}div.svelte-lzwg39::after{content:'';box-sizing:border-box;animation:svelte-lzwg39-checkmarkAnimation 0.2s ease-out forwards;opacity:0;animation-delay:200ms;position:absolute;border-right:2px solid;border-bottom:2px solid;border-color:var(--secondary, #fff);bottom:6px;left:6px;height:10px;width:6px}@keyframes svelte-lzwg39-circleAnimation{from{transform:scale(0) rotate(45deg);opacity:0}to{transform:scale(1) rotate(45deg);opacity:1}}@keyframes svelte-lzwg39-checkmarkAnimation{0%{height:0;width:0;opacity:0}40%{height:0;width:6px;opacity:1}100%{opacity:1;height:10px}}div.svelte-10jnndo{width:20px;opacity:0;height:20px;border-radius:10px;background:var(--primary, #ff4b4b);position:relative;transform:rotate(45deg);animation:svelte-10jnndo-circleAnimation 0.3s cubic-bezier(0.175, 0.885, 0.32, 1.275) forwards;animation-delay:100ms}div.svelte-10jnndo::after,div.svelte-10jnndo::before{content:'';animation:svelte-10jnndo-firstLineAnimation 0.15s ease-out forwards;animation-delay:150ms;position:absolute;border-radius:3px;opacity:0;background:var(--secondary, #fff);bottom:9px;left:4px;height:2px;width:12px}div.svelte-10jnndo:before{animation:svelte-10jnndo-secondLineAnimation 0.15s ease-out forwards;animation-delay:180ms;transform:rotate(90deg)}@keyframes svelte-10jnndo-circleAnimation{from{transform:scale(0) rotate(45deg);opacity:0}to{transform:scale(1) rotate(45deg);opacity:1}}@keyframes svelte-10jnndo-firstLineAnimation{from{transform:scale(0);opacity:0}to{transform:scale(1);opacity:1}}@keyframes svelte-10jnndo-secondLineAnimation{from{transform:scale(0) rotate(90deg);opacity:0}to{transform:scale(1) rotate(90deg);opacity:1}}div.svelte-bj4lu8{width:12px;height:12px;box-sizing:border-box;border:2px 
solid;border-radius:100%;border-color:var(--secondary, #e0e0e0);border-right-color:var(--primary, #616161);animation:svelte-bj4lu8-rotate 1s linear infinite}@keyframes svelte-bj4lu8-rotate{from{transform:rotate(0deg)}to{transform:rotate(360deg)}}.indicator.svelte-1c92bpz{position:relative;display:flex;justify-content:center;align-items:center;min-width:20px;min-height:20px}.status.svelte-1c92bpz{position:absolute}.animated.svelte-1c92bpz{position:relative;transform:scale(0.6);opacity:0.4;min-width:20px;animation:svelte-1c92bpz-enter 0.3s 0.12s cubic-bezier(0.175, 0.885, 0.32, 1.275) forwards}@keyframes svelte-1c92bpz-enter{from{transform:scale(0.6);opacity:0.4}to{transform:scale(1);opacity:1}}.message.svelte-o805t1{display:flex;justify-content:center;margin:4px 10px;color:inherit;flex:1 1 auto;white-space:pre-line}@keyframes svelte-15lyehg-enterAnimation{0%{transform:translate3d(0, calc(var(--factor) * -200%), 0) scale(0.6);opacity:0.5}100%{transform:translate3d(0, 0, 0) scale(1);opacity:1}}@keyframes svelte-15lyehg-exitAnimation{0%{transform:translate3d(0, 0, -1px) scale(1);opacity:1}100%{transform:translate3d(0, calc(var(--factor) * -150%), -1px) scale(0.6);opacity:0}}@keyframes svelte-15lyehg-fadeInAnimation{0%{opacity:0}100%{opacity:1}}@keyframes svelte-15lyehg-fadeOutAnimation{0%{opacity:1}100%{opacity:0}}.base.svelte-15lyehg{display:flex;align-items:center;background:#fff;color:#363636;line-height:1.3;will-change:transform;box-shadow:0 3px 10px rgba(0, 0, 0, 0.1), 0 3px 3px rgba(0, 0, 0, 0.05);max-width:350px;pointer-events:auto;padding:8px 10px;border-radius:8px}.transparent.svelte-15lyehg{opacity:0}.enter.svelte-15lyehg{animation:svelte-15lyehg-enterAnimation 0.35s cubic-bezier(0.21, 1.02, 0.73, 1) forwards}.exit.svelte-15lyehg{animation:svelte-15lyehg-exitAnimation 0.4s cubic-bezier(0.06, 0.71, 0.55, 1) forwards}.fadeIn.svelte-15lyehg{animation:svelte-15lyehg-fadeInAnimation 0.35s cubic-bezier(0.21, 1.02, 0.73, 1) 
forwards}.fadeOut.svelte-15lyehg{animation:svelte-15lyehg-fadeOutAnimation 0.4s cubic-bezier(0.06, 0.71, 0.55, 1) forwards}.wrapper.svelte-1pakgpd{left:0;right:0;display:flex;position:absolute;transform:translateY(calc(var(--offset, 16px) * var(--factor) * 1px))}.transition.svelte-1pakgpd{transition:all 230ms cubic-bezier(0.21, 1.02, 0.73, 1)}.active.svelte-1pakgpd{z-index:9999}.active.svelte-1pakgpd>*{pointer-events:auto}.toaster.svelte-jyff3d{--default-offset:16px;position:fixed;z-index:9999;top:var(--default-offset);left:var(--default-offset);right:var(--default-offset);bottom:var(--default-offset);pointer-events:none}
3085 frontend/.netlify/server/_app/immutable/assets/_layout.ba8665a3.css (Normal file)
File diff suppressed because it is too large
@@ -0,0 +1,242 @@
import { d as derived, w as writable } from "./index2.js";
import { k as get_store_value } from "./index3.js";
function writableDerived(origins, derive, reflect, initial) {
  var childDerivedSetter, originValues, blockNextDerive = false;
  var reflectOldValues = reflect.length >= 2;
  var wrappedDerive = (got, set) => {
    childDerivedSetter = set;
    if (reflectOldValues) {
      originValues = got;
    }
    if (!blockNextDerive) {
      let returned = derive(got, set);
      if (derive.length < 2) {
        set(returned);
      } else {
        return returned;
      }
    }
    blockNextDerive = false;
  };
  var childDerived = derived(origins, wrappedDerive, initial);
  var singleOrigin = !Array.isArray(origins);
  function doReflect(reflecting) {
    var setWith = reflect(reflecting, originValues);
    if (singleOrigin) {
      blockNextDerive = true;
      origins.set(setWith);
    } else {
      setWith.forEach((value, i) => {
        blockNextDerive = true;
        origins[i].set(value);
      });
    }
    blockNextDerive = false;
  }
  var tryingSet = false;
  function update2(fn) {
    var isUpdated, mutatedBySubscriptions, oldValue, newValue;
    if (tryingSet) {
      newValue = fn(get_store_value(childDerived));
      childDerivedSetter(newValue);
      return;
    }
    var unsubscribe = childDerived.subscribe((value) => {
      if (!tryingSet) {
        oldValue = value;
      } else if (!isUpdated) {
        isUpdated = true;
      } else {
        mutatedBySubscriptions = true;
      }
    });
    newValue = fn(oldValue);
    tryingSet = true;
    childDerivedSetter(newValue);
    unsubscribe();
    tryingSet = false;
    if (mutatedBySubscriptions) {
      newValue = get_store_value(childDerived);
    }
    if (isUpdated) {
      doReflect(newValue);
    }
  }
  return {
    subscribe: childDerived.subscribe,
    set(value) {
      update2(() => value);
    },
    update: update2
  };
}
const TOAST_LIMIT = 20;
const toasts = writable([]);
const pausedAt = writable(null);
const toastTimeouts = /* @__PURE__ */ new Map();
const addToRemoveQueue = (toastId) => {
  if (toastTimeouts.has(toastId)) {
    return;
  }
  const timeout = setTimeout(() => {
    toastTimeouts.delete(toastId);
    remove(toastId);
  }, 1e3);
  toastTimeouts.set(toastId, timeout);
};
const clearFromRemoveQueue = (toastId) => {
  const timeout = toastTimeouts.get(toastId);
  if (timeout) {
    clearTimeout(timeout);
  }
};
function update(toast2) {
  if (toast2.id) {
    clearFromRemoveQueue(toast2.id);
  }
  toasts.update(($toasts) => $toasts.map((t) => t.id === toast2.id ? { ...t, ...toast2 } : t));
}
function add(toast2) {
  toasts.update(($toasts) => [toast2, ...$toasts].slice(0, TOAST_LIMIT));
}
function upsert(toast2) {
  if (get_store_value(toasts).find((t) => t.id === toast2.id)) {
    update(toast2);
  } else {
    add(toast2);
  }
}
function dismiss(toastId) {
  toasts.update(($toasts) => {
    if (toastId) {
      addToRemoveQueue(toastId);
    } else {
      $toasts.forEach((toast2) => {
        addToRemoveQueue(toast2.id);
      });
    }
    return $toasts.map((t) => t.id === toastId || toastId === void 0 ? { ...t, visible: false } : t);
  });
}
function remove(toastId) {
  toasts.update(($toasts) => {
    if (toastId === void 0) {
      return [];
    }
    return $toasts.filter((t) => t.id !== toastId);
  });
}
function startPause(time) {
  pausedAt.set(time);
}
function endPause(time) {
  let diff;
  pausedAt.update(($pausedAt) => {
    diff = time - ($pausedAt || 0);
    return null;
  });
  toasts.update(($toasts) => $toasts.map((t) => ({
    ...t,
    pauseDuration: t.pauseDuration + diff
  })));
}
const defaultTimeouts = {
  blank: 4e3,
  error: 4e3,
  success: 2e3,
  loading: Infinity,
  custom: 4e3
};
function useToasterStore(toastOptions = {}) {
  const mergedToasts = writableDerived(toasts, ($toasts) => $toasts.map((t) => ({
    ...toastOptions,
    ...toastOptions[t.type],
    ...t,
    duration: t.duration || toastOptions[t.type]?.duration || toastOptions?.duration || defaultTimeouts[t.type],
    style: [toastOptions.style, toastOptions[t.type]?.style, t.style].join(";")
  })), ($toasts) => $toasts);
  return {
    toasts: mergedToasts,
    pausedAt
  };
}
const isFunction = (valOrFunction) => typeof valOrFunction === "function";
const resolveValue = (valOrFunction, arg) => isFunction(valOrFunction) ? valOrFunction(arg) : valOrFunction;
const genId = (() => {
  let count = 0;
  return () => {
    count += 1;
    return count.toString();
  };
})();
const prefersReducedMotion = (() => {
  let shouldReduceMotion;
  return () => {
    if (shouldReduceMotion === void 0 && typeof window !== "undefined") {
      const mediaQuery = matchMedia("(prefers-reduced-motion: reduce)");
      shouldReduceMotion = !mediaQuery || mediaQuery.matches;
    }
    return shouldReduceMotion;
  };
})();
const createToast = (message, type = "blank", opts) => ({
  createdAt: Date.now(),
  visible: true,
  type,
  ariaProps: {
    role: "status",
    "aria-live": "polite"
  },
  message,
  pauseDuration: 0,
  ...opts,
  id: opts?.id || genId()
});
const createHandler = (type) => (message, options) => {
  const toast2 = createToast(message, type, options);
  upsert(toast2);
  return toast2.id;
};
const toast = (message, opts) => createHandler("blank")(message, opts);
toast.error = createHandler("error");
toast.success = createHandler("success");
toast.loading = createHandler("loading");
toast.custom = createHandler("custom");
toast.dismiss = (toastId) => {
  dismiss(toastId);
};
toast.remove = (toastId) => remove(toastId);
toast.promise = (promise, msgs, opts) => {
  const id = toast.loading(msgs.loading, { ...opts, ...opts?.loading });
  promise.then((p) => {
    toast.success(resolveValue(msgs.success, p), {
      id,
      ...opts,
      ...opts?.success
    });
    return p;
  }).catch((e) => {
    toast.error(resolveValue(msgs.error, e), {
      id,
      ...opts,
      ...opts?.error
    });
  });
  return promise;
};
const CheckmarkIcon_svelte_svelte_type_style_lang = "";
const ErrorIcon_svelte_svelte_type_style_lang = "";
const LoaderIcon_svelte_svelte_type_style_lang = "";
const ToastIcon_svelte_svelte_type_style_lang = "";
const ToastMessage_svelte_svelte_type_style_lang = "";
const ToastBar_svelte_svelte_type_style_lang = "";
const ToastWrapper_svelte_svelte_type_style_lang = "";
const Toaster_svelte_svelte_type_style_lang = "";
export {
  update as a,
  endPause as e,
  prefersReducedMotion as p,
  startPause as s,
  toast as t,
  useToasterStore as u
};
78 frontend/.netlify/server/chunks/index.js (Normal file)
@@ -0,0 +1,78 @@
let HttpError = class HttpError2 {
  /**
   * @param {number} status
   * @param {{message: string} extends App.Error ? (App.Error | string | undefined) : App.Error} body
   */
  constructor(status, body) {
    this.status = status;
    if (typeof body === "string") {
      this.body = { message: body };
    } else if (body) {
      this.body = body;
    } else {
      this.body = { message: `Error: ${status}` };
    }
  }
  toString() {
    return JSON.stringify(this.body);
  }
};
let Redirect = class Redirect2 {
  /**
   * @param {300 | 301 | 302 | 303 | 304 | 305 | 306 | 307 | 308} status
   * @param {string} location
   */
  constructor(status, location) {
    this.status = status;
    this.location = location;
  }
};
let ActionFailure = class ActionFailure2 {
  /**
   * @param {number} status
   * @param {T} [data]
   */
  constructor(status, data) {
    this.status = status;
    this.data = data;
  }
};
function error(status, message) {
  if (isNaN(status) || status < 400 || status > 599) {
    throw new Error(`HTTP error status codes must be between 400 and 599 — ${status} is invalid`);
  }
  return new HttpError(status, message);
}
function json(data, init) {
  const body = JSON.stringify(data);
  const headers = new Headers(init?.headers);
  if (!headers.has("content-length")) {
    headers.set("content-length", encoder.encode(body).byteLength.toString());
  }
  if (!headers.has("content-type")) {
    headers.set("content-type", "application/json");
  }
  return new Response(body, {
    ...init,
    headers
  });
}
const encoder = new TextEncoder();
function text(body, init) {
  const headers = new Headers(init?.headers);
  if (!headers.has("content-length")) {
    headers.set("content-length", encoder.encode(body).byteLength.toString());
  }
  return new Response(body, {
    ...init,
    headers
  });
}
export {
  ActionFailure as A,
  HttpError as H,
  Redirect as R,
  error as e,
  json as j,
  text as t
};
92 frontend/.netlify/server/chunks/index2.js (Normal file)
@@ -0,0 +1,92 @@
import { n as noop, l as safe_not_equal, h as subscribe, r as run_all, p as is_function } from "./index3.js";
const subscriber_queue = [];
function readable(value, start) {
  return {
    subscribe: writable(value, start).subscribe
  };
}
function writable(value, start = noop) {
  let stop;
  const subscribers = /* @__PURE__ */ new Set();
  function set(new_value) {
    if (safe_not_equal(value, new_value)) {
      value = new_value;
      if (stop) {
        const run_queue = !subscriber_queue.length;
        for (const subscriber of subscribers) {
          subscriber[1]();
          subscriber_queue.push(subscriber, value);
        }
        if (run_queue) {
          for (let i = 0; i < subscriber_queue.length; i += 2) {
            subscriber_queue[i][0](subscriber_queue[i + 1]);
          }
          subscriber_queue.length = 0;
        }
      }
    }
  }
  function update(fn) {
    set(fn(value));
  }
  function subscribe2(run, invalidate = noop) {
    const subscriber = [run, invalidate];
    subscribers.add(subscriber);
    if (subscribers.size === 1) {
      stop = start(set) || noop;
    }
    run(value);
    return () => {
      subscribers.delete(subscriber);
      if (subscribers.size === 0 && stop) {
        stop();
        stop = null;
      }
    };
  }
  return { set, update, subscribe: subscribe2 };
}
function derived(stores, fn, initial_value) {
  const single = !Array.isArray(stores);
  const stores_array = single ? [stores] : stores;
  const auto = fn.length < 2;
  return readable(initial_value, (set) => {
    let started = false;
    const values = [];
    let pending = 0;
    let cleanup = noop;
    const sync = () => {
      if (pending) {
        return;
      }
      cleanup();
      const result = fn(single ? values[0] : values, set);
      if (auto) {
        set(result);
      } else {
        cleanup = is_function(result) ? result : noop;
      }
    };
    const unsubscribers = stores_array.map((store, i) => subscribe(store, (value) => {
      values[i] = value;
      pending &= ~(1 << i);
      if (started) {
        sync();
      }
    }, () => {
      pending |= 1 << i;
    }));
    started = true;
    sync();
    return function stop() {
      run_all(unsubscribers);
      cleanup();
      started = false;
    };
  });
}
export {
  derived as d,
  readable as r,
  writable as w
};
250 frontend/.netlify/server/chunks/index3.js (Normal file)
@@ -0,0 +1,250 @@
function noop() {
}
function run(fn) {
  return fn();
}
function blank_object() {
  return /* @__PURE__ */ Object.create(null);
}
function run_all(fns) {
  fns.forEach(run);
}
function is_function(thing) {
  return typeof thing === "function";
}
function safe_not_equal(a, b) {
  return a != a ? b == b : a !== b || (a && typeof a === "object" || typeof a === "function");
}
function subscribe(store, ...callbacks) {
  if (store == null) {
    return noop;
  }
  const unsub = store.subscribe(...callbacks);
  return unsub.unsubscribe ? () => unsub.unsubscribe() : unsub;
}
function get_store_value(store) {
  let value;
  subscribe(store, (_) => value = _)();
  return value;
}
let current_component;
function set_current_component(component) {
  current_component = component;
}
function get_current_component() {
  if (!current_component)
    throw new Error("Function called outside component initialization");
  return current_component;
}
function onDestroy(fn) {
  get_current_component().$$.on_destroy.push(fn);
}
function setContext(key, context) {
  get_current_component().$$.context.set(key, context);
  return context;
}
function getContext(key) {
  return get_current_component().$$.context.get(key);
}
const _boolean_attributes = [
  "allowfullscreen",
  "allowpaymentrequest",
  "async",
  "autofocus",
  "autoplay",
  "checked",
  "controls",
  "default",
  "defer",
  "disabled",
  "formnovalidate",
  "hidden",
  "inert",
  "ismap",
  "itemscope",
  "loop",
  "multiple",
  "muted",
  "nomodule",
  "novalidate",
  "open",
  "playsinline",
  "readonly",
  "required",
  "reversed",
  "selected"
];
const boolean_attributes = /* @__PURE__ */ new Set([..._boolean_attributes]);
const invalid_attribute_name_character = /[\s'">/=\u{FDD0}-\u{FDEF}\u{FFFE}\u{FFFF}\u{1FFFE}\u{1FFFF}\u{2FFFE}\u{2FFFF}\u{3FFFE}\u{3FFFF}\u{4FFFE}\u{4FFFF}\u{5FFFE}\u{5FFFF}\u{6FFFE}\u{6FFFF}\u{7FFFE}\u{7FFFF}\u{8FFFE}\u{8FFFF}\u{9FFFE}\u{9FFFF}\u{AFFFE}\u{AFFFF}\u{BFFFE}\u{BFFFF}\u{CFFFE}\u{CFFFF}\u{DFFFE}\u{DFFFF}\u{EFFFE}\u{EFFFF}\u{FFFFE}\u{FFFFF}\u{10FFFE}\u{10FFFF}]/u;
function spread(args, attrs_to_add) {
  const attributes = Object.assign({}, ...args);
  if (attrs_to_add) {
    const classes_to_add = attrs_to_add.classes;
    const styles_to_add = attrs_to_add.styles;
    if (classes_to_add) {
      if (attributes.class == null) {
        attributes.class = classes_to_add;
      } else {
        attributes.class += " " + classes_to_add;
      }
    }
    if (styles_to_add) {
      if (attributes.style == null) {
        attributes.style = style_object_to_string(styles_to_add);
      } else {
        attributes.style = style_object_to_string(merge_ssr_styles(attributes.style, styles_to_add));
      }
    }
  }
  let str = "";
  Object.keys(attributes).forEach((name) => {
    if (invalid_attribute_name_character.test(name))
      return;
    const value = attributes[name];
    if (value === true)
      str += " " + name;
    else if (boolean_attributes.has(name.toLowerCase())) {
      if (value)
        str += " " + name;
    } else if (value != null) {
      str += ` ${name}="${value}"`;
    }
  });
  return str;
}
function merge_ssr_styles(style_attribute, style_directive) {
  const style_object = {};
  for (const individual_style of style_attribute.split(";")) {
    const colon_index = individual_style.indexOf(":");
    const name = individual_style.slice(0, colon_index).trim();
    const value = individual_style.slice(colon_index + 1).trim();
    if (!name)
      continue;
    style_object[name] = value;
  }
  for (const name in style_directive) {
    const value = style_directive[name];
    if (value) {
      style_object[name] = value;
    } else {
      delete style_object[name];
    }
  }
  return style_object;
}
const ATTR_REGEX = /[&"]/g;
const CONTENT_REGEX = /[&<]/g;
function escape(value, is_attr = false) {
  const str = String(value);
  const pattern = is_attr ? ATTR_REGEX : CONTENT_REGEX;
  pattern.lastIndex = 0;
  let escaped = "";
  let last = 0;
  while (pattern.test(str)) {
    const i = pattern.lastIndex - 1;
    const ch = str[i];
    escaped += str.substring(last, i) + (ch === "&" ? "&amp;" : ch === '"' ? "&quot;" : "&lt;");
    last = i + 1;
  }
  return escaped + str.substring(last);
}
function escape_attribute_value(value) {
  const should_escape = typeof value === "string" || value && typeof value === "object";
  return should_escape ? escape(value, true) : value;
}
function escape_object(obj) {
  const result = {};
|
||||
for (const key in obj) {
|
||||
result[key] = escape_attribute_value(obj[key]);
|
||||
}
|
||||
return result;
|
||||
}
|
||||
function each(items, fn) {
|
||||
let str = "";
|
||||
for (let i = 0; i < items.length; i += 1) {
|
||||
str += fn(items[i], i);
|
||||
}
|
||||
return str;
|
||||
}
|
||||
const missing_component = {
|
||||
$$render: () => ""
|
||||
};
|
||||
function validate_component(component, name) {
|
||||
if (!component || !component.$$render) {
|
||||
if (name === "svelte:component")
|
||||
name += " this={...}";
|
||||
throw new Error(`<${name}> is not a valid SSR component. You may need to review your build config to ensure that dependencies are compiled, rather than imported as pre-compiled modules. Otherwise you may need to fix a <${name}>.`);
|
||||
}
|
||||
return component;
|
||||
}
|
||||
let on_destroy;
|
||||
function create_ssr_component(fn) {
|
||||
function $$render(result, props, bindings, slots, context) {
|
||||
const parent_component = current_component;
|
||||
const $$ = {
|
||||
on_destroy,
|
||||
context: new Map(context || (parent_component ? parent_component.$$.context : [])),
|
||||
// these will be immediately discarded
|
||||
on_mount: [],
|
||||
before_update: [],
|
||||
after_update: [],
|
||||
callbacks: blank_object()
|
||||
};
|
||||
set_current_component({ $$ });
|
||||
const html = fn(result, props, bindings, slots);
|
||||
set_current_component(parent_component);
|
||||
return html;
|
||||
}
|
||||
return {
|
||||
render: (props = {}, { $$slots = {}, context = /* @__PURE__ */ new Map() } = {}) => {
|
||||
on_destroy = [];
|
||||
const result = { title: "", head: "", css: /* @__PURE__ */ new Set() };
|
||||
const html = $$render(result, props, {}, $$slots, context);
|
||||
run_all(on_destroy);
|
||||
return {
|
||||
html,
|
||||
css: {
|
||||
code: Array.from(result.css).map((css) => css.code).join("\n"),
|
||||
map: null
|
||||
// TODO
|
||||
},
|
||||
head: result.title + result.head
|
||||
};
|
||||
},
|
||||
$$render
|
||||
};
|
||||
}
|
||||
function add_attribute(name, value, boolean) {
|
||||
if (value == null || boolean && !value)
|
||||
return "";
|
||||
const assignment = boolean && value === true ? "" : `="${escape(value, true)}"`;
|
||||
return ` ${name}${assignment}`;
|
||||
}
|
||||
function style_object_to_string(style_object) {
|
||||
return Object.keys(style_object).filter((key) => style_object[key]).map((key) => `${key}: ${escape_attribute_value(style_object[key])};`).join(" ");
|
||||
}
|
||||
function add_styles(style_object) {
|
||||
const styles = style_object_to_string(style_object);
|
||||
return styles ? ` style="${styles}"` : "";
|
||||
}
|
||||
export {
|
||||
add_styles as a,
|
||||
spread as b,
|
||||
create_ssr_component as c,
|
||||
escape_object as d,
|
||||
escape as e,
|
||||
merge_ssr_styles as f,
|
||||
add_attribute as g,
|
||||
subscribe as h,
|
||||
each as i,
|
||||
getContext as j,
|
||||
get_store_value as k,
|
||||
safe_not_equal as l,
|
||||
missing_component as m,
|
||||
noop as n,
|
||||
onDestroy as o,
|
||||
is_function as p,
|
||||
run_all as r,
|
||||
setContext as s,
|
||||
validate_component as v
|
||||
};
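As a standalone illustration of the two escaping modes this chunk relies on (content mode escapes `&` and `<`; attribute mode escapes `&` and `"`), here is a self-contained sketch runnable in Node. It is a reimplementation for illustration, mirroring the behavior of the module's `escape` export rather than being the module itself:

```javascript
// Sketch of the SSR chunk's two-mode HTML escaper. The global regexes'
// `lastIndex` is advanced by each `test()` call, which is how the loop
// walks the string; resetting it first makes the function re-entrant.
const ATTR_REGEX = /[&"]/g;
const CONTENT_REGEX = /[&<]/g;

function escape(value, is_attr = false) {
  const str = String(value);
  const pattern = is_attr ? ATTR_REGEX : CONTENT_REGEX;
  pattern.lastIndex = 0;
  let escaped = "";
  let last = 0;
  while (pattern.test(str)) {
    const i = pattern.lastIndex - 1;
    const ch = str[i];
    escaped += str.substring(last, i) + (ch === "&" ? "&amp;" : ch === '"' ? "&quot;" : "&lt;");
    last = i + 1;
  }
  return escaped + str.substring(last);
}

console.log(escape('<b>"hi" & bye</b>'));       // content mode: < and & escaped
console.log(escape('<b>"hi" & bye</b>', true)); // attribute mode: " and & escaped
```

Note that each mode leaves the other mode's characters untouched, which is why attribute values and element content go through different call sites.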
179
frontend/.netlify/server/chunks/internal.js
Normal file
@@ -0,0 +1,179 @@
import { c as create_ssr_component, s as setContext, v as validate_component, m as missing_component } from "./index3.js";
import "./shared-server.js";
let base = "";
let assets = base;
const initial = { base, assets };
function reset() {
  base = initial.base;
  assets = initial.assets;
}
function set_assets(path) {
  assets = initial.assets = path;
}
function afterUpdate() {
}
function set_building() {
}
const Root = create_ssr_component(($$result, $$props, $$bindings, slots) => {
  let { stores } = $$props;
  let { page } = $$props;
  let { constructors } = $$props;
  let { components = [] } = $$props;
  let { form } = $$props;
  let { data_0 = null } = $$props;
  let { data_1 = null } = $$props;
  {
    setContext("__svelte__", stores);
  }
  afterUpdate(stores.page.notify);
  if ($$props.stores === void 0 && $$bindings.stores && stores !== void 0)
    $$bindings.stores(stores);
  if ($$props.page === void 0 && $$bindings.page && page !== void 0)
    $$bindings.page(page);
  if ($$props.constructors === void 0 && $$bindings.constructors && constructors !== void 0)
    $$bindings.constructors(constructors);
  if ($$props.components === void 0 && $$bindings.components && components !== void 0)
    $$bindings.components(components);
  if ($$props.form === void 0 && $$bindings.form && form !== void 0)
    $$bindings.form(form);
  if ($$props.data_0 === void 0 && $$bindings.data_0 && data_0 !== void 0)
    $$bindings.data_0(data_0);
  if ($$props.data_1 === void 0 && $$bindings.data_1 && data_1 !== void 0)
    $$bindings.data_1(data_1);
  let $$settled;
  let $$rendered;
  do {
    $$settled = true;
    {
      stores.page.set(page);
    }
    $$rendered = `


${constructors[1] ? `${validate_component(constructors[0] || missing_component, "svelte:component").$$render(
      $$result,
      { data: data_0, this: components[0] },
      {
        this: ($$value) => {
          components[0] = $$value;
          $$settled = false;
        }
      },
      {
        default: () => {
          return `${validate_component(constructors[1] || missing_component, "svelte:component").$$render(
            $$result,
            { data: data_1, form, this: components[1] },
            {
              this: ($$value) => {
                components[1] = $$value;
                $$settled = false;
              }
            },
            {}
          )}`;
        }
      }
    )}` : `${validate_component(constructors[0] || missing_component, "svelte:component").$$render(
      $$result,
      { data: data_0, form, this: components[0] },
      {
        this: ($$value) => {
          components[0] = $$value;
          $$settled = false;
        }
      },
      {}
    )}`}

${``}`;
  } while (!$$settled);
  return $$rendered;
});
const options = {
  app_template_contains_nonce: false,
  csp: { "mode": "auto", "directives": { "upgrade-insecure-requests": false, "block-all-mixed-content": false }, "reportOnly": { "upgrade-insecure-requests": false, "block-all-mixed-content": false } },
  csrf_check_origin: true,
  embedded: false,
  env_public_prefix: "PUBLIC_",
  hooks: null,
  // added lazily, via `get_hooks`
  preload_strategy: "modulepreload",
  root: Root,
  service_worker: false,
  templates: {
    app: ({ head, body, assets: assets2, nonce, env }) => '<!DOCTYPE html>\n<html lang="en">\n  <head>\n    <meta charset="utf-8" />\n    <link rel="icon" href="' + assets2 + '/favicon.png" />\n    <meta name="viewport" content="width=device-width" />\n    ' + head + '\n  </head>\n  <body data-sveltekit-preload-data="hover">\n    <div style="display: contents">' + body + "</div>\n  </body>\n</html>\n",
    error: ({ status, message }) => '<!DOCTYPE html>\n<html lang="en">\n  <head>\n    <meta charset="utf-8" />\n    <title>' + message + `</title>

    <style>
      body {
        --bg: white;
        --fg: #222;
        --divider: #ccc;
        background: var(--bg);
        color: var(--fg);
        font-family: system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen,
          Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif;
        display: flex;
        align-items: center;
        justify-content: center;
        height: 100vh;
      }

      .error {
        display: flex;
        align-items: center;
        max-width: 32rem;
        margin: 0 1rem;
      }

      .status {
        font-weight: 200;
        font-size: 3rem;
        line-height: 1;
        position: relative;
        top: -0.05rem;
      }

      .message {
        border-left: 1px solid var(--divider);
        padding: 0 0 0 1rem;
        margin: 0 0 0 1rem;
        min-height: 2.5rem;
        display: flex;
        align-items: center;
      }

      .message h1 {
        font-weight: 400;
        font-size: 1em;
        margin: 0;
      }

      @media (prefers-color-scheme: dark) {
        body {
          --bg: #222;
          --fg: #ddd;
          --divider: #666;
        }
      }
    </style>
  </head>
  <body>
    <div class="error">
      <span class="status">` + status + '</span>\n      <div class="message">\n        <h1>' + message + "</h1>\n      </div>\n    </div>\n  </body>\n</html>\n"
  },
  version_hash: "1vmmy0u"
};
function get_hooks() {
  return {};
}
export {
  assets as a,
  base as b,
  set_building as c,
  get_hooks as g,
  options as o,
  reset as r,
  set_assets as s
};
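To make the `templates.app` contract concrete: the adapter calls the template function with the rendered `head` and `body` strings plus the assets base path, and gets back the final HTML shell. The sketch below is a simplified, hypothetical stand-in (shorter markup, made-up sample values), not the exact template from this chunk:

```javascript
// Minimal stand-in for a SvelteKit-style `templates.app` function:
// pure string assembly from the pieces the server render produced.
const app_template = ({ head, body, assets }) =>
  '<!DOCTYPE html>\n<html lang="en">\n  <head>\n    <link rel="icon" href="' +
  assets + '/favicon.png" />\n    ' + head + '\n  </head>\n  <body>\n    ' +
  '<div style="display: contents">' + body + "</div>\n  </body>\n</html>\n";

// Hypothetical sample inputs, as an adapter might supply them.
const html = app_template({
  head: "<title>Chatr</title>",
  body: '<div style="display: contents">…app…</div>',
  assets: "/_app"
});
console.log(html);
```

Because the template is just a function of strings, the same mechanism works for the `error` template: only the parameter names (`status`, `message`) differ.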
11
frontend/.netlify/server/chunks/shared-server.js
Normal file
@@ -0,0 +1,11 @@
let public_env = {};
function set_private_env(environment) {
}
function set_public_env(environment) {
  public_env = environment;
}
export {
  set_private_env as a,
  public_env as p,
  set_public_env as s
};
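The pattern above works because ESM exports are live bindings: reassigning the module-level `public_env` through `set_public_env` is visible to every importer of `p`. The sketch below simulates that with a closure and a getter instead of real ESM imports, so it runs as a single script; the `PUBLIC_API_URL` value is a hypothetical example:

```javascript
// Closure-based simulation of the shared-server env module:
// one mutable slot, a setter, and a live read path for consumers.
const shared = (() => {
  let public_env = {};
  return {
    set_public_env(environment) {
      public_env = environment;
    },
    // Getter plays the role of the live ESM binding.
    get public_env() {
      return public_env;
    }
  };
})();

shared.set_public_env({ PUBLIC_API_URL: "https://api.example.com/" });
console.log(shared.public_env.PUBLIC_API_URL);
```

A plain `const env = shared.public_env` snapshot taken before the setter ran would not see the update, which is exactly why the real module exports the binding rather than a copy.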
30
frontend/.netlify/server/entries/fallbacks/error.svelte.js
Normal file
@@ -0,0 +1,30 @@
import { j as getContext, c as create_ssr_component, h as subscribe, e as escape } from "../../chunks/index3.js";
const getStores = () => {
  const stores = getContext("__svelte__");
  return {
    page: {
      subscribe: stores.page.subscribe
    },
    navigating: {
      subscribe: stores.navigating.subscribe
    },
    updated: stores.updated
  };
};
const page = {
  /** @param {(value: any) => void} fn */
  subscribe(fn) {
    const store = getStores().page;
    return store.subscribe(fn);
  }
};
const Error$1 = create_ssr_component(($$result, $$props, $$bindings, slots) => {
  let $page, $$unsubscribe_page;
  $$unsubscribe_page = subscribe(page, (value) => $page = value);
  $$unsubscribe_page();
  return `<h1>${escape($page.status)}</h1>
<p>${escape($page.error?.message)}</p>`;
});
export {
  Error$1 as default
};
289
frontend/.netlify/server/entries/pages/_layout.svelte.js
Normal file
@@ -0,0 +1,289 @@
import { o as onDestroy, c as create_ssr_component, a as add_styles, e as escape, v as validate_component, m as missing_component, b as spread, d as escape_object, f as merge_ssr_styles, g as add_attribute, h as subscribe, i as each } from "../../chunks/index3.js";
import { u as useToasterStore, t as toast, s as startPause, e as endPause, a as update, p as prefersReducedMotion } from "../../chunks/Toaster.svelte_svelte_type_style_lang.js";
const app = "";
function calculateOffset(toast2, $toasts, opts) {
  const { reverseOrder, gutter = 8, defaultPosition } = opts || {};
  const relevantToasts = $toasts.filter((t) => (t.position || defaultPosition) === (toast2.position || defaultPosition) && t.height);
  const toastIndex = relevantToasts.findIndex((t) => t.id === toast2.id);
  const toastsBefore = relevantToasts.filter((toast3, i) => i < toastIndex && toast3.visible).length;
  const offset = relevantToasts.filter((t) => t.visible).slice(...reverseOrder ? [toastsBefore + 1] : [0, toastsBefore]).reduce((acc, t) => acc + (t.height || 0) + gutter, 0);
  return offset;
}
const handlers = {
  startPause() {
    startPause(Date.now());
  },
  endPause() {
    endPause(Date.now());
  },
  updateHeight: (toastId, height) => {
    update({ id: toastId, height });
  },
  calculateOffset
};
function useToaster(toastOptions) {
  const { toasts, pausedAt } = useToasterStore(toastOptions);
  const timeouts = /* @__PURE__ */ new Map();
  let _pausedAt;
  const unsubscribes = [
    pausedAt.subscribe(($pausedAt) => {
      if ($pausedAt) {
        for (const [, timeoutId] of timeouts) {
          clearTimeout(timeoutId);
        }
        timeouts.clear();
      }
      _pausedAt = $pausedAt;
    }),
    toasts.subscribe(($toasts) => {
      if (_pausedAt) {
        return;
      }
      const now = Date.now();
      for (const t of $toasts) {
        if (timeouts.has(t.id)) {
          continue;
        }
        if (t.duration === Infinity) {
          continue;
        }
        const durationLeft = (t.duration || 0) + t.pauseDuration - (now - t.createdAt);
        if (durationLeft < 0) {
          if (t.visible) {
            toast.dismiss(t.id);
          }
          return null;
        }
        timeouts.set(t.id, setTimeout(() => toast.dismiss(t.id), durationLeft));
      }
    })
  ];
  onDestroy(() => {
    for (const unsubscribe of unsubscribes) {
      unsubscribe();
    }
  });
  return { toasts, handlers };
}
const css$7 = {
  code: "div.svelte-lzwg39{width:20px;opacity:0;height:20px;border-radius:10px;background:var(--primary, #61d345);position:relative;transform:rotate(45deg);animation:svelte-lzwg39-circleAnimation 0.3s cubic-bezier(0.175, 0.885, 0.32, 1.275) forwards;animation-delay:100ms}div.svelte-lzwg39::after{content:'';box-sizing:border-box;animation:svelte-lzwg39-checkmarkAnimation 0.2s ease-out forwards;opacity:0;animation-delay:200ms;position:absolute;border-right:2px solid;border-bottom:2px solid;border-color:var(--secondary, #fff);bottom:6px;left:6px;height:10px;width:6px}@keyframes svelte-lzwg39-circleAnimation{from{transform:scale(0) rotate(45deg);opacity:0}to{transform:scale(1) rotate(45deg);opacity:1}}@keyframes svelte-lzwg39-checkmarkAnimation{0%{height:0;width:0;opacity:0}40%{height:0;width:6px;opacity:1}100%{opacity:1;height:10px}}",
  map: null
};
const CheckmarkIcon = create_ssr_component(($$result, $$props, $$bindings, slots) => {
  let { primary = "#61d345" } = $$props;
  let { secondary = "#fff" } = $$props;
  if ($$props.primary === void 0 && $$bindings.primary && primary !== void 0)
    $$bindings.primary(primary);
  if ($$props.secondary === void 0 && $$bindings.secondary && secondary !== void 0)
    $$bindings.secondary(secondary);
  $$result.css.add(css$7);
  return `


<div class="svelte-lzwg39"${add_styles({
    "--primary": primary,
    "--secondary": secondary
  })}></div>`;
});
const css$6 = {
  code: "div.svelte-10jnndo{width:20px;opacity:0;height:20px;border-radius:10px;background:var(--primary, #ff4b4b);position:relative;transform:rotate(45deg);animation:svelte-10jnndo-circleAnimation 0.3s cubic-bezier(0.175, 0.885, 0.32, 1.275) forwards;animation-delay:100ms}div.svelte-10jnndo::after,div.svelte-10jnndo::before{content:'';animation:svelte-10jnndo-firstLineAnimation 0.15s ease-out forwards;animation-delay:150ms;position:absolute;border-radius:3px;opacity:0;background:var(--secondary, #fff);bottom:9px;left:4px;height:2px;width:12px}div.svelte-10jnndo:before{animation:svelte-10jnndo-secondLineAnimation 0.15s ease-out forwards;animation-delay:180ms;transform:rotate(90deg)}@keyframes svelte-10jnndo-circleAnimation{from{transform:scale(0) rotate(45deg);opacity:0}to{transform:scale(1) rotate(45deg);opacity:1}}@keyframes svelte-10jnndo-firstLineAnimation{from{transform:scale(0);opacity:0}to{transform:scale(1);opacity:1}}@keyframes svelte-10jnndo-secondLineAnimation{from{transform:scale(0) rotate(90deg);opacity:0}to{transform:scale(1) rotate(90deg);opacity:1}}",
  map: null
};
const ErrorIcon = create_ssr_component(($$result, $$props, $$bindings, slots) => {
  let { primary = "#ff4b4b" } = $$props;
  let { secondary = "#fff" } = $$props;
  if ($$props.primary === void 0 && $$bindings.primary && primary !== void 0)
    $$bindings.primary(primary);
  if ($$props.secondary === void 0 && $$bindings.secondary && secondary !== void 0)
    $$bindings.secondary(secondary);
  $$result.css.add(css$6);
  return `


<div class="svelte-10jnndo"${add_styles({
    "--primary": primary,
    "--secondary": secondary
  })}></div>`;
});
const css$5 = {
  code: "div.svelte-bj4lu8{width:12px;height:12px;box-sizing:border-box;border:2px solid;border-radius:100%;border-color:var(--secondary, #e0e0e0);border-right-color:var(--primary, #616161);animation:svelte-bj4lu8-rotate 1s linear infinite}@keyframes svelte-bj4lu8-rotate{from{transform:rotate(0deg)}to{transform:rotate(360deg)}}",
  map: null
};
const LoaderIcon = create_ssr_component(($$result, $$props, $$bindings, slots) => {
  let { primary = "#616161" } = $$props;
  let { secondary = "#e0e0e0" } = $$props;
  if ($$props.primary === void 0 && $$bindings.primary && primary !== void 0)
    $$bindings.primary(primary);
  if ($$props.secondary === void 0 && $$bindings.secondary && secondary !== void 0)
    $$bindings.secondary(secondary);
  $$result.css.add(css$5);
  return `


<div class="svelte-bj4lu8"${add_styles({
    "--primary": primary,
    "--secondary": secondary
  })}></div>`;
});
const css$4 = {
  code: ".indicator.svelte-1c92bpz{position:relative;display:flex;justify-content:center;align-items:center;min-width:20px;min-height:20px}.status.svelte-1c92bpz{position:absolute}.animated.svelte-1c92bpz{position:relative;transform:scale(0.6);opacity:0.4;min-width:20px;animation:svelte-1c92bpz-enter 0.3s 0.12s cubic-bezier(0.175, 0.885, 0.32, 1.275) forwards}@keyframes svelte-1c92bpz-enter{from{transform:scale(0.6);opacity:0.4}to{transform:scale(1);opacity:1}}",
  map: null
};
const ToastIcon = create_ssr_component(($$result, $$props, $$bindings, slots) => {
  let type;
  let icon;
  let iconTheme;
  let { toast: toast2 } = $$props;
  if ($$props.toast === void 0 && $$bindings.toast && toast2 !== void 0)
    $$bindings.toast(toast2);
  $$result.css.add(css$4);
  ({ type, icon, iconTheme } = toast2);
  return `${typeof icon === "string" ? `<div class="animated svelte-1c92bpz">${escape(icon)}</div>` : `${typeof icon !== "undefined" ? `${validate_component(icon || missing_component, "svelte:component").$$render($$result, {}, {}, {})}` : `${type !== "blank" ? `<div class="indicator svelte-1c92bpz">${validate_component(LoaderIcon, "LoaderIcon").$$render($$result, Object.assign({}, iconTheme), {}, {})}
${type !== "loading" ? `<div class="status svelte-1c92bpz">${type === "error" ? `${validate_component(ErrorIcon, "ErrorIcon").$$render($$result, Object.assign({}, iconTheme), {}, {})}` : `${validate_component(CheckmarkIcon, "CheckmarkIcon").$$render($$result, Object.assign({}, iconTheme), {}, {})}`}</div>` : ``}</div>` : ``}`}`}`;
});
const css$3 = {
  code: ".message.svelte-o805t1{display:flex;justify-content:center;margin:4px 10px;color:inherit;flex:1 1 auto;white-space:pre-line}",
  map: null
};
const ToastMessage = create_ssr_component(($$result, $$props, $$bindings, slots) => {
  let { toast: toast2 } = $$props;
  if ($$props.toast === void 0 && $$bindings.toast && toast2 !== void 0)
    $$bindings.toast(toast2);
  $$result.css.add(css$3);
  return `<div${spread([{ class: "message" }, escape_object(toast2.ariaProps)], { classes: "svelte-o805t1" })}>${typeof toast2.message === "string" ? `${escape(toast2.message)}` : `${validate_component(toast2.message || missing_component, "svelte:component").$$render($$result, { toast: toast2 }, {}, {})}`}
</div>`;
});
const css$2 = {
  code: "@keyframes svelte-15lyehg-enterAnimation{0%{transform:translate3d(0, calc(var(--factor) * -200%), 0) scale(0.6);opacity:0.5}100%{transform:translate3d(0, 0, 0) scale(1);opacity:1}}@keyframes svelte-15lyehg-exitAnimation{0%{transform:translate3d(0, 0, -1px) scale(1);opacity:1}100%{transform:translate3d(0, calc(var(--factor) * -150%), -1px) scale(0.6);opacity:0}}@keyframes svelte-15lyehg-fadeInAnimation{0%{opacity:0}100%{opacity:1}}@keyframes svelte-15lyehg-fadeOutAnimation{0%{opacity:1}100%{opacity:0}}.base.svelte-15lyehg{display:flex;align-items:center;background:#fff;color:#363636;line-height:1.3;will-change:transform;box-shadow:0 3px 10px rgba(0, 0, 0, 0.1), 0 3px 3px rgba(0, 0, 0, 0.05);max-width:350px;pointer-events:auto;padding:8px 10px;border-radius:8px}.transparent.svelte-15lyehg{opacity:0}.enter.svelte-15lyehg{animation:svelte-15lyehg-enterAnimation 0.35s cubic-bezier(0.21, 1.02, 0.73, 1) forwards}.exit.svelte-15lyehg{animation:svelte-15lyehg-exitAnimation 0.4s cubic-bezier(0.06, 0.71, 0.55, 1) forwards}.fadeIn.svelte-15lyehg{animation:svelte-15lyehg-fadeInAnimation 0.35s cubic-bezier(0.21, 1.02, 0.73, 1) forwards}.fadeOut.svelte-15lyehg{animation:svelte-15lyehg-fadeOutAnimation 0.4s cubic-bezier(0.06, 0.71, 0.55, 1) forwards}",
  map: null
};
const ToastBar = create_ssr_component(($$result, $$props, $$bindings, slots) => {
  let { toast: toast2 } = $$props;
  let { position = void 0 } = $$props;
  let { style = "" } = $$props;
  let { Component = void 0 } = $$props;
  let factor;
  let animation;
  if ($$props.toast === void 0 && $$bindings.toast && toast2 !== void 0)
    $$bindings.toast(toast2);
  if ($$props.position === void 0 && $$bindings.position && position !== void 0)
    $$bindings.position(position);
  if ($$props.style === void 0 && $$bindings.style && style !== void 0)
    $$bindings.style(style);
  if ($$props.Component === void 0 && $$bindings.Component && Component !== void 0)
    $$bindings.Component(Component);
  $$result.css.add(css$2);
  {
    {
      const top = (toast2.position || position || "top-center").includes("top");
      factor = top ? 1 : -1;
      const [enter, exit] = prefersReducedMotion() ? ["fadeIn", "fadeOut"] : ["enter", "exit"];
      animation = toast2.visible ? enter : exit;
    }
  }
  return `<div class="${"base " + escape(toast2.height ? animation : "transparent", true) + " " + escape(toast2.className || "", true) + " svelte-15lyehg"}"${add_styles(merge_ssr_styles(escape(style, true) + "; " + escape(toast2.style, true), { "--factor": factor }))}>${Component ? `${validate_component(Component || missing_component, "svelte:component").$$render($$result, {}, {}, {
    message: () => {
      return `${validate_component(ToastMessage, "ToastMessage").$$render($$result, { toast: toast2, slot: "message" }, {}, {})}`;
    },
    icon: () => {
      return `${validate_component(ToastIcon, "ToastIcon").$$render($$result, { toast: toast2, slot: "icon" }, {}, {})}`;
    }
  })}` : `${slots.default ? slots.default({ ToastIcon, ToastMessage, toast: toast2 }) : `
${validate_component(ToastIcon, "ToastIcon").$$render($$result, { toast: toast2 }, {}, {})}
${validate_component(ToastMessage, "ToastMessage").$$render($$result, { toast: toast2 }, {}, {})}
`}`}
</div>`;
});
const css$1 = {
  code: ".wrapper.svelte-1pakgpd{left:0;right:0;display:flex;position:absolute;transform:translateY(calc(var(--offset, 16px) * var(--factor) * 1px))}.transition.svelte-1pakgpd{transition:all 230ms cubic-bezier(0.21, 1.02, 0.73, 1)}.active.svelte-1pakgpd{z-index:9999}.active.svelte-1pakgpd>*{pointer-events:auto}",
  map: null
};
const ToastWrapper = create_ssr_component(($$result, $$props, $$bindings, slots) => {
  let top;
  let bottom;
  let factor;
  let justifyContent;
  let { toast: toast2 } = $$props;
  let { setHeight } = $$props;
  let wrapperEl;
  if ($$props.toast === void 0 && $$bindings.toast && toast2 !== void 0)
    $$bindings.toast(toast2);
  if ($$props.setHeight === void 0 && $$bindings.setHeight && setHeight !== void 0)
    $$bindings.setHeight(setHeight);
  $$result.css.add(css$1);
  top = toast2.position?.includes("top") ? 0 : null;
  bottom = toast2.position?.includes("bottom") ? 0 : null;
  factor = toast2.position?.includes("top") ? 1 : -1;
  justifyContent = toast2.position?.includes("center") && "center" || toast2.position?.includes("right") && "flex-end" || null;
  return `<div class="${[
    "wrapper svelte-1pakgpd",
    (toast2.visible ? "active" : "") + " " + (!prefersReducedMotion() ? "transition" : "")
  ].join(" ").trim()}"${add_styles({
    "--factor": factor,
    "--offset": toast2.offset,
    top,
    bottom,
    "justify-content": justifyContent
  })}${add_attribute("this", wrapperEl, 0)}>${toast2.type === "custom" ? `${validate_component(ToastMessage, "ToastMessage").$$render($$result, { toast: toast2 }, {}, {})}` : `${slots.default ? slots.default({ toast: toast2 }) : `
${validate_component(ToastBar, "ToastBar").$$render($$result, { toast: toast2, position: toast2.position }, {}, {})}
`}`}
</div>`;
});
const css = {
  code: ".toaster.svelte-jyff3d{--default-offset:16px;position:fixed;z-index:9999;top:var(--default-offset);left:var(--default-offset);right:var(--default-offset);bottom:var(--default-offset);pointer-events:none}",
  map: null
};
const Toaster = create_ssr_component(($$result, $$props, $$bindings, slots) => {
  let $toasts, $$unsubscribe_toasts;
  let { reverseOrder = false } = $$props;
  let { position = "top-center" } = $$props;
  let { toastOptions = void 0 } = $$props;
  let { gutter = 8 } = $$props;
  let { containerStyle = void 0 } = $$props;
  let { containerClassName = void 0 } = $$props;
  const { toasts, handlers: handlers2 } = useToaster(toastOptions);
  $$unsubscribe_toasts = subscribe(toasts, (value) => $toasts = value);
  let _toasts;
  if ($$props.reverseOrder === void 0 && $$bindings.reverseOrder && reverseOrder !== void 0)
    $$bindings.reverseOrder(reverseOrder);
  if ($$props.position === void 0 && $$bindings.position && position !== void 0)
    $$bindings.position(position);
  if ($$props.toastOptions === void 0 && $$bindings.toastOptions && toastOptions !== void 0)
    $$bindings.toastOptions(toastOptions);
  if ($$props.gutter === void 0 && $$bindings.gutter && gutter !== void 0)
    $$bindings.gutter(gutter);
  if ($$props.containerStyle === void 0 && $$bindings.containerStyle && containerStyle !== void 0)
    $$bindings.containerStyle(containerStyle);
  if ($$props.containerClassName === void 0 && $$bindings.containerClassName && containerClassName !== void 0)
    $$bindings.containerClassName(containerClassName);
  $$result.css.add(css);
  _toasts = $toasts.map((toast2) => ({
    ...toast2,
    position: toast2.position || position,
    offset: handlers2.calculateOffset(toast2, $toasts, {
      reverseOrder,
      gutter,
      defaultPosition: position
    })
  }));
  $$unsubscribe_toasts();
  return `<div class="${"toaster " + escape(containerClassName || "", true) + " svelte-jyff3d"}"${add_attribute("style", containerStyle, 0)}>${each(_toasts, (toast2) => {
    return `${validate_component(ToastWrapper, "ToastWrapper").$$render(
      $$result,
      {
        toast: toast2,
        setHeight: (height) => handlers2.updateHeight(toast2.id, height)
      },
      {},
      {}
    )}`;
  })}
</div>`;
});
const Layout = create_ssr_component(($$result, $$props, $$bindings, slots) => {
  return `${validate_component(Toaster, "Toaster").$$render($$result, {}, {}, {})}
<div class="flex flex-col h-screen sm:justify-center sm:items-center"><div class="mx-auto px-3 md:px-0 sm:w-full md:w-4/5 lg:w-3/5 pt-24 sm:pt-0">${slots.default ? slots.default({}) : ``}</div></div>`;
});
export {
  Layout as default
};
39
frontend/.netlify/server/entries/pages/_page.svelte.js
Normal file
@@ -0,0 +1,39 @@
import { c as create_ssr_component, e as escape, i as each, g as add_attribute } from "../../chunks/index3.js";
import "../../chunks/Toaster.svelte_svelte_type_style_lang.js";
const Page = create_ssr_component(($$result, $$props, $$bindings, slots) => {
  let status, rooms;
  let { data } = $$props;
  let eth_pk = "";
  let room = "";
  let create_new_room = false;
  const filled_in = () => {
    return !(eth_pk.length > 0 && room.length > 0);
  };
  if ($$props.data === void 0 && $$bindings.data && data !== void 0)
    $$bindings.data(data);
  ({ status, rooms } = data);
  return `<div class="flex flex-col justify-center"><div class="title"><h1 class="text-3xl font-bold text-center">Chatr: a Websocket chatroom</h1></div>
<div class="join self-center"></div>
<div class="rooms self-center my-5"><div class="flex justify-between py-2"><h2 class="text-xl font-bold ">List of active chatrooms
</h2>
<button class="btn btn-square btn-sm btn-accent">↻</button></div>
${status && rooms.length < 1 ? `<div class="card bg-base-300 w-96 shadow-xl text-center"><div class="card-body"><h3 class="card-title ">${escape(status)}</h3></div></div>` : ``}
${rooms ? `${each(rooms, (room2) => {
    return `<button class="card bg-base-300 w-96 shadow-xl my-3 w-full"><div class="card-body"><div class="flex justify-between"><h2 class="card-title">${escape(room2)}</h2>
<button class="btn btn-primary btn-md">Select Room</button>
</div></div>
</button>`;
  })}` : ``}</div>
<div class="create self-center my-5 w-[40rem]"><div><label class="label" for="eth-private-key"><span class="label-text">Eth Private Key</span></label>
<input id="eth-private-key" placeholder="Eth Private Key" class="input input-bordered input-primary w-full bg-base-200 mb-4 mr-3"${add_attribute("value", eth_pk, 0)}></div>
<div><label class="label" for="room-name"><span class="label-text">Room name</span></label>
<input id="room-name" placeholder="Room Name" class="input input-bordered input-primary w-full bg-base-200 mb-4 mr-3"${add_attribute("value", room, 0)}></div>
<div class="form-control"><label class="label cursor-pointer"><span class="label-text">Create Room</span>
<input type="checkbox" class="checkbox checkbox-primary"${add_attribute("checked", create_new_room, 1)}></label></div>
<button class="btn btn-primary" ${filled_in() ? "disabled" : ""}>Join Room.</button></div>
<div class="github self-center"><p>Check out <a class="link link-accent" href="https://github.com/0xLaurens/chatr" target="_blank" rel="noreferrer">Chatr</a>, to view the source code!
</p></div></div>`;
});
export {
  Page as default
};
|
||||
19
frontend/.netlify/server/entries/pages/_page.ts.js
Normal file
19
frontend/.netlify/server/entries/pages/_page.ts.js
Normal file
@@ -0,0 +1,19 @@
import { p as public_env } from "../../chunks/shared-server.js";
const load = async ({ fetch }) => {
try {
let url = `${public_env.PUBLIC_API_URL}`;
if (url.endsWith("/")) {
url = url.slice(0, -1);
}
const res = await fetch(`${url}/rooms`);
return await res.json();
} catch (e) {
return {
status: "API offline (try again in a min)",
rooms: []
};
}
};
export {
load
};
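The `load` function above trims a trailing slash from `PUBLIC_API_URL` before appending `/rooms`. A minimal standalone sketch of that normalization (the helper name `roomsUrl` is illustrative, not from the codebase):

```typescript
// Hypothetical helper mirroring the trailing-slash handling in load() above.
function roomsUrl(base: string): string {
  // Drop a single trailing slash so we never produce "…//rooms".
  const url = base.endsWith("/") ? base.slice(0, -1) : base;
  return `${url}/rooms`;
}
```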
128
frontend/.netlify/server/entries/pages/chat/_page.svelte.js
Normal file
@@ -0,0 +1,128 @@
import { c as create_ssr_component, h as subscribe, o as onDestroy, g as add_attribute, e as escape, i as each } from "../../../chunks/index3.js";
import { w as writable } from "../../../chunks/index2.js";
import { p as public_env } from "../../../chunks/shared-server.js";
import { t as toast } from "../../../chunks/Toaster.svelte_svelte_type_style_lang.js";
import "../../../chunks/index.js";
const eth_private_key = writable("");
const group = writable("");
const createNewRoom = writable(false);
function guard(name) {
return () => {
throw new Error(`Cannot call ${name}(...) on the server`);
};
}
const goto = guard("goto");
const Page = create_ssr_component(($$result, $$props, $$bindings, slots) => {
let $group, $$unsubscribe_group;
let $eth_private_key, $$unsubscribe_eth_private_key;
let $createNewRoom, $$unsubscribe_createNewRoom;
$$unsubscribe_group = subscribe(group, (value) => $group = value);
$$unsubscribe_eth_private_key = subscribe(eth_private_key, (value) => $eth_private_key = value);
$$unsubscribe_createNewRoom = subscribe(createNewRoom, (value) => $createNewRoom = value);
let status = "🔴";
let statusTip = "Disconnected";
let message = "";
let messages = [];
let socket;
let interval;
let delay = 2e3;
let timeout = false;
let currentVotingProposal = null;
let showVotingUI = false;
function connect() {
socket = new WebSocket(`${public_env.PUBLIC_WEBSOCKET_URL}/ws`);
socket.addEventListener("open", () => {
status = "🟢";
statusTip = "Connected";
timeout = false;
socket.send(JSON.stringify({
eth_private_key: $eth_private_key,
group_id: $group,
should_create: $createNewRoom
}));
});
socket.addEventListener("close", () => {
status = "🔴";
statusTip = "Disconnected";
if (timeout == false) {
delay = 2e3;
timeout = true;
}
});
socket.addEventListener("message", function(event) {
if (event.data == "Username already taken.") {
toast.error(event.data);
goto("/");
} else {
try {
const data = JSON.parse(event.data);
if (data.type === "voting_proposal") {
currentVotingProposal = data.proposal;
showVotingUI = true;
toast.success("New voting proposal received!");
} else {
messages = [...messages, event.data];
}
} catch (e) {
messages = [...messages, event.data];
}
}
});
}
onDestroy(() => {
if (socket) {
socket.close();
}
if (interval) {
clearInterval(interval);
}
timeout = false;
});
{
{
if (interval || !timeout && interval) {
clearInterval(interval);
}
if (timeout == true) {
interval = setInterval(
() => {
if (delay < 3e4)
delay = delay * 2;
console.log("reconnecting in:", delay);
connect();
},
delay
);
}
}
}
$$unsubscribe_group();
$$unsubscribe_eth_private_key();
$$unsubscribe_createNewRoom();
return `<div class="container mx-auto p-4 max-w-4xl"><div class="flex justify-between items-center mb-8"><h1 class="text-3xl font-bold">MLS Chat <span class="tooltip"${add_attribute("data-tip", statusTip, 0)}>${escape(status)}</span></h1>
<button class="btn btn-accent">Clear Messages</button></div>

<div class="text-center mb-4"><button class="btn btn-warning btn-outline">🧪 Test Voting Proposal
</button></div>

${showVotingUI && currentVotingProposal ? `<div class="card bg-warning shadow-xl my-5"><div class="card-body"><h2 class="card-title text-warning-content">Voting Proposal</h2>
<div class="text-warning-content"><p><strong>Group Name:</strong> ${escape(currentVotingProposal.group_name)}</p>
<p><strong>Proposal ID:</strong> ${escape(currentVotingProposal.proposal_id)}</p>
<p><strong>Proposal Payload:</strong></p>
<div class="ml-4 whitespace-pre-line">${escape(currentVotingProposal.payload)}</div></div>
<div class="card-actions justify-end mt-4"><button class="btn btn-success">Vote YES</button>
<button class="btn btn-error">Vote NO</button>
<button class="btn btn-ghost">Dismiss</button></div></div></div>` : ``}

<div class="card h-96 flex-grow bg-base-300 shadow-xl my-10"><div class="card-body"><div class="flex flex-col overflow-y-auto max-h-80 scroll-smooth">${each(messages, (msg) => {
return `<div class="my-2">${escape(msg)}</div>`;
})}</div></div></div>

<div class="message-box flex justify-end"><form><input placeholder="Message" class="input input-bordered input-primary w-[51rem] bg-base-200 mb-2"${add_attribute("value", message, 0)}>
<button class="btn btn-primary w-full sm:w-auto btn-wide">Send</button></form></div></div>`;
});
export {
Page as default
};
2674
frontend/.netlify/server/index.js
Normal file
File diff suppressed because it is too large
Load Diff
10
frontend/.netlify/server/internal.js
Normal file
@@ -0,0 +1,10 @@
import { g, o, s, c } from "./chunks/internal.js";
import { a, s as s2 } from "./chunks/shared-server.js";
export {
g as get_hooks,
o as options,
s as set_assets,
c as set_building,
a as set_private_env,
s2 as set_public_env
};
35
frontend/.netlify/server/manifest-full.js
Normal file
@@ -0,0 +1,35 @@
export const manifest = {
appDir: "_app",
appPath: "_app",
assets: new Set(["favicon.png"]),
mimeTypes: {".png":"image/png"},
_: {
client: {"start":{"file":"_app/immutable/entry/start.530cd74f.js","imports":["_app/immutable/entry/start.530cd74f.js","_app/immutable/chunks/index.b5cfe40e.js","_app/immutable/chunks/singletons.995fdd8e.js","_app/immutable/chunks/index.0a9737cc.js"],"stylesheets":[],"fonts":[]},"app":{"file":"_app/immutable/entry/app.e73d9bc3.js","imports":["_app/immutable/entry/app.e73d9bc3.js","_app/immutable/chunks/index.b5cfe40e.js"],"stylesheets":[],"fonts":[]}},
nodes: [
() => import('./nodes/0.js'),
() => import('./nodes/1.js'),
() => import('./nodes/2.js'),
() => import('./nodes/3.js')
],
routes: [
{
id: "/",
pattern: /^\/$/,
params: [],
page: { layouts: [0], errors: [1], leaf: 2 },
endpoint: null
},
{
id: "/chat",
pattern: /^\/chat\/?$/,
params: [],
page: { layouts: [0], errors: [1], leaf: 3 },
endpoint: null
}
],
matchers: async () => {

return { };
}
}
};
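The `routes` patterns in the manifest above can be exercised directly; a quick sketch of how the `/chat` route matches (regex values copied from the manifest):

```typescript
// Route regexes as emitted in the manifest above.
const rootPattern = /^\/$/;
const chatPattern = /^\/chat\/?$/;

// The optional trailing slash means both /chat and /chat/ resolve to the same leaf node.
const matched = ["/chat", "/chat/", "/chat/x"].filter((p) => chatPattern.test(p));
```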
35
frontend/.netlify/server/manifest.js
Normal file
@@ -0,0 +1,35 @@
export const manifest = {
appDir: "_app",
appPath: "_app",
assets: new Set(["favicon.png"]),
mimeTypes: {".png":"image/png"},
_: {
client: {"start":{"file":"_app/immutable/entry/start.530cd74f.js","imports":["_app/immutable/entry/start.530cd74f.js","_app/immutable/chunks/index.b5cfe40e.js","_app/immutable/chunks/singletons.995fdd8e.js","_app/immutable/chunks/index.0a9737cc.js"],"stylesheets":[],"fonts":[]},"app":{"file":"_app/immutable/entry/app.e73d9bc3.js","imports":["_app/immutable/entry/app.e73d9bc3.js","_app/immutable/chunks/index.b5cfe40e.js"],"stylesheets":[],"fonts":[]}},
nodes: [
() => import('./nodes/0.js'),
() => import('./nodes/1.js'),
() => import('./nodes/2.js'),
() => import('./nodes/3.js')
],
routes: [
{
id: "/",
pattern: /^\/$/,
params: [],
page: { layouts: [0], errors: [1], leaf: 2 },
endpoint: null
},
{
id: "/chat",
pattern: /^\/chat\/?$/,
params: [],
page: { layouts: [0], errors: [1], leaf: 3 },
endpoint: null
}
],
matchers: async () => {

return { };
}
}
};
8
frontend/.netlify/server/nodes/0.js
Normal file
@@ -0,0 +1,8 @@

export const index = 0;
export const component = async () => (await import('../entries/pages/_layout.svelte.js')).default;
export const file = '_app/immutable/entry/_layout.svelte.04ad52d9.js';
export const imports = ["_app/immutable/entry/_layout.svelte.04ad52d9.js","_app/immutable/chunks/index.b5cfe40e.js","_app/immutable/chunks/Toaster.svelte_svelte_type_style_lang.9dbf3392.js","_app/immutable/chunks/index.0a9737cc.js"];
export const stylesheets = ["_app/immutable/assets/_layout.785cca11.css","_app/immutable/assets/Toaster.5032d475.css"];
export const fonts = [];
8
frontend/.netlify/server/nodes/1.js
Normal file
@@ -0,0 +1,8 @@

export const index = 1;
export const component = async () => (await import('../entries/fallbacks/error.svelte.js')).default;
export const file = '_app/immutable/entry/error.svelte.6777b5ad.js';
export const imports = ["_app/immutable/entry/error.svelte.6777b5ad.js","_app/immutable/chunks/index.b5cfe40e.js","_app/immutable/chunks/singletons.995fdd8e.js","_app/immutable/chunks/index.0a9737cc.js"];
export const stylesheets = [];
export const fonts = [];
10
frontend/.netlify/server/nodes/2.js
Normal file
@@ -0,0 +1,10 @@
import * as universal from '../entries/pages/_page.ts.js';

export const index = 2;
export const component = async () => (await import('../entries/pages/_page.svelte.js')).default;
export const file = '_app/immutable/entry/_page.svelte.0bc1703f.js';
export { universal };
export const universal_id = "src/routes/+page.ts";
export const imports = ["_app/immutable/entry/_page.svelte.0bc1703f.js","_app/immutable/chunks/index.b5cfe40e.js","_app/immutable/chunks/navigation.6147d380.js","_app/immutable/chunks/index.0a9737cc.js","_app/immutable/chunks/singletons.995fdd8e.js","_app/immutable/chunks/public.f5860d05.js","_app/immutable/chunks/Toaster.svelte_svelte_type_style_lang.9dbf3392.js","_app/immutable/entry/_page.ts.703a6e58.js","_app/immutable/chunks/public.f5860d05.js","_app/immutable/chunks/_page.55f56988.js"];
export const stylesheets = ["_app/immutable/assets/Toaster.5032d475.css"];
export const fonts = [];
8
frontend/.netlify/server/nodes/3.js
Normal file
@@ -0,0 +1,8 @@

export const index = 3;
export const component = async () => (await import('../entries/pages/chat/_page.svelte.js')).default;
export const file = '_app/immutable/entry/chat-page.svelte.d01d3c01.js';
export const imports = ["_app/immutable/entry/chat-page.svelte.d01d3c01.js","_app/immutable/chunks/index.b5cfe40e.js","_app/immutable/chunks/navigation.6147d380.js","_app/immutable/chunks/index.0a9737cc.js","_app/immutable/chunks/singletons.995fdd8e.js","_app/immutable/chunks/public.f5860d05.js","_app/immutable/chunks/Toaster.svelte_svelte_type_style_lang.9dbf3392.js"];
export const stylesheets = ["_app/immutable/assets/Toaster.5032d475.css"];
export const fonts = [];
361
frontend/.netlify/serverless.js
Normal file
@@ -0,0 +1,361 @@
import './shims.js';
import { Server } from './server/index.js';
import 'assert';
import 'net';
import 'http';
import 'stream';
import 'buffer';
import 'util';
import 'querystring';
import 'stream/web';
import 'worker_threads';
import 'perf_hooks';
import 'util/types';
import 'url';
import 'string_decoder';
import 'events';
import 'tls';
import 'async_hooks';
import 'console';
import 'zlib';
import 'crypto';

var setCookie = {exports: {}};

var defaultParseOptions = {
decodeValues: true,
map: false,
silent: false,
};

function isNonEmptyString(str) {
return typeof str === "string" && !!str.trim();
}

function parseString(setCookieValue, options) {
var parts = setCookieValue.split(";").filter(isNonEmptyString);

var nameValuePairStr = parts.shift();
var parsed = parseNameValuePair(nameValuePairStr);
var name = parsed.name;
var value = parsed.value;

options = options
? Object.assign({}, defaultParseOptions, options)
: defaultParseOptions;

try {
value = options.decodeValues ? decodeURIComponent(value) : value; // decode cookie value
} catch (e) {
console.error(
"set-cookie-parser encountered an error while decoding a cookie with value '" +
value +
"'. Set options.decodeValues to false to disable this feature.",
e
);
}

var cookie = {
name: name,
value: value,
};

parts.forEach(function (part) {
var sides = part.split("=");
var key = sides.shift().trimLeft().toLowerCase();
var value = sides.join("=");
if (key === "expires") {
cookie.expires = new Date(value);
} else if (key === "max-age") {
cookie.maxAge = parseInt(value, 10);
} else if (key === "secure") {
cookie.secure = true;
} else if (key === "httponly") {
cookie.httpOnly = true;
} else if (key === "samesite") {
cookie.sameSite = value;
} else {
cookie[key] = value;
}
});

return cookie;
}

function parseNameValuePair(nameValuePairStr) {
// Parses name-value-pair according to rfc6265bis draft

var name = "";
var value = "";
var nameValueArr = nameValuePairStr.split("=");
if (nameValueArr.length > 1) {
name = nameValueArr.shift();
value = nameValueArr.join("="); // everything after the first =, joined by a "=" if there was more than one part
} else {
value = nameValuePairStr;
}

return { name: name, value: value };
}

function parse(input, options) {
options = options
? Object.assign({}, defaultParseOptions, options)
: defaultParseOptions;

if (!input) {
if (!options.map) {
return [];
} else {
return {};
}
}

if (input.headers && input.headers["set-cookie"]) {
// fast-path for node.js (which automatically normalizes header names to lower-case
input = input.headers["set-cookie"];
} else if (input.headers) {
// slow-path for other environments - see #25
var sch =
input.headers[
Object.keys(input.headers).find(function (key) {
return key.toLowerCase() === "set-cookie";
})
];
// warn if called on a request-like object with a cookie header rather than a set-cookie header - see #34, 36
if (!sch && input.headers.cookie && !options.silent) {
console.warn(
"Warning: set-cookie-parser appears to have been called on a request object. It is designed to parse Set-Cookie headers from responses, not Cookie headers from requests. Set the option {silent: true} to suppress this warning."
);
}
input = sch;
}
if (!Array.isArray(input)) {
input = [input];
}

options = options
? Object.assign({}, defaultParseOptions, options)
: defaultParseOptions;

if (!options.map) {
return input.filter(isNonEmptyString).map(function (str) {
return parseString(str, options);
});
} else {
var cookies = {};
return input.filter(isNonEmptyString).reduce(function (cookies, str) {
var cookie = parseString(str, options);
cookies[cookie.name] = cookie;
return cookies;
}, cookies);
}
}

/*
Set-Cookie header field-values are sometimes comma joined in one string. This splits them without choking on commas
that are within a single set-cookie field-value, such as in the Expires portion.

This is uncommon, but explicitly allowed - see https://tools.ietf.org/html/rfc2616#section-4.2
Node.js does this for every header *except* set-cookie - see https://github.com/nodejs/node/blob/d5e363b77ebaf1caf67cd7528224b651c86815c1/lib/_http_incoming.js#L128
React Native's fetch does this for *every* header, including set-cookie.

Based on: https://github.com/google/j2objc/commit/16820fdbc8f76ca0c33472810ce0cb03d20efe25
Credits to: https://github.com/tomball for original and https://github.com/chrusart for JavaScript implementation
*/
function splitCookiesString(cookiesString) {
if (Array.isArray(cookiesString)) {
return cookiesString;
}
if (typeof cookiesString !== "string") {
return [];
}

var cookiesStrings = [];
var pos = 0;
var start;
var ch;
var lastComma;
var nextStart;
var cookiesSeparatorFound;

function skipWhitespace() {
while (pos < cookiesString.length && /\s/.test(cookiesString.charAt(pos))) {
pos += 1;
}
return pos < cookiesString.length;
}

function notSpecialChar() {
ch = cookiesString.charAt(pos);

return ch !== "=" && ch !== ";" && ch !== ",";
}

while (pos < cookiesString.length) {
start = pos;
cookiesSeparatorFound = false;

while (skipWhitespace()) {
ch = cookiesString.charAt(pos);
if (ch === ",") {
// ',' is a cookie separator if we have later first '=', not ';' or ','
lastComma = pos;
pos += 1;

skipWhitespace();
nextStart = pos;

while (pos < cookiesString.length && notSpecialChar()) {
pos += 1;
}

// currently special character
if (pos < cookiesString.length && cookiesString.charAt(pos) === "=") {
// we found cookies separator
cookiesSeparatorFound = true;
// pos is inside the next cookie, so back up and return it.
pos = nextStart;
cookiesStrings.push(cookiesString.substring(start, lastComma));
start = pos;
} else {
// in param ',' or param separator ';',
// we continue from that comma
pos = lastComma + 1;
}
} else {
pos += 1;
}
}

if (!cookiesSeparatorFound || pos >= cookiesString.length) {
cookiesStrings.push(cookiesString.substring(start, cookiesString.length));
}
}

return cookiesStrings;
}

setCookie.exports = parse;
setCookie.exports.parse = parse;
setCookie.exports.parseString = parseString;
var splitCookiesString_1 = setCookie.exports.splitCookiesString = splitCookiesString;

/**
 * Splits headers into two categories: single value and multi value
 * @param {Headers} headers
 * @returns {{
 *   headers: Record<string, string>,
 *   multiValueHeaders: Record<string, string[]>
 * }}
 */
function split_headers(headers) {
/** @type {Record<string, string>} */
const h = {};

/** @type {Record<string, string[]>} */
const m = {};

headers.forEach((value, key) => {
if (key === 'set-cookie') {
m[key] = splitCookiesString_1(value);
} else {
h[key] = value;
}
});

return {
headers: h,
multiValueHeaders: m
};
}

/**
 * @param {import('@sveltejs/kit').SSRManifest} manifest
 * @returns {import('@netlify/functions').Handler}
 */
function init(manifest) {
const server = new Server(manifest);

let init_promise = server.init({
env: process.env
});

return async (event, context) => {
if (init_promise !== null) {
await init_promise;
init_promise = null;
}

const response = await server.respond(to_request(event), {
platform: { context },
getClientAddress() {
return event.headers['x-nf-client-connection-ip'];
}
});

const partial_response = {
statusCode: response.status,
...split_headers(response.headers)
};

if (!is_text(response.headers.get('content-type'))) {
// Function responses should be strings (or undefined), and responses with binary
// content should be base64 encoded and set isBase64Encoded to true.
// https://github.com/netlify/functions/blob/main/src/function/response.ts
return {
...partial_response,
isBase64Encoded: true,
body: Buffer.from(await response.arrayBuffer()).toString('base64')
};
}

return {
...partial_response,
body: await response.text()
};
};
}

/**
 * @param {import('@netlify/functions').HandlerEvent} event
 * @returns {Request}
 */
function to_request(event) {
const { httpMethod, headers, rawUrl, body, isBase64Encoded } = event;

/** @type {RequestInit} */
const init = {
method: httpMethod,
headers: new Headers(headers)
};

if (httpMethod !== 'GET' && httpMethod !== 'HEAD') {
const encoding = isBase64Encoded ? 'base64' : 'utf-8';
init.body = typeof body === 'string' ? Buffer.from(body, encoding) : body;
}

return new Request(rawUrl, init);
}

const text_types = new Set([
'application/xml',
'application/json',
'application/x-www-form-urlencoded',
'multipart/form-data'
]);

/**
 * Decides how the body should be parsed based on its mime type
 *
 * @param {string | undefined | null} content_type The `content-type` header of a request/response.
 * @returns {boolean}
 */
function is_text(content_type) {
if (!content_type) return true; // defaults to json
const type = content_type.split(';')[0].toLowerCase(); // get the mime type

return type.startsWith('text/') || type.endsWith('+xml') || text_types.has(type);
}

export { init };
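The `is_text` check above decides whether the handler returns the body as a plain string or base64-encodes it. A minimal re-statement for illustration (renamed `isText`, with the mime set copied from the code):

```typescript
// Mirrors is_text() above: text-ish content types are returned as strings,
// everything else is base64-encoded by the Netlify handler.
const textTypes = new Set([
  "application/xml",
  "application/json",
  "application/x-www-form-urlencoded",
  "multipart/form-data",
]);

function isText(contentType: string | null | undefined): boolean {
  if (!contentType) return true; // no header: treat as text
  const type = contentType.split(";")[0].toLowerCase(); // strip parameters like charset
  return type.startsWith("text/") || type.endsWith("+xml") || textTypes.has(type);
}
```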
17792
frontend/.netlify/shims.js
Normal file
File diff suppressed because one or more lines are too long
@@ -1,117 +1,197 @@
<script lang="ts">
import {onMount, onDestroy} from "svelte";
import {user, channel, eth_private_key, group, createNewRoom} from "../../lib/stores/user";
import {goto} from '$app/navigation';
import {env} from '$env/dynamic/public'
import toast from "svelte-french-toast";
import { json } from "@sveltejs/kit";
import { onMount, onDestroy } from "svelte";
import {
user,
channel,
eth_private_key,
group,
createNewRoom,
} from "../../lib/stores/user";
import { goto } from "$app/navigation";
import { env } from "$env/dynamic/public";
import toast from "svelte-french-toast";
import { json } from "@sveltejs/kit";

let status = "🔴";
let statusTip = "Disconnected";
let message = "";
let messages: any[] = [];
let socket: WebSocket;
let interval: number;
let delay = 2000;
let timeout = false;
$: {
if (interval || (!timeout && interval)) {
clearInterval(interval);
}
let status = "🔴";
let statusTip = "Disconnected";
let message = "";
let messages: any[] = [];
let socket: WebSocket;
let interval: number;
let delay = 2000;
let timeout = false;

if (timeout == true) {
interval = setInterval(() => {
if (delay < 30_000) delay = delay * 2;
console.log("reconnecting in:", delay)
connect();
}, delay)
}
// Voting state
let currentVotingProposal: any = null;
let showVotingUI = false;

$: {
if (interval || (!timeout && interval)) {
clearInterval(interval);
}

function connect() {
socket = new WebSocket(`${env.PUBLIC_WEBSOCKET_URL}/ws`)
socket.addEventListener("open", () => {
status = "🟢"
statusTip = "Connected";
timeout = false;
socket.send(JSON.stringify({
eth_private_key: $eth_private_key,
group_id: $group,
should_create: $createNewRoom,
}));
})

socket.addEventListener("close", () => {
status = "🔴";
statusTip = "Disconnected";
if (timeout == false) {
delay = 2000;
timeout = true;
}
})

socket.addEventListener('message', function (event) {
if (event.data == "Username already taken.") {
toast.error(event.data)
goto("/");
} else {
messages = [...messages, event.data]
}
})
if (timeout == true) {
interval = setInterval(() => {
if (delay < 30_000) delay = delay * 2;
console.log("reconnecting in:", delay);
connect();
}, delay);
}
}
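The reconnect loop above doubles `delay` while it is under 30 s; its growth can be sketched as a pure function (the name `nextDelay` is illustrative, not from the codebase):

```typescript
// Doubling backoff as in the $: block above: delay doubles while below the
// 30s threshold, so it settles one doubling past it (32s when starting at 2s).
function nextDelay(delay: number, threshold = 30_000): number {
  return delay < threshold ? delay * 2 : delay;
}
// Starting from 2000ms the sequence is 4000, 8000, 16000, 32000, then stays at 32000.
```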

onMount(() => {
if ($eth_private_key.length < 1 || $group.length < 1 ) {
toast.error("Something went wrong!")
goto("/");
} else {
connect()
}
function connect() {
socket = new WebSocket(`${env.PUBLIC_WEBSOCKET_URL}/ws`);
socket.addEventListener("open", () => {
status = "🟢";
statusTip = "Connected";
timeout = false;
socket.send(
JSON.stringify({
eth_private_key: $eth_private_key,
group_id: $group,
should_create: $createNewRoom,
})
);
});

socket.addEventListener("close", () => {
status = "🔴";
statusTip = "Disconnected";
if (timeout == false) {
delay = 2000;
timeout = true;
}
});

socket.addEventListener("message", function (event) {
if (event.data == "Username already taken.") {
toast.error(event.data);
goto("/");
} else {
// Try to parse as JSON to check for voting proposals
try {
const data = JSON.parse(event.data);
if (data.type === "voting_proposal") {
currentVotingProposal = data.proposal;
showVotingUI = true;
toast.success("New voting proposal received!");
} else {
messages = [...messages, event.data];
}
} catch (e) {
// If not JSON, treat as regular message
messages = [...messages, event.data];
}
}
});
}

onDestroy(() => {
if (socket) {
socket.close()
}
if (interval) {
clearInterval(interval)
}
timeout = false
})
onMount(() => {
if ($eth_private_key.length < 1 || $group.length < 1) {
toast.error("Something went wrong!");
goto("/");
} else {
connect();
}
});

const sendMessage = () => {
socket.send(JSON.stringify({
message: message,
group_id: $group,
}));
message = "";
};
const clear_messages = () => {
messages = [];
};
onDestroy(() => {
if (socket) {
socket.close();
}
if (interval) {
clearInterval(interval);
}
timeout = false;
});

const sendMessage = () => {
socket.send(
JSON.stringify({
message: message,
group_id: $group,
})
);
message = "";
};

const clear_messages = () => {
messages = [];
};

const submitVote = (vote: boolean) => {
if (currentVotingProposal) {
socket.send(
JSON.stringify({
type: "user_vote",
proposal_id: currentVotingProposal.proposal_id,
vote: vote,
group_id: $group,
})
);

toast.success(`Vote submitted: ${vote ? "YES" : "NO"}`);
showVotingUI = false;
currentVotingProposal = null;
}
};
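`submitVote` above sends a `user_vote` message over the socket; the wire shape can be pinned down with a type (the interface name is illustrative, the field names come from the code):

```typescript
// Assumed shape of the vote message built in submitVote() above.
interface UserVoteMessage {
  type: "user_vote";
  proposal_id: string;
  vote: boolean;
  group_id: string;
}

// Example serialization with illustrative placeholder values.
const wire: string = JSON.stringify({
  type: "user_vote",
  proposal_id: "example-proposal",
  vote: true,
  group_id: "example-group",
} satisfies UserVoteMessage);
```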
const dismissVoting = () => {
|
||||
showVotingUI = false;
|
||||
currentVotingProposal = null;
|
||||
};
|
||||
</script>
|
||||

<div class="title flex justify-between">
  <h1 class="text-3xl font-bold cursor-default">
    Chat Room <span class="tooltip" data-tip={statusTip}>{status}</span>
  </h1>
  <button class="btn btn-accent" on:click={clear_messages}>clear</button>
</div>

<!-- Voting UI -->
{#if showVotingUI && currentVotingProposal}
  <div class="card bg-warning shadow-xl my-5">
    <div class="card-body">
      <h2 class="card-title text-warning-content">Voting Proposal</h2>
      <div class="text-warning-content">
        <p><strong>Group Name:</strong> {currentVotingProposal.group_name}</p>
        <p><strong>Proposal ID:</strong> {currentVotingProposal.proposal_id}</p>
        <p><strong>Proposal Payload:</strong></p>
        <div class="ml-4 whitespace-pre-line">
          {currentVotingProposal.payload}
        </div>
      </div>
      <div class="card-actions justify-end mt-4">
        <button class="btn btn-success" on:click={() => submitVote(true)}>Vote YES</button>
        <button class="btn btn-error" on:click={() => submitVote(false)}>Vote NO</button>
        <button class="btn btn-ghost" on:click={dismissVoting}>Dismiss</button>
      </div>
    </div>
  </div>
{/if}

<div class="card h-96 flex-grow bg-base-300 shadow-xl my-10">
  <div class="card-body">
    <div class="flex flex-col overflow-y-auto max-h-80 scroll-smooth">
      {#each messages as msg}
        <div class="my-2">{msg}</div>
      {/each}
    </div>
  </div>
</div>

<div class="message-box flex justify-end">
  <form on:submit|preventDefault={sendMessage}>
    <input
      placeholder="Message"
      class="input input-bordered input-primary w-[51rem] bg-base-200 mb-2"
      bind:value={message}
    />
    <button class="btn btn-primary w-full sm:w-auto btn-wide">Send</button>
  </form>
</div>

@@ -68,7 +68,7 @@ impl Identity {
     }
 
     pub fn identity_string(&self) -> String {
-        address_string(self.credential_with_key.credential.serialized_content())
+        Address::from_slice(self.credential_with_key.credential.serialized_content()).to_string()
     }
 
     pub fn signer(&self) -> &SignatureKeyPair {
@@ -90,18 +90,10 @@ impl Identity {
 
 impl Display for Identity {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        write!(
-            f,
-            "{}",
-            Address::from_slice(self.credential_with_key.credential.serialized_content())
-        )
+        write!(f, "{}", self.identity_string())
     }
 }
 
-pub fn address_string(identity: &[u8]) -> String {
-    Address::from_slice(identity).to_string()
-}
-
 pub fn random_identity() -> Result<Identity, IdentityError> {
     let signer = PrivateKeySigner::random();
     let user_address = signer.address();

@@ -6,9 +6,9 @@ use tokio_util::sync::CancellationToken;
 use waku_bindings::WakuMessage;
 
 use crate::{
-    message::wrap_conversation_message_into_application_msg,
+    protos::messages::v1::{AppMessage, BanRequest, ConversationMessage},
     user::{User, UserAction},
-    user_actor::{LeaveGroupRequest, RemoveUserRequest, SendGroupMessage},
+    user_actor::{BuildBanMessage, LeaveGroupRequest, SendGroupMessage, UserVoteRequest},
     ws_actor::{RawWsMessage, WsAction, WsActor},
     AppState,
 };
@@ -39,15 +39,16 @@ pub async fn handle_user_actions(
 
             app_state
                 .content_topics
-                .lock()
-                .unwrap()
+                .write()
+                .await
                 .retain(|topic| topic.application_name != group_name);
             info!("Leave group: {:?}", &group_name);
-            let app_message = wrap_conversation_message_into_application_msg(
-                format!("You're removed from the group {group_name}").into_bytes(),
-                "system".to_string(),
-                group_name.clone(),
-            );
+            let app_message: AppMessage = ConversationMessage {
+                message: format!("You're removed from the group {group_name}").into_bytes(),
+                sender: "SYSTEM".to_string(),
+                group_name: group_name.clone(),
+            }
+            .into();
             ws_actor.ask(app_message).await?;
             cancel_token.cancel();
         }
@@ -68,11 +69,12 @@ pub async fn handle_ws_action(
             info!("Got unexpected connect: {:?}", &connect);
         }
         WsAction::UserMessage(msg) => {
-            let app_message = wrap_conversation_message_into_application_msg(
-                msg.message.clone().into_bytes(),
-                "me".to_string(),
-                msg.group_id.clone(),
-            );
+            let app_message: AppMessage = ConversationMessage {
+                message: msg.message.clone(),
+                sender: "me".to_string(),
+                group_name: msg.group_id.clone(),
+            }
+            .into();
             ws_actor.ask(app_message).await?;
 
             let pmt = user_actor
@@ -85,21 +87,69 @@ pub async fn handle_ws_action(
         }
         WsAction::RemoveUser(user_to_ban, group_name) => {
             info!("Got remove user: {:?}", &user_to_ban);
-            user_actor
-                .ask(RemoveUserRequest {
-                    user_to_ban: user_to_ban.clone(),
+
+            // Create a ban request message to send to the group
+            let ban_request_msg = BanRequest {
+                user_to_ban: user_to_ban.clone(),
+                requester: "someone".to_string(), // The current user is the requester
+                group_name: group_name.clone(),
+            };
+
+            // Send the ban request directly via Waku if the user is not the steward.
+            // If steward, add the remove proposal to the group and send a notification to the group.
+            let waku_msg = user_actor
+                .ask(BuildBanMessage {
+                    ban_request: ban_request_msg,
+                    group_name: group_name.clone(),
+                })
+                .await?;
+            waku_node.send(waku_msg).await?;
+
-            let app_message = wrap_conversation_message_into_application_msg(
-                format!("Remove proposal for user {user_to_ban} added to steward queue")
-                    .into_bytes(),
-                "system".to_string(),
-                group_name.clone(),
-            );
+            // Send a local confirmation message
+            let app_message: AppMessage = ConversationMessage {
+                message: format!("Ban request for user {user_to_ban} sent to group").into_bytes(),
+                sender: "system".to_string(),
+                group_name: group_name.clone(),
+            }
+            .into();
             ws_actor.ask(app_message).await?;
         }
+        WsAction::UserVote {
+            proposal_id,
+            vote,
+            group_id,
+        } => {
+            info!("Got user vote: proposal_id={proposal_id}, vote={vote}, group={group_id}");
+
+            // Process the user vote:
+            // if it comes from a user, send the vote result to Waku;
+            // if it comes from the steward, just process it and return None
+            let user_vote_result = user_actor
+                .ask(UserVoteRequest {
+                    group_name: group_id.clone(),
+                    proposal_id,
+                    vote,
+                })
+                .await?;
+
+            // Send a local confirmation message
+            let app_message: AppMessage = ConversationMessage {
+                message: format!(
+                    "Your vote ({}) has been submitted for proposal {proposal_id}",
+                    if vote { "YES" } else { "NO" },
+                )
+                .into_bytes(),
+                sender: "SYSTEM".to_string(),
+                group_name: group_id.clone(),
+            }
+            .into();
+            ws_actor.ask(app_message).await?;
+
+            // Send the vote result to Waku
+            if let Some(waku_msg) = user_vote_result {
+                waku_node.send(waku_msg).await?;
+            }
+        }
         WsAction::DoNothing => {}
     }
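The `WsAction::UserVote` arm above relies on a single routing rule: a vote handled on behalf of a regular member yields a Waku message to publish, while the steward's own vote is absorbed locally and `UserVoteRequest` returns `None`. Below is a minimal stand-alone sketch of that rule; the names `VoteOutcome` and `route_vote` are illustrative only, not the crate's actual API.

```rust
// Hypothetical sketch of the vote-routing rule: members publish their
// serialized vote to the network, the steward consumes votes locally.
#[derive(Debug, PartialEq)]
enum VoteOutcome {
    /// Forward the serialized vote to the Waku network.
    Publish(Vec<u8>),
    /// The steward processed the vote locally; nothing to publish.
    Absorbed,
}

fn route_vote(is_steward: bool, vote_bytes: Vec<u8>) -> VoteOutcome {
    if is_steward {
        VoteOutcome::Absorbed
    } else {
        VoteOutcome::Publish(vote_bytes)
    }
}

fn main() {
    // A member's vote is published; the steward's own vote is absorbed.
    assert_eq!(route_vote(false, vec![1, 2]), VoteOutcome::Publish(vec![1, 2]));
    assert_eq!(route_vote(true, vec![1, 2]), VoteOutcome::Absorbed);
    println!("ok");
}
```

This mirrors the `if let Some(waku_msg) = user_vote_result` branch in the handler: only the `Publish` case results in a `waku_node.send(...)` call.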
src/consensus/mod.rs (new file, 305 lines)
@@ -0,0 +1,305 @@
//! Consensus module implementing HashGraph-like consensus for distributed voting
//!
//! This module implements the consensus protocol described in the [RFC](https://github.com/vacp2p/rfc-index/blob/consensus-hashgraph-like/vac/raw/consensus-hashgraphlike.md)
//!
//! The consensus is designed to work with GossipSub-like networks and provides:
//! - Proposal management
//! - Vote collection and validation
//! - Consensus-reached detection

use crate::error::ConsensusError;
use crate::protos::messages::v1::consensus::v1::{Proposal, Vote};
use crate::LocalSigner;
use log::info;
use prost::Message;
use sha2::{Digest, Sha256};
use std::collections::HashMap;
use std::time::{SystemTime, UNIX_EPOCH};
use tokio::sync::broadcast;
use uuid::Uuid;

pub mod service;

// Re-export protobuf types for compatibility with generated code
pub mod v1 {
    pub use crate::protos::messages::v1::consensus::v1::{Proposal, Vote};
}

pub use service::ConsensusService;

/// Consensus events emitted when consensus state changes
#[derive(Debug, Clone)]
pub enum ConsensusEvent {
    /// Consensus has been reached for a proposal
    ConsensusReached { proposal_id: u32, result: bool },
    /// Consensus failed due to timeout or other reasons
    ConsensusFailed { proposal_id: u32, reason: String },
}

/// Consensus configuration
#[derive(Debug, Clone)]
pub struct ConsensusConfig {
    /// Minimum number of votes required for consensus (as a fraction of expected voters)
    pub consensus_threshold: f64,
    /// Timeout for consensus rounds in seconds
    pub consensus_timeout: u64,
    /// Maximum number of rounds before consensus is considered failed
    pub max_rounds: u32,
    /// Whether to use liveness criteria for silent peers
    pub liveness_criteria: bool,
}

impl Default for ConsensusConfig {
    fn default() -> Self {
        Self {
            consensus_threshold: 0.67, // 67% supermajority
            consensus_timeout: 10,     // 10 seconds
            max_rounds: 3,             // Maximum 3 rounds
            liveness_criteria: true,
        }
    }
}

/// Consensus state for a proposal
#[derive(Debug, Clone)]
pub enum ConsensusState {
    /// Proposal is active and accepting votes
    Active,
    /// Consensus has been reached
    ConsensusReached(bool), // true for yes, false for no
    /// Consensus failed (timeout or insufficient votes)
    Failed,
    /// Proposal has expired
    Expired,
}

/// Consensus session for a specific proposal
#[derive(Debug)]
pub struct ConsensusSession {
    pub proposal: Proposal,
    pub state: ConsensusState,
    pub votes: HashMap<Vec<u8>, Vote>, // vote_owner -> Vote
    pub created_at: u64,
    pub config: ConsensusConfig,
    pub event_sender: Option<broadcast::Sender<(String, ConsensusEvent)>>,
    pub group_name: String,
}

impl ConsensusSession {
    pub fn new(
        proposal: Proposal,
        config: ConsensusConfig,
        event_sender: Option<broadcast::Sender<(String, ConsensusEvent)>>,
        group_name: &str,
    ) -> Self {
        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("Failed to get current time")
            .as_secs();

        Self {
            proposal,
            state: ConsensusState::Active,
            votes: HashMap::new(),
            created_at: now,
            config,
            event_sender,
            group_name: group_name.to_string(),
        }
    }

    pub fn set_consensus_threshold(&mut self, consensus_threshold: f64) {
        self.config.consensus_threshold = consensus_threshold
    }

    /// Add a vote to the session
    pub fn add_vote(&mut self, vote: Vote) -> Result<(), ConsensusError> {
        match self.state {
            ConsensusState::Active => {
                // Check if the voter already voted
                if self.votes.contains_key(&vote.vote_owner) {
                    return Err(ConsensusError::DuplicateVote);
                }

                // Add the vote to both the session and the proposal
                self.votes.insert(vote.vote_owner.clone(), vote.clone());
                self.proposal.votes.push(vote.clone());

                // Check whether consensus can be reached after adding the vote
                self.check_consensus();
                Ok(())
            }
            ConsensusState::ConsensusReached(_) => {
                info!(
                    "[consensus::mod::add_vote]: Consensus already reached for proposal {}, skipping vote",
                    self.proposal.proposal_id
                );
                Ok(())
            }
            _ => Err(ConsensusError::SessionNotActive),
        }
    }

    /// Count the number of votes required to reach consensus
    fn count_required_votes(&self) -> usize {
        let expected_voters = self.proposal.expected_voters_count as usize;
        if expected_voters <= 2 {
            expected_voters
        } else {
            ((expected_voters as f64) * self.config.consensus_threshold) as usize
        }
    }

    /// Check if consensus has been reached
    ///
    /// - `ConsensusReached(true)` if yes votes > no votes
    /// - `ConsensusReached(false)`
    ///   - if no votes > yes votes
    ///   - if no votes == yes votes and we have all votes
    /// - `Active`
    ///   - if no votes == yes votes and we don't have all votes
    ///   - if total votes < required votes (we wait for more votes)
    fn check_consensus(&mut self) {
        let total_votes = self.votes.len();
        let yes_votes = self.votes.values().filter(|v| v.vote).count();
        let no_votes = total_votes - yes_votes;

        let expected_voters = self.proposal.expected_voters_count as usize;
        let required_votes = self.count_required_votes();
        // For <= 2 expected voters, all votes are required to reach consensus
        if total_votes >= required_votes {
            // Enough votes received - evaluate consensus
            if yes_votes > no_votes {
                self.state = ConsensusState::ConsensusReached(true);
                info!(
                    "[consensus::mod::check_consensus]: Enough votes received {yes_votes}-{no_votes} - consensus reached: YES"
                );
                self.emit_consensus_event(ConsensusEvent::ConsensusReached {
                    proposal_id: self.proposal.proposal_id,
                    result: true,
                });
            } else if no_votes > yes_votes {
                self.state = ConsensusState::ConsensusReached(false);
                info!(
                    "[consensus::mod::check_consensus]: Enough votes received {yes_votes}-{no_votes} - consensus reached: NO"
                );
                self.emit_consensus_event(ConsensusEvent::ConsensusReached {
                    proposal_id: self.proposal.proposal_id,
                    result: false,
                });
            } else {
                // Tie - if all votes are in, we reject the proposal
                if total_votes == expected_voters {
                    self.state = ConsensusState::ConsensusReached(false);
                    info!(
                        "[consensus::mod::check_consensus]: All votes received and tie - consensus not reached"
                    );
                    self.emit_consensus_event(ConsensusEvent::ConsensusReached {
                        proposal_id: self.proposal.proposal_id,
                        result: false,
                    });
                } else {
                    // Tie - not all votes are in yet, so we wait for more votes
                    self.state = ConsensusState::Active;
                    info!(
                        "[consensus::mod::check_consensus]: Not enough votes received - consensus not reached"
                    );
                }
            }
        }
    }

    /// Emit a consensus event
    fn emit_consensus_event(&self, event: ConsensusEvent) {
        if let Some(sender) = &self.event_sender {
            info!(
                "[consensus::mod::emit_consensus_event]: Emitting consensus event: {event:?} for proposal {}",
                self.proposal.proposal_id
            );
            let _ = sender.send((self.group_name.clone(), event));
        }
    }

    /// Check if the session is still active
    pub fn is_active(&self) -> bool {
        matches!(self.state, ConsensusState::Active)
    }
}

/// Compute the hash of a vote
pub fn compute_vote_hash(vote: &Vote) -> Vec<u8> {
    let mut hasher = Sha256::new();
    hasher.update(vote.vote_id.to_le_bytes());
    hasher.update(&vote.vote_owner);
    hasher.update(vote.proposal_id.to_le_bytes());
    hasher.update(vote.timestamp.to_le_bytes());
    hasher.update([vote.vote as u8]);
    hasher.update(&vote.parent_hash);
    hasher.update(&vote.received_hash);
    hasher.finalize().to_vec()
}

/// Create a vote for an incoming proposal based on the user's vote
async fn create_vote_for_proposal<S: LocalSigner>(
    proposal: &Proposal,
    user_vote: bool,
    signer: S,
) -> Result<Vote, ConsensusError> {
    let now = std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)?
        .as_secs();

    // Get the latest vote as the parent and received hash
    let (parent_hash, received_hash) = if let Some(latest_vote) = proposal.votes.last() {
        // Check if we already voted (same voter)
        let is_same_voter = latest_vote.vote_owner == signer.address_bytes();
        if is_same_voter {
            // Same voter: parent_hash is the hash of our previous vote
            (latest_vote.vote_hash.clone(), Vec::new())
        } else {
            // Different voter: parent_hash is empty, received_hash is the hash of the latest vote
            (Vec::new(), latest_vote.vote_hash.clone())
        }
    } else {
        (Vec::new(), Vec::new())
    };

    // Create our vote with the user's choice
    let mut vote = Vote {
        vote_id: Uuid::new_v4().as_u128() as u32,
        vote_owner: signer.address_bytes(),
        proposal_id: proposal.proposal_id,
        timestamp: now,
        vote: user_vote, // Use the user's actual vote choice
        parent_hash,
        received_hash,
        vote_hash: Vec::new(), // Will be computed below
        signature: Vec::new(), // Will be signed below
    };

    // Compute the vote hash and signature
    vote.vote_hash = compute_vote_hash(&vote);
    let vote_bytes = vote.encode_to_vec();
    vote.signature = signer
        .local_sign_message(&vote_bytes)
        .await
        .map_err(|e| ConsensusError::InvalidSignature(e.to_string()))?;

    Ok(vote)
}

/// Statistics about consensus sessions
#[derive(Debug, Clone)]
pub struct ConsensusStats {
    pub total_sessions: usize,
    pub active_sessions: usize,
    pub consensus_reached: usize,
    pub failed_sessions: usize,
}

impl Default for ConsensusService {
    fn default() -> Self {
        Self::new()
    }
}
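The required-vote rule in `count_required_votes` is worth exercising in isolation: groups of two or fewer expected voters need every vote, while larger groups need `expected * threshold` votes, truncated toward zero by the `as usize` cast. A stand-alone sketch of that arithmetic (a free function mirroring the method body, not the crate's API):

```rust
// Mirrors the truncating-cast threshold rule from ConsensusSession:
// <= 2 voters require unanimity; otherwise floor(expected * threshold).
fn count_required_votes(expected_voters: usize, consensus_threshold: f64) -> usize {
    if expected_voters <= 2 {
        expected_voters
    } else {
        ((expected_voters as f64) * consensus_threshold) as usize
    }
}

fn main() {
    assert_eq!(count_required_votes(2, 0.67), 2); // small group: unanimity
    assert_eq!(count_required_votes(3, 0.67), 2); // 3 * 0.67 = 2.01 -> 2
    assert_eq!(count_required_votes(10, 0.67), 6); // 10 * 0.67 = 6.7 -> 6
    println!("ok");
}
```

Note that the truncation makes the default 0.67 threshold slightly more permissive than a strict ceiling-based supermajority (6 of 10 rather than 7 of 10).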
src/consensus/service.rs (new file, 690 lines)
@@ -0,0 +1,690 @@
//! Consensus service for managing consensus sessions and HashGraph integration
|
||||
|
||||
use crate::consensus::{
|
||||
compute_vote_hash, create_vote_for_proposal, ConsensusConfig, ConsensusEvent, ConsensusSession,
|
||||
ConsensusState, ConsensusStats,
|
||||
};
|
||||
use crate::error::ConsensusError;
|
||||
use crate::protos::messages::v1::consensus::v1::{Proposal, Vote};
|
||||
use crate::{verify_vote_hash, LocalSigner};
|
||||
use log::info;
|
||||
use prost::Message;
|
||||
use std::collections::HashMap;
|
||||
use std::sync::Arc;
|
||||
use std::time::{SystemTime, UNIX_EPOCH};
|
||||
use tokio::sync::{broadcast, RwLock};
|
||||
use uuid::Uuid;
|
||||
|
||||
/// Consensus service that manages multiple consensus sessions for multiple groups
|
||||
#[derive(Clone)]
|
||||
pub struct ConsensusService {
|
||||
/// Active consensus sessions organized by group: group_name -> proposal_id -> session
|
||||
sessions: Arc<RwLock<HashMap<String, HashMap<u32, ConsensusSession>>>>,
|
||||
/// Maximum number of voting sessions to keep per group
|
||||
max_sessions_per_group: usize,
|
||||
/// Event sender for consensus events
|
||||
event_sender: broadcast::Sender<(String, ConsensusEvent)>,
|
||||
}
|
||||
|
||||
impl ConsensusService {
|
||||
/// Create a new consensus service
|
||||
pub fn new() -> Self {
|
||||
let (event_sender, _) = broadcast::channel(1000);
|
||||
Self {
|
||||
sessions: Arc::new(RwLock::new(HashMap::new())),
|
||||
max_sessions_per_group: 10,
|
||||
event_sender,
|
||||
}
|
||||
}
|
||||
|
||||
/// Create a new consensus service with custom max sessions per group
|
||||
pub fn new_with_max_sessions(max_sessions_per_group: usize) -> Self {
|
||||
let (event_sender, _) = broadcast::channel(1000);
|
||||
Self {
|
||||
sessions: Arc::new(RwLock::new(HashMap::new())),
|
||||
max_sessions_per_group,
|
||||
event_sender,
|
||||
}
|
||||
}
|
||||
|
||||
/// Subscribe to consensus events
|
||||
pub fn subscribe_to_events(&self) -> broadcast::Receiver<(String, ConsensusEvent)> {
|
||||
self.event_sender.subscribe()
|
||||
}
|
||||
|
||||
pub async fn set_consensus_threshold_for_group_session(
|
||||
&mut self,
|
||||
group_name: &str,
|
||||
proposal_id: u32,
|
||||
consensus_threshold: f64,
|
||||
) -> Result<(), ConsensusError> {
|
||||
let mut sessions = self.sessions.write().await;
|
||||
let group_sessions = sessions
|
||||
.entry(group_name.to_string())
|
||||
.or_insert_with(HashMap::new);
|
||||
|
||||
let session = group_sessions
|
||||
.get_mut(&proposal_id)
|
||||
.ok_or(ConsensusError::SessionNotFound)?;
|
||||
|
||||
session.set_consensus_threshold(consensus_threshold);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[allow(clippy::too_many_arguments)]
|
||||
pub async fn create_proposal(
|
||||
&self,
|
||||
group_name: &str,
|
||||
name: String,
|
||||
payload: String,
|
||||
proposal_owner: Vec<u8>,
|
||||
expected_voters_count: u32,
|
||||
expiration_time: u64,
|
||||
liveness_criteria_yes: bool,
|
||||
) -> Result<Proposal, ConsensusError> {
|
||||
let proposal_id = Uuid::new_v4().as_u128() as u32;
|
||||
let now = std::time::SystemTime::now()
|
||||
.duration_since(std::time::UNIX_EPOCH)?
|
||||
.as_secs();
|
||||
let config = ConsensusConfig::default();
|
||||
|
||||
// Create proposal with steward's vote
|
||||
let proposal = Proposal {
|
||||
name,
|
||||
payload,
|
||||
proposal_id,
|
||||
proposal_owner,
|
||||
votes: vec![],
|
||||
expected_voters_count,
|
||||
round: 1,
|
||||
timestamp: now,
|
||||
expiration_time: now + expiration_time,
|
||||
liveness_criteria_yes,
|
||||
};
|
||||
|
||||
// Create consensus session
|
||||
|
||||
let session = ConsensusSession::new(
|
||||
proposal.clone(),
|
||||
config.clone(),
|
||||
Some(self.event_sender.clone()),
|
||||
group_name,
|
||||
);
|
||||
|
||||
// Get timeout from session config before adding to sessions
|
||||
let timeout_seconds = config.consensus_timeout;
|
||||
|
||||
// Add session to group and handle cleanup in a single lock operation
|
||||
{
|
||||
let mut sessions = self.sessions.write().await;
|
||||
let group_sessions = sessions
|
||||
.entry(group_name.to_string())
|
||||
.or_insert_with(HashMap::new);
|
||||
group_sessions.insert(proposal_id, session);
|
||||
|
||||
// Clean up old sessions if we exceed the limit (within the same lock)
|
||||
if group_sessions.len() > self.max_sessions_per_group {
|
||||
// Sort sessions by creation time and keep the most recent ones
|
||||
let mut session_entries: Vec<_> = group_sessions.drain().collect();
|
||||
session_entries.sort_by(|a, b| b.1.created_at.cmp(&a.1.created_at));
|
||||
|
||||
// Keep only the most recent sessions
|
||||
for (proposal_id, session) in session_entries
|
||||
.into_iter()
|
||||
.take(self.max_sessions_per_group)
|
||||
{
|
||||
group_sessions.insert(proposal_id, session);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Start automatic timeout handling for this proposal using session config
|
||||
let self_clone = self.clone();
|
||||
let group_name_owned = group_name.to_string();
|
||||
tokio::spawn(async move {
|
||||
let timeout_duration = std::time::Duration::from_secs(timeout_seconds);
|
||||
tokio::time::sleep(timeout_duration).await;
|
||||
|
||||
if self_clone
|
||||
.get_consensus_result(&group_name_owned, proposal_id)
|
||||
.await
|
||||
.is_some()
|
||||
{
|
||||
info!(
|
||||
"Consensus result already exists for proposal {proposal_id}, skipping timeout"
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
// Apply timeout consensus if still active
|
||||
if self_clone
|
||||
.handle_consensus_timeout(&group_name_owned, proposal_id)
|
||||
.await
|
||||
.is_ok()
|
||||
{
|
||||
info!(
|
||||
"Automatic timeout applied for proposal {proposal_id} after {timeout_seconds}s"
|
||||
);
|
||||
}
|
||||
});
|
||||
|
||||
Ok(proposal)
|
||||
}
|
||||
|
||||
/// Create a new proposal with steward's vote attached
|
||||
pub async fn vote_on_proposal<S: LocalSigner>(
|
||||
&self,
|
||||
group_name: &str,
|
||||
proposal_id: u32,
|
||||
steward_vote: bool,
|
||||
signer: S,
|
||||
) -> Result<Proposal, ConsensusError> {
|
||||
let vote_id = Uuid::new_v4().as_u128() as u32;
|
||||
let now = std::time::SystemTime::now()
|
||||
.duration_since(std::time::UNIX_EPOCH)?
|
||||
.as_secs();
|
||||
|
||||
// Create steward's vote first
|
||||
let steward_vote_obj = Vote {
|
||||
vote_id,
|
||||
vote_owner: signer.address_bytes(),
|
||||
proposal_id,
|
||||
timestamp: now,
|
||||
vote: steward_vote,
|
||||
parent_hash: Vec::new(), // First vote, no parent
|
||||
received_hash: Vec::new(), // First vote, no received
|
||||
vote_hash: Vec::new(), // Will be computed below
|
||||
signature: Vec::new(), // Will be signed below
|
||||
};
|
||||
|
||||
// Compute vote hash and signature for steward's vote
|
||||
let mut steward_vote_obj = steward_vote_obj;
|
||||
steward_vote_obj.vote_hash = compute_vote_hash(&steward_vote_obj);
|
||||
let vote_bytes = steward_vote_obj.encode_to_vec();
|
||||
steward_vote_obj.signature = signer
|
||||
.local_sign_message(&vote_bytes)
|
||||
.await
|
||||
.map_err(|e| ConsensusError::InvalidSignature(e.to_string()))?;
|
||||
|
||||
let mut sessions = self.sessions.write().await;
|
||||
let group_sessions = sessions
|
||||
.entry(group_name.to_string())
|
||||
.or_insert_with(HashMap::new);
|
||||
let session = group_sessions
|
||||
.get_mut(&proposal_id)
|
||||
.ok_or(ConsensusError::SessionNotFound)?;
|
||||
|
||||
session.add_vote(steward_vote_obj.clone())?;
|
||||
|
||||
Ok(session.proposal.clone())
|
||||
}
|
||||
|
||||
/// 1. Check the signatures of the each votes in proposal, in particular for proposal P_1,
|
||||
/// verify the signature of V_1 where V_1 = P_1.votes\[0\] with V_1.signature and V_1.vote_owner
|
||||
/// 2. Do parent_hash check: If there are repeated votes from the same sender,
|
||||
/// check that the hash of the former vote is equal to the parent_hash of the later vote.
|
||||
/// 3. Do received_hash check: If there are multiple votes in a proposal,
|
||||
/// check that the hash of a vote is equal to the received_hash of the next one.
|
||||
pub fn validate_proposal(&self, proposal: &Proposal) -> Result<(), ConsensusError> {
|
||||
// Validate each vote individually first
|
||||
for vote in proposal.votes.iter() {
|
||||
self.validate_vote(vote, proposal.expiration_time)?;
|
||||
}
|
||||
|
||||
// Validate vote chain integrity according to RFC
|
||||
self.validate_vote_chain(&proposal.votes)?;
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn validate_vote(&self, vote: &Vote, expiration_time: u64) -> Result<(), ConsensusError> {
|
||||
if vote.vote_owner.is_empty() {
|
||||
return Err(ConsensusError::EmptyVoteOwner);
|
||||
}
|
||||
|
||||
if vote.vote_hash.is_empty() {
|
||||
return Err(ConsensusError::EmptyVoteHash);
|
||||
}
|
||||
|
||||
if vote.signature.is_empty() {
|
||||
return Err(ConsensusError::EmptySignature);
|
||||
}
|
||||
|
||||
let expected_hash = compute_vote_hash(vote);
|
||||
if vote.vote_hash != expected_hash {
|
||||
return Err(ConsensusError::InvalidVoteHash);
|
||||
}
|
||||
|
||||
// Encode vote without signature to verify signature
|
||||
let mut vote_copy = vote.clone();
|
||||
vote_copy.signature = Vec::new();
|
||||
let vote_copy_bytes = vote_copy.encode_to_vec();
|
||||
|
||||
// Validate signature
|
||||
let verified = verify_vote_hash(&vote.signature, &vote.vote_owner, &vote_copy_bytes)?;
|
||||
|
||||
if !verified {
|
||||
return Err(ConsensusError::InvalidVoteSignature);
|
||||
}
|
||||
|
||||
let now = SystemTime::now().duration_since(UNIX_EPOCH)?.as_secs();
|
||||
|
||||
// Check that vote timestamp is not in the future
|
||||
if vote.timestamp > now {
|
||||
return Err(ConsensusError::InvalidVoteTimestamp);
|
||||
}
|
||||
|
||||
// Check that vote timestamp is within expiration threshold
|
||||
if now - vote.timestamp > expiration_time {
|
||||
return Err(ConsensusError::VoteExpired);
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Validate vote chain integrity according to RFC specification
|
||||
fn validate_vote_chain(&self, votes: &[Vote]) -> Result<(), ConsensusError> {
|
||||
if votes.len() <= 1 {
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
for i in 0..votes.len() - 1 {
|
||||
let current_vote = &votes[i];
|
||||
let next_vote = &votes[i + 1];
|
||||
|
||||
// RFC requirement: received_hash of next vote should equal hash of current vote
|
||||
if current_vote.vote_hash != next_vote.received_hash {
|
||||
return Err(ConsensusError::ReceivedHashMismatch);
|
||||
}
|
||||
|
||||
// RFC requirement: if same voter, parent_hash should equal hash of previous vote
|
||||
if current_vote.vote_owner == next_vote.vote_owner
|
||||
&& current_vote.vote_hash != next_vote.parent_hash
|
||||
{
|
||||
return Err(ConsensusError::ParentHashMismatch);
|
||||
}
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
    /// Process incoming proposal message
    pub async fn process_incoming_proposal(
        &self,
        group_name: &str,
        proposal: Proposal,
    ) -> Result<(), ConsensusError> {
        info!(
            "[consensus::service::process_incoming_proposal]: Processing incoming proposal for group {group_name}"
        );
        let mut sessions = self.sessions.write().await;
        let group_sessions = sessions
            .entry(group_name.to_string())
            .or_insert_with(HashMap::new);

        // Check if proposal already exists
        if group_sessions.contains_key(&proposal.proposal_id) {
            return Err(ConsensusError::ProposalAlreadyExist);
        }

        // Validate proposal including vote chain integrity
        self.validate_proposal(&proposal)?;

        // Create new session without our vote - user will vote later
        let mut session = ConsensusSession::new(
            proposal.clone(),
            ConsensusConfig::default(),
            Some(self.event_sender.clone()),
            group_name,
        );

        session.add_vote(proposal.votes[0].clone())?;
        group_sessions.insert(proposal.proposal_id, session);

        // Clean up old sessions if we exceed the limit (within the same lock)
        if group_sessions.len() > self.max_sessions_per_group {
            // Sort sessions by creation time and keep the most recent ones
            let mut session_entries: Vec<_> = group_sessions.drain().collect();
            session_entries.sort_by(|a, b| b.1.created_at.cmp(&a.1.created_at));

            // Keep only the most recent sessions
            for (proposal_id, session) in session_entries
                .into_iter()
                .take(self.max_sessions_per_group)
            {
                group_sessions.insert(proposal_id, session);
            }
        }

        info!("[consensus::service::process_incoming_proposal]: Proposal stored, waiting for user vote");
        Ok(())
    }

    /// Process user vote for a proposal
    pub async fn process_user_vote<S: LocalSigner>(
        &self,
        group_name: &str,
        proposal_id: u32,
        user_vote: bool,
        signer: S,
    ) -> Result<Vote, ConsensusError> {
        let mut sessions = self.sessions.write().await;
        let group_sessions = sessions
            .get_mut(group_name)
            .ok_or(ConsensusError::GroupNotFound)?;

        let session = group_sessions
            .get_mut(&proposal_id)
            .ok_or(ConsensusError::SessionNotFound)?;

        // Check if user already voted
        let user_address = signer.address_bytes();
        if session.votes.values().any(|v| v.vote_owner == user_address) {
            return Err(ConsensusError::UserAlreadyVoted);
        }

        // Create our vote based on the user's choice
        let our_vote = create_vote_for_proposal(&session.proposal, user_vote, signer).await?;

        session.add_vote(our_vote.clone())?;

        Ok(our_vote)
    }

    /// Process incoming vote
    pub async fn process_incoming_vote(
        &self,
        group_name: &str,
        vote: Vote,
    ) -> Result<(), ConsensusError> {
        info!(
            "[consensus::service::process_incoming_vote]: Processing incoming vote for group {group_name}"
        );
        let mut sessions = self.sessions.write().await;
        let group_sessions = sessions
            .get_mut(group_name)
            .ok_or(ConsensusError::GroupNotFound)?;

        let session = group_sessions
            .get_mut(&vote.proposal_id)
            .ok_or(ConsensusError::SessionNotFound)?;

        self.validate_vote(&vote, session.proposal.expiration_time)?;

        // Add vote to session
        session.add_vote(vote.clone())?;

        Ok(())
    }

    /// Get liveness criteria for a proposal
    pub async fn get_proposal_liveness_criteria(
        &self,
        group_name: &str,
        proposal_id: u32,
    ) -> Option<bool> {
        let sessions = self.sessions.read().await;
        if let Some(group_sessions) = sessions.get(group_name) {
            if let Some(session) = group_sessions.get(&proposal_id) {
                return Some(session.proposal.liveness_criteria_yes);
            }
        }
        None
    }

    /// Get consensus result for a proposal
    pub async fn get_consensus_result(&self, group_name: &str, proposal_id: u32) -> Option<bool> {
        let sessions = self.sessions.read().await;
        if let Some(group_sessions) = sessions.get(group_name) {
            if let Some(session) = group_sessions.get(&proposal_id) {
                match session.state {
                    ConsensusState::ConsensusReached(result) => Some(result),
                    _ => None,
                }
            } else {
                None
            }
        } else {
            None
        }
    }

    /// Get active proposals for a specific group
    pub async fn get_active_proposals(&self, group_name: &str) -> Vec<Proposal> {
        let sessions = self.sessions.read().await;
        if let Some(group_sessions) = sessions.get(group_name) {
            group_sessions
                .values()
                .filter(|session| session.is_active())
                .map(|session| session.proposal.clone())
                .collect()
        } else {
            Vec::new()
        }
    }

    /// Clean up expired sessions for all groups
    pub async fn cleanup_expired_sessions(&self) {
        let mut sessions = self.sessions.write().await;
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .expect("Failed to get current time")
            .as_secs();

        let group_names: Vec<String> = sessions.keys().cloned().collect();

        for group_name in group_names {
            if let Some(group_sessions) = sessions.get_mut(&group_name) {
                group_sessions.retain(|_, session| {
                    now <= session.proposal.expiration_time && session.is_active()
                });

                // Clean up old sessions if we exceed the limit
                if group_sessions.len() > self.max_sessions_per_group {
                    // Sort sessions by creation time and keep the most recent ones
                    let mut session_entries: Vec<_> = group_sessions.drain().collect();
                    session_entries.sort_by(|a, b| b.1.created_at.cmp(&a.1.created_at));

                    // Keep only the most recent sessions
                    for (proposal_id, session) in session_entries
                        .into_iter()
                        .take(self.max_sessions_per_group)
                    {
                        group_sessions.insert(proposal_id, session);
                    }
                }
            }
        }
    }

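The keep-most-recent pruning used in `cleanup_expired_sessions` (and in `process_incoming_proposal`) can be sketched in isolation. The session type and function names below are stand-ins, assuming only the `created_at` field and a per-group cap as in the code above:

```rust
use std::collections::HashMap;

// Minimal stand-in for a session: only the creation timestamp matters here.
#[derive(Clone, Debug, PartialEq)]
struct SketchSession {
    created_at: u64,
}

// Mirrors the pruning above: drain the map, sort newest-first by created_at,
// and reinsert only the `max` most recent sessions.
fn prune_to_most_recent(sessions: &mut HashMap<u32, SketchSession>, max: usize) {
    if sessions.len() > max {
        let mut entries: Vec<_> = sessions.drain().collect();
        entries.sort_by(|a, b| b.1.created_at.cmp(&a.1.created_at));
        for (id, session) in entries.into_iter().take(max) {
            sessions.insert(id, session);
        }
    }
}

fn main() {
    let mut sessions: HashMap<u32, SketchSession> = (0u32..5)
        .map(|i| (i, SketchSession { created_at: 100 + i as u64 }))
        .collect();
    prune_to_most_recent(&mut sessions, 3);
    // The three newest sessions (created_at 102, 103, 104) survive.
    assert_eq!(sessions.len(), 3);
    assert!(sessions.contains_key(&4) && sessions.contains_key(&3) && sessions.contains_key(&2));
}
```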
    /// Get session statistics for a specific group
    pub async fn get_group_stats(&self, group_name: &str) -> ConsensusStats {
        let sessions = self.sessions.read().await;
        if let Some(group_sessions) = sessions.get(group_name) {
            let total_sessions = group_sessions.len();
            let active_sessions = group_sessions.values().filter(|s| s.is_active()).count();
            let consensus_reached = group_sessions
                .values()
                .filter(|s| matches!(s.state, ConsensusState::ConsensusReached(_)))
                .count();

            ConsensusStats {
                total_sessions,
                active_sessions,
                consensus_reached,
                failed_sessions: total_sessions - active_sessions - consensus_reached,
            }
        } else {
            ConsensusStats {
                total_sessions: 0,
                active_sessions: 0,
                consensus_reached: 0,
                failed_sessions: 0,
            }
        }
    }

    /// Get overall session statistics across all groups
    pub async fn get_overall_stats(&self) -> ConsensusStats {
        let sessions = self.sessions.read().await;
        let mut total_sessions = 0;
        let mut active_sessions = 0;
        let mut consensus_reached = 0;

        for group_sessions in sessions.values() {
            total_sessions += group_sessions.len();
            active_sessions += group_sessions.values().filter(|s| s.is_active()).count();
            consensus_reached += group_sessions
                .values()
                .filter(|s| matches!(s.state, ConsensusState::ConsensusReached(_)))
                .count();
        }

        ConsensusStats {
            total_sessions,
            active_sessions,
            consensus_reached,
            failed_sessions: total_sessions - active_sessions - consensus_reached,
        }
    }

    /// Get all group names that have active sessions
    pub async fn get_active_groups(&self) -> Vec<String> {
        let sessions = self.sessions.read().await;
        sessions
            .iter()
            .filter(|(_, group_sessions)| {
                group_sessions.values().any(|session| session.is_active())
            })
            .map(|(group_name, _)| group_name.clone())
            .collect()
    }

    /// Remove all sessions for a specific group
    pub async fn remove_group_sessions(&self, group_name: &str) {
        let mut sessions = self.sessions.write().await;
        sessions.remove(group_name);
    }

    /// Check if we have enough votes for consensus (2n/3 threshold)
    pub async fn has_sufficient_votes(&self, group_name: &str, proposal_id: u32) -> bool {
        let sessions = self.sessions.read().await;

        if let Some(group_sessions) = sessions.get(group_name) {
            if let Some(session) = group_sessions.get(&proposal_id) {
                let total_votes = session.votes.len() as u32;
                let expected_voters = session.proposal.expected_voters_count;
                self.check_sufficient_votes(
                    total_votes,
                    expected_voters,
                    session.config.consensus_threshold,
                )
            } else {
                false
            }
        } else {
            false
        }
    }

    /// Handle consensus when timeout is reached
    pub async fn handle_consensus_timeout(
        &self,
        group_name: &str,
        proposal_id: u32,
    ) -> Result<bool, ConsensusError> {
        // First, check if consensus was already reached to avoid unnecessary work
        let mut sessions = self.sessions.write().await;
        if let Some(group_sessions) = sessions.get_mut(group_name) {
            if let Some(session) = group_sessions.get_mut(&proposal_id) {
                // Check if consensus was already reached
                match session.state {
                    crate::consensus::ConsensusState::ConsensusReached(result) => {
                        info!("Consensus already reached for proposal {proposal_id}, skipping timeout");
                        Ok(result)
                    }
                    _ => {
                        // Calculate consensus result
                        let total_votes = session.votes.len() as u32;
                        let expected_voters = session.proposal.expected_voters_count;
                        let result = if self.check_sufficient_votes(
                            total_votes,
                            expected_voters,
                            session.config.consensus_threshold,
                        ) {
                            // We have sufficient votes (2n/3) - calculate result based on votes
                            self.calculate_consensus_result(
                                &session.votes,
                                session.proposal.liveness_criteria_yes,
                            )
                        } else {
                            // Insufficient votes - apply liveness criteria
                            session.proposal.liveness_criteria_yes
                        };

                        // Apply timeout consensus
                        session.state = crate::consensus::ConsensusState::ConsensusReached(result);
                        info!("Timeout consensus applied for proposal {proposal_id}: {result} (liveness criteria)");

                        // Emit consensus event
                        session.emit_consensus_event(
                            crate::consensus::ConsensusEvent::ConsensusReached {
                                proposal_id,
                                result,
                            },
                        );

                        Ok(result)
                    }
                }
            } else {
                Err(ConsensusError::SessionNotFound)
            }
        } else {
            Err(ConsensusError::SessionNotFound)
        }
    }

    /// Helper method to calculate required votes for consensus
    fn calculate_required_votes(&self, expected_voters: u32, consensus_threshold: f64) -> u32 {
        if expected_voters == 1 || expected_voters == 2 {
            expected_voters
        } else {
            ((expected_voters as f64) * consensus_threshold) as u32
        }
    }

    /// Helper method to check if sufficient votes exist for consensus
    fn check_sufficient_votes(
        &self,
        total_votes: u32,
        expected_voters: u32,
        consensus_threshold: f64,
    ) -> bool {
        let required_votes = self.calculate_required_votes(expected_voters, consensus_threshold);
        info!(
            "[consensus::service::check_sufficient_votes]: Total votes: {total_votes}, Expected voters: {expected_voters}, Consensus threshold: {consensus_threshold}, Required votes: {required_votes}"
        );
        total_votes >= required_votes
    }

    /// Helper method to calculate consensus result based on votes
    fn calculate_consensus_result(
        &self,
        votes: &HashMap<Vec<u8>, Vote>,
        liveness_criteria_yes: bool,
    ) -> bool {
        let total_votes = votes.len() as u32;
        let yes_votes = votes.values().filter(|v| v.vote).count() as u32;
        let no_votes = total_votes - yes_votes;

        if yes_votes > no_votes {
            true
        } else if no_votes > yes_votes {
            false
        } else {
            // Tie - apply liveness criteria
            liveness_criteria_yes
        }
    }
}
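The threshold and tie-break math in the helpers above can be exercised standalone. These free functions mirror `calculate_required_votes` and `calculate_consensus_result` (the names and free-function form are illustrative, not the service API); note the `as u32` cast truncates, so fractional thresholds round down:

```rust
// Mirrors calculate_required_votes: groups of 1 or 2 need every voter;
// larger groups need floor(n * threshold) votes (2n/3 by default).
fn required_votes(expected_voters: u32, threshold: f64) -> u32 {
    if expected_voters == 1 || expected_voters == 2 {
        expected_voters
    } else {
        ((expected_voters as f64) * threshold) as u32
    }
}

// Mirrors calculate_consensus_result: simple majority, with the proposal's
// liveness criteria breaking a tie.
fn consensus_result(yes_votes: u32, no_votes: u32, liveness_criteria_yes: bool) -> bool {
    if yes_votes > no_votes {
        true
    } else if no_votes > yes_votes {
        false
    } else {
        liveness_criteria_yes
    }
}

fn main() {
    let threshold = 2.0 / 3.0;
    assert_eq!(required_votes(1, threshold), 1);
    assert_eq!(required_votes(2, threshold), 2);
    assert_eq!(required_votes(7, threshold), 4); // floor(7 * 2/3) = 4
    assert_eq!(required_votes(10, threshold), 6); // floor(10 * 2/3) = 6
    assert!(consensus_result(3, 1, false));
    assert!(consensus_result(2, 2, true)); // tie falls back to liveness criteria
    assert!(!consensus_result(2, 2, false));
}
```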
src/error.rs
@@ -1,5 +1,4 @@
use alloy::signers::local::LocalSignerError;
use kameo::error::SendError;
use openmls::group::WelcomeError;
use openmls::{
    framing::errors::MlsMessageError,
@@ -7,41 +6,58 @@ use openmls::{
    prelude::{
        CommitToPendingProposalsError, CreateMessageError, MergeCommitError,
        MergePendingCommitError, NewGroupError, ProcessMessageError, ProposeAddMemberError,
        RemoveMembersError,
    },
};
use openmls_rust_crypto::MemoryStorageError;
use std::{str::Utf8Error, string::FromUtf8Error};
use std::string::FromUtf8Error;

use ds::{waku_actor::WakuMessageToSend, DeliveryServiceError};
use mls_crypto::error::IdentityError;

#[derive(Debug, thiserror::Error)]
pub enum ConsensusError {
    #[error("Invalid vote ID")]
    InvalidVoteId,
    #[error("Invalid vote hash")]
    InvalidVoteHash,
    #[error(transparent)]
    MessageError(#[from] MessageError),

    #[error("Verification failed")]
    InvalidVoteSignature,
    #[error("Duplicate vote")]
    DuplicateVote,
    #[error("Empty vote owner")]
    EmptyVoteOwner,
    #[error("Vote expired")]
    VoteExpired,
    #[error("Invalid vote hash")]
    InvalidVoteHash,
    #[error("Empty vote hash")]
    EmptyVoteHash,
    #[error("Received hash mismatch")]
    ReceivedHashMismatch,
    #[error("Parent hash mismatch")]
    ParentHashMismatch,
    #[error("Invalid vote timestamp")]
    InvalidVoteTimestamp,

    #[error("Session not active")]
    SessionNotActive,
    #[error("Invalid message")]
    InvalidMessage,
    #[error("Proposal not found")]
    ProposalNotFound,
    #[error("Vote validation failed")]
    VoteValidationFailed,
    #[error("Consensus timeout")]
    ConsensusTimeout,
    #[error("Insufficient votes for consensus")]
    InsufficientVotes,
    #[error("Invalid signature")]
    InvalidSignature,
    #[error("Proposal expired")]
    ProposalExpired,
    #[error("An unknown consensus error occurred: {0}")]
    Other(String),
    #[error("Group not found")]
    GroupNotFound,
    #[error("Session not found")]
    SessionNotFound,

    #[error("User already voted")]
    UserAlreadyVoted,

    #[error("Proposal already exists in consensus service")]
    ProposalAlreadyExist,

    #[error("Empty signature")]
    EmptySignature,
    #[error("Invalid signature: {0}")]
    InvalidSignature(String),

    #[error("Failed to get current time")]
    FailedToGetCurrentTime(#[from] std::time::SystemTimeError),
}

#[derive(Debug, thiserror::Error)]
@@ -52,6 +68,10 @@ pub enum MessageError {
    InvalidJson(#[from] serde_json::Error),
    #[error("Failed to serialize or deserialize MLS message: {0}")]
    InvalidMlsMessage(#[from] MlsMessageError),
    #[error("Invalid alloy signature: {0}")]
    InvalidAlloySignature(#[from] alloy::primitives::SignatureError),
    #[error("Mismatched length: expected {expect}, got {actual}")]
    MismatchedLength { expect: usize, actual: usize },
}

#[derive(Debug, thiserror::Error)]
@@ -65,15 +85,15 @@ pub enum GroupError {
    MlsGroupNotSet,
    #[error("Group still active")]
    GroupStillActive,
    #[error("Empty welcome message")]
    EmptyWelcomeMessage,
    #[error("Member not found")]
    MemberNotFound,
    #[error("Invalid state transition")]
    InvalidStateTransition,
    #[error("Invalid state transition from {from} to {to}")]
    InvalidStateTransition { from: String, to: String },
    #[error("Empty proposals for current epoch")]
    EmptyProposals,
    #[error("Invalid state [{state}] to send message [{message_type}]")]
    InvalidStateToMessageSend { state: String, message_type: String },

    #[error("Failed to decode hex address: {0}")]
    HexDecodeError(#[from] alloy::hex::FromHexError),
    #[error("Unable to create MLS group: {0}")]
    UnableToCreateGroup(#[from] NewGroupError<MemoryStorageError>),
    #[error("Unable to merge pending commit in MLS group: {0}")]
@@ -84,8 +104,6 @@ pub enum GroupError {
    InvalidProcessMessage(#[from] ProcessMessageError),
    #[error("Unable to encrypt MLS message: {0}")]
    UnableToEncryptMlsMessage(#[from] CreateMessageError),
    #[error("Unable to remove members: {0}")]
    UnableToRemoveMembers(#[from] RemoveMembersError<MemoryStorageError>),
    #[error("Unable to create proposal to add members: {0}")]
    UnableToCreateProposal(#[from] ProposeAddMemberError<MemoryStorageError>),
    #[error("Unable to create proposal to remove members: {0}")]
@@ -96,21 +114,10 @@ pub enum GroupError {
    ),
    #[error("Unable to store pending proposal: {0}")]
    UnableToStorePendingProposal(#[from] MemoryStorageError),

    #[error("Failed to serialize mls message: {0}")]
    MlsMessageError(#[from] MlsMessageError),

    #[error("UTF-8 parsing error: {0}")]
    Utf8ParsingError(#[from] FromUtf8Error),
    #[error("JSON processing error: {0}")]
    JsonError(#[from] serde_json::Error),
    #[error("Serialization error: {0}")]
    SerializationError(#[from] tls_codec::Error),
    #[error("Failed to decode app message: {0}")]
    AppMessageDecodeError(String),

    #[error("An unknown error occurred: {0}")]
    Other(anyhow::Error),
    AppMessageDecodeError(#[from] prost::DecodeError),
}

#[derive(Debug, thiserror::Error)]
@@ -123,63 +130,56 @@ pub enum UserError {
    GroupError(#[from] GroupError),
    #[error(transparent)]
    MessageError(#[from] MessageError),
    #[error(transparent)]
    ConsensusError(#[from] ConsensusError),

    #[error("Group already exists: {0}")]
    GroupAlreadyExistsError(String),
    #[error("Group not found: {0}")]
    GroupNotFoundError(String),

    #[error("Unsupported message type.")]
    UnsupportedMessageType,
    #[error("Group already exists")]
    GroupAlreadyExistsError,
    #[error("Group not found")]
    GroupNotFoundError,
    #[error("MLS group not initialized")]
    MlsGroupNotInitialized,
    #[error("Welcome message cannot be empty.")]
    EmptyWelcomeMessageError,
    #[error("Failed to extract welcome message")]
    FailedToExtractWelcomeMessage,
    #[error("Message verification failed")]
    MessageVerificationFailed,

    #[error("Failed to create group: {0}")]
    UnableToCreateGroup(String),
    #[error("Failed to handle steward epoch: {0}")]
    UnableToHandleStewardEpoch(String),
    #[error("Failed to process steward message: {0}")]
    ProcessStewardMessageError(String),
    #[error("Failed to get group update requests: {0}")]
    GetGroupUpdateRequestsError(String),
    #[error("Invalid user action: {0}")]
    InvalidUserAction(String),
    #[error("Failed to start voting: {0}")]
    UnableToStartVoting(String),
    #[error("Unknown content topic type: {0}")]
    UnknownContentTopicType(String),
    #[error("Failed to send message to ws: {0}")]
    UnableToSendMessageToWs(String),
    #[error("Invalid group state: {0}")]
    InvalidGroupState(String),
    #[error("No proposals found")]
    NoProposalsFound,
    #[error("Invalid app message type")]
    InvalidAppMessageType,

    #[error("Failed to create staged join: {0}")]
    MlsWelcomeError(#[from] WelcomeError<MemoryStorageError>),

    #[error("UTF-8 parsing error: {0}")]
    Utf8ParsingError(#[from] FromUtf8Error),
    #[error("UTF-8 string parsing error: {0}")]
    Utf8StringParsingError(#[from] Utf8Error),
    #[error("JSON processing error: {0}")]
    JsonError(#[from] serde_json::Error),
    #[error("Serialization error: {0}")]
    SerializationError(#[from] tls_codec::Error),
    #[error("Failed to parse signer: {0}")]
    SignerParsingError(#[from] LocalSignerError),

    #[error("Failed to publish message: {0}")]
    KameoPublishMessageError(#[from] SendError<WakuMessageToSend, DeliveryServiceError>),
    #[error("Failed to create group: {0}")]
    KameoCreateGroupError(String),
    #[error("Failed to send message to user: {0}")]
    KameoSendMessageError(String),
    #[error("Failed to get income key packages: {0}")]
    GetIncomeKeyPackagesError(String),
    #[error("Failed to process steward message: {0}")]
    ProcessStewardMessageError(String),
    #[error("Failed to process proposals: {0}")]
    ProcessProposalsError(String),
    #[error("Unsupported mls message type")]
    UnsupportedMlsMessageType,
    #[error("Failed to decode welcome message: {0}")]
    WelcomeMessageDecodeError(#[from] prost::DecodeError),
    #[error("Failed to apply proposals: {0}")]
    ApplyProposalsError(String),
    #[error("Failed to deserialize mls message in: {0}")]
    MlsMessageInDeserializeError(String),
    #[error("Failed to create invite proposal: {0}")]
    CreateInviteProposalError(String),
    MlsMessageInDeserializeError(#[from] openmls::prelude::Error),
    #[error("Failed to try into protocol message: {0}")]
    TryIntoProtocolMessageError(String),
    #[error("Failed to get group update requests: {0}")]
    GetGroupUpdateRequestsError(String),

    TryIntoProtocolMessageError(#[from] openmls::framing::errors::ProtocolMessageError),
    #[error("Failed to send message to waku: {0}")]
    WakuSendMessageError(#[from] tokio::sync::mpsc::error::SendError<WakuMessageToSend>),
}

src/group.rs
@@ -1,51 +1,79 @@
use alloy::hex;
|
||||
use kameo::Actor;
|
||||
use log::info;
|
||||
use log::{error, info};
|
||||
use openmls::{
|
||||
group::{GroupEpoch, GroupId, MlsGroup, MlsGroupCreateConfig},
|
||||
prelude::{
|
||||
Credential, CredentialWithKey, KeyPackage, LeafNodeIndex, OpenMlsProvider,
|
||||
ApplicationMessage, CredentialWithKey, KeyPackage, LeafNodeIndex, OpenMlsProvider,
|
||||
ProcessedMessageContent, ProtocolMessage,
|
||||
},
|
||||
};
|
||||
use openmls_basic_credential::SignatureKeyPair;
|
||||
use prost::Message;
|
||||
use std::{fmt::Display, sync::Arc};
|
||||
use tokio::sync::Mutex;
|
||||
use tokio::sync::{Mutex, RwLock};
|
||||
use uuid;
|
||||
|
||||
use crate::{
|
||||
consensus::v1::{Proposal, Vote},
|
||||
error::GroupError,
|
||||
message::{
|
||||
wrap_batch_proposals_into_application_msg, wrap_conversation_message_into_application_msg,
|
||||
wrap_group_announcement_in_welcome_msg, wrap_invitation_into_welcome_msg,
|
||||
},
|
||||
protos::messages::v1::{app_message, AppMessage},
|
||||
message::{message_types, MessageType},
|
||||
protos::messages::v1::{app_message, AppMessage, BatchProposalsMessage, WelcomeMessage},
|
||||
state_machine::{GroupState, GroupStateMachine},
|
||||
steward::GroupUpdateRequest,
|
||||
};
|
||||
use ds::{waku_actor::WakuMessageToSend, APP_MSG_SUBTOPIC, WELCOME_SUBTOPIC};
|
||||
use mls_crypto::openmls_provider::MlsProvider;
|
||||
|
||||
/// Represents the action to take after processing a group message or event.
|
||||
///
|
||||
/// This enum defines the possible outcomes when processing group-related operations,
|
||||
/// allowing the caller to determine the appropriate next steps.
|
||||
#[derive(Clone, Debug)]
|
||||
pub enum GroupAction {
|
||||
GroupAppMsg(AppMessage),
|
||||
GroupProposal(Proposal),
|
||||
GroupVote(Vote),
|
||||
LeaveGroup,
|
||||
DoNothing,
|
||||
}
|
||||
|
||||
impl Display for GroupAction {
|
||||
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
|
||||
match self {
|
||||
GroupAction::GroupAppMsg(_) => write!(f, "Message will be printed to the app"),
|
||||
GroupAction::GroupProposal(_) => write!(f, "Get proposal for voting"),
|
||||
GroupAction::GroupVote(_) => write!(f, "Get vote for proposal"),
|
||||
GroupAction::LeaveGroup => write!(f, "User will leave the group"),
|
||||
GroupAction::DoNothing => write!(f, "Do Nothing"),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Represents a group in the MLS-based messaging system.
|
||||
///
|
||||
/// The Group struct manages the lifecycle of an MLS group, including member management,
|
||||
/// proposal handling, and state transitions. It integrates with the state machine
|
||||
/// to enforce proper group operations and steward epoch management.
|
||||
///
|
||||
/// ## Key Features:
|
||||
/// - MLS group management and message processing
|
||||
/// - Steward epoch coordination and proposal handling
|
||||
/// - State machine integration for proper workflow enforcement
|
||||
/// - Member addition/removal through proposals
|
||||
/// - Message validation and permission checking
|
||||
#[derive(Clone, Debug, Actor)]
|
||||
pub struct Group {
|
||||
group_name: String,
|
||||
mls_group: Option<Arc<Mutex<MlsGroup>>>,
|
||||
is_kp_shared: bool,
|
||||
app_id: Vec<u8>,
|
||||
state_machine: GroupStateMachine,
|
||||
state_machine: Arc<RwLock<GroupStateMachine>>,
|
||||
}
|
||||
|
||||
impl Group {
|
||||
pub fn new(
|
||||
group_name: String,
|
||||
group_name: &str,
|
||||
is_creation: bool,
|
||||
provider: Option<&MlsProvider>,
|
||||
signer: Option<&SignatureKeyPair>,
|
||||
@@ -53,14 +81,14 @@ impl Group {
|
||||
) -> Result<Self, GroupError> {
|
||||
let uuid = uuid::Uuid::new_v4().as_bytes().to_vec();
|
||||
let mut group = Group {
|
||||
group_name: group_name.clone(),
|
||||
group_name: group_name.to_string(),
|
||||
mls_group: None,
|
||||
is_kp_shared: false,
|
||||
app_id: uuid.clone(),
|
||||
state_machine: if is_creation {
|
||||
GroupStateMachine::new_with_steward()
|
||||
Arc::new(RwLock::new(GroupStateMachine::new_with_steward()))
|
||||
} else {
|
||||
GroupStateMachine::new()
|
||||
Arc::new(RwLock::new(GroupStateMachine::new()))
|
||||
},
|
||||
};
|
||||
|
||||
@@ -87,11 +115,20 @@ impl Group {
|
||||
Ok(group)
|
||||
}
|
||||
|
||||
/// Get the identities of all current group members.
|
||||
///
|
||||
/// ## Returns:
|
||||
/// - Vector of member identity bytes
|
||||
///
|
||||
/// ## Errors:
|
||||
/// - `GroupError::MlsGroupNotSet` if MLS group is not initialized
|
||||
pub async fn members_identity(&self) -> Result<Vec<Vec<u8>>, GroupError> {
|
||||
if !self.is_mls_group_initialized() {
|
||||
return Err(GroupError::MlsGroupNotSet);
|
||||
}
|
||||
let mls_group = self.mls_group.as_ref().unwrap().lock().await;
|
||||
let mls_group = self
|
||||
.mls_group
|
||||
.as_ref()
|
||||
.ok_or_else(|| GroupError::MlsGroupNotSet)?
|
||||
.lock()
|
||||
.await;
|
||||
let x = mls_group
|
||||
.members()
|
||||
.map(|m| m.credential.serialized_content().to_vec())
|
||||
@@ -99,14 +136,26 @@ impl Group {
|
||||
Ok(x)
|
||||
}
|
||||
|
||||
/// Find the leaf node index of a member by their identity.
|
||||
///
|
||||
/// ## Parameters:
|
||||
/// - `identity`: The member's identity bytes
|
||||
///
|
||||
/// ## Returns:
|
||||
/// - `Some(LeafNodeIndex)` if member is found, `None` otherwise
|
||||
///
|
||||
/// ## Errors:
|
||||
/// - `GroupError::MlsGroupNotSet` if MLS group is not initialized
|
||||
pub async fn find_member_index(
|
||||
&self,
|
||||
identity: Vec<u8>,
|
||||
) -> Result<Option<LeafNodeIndex>, GroupError> {
|
||||
if !self.is_mls_group_initialized() {
|
||||
return Err(GroupError::MlsGroupNotSet);
|
||||
}
|
||||
let mls_group = self.mls_group.as_ref().unwrap().lock().await;
|
||||
let mls_group = self
|
||||
.mls_group
|
||||
.as_ref()
|
||||
.ok_or_else(|| GroupError::MlsGroupNotSet)?
|
||||
.lock()
|
||||
.await;
|
||||
let x = mls_group.members().find_map(|m| {
|
||||
if m.credential.serialized_content() == identity {
|
||||
Some(m.index)
|
||||
@@ -117,137 +166,282 @@ impl Group {
|
||||
Ok(x)
|
||||
}
|
||||
|
||||
/// Get the current epoch of the MLS group.
|
||||
///
|
||||
/// ## Returns:
|
||||
/// - Current group epoch
|
||||
///
|
||||
/// ## Errors:
|
||||
/// - `GroupError::MlsGroupNotSet` if MLS group is not initialized
|
||||
pub async fn epoch(&self) -> Result<GroupEpoch, GroupError> {
|
||||
if !self.is_mls_group_initialized() {
|
||||
return Err(GroupError::MlsGroupNotSet);
|
||||
}
|
||||
let mls_group = self.mls_group.as_ref().unwrap().lock().await;
|
||||
let mls_group = self
|
||||
.mls_group
|
||||
.as_ref()
|
||||
.ok_or_else(|| GroupError::MlsGroupNotSet)?
|
||||
.lock()
|
||||
.await;
|
||||
Ok(mls_group.epoch())
|
||||
}
|
||||
|
||||
/// Set the MLS group instance for this group.
|
||||
///
|
||||
/// ## Parameters:
|
||||
/// - `mls_group`: The MLS group instance to set
|
||||
///
|
||||
/// ## Effects:
|
||||
/// - Sets `is_kp_shared` to `true`
|
||||
/// - Stores the MLS group in an `Arc<Mutex<MlsGroup>>`
|
||||
pub fn set_mls_group(&mut self, mls_group: MlsGroup) -> Result<(), GroupError> {
|
||||
self.is_kp_shared = true;
|
||||
self.mls_group = Some(Arc::new(Mutex::new(mls_group)));
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Check if the MLS group is initialized.
|
||||
///
|
||||
/// ## Returns:
|
||||
/// - `true` if MLS group is set, `false` otherwise
|
||||
pub fn is_mls_group_initialized(&self) -> bool {
|
||||
self.mls_group.is_some()
|
||||
}
|
||||
|
||||
/// Check if the key package has been shared.
|
||||
///
|
||||
/// ## Returns:
|
||||
/// - `true` if key package is shared, `false` otherwise
|
||||
pub fn is_kp_shared(&self) -> bool {
|
||||
self.is_kp_shared
|
||||
}
|
||||
|
||||
/// Set the key package shared status.
|
||||
///
|
||||
/// ## Parameters:
|
||||
/// - `is_kp_shared`: Whether the key package is shared
|
||||
pub fn set_kp_shared(&mut self, is_kp_shared: bool) {
|
||||
self.is_kp_shared = is_kp_shared;
|
||||
}
|
||||
|
||||
pub fn is_steward(&self) -> bool {
|
||||
self.state_machine.has_steward()
|
||||
/// Check if this group has a steward configured.
|
||||
///
|
||||
/// ## Returns:
|
||||
/// - `true` if steward is configured, `false` otherwise
|
||||
pub async fn is_steward(&self) -> bool {
|
||||
self.state_machine.read().await.has_steward()
|
||||
}
|
||||
|
||||
pub fn app_id(&self) -> Vec<u8> {
|
||||
self.app_id.clone()
|
||||
/// Get the application ID for this group.
|
||||
///
|
||||
/// ## Returns:
|
||||
/// - Reference to the application ID bytes
|
||||
pub fn app_id(&self) -> &[u8] {
|
||||
&self.app_id
|
||||
}
    /// Get the group name as bytes.
    ///
    /// ## Returns:
    /// - Reference to the group name bytes
    pub fn group_name_bytes(&self) -> &[u8] {
        self.group_name.as_bytes()
    }

    /// Generate a steward announcement message for this group.
    ///
    /// ## Returns:
    /// - Waku message containing the steward announcement
    ///
    /// ## Errors:
    /// - `GroupError::StewardNotSet` if no steward is configured
    ///
    /// ## Effects:
    /// - Refreshes the steward's key pair
    /// - Creates a new group announcement
    pub async fn generate_steward_message(&mut self) -> Result<WakuMessageToSend, GroupError> {
        let mut state_machine = self.state_machine.write().await;
        let steward = state_machine
            .get_steward_mut()
            .ok_or(GroupError::StewardNotSet)?;
        steward.refresh_key_pair().await;

        let welcome_msg: WelcomeMessage = steward.create_announcement().await.into();
        let msg_to_send = WakuMessageToSend::new(
            welcome_msg.encode_to_vec(),
            WELCOME_SUBTOPIC,
            &self.group_name,
            self.app_id(),
        );
        Ok(msg_to_send)
    }

    /// Decrypt a steward message using the group's steward key.
    ///
    /// ## Parameters:
    /// - `message`: The encrypted message bytes
    ///
    /// ## Returns:
    /// - Decrypted KeyPackage
    ///
    /// ## Errors:
    /// - `GroupError::StewardNotSet` if no steward is configured
    /// - Various decryption errors from the steward
    pub async fn decrypt_steward_msg(
        &mut self,
        message: Vec<u8>,
    ) -> Result<KeyPackage, GroupError> {
        let state_machine = self.state_machine.read().await;
        let steward = state_machine
            .get_steward()
            .ok_or(GroupError::StewardNotSet)?;
        let msg: KeyPackage = steward.decrypt_message(message).await?;
        Ok(msg)
    }

    // Functions to store proposals in the steward queue

    /// Store an invite proposal in the steward queue for the current epoch.
    ///
    /// ## Parameters:
    /// - `key_package`: The key package of the member to add
    ///
    /// ## Effects:
    /// - Adds an AddMember proposal to the current epoch
    /// - Proposal will be processed in the next steward epoch
    pub async fn store_invite_proposal(
        &mut self,
        key_package: Box<KeyPackage>,
    ) -> Result<(), GroupError> {
        let mut state_machine = self.state_machine.write().await;
        state_machine
            .add_proposal(GroupUpdateRequest::AddMember(key_package))
            .await;
        Ok(())
    }

    /// Store a remove proposal in the steward queue for the current epoch.
    ///
    /// ## Parameters:
    /// - `identity`: The identity string of the member to remove
    ///
    /// ## Effects:
    /// - Adds a RemoveMember proposal to the current epoch
    /// - Proposal will be processed in the next steward epoch
    pub async fn store_remove_proposal(&mut self, identity: String) -> Result<(), GroupError> {
        let mut state_machine = self.state_machine.write().await;
        state_machine
            .add_proposal(GroupUpdateRequest::RemoveMember(identity))
            .await;
        Ok(())
    }
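The two `store_*` methods above only enqueue `GroupUpdateRequest`s; the steward's state machine later drains that queue when a new steward epoch begins and the queued proposals go up for a vote. A minimal, self-contained sketch of that queue (type and method names here are illustrative assumptions, not the crate's actual API):

```rust
// Stand-ins for the proposal types the steward queue collects.
#[derive(Debug, Clone, PartialEq)]
enum GroupUpdateRequest {
    AddMember(Vec<u8>),   // serialized key package (stand-in)
    RemoveMember(String), // member identity
}

#[derive(Default)]
struct ProposalQueue {
    current_epoch: Vec<GroupUpdateRequest>,
    voting_epoch: Vec<GroupUpdateRequest>,
}

impl ProposalQueue {
    // Proposals accumulate in the current epoch between steward epochs.
    fn add_proposal(&mut self, proposal: GroupUpdateRequest) {
        self.current_epoch.push(proposal);
    }

    // Starting a steward epoch drains the current proposals into the voting
    // epoch and reports how many are up for a vote.
    fn start_steward_epoch(&mut self) -> usize {
        self.voting_epoch = std::mem::take(&mut self.current_epoch);
        self.voting_epoch.len()
    }
}

fn main() {
    let mut queue = ProposalQueue::default();
    queue.add_proposal(GroupUpdateRequest::RemoveMember("0xabc".into()));
    queue.add_proposal(GroupUpdateRequest::AddMember(vec![1, 2, 3]));
    assert_eq!(queue.start_steward_epoch(), 2);
    assert!(queue.current_epoch.is_empty());
    println!("2 proposals moved to the voting epoch");
}
```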

    // Functions to process protocol messages

    /// Process an application message and determine the appropriate action.
    ///
    /// ## Parameters:
    /// - `message`: The application message to process
    ///
    /// ## Returns:
    /// - `GroupAction` indicating what action should be taken
    ///
    /// ## Effects:
    /// - For ban requests received by stewards: automatically adds remove proposals
    /// - For other messages: processes normally
    ///
    /// ## Supported Message Types:
    /// - Conversation messages
    /// - Proposals
    /// - Votes
    /// - Ban requests
    pub async fn process_application_message(
        &mut self,
        message: ApplicationMessage,
    ) -> Result<GroupAction, GroupError> {
        let app_msg = AppMessage::decode(message.into_bytes().as_slice())?;
        match app_msg.payload {
            Some(app_message::Payload::ConversationMessage(conversation_message)) => {
                info!("[group::process_application_message]: Processing conversation message");
                Ok(GroupAction::GroupAppMsg(conversation_message.into()))
            }
            Some(app_message::Payload::Proposal(proposal)) => {
                info!("[group::process_application_message]: Processing proposal message");
                Ok(GroupAction::GroupProposal(proposal))
            }
            Some(app_message::Payload::Vote(vote)) => {
                info!("[group::process_application_message]: Processing vote message");
                Ok(GroupAction::GroupVote(vote))
            }
            Some(app_message::Payload::BanRequest(ban_request)) => {
                info!("[group::process_application_message]: Processing ban request message");

                if self.is_steward().await {
                    info!(
                        "[group::process_application_message]: Steward adding remove proposal for user {}",
                        ban_request.user_to_ban.clone()
                    );
                    self.store_remove_proposal(ban_request.user_to_ban.clone())
                        .await?;
                } else {
                    info!(
                        "[group::process_application_message]: Non-steward received ban request message"
                    );
                }

                Ok(GroupAction::GroupAppMsg(ban_request.into()))
            }
            _ => Ok(GroupAction::DoNothing),
        }
    }

    /// Process a protocol message from the MLS group.
    ///
    /// ## Parameters:
    /// - `message`: The protocol message to process
    /// - `provider`: The MLS provider for processing
    ///
    /// ## Returns:
    /// - `GroupAction` indicating what action should be taken
    ///
    /// ## Effects:
    /// - Processes MLS group messages
    /// - Handles member removal scenarios
    /// - Stores pending proposals
    ///
    /// ## Supported Message Types:
    /// - Application messages
    /// - Proposal messages
    /// - External join proposals
    /// - Staged commit messages
    pub async fn process_protocol_msg(
        &mut self,
        message: ProtocolMessage,
        provider: &MlsProvider,
        signature_key: Vec<u8>,
    ) -> Result<GroupAction, GroupError> {
        let group_id = message.group_id().as_slice();
        if group_id != self.group_name_bytes() {
            return Ok(GroupAction::DoNothing);
        }

        let mut mls_group = self
            .mls_group
            .as_ref()
            .ok_or_else(|| GroupError::MlsGroupNotSet)?
            .lock()
            .await;
        // A message from a previous epoch is the commit for a welcome message
        // and does not need to be processed
        if message.epoch() < mls_group.epoch() && message.epoch() == 0.into() {
            return Ok(GroupAction::DoNothing);
        }

        let processed_message = mls_group.process_message(provider, message)?;
        let processed_message_credential: Credential = processed_message.credential().clone();

        match processed_message.into_content() {
            ProcessedMessageContent::ApplicationMessage(application_message) => {
                // Ignore the message unless it came from another group member
                let sender = mls_group.members().find_map(|m| {
                    if m.credential.serialized_content()
                        == processed_message_credential.serialized_content()
                        && (signature_key != m.signature_key.as_slice())
                    {
                        Some(hex::encode(m.credential.serialized_content()))
                    } else {
                        None
                    }
                });
                if sender.is_none() {
                    return Ok(GroupAction::DoNothing);
                }

                drop(mls_group);
                self.process_application_message(application_message).await
            }
            ProcessedMessageContent::ProposalMessage(proposal_ptr) => {
                mls_group
                    .store_pending_proposal(provider.storage(), proposal_ptr.as_ref().clone())?;
                Ok(GroupAction::DoNothing)
            }
            ProcessedMessageContent::ExternalJoinProposalMessage(_external_proposal_ptr) => {
                Ok(GroupAction::DoNothing)
            }
            ProcessedMessageContent::StagedCommitMessage(commit_ptr) => {
                let mut remove_proposal: bool = false;
                if commit_ptr.self_removed() {
                    remove_proposal = true;
                }
                mls_group.merge_staged_commit(provider, *commit_ptr)?;
                if remove_proposal {
                    // Here we need to remove the group instance locally and
                    // also remove the corresponding key package from local storage and SC storage
                    if mls_group.is_active() {
                        return Err(GroupError::GroupStillActive);
                    }
                    Ok(GroupAction::LeaveGroup)
                } else {
                    Ok(GroupAction::DoNothing)
                }
            }
        }
    }

    /// Build and validate a message for sending to the group.
    ///
    /// ## Parameters:
    /// - `provider`: The MLS provider for message creation
    /// - `signer`: The signature key pair for signing
    /// - `msg`: The application message to build
    ///
    /// ## Returns:
    /// - Waku message ready for transmission
    ///
    /// ## Effects:
    /// - Validates that the message can be sent in the current state
    /// - Creates an MLS message with proper signing
    ///
    /// ## Validation:
    /// - Checks state machine permissions
    /// - Ensures steward status and proposal availability
    pub async fn build_message(
        &mut self,
        provider: &MlsProvider,
        signer: &SignatureKeyPair,
        msg: &AppMessage,
    ) -> Result<WakuMessageToSend, GroupError> {
        let is_steward = self.is_steward().await;
        let has_proposals = self.get_pending_proposals_count().await > 0;

        let message_type = msg
            .payload
            .as_ref()
            .map(|p| p.message_type())
            .unwrap_or(message_types::UNKNOWN);

        // Check if the message can be sent in the current state
        let state_machine = self.state_machine.read().await;
        let current_state = state_machine.current_state();
        if !state_machine.can_send_message_type(is_steward, has_proposals, message_type) {
            return Err(GroupError::InvalidStateToMessageSend {
                state: current_state.to_string(),
                message_type: message_type.to_string(),
            });
        }
        let message_out = self
            .mls_group
            .as_mut()
            .ok_or_else(|| GroupError::MlsGroupNotSet)?
            .lock()
            .await
            .create_message(provider, signer, &msg.encode_to_vec())?
            .to_bytes()?;
        Ok(WakuMessageToSend::new(
            message_out,
            APP_MSG_SUBTOPIC,
            &self.group_name,
            self.app_id(),
        ))
    }

    // State management methods

    /// Get the current state of the group state machine.
    ///
    /// ## Returns:
    /// - Current `GroupState` of the group
    pub async fn get_state(&self) -> GroupState {
        self.state_machine.read().await.current_state()
    }

    /// Get the number of pending proposals for the current epoch
    pub async fn get_pending_proposals_count(&self) -> usize {
        self.state_machine
            .read()
            .await
            .get_current_epoch_proposals_count()
            .await
    }

    /// Get the number of pending proposals for the voting epoch
    pub async fn get_voting_proposals_count(&self) -> usize {
        self.state_machine
            .read()
            .await
            .get_voting_epoch_proposals_count()
            .await
    }

    /// Get the proposals for the voting epoch
    pub async fn get_proposals_for_voting_epoch(&self) -> Vec<GroupUpdateRequest> {
        self.state_machine
            .read()
            .await
            .get_voting_epoch_proposals()
            .await
    }

    /// Start voting on proposals for the current epoch
    pub async fn start_voting(&mut self) -> Result<(), GroupError> {
        self.state_machine.write().await.start_voting()
    }

    /// Complete voting and update the state based on the result
    pub async fn complete_voting(&mut self, vote_result: bool) -> Result<(), GroupError> {
        self.state_machine
            .write()
            .await
            .complete_voting(vote_result)
    }

    /// Start the working state (for non-steward peers after consensus or edge case recovery)
    pub async fn start_working(&mut self) {
        self.state_machine.write().await.start_working();
    }

    /// Start the consensus-reached state (for non-steward peers after consensus)
    pub async fn start_consensus_reached(&mut self) {
        self.state_machine.write().await.start_consensus_reached();
    }

    /// Recover from a consensus failure by transitioning back to the Working state
    pub async fn recover_from_consensus_failure(&mut self) -> Result<(), GroupError> {
        self.state_machine
            .write()
            .await
            .recover_from_consensus_failure()
    }

    /// Start the waiting state (for non-steward peers after consensus or edge case recovery)
    pub async fn start_waiting(&mut self) {
        self.state_machine.write().await.start_waiting();
    }

    /// Start a steward epoch with validation
    pub async fn start_steward_epoch_with_validation(&mut self) -> Result<usize, GroupError> {
        self.state_machine
            .write()
            .await
            .start_steward_epoch_with_validation()
            .await
    }

    /// Handle a successful vote for the group
    pub async fn handle_yes_vote(&mut self) -> Result<(), GroupError> {
        self.state_machine.write().await.handle_yes_vote().await
    }

    /// Handle a failed vote for the group
    pub async fn handle_no_vote(&mut self) -> Result<(), GroupError> {
        self.state_machine.write().await.handle_no_vote().await
    }

    /// Start the waiting state when the steward sends batch proposals after consensus
    pub async fn start_waiting_after_consensus(&mut self) -> Result<(), GroupError> {
        self.state_machine
            .write()
            .await
            .start_waiting_after_consensus()
    }
    /// Create a batch proposals message and a welcome message for the current epoch.
    /// Returns [batch_proposals_msg, welcome_msg], where the welcome message is only included if there are new members.
    ///
    /// ## Parameters:
    /// - `provider`: The MLS provider for proposal creation
    /// - `signer`: The signature key pair for signing
    ///
    /// ## Returns:
    /// - Vector of Waku messages: [batch_proposals_msg, welcome_msg]
    /// - The welcome message is only included if there are new members to add
    ///
    /// ## Preconditions:
    /// - Must be a steward
    /// - Must have proposals in the voting epoch
    ///
    /// ## Effects:
    /// - Creates MLS proposals for all pending group updates
    /// - Commits all proposals to the MLS group
    /// - Merges the commit to apply changes
    ///
    /// ## Supported Proposal Types:
    /// - AddMember: Adds a new member with a key package
    /// - RemoveMember: Removes a member by identity
    ///
    /// ## Errors:
    /// - `GroupError::StewardNotSet` if not a steward
    /// - `GroupError::EmptyProposals` if no proposals exist
    /// - Various MLS processing errors
    pub async fn create_batch_proposals_message(
        &mut self,
        provider: &MlsProvider,
        signer: &SignatureKeyPair,
    ) -> Result<Vec<WakuMessageToSend>, GroupError> {
        if !self.is_steward().await {
            return Err(GroupError::StewardNotSet);
        }

        let proposals = self.get_proposals_for_voting_epoch().await;
        if proposals.is_empty() {
            return Err(GroupError::EmptyProposals);
        }

        // Pre-collect member indices to avoid borrow checker issues
        let mut member_indices = Vec::new();
        for proposal in &proposals {
            if let GroupUpdateRequest::RemoveMember(identity) = proposal {
                // Convert the address string to bytes for proper MLS credential matching
                let identity_bytes = if let Some(hex_string) = identity.strip_prefix("0x") {
                    // Remove the 0x prefix and convert to bytes
                    hex::decode(hex_string)?
                } else {
                    // Assume it's already a hex string without the 0x prefix
                    hex::decode(identity)?
                };

                let member_index = self.find_member_index(identity_bytes).await?;
                member_indices.push(member_index);
            } else {
                member_indices.push(None);
            }
        }

        let mut mls_proposals = Vec::new();
        let (out_messages, welcome) = {
            let mut mls_group = self
                .mls_group
                .as_mut()
                .ok_or_else(|| GroupError::MlsGroupNotSet)?
                .lock()
                .await;

            // Convert each GroupUpdateRequest to an MLS proposal
            for (i, proposal) in proposals.iter().enumerate() {
                match proposal {
                    GroupUpdateRequest::AddMember(boxed_key_package) => {
                        let (mls_message_out, _proposal_ref) = mls_group.propose_add_member(
                            provider,
                            signer,
                            boxed_key_package.as_ref(),
                        )?;
                        mls_proposals.push(mls_message_out.to_bytes()?);
                    }
                    GroupUpdateRequest::RemoveMember(identity) => {
                        if let Some(index) = member_indices[i] {
                            let (mls_message_out, _proposal_ref) =
                                mls_group.propose_remove_member(provider, signer, index)?;
                            mls_proposals.push(mls_message_out.to_bytes()?);
                        } else {
                            error!("[create_batch_proposals_message]: Failed to find member index for identity: {identity}");
                        }
                    }
                }
            }

            // Create a commit with all proposals
            let (out_messages, welcome, _group_info) =
                mls_group.commit_to_pending_proposals(provider, signer)?;

            // Merge the commit
            mls_group.merge_pending_commit(provider)?;
            (out_messages, welcome)
        };

        // Create the batch proposals message (without the welcome)
        let batch_msg: AppMessage = BatchProposalsMessage {
            group_name: self.group_name_bytes().to_vec(),
            mls_proposals,
            commit_message: out_messages.to_bytes()?,
        }
        .into();

        let batch_waku_msg = WakuMessageToSend::new(
            batch_msg.encode_to_vec(),
            APP_MSG_SUBTOPIC,
            &self.group_name,
            self.app_id(),
        );

        let mut messages = vec![batch_waku_msg];

        // Create a separate welcome message if there are new members
        if let Some(welcome) = welcome {
            let welcome_msg: WelcomeMessage = welcome.try_into()?;
            let welcome_waku_msg = WakuMessageToSend::new(
                welcome_msg.encode_to_vec(),
                WELCOME_SUBTOPIC,
                &self.group_name,
                self.app_id(),
            );

            messages.push(welcome_waku_msg);
        }

        Ok(messages)
    }

    pub async fn remove_proposals_and_complete(&mut self) -> Result<(), GroupError> {
        self.state_machine
            .write()
            .await
            .remove_proposals_and_complete()
            .await?;
        Ok(())
    }
}

impl Display for Group {

src/lib.rs
@@ -29,23 +29,29 @@
//!
//! ## Steward Epoch Flow
//!
//! The system operates in epochs managed by a steward with robust state management:
//!
//! 1. **Working State**: Normal operation, all users can send any message freely
//! 2. **Waiting State**: Steward epoch active, only the steward can send BATCH_PROPOSALS_MESSAGE
//! 3. **Voting State**: Consensus voting, restricted message types (VOTE/USER_VOTE for all, VOTING_PROPOSAL/PROPOSAL for the steward only)
//!
//! ### Complete State Transitions
//!
//! ```text
//! Working --start_steward_epoch()--> Waiting (if proposals exist)
//! Working --start_steward_epoch()--> Working (if no proposals - no state change)
//! Waiting --start_voting()---------> Voting
//! Waiting --apply_proposals_and_complete()--> Working
//! Waiting --no_proposals_found()---> Working (edge case: proposals disappear during voting)
//! Voting --complete_voting(YES)----> Waiting --apply_proposals()--> Working
//! Voting --complete_voting(NO)-----> Working
//! ```
//!
//! ### Steward State Guarantees
//!
//! - **Always returns to Working**: The steward transitions back to the Working state after every epoch
//! - **No proposals handling**: If no proposals exist, the steward stays in the Working state
//! - **Edge case coverage**: All scenarios, including proposal disappearance, are handled
//! - **Robust error handling**: Invalid state transitions are prevented and logged
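The transition table above can be sketched as a small state machine. The enum and method names below are illustrative assumptions for this example only; the real `GroupState` machine lives in the crate's consensus module:

```rust
// Illustrative sketch of the steward state machine described above.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum GroupState {
    Working,
    Waiting,
    Voting,
}

impl GroupState {
    // The steward starts an epoch: move to Waiting only if proposals exist;
    // with no proposals there is no state change.
    fn start_steward_epoch(self, has_proposals: bool) -> GroupState {
        match (self, has_proposals) {
            (GroupState::Working, true) => GroupState::Waiting,
            (state, _) => state,
        }
    }

    fn start_voting(self) -> GroupState {
        match self {
            GroupState::Waiting => GroupState::Voting,
            state => state,
        }
    }

    // A YES vote returns to Waiting (proposals still need applying);
    // a NO vote falls straight back to Working.
    fn complete_voting(self, vote_passed: bool) -> GroupState {
        match (self, vote_passed) {
            (GroupState::Voting, true) => GroupState::Waiting,
            (GroupState::Voting, false) => GroupState::Working,
            (state, _) => state,
        }
    }

    fn apply_proposals_and_complete(self) -> GroupState {
        GroupState::Working
    }
}

fn main() {
    // Happy path: Working -> Waiting -> Voting -> Waiting -> Working.
    let mut s = GroupState::Working;
    s = s.start_steward_epoch(true);
    assert_eq!(s, GroupState::Waiting);
    s = s.start_voting();
    assert_eq!(s, GroupState::Voting);
    s = s.complete_voting(true);
    assert_eq!(s, GroupState::Waiting);
    s = s.apply_proposals_and_complete();
    assert_eq!(s, GroupState::Working);

    // No proposals: the steward stays in Working.
    assert_eq!(GroupState::Working.start_steward_epoch(false), GroupState::Working);

    // Failed vote: straight back to Working.
    assert_eq!(GroupState::Voting.complete_voting(false), GroupState::Working);
    println!("all transitions ok");
}
```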
//! ## Message Flow
//!
//! ### Regular Messages
@@ -86,6 +92,7 @@
//! - **Waku**: Decentralized messaging protocol
//! - **Alloy**: Ethereum wallet and signing

use alloy::primitives::{Address, PrimitiveSignature};
use ecies::{decrypt, encrypt};
use libsecp256k1::{sign, verify, Message, PublicKey, SecretKey, Signature as libSignature};
use rand::thread_rng;
@@ -94,14 +101,14 @@ use std::{
    collections::HashSet,
    sync::{Arc, Mutex},
};
use tokio::sync::{mpsc::Sender, RwLock};
use waku_bindings::{WakuContentTopic, WakuMessage};

use ds::waku_actor::WakuMessageToSend;
use error::{GroupError, MessageError};

pub mod action_handlers;
pub mod consensus;
pub mod error;
pub mod group;
pub mod message;
@@ -115,16 +122,19 @@ pub mod ws_actor;
pub mod protos {
    pub mod messages {
        pub mod v1 {
            pub mod consensus {
                pub mod v1 {
                    include!(concat!(env!("OUT_DIR"), "/consensus.v1.rs"));
                }
            }
            include!(concat!(env!("OUT_DIR"), "/de_mls.messages.v1.rs"));
        }
    }
}

pub struct AppState {
    pub waku_node: Sender<WakuMessageToSend>,
    pub rooms: Mutex<HashSet<String>>,
    pub content_topics: Arc<RwLock<Vec<WakuContentTopic>>>,
    pub pubsub: tokio::sync::broadcast::Sender<WakuMessage>,
}

pub fn verify_message(
    message: &[u8],
    signature: &[u8],
    public_key: &[u8],
) -> Result<bool, MessageError> {
    const COMPRESSED_PUBLIC_KEY_SIZE: usize = 33;

    let digest = sha256::Hash::hash(message);
    let msg = Message::parse(&digest.to_byte_array());
    let signature = libSignature::parse_der(signature)?;

    let mut pub_key_bytes: [u8; COMPRESSED_PUBLIC_KEY_SIZE] = [0; COMPRESSED_PUBLIC_KEY_SIZE];
    pub_key_bytes[..].copy_from_slice(public_key);
    let public_key = PublicKey::parse_compressed(&pub_key_bytes)?;
    Ok(verify(&msg, &signature, &public_key))
}
pub fn decrypt_message(message: &[u8], secret_key: SecretKey) -> Result<Vec<u8>, MessageError> {
    // ...
}

/// Check if a content topic exists in a list of topics or if the list is empty
pub async fn match_content_topic(
    content_topics: &Arc<RwLock<Vec<WakuContentTopic>>>,
    topic: &WakuContentTopic,
) -> bool {
    let locked_topics = content_topics.read().await;
    locked_topics.is_empty() || locked_topics.iter().any(|t| t == topic)
}
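The filter semantics above are worth spelling out: an empty subscription list matches every topic, otherwise only exact matches pass. A synchronous stand-in shows the logic (the real function is async and holds the list in a `tokio::sync::RwLock`; the plain-string signature here is an illustrative simplification):

```rust
// Simplified stand-in for the async RwLock-backed topic filter:
// an empty subscription list means "match everything".
fn match_content_topic(subscribed: &[String], topic: &str) -> bool {
    subscribed.is_empty() || subscribed.iter().any(|t| t == topic)
}

fn main() {
    // No subscriptions yet: every topic is relevant.
    assert!(match_content_topic(&[], "any/topic"));
    // With subscriptions, only exact matches pass.
    let topics = vec!["group-a".to_string(), "group-b".to_string()];
    assert!(match_content_topic(&topics, "group-b"));
    assert!(!match_content_topic(&topics, "group-c"));
    println!("topic matching ok");
}
```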

pub trait LocalSigner {
    fn local_sign_message(
        &self,
        message: &[u8],
    ) -> impl std::future::Future<Output = Result<Vec<u8>, anyhow::Error>> + Send;

    fn address(&self) -> Address;
    fn address_string(&self) -> String;
    fn address_bytes(&self) -> Vec<u8>;
}

pub fn verify_vote_hash(
    signature: &[u8],
    public_key: &[u8],
    message: &[u8],
) -> Result<bool, MessageError> {
    let signature_bytes: [u8; 65] =
        signature
            .try_into()
            .map_err(|_| MessageError::MismatchedLength {
                expect: 65,
                actual: signature.len(),
            })?;
    let signature = PrimitiveSignature::from_raw_array(&signature_bytes)?;
    let address = signature.recover_address_from_msg(message)?;
    let address_bytes = address.as_slice().to_vec();
    Ok(address_bytes == public_key)
}

#[cfg(test)]
mod tests {
    use alloy::signers::local::PrivateKeySigner;

    use crate::{verify_vote_hash, LocalSigner};

    use super::{decrypt_message, encrypt_message, generate_keypair, sign_message, verify_message};

    #[test]
@@ -191,17 +237,32 @@ mod tests {
        let message = b"Hello, world!";
        let (public_key, secret_key) = generate_keypair();
        let signature = sign_message(message, &secret_key);
        let verified = verify_message(message, &signature, &public_key.serialize_compressed())
            .expect("Failed to verify message");
        assert!(verified);
    }

    #[test]
    fn test_encrypt_decrypt_message() {
        let message = b"Hello, world!";
        let (public_key, secret_key) = generate_keypair();
        let encrypted = encrypt_message(message, &public_key.serialize_compressed())
            .expect("Failed to encrypt message");
        let decrypted = decrypt_message(&encrypted, secret_key).expect("Failed to decrypt message");
        assert_eq!(message, decrypted.as_slice());
    }

    #[tokio::test]
    async fn test_local_signer() {
        let signer = PrivateKeySigner::random();
        let message = b"Hello, world!";
        let signature = signer
            .local_sign_message(message)
            .await
            .expect("Failed to sign message");

        let verified = verify_vote_hash(&signature, &signer.address_bytes(), message)
            .expect("Failed to verify vote hash");
        assert!(verified);
    }
}

src/main.rs
@@ -14,7 +14,7 @@ use std::{
     net::SocketAddr,
     sync::{Arc, Mutex},
 };
-use tokio::sync::mpsc::channel;
+use tokio::sync::{mpsc::channel, RwLock};
 use tokio_util::sync::CancellationToken;
 use tower_http::cors::{Any, CorsLayer};
 use waku_bindings::{Multiaddr, WakuMessage};
@@ -39,15 +39,18 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
     let peer_addresses = std::env::var("PEER_ADDRESSES")
         .map(|val| {
             val.split(",")
-                .map(|addr| addr.parse::<Multiaddr>().unwrap())
+                .map(|addr| {
+                    addr.parse::<Multiaddr>()
+                        .expect("Failed to parse peer address")
+                })
                 .collect()
         })
         .expect("PEER_ADDRESSES is not set");

-    let content_topics = Arc::new(Mutex::new(Vec::new()));
+    let content_topics = Arc::new(RwLock::new(Vec::new()));

     let (waku_sender, mut waku_receiver) = channel::<WakuMessage>(100);
-    let (sender, mut reciever) = channel::<WakuMessageToSend>(100);
+    let (sender, mut receiver) = channel::<WakuMessageToSend>(100);
     let (tx, _) = tokio::sync::broadcast::channel(100);

     let app_state = Arc::new(AppState {
@@ -76,7 +79,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
     info!("Starting waku node");
     tokio::task::block_in_place(move || {
         tokio::runtime::Handle::current().block_on(async move {
-            run_waku_node(node_port, Some(peer_addresses), waku_sender, &mut reciever).await
+            run_waku_node(node_port, Some(peer_addresses), waku_sender, &mut receiver).await
         })
     })?;

@@ -138,7 +141,13 @@ async fn handle_socket(socket: WebSocket, state: Arc<AppState>) {
                     group_id: connect.group_id.clone(),
                     should_create_group: connect.should_create,
                 });
-                let mut rooms = state.rooms.lock().unwrap();
+                let mut rooms = match state.rooms.lock() {
+                    Ok(rooms) => rooms,
+                    Err(e) => {
+                        log::error!("Failed to acquire rooms lock: {e}");
+                        continue;
+                    }
+                };
                 if !rooms.contains(&connect.group_id.clone()) {
                     rooms.insert(connect.group_id.clone());
                 }
@@ -153,9 +162,13 @@ async fn handle_socket(socket: WebSocket, state: Arc<AppState>) {
         }
     }

-    let user_actor = create_user_instance(main_loop_connection.unwrap().clone(), state.clone())
-        .await
-        .expect("Failed to start main loop");
+    let user_actor = create_user_instance(
+        main_loop_connection.unwrap().clone(),
+        state.clone(),
+        ws_actor.clone(),
+    )
+    .await
+    .expect("Failed to create user instance");

     let user_actor_clone = user_actor.clone();
     let state_clone = state.clone();
@@ -168,11 +181,11 @@ async fn handle_socket(socket: WebSocket, state: Arc<AppState>) {
         while let Ok(msg) = user_waku_receiver.recv().await {
             let content_topic = msg.content_topic.clone();
             // Check if message belongs to a relevant topic
-            if !match_content_topic(&state_clone.content_topics, &content_topic) {
+            if !match_content_topic(&state_clone.content_topics, &content_topic).await {
                 error!("Content topic not match: {content_topic:?}");
                 return;
             };
-            info!("Received message from waku that matches content topic");
+            info!("[handle_socket]: Received message from waku that matches content topic");
             let res = handle_user_actions(
                 msg,
                 state_clone.waku_node.clone(),
@@ -191,7 +204,7 @@ async fn handle_socket(socket: WebSocket, state: Arc<AppState>) {
     let user_ref_clone = user_actor.clone();
     let mut recv_messages_ws = {
         tokio::spawn(async move {
-            info!("Running recieve messages from websocket");
+            info!("Running receive messages from websocket");
             while let Some(Ok(Message::Text(text))) = ws_receiver.next().await {
                 let res = handle_ws_action(
                     RawWsMessage { message: text },
@@ -213,7 +226,7 @@ async fn handle_socket(socket: WebSocket, state: Arc<AppState>) {
             recv_messages_ws.abort();
         }
         _ = (&mut recv_messages_ws) => {
-            info!("recieve messages from websocket finished");
+            info!("receive messages from websocket finished");
             recv_messages_ws.abort();
         }
         _ = cancel_token.cancelled() => {
@@ -227,7 +240,17 @@ async fn handle_socket(socket: WebSocket, state: Arc<AppState>) {
     }

 async fn get_rooms(State(state): State<Arc<AppState>>) -> String {
-    let rooms = state.rooms.lock().unwrap();
+    let rooms = match state.rooms.lock() {
+        Ok(rooms) => rooms,
+        Err(e) => {
+            log::error!("Failed to acquire rooms lock: {e}");
+            return json!({
+                "status": "Error acquiring rooms lock",
+                "rooms": []
+            })
+            .to_string();
+        }
+    };
     let vec = rooms.iter().collect::<Vec<&String>>();
     match vec.len() {
         0 => json!({

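The `rooms` changes above replace `.lock().unwrap()` with explicit handling of a poisoned `std::sync::Mutex`. A stdlib-only sketch of the same pattern; `list_rooms` is a hypothetical stand-in for the listing done in `get_rooms`:

```rust
use std::collections::HashSet;
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for the rooms listing in get_rooms: on a poisoned lock,
// log and recover the inner data instead of panicking.
fn list_rooms(rooms: &Arc<Mutex<HashSet<String>>>) -> Vec<String> {
    match rooms.lock() {
        Ok(guard) => guard.iter().cloned().collect(),
        Err(poisoned) => {
            eprintln!("Failed to acquire rooms lock: {poisoned}");
            // PoisonError still carries the guard; recover it rather than crash.
            poisoned.into_inner().iter().cloned().collect()
        }
    }
}

fn main() {
    let rooms: Arc<Mutex<HashSet<String>>> = Arc::new(Mutex::new(HashSet::new()));
    rooms.lock().unwrap().insert("group-1".to_string());

    // A thread that panics while holding the lock poisons it for later callers.
    let poisoner = Arc::clone(&rooms);
    let _ = thread::spawn(move || {
        let _guard = poisoner.lock().unwrap();
        panic!("simulated handler crash while holding the lock");
    })
    .join();

    // .unwrap() here would propagate the poison as a second panic.
    let listing = list_rooms(&rooms);
    assert_eq!(listing, vec!["group-1".to_string()]);
}
```

In `get_rooms` the handler returns a JSON error payload instead of recovering the data; either way the server stays up after one handler panic.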
src/message.rs
@@ -16,63 +16,59 @@
 //! - [`AppMessage`]
 //! - [`ConversationMessage`]
 //! - [`BatchProposalsMessage`]
+//! - [`BanRequest`]
+//! - [`VotingProposal`]
+//! - [`UserVote`]
 //!

 use crate::{
+    consensus::v1::{Proposal, Vote},
     encrypt_message,
-    protos::messages::v1::{app_message, UserKeyPackage},
+    protos::messages::v1::{app_message, UserKeyPackage, UserVote, VotingProposal},
     verify_message, MessageError,
 };
-// use log::info;
 use openmls::prelude::{KeyPackage, MlsMessageOut};
 use serde::{Deserialize, Serialize};
+use std::fmt::Display;

-// use crate::protos::messages::v1::{
-//     welcome_message, GroupAnnouncement, InvitationToJoin, WelcomeMessage, AppMessage, ConversationMessage, UserKeyPackage,
-// };
 use crate::protos::messages::v1::{
-    welcome_message, AppMessage, BatchProposalsMessage, ConversationMessage, GroupAnnouncement,
-    InvitationToJoin, WelcomeMessage,
+    welcome_message, AppMessage, BanRequest, BatchProposalsMessage, ConversationMessage,
+    GroupAnnouncement, InvitationToJoin, WelcomeMessage,
 };

-// WELCOME MESSAGE SUBTOPIC
+// Message type constants for consistency and type safety
+pub mod message_types {
+    pub const CONVERSATION_MESSAGE: &str = "ConversationMessage";
+    pub const BATCH_PROPOSALS_MESSAGE: &str = "BatchProposalsMessage";
+    pub const BAN_REQUEST: &str = "BanRequest";
+    pub const PROPOSAL: &str = "Proposal";
+    pub const VOTE: &str = "Vote";
+    pub const VOTING_PROPOSAL: &str = "VotingProposal";
+    pub const USER_VOTE: &str = "UserVote";
+    pub const UNKNOWN: &str = "Unknown";
+}

-pub fn wrap_group_announcement_in_welcome_msg(
-    group_announcement: GroupAnnouncement,
-) -> WelcomeMessage {
-    WelcomeMessage {
-        payload: Some(welcome_message::Payload::GroupAnnouncement(
-            group_announcement,
-        )),
+/// Trait for getting message type as a string constant
+pub trait MessageType {
+    fn message_type(&self) -> &'static str;
+}
+
+impl MessageType for app_message::Payload {
+    fn message_type(&self) -> &'static str {
+        use message_types::*;
+        match self {
+            app_message::Payload::ConversationMessage(_) => CONVERSATION_MESSAGE,
+            app_message::Payload::BatchProposalsMessage(_) => BATCH_PROPOSALS_MESSAGE,
+            app_message::Payload::BanRequest(_) => BAN_REQUEST,
+            app_message::Payload::Proposal(_) => PROPOSAL,
+            app_message::Payload::Vote(_) => VOTE,
+            app_message::Payload::VotingProposal(_) => VOTING_PROPOSAL,
+            app_message::Payload::UserVote(_) => USER_VOTE,
+        }
+    }
+}

-pub fn wrap_user_kp_into_welcome_msg(
-    encrypted_kp: Vec<u8>,
-) -> Result<WelcomeMessage, MessageError> {
-    let user_key_package = UserKeyPackage {
-        encrypt_kp: encrypted_kp,
-    };
-    let welcome_message = WelcomeMessage {
-        payload: Some(welcome_message::Payload::UserKeyPackage(user_key_package)),
-    };
-    Ok(welcome_message)
-}
-pub fn wrap_invitation_into_welcome_msg(
-    mls_message: MlsMessageOut,
-) -> Result<WelcomeMessage, MessageError> {
-    let mls_bytes = mls_message.to_bytes()?;
-    let invitation = InvitationToJoin {
-        mls_message_out_bytes: mls_bytes,
-    };
-
-    let welcome_message = WelcomeMessage {
-        payload: Some(welcome_message::Payload::InvitationToJoin(invitation)),
-    };
-    Ok(welcome_message)
-}
-
+// WELCOME MESSAGE SUBTOPIC
 impl GroupAnnouncement {
     pub fn new(pub_key: Vec<u8>, signature: Vec<u8>) -> Self {
         GroupAnnouncement {
@@ -87,47 +83,45 @@ impl GroupAnnouncement {
     }

     pub fn encrypt(&self, kp: KeyPackage) -> Result<Vec<u8>, MessageError> {
         // TODO: replace json in encryption and decryption
         let key_package = serde_json::to_vec(&kp)?;
         let encrypted = encrypt_message(&key_package, &self.eth_pub_key)?;
         Ok(encrypted)
     }
 }

-// APPLICATION MESSAGE SUBTOPIC
-
-pub fn wrap_conversation_message_into_application_msg(
-    message: Vec<u8>,
-    sender: String,
-    group_name: String,
-) -> AppMessage {
-    AppMessage {
-        payload: Some(app_message::Payload::ConversationMessage(
-            ConversationMessage {
-                message,
-                sender,
-                group_name,
-            },
-        )),
+impl From<GroupAnnouncement> for WelcomeMessage {
+    fn from(group_announcement: GroupAnnouncement) -> Self {
+        WelcomeMessage {
+            payload: Some(welcome_message::Payload::GroupAnnouncement(
+                group_announcement,
+            )),
+        }
+    }
+}

-pub fn wrap_batch_proposals_into_application_msg(
-    group_name: String,
-    mls_proposals: Vec<Vec<u8>>,
-    commit_message: Vec<u8>,
-) -> AppMessage {
-    AppMessage {
-        payload: Some(app_message::Payload::BatchProposalsMessage(
-            BatchProposalsMessage {
-                group_name: group_name.into_bytes(),
-                mls_proposals,
-                commit_message,
-            },
-        )),
+impl TryFrom<MlsMessageOut> for WelcomeMessage {
+    type Error = MessageError;
+    fn try_from(mls_message: MlsMessageOut) -> Result<Self, MessageError> {
+        let mls_bytes = mls_message.to_bytes()?;
+        let invitation = InvitationToJoin {
+            mls_message_out_bytes: mls_bytes,
+        };
+
+        Ok(WelcomeMessage {
+            payload: Some(welcome_message::Payload::InvitationToJoin(invitation)),
+        })
+    }
+}
+
+impl From<UserKeyPackage> for WelcomeMessage {
+    fn from(user_key_package: UserKeyPackage) -> Self {
+        WelcomeMessage {
+            payload: Some(welcome_message::Payload::UserKeyPackage(user_key_package)),
+        }
+    }
+}
+
+// APP MESSAGE SUBTOPIC
 impl Display for AppMessage {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         match &self.payload {
@@ -139,15 +133,123 @@ impl Display for AppMessage {
                     String::from_utf8_lossy(&conversation_message.message)
                 )
             }
-            _ => write!(f, "Invalid message"),
+            Some(app_message::Payload::BatchProposalsMessage(batch_msg)) => {
+                write!(
+                    f,
+                    "BatchProposalsMessage: {} proposals for group {}",
+                    batch_msg.mls_proposals.len(),
+                    String::from_utf8_lossy(&batch_msg.group_name)
+                )
+            }
+            Some(app_message::Payload::BanRequest(ban_request)) => {
+                write!(
+                    f,
+                    "SYSTEM: {} wants to ban {}",
+                    ban_request.requester, ban_request.user_to_ban
+                )
+            }
+            Some(app_message::Payload::Proposal(proposal)) => {
+                write!(
+                    f,
+                    "Proposal: ID {} with {} votes for {} voters",
+                    proposal.proposal_id,
+                    proposal.votes.len(),
+                    proposal.expected_voters_count
+                )
+            }
+            Some(app_message::Payload::Vote(vote)) => {
+                write!(
+                    f,
+                    "Vote: {} for proposal {} ({})",
+                    if vote.vote { "YES" } else { "NO" },
+                    vote.proposal_id,
+                    vote.vote_id
+                )
+            }
+            Some(app_message::Payload::VotingProposal(voting_proposal)) => {
+                write!(
+                    f,
+                    "VotingProposal: ID {} for group {}",
+                    voting_proposal.proposal_id, voting_proposal.group_name
+                )
+            }
+            Some(app_message::Payload::UserVote(user_vote)) => {
+                write!(
+                    f,
+                    "UserVote: {} for proposal {} in group {}",
+                    if user_vote.vote { "YES" } else { "NO" },
+                    user_vote.proposal_id,
+                    user_vote.group_name
+                )
+            }
+            None => write!(f, "Empty message"),
         }
     }
 }

+impl From<VotingProposal> for AppMessage {
+    fn from(voting_proposal: VotingProposal) -> Self {
+        AppMessage {
+            payload: Some(app_message::Payload::VotingProposal(voting_proposal)),
+        }
+    }
+}
+
+impl From<UserVote> for AppMessage {
+    fn from(user_vote: UserVote) -> Self {
+        AppMessage {
+            payload: Some(app_message::Payload::UserVote(user_vote)),
+        }
+    }
+}
+
+impl From<ConversationMessage> for AppMessage {
+    fn from(conversation_message: ConversationMessage) -> Self {
+        AppMessage {
+            payload: Some(app_message::Payload::ConversationMessage(
+                conversation_message,
+            )),
+        }
+    }
+}
+
+impl From<BatchProposalsMessage> for AppMessage {
+    fn from(batch_proposals_message: BatchProposalsMessage) -> Self {
+        AppMessage {
+            payload: Some(app_message::Payload::BatchProposalsMessage(
+                batch_proposals_message,
+            )),
+        }
+    }
+}
+
+impl From<BanRequest> for AppMessage {
+    fn from(ban_request: BanRequest) -> Self {
+        AppMessage {
+            payload: Some(app_message::Payload::BanRequest(ban_request)),
+        }
+    }
+}
+
+impl From<Proposal> for AppMessage {
+    fn from(proposal: Proposal) -> Self {
+        AppMessage {
+            payload: Some(app_message::Payload::Proposal(proposal)),
+        }
+    }
+}
+
+impl From<Vote> for AppMessage {
+    fn from(vote: Vote) -> Self {
+        AppMessage {
+            payload: Some(app_message::Payload::Vote(vote)),
+        }
+    }
+}
 /// This struct is used to represent the message from the user that we got from web socket
 #[derive(Deserialize, Debug, PartialEq, Serialize)]
 pub struct UserMessage {
-    pub message: String,
+    pub message: Vec<u8>,
     pub group_id: String,
 }


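The refactor above replaces the `wrap_*` helper functions with `From`/`TryFrom` impls and adds a `MessageType` trait that dispatches on string constants. A stdlib-only sketch of that pattern, using simplified stand-ins for the generated protobuf payload types (only two variants are modeled here):

```rust
use std::fmt;

// Simplified stand-ins for the generated payload variants.
struct ConversationMessage { message: Vec<u8>, sender: String }
struct BanRequest { user_to_ban: String, requester: String }

enum Payload {
    Conversation(ConversationMessage),
    Ban(BanRequest),
}

struct AppMessage { payload: Option<Payload> }

// Message type constants, as in the message_types module.
const CONVERSATION_MESSAGE: &str = "ConversationMessage";
const BAN_REQUEST: &str = "BanRequest";

trait MessageType {
    fn message_type(&self) -> &'static str;
}

impl MessageType for Payload {
    fn message_type(&self) -> &'static str {
        match self {
            Payload::Conversation(_) => CONVERSATION_MESSAGE,
            Payload::Ban(_) => BAN_REQUEST,
        }
    }
}

// A From impl replaces the old bespoke wrap_* helper: the call site
// reads AppMessage::from(ban_request) for every payload kind.
impl From<BanRequest> for AppMessage {
    fn from(b: BanRequest) -> Self {
        AppMessage { payload: Some(Payload::Ban(b)) }
    }
}

impl fmt::Display for AppMessage {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match &self.payload {
            Some(Payload::Ban(b)) => {
                write!(f, "SYSTEM: {} wants to ban {}", b.requester, b.user_to_ban)
            }
            Some(Payload::Conversation(c)) => {
                write!(f, "{}: {}", c.sender, String::from_utf8_lossy(&c.message))
            }
            None => write!(f, "Empty message"),
        }
    }
}

fn main() {
    let msg = AppMessage::from(BanRequest {
        user_to_ban: "0xabc".into(),
        requester: "0xdef".into(),
    });
    assert_eq!(msg.payload.as_ref().unwrap().message_type(), BAN_REQUEST);
    println!("{msg}");
}
```

The string constants matter because the state machine's permission checks (see the steward state module) match on these names rather than on the enum variants directly.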
@@ -4,13 +4,26 @@ syntax = "proto3";

 package de_mls.messages.v1;

+import "messages/v1/consensus.proto";
+
 message AppMessage {
   oneof payload {
     ConversationMessage conversation_message = 1;
     BatchProposalsMessage batch_proposals_message = 2;
+    BanRequest ban_request = 3;
+    consensus.v1.Proposal proposal = 4;
+    consensus.v1.Vote vote = 5;
+    VotingProposal voting_proposal = 6;
+    UserVote user_vote = 7;
   }
 }

+message BanRequest {
+  string user_to_ban = 1;
+  string requester = 2;
+  string group_name = 3;
+}
+
 message ConversationMessage {
   bytes message = 1;
   string sender = 2;
@@ -21,4 +34,17 @@ message BatchProposalsMessage {
   bytes group_name = 1;
   repeated bytes mls_proposals = 2; // Individual MLS proposal messages
   bytes commit_message = 3; // MLS commit message
-}
+}
+
+// New message types for voting
+message VotingProposal {
+  uint32 proposal_id = 1;
+  string group_name = 2;
+  string payload = 3;
+}
+
+message UserVote {
+  uint32 proposal_id = 1;
+  bool vote = 2;
+  string group_name = 3;
+}

@@ -5,45 +5,26 @@ package consensus.v1;

 // Proposal represents a consensus proposal that needs voting
 message Proposal {
   string name = 10;                  // Proposal name
-  int32 proposal_id = 11;            // Unique identifier of the proposal
-  bytes proposal_owner = 12;         // Public key of the creator
-  repeated Vote votes = 13;          // Vote list in the proposal
-  int32 expected_voters_count = 14;  // Maximum number of distinct voters
-  int32 round = 15;                  // Number of Votes
-  int64 timestamp = 16;              // Creation time of proposal
-  int64 expiration_time = 17;        // The time interval that the proposal is active.
-  bool liveness_criteria_yes = 18;   // Shows how managing the silent peers vote
+  string payload = 11;               // Payload of the proposal
+  uint32 proposal_id = 12;           // Unique identifier of the proposal
+  bytes proposal_owner = 13;         // Public key of the creator
+  repeated Vote votes = 14;          // Vote list in the proposal
+  uint32 expected_voters_count = 15; // Maximum number of distinct voters
+  uint32 round = 16;                 // Number of Votes
+  uint64 timestamp = 17;             // Creation time of proposal
+  uint64 expiration_time = 18;       // The time interval that the proposal is active.
+  bool liveness_criteria_yes = 19;   // Shows how managing the silent peers vote
 }

 // Vote represents a single vote in a consensus proposal
 message Vote {
-  int32 vote_id = 20;        // Unique identifier of the vote
+  uint32 vote_id = 20;       // Unique identifier of the vote
   bytes vote_owner = 21;     // Voter's public key
-  int64 timestamp = 22;      // Time when the vote was cast
-  bool vote = 23;            // Vote bool value (true/false)
-  bytes parent_hash = 24;    // Hash of previous owner's Vote
-  bytes received_hash = 25;  // Hash of previous received Vote
-  bytes vote_hash = 26;      // Hash of all previously defined fields in Vote
-  bytes signature = 27;      // Signature of vote_hash
+  uint32 proposal_id = 22;   // Proposal ID (for the vote)
+  uint64 timestamp = 23;     // Time when the vote was cast
+  bool vote = 24;            // Vote bool value (true/false)
+  bytes parent_hash = 25;    // Hash of previous owner's Vote
+  bytes received_hash = 26;  // Hash of previous received Vote
+  bytes vote_hash = 27;      // Hash of all previously defined fields in Vote
+  bytes signature = 28;      // Signature of vote_hash
 }

-// ConsensusMessage wraps consensus-related messages
-message ConsensusMessage {
-  oneof payload {
-    Proposal proposal = 1;
-    Vote vote = 2;
-    ConsensusResult result = 3;
-  }
-}
-
-// ConsensusResult represents the final result of a consensus round
-message ConsensusResult {
-  int32 proposal_id = 1;            // ID of the proposal this result belongs to
-  bool consensus_reached = 2;       // Whether consensus was reached
-  bool final_decision = 3;          // The final decision (true/false)
-  int32 total_votes = 4;            // Total number of votes received
-  int32 yes_votes = 5;              // Number of yes votes
-  int32 no_votes = 6;               // Number of no votes
-  int64 consensus_time = 7;         // Time when consensus was reached
-  repeated bytes participants = 8;  // List of participant public keys
-}

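The `Vote` field comments above describe a hash chain: `vote_hash` commits to the vote's own fields plus `parent_hash` (the owner's previous vote) and `received_hash` (the previous received vote), and `signature` signs `vote_hash`. A stdlib-only sketch of that chaining; `DefaultHasher` is a stand-in for the real cryptographic hash, and signatures are omitted:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the real cryptographic hash taken over the Vote fields.
fn vote_hash(vote_id: u32, proposal_id: u32, vote: bool, parent_hash: u64, received_hash: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (vote_id, proposal_id, vote, parent_hash, received_hash).hash(&mut h);
    h.finish()
}

// Mirrors the Vote message fields relevant to chaining (signature omitted).
struct Vote {
    vote_id: u32,
    proposal_id: u32,
    vote: bool,
    parent_hash: u64,   // hash of this owner's previous Vote
    received_hash: u64, // hash of the previous received Vote
    vote_hash: u64,     // hash over all preceding fields
}

fn make_vote(vote_id: u32, proposal_id: u32, vote: bool, parent_hash: u64, received_hash: u64) -> Vote {
    let h = vote_hash(vote_id, proposal_id, vote, parent_hash, received_hash);
    Vote { vote_id, proposal_id, vote, parent_hash, received_hash, vote_hash: h }
}

fn main() {
    let v1 = make_vote(1, 7, true, 0, 0);
    // The next vote commits to v1 through received_hash, forming the chain.
    let v2 = make_vote(2, 7, true, 0, v1.vote_hash);
    assert_eq!(v2.received_hash, v1.vote_hash);

    // Recomputing the hash detects tampering with any committed field.
    let recomputed = vote_hash(v2.vote_id, v2.proposal_id, v2.vote, v2.parent_hash, v2.received_hash);
    assert_eq!(recomputed, v2.vote_hash);
    let tampered = vote_hash(v2.vote_id, v2.proposal_id, !v2.vote, v2.parent_hash, v2.received_hash);
    assert_ne!(tampered, v2.vote_hash);
}
```

In the real protocol the hash would be a collision-resistant function over the serialized fields, and `signature` over `vote_hash` binds the whole chain position to the voter's key.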
@@ -6,43 +6,126 @@
 //!
 //! # States
 //!
-//! - **Working**: Normal operation state where users can send messages freely
-//! - **Waiting**: Steward epoch state where only steward can send messages with proposals
-//! - **Voting**: Transitional state during voting process where no messages are allowed
+//! - **Working**: Normal operation state where users can send any message freely
+//! - **Waiting**: Steward epoch state where only steward can send BATCH_PROPOSALS_MESSAGE (if proposals exist)
+//! - **Voting**: Voting state where everyone can send VOTE/USER_VOTE, only steward can send VOTING_PROPOSAL/PROPOSAL
+//! - **ConsensusReached**: Consensus achieved, waiting for steward to send batch proposals
+//! - **ConsensusFailed**: Consensus failed due to timeout or other reasons
 //!
 //! # State Transitions
 //!
 //! ```text
-//! Working --start_steward_epoch()--> Waiting (if proposals exist)
-//! Working --start_steward_epoch()--> Working (if no proposals)
-//! Waiting --start_voting()---------> Voting
-//! Voting --complete_voting(true)--> Waiting (vote passed)
-//! Voting --complete_voting(false)-> Working (vote failed)
-//! Waiting --apply_proposals_and_complete()--> Working
+//! Working -- start_steward_epoch_with_validation() --> Waiting (if proposals exist)
+//! Working -- start_steward_epoch_with_validation() --> Working (if no proposals, returns 0)
+//! Waiting -- start_voting() --> Voting
+//! Voting -- complete_voting(true) --> ConsensusReached (vote passed)
+//! Voting -- complete_voting(false) --> Working (vote failed)
+//! ConsensusReached -- start_waiting_after_consensus() --> Waiting (steward sends batch proposals)
+//! Waiting -- handle_yes_vote() --> Working (after successful vote and proposal application)
+//! ConsensusFailed -- recover_from_consensus_failure() --> Working (recovery)
 //! ```
 //!
+//! # Message Type Permissions by State
+//!
+//! ## Working State
+//! - **All users**: Can send any message type
+//!
+//! ## Waiting State
+//! - **Steward with proposals**: Can send BATCH_PROPOSALS_MESSAGE
+//! - **All users**: All other message types blocked
+//!
+//! ## Voting State
+//! - **All users**: Can send VOTE and USER_VOTE
+//! - **Steward only**: Can send VOTING_PROPOSAL and PROPOSAL
+//! - **All users**: All other message types blocked
+//!
+//! ## ConsensusReached State
+//! - **Steward with proposals**: Can send BATCH_PROPOSALS_MESSAGE
+//! - **All users**: All other message types blocked
+//!
+//! ## ConsensusFailed State
+//! - **All users**: No messages allowed
+//!
+//! # Steward Flow Scenarios
+//!
+//! ## Scenario 1: No Proposals Initially
+//! ```text
+//! Working --start_steward_epoch_with_validation()--> Working (stays in Working, returns 0)
+//! ```
+//!
+//! ## Scenario 2: Successful Vote with Proposals
+//! **Steward:**
+//! ```text
+//! Working --start_steward_epoch_with_validation()--> Waiting --start_voting()--> Voting
+//!         --complete_voting(true)--> ConsensusReached --start_waiting_after_consensus()--> Waiting
+//!         --handle_yes_vote()--> Working
+//! ```
+//! **Non-Steward:**
+//! ```text
+//! Working --steward_starts_epoch()--> Waiting --start_voting()--> Voting
+//!         --start_consensus_reached()--> ConsensusReached --start_waiting()--> Waiting
+//!         --handle_yes_vote()--> Working
+//! ```
+//!
+//! ## Scenario 3: Failed Vote
+//! **Steward:**
+//! ```text
+//! Working --start_steward_epoch_with_validation()--> Waiting --start_voting()--> Voting
+//!         --complete_voting(false)--> Working
+//! ```
+//! **Non-Steward:**
+//! ```text
+//! Working --steward_starts_epoch()--> Waiting --start_voting()--> Voting
+//!         --start_consensus_reached()--> ConsensusReached --start_consensus_failed()--> ConsensusFailed
+//!         --recover_from_consensus_failure()--> Working
+//! ```
+//!
+//! # Key Methods
+//!
+//! - `start_steward_epoch_with_validation()`: Main entry point for starting steward epochs with proposal validation
+//! - `start_voting()`: Transitions to voting state from any non-voting state
+//! - `complete_voting(vote_result)`: Handles voting completion and transitions based on result
+//! - `handle_yes_vote()`: Applies proposals and returns to working state after successful vote
+//! - `start_waiting_after_consensus()`: Transitions from ConsensusReached to Waiting for batch proposal processing
+//! - `recover_from_consensus_failure()`: Recovers from consensus failure back to Working state
+//!
+//! # Proposal Management
+//!
+//! - Proposals are collected in the current epoch and moved to voting epoch when steward epoch starts
+//! - After successful voting, proposals are applied and cleared from voting epoch
+//! - Failed votes result in proposals being discarded and return to working state

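The documented transitions can be sketched as a small state machine. This is a stdlib-only illustration of the flow described above; the real `GroupStateMachine` additionally tracks the steward, proposals, and richer error types:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum GroupState { Working, Waiting, Voting, ConsensusReached, ConsensusFailed }

struct StateMachine { state: GroupState }

impl StateMachine {
    fn start_steward_epoch(&mut self, has_proposals: bool) {
        // Working -> Waiting only if there are proposals; otherwise stay in Working.
        if self.state == GroupState::Working && has_proposals {
            self.state = GroupState::Waiting;
        }
    }

    fn start_voting(&mut self) -> Result<(), &'static str> {
        // Allowed from any state except Voting (prevents double voting).
        if self.state == GroupState::Voting {
            return Err("invalid transition: already Voting");
        }
        self.state = GroupState::Voting;
        Ok(())
    }

    fn complete_voting(&mut self, vote_result: bool) -> Result<(), &'static str> {
        if self.state != GroupState::Voting {
            return Err("invalid transition: not Voting");
        }
        // YES -> ConsensusReached (await batch proposals); NO -> Working.
        self.state = if vote_result { GroupState::ConsensusReached } else { GroupState::Working };
        Ok(())
    }
}

fn main() {
    let mut sm = StateMachine { state: GroupState::Working };
    sm.start_steward_epoch(false);
    assert_eq!(sm.state, GroupState::Working); // no proposals: stays in Working
    sm.start_steward_epoch(true);
    assert_eq!(sm.state, GroupState::Waiting);
    sm.start_voting().unwrap();
    sm.complete_voting(true).unwrap();
    assert_eq!(sm.state, GroupState::ConsensusReached);
}
```

Making invalid transitions return errors rather than silently changing state is what lets steward and non-steward peers stay in lockstep even when messages arrive out of order.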
 use std::fmt::Display;

+use log::info;
+
+use crate::message::message_types;
 use crate::steward::Steward;
 use crate::{steward::GroupUpdateRequest, GroupError};

 /// Represents the different states a group can be in during the steward epoch flow
 #[derive(Debug, Clone, PartialEq)]
 pub enum GroupState {
-    /// Normal operation state - users can send messages freely
+    /// Normal operation state - users can send any message freely
     Working,
-    /// Waiting state during steward epoch - only steward can send messages with proposals
+    /// Waiting state during steward epoch - only steward can send BATCH_PROPOSALS_MESSAGE
     Waiting,
-    /// Transitional state during voting process
+    /// Voting state - everyone can send VOTE/USER_VOTE, only steward can send VOTING_PROPOSAL/PROPOSAL
     Voting,
+    /// Consensus reached state - consensus achieved, waiting for steward to send batch proposals
+    ConsensusReached,
+    /// Consensus failed state - consensus failed due to timeout or other reasons
+    ConsensusFailed,
 }

 impl Display for GroupState {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         let state = match self {
-            GroupState::Working => "Working - Normal operation",
-            GroupState::Waiting => "Waiting - Steward epoch active",
-            GroupState::Voting => "Voting - Vote in progress",
+            GroupState::Working => "Working",
+            GroupState::Waiting => "Waiting",
+            GroupState::Voting => "Voting",
+            GroupState::ConsensusReached => "ConsensusReached",
+            GroupState::ConsensusFailed => "ConsensusFailed",
         };
         write!(f, "{state}")
     }
@@ -79,113 +162,190 @@ impl GroupStateMachine {
         self.state.clone()
     }

-    /// Check if a message can be sent in the current state
-    pub fn can_send_message(&self, is_steward: bool, has_proposals: bool) -> bool {
+    /// Check if a specific message type can be sent in the current state.
+    ///
+    /// ## Parameters:
+    /// - `is_steward`: Whether the sender is a steward
+    /// - `has_proposals`: Whether there are proposals available (for steward operations)
+    /// - `message_type`: The type of message to check
+    ///
+    /// ## Returns:
+    /// - `true` if the message can be sent, `false` otherwise
+    ///
+    /// ## Usage:
+    /// Used to enforce message type permissions based on current state and sender role.
+    /// This ensures proper state machine behavior and prevents invalid operations.
+    pub fn can_send_message_type(
+        &self,
+        is_steward: bool,
+        has_proposals: bool,
+        message_type: &str,
+    ) -> bool {
         match self.state {
-            GroupState::Working => true, // Anyone can send messages in working state
-            GroupState::Waiting => is_steward && has_proposals, // Only steward with proposals can send
-            GroupState::Voting => false, // No one can send messages during voting
+            GroupState::Working => true, // Anyone can send any message in working state
+            GroupState::Waiting => {
+                // In waiting state, only steward can send BATCH_PROPOSALS_MESSAGE
+                match message_type {
+                    message_types::BATCH_PROPOSALS_MESSAGE => is_steward && has_proposals,
+                    _ => false, // All other messages blocked during waiting
+                }
+            }
+            GroupState::Voting => {
+                // In voting state, only voting-related messages allowed
+                match message_type {
+                    message_types::VOTE => true,                  // Everyone can send votes
+                    message_types::USER_VOTE => true,             // Everyone can send user votes
+                    message_types::VOTING_PROPOSAL => is_steward, // Only steward can send voting proposals
+                    message_types::PROPOSAL => is_steward,        // Only steward can send proposals
+                    _ => false, // All other messages blocked during voting
+                }
+            }
+            GroupState::ConsensusReached => {
+                // In ConsensusReached state, only steward can send BATCH_PROPOSALS_MESSAGE
+                match message_type {
+                    message_types::BATCH_PROPOSALS_MESSAGE => is_steward && has_proposals,
+                    _ => false, // All other messages blocked during ConsensusReached
                }
+            }
+            GroupState::ConsensusFailed => {
+                // In ConsensusFailed state, no messages are allowed
+                false
+            }
         }
     }

-    /// Start a new steward epoch, transitioning to Waiting state
-    pub async fn start_steward_epoch(&mut self) -> Result<(), GroupError> {
-        println!(
-            "State machine: start_steward_epoch called, current state: {:?}",
-            self.state
-        );
-        if self.state != GroupState::Working {
-            println!(
-                "State machine: Invalid state transition from {:?} to Waiting",
-                self.state
-            );
-            return Err(GroupError::InvalidStateTransition);
-        }
-
-        self.state = GroupState::Waiting;
-        println!("State machine: Transitioned from Working to Waiting");
-
-        self.steward.as_mut().unwrap().start_new_epoch().await;
-        println!("State machine: Started new epoch");
-
-        Ok(())
-    }
-
-    /// Start voting on proposals for the current epoch, transitioning to Voting state
+    /// Start voting on proposals for the current epoch, transitioning to Voting state.
+    ///
+    /// ## Preconditions:
+    /// - Can be called from any state except Voting (prevents double voting)
+    ///
+    /// ## State Transition:
+    /// Any State (except Voting) → Voting
     pub fn start_voting(&mut self) -> Result<(), GroupError> {
-        println!(
-            "State machine: start_voting called, current state: {:?}",
-            self.state
-        );
-        if self.state != GroupState::Waiting {
-            println!(
-                "State machine: Invalid state transition from {:?} to Voting",
-                self.state
-            );
-            return Err(GroupError::InvalidStateTransition);
+        if self.state == GroupState::Voting {
+            return Err(GroupError::InvalidStateTransition {
+                from: self.state.to_string(),
+                to: "Voting".to_string(),
+            });
         }

         self.state = GroupState::Voting;
-        println!("State machine: Transitioned from Waiting to Voting");
         Ok(())
     }

-    /// Complete voting and update state based on result
+    /// Complete voting and update state based on result.
+    ///
+    /// ## Preconditions:
+    /// - Must be in Voting state
+    ///
+    /// ## State Transitions:
+    /// - Vote YES: Voting → ConsensusReached (consensus achieved, waiting for batch proposals)
+    /// - Vote NO: Voting → Working (proposals discarded)
     pub fn complete_voting(&mut self, vote_result: bool) -> Result<(), GroupError> {
-        println!(
-            "State machine: complete_voting called with result {}, current state: {:?}",
-            vote_result, self.state
-        );
         if self.state != GroupState::Voting {
-            println!(
-                "State machine: Invalid state transition from {:?} to {}",
-                self.state,
-                if vote_result { "Waiting" } else { "Working" }
-            );
-            return Err(GroupError::InvalidStateTransition);
+            return Err(GroupError::InvalidStateTransition {
+                from: self.state.to_string(),
+                to: if vote_result {
+                    "ConsensusReached"
+                } else {
+                    "Working"
+                }
+                .to_string(),
+            });
         }

         if vote_result {
-            // Vote passed - stay in waiting state for proposal application
-            self.state = GroupState::Waiting;
-            println!("State machine: Vote passed, staying in Waiting state");
+            // Vote YES - go to ConsensusReached state to wait for steward to send batch proposals
+            info!("[complete_voting]: Vote YES, transitioning to ConsensusReached state");
+            self.start_consensus_reached();
         } else {
-            // Vote failed - return to working state
-            self.state = GroupState::Working;
-            println!("State machine: Vote failed, returning to Working state");
+            // Vote NO - return to working state
+            info!("[complete_voting]: Vote NO, transitioning to Working state");
+            self.start_working();
         }

         Ok(())
     }

-    /// Apply proposals and complete the steward epoch
-    pub async fn remove_proposals_and_complete(&mut self) -> Result<(), GroupError> {
-        println!(
-            "State machine: remove_proposals_and_complete called, current state: {:?}",
-            self.state
-        );
-        if self.state != GroupState::Waiting {
-            println!(
-                "State machine: Invalid state transition from {:?} to Working",
-                self.state
-            );
-            return Err(GroupError::InvalidStateTransition);
-        }
+    /// Start consensus reached state (for non-steward peers after consensus).
+    ///
+    /// ## State Transition:
+    /// Any State → ConsensusReached
+    ///
+    /// ## Usage:
+    /// Called by non-steward peers when consensus is reached during voting.
+    /// This allows them to transition to the appropriate state for waiting
+    /// for the steward to process and send batch proposals.
+    pub fn start_consensus_reached(&mut self) {
+        self.state = GroupState::ConsensusReached;
+        info!("[start_consensus_reached] Transitioning to ConsensusReached state");
+    }

-        // Apply proposals for current epoch from steward
-        if let Some(steward) = &mut self.steward {
-            steward.empty_voting_epoch_proposals().await;
-        } else {
-            return Err(GroupError::StewardNotSet);
+    /// Start consensus failed state (for peers after consensus failure).
+    ///
+    /// ## State Transition:
+    /// Any State → ConsensusFailed
+    ///
+    /// ## Usage:
+    /// Called when consensus fails due to timeout or other reasons.
+    /// This state blocks all message types until recovery is initiated.
+    pub fn start_consensus_failed(&mut self) {
+        self.state = GroupState::ConsensusFailed;
+        info!("[start_consensus_failed] Transitioning to ConsensusFailed state");
+    }

+    /// Recover from consensus failure by transitioning back to Working state
+    pub fn recover_from_consensus_failure(&mut self) -> Result<(), GroupError> {
|
||||
if self.state != GroupState::ConsensusFailed {
|
||||
return Err(GroupError::InvalidStateTransition {
|
||||
from: self.state.to_string(),
|
||||
to: "Working".to_string(),
|
||||
});
|
||||
}
|
||||
|
||||
self.state = GroupState::Working;
|
||||
println!("State machine: Transitioned from Waiting to Working");
|
||||
|
||||
info!("[recover_from_consensus_failure] Recovering from consensus failure, transitioning to Working state");
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Get the count of proposals in the current epoch
|
||||
/// Start working state (for non-steward peers after consensus or edge case recovery).
|
||||
///
|
||||
/// ## State Transition:
|
||||
/// Any State → Working
|
||||
///
|
||||
/// ## Usage:
|
||||
/// - Non-steward peers: Called after receiving consensus results
|
||||
/// - Edge case recovery: Called when proposals disappear during voting phase
|
||||
/// - General recovery: Can be used to reset to normal operation from any state
|
||||
///
|
||||
/// ## Note:
|
||||
/// This method provides a safe way to transition back to normal operation
|
||||
/// and is commonly used for recovery scenarios.
|
||||
pub fn start_working(&mut self) {
|
||||
self.state = GroupState::Working;
|
||||
info!("[start_working] Transitioning to Working state");
|
||||
}
|
||||
|
||||
/// Start waiting state (for non-steward peers after consensus).
|
||||
///
|
||||
/// ## State Transition:
|
||||
/// Any State → Waiting
|
||||
///
|
||||
/// ## Usage:
|
||||
/// Called by non-steward peers to transition to waiting state,
|
||||
/// typically after consensus is reached and they need to wait for
|
||||
/// the steward to process and send batch proposals.
|
||||
pub fn start_waiting(&mut self) {
|
||||
self.state = GroupState::Waiting;
|
||||
info!("[start_waiting] Transitioning to Waiting state");
|
||||
}
|
||||
|
||||
/// Get the count of proposals in the current epoch.
|
||||
///
|
||||
/// ## Returns:
|
||||
/// - Number of proposals currently collected for the next steward epoch
|
||||
///
|
||||
/// ## Usage:
|
||||
/// Used to check if there are proposals to vote on before starting a steward epoch.
|
||||
pub async fn get_current_epoch_proposals_count(&self) -> usize {
|
||||
if let Some(steward) = &self.steward {
|
||||
steward.get_current_epoch_proposals_count().await
|
||||
@@ -194,7 +354,13 @@ impl GroupStateMachine {
         }
     }
 
-    /// Get the count of proposals in the voting epoch
+    /// Get the count of proposals in the voting epoch.
+    ///
+    /// ## Returns:
+    /// - Number of proposals currently being voted on
+    ///
+    /// ## Usage:
+    /// Used during voting to track how many proposals are being considered.
     pub async fn get_voting_epoch_proposals_count(&self) -> usize {
         if let Some(steward) = &self.steward {
             steward.get_voting_epoch_proposals_count().await
@@ -203,7 +369,13 @@ impl GroupStateMachine {
         }
     }
 
-    /// Get the proposals in the voting epoch
+    /// Get the proposals in the voting epoch.
+    ///
+    /// ## Returns:
+    /// - Vector of proposals currently being voted on
+    ///
+    /// ## Usage:
+    /// Used during voting to access the actual proposal details for processing.
     pub async fn get_voting_epoch_proposals(&self) -> Vec<GroupUpdateRequest> {
         if let Some(steward) = &self.steward {
             steward.get_voting_epoch_proposals().await
@@ -212,27 +384,179 @@ impl GroupStateMachine {
         }
     }
 
-    /// Add a proposal to the current epoch
+    /// Add a proposal to the current epoch.
+    ///
+    /// ## Parameters:
+    /// - `proposal`: The group update request to add
+    ///
+    /// ## Usage:
+    /// Called to submit new proposals for consideration in the next steward epoch.
+    /// Proposals are collected and will be moved to the voting epoch when
+    /// `start_steward_epoch_with_validation()` is called.
     pub async fn add_proposal(&mut self, proposal: GroupUpdateRequest) {
         if let Some(steward) = &mut self.steward {
             steward.add_proposal(proposal).await;
         }
     }
 
-    /// Check if this state machine has a steward
+    /// Check if this state machine has a steward configured.
+    ///
+    /// ## Returns:
+    /// - `true` if a steward is configured, `false` otherwise
+    ///
+    /// ## Usage:
+    /// Used to verify steward availability before attempting steward epoch operations.
     pub fn has_steward(&self) -> bool {
         self.steward.is_some()
     }
 
-    /// Get a reference to the steward (if available)
+    /// Get a reference to the steward (if available).
+    ///
+    /// ## Returns:
+    /// - `Some(&Steward)` if steward is configured, `None` otherwise
+    ///
+    /// ## Usage:
+    /// Used to access steward functionality for read-only operations.
     pub fn get_steward(&self) -> Option<&Steward> {
         self.steward.as_ref()
     }
 
-    /// Get a mutable reference to the steward (if available)
+    /// Get a mutable reference to the steward (if available).
+    ///
+    /// ## Returns:
+    /// - `Some(&mut Steward)` if steward is configured, `None` otherwise
+    ///
+    /// ## Usage:
+    /// Used to access steward functionality for read-write operations.
     pub fn get_steward_mut(&mut self) -> Option<&mut Steward> {
         self.steward.as_mut()
     }
+
+    /// Handle steward epoch start with proposal validation.
+    /// This is the main entry point for starting steward epochs.
+    ///
+    /// ## Preconditions:
+    /// - Must be in Working state
+    /// - Must have a steward configured
+    ///
+    /// ## State Transitions:
+    /// - **With proposals**: Working → Waiting (returns proposal count)
+    /// - **No proposals**: Working → Working (stays in Working, returns 0)
+    ///
+    /// ## Returns:
+    /// - Number of proposals collected for voting (0 if no proposals)
+    ///
+    /// ## Usage:
+    /// This method should be used instead of `start_steward_epoch()` for external calls
+    /// as it provides proper proposal validation and state management.
+    pub async fn start_steward_epoch_with_validation(&mut self) -> Result<usize, GroupError> {
+        if self.state != GroupState::Working {
+            return Err(GroupError::InvalidStateTransition {
+                from: self.state.to_string(),
+                to: "Waiting".to_string(),
+            });
+        }
+
+        // Always check if steward is set - required for steward epoch operations
+        if !self.has_steward() {
+            return Err(GroupError::StewardNotSet);
+        }
+
+        // Check if there are proposals to vote on
+        let proposal_count = self.get_current_epoch_proposals_count().await;
+
+        if proposal_count == 0 {
+            // No proposals, stay in Working state but still return 0.
+            // This indicates a successful steward epoch start with no proposals.
+            Ok(0)
+        } else {
+            // Start steward epoch and transition to Waiting
+            self.state = GroupState::Waiting;
+            self.steward
+                .as_mut()
+                .ok_or(GroupError::StewardNotSet)?
+                .start_new_epoch()
+                .await;
+            Ok(proposal_count)
+        }
+    }
+
+    /// Handle proposal application and completion after successful voting.
+    ///
+    /// ## Preconditions:
+    /// - Must be in ConsensusReached or Waiting state
+    /// - Must have a steward configured
+    ///
+    /// ## State Transition:
+    /// ConsensusReached/Waiting → Working
+    ///
+    /// ## Actions:
+    /// - Clears voting epoch proposals
+    /// - Transitions to Working state
+    ///
+    /// ## Usage:
+    /// Called after successful voting to empty the voting epoch proposals and transition to Working state.
+    pub async fn handle_yes_vote(&mut self) -> Result<(), GroupError> {
+        // Check state transition validity - can be called from ConsensusReached or Waiting state
+        if self.state != GroupState::ConsensusReached && self.state != GroupState::Waiting {
+            return Err(GroupError::InvalidStateTransition {
+                from: self.state.to_string(),
+                to: "Working".to_string(),
+            });
+        }
+
+        let steward = self.steward.as_mut().ok_or(GroupError::StewardNotSet)?;
+        steward.empty_voting_epoch_proposals().await;
+
+        self.state = GroupState::Working;
+
+        Ok(())
+    }
+
+    /// Start waiting state when steward sends batch proposals after consensus.
+    /// This transitions from ConsensusReached to Waiting state.
+    ///
+    /// ## Preconditions:
+    /// - Must be in ConsensusReached state
+    ///
+    /// ## State Transition:
+    /// ConsensusReached → Waiting
+    ///
+    /// ## Usage:
+    /// Called when steward needs to send batch proposals after consensus is reached.
+    /// This allows the steward to process and send proposals while maintaining proper state flow.
+    pub fn start_waiting_after_consensus(&mut self) -> Result<(), GroupError> {
+        if self.state != GroupState::ConsensusReached {
+            return Err(GroupError::InvalidStateTransition {
+                from: self.state.to_string(),
+                to: "Waiting".to_string(),
+            });
+        }
+
+        self.state = GroupState::Waiting;
+        info!(
+            "[start_waiting_after_consensus] Transitioning from ConsensusReached to Waiting state"
+        );
+        Ok(())
+    }
+
+    /// Handle failed vote cleanup.
+    ///
+    /// ## Preconditions:
+    /// - Must have a steward configured
+    ///
+    /// ## Actions:
+    /// - Clears voting epoch proposals
+    /// - Does not change state
+    ///
+    /// ## Usage:
+    /// Called after failed votes to clean up proposals. The caller is responsible
+    /// for transitioning to the appropriate state (typically Working).
+    pub async fn handle_no_vote(&mut self) -> Result<(), GroupError> {
+        let steward = self.steward.as_mut().ok_or(GroupError::StewardNotSet)?;
+        steward.empty_voting_epoch_proposals().await;
+        Ok(())
+    }
 }
 
 impl Default for GroupStateMachine {
@@ -266,9 +590,16 @@ mod tests {
         // Initial state should be Working
         assert_eq!(state_machine.current_state(), GroupState::Working);
 
+        // Add a proposal to switch to waiting state
+        state_machine
+            .add_proposal(GroupUpdateRequest::RemoveMember(
+                "0x3c44cdddb6a900fa2b585dd299e03d12fa4293bc".to_string(),
+            ))
+            .await;
+
-        // Test start_steward_epoch
+        // Test start_steward_epoch_with_validation
         state_machine
-            .start_steward_epoch()
+            .start_steward_epoch_with_validation()
             .await
             .expect("Failed to start steward epoch");
         assert_eq!(state_machine.current_state(), GroupState::Waiting);
@@ -283,64 +614,129 @@ mod tests {
         state_machine
             .complete_voting(true)
             .expect("Failed to complete voting");
+        assert_eq!(state_machine.current_state(), GroupState::ConsensusReached);
+
+        // Test start_waiting_after_consensus
+        state_machine
+            .start_waiting_after_consensus()
+            .expect("Failed to start waiting after consensus");
         assert_eq!(state_machine.current_state(), GroupState::Waiting);
 
-        // Test apply_proposals_and_complete
+        // Test handle_yes_vote
         state_machine
-            .remove_proposals_and_complete()
+            .handle_yes_vote()
             .await
             .expect("Failed to apply proposals");
         assert_eq!(state_machine.current_state(), GroupState::Working);
     }
 
     #[tokio::test]
-    async fn test_message_permissions() {
+    async fn test_message_type_permissions() {
         let mut state_machine = GroupStateMachine::new_with_steward();
 
-        // Working state - anyone can send messages
-        assert!(state_machine.can_send_message(false, false)); // Regular user, no proposals
-        assert!(state_machine.can_send_message(true, false)); // Steward, no proposals
-        assert!(state_machine.can_send_message(true, true)); // Steward, with proposals
+        // Working state - all message types allowed
+        assert!(state_machine.can_send_message_type(false, false, message_types::BAN_REQUEST));
+        assert!(state_machine.can_send_message_type(
+            false,
+            false,
+            message_types::CONVERSATION_MESSAGE
+        ));
+        assert!(state_machine.can_send_message_type(
+            true,
+            false,
+            message_types::BATCH_PROPOSALS_MESSAGE
+        ));
 
         // Add a proposal to switch to waiting state
         state_machine
            .add_proposal(GroupUpdateRequest::RemoveMember(
                 "0x3c44cdddb6a900fa2b585dd299e03d12fa4293bc".to_string(),
             ))
             .await;
 
         // Start steward epoch
         state_machine
-            .start_steward_epoch()
+            .start_steward_epoch_with_validation()
             .await
             .expect("Failed to start steward epoch");
 
-        // Waiting state - only steward with proposals can send messages
-        assert!(!state_machine.can_send_message(false, false)); // Regular user, no proposals
-        assert!(!state_machine.can_send_message(false, true)); // Regular user, with proposals
-        assert!(!state_machine.can_send_message(true, false)); // Steward, no proposals
-        assert!(state_machine.can_send_message(true, true)); // Steward, with proposals
+        // Waiting state - test specific message types:
+        // no message is allowed from anyone EXCEPT BATCH_PROPOSALS_MESSAGE
+        assert!(!state_machine.can_send_message_type(false, false, message_types::BAN_REQUEST));
+        assert!(!state_machine.can_send_message_type(
+            false,
+            false,
+            message_types::CONVERSATION_MESSAGE
+        ));
+        assert!(!state_machine.can_send_message_type(false, false, message_types::VOTE));
+        assert!(!state_machine.can_send_message_type(false, false, message_types::USER_VOTE));
+        assert!(!state_machine.can_send_message_type(false, false, message_types::VOTING_PROPOSAL));
+        assert!(!state_machine.can_send_message_type(false, false, message_types::PROPOSAL));
+
+        // BatchProposalsMessage should only be allowed from steward with proposals
+        assert!(!state_machine.can_send_message_type(
+            false,
+            false,
+            message_types::BATCH_PROPOSALS_MESSAGE
+        ));
+        assert!(!state_machine.can_send_message_type(
+            true,
+            false,
+            message_types::BATCH_PROPOSALS_MESSAGE
+        ));
+        assert!(state_machine.can_send_message_type(
+            true,
+            true,
+            message_types::BATCH_PROPOSALS_MESSAGE
+        ));
 
         // Start voting
         state_machine
             .start_voting()
             .expect("Failed to start voting");
 
-        // Voting state - no one can send messages
-        assert!(!state_machine.can_send_message(false, false));
-        assert!(!state_machine.can_send_message(false, true));
-        assert!(!state_machine.can_send_message(true, false));
-        assert!(!state_machine.can_send_message(true, true));
+        // Voting state - only voting-related messages allowed.
+        // Everyone can send votes and user votes
+        assert!(state_machine.can_send_message_type(false, false, message_types::VOTE));
+        assert!(state_machine.can_send_message_type(false, false, message_types::USER_VOTE));
+
+        // Only steward can send voting proposals and proposals
+        assert!(!state_machine.can_send_message_type(false, false, message_types::VOTING_PROPOSAL));
+        assert!(state_machine.can_send_message_type(true, false, message_types::VOTING_PROPOSAL));
+        assert!(!state_machine.can_send_message_type(false, false, message_types::PROPOSAL));
+        assert!(state_machine.can_send_message_type(true, false, message_types::PROPOSAL));
+
+        // All other message types blocked during voting
+        assert!(!state_machine.can_send_message_type(
+            false,
+            false,
+            message_types::CONVERSATION_MESSAGE
+        ));
+        assert!(!state_machine.can_send_message_type(false, false, message_types::BAN_REQUEST));
+        assert!(!state_machine.can_send_message_type(
+            false,
+            false,
+            message_types::BATCH_PROPOSALS_MESSAGE
+        ));
     }
 
     #[tokio::test]
     async fn test_invalid_state_transitions() {
         let mut state_machine = GroupStateMachine::new();
 
         // Cannot start voting from Working state
         let result = state_machine.start_voting();
-        assert!(matches!(result, Err(GroupError::InvalidStateTransition)));
+        assert!(matches!(
+            result,
+            Err(GroupError::InvalidStateTransition { .. })
+        ));
 
         // Cannot complete voting from Working state
         let result = state_machine.complete_voting(true);
-        assert!(matches!(result, Err(GroupError::InvalidStateTransition)));
+        assert!(matches!(
+            result,
+            Err(GroupError::InvalidStateTransition { .. })
+        ));
 
         // Cannot apply proposals from Working state
-        let result = state_machine.remove_proposals_and_complete().await;
-        assert!(matches!(result, Err(GroupError::InvalidStateTransition)));
+        let result = state_machine.handle_yes_vote().await;
+        assert!(matches!(
+            result,
+            Err(GroupError::InvalidStateTransition { .. })
+        ));
     }
 
     #[tokio::test]
@@ -349,12 +745,14 @@ mod tests {
 
         // Add some proposals
         state_machine
-            .add_proposal(GroupUpdateRequest::RemoveMember(vec![1, 2, 3]))
+            .add_proposal(GroupUpdateRequest::RemoveMember(
+                "0x3c44cdddb6a900fa2b585dd299e03d12fa4293bc".to_string(),
+            ))
             .await;
 
         // Start steward epoch - should collect proposals
         state_machine
-            .start_steward_epoch()
+            .start_steward_epoch_with_validation()
             .await
             .expect("Failed to start steward epoch");
         assert_eq!(state_machine.get_voting_epoch_proposals_count().await, 1);
@@ -367,11 +765,41 @@ mod tests {
             .complete_voting(true)
             .expect("Failed to complete voting");
         state_machine
-            .remove_proposals_and_complete()
+            .handle_yes_vote()
             .await
             .expect("Failed to apply proposals");
 
         // Proposals should be applied and count should be reset
         assert_eq!(state_machine.get_current_epoch_proposals_count().await, 0);
     }
+
+    #[tokio::test]
+    async fn test_state_snapshot_consistency() {
+        let mut state_machine = GroupStateMachine::new_with_steward();
+
+        // Add some proposals
+        state_machine
+            .add_proposal(GroupUpdateRequest::RemoveMember(
+                "0x3c44cdddb6a900fa2b585dd299e03d12fa4293bc".to_string(),
+            ))
+            .await;
+
+        // Get a snapshot before state transition
+        let snapshot1 = state_machine.get_current_epoch_proposals_count().await;
+        assert_eq!(snapshot1, 1);
+
+        // Start steward epoch
+        state_machine
+            .start_steward_epoch_with_validation()
+            .await
+            .expect("Failed to start steward epoch");
+
+        // Get a snapshot after state transition
+        let snapshot2 = state_machine.get_current_epoch_proposals_count().await;
+        assert_eq!(snapshot2, 0);
+
+        // Verify that the snapshots are consistent within themselves
+        assert!(snapshot1 > 0);
+        assert_ne!(snapshot1, snapshot2);
+    }
 }
src/steward.rs (119 lines changed)
@@ -1,14 +1,15 @@
 use alloy::primitives::Address;
 use libsecp256k1::{PublicKey, SecretKey};
 use openmls::prelude::KeyPackage;
-use std::sync::Arc;
+use std::{fmt::Display, str::FromStr, sync::Arc};
 use tokio::sync::Mutex;
 
 use crate::{protos::messages::v1::GroupAnnouncement, *};
 
 #[derive(Clone, Debug)]
 pub struct Steward {
-    eth_pub: PublicKey,
-    eth_secr: SecretKey,
+    eth_pub: Arc<Mutex<PublicKey>>,
+    eth_secr: Arc<Mutex<SecretKey>>,
     current_epoch_proposals: Arc<Mutex<Vec<GroupUpdateRequest>>>,
     voting_epoch_proposals: Arc<Mutex<Vec<GroupUpdateRequest>>>,
 }
@@ -16,7 +17,22 @@ pub struct Steward {
 #[derive(Clone, Debug, PartialEq)]
 pub enum GroupUpdateRequest {
     AddMember(Box<KeyPackage>),
-    RemoveMember(Vec<u8>),
+    RemoveMember(String),
+}
+
+impl Display for GroupUpdateRequest {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        match self {
+            GroupUpdateRequest::AddMember(kp) => {
+                let id = Address::from_slice(kp.leaf_node().credential().serialized_content());
+                writeln!(f, "Add Member: {id:#?}")
+            }
+            GroupUpdateRequest::RemoveMember(id) => {
+                let id = Address::from_str(id).unwrap();
+                writeln!(f, "Remove Member: {id:#?}")
+            }
+        }
+    }
 }
 
 impl Default for Steward {
@@ -29,54 +45,48 @@ impl Steward {
     pub fn new() -> Self {
         let (public_key, private_key) = generate_keypair();
         Steward {
-            eth_pub: public_key,
-            eth_secr: private_key,
+            eth_pub: Arc::new(Mutex::new(public_key)),
+            eth_secr: Arc::new(Mutex::new(private_key)),
             current_epoch_proposals: Arc::new(Mutex::new(Vec::new())),
             voting_epoch_proposals: Arc::new(Mutex::new(Vec::new())),
         }
     }
 
-    pub fn refresh_key_pair(&mut self) {
+    pub async fn refresh_key_pair(&mut self) {
         let (public_key, private_key) = generate_keypair();
-        self.eth_pub = public_key;
-        self.eth_secr = private_key;
+        *self.eth_pub.lock().await = public_key;
+        *self.eth_secr.lock().await = private_key;
     }
 
-    pub fn create_announcement(&self) -> GroupAnnouncement {
-        let signature = sign_message(&self.eth_pub.serialize_compressed(), &self.eth_secr);
-        GroupAnnouncement::new(self.eth_pub.serialize_compressed().to_vec(), signature)
+    pub async fn create_announcement(&self) -> GroupAnnouncement {
+        let pub_key = self.eth_pub.lock().await;
+        let sec_key = self.eth_secr.lock().await;
+        let signature = sign_message(&pub_key.serialize_compressed(), &sec_key);
+        GroupAnnouncement::new(pub_key.serialize_compressed().to_vec(), signature)
     }
 
-    pub fn decrypt_message(&self, message: Vec<u8>) -> Result<KeyPackage, MessageError> {
-        let msg: Vec<u8> = decrypt_message(&message, self.eth_secr)?;
+    // TODO: replace json in encryption and decryption
+    pub async fn decrypt_message(&self, message: Vec<u8>) -> Result<KeyPackage, MessageError> {
+        let sec_key = self.eth_secr.lock().await;
+        let msg: Vec<u8> = decrypt_message(&message, *sec_key)?;
         let key_package: KeyPackage = serde_json::from_slice(&msg)?;
         Ok(key_package)
     }
 
-    /// Start a new steward epoch, moving current proposals to the epoch proposals map and incrementing the epoch.
+    /// Start a new steward epoch, moving current proposals to the epoch proposals map.
     pub async fn start_new_epoch(&mut self) {
-        // Get proposals from current epoch and store them for this epoch
-        let proposals = self
-            .current_epoch_proposals
-            .lock()
-            .await
-            .drain(0..)
-            .collect::<Vec<_>>();
+        // Use a single atomic operation to move proposals between epochs
+        let proposals = {
+            let mut current = self.current_epoch_proposals.lock().await;
+            current.drain(0..).collect::<Vec<_>>()
+        };
 
         // Store proposals for this epoch (for voting and application)
         if !proposals.is_empty() {
-            self.voting_epoch_proposals
-                .lock()
-                .await
-                .extend(proposals.clone());
+            let mut voting = self.voting_epoch_proposals.lock().await;
+            voting.extend(proposals);
         }
     }
 
     pub async fn get_current_epoch_proposals(&self) -> Vec<GroupUpdateRequest> {
         self.current_epoch_proposals.lock().await.clone()
     }
 
     pub async fn get_current_epoch_proposals_count(&self) -> usize {
         self.current_epoch_proposals.lock().await.len()
     }
@@ -101,3 +111,50 @@ impl Steward {
         self.current_epoch_proposals.lock().await.push(proposal);
     }
 }
+
+#[cfg(test)]
+mod tests {
+    use std::str::FromStr;
+
+    use alloy::signers::local::PrivateKeySigner;
+    use mls_crypto::openmls_provider::{MlsProvider, CIPHERSUITE};
+    use openmls::prelude::{BasicCredential, CredentialWithKey, KeyPackage};
+    use openmls_basic_credential::SignatureKeyPair;
+
+    use crate::steward::GroupUpdateRequest;
+
+    #[tokio::test]
+    async fn test_display_group_update_request() {
+        let user_eth_priv_key =
+            "0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d";
+        let signer =
+            PrivateKeySigner::from_str(user_eth_priv_key).expect("Failed to create signer");
+        let user_address = signer.address();
+
+        let ciphersuite = CIPHERSUITE;
+        let provider = MlsProvider::default();
+
+        let credential = BasicCredential::new(user_address.as_slice().to_vec());
+        let signer = SignatureKeyPair::new(ciphersuite.signature_algorithm())
+            .expect("Error generating a signature key pair.");
+        let credential_with_key = CredentialWithKey {
+            credential: credential.into(),
+            signature_key: signer.public().into(),
+        };
+        let key_package_bundle = KeyPackage::builder()
+            .build(ciphersuite, &provider, &signer, credential_with_key)
+            .expect("Error building key package bundle.");
+        let key_package = key_package_bundle.key_package();
+
+        let proposal_add_member = GroupUpdateRequest::AddMember(Box::new(key_package.clone()));
+        assert_eq!(
+            proposal_add_member.to_string(),
+            "Add Member: 0x70997970c51812dc3a010c7d01b50e0d17dc79c8\n"
+        );
+
+        let proposal_remove_member = GroupUpdateRequest::RemoveMember(user_address.to_string());
+        assert_eq!(
+            proposal_remove_member.to_string(),
+            "Remove Member: 0x70997970c51812dc3a010c7d01b50e0d17dc79c8\n"
+        );
+    }
+}
src/user.rs (1704 lines changed)
File diff suppressed because it is too large.
@@ -4,7 +4,9 @@ use waku_bindings::WakuMessage;
|
||||
use ds::waku_actor::WakuMessageToSend;
|
||||
|
||||
use crate::{
|
||||
consensus::ConsensusEvent,
|
||||
error::UserError,
|
||||
protos::messages::v1::BanRequest,
|
||||
user::{User, UserAction},
|
||||
};
|
||||
|
||||
@@ -33,8 +35,7 @@ impl Message<CreateGroupRequest> for User {
|
||||
msg: CreateGroupRequest,
|
||||
_ctx: Context<'_, Self, Self::Reply>,
|
||||
) -> Self::Reply {
|
||||
self.create_group(msg.group_name.clone(), msg.is_creation)
|
||||
.await?;
|
||||
self.create_group(&msg.group_name, msg.is_creation).await?;
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
@@ -51,7 +52,7 @@ impl Message<StewardMessageRequest> for User {
|
||||
msg: StewardMessageRequest,
|
||||
_ctx: Context<'_, Self, Self::Reply>,
|
||||
) -> Self::Reply {
|
||||
self.prepare_steward_msg(msg.group_name.clone()).await
|
||||
self.prepare_steward_msg(&msg.group_name).await
|
||||
}
|
||||
}
|
||||
|
||||
@@ -67,32 +68,13 @@ impl Message<LeaveGroupRequest> for User {
|
||||
msg: LeaveGroupRequest,
|
||||
_ctx: Context<'_, Self, Self::Reply>,
|
||||
) -> Self::Reply {
|
||||
self.leave_group(msg.group_name.clone()).await?;
|
||||
self.leave_group(&msg.group_name).await?;
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
pub struct RemoveUserRequest {
|
||||
pub user_to_ban: String,
|
||||
pub group_name: String,
|
||||
}
|
||||
|
||||
impl Message<RemoveUserRequest> for User {
|
||||
type Reply = Result<(), UserError>;
|
||||
|
||||
async fn handle(
|
||||
&mut self,
|
||||
msg: RemoveUserRequest,
|
||||
_ctx: Context<'_, Self, Self::Reply>,
|
||||
) -> Self::Reply {
|
||||
// Add remove proposal to steward instead of direct removal
|
||||
self.add_remove_proposal(msg.group_name, msg.user_to_ban)
|
||||
.await
|
||||
}
|
||||
}
|
||||
|
||||
pub struct SendGroupMessage {
|
||||
pub message: String,
|
||||
pub message: Vec<u8>,
|
||||
pub group_name: String,
|
||||
}
|
||||
|
||||
@@ -104,9 +86,28 @@ impl Message<SendGroupMessage> for User {
|
||||
msg: SendGroupMessage,
|
||||
_ctx: Context<'_, Self, Self::Reply>,
|
||||
) -> Self::Reply {
|
||||
self.build_group_message(&msg.message, msg.group_name).await
|
||||
self.build_group_message(msg.message, &msg.group_name).await
|
||||
}
|
||||
}
|
||||
|
||||
pub struct BuildBanMessage {
|
||||
pub ban_request: BanRequest,
|
||||
pub group_name: String,
|
||||
}
|
||||
|
||||
impl Message<BuildBanMessage> for User {
|
||||
type Reply = Result<WakuMessageToSend, UserError>;
|
||||
|
||||
async fn handle(
|
||||
&mut self,
|
||||
msg: BuildBanMessage,
|
||||
_ctx: Context<'_, Self, Self::Reply>,
|
||||
) -> Self::Reply {
|
||||
self.process_ban_request(msg.ban_request, &msg.group_name)
|
||||
.await
|
||||
}
|
||||
}
|
||||
|
||||
// New state machine message types
|
||||
pub struct StartStewardEpochRequest {
|
||||
pub group_name: String,
|
||||
@@ -120,71 +121,71 @@ impl Message<StartStewardEpochRequest> for User {
        msg: StartStewardEpochRequest,
        _ctx: Context<'_, Self, Self::Reply>,
    ) -> Self::Reply {
-        self.start_steward_epoch(msg.group_name).await
+        self.start_steward_epoch(&msg.group_name).await
    }
}

-pub struct StartVotingRequest {
+pub struct GetProposalsForStewardVotingRequest {
    pub group_name: String,
}

-impl Message<StartVotingRequest> for User {
-    type Reply = Result<Vec<u8>, UserError>; // Returns vote_id
+impl Message<GetProposalsForStewardVotingRequest> for User {
+    type Reply = Result<UserAction, UserError>; // Returns proposal_id

    async fn handle(
        &mut self,
-        msg: StartVotingRequest,
+        msg: GetProposalsForStewardVotingRequest,
        _ctx: Context<'_, Self, Self::Reply>,
    ) -> Self::Reply {
-        self.start_voting(msg.group_name).await
+        let (_, action) = self
+            .get_proposals_for_steward_voting(&msg.group_name)
+            .await?;
+        Ok(action)
    }
}

-pub struct CompleteVotingRequest {
+pub struct UserVoteRequest {
    pub group_name: String,
-    pub vote_id: Vec<u8>,
+    pub proposal_id: u32,
+    pub vote: bool,
}

-impl Message<CompleteVotingRequest> for User {
-    type Reply = Result<bool, UserError>; // Returns vote result
+impl Message<UserVoteRequest> for User {
+    type Reply = Result<Option<WakuMessageToSend>, UserError>;

    async fn handle(
        &mut self,
-        msg: CompleteVotingRequest,
+        msg: UserVoteRequest,
        _ctx: Context<'_, Self, Self::Reply>,
    ) -> Self::Reply {
-        self.complete_voting(msg.group_name, msg.vote_id).await
+        let action = self
+            .process_user_vote(msg.proposal_id, msg.vote, &msg.group_name)
+            .await?;
+        match action {
+            UserAction::SendToWaku(waku_msg) => Ok(Some(waku_msg)),
+            UserAction::DoNothing => Ok(None),
+            _ => Err(UserError::InvalidUserAction(
+                "Vote action must result in Waku message".to_string(),
+            )),
+        }
    }
}

-pub struct ApplyProposalsAndCompleteRequest {
+// Consensus event message handler
+pub struct ConsensusEventMessage {
    pub group_name: String,
+    pub event: ConsensusEvent,
}

-impl Message<ApplyProposalsAndCompleteRequest> for User {
+impl Message<ConsensusEventMessage> for User {
    type Reply = Result<Vec<WakuMessageToSend>, UserError>;

    async fn handle(
        &mut self,
-        msg: ApplyProposalsAndCompleteRequest,
+        msg: ConsensusEventMessage,
        _ctx: Context<'_, Self, Self::Reply>,
    ) -> Self::Reply {
-        self.apply_proposals(msg.group_name).await
-    }
-}
-
-pub struct RemoveProposalsAndCompleteRequest {
-    pub group_name: String,
-}
-
-impl Message<RemoveProposalsAndCompleteRequest> for User {
-    type Reply = Result<(), UserError>;
-
-    async fn handle(
-        &mut self,
-        msg: RemoveProposalsAndCompleteRequest,
-        _ctx: Context<'_, Self, Self::Reply>,
-    ) -> Self::Reply {
-        self.remove_proposals_and_complete(msg.group_name).await
+        self.handle_consensus_event(&msg.group_name, msg.event)
+            .await
    }
}

@@ -4,25 +4,31 @@ use kameo::actor::ActorRef;
use log::{error, info};
use std::{str::FromStr, sync::Arc, time::Duration};

-use crate::user::User;
+use crate::user::{User, UserAction};
use crate::user_actor::{
-    ApplyProposalsAndCompleteRequest, CompleteVotingRequest, CreateGroupRequest,
-    RemoveProposalsAndCompleteRequest, StartStewardEpochRequest, StartVotingRequest,
-    StewardMessageRequest,
+    ConsensusEventMessage, CreateGroupRequest, GetProposalsForStewardVotingRequest,
+    StartStewardEpochRequest, StewardMessageRequest,
};
use crate::ws_actor::WsActor;
use crate::LocalSigner;
use crate::{error::UserError, AppState, Connection};

-pub const STEWARD_EPOCH: u64 = 60;
+pub const STEWARD_EPOCH: u64 = 15;

pub async fn create_user_instance(
    connection: Connection,
    app_state: Arc<AppState>,
    ws_actor: ActorRef<WsActor>,
) -> Result<ActorRef<User>, UserError> {
    let signer = PrivateKeySigner::from_str(&connection.eth_private_key)?;
-    let user_address = signer.address().to_string();
+    let user_address = signer.address_string();
    let group_name: String = connection.group_id.clone();
    // Create user
    let user = User::new(&connection.eth_private_key)?;

+    // Set up consensus event forwarding before spawning the actor
+    let consensus_events = user.subscribe_to_consensus_events();
+
    let user_ref = kameo::spawn(user);
    user_ref
        .ask(CreateGroupRequest {
@@ -30,20 +36,21 @@ pub async fn create_user_instance(
            is_creation: connection.should_create_group,
        })
        .await
-        .map_err(|e| UserError::KameoCreateGroupError(e.to_string()))?;
+        .map_err(|e| UserError::UnableToCreateGroup(e.to_string()))?;

    let mut content_topics = build_content_topics(&group_name);
    info!("Building content topics: {content_topics:?}");
    app_state
        .content_topics
-        .lock()
-        .unwrap()
+        .write()
+        .await
        .append(&mut content_topics);

    if connection.should_create_group {
        info!("User {user_address:?} start sending steward message for group {group_name:?}");
        let user_clone = user_ref.clone();
        let group_name_clone = group_name.clone();
+        let app_state_steward = app_state.clone();
        tokio::spawn(async move {
            let mut interval = tokio::time::interval(Duration::from_secs(STEWARD_EPOCH));
            loop {
@@ -52,10 +59,11 @@ pub async fn create_user_instance(
                handle_steward_flow_per_epoch(
                    user_clone.clone(),
                    group_name_clone.clone(),
-                    app_state.clone(),
+                    app_state_steward.clone(),
                    ws_actor.clone(),
                )
                .await
-                .map_err(|e| UserError::KameoSendMessageError(e.to_string()))?;
+                .map_err(|e| UserError::UnableToHandleStewardEpoch(e.to_string()))?;
                Ok::<(), UserError>(())
            }
            .await
@@ -64,19 +72,73 @@ pub async fn create_user_instance(
        });
    };

+    // Set up consensus event forwarding loop
+    let user_ref_consensus = user_ref.clone();
+    let mut consensus_events_receiver = consensus_events;
+    let app_state_consensus = app_state.clone();
+    tokio::spawn(async move {
+        info!("Starting consensus event forwarding loop");
+        while let Ok((group_name, event)) = consensus_events_receiver.recv().await {
+            info!("Forwarding consensus event for group {group_name}: {event:?}");
+            let result = user_ref_consensus
+                .ask(ConsensusEventMessage {
+                    group_name: group_name.clone(),
+                    event,
+                })
+                .await;
+
+            match result {
+                Ok(commit_messages) => {
+                    // Send commit messages to Waku if any
+                    if !commit_messages.is_empty() {
+                        info!(
+                            "Sending {} commit messages to Waku for group {}",
+                            commit_messages.len(),
+                            group_name
+                        );
+                        for msg in commit_messages {
+                            if let Err(e) = app_state_consensus.waku_node.send(msg).await {
+                                error!("Error sending commit message to Waku: {e}");
+                            }
+                        }
+                    }
+                }
+                Err(e) => {
+                    error!("Error forwarding consensus event: {e}");
+                }
+            }
+        }
+        info!("Consensus event forwarding loop ended");
+    });
+
    Ok(user_ref)
}

/// Enhanced steward epoch flow with state machine:
-/// 1. Start steward epoch (collect pending proposals, change state to Waiting if there are proposals)
-/// 2. Send new steward key to the waku node
-/// 3. If there are proposals, start voting process (change state to Voting)
-/// 4. Complete voting (change state based on result)
-/// 5. If vote passed, apply proposals and complete (change state back to Working)
+///
+/// ## Complete Flow Steps:
+/// 1. **Start steward epoch**: Collect pending proposals
+///    - If no proposals: Stay in Working state, complete epoch without voting
+///    - If proposals exist: Transition Working → Waiting
+/// 2. **Send steward key**: Broadcast new steward key to waku network
+/// 3. **Voting phase** (only if proposals exist):
+///    - Get proposals for voting: Transition Waiting → Voting
+///    - If no proposals found (edge case): Transition Waiting → Working, complete epoch
+///    - If proposals found: Send voting proposal to group members
+/// 4. **Complete voting**: Handle consensus result
+///    - Vote YES: Transition Voting → Waiting → Working (after applying proposals)
+///    - Vote NO: Transition Voting → Working (proposals discarded)
+/// 5. **Apply proposals** (only if vote passed): Execute group changes and return to Working
+///
+/// ## State Guarantees:
+/// - Steward always returns to Working state after epoch completion
+/// - No proposals scenario never leaves Working state
+/// - All edge cases properly handled with state transitions
pub async fn handle_steward_flow_per_epoch(
    user: ActorRef<User>,
    group_name: String,
    app_state: Arc<AppState>,
    ws_actor: ActorRef<WsActor>,
) -> Result<(), UserError> {
    info!("Starting steward epoch for group: {group_name}");

@@ -99,59 +161,35 @@ pub async fn handle_steward_flow_per_epoch(

    if proposals_count == 0 {
-        info!("No proposals to vote on for group: {group_name}, completing epoch without voting");
+        info!("Steward epoch completed for group: {group_name} (no proposals)");
        return Ok(());
-    } else {
-        info!("Found {proposals_count} proposals to vote on for group: {group_name}");
    }
+
+    info!("Found {proposals_count} proposals to vote on for group: {group_name}");

-    // Step 3: Start voting process
-    let vote_id = user
-        .ask(StartVotingRequest {
-            group_name: group_name.clone(),
-        })
-        .await
-        .map_err(|e| UserError::ProcessProposalsError(e.to_string()))?;
-
-    info!("Started voting with vote_id: {vote_id:?} for group: {group_name}");
-
-    // Step 4: Complete voting (in a real implementation, this would wait for actual votes)
-    // For now, we'll simulate the voting process
-    let vote_result = user
-        .ask(CompleteVotingRequest {
-            group_name: group_name.clone(),
-            vote_id: vote_id.clone(),
-        })
-        .await
-        .map_err(|e| UserError::ApplyProposalsError(e.to_string()))?;
-
-    info!("Voting completed with result: {vote_result} for group: {group_name}");
-
-    // Step 5: If vote passed, apply proposals and complete
-    if vote_result {
-        let msgs = user
-            .ask(ApplyProposalsAndCompleteRequest {
-                group_name: group_name.clone(),
-            })
-            .await
-            .map_err(|e| UserError::ApplyProposalsError(e.to_string()))?;
-
-        // Only send messages if there are any (when there are proposals)
-        for msg in msgs {
-            app_state.waku_node.send(msg).await?;
-        }
-
-        info!("Proposals applied and steward epoch completed for group: {group_name}");
-    } else {
-        info!("Vote failed, returning to working state for group: {group_name}");
-    }
-
-    user.ask(RemoveProposalsAndCompleteRequest {
-        group_name: group_name.clone(),
-    })
-    .await
-    .map_err(|e| UserError::ApplyProposalsError(e.to_string()))?;
-
-    info!("Removing proposals and completing steward epoch for group: {group_name}");
+    // Step 3: Start voting process - steward gets proposals for voting
+    let action = user
+        .ask(GetProposalsForStewardVotingRequest {
+            group_name: group_name.clone(),
+        })
+        .await
+        .map_err(|e| UserError::UnableToStartVoting(e.to_string()))?;
+
+    // Step 4: Send proposals to ws to steward to vote or do nothing if no proposals
+    // After voting, steward sends vote and proposal to waku node and start consensus process
+    match action {
+        UserAction::SendToApp(app_msg) => {
+            info!("Sending app message to ws");
+            ws_actor.ask(app_msg).await.map_err(|e| {
+                UserError::UnableToSendMessageToWs(format!("Failed to send message to ws: {e}"))
+            })?;
+        }
+        UserAction::DoNothing => {
+            info!("No action to take for group: {group_name}");
+            return Ok(());
+        }
+        _ => {
+            return Err(UserError::InvalidUserAction(action.to_string()));
+        }
+    }
+
    Ok(())
}

src/ws_actor.rs (109 changes)
@@ -4,10 +4,12 @@ use kameo::{
    message::{Context, Message},
    Actor,
};
+use log::info;
+use serde_json::Value;

use crate::{
    message::{ConnectMessage, UserMessage},
-    protos::messages::v1::AppMessage,
+    protos::messages::v1::{app_message, AppMessage},
};

/// This actor is used to handle messages from web socket
@@ -15,7 +17,8 @@ use crate::{
pub struct WsActor {
    /// This is the sender of the open web socket connection
    pub ws_sender: SplitSink<WebSocket, WsMessage>,
-    /// This variable is used to check if the user has connected to the ws, if not, we parce message as ConnectMessage
+    /// This variable is used to check if the user has connected to the ws,
+    /// if not, we parse message as ConnectMessage
    pub is_initialized: bool,
}

@@ -31,20 +34,23 @@ impl WsActor {
/// This enum is used to represent the actions that can be performed on the web socket
/// Connect - this action is used to return connection data to the user
/// UserMessage - this action is used to handle message from web socket and return it to the user
/// RemoveUser - this action is used to remove a user from the group
/// DoNothing - this action is used for test purposes (return empty action if message is not valid)
#[derive(Debug, PartialEq)]
pub enum WsAction {
    Connect(ConnectMessage),
    UserMessage(UserMessage),
    RemoveUser(String, String),
+    UserVote {
+        proposal_id: u32,
+        vote: bool,
+        group_id: String,
+    },
    DoNothing,
}

/// This struct is used to represent the raw message from the web socket.
/// It is used to handle the message from the web socket and return it to the user
/// We can parse it to the ConnectMessage or UserMessage
/// if it starts with "/ban" it will be parsed to RemoveUser, otherwise it will be parsed to UserMessage
#[derive(Debug, PartialEq)]
pub struct RawWsMessage {
    pub message: String,
@@ -63,29 +69,65 @@ impl Message<RawWsMessage> for WsActor {
            self.is_initialized = true;
            return Ok(WsAction::Connect(connect_message));
        }
-        match serde_json::from_str(&msg.message) {
-            Ok(UserMessage { message, group_id }) => {
-                if message.starts_with("/") {
-                    let mut tokens = message.split_whitespace();
-                    match tokens.next() {
-                        Some("/ban") => {
-                            let user_to_ban = tokens.next();
-                            if user_to_ban.is_none() {
-                                return Err(WsError::InvalidMessage);
-                            } else {
-                                let user_to_ban = user_to_ban.unwrap().to_lowercase();
-                                return Ok(WsAction::RemoveUser(
-                                    user_to_ban.to_string(),
-                                    group_id.clone(),
-                                ));
-                            }
-                        }
-                        _ => return Err(WsError::InvalidMessage),
-                    }
-                }
-                Ok(WsAction::UserMessage(UserMessage { message, group_id }))
-            }
-            Err(_) => Err(WsError::InvalidMessage),
-        }
+        match serde_json::from_str::<Value>(&msg.message) {
+            Ok(json_data) => {
+                // Handle different JSON message types
+                if let Some(type_field) = json_data.get("type") {
+                    if let Some("user_vote") = type_field.as_str() {
+                        if let (Some(proposal_id), Some(vote), Some(group_id)) = (
+                            json_data.get("proposal_id").and_then(|v| v.as_u64()),
+                            json_data.get("vote").and_then(|v| v.as_bool()),
+                            json_data.get("group_id").and_then(|v| v.as_str()),
+                        ) {
+                            return Ok(WsAction::UserVote {
+                                proposal_id: proposal_id as u32,
+                                vote,
+                                group_id: group_id.to_string(),
+                            });
+                        }
+                    }
+                }
+
+                // Check if it's a UserMessage format
+                if let (Some(message), Some(group_id)) = (
+                    json_data.get("message").and_then(|v| v.as_str()),
+                    json_data.get("group_id").and_then(|v| v.as_str()),
+                ) {
+                    // Handle commands
+                    if message.starts_with("/") {
+                        let mut tokens = message.split_whitespace();
+                        match tokens.next() {
+                            Some("/ban") => {
+                                let user_to_ban = tokens.next();
+                                if let Some(user_to_ban) = user_to_ban {
+                                    let user_to_ban = user_to_ban.to_lowercase();
+                                    return Ok(WsAction::RemoveUser(
+                                        user_to_ban.to_string(),
+                                        group_id.to_string(),
+                                    ));
+                                } else {
+                                    return Err(WsError::InvalidMessage);
+                                }
+                            }
+                            _ => return Err(WsError::InvalidMessage),
+                        }
+                    }
+
+                    return Ok(WsAction::UserMessage(UserMessage {
+                        message: message.as_bytes().to_vec(),
+                        group_id: group_id.to_string(),
+                    }));
+                }
+
+                Err(WsError::InvalidMessage)
+            }
+            Err(_) => {
+                // Try to parse as UserMessage as fallback
+                match serde_json::from_str::<UserMessage>(&msg.message) {
+                    Ok(user_msg) => Ok(WsAction::UserMessage(user_msg)),
+                    Err(_) => Err(WsError::InvalidMessage),
+                }
+            }
+        }
    }
}

@@ -99,9 +141,24 @@ impl Message<AppMessage> for WsActor {
        msg: AppMessage,
        _ctx: Context<'_, Self, Self::Reply>,
    ) -> Self::Reply {
-        self.ws_sender
-            .send(WsMessage::Text(msg.to_string()))
-            .await?;
+        // Check if this is a voting proposal and format it specially for the frontend
+        let message_text =
+            if let Some(app_message::Payload::VotingProposal(voting_proposal)) = &msg.payload {
+                // Format as JSON for the frontend to parse
+                info!("[ws_actor::handle]: Sending voting proposal to ws");
+                serde_json::json!({
+                    "type": "voting_proposal",
+                    "proposal": {
+                        "proposal_id": voting_proposal.proposal_id,
+                        "group_name": voting_proposal.group_name,
+                        "payload": voting_proposal.payload
+                    }
+                })
+                .to_string()
+            } else {
+                msg.to_string()
+            };
+        self.ws_sender.send(WsMessage::Text(message_text)).await?;
        Ok(())
    }
}

tests/consensus_multi_group_test.rs (new file, 359 lines)
@@ -0,0 +1,359 @@
use alloy::signers::local::PrivateKeySigner;
use de_mls::consensus::{compute_vote_hash, ConsensusEvent, ConsensusService};
use de_mls::protos::messages::v1::consensus::v1::Vote;
use de_mls::LocalSigner;
use prost::Message;
use std::time::Duration;
use uuid::Uuid;

#[tokio::test]
async fn test_basic_consensus_service() {
    // Create consensus service
    let consensus_service = ConsensusService::new();

    let group_name = "test_group";
    let expected_voters_count = 3;

    let signer = PrivateKeySigner::random();
    let proposal_owner_address = signer.address();
    let proposal_owner = proposal_owner_address.to_string().as_bytes().to_vec();

    // Create a proposal
    let proposal = consensus_service
        .create_proposal(
            group_name,
            "Test Proposal".to_string(),
            "Test payload".to_string(),
            proposal_owner,
            expected_voters_count,
            300,
            true,
        )
        .await
        .expect("Failed to create proposal");

    let proposal = consensus_service
        .vote_on_proposal(group_name, proposal.proposal_id, true, signer)
        .await
        .expect("Failed to vote on proposal");

    // Verify proposal was created
    let active_proposals = consensus_service.get_active_proposals(group_name).await;
    assert_eq!(active_proposals.len(), 1);
    assert_eq!(active_proposals[0].proposal_id, proposal.proposal_id);

    // Verify group statistics
    let group_stats = consensus_service.get_group_stats(group_name).await;
    assert_eq!(group_stats.total_sessions, 1);
    assert_eq!(group_stats.active_sessions, 1);

    // Verify consensus threshold calculation
    // With 3 expected voters, we need 2n/3 = 2 votes for consensus
    // Initially we have 1 vote (steward), so we don't have sufficient votes
    assert!(
        !consensus_service
            .has_sufficient_votes(group_name, proposal.proposal_id)
            .await
    );

    let signer_2 = PrivateKeySigner::random();
    let proposal_owner_2 = signer_2.address_bytes();
    // Add 1 more vote (total 2 votes)
    let mut vote = Vote {
        vote_id: Uuid::new_v4().as_u128() as u32,
        vote_owner: proposal_owner_2,
        proposal_id: proposal.proposal_id,
        timestamp: std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .expect("Failed to get current time")
            .as_secs(),
        vote: true,
        parent_hash: Vec::new(),
        received_hash: proposal.votes[0].vote_hash.clone(), // Reference steward's vote hash
        vote_hash: Vec::new(),
        signature: Vec::new(),
    };

    // Compute vote hash
    vote.vote_hash = compute_vote_hash(&vote);
    let vote_bytes = vote.encode_to_vec();
    vote.signature = signer_2
        .local_sign_message(&vote_bytes)
        .await
        .expect("Failed to sign vote");

    consensus_service
        .process_incoming_vote(group_name, vote)
        .await
        .expect("Failed to process vote");

    // Now we should have sufficient votes (2 out of 3 expected voters)
    assert!(
        consensus_service
            .has_sufficient_votes(group_name, proposal.proposal_id)
            .await
    );
}

#[tokio::test]
async fn test_multi_group_consensus_service() {
    // Create consensus service with max 10 sessions per group
    let consensus_service = ConsensusService::new_with_max_sessions(10);

    // Test group 1
    let group1_name = "test_group_1";
    let group1_members_count = 3;
    let signer_1 = PrivateKeySigner::random();
    let proposal_owner_1 = signer_1.address_bytes();

    // Test group 2
    let group2_name = "test_group_2";
    let group2_members_count = 3;
    let signer_2 = PrivateKeySigner::random();
    let proposal_owner_2 = signer_2.address_bytes();

    // Create proposals for group 1
    let proposal_1 = consensus_service
        .create_proposal(
            group1_name,
            "Test Proposal".to_string(),
            "Test payload".to_string(),
            proposal_owner_1,
            group1_members_count,
            300,
            true,
        )
        .await
        .expect("Failed to create proposal");

    let _proposal_1 = consensus_service
        .vote_on_proposal(group1_name, proposal_1.proposal_id, true, signer_1)
        .await
        .expect("Failed to vote on proposal");

    let proposal_2 = consensus_service
        .create_proposal(
            group2_name,
            "Test Proposal".to_string(),
            "Test payload".to_string(),
            proposal_owner_2.clone(),
            group2_members_count,
            300,
            true,
        )
        .await
        .expect("Failed to create proposal");

    let _proposal_2 = consensus_service
        .vote_on_proposal(group2_name, proposal_2.proposal_id, true, signer_2.clone())
        .await
        .expect("Failed to vote on proposal");

    // Create proposal for group 2
    let proposal_3 = consensus_service
        .create_proposal(
            group2_name,
            "Test Proposal".to_string(),
            "Test payload".to_string(),
            proposal_owner_2,
            group2_members_count,
            300,
            true,
        )
        .await
        .expect("Failed to create proposal");

    let _proposal_3 = consensus_service
        .vote_on_proposal(group2_name, proposal_3.proposal_id, true, signer_2)
        .await
        .expect("Failed to vote on proposal");

    // Verify proposals are created for both groups
    let group1_proposals = consensus_service.get_active_proposals(group1_name).await;
    let group2_proposals = consensus_service.get_active_proposals(group2_name).await;

    assert_eq!(group1_proposals.len(), 1);
    assert_eq!(group2_proposals.len(), 2);

    // Verify group statistics
    let group1_stats = consensus_service.get_group_stats(group1_name).await;
    let group2_stats = consensus_service.get_group_stats(group2_name).await;

    assert_eq!(group1_stats.total_sessions, 1);
    assert_eq!(group1_stats.active_sessions, 1);
    assert_eq!(group2_stats.total_sessions, 2);
    assert_eq!(group2_stats.active_sessions, 2);

    // Verify overall statistics
    let overall_stats = consensus_service.get_overall_stats().await;
    assert_eq!(overall_stats.total_sessions, 3);
    assert_eq!(overall_stats.active_sessions, 3);

    // Verify active groups
    let active_groups = consensus_service.get_active_groups().await;
    assert_eq!(active_groups.len(), 2);
    assert!(active_groups.contains(&group1_name.to_string()));
    assert!(active_groups.contains(&group2_name.to_string()));
}

#[tokio::test]
async fn test_consensus_threshold_calculation() {
    let consensus_service = ConsensusService::new();
    let mut consensus_events = consensus_service.subscribe_to_events();

    let group_name = "test_group_threshold";
    let expected_voters_count = 5;
    let signer = PrivateKeySigner::random();
    let proposal_owner = signer.address_bytes();

    // Create a proposal
    let proposal = consensus_service
        .create_proposal(
            group_name,
            "Test Proposal".to_string(),
            "Test payload".to_string(),
            proposal_owner,
            expected_voters_count,
            300,
            true,
        )
        .await
        .expect("Failed to create proposal");

    let proposal = consensus_service
        .vote_on_proposal(group_name, proposal.proposal_id, true, signer)
        .await
        .expect("Failed to vote on proposal");

    // With 5 expected voters, we need 2n/3 = 3.33... -> 4 votes for consensus
    // Initially we have 1 vote (steward), so we don't have sufficient votes
    assert!(
        !consensus_service
            .has_sufficient_votes(group_name, proposal.proposal_id)
            .await
    );

    for _ in 0..4 {
        let signer = PrivateKeySigner::random();
        let vote_owner = signer.address_bytes();
        let mut vote = Vote {
            vote_id: Uuid::new_v4().as_u128() as u32,
            vote_owner: vote_owner.clone(),
            proposal_id: proposal.proposal_id,
            timestamp: std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .expect("Failed to get current time")
                .as_secs(),
            vote: true,
            parent_hash: Vec::new(),
            received_hash: proposal.votes[0].vote_hash.clone(), // Reference previous vote's hash
            vote_hash: Vec::new(),
            signature: Vec::new(),
        };

        // Compute vote hash
        vote.vote_hash = compute_vote_hash(&vote);
        let vote_bytes = vote.encode_to_vec();
        vote.signature = signer
            .local_sign_message(&vote_bytes)
            .await
            .expect("Failed to sign vote");

        let result = consensus_service
            .process_incoming_vote(group_name, vote.clone())
            .await;

        result.expect("Failed to process vote");
    }

    // With 4 out of 5 votes, we should have sufficient votes for consensus
    assert!(
        consensus_service
            .has_sufficient_votes(group_name, proposal.proposal_id)
            .await
    );

    // Subscribe to consensus events and wait for natural consensus
    let proposal_id = proposal.proposal_id;
    let group_name_clone = group_name;

    // Wait for consensus event with timeout
    let timeout_duration = Duration::from_secs(15);
    let consensus_result = tokio::time::timeout(timeout_duration, async {
        while let Ok((event_group_name, event)) = consensus_events.recv().await {
            if event_group_name == group_name_clone {
                match event {
                    ConsensusEvent::ConsensusReached {
                        proposal_id: event_proposal_id,
                        result,
                    } => {
                        if event_proposal_id == proposal_id {
                            println!("Consensus reached for proposal {proposal_id}: {result}");
                            return Ok(result);
                        }
                    }
                    ConsensusEvent::ConsensusFailed {
                        proposal_id: event_proposal_id,
                        reason,
                    } => {
                        if event_proposal_id == proposal_id {
                            println!("Consensus failed for proposal {proposal_id}: {reason}");
                            return Err(format!("Consensus failed: {reason}"));
                        }
                    }
                }
            }
        }
        Err("Event channel closed".to_string())
    })
    .await
    .expect("Timeout waiting for consensus event")
    .expect("Consensus should succeed");

    // Should have consensus result based on 2n/3 threshold
    assert!(consensus_result); // All votes were true, so result should be true
}

#[tokio::test]
async fn test_remove_group_sessions() {
    let consensus_service = ConsensusService::new();

    let group_name = "test_group_remove";
    let expected_voters_count = 2;
    let signer = PrivateKeySigner::random();
    let proposal_owner = signer.address_bytes();

    // Create a proposal
    let proposal = consensus_service
        .create_proposal(
            group_name,
            "Test Proposal".to_string(),
            "Test payload".to_string(),
            proposal_owner,
            expected_voters_count,
            300,
            true,
        )
        .await
        .expect("Failed to create proposal");

    let _proposal = consensus_service
        .vote_on_proposal(group_name, proposal.proposal_id, true, signer)
        .await
        .expect("Failed to vote on proposal");

    // Verify proposal exists
    let group_stats = consensus_service.get_group_stats(group_name).await;
    assert_eq!(group_stats.total_sessions, 1);

    // Remove group sessions
    consensus_service.remove_group_sessions(group_name).await;

    // Verify group sessions are removed
    let group_stats_after = consensus_service.get_group_stats(group_name).await;
    assert_eq!(group_stats_after.total_sessions, 0);

    // Verify group is not in active groups
    let active_groups = consensus_service.get_active_groups().await;
    assert!(!active_groups.contains(&group_name.to_string()));
}

599
tests/consensus_realtime_test.rs
Normal file
599
tests/consensus_realtime_test.rs
Normal file
@@ -0,0 +1,599 @@
use alloy::signers::local::PrivateKeySigner;
use de_mls::consensus::{compute_vote_hash, ConsensusEvent, ConsensusService};
use de_mls::protos::messages::v1::consensus::v1::Vote;
use de_mls::LocalSigner;
use prost::Message;
use std::time::Duration;
use uuid::Uuid;

#[tokio::test]
async fn test_realtime_consensus_waiting() {
    // Create consensus service
    let consensus_service = ConsensusService::new();

    let group_name = "test_group_realtime";
    let expected_voters_count = 3;

    let signer = PrivateKeySigner::random();
    let proposal_owner = signer.address_bytes();

    // Create a proposal
    let proposal = consensus_service
        .create_proposal(
            group_name,
            "Test Proposal".to_string(),
            "Test payload".to_string(),
            proposal_owner,
            expected_voters_count,
            300,
            true,
        )
        .await
        .expect("Failed to create proposal");

    let proposal = consensus_service
        .vote_on_proposal(group_name, proposal.proposal_id, true, signer)
        .await
        .expect("Failed to vote on proposal");

    println!("Created proposal with ID: {}", proposal.proposal_id);

    // Subscribe to consensus events
    let mut consensus_events = consensus_service.subscribe_to_events();
    let proposal_id = proposal.proposal_id;

    // Start a background task that waits for consensus events
    let group_name_clone = group_name;
    let consensus_waiter = tokio::spawn(async move {
        println!("Starting consensus event waiter for proposal {proposal_id:?}");

        // Wait for consensus event with timeout
        let timeout_duration = Duration::from_secs(10);
        match tokio::time::timeout(timeout_duration, async {
            while let Ok((event_group_name, event)) = consensus_events.recv().await {
                if event_group_name == group_name_clone {
                    match event {
                        ConsensusEvent::ConsensusReached {
                            proposal_id: event_proposal_id,
                            result,
                        } => {
                            if event_proposal_id == proposal_id {
                                println!("Consensus reached for proposal {proposal_id}: {result}");
                                return Ok(result);
                            }
                        }
                        ConsensusEvent::ConsensusFailed {
                            proposal_id: event_proposal_id,
                            reason,
                        } => {
                            if event_proposal_id == proposal_id {
                                println!("Consensus failed for proposal {proposal_id}: {reason}");
                                return Err(format!("Consensus failed: {reason}"));
                            }
                        }
                    }
                }
            }
            Err("Event channel closed".to_string())
        })
        .await
        {
            Ok(result) => {
                println!("Consensus event waiter result: {result:?}");
                result
            }
            Err(_) => {
                println!("Consensus event waiter timed out");
                Err("Timeout waiting for consensus".to_string())
            }
        }
    });

    // Wait a bit to ensure the waiter is running
    tokio::time::sleep(Duration::from_millis(100)).await;

    // Add votes to reach consensus
    let mut previous_vote_hash = proposal.votes[0].vote_hash.clone(); // Start with steward's vote hash

    for i in 1..expected_voters_count {
        let signer = PrivateKeySigner::random();
        let proposal_owner = signer.address_bytes();
        let mut vote = Vote {
            vote_id: Uuid::new_v4().as_u128() as u32,
            vote_owner: proposal_owner,
            proposal_id: proposal.proposal_id,
            timestamp: std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .expect("Failed to get current time")
                .as_secs(),
            vote: true,
            parent_hash: Vec::new(),
            received_hash: previous_vote_hash.clone(), // Reference previous vote's hash
            vote_hash: Vec::new(),
            signature: Vec::new(),
        };

        // Compute vote hash
        vote.vote_hash = compute_vote_hash(&vote);
        let vote_bytes = vote.encode_to_vec();
        vote.signature = signer
            .local_sign_message(&vote_bytes)
            .await
            .expect("Failed to sign vote");

        println!("Adding vote {} for proposal {}", i, proposal.proposal_id);
        consensus_service
            .process_incoming_vote(group_name, vote.clone())
            .await
            .expect("Failed to process vote");

        // Update previous vote hash for next iteration
        previous_vote_hash = vote.vote_hash.clone();

        // Small delay between votes
        tokio::time::sleep(Duration::from_millis(50)).await;
    }

    // Wait for consensus result
    let consensus_result = consensus_waiter
        .await
        .expect("Consensus waiter task failed");

    // Verify consensus was reached
    assert!(consensus_result.is_ok());
    let result = consensus_result.unwrap();
    assert!(result); // Should be true (yes votes)

    println!("Test completed successfully - consensus reached!");
}
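The loop above chains votes by hand: each new vote's `received_hash` is the previous vote's `vote_hash`, rooted at the steward's vote. A minimal, self-contained sketch of that chaining idea, using `DefaultHasher` as a stand-in for the real `compute_vote_hash` (which hashes the protobuf `Vote` fields):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical stand-in for `compute_vote_hash`: folds the voter id, the
// ballot, and the hash received from the previous voter into one u64.
fn chain_hash(voter: u32, ballot: bool, received: u64) -> u64 {
    let mut h = DefaultHasher::new();
    voter.hash(&mut h);
    ballot.hash(&mut h);
    received.hash(&mut h);
    h.finish()
}

// Each vote's `received_hash` is the previous vote's `vote_hash`,
// so the votes form a verifiable chain rooted at the steward's vote.
fn build_vote_chain(steward_hash: u64, voters: &[(u32, bool)]) -> Vec<u64> {
    let mut prev = steward_hash;
    let mut hashes = Vec::new();
    for &(voter, ballot) in voters {
        let h = chain_hash(voter, ballot, prev);
        hashes.push(h);
        prev = h; // the next vote references this vote's hash
    }
    hashes
}

fn main() {
    let chain = build_vote_chain(42, &[(1, true), (2, true)]);
    // Recomputing a link from its predecessor reproduces the chain.
    assert_eq!(chain[1], chain_hash(2, true, chain[0]));
    println!("chain links: {}", chain.len());
}
```

Any tampering with an earlier vote changes its hash and breaks every later link, which is what the chain buys over independent votes.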

#[tokio::test]
async fn test_consensus_timeout() {
    // Create consensus service
    let consensus_service = ConsensusService::new();

    let group_name = "test_group_timeout";
    let expected_voters_count = 5;
    let signer = PrivateKeySigner::random();
    let proposal_owner = signer.address_bytes();

    // Need 4 votes for consensus
    // Create a proposal
    let proposal = consensus_service
        .create_proposal(
            group_name,
            "Test Proposal".to_string(),
            "Test payload".to_string(),
            proposal_owner,
            expected_voters_count,
            300,
            true,
        )
        .await
        .expect("Failed to create proposal");

    let proposal = consensus_service
        .vote_on_proposal(group_name, proposal.proposal_id, true, signer)
        .await
        .expect("Failed to vote on proposal");

    println!("Created proposal with ID: {}", proposal.proposal_id);

    // Subscribe to consensus events for timeout test
    let mut consensus_events = consensus_service.subscribe_to_events();
    let proposal_id = proposal.proposal_id;

    // Start consensus event waiter with timeout
    let group_name_clone = group_name;
    let consensus_waiter = tokio::spawn(async move {
        println!("Starting consensus event waiter with timeout for proposal {proposal_id:?}");

        // Wait for consensus event - should timeout and trigger liveness criteria
        let timeout_duration = Duration::from_secs(12); // Wait longer than consensus timeout (10s)
        match tokio::time::timeout(timeout_duration, async {
            while let Ok((event_group_name, event)) = consensus_events.recv().await {
                if event_group_name == group_name_clone {
                    match event {
                        ConsensusEvent::ConsensusReached { proposal_id: event_proposal_id, result } => {
                            if event_proposal_id == proposal_id {
                                println!("Consensus reached for proposal {proposal_id}: {result} (via timeout/liveness criteria)");
                                return Ok(result);
                            }
                        }
                        ConsensusEvent::ConsensusFailed { proposal_id: event_proposal_id, reason } => {
                            if event_proposal_id == proposal_id {
                                println!("Consensus failed for proposal {proposal_id}: {reason}");
                                return Err(format!("Consensus failed: {reason}"));
                            }
                        }
                    }
                }
            }
            Err("Event channel closed".to_string())
        }).await {
            Ok(result) => result,
            Err(_) => Err("Test timeout waiting for consensus event".to_string())
        }
    });

    // Don't add any additional votes - should timeout and apply liveness criteria

    // Wait for consensus result
    let consensus_result = consensus_waiter
        .await
        .expect("Consensus waiter task failed");

    // Verify timeout occurred and liveness criteria was applied
    // With liveness_criteria_yes = true, should return Ok(true)
    assert!(consensus_result.is_ok());
    let result = consensus_result.unwrap();
    assert!(result); // Should be true due to liveness criteria

    println!("Test completed successfully - timeout occurred and liveness criteria applied!");
}

#[tokio::test]
async fn test_consensus_with_mixed_votes() {
    // Create consensus service
    let consensus_service = ConsensusService::new();
    let signer = PrivateKeySigner::random();
    let proposal_owner = signer.address_bytes();

    let group_name = "test_group_mixed";
    let expected_voters_count = 3;

    // Create a proposal
    let proposal = consensus_service
        .create_proposal(
            group_name,
            "Test Proposal".to_string(),
            "Test payload".to_string(),
            proposal_owner,
            expected_voters_count,
            300,
            true,
        )
        .await
        .expect("Failed to create proposal");

    let proposal = consensus_service
        .vote_on_proposal(group_name, proposal.proposal_id, true, signer)
        .await
        .expect("Failed to vote on proposal");

    println!("Created proposal with ID: {}", proposal.proposal_id);

    // Subscribe to consensus events
    let mut consensus_events = consensus_service.subscribe_to_events();
    let proposal_id = proposal.proposal_id;

    // Start a background task that waits for consensus events
    let group_name_clone = group_name;
    let consensus_waiter = tokio::spawn(async move {
        println!("Starting consensus event waiter for proposal {proposal_id:?}");

        // Wait for consensus event with timeout
        let timeout_duration = Duration::from_secs(15); // Allow time for votes to be processed
        match tokio::time::timeout(timeout_duration, async {
            while let Ok((event_group_name, event)) = consensus_events.recv().await {
                if event_group_name == group_name_clone {
                    match event {
                        ConsensusEvent::ConsensusReached {
                            proposal_id: event_proposal_id,
                            result,
                        } => {
                            if event_proposal_id == proposal_id {
                                println!("Consensus reached for proposal {proposal_id}: {result}");
                                return Ok(result);
                            }
                        }
                        ConsensusEvent::ConsensusFailed {
                            proposal_id: event_proposal_id,
                            reason,
                        } => {
                            if event_proposal_id == proposal_id {
                                println!("Consensus failed for proposal {proposal_id}: {reason}");
                                return Err(format!("Consensus failed: {reason}"));
                            }
                        }
                    }
                }
            }
            Err("Event channel closed".to_string())
        })
        .await
        {
            Ok(result) => {
                println!("Consensus event waiter result: {result:?}");
                result
            }
            Err(_) => {
                println!("Consensus event waiter timed out");
                Err("Timeout waiting for consensus".to_string())
            }
        }
    });

    // Wait a bit to ensure the waiter is running
    tokio::time::sleep(Duration::from_millis(100)).await;

    // Add mixed votes: the steward already voted yes; the remaining two voters vote no
    let votes = vec![(2, false), (3, false)];
    let mut previous_vote_hash = proposal.votes[0].vote_hash.clone(); // Start with steward's vote hash

    for (i, vote_value) in votes {
        let signer = PrivateKeySigner::random();
        let proposal_owner = signer.address_bytes();
        let mut vote = Vote {
            vote_id: Uuid::new_v4().as_u128() as u32,
            vote_owner: proposal_owner,
            proposal_id: proposal.proposal_id,
            timestamp: std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .expect("Failed to get current time")
                .as_secs(),
            vote: vote_value,
            parent_hash: Vec::new(),
            received_hash: previous_vote_hash.clone(), // Reference previous vote's hash
            vote_hash: Vec::new(),
            signature: Vec::new(),
        };

        // Compute vote hash
        vote.vote_hash = compute_vote_hash(&vote);
        let vote_bytes = vote.encode_to_vec();
        vote.signature = signer
            .local_sign_message(&vote_bytes)
            .await
            .expect("Failed to sign vote");

        println!(
            "Adding vote {} (value: {}) for proposal {}",
            i, vote_value, proposal.proposal_id
        );
        consensus_service
            .process_incoming_vote(group_name, vote.clone())
            .await
            .expect("Failed to process vote");

        // Update previous vote hash for next iteration
        previous_vote_hash = vote.vote_hash.clone();

        // Small delay between votes
        tokio::time::sleep(Duration::from_millis(50)).await;
    }

    // Wait for consensus result
    let consensus_result = consensus_waiter
        .await
        .expect("Consensus waiter task failed");

    // Verify consensus was reached
    assert!(consensus_result.is_ok());
    let result = consensus_result.unwrap();
    // With 2 no votes and 1 yes vote, consensus should be no (false)
    // However, if it times out, liveness criteria (true) will be applied
    println!("Mixed votes test result: {result}");
    // Don't assert specific result since it depends on timing vs. liveness criteria

    println!("Test completed successfully - consensus reached with mixed votes!");
}

#[tokio::test]
async fn test_rfc_vote_chain_validation() {
    use de_mls::consensus::compute_vote_hash;
    use de_mls::LocalSigner;

    // Create consensus service
    let consensus_service = ConsensusService::new();

    let group_name = "test_rfc_validation";
    let expected_voters_count = 3;

    let signer1 = PrivateKeySigner::random();
    let signer2 = PrivateKeySigner::random();
    let _signer3 = PrivateKeySigner::random();

    // Create first proposal with steward vote
    let proposal = consensus_service
        .create_proposal(
            group_name,
            "Test Proposal".to_string(),
            "Test payload".to_string(),
            signer1.address_bytes(),
            expected_voters_count,
            300,
            true,
        )
        .await
        .expect("Failed to create proposal");

    let proposal = consensus_service
        .vote_on_proposal(group_name, proposal.proposal_id, true, signer1)
        .await
        .expect("Failed to vote on proposal");

    println!("Created proposal with ID: {}", proposal.proposal_id);

    // Create second vote from different voter
    let mut vote2 = Vote {
        vote_id: Uuid::new_v4().as_u128() as u32,
        vote_owner: signer2.address_bytes(),
        proposal_id: proposal.proposal_id,
        timestamp: std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .expect("Failed to get current time")
            .as_secs(),
        vote: true,
        parent_hash: Vec::new(), // Different voter, no parent
        received_hash: proposal.votes[0].vote_hash.clone(), // Should be hash of first vote
        vote_hash: Vec::new(),
        signature: Vec::new(),
    };

    // Compute vote hash and signature
    vote2.vote_hash = compute_vote_hash(&vote2);
    let vote2_bytes = vote2.encode_to_vec();
    vote2.signature = signer2
        .local_sign_message(&vote2_bytes)
        .await
        .expect("Failed to sign vote");

    // Create proposal with two votes from different voters
    let mut test_proposal = proposal.clone();
    test_proposal.votes.push(vote2.clone());

    // Validate the proposal - should pass RFC validation
    let validation_result = consensus_service.validate_proposal(&test_proposal);
    assert!(
        validation_result.is_ok(),
        "RFC validation should pass: {validation_result:?}"
    );

    // Test invalid vote chain (wrong received_hash)
    let mut invalid_proposal = test_proposal.clone();
    invalid_proposal.votes[1].received_hash = vec![0; 32]; // Wrong hash

    let invalid_result = consensus_service.validate_proposal(&invalid_proposal);
    assert!(
        invalid_result.is_err(),
        "Invalid vote chain should be rejected"
    );

    println!("RFC vote chain validation test completed successfully!");
}
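The `validate_proposal` check exercised here can be pictured as a walk down the vote list, comparing each vote's `received_hash` against the preceding vote's `vote_hash`. A simplified model of that walk (the real validator also verifies signatures; the two-field `Vote` struct below is a stand-in, not the protobuf type):

```rust
// Simplified vote record: only the two hash fields that form the chain.
struct Vote {
    vote_hash: Vec<u8>,
    received_hash: Vec<u8>,
}

// A chain is valid when every vote after the first references the
// hash of the vote immediately before it.
fn validate_chain(votes: &[Vote]) -> Result<(), String> {
    for (i, pair) in votes.windows(2).enumerate() {
        if pair[1].received_hash != pair[0].vote_hash {
            return Err(format!("vote {} breaks the chain", i + 1));
        }
    }
    Ok(())
}

fn main() {
    let good = vec![
        Vote { vote_hash: vec![1], received_hash: vec![] },
        Vote { vote_hash: vec![2], received_hash: vec![1] },
    ];
    assert!(validate_chain(&good).is_ok());

    let bad = vec![
        Vote { vote_hash: vec![1], received_hash: vec![] },
        Vote { vote_hash: vec![2], received_hash: vec![9] }, // wrong received_hash
    ];
    assert!(validate_chain(&bad).is_err());
    println!("chain validation ok");
}
```

This mirrors the negative case in the test above, where overwriting `votes[1].received_hash` with a wrong value makes validation fail.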

#[tokio::test]
async fn test_event_driven_timeout() {
    // Create consensus service
    let consensus_service = ConsensusService::new();

    let group_name = "test_group_event_timeout";
    let expected_voters_count = 3;
    let signer = PrivateKeySigner::random();
    let proposal_owner = signer.address_bytes();

    // Create a proposal with only one vote (steward vote) - should timeout and apply liveness criteria
    let proposal = consensus_service
        .create_proposal(
            group_name,
            "Test Proposal".to_string(),
            "Test payload".to_string(),
            proposal_owner,
            expected_voters_count,
            300,
            true, // liveness criteria = true
        )
        .await
        .expect("Failed to create proposal");

    let proposal = consensus_service
        .vote_on_proposal(group_name, proposal.proposal_id, true, signer)
        .await
        .expect("Failed to vote on proposal");

    println!(
        "Created proposal with ID: {} - waiting for timeout",
        proposal.proposal_id
    );

    // Subscribe to consensus events
    let mut consensus_events = consensus_service.subscribe_to_events();
    let proposal_id = proposal.proposal_id;
    let group_name_clone = group_name;

    // Wait for consensus event (should timeout after 10 seconds and apply liveness criteria)
    let timeout_duration = Duration::from_secs(12); // Wait longer than consensus timeout (10s)
    let consensus_result = tokio::time::timeout(timeout_duration, async {
        while let Ok((event_group_name, event)) = consensus_events.recv().await {
            if event_group_name == group_name_clone {
                match event {
                    ConsensusEvent::ConsensusReached {
                        proposal_id: event_proposal_id,
                        result,
                    } => {
                        if event_proposal_id == proposal_id {
                            println!("Consensus reached for proposal {proposal_id}: {result} (via timeout/liveness criteria)");
                            return result;
                        }
                    }
                    ConsensusEvent::ConsensusFailed {
                        proposal_id: event_proposal_id,
                        reason,
                    } => {
                        if event_proposal_id == proposal_id {
                            panic!("Consensus failed for proposal {proposal_id}: {reason}");
                        }
                    }
                }
            }
        }
        panic!("Event channel closed unexpectedly");
    })
    .await
    .expect("Timeout waiting for consensus event");

    // Should be true due to liveness criteria
    assert!(consensus_result);

    println!("Test completed successfully - event-driven timeout worked!");
}

#[tokio::test]
async fn test_liveness_criteria_functionality() {
    // Create consensus service
    let consensus_service = ConsensusService::new();

    let group_name = "test_group_liveness";
    let expected_voters_count = 3;
    let signer = PrivateKeySigner::random();
    let proposal_owner = signer.address_bytes();

    // Test liveness criteria = false
    let proposal_false = consensus_service
        .create_proposal(
            group_name,
            "Test Proposal False".to_string(),
            "Test payload".to_string(),
            proposal_owner.clone(),
            expected_voters_count,
            300,
            false, // liveness criteria = false
        )
        .await
        .expect("Failed to create proposal with liveness_criteria_yes = false");

    // Test liveness criteria getter
    let liveness_false = consensus_service
        .get_proposal_liveness_criteria(group_name, proposal_false.proposal_id)
        .await;
    assert_eq!(liveness_false, Some(false));

    // Test liveness criteria = true
    let proposal_true = consensus_service
        .create_proposal(
            group_name,
            "Test Proposal True".to_string(),
            "Test payload".to_string(),
            proposal_owner,
            expected_voters_count,
            300,
            true, // liveness criteria = true
        )
        .await
        .expect("Failed to create proposal with liveness_criteria_yes = true");

    // Test liveness criteria getter
    let liveness_true = consensus_service
        .get_proposal_liveness_criteria(group_name, proposal_true.proposal_id)
        .await;
    assert_eq!(liveness_true, Some(true));

    // Test non-existent proposal
    let liveness_none = consensus_service
        .get_proposal_liveness_criteria("nonexistent", 99999)
        .await;
    assert_eq!(liveness_none, None);

    println!("Test completed successfully - liveness criteria functionality verified!");
}
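The timeout behaviour these tests rely on amounts to a fallback rule: if the voting window closes before all expected voters have voted, the proposal resolves to its `liveness_criteria_yes` flag rather than a tally. A hedged sketch of that rule (function and parameter names here are illustrative, not the actual `ConsensusService` internals):

```rust
// Sketch of the liveness fallback implied by the timeout tests above.
fn resolve_proposal(
    yes_votes: usize,
    no_votes: usize,
    expected_voters: usize,
    liveness_criteria_yes: bool,
    timed_out: bool,
) -> bool {
    if timed_out && yes_votes + no_votes < expected_voters {
        // Liveness fallback: missing voters must not block the group.
        liveness_criteria_yes
    } else {
        // Normal case: simple majority of the votes actually cast.
        yes_votes > no_votes
    }
}

fn main() {
    // One steward vote out of three expected, then timeout: liveness flag decides.
    assert!(resolve_proposal(1, 0, 3, true, true));
    // All votes in, majority "no": rejected regardless of the liveness flag.
    assert!(!resolve_proposal(1, 2, 3, true, false));
    println!("liveness fallback ok");
}
```

The point of the flag is that a ban or invite proposal still resolves deterministically when some members are offline, instead of hanging the steward epoch.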

@@ -5,10 +5,10 @@ use mls_crypto::openmls_provider::MlsProvider;
#[tokio::test]
async fn test_state_machine_transitions() {
    let crypto = MlsProvider::default();
    let id_steward = random_identity().expect("Failed to create identity");
    let mut id_steward = random_identity().expect("Failed to create identity");

    let mut group = Group::new(
        "test_group".to_string(),
        "test_group",
        true,
        Some(&crypto),
        Some(id_steward.signer()),
@@ -17,82 +17,67 @@ async fn test_state_machine_transitions() {
|
||||
.expect("Failed to create group");
|
||||
|
||||
// Initial state should be Working
|
||||
assert_eq!(group.get_state(), GroupState::Working);
|
||||
assert_eq!(group.get_state().await, GroupState::Working);
|
||||
|
||||
// Test start_steward_epoch
|
||||
group
|
||||
.start_steward_epoch()
|
||||
// Test start_steward_epoch_with_validation
|
||||
let proposal_count = group
|
||||
.start_steward_epoch_with_validation()
|
||||
.await
|
||||
.expect("Failed to start steward epoch");
|
||||
assert_eq!(group.get_state(), GroupState::Waiting);
|
||||
assert_eq!(proposal_count, 0); // No proposals initially
|
||||
assert_eq!(group.get_state().await, GroupState::Working); // Should stay in Working
|
||||
|
||||
// Test start_voting
|
||||
group.start_voting().expect("Failed to start voting");
|
||||
assert_eq!(group.get_state(), GroupState::Voting);
|
||||
// Add some proposals
|
||||
let kp_user = id_steward
|
||||
.generate_key_package(&crypto)
|
||||
.expect("Failed to generate key package");
|
||||
group
|
||||
.store_invite_proposal(Box::new(kp_user))
|
||||
.await
|
||||
.expect("Failed to store proposal");
|
||||
|
||||
// Now start steward epoch with proposals
|
||||
let proposal_count = group
|
||||
.start_steward_epoch_with_validation()
|
||||
.await
|
||||
.expect("Failed to start steward epoch");
|
||||
assert_eq!(proposal_count, 1); // Should have 1 proposal
|
||||
assert_eq!(group.get_state().await, GroupState::Waiting);
|
||||
|
||||
// Test start_voting_with_validation
|
||||
group.start_voting().await.expect("Failed to start voting");
|
||||
assert_eq!(group.get_state().await, GroupState::Voting);
|
||||
|
||||
// Test complete_voting with success
|
||||
group
|
||||
.complete_voting(true)
|
||||
.await
|
||||
.expect("Failed to complete voting");
|
||||
assert_eq!(group.get_state(), GroupState::Waiting);
|
||||
assert_eq!(group.get_state().await, GroupState::ConsensusReached);
|
||||
|
||||
// Test apply_proposals
|
||||
// Test start_waiting_after_consensus
|
||||
group
|
||||
.remove_proposals_and_complete()
|
||||
.start_waiting_after_consensus()
|
||||
.await
|
||||
.expect("Failed to remove proposals");
|
||||
assert_eq!(group.get_state(), GroupState::Working);
|
||||
.expect("Failed to start waiting after consensus");
|
||||
assert_eq!(group.get_state().await, GroupState::Waiting);
|
||||
|
||||
// Test apply_proposals_and_complete
|
||||
group
|
||||
.handle_yes_vote()
|
||||
.await
|
||||
.expect("Failed to apply proposals");
|
||||
assert_eq!(group.get_state().await, GroupState::Working);
|
||||
assert_eq!(group.get_pending_proposals_count().await, 0);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_state_machine_permissions() {
|
||||
let crypto = MlsProvider::default();
|
||||
let id_steward = random_identity().expect("Failed to create identity");
|
||||
|
||||
let mut group = Group::new(
|
||||
"test_group".to_string(),
|
||||
true,
|
||||
Some(&crypto),
|
||||
Some(id_steward.signer()),
|
||||
Some(&id_steward.credential_with_key()),
|
||||
)
|
||||
.expect("Failed to create group");
|
||||
|
||||
// Working state - anyone can send messages
|
||||
assert!(group.can_send_message(false, false)); // Regular user, no proposals
|
||||
assert!(group.can_send_message(true, false)); // Steward, no proposals
|
||||
assert!(group.can_send_message(true, true)); // Steward, with proposals
|
||||
|
||||
// Start steward epoch
|
||||
group
|
||||
.start_steward_epoch()
|
||||
.await
|
||||
.expect("Failed to start steward epoch");
|
||||
|
||||
// Waiting state - only steward with proposals can send messages
|
||||
assert!(!group.can_send_message(false, false)); // Regular user, no proposals
|
||||
assert!(!group.can_send_message(false, true)); // Regular user, with proposals
|
||||
assert!(!group.can_send_message(true, false)); // Steward, no proposals
|
||||
assert!(group.can_send_message(true, true)); // Steward, with proposals
|
||||
|
||||
// Start voting
|
||||
group.start_voting().expect("Failed to start voting");
|
||||
|
||||
// Voting state - no one can send messages
|
||||
assert!(!group.can_send_message(false, false));
|
||||
assert!(!group.can_send_message(false, true));
|
||||
assert!(!group.can_send_message(true, false));
|
||||
assert!(!group.can_send_message(true, true));
|
||||
}

#[tokio::test]
async fn test_invalid_state_transitions() {
    let crypto = MlsProvider::default();
    let id_steward = random_identity().expect("Failed to create identity");
    let mut id_steward = random_identity().expect("Failed to create identity");

    let mut group = Group::new(
        "test_group".to_string(),
        "test_group",
        true,
        Some(&crypto),
        Some(id_steward.signer()),
@@ -100,26 +85,54 @@ async fn test_invalid_state_transitions() {
    )
    .expect("Failed to create group");

    // Cannot start voting from Working state
    let result = group.start_voting();
    assert!(matches!(result, Err(GroupError::InvalidStateTransition)));

    // Cannot complete voting from Working state
    let result = group.complete_voting(true);
    assert!(matches!(result, Err(GroupError::InvalidStateTransition)));
    let result = group.complete_voting(true).await;
    assert!(matches!(
        result,
        Err(GroupError::InvalidStateTransition { .. })
    ));

    // Cannot apply proposals from Working state
    let result = group.remove_proposals_and_complete().await;
    assert!(matches!(result, Err(GroupError::InvalidStateTransition)));
    let result = group.handle_yes_vote().await;
    assert!(matches!(
        result,
        Err(GroupError::InvalidStateTransition { .. })
    ));

    // Start steward epoch
    group
        .start_steward_epoch()
    // Start steward epoch - but there are no proposals, so it should stay in Working state
    let proposal_count = group
        .start_steward_epoch_with_validation()
        .await
        .expect("Failed to start steward epoch");
    assert_eq!(proposal_count, 0); // No proposals
    assert_eq!(group.get_state().await, GroupState::Working); // Should still be in Working state

    // Cannot apply proposals from Working state (even after steward epoch start with no proposals)
    let result = group.handle_yes_vote().await;
    assert!(matches!(
        result,
        Err(GroupError::InvalidStateTransition { .. })
    ));

    // Add a proposal to actually transition to Waiting state
    let kp_user = id_steward
        .generate_key_package(&crypto)
        .expect("Failed to generate key package");
    group
        .store_invite_proposal(Box::new(kp_user))
        .await
        .expect("Failed to store proposal");

    // Now start steward epoch with proposals - should transition to Waiting
    let proposal_count = group
        .start_steward_epoch_with_validation()
        .await
        .expect("Failed to start steward epoch");
    assert_eq!(proposal_count, 1); // Should have 1 proposal
    assert_eq!(group.get_state().await, GroupState::Waiting); // Should now be in Waiting state

    // Can apply proposals from Waiting state (even with no proposals)
    let result = group.remove_proposals_and_complete().await;
    let result = group.handle_yes_vote().await;
    assert!(result.is_ok());
}
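The transitions these tests walk through (Working to Waiting to Voting to ConsensusReached, then back through Waiting to Working) can be summarised as a small transition table. This is only an illustration of the flow the tests exercise, with event names borrowed from the test methods, not the actual `Group` implementation:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum GroupState {
    Working,
    Waiting,
    Voting,
    ConsensusReached,
}

// Allowed transitions mirroring the test flow; everything else is rejected,
// which is what the InvalidStateTransition assertions above check.
fn transition(state: GroupState, event: &str) -> Result<GroupState, String> {
    use GroupState::*;
    match (state, event) {
        (Working, "steward_epoch_with_proposals") => Ok(Waiting),
        // The edge-case test shows start_voting succeeding from Working;
        // the real implementation may pass through Waiting internally.
        (Working, "start_voting") => Ok(Voting),
        (Waiting, "start_voting") => Ok(Voting),
        (Voting, "complete_voting") => Ok(ConsensusReached),
        (ConsensusReached, "start_waiting_after_consensus") => Ok(Waiting),
        (Waiting, "handle_yes_vote") => Ok(Working),
        (s, e) => Err(format!("invalid transition from {s:?} on {e}")),
    }
}

fn main() {
    // Happy path through one steward epoch.
    let mut s = GroupState::Working;
    for e in [
        "steward_epoch_with_proposals",
        "start_voting",
        "complete_voting",
        "start_waiting_after_consensus",
        "handle_yes_vote",
    ] {
        s = transition(s, e).expect("valid transition");
    }
    assert_eq!(s, GroupState::Working);

    // Applying proposals straight from Working must fail.
    assert!(transition(GroupState::Working, "handle_yes_vote").is_err());
    println!("state machine ok");
}
```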

@@ -130,7 +143,7 @@ async fn test_proposal_counting() {
    let mut id_user = random_identity().expect("Failed to create identity");

    let mut group = Group::new(
        "test_group".to_string(),
        "test_group",
        true,
        Some(&crypto),
        Some(id_steward.signer()),
@@ -148,28 +161,111 @@ async fn test_proposal_counting() {
        .await
        .expect("Failed to store proposal");
    group
        .store_remove_proposal(vec![1, 2, 3])
        .store_remove_proposal("test_user".to_string())
        .await
        .expect("Failed to put remove proposal");

    // Start steward epoch - should collect proposals
    group
        .start_steward_epoch()
    let proposal_count = group
        .start_steward_epoch_with_validation()
        .await
        .expect("Failed to start steward epoch");
    assert_eq!(proposal_count, 2); // Should have 2 proposals
    assert_eq!(group.get_state().await, GroupState::Waiting);
    assert_eq!(group.get_voting_proposals_count().await, 2);

    // Complete the flow
    group.start_voting().expect("Failed to start voting");
    group.start_voting().await.expect("Failed to start voting");
    group
        .complete_voting(true)
        .await
        .expect("Failed to complete voting");
    group
        .remove_proposals_and_complete()
        .handle_yes_vote()
        .await
        .expect("Failed to remove proposals");
        .expect("Failed to apply proposals");

    // Proposals count should be reset
    assert_eq!(group.get_voting_proposals_count().await, 0);
    assert_eq!(group.get_pending_proposals_count().await, 0);
}

#[tokio::test]
async fn test_steward_validation() {
    let _crypto = MlsProvider::default();
    let _id_steward = random_identity().expect("Failed to create identity");

    // Create group without steward
    let mut group = Group::new(
        "test_group",
        false, // No steward
        None,
        None,
        None,
    )
    .expect("Failed to create group");

    // Should fail to start steward epoch without steward
    let result = group.start_steward_epoch_with_validation().await;
    assert!(matches!(result, Err(GroupError::StewardNotSet)));
}

#[tokio::test]
async fn test_consensus_result_handling() {
    let crypto = MlsProvider::default();
    let id_steward = random_identity().expect("Failed to create identity");

    let mut group = Group::new(
        "test_group",
        true,
        Some(&crypto),
        Some(id_steward.signer()),
        Some(&id_steward.credential_with_key()),
    )
    .expect("Failed to create group");

    // Start steward epoch and voting
    group
        .start_steward_epoch_with_validation()
        .await
        .expect("Failed to start steward epoch");
    group.start_voting().await.expect("Failed to start voting");

    // Test consensus result handling for steward
    let result = group.complete_voting(true).await;
    assert!(result.is_ok());
    assert_eq!(group.get_state().await, GroupState::ConsensusReached);

    // Test invalid consensus result handling (not in voting state)
    let result = group.complete_voting(true).await;
    assert!(matches!(
        result,
        Err(GroupError::InvalidStateTransition { .. })
    ));
}

#[tokio::test]
async fn test_voting_validation_edge_cases() {
    let _crypto = MlsProvider::default();
    let _id_steward = random_identity().expect("Failed to create identity");

    let mut group = Group::new(
        "test_group",
        true,
        Some(&_crypto),
        Some(_id_steward.signer()),
        Some(&_id_steward.credential_with_key()),
    )
    .expect("Failed to create group");

    // Test starting voting from Working state (should transition to Waiting first)
    group.start_voting().await.expect("Failed to start voting");
    assert_eq!(group.get_state().await, GroupState::Voting);

    // Test starting voting from Voting state (should fail)
    let result = group.start_voting().await;
    assert!(matches!(
        result,
        Err(GroupError::InvalidStateTransition { .. })
    ));
}

tests/user_test.rs: 1445 lines (diff suppressed because it is too large)