We previously deleted the call to `unregister()` after a successful
refinery run in a16dd562d9.
It was considered redundant since `unregister()` is already called in
`subscribe_on_stop()`, which for refine sessions runs directly after
the refinery process finishes (after `channel.stop()`).
However, in a highly async environment the `unregister()` call in
`subscribe_on_stop()` can run before the call to `move_host()`, leaving
the host stuck in the `Moving` state.
We have fixed this by specifying that `unregister()` should only be called
in `subscribe_on_stop()` for peers that are not part of a refine session.
We separately call `unregister()` after `move_host()` in the refinery.
This commit also fixes some documentation.
Note: the call to `unregister()` is highly fragile and can lead to race
conditions. We are working to replace this with something more robust
(like `tombstone()`).
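To make the new ordering concrete, here is a minimal self-contained sketch of the pattern; the types, method bodies, and flag are illustrative stand-ins, not the actual net code:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

#[derive(Clone, Copy, Debug, PartialEq)]
enum HostState {
    Moving,
    Suspended,
}

struct Hosts(Mutex<HashMap<String, HostState>>);

impl Hosts {
    fn move_host(&self, addr: &str, state: HostState) {
        self.0.lock().unwrap().insert(addr.into(), state);
    }

    fn unregister(&self, addr: &str) {
        self.0.lock().unwrap().remove(addr);
    }
}

fn subscribe_on_stop(hosts: &Hosts, addr: &str, is_refine_session: bool) {
    // Refine-session peers are skipped here: the refinery owns their
    // unregister() call, so it cannot race ahead of move_host().
    if !is_refine_session {
        hosts.unregister(addr);
    }
}

fn refinery_finish(hosts: &Hosts, addr: &str) {
    // unregister() runs strictly after move_host(), so the host can
    // never be left stuck in the Moving state.
    hosts.move_host(addr, HostState::Suspended);
    hosts.unregister(addr);
}

fn main() {
    let hosts = Hosts(Mutex::new(HashMap::new()));
    hosts.move_host("tcp://peer:8443", HostState::Moving);

    // Refine session: subscribe_on_stop() is now a no-op for this peer.
    subscribe_on_stop(&hosts, "tcp://peer:8443", true);
    refinery_finish(&hosts, "tcp://peer:8443");

    assert!(hosts.0.lock().unwrap().is_empty());
}
```

The key invariant is that for refine-session peers, `unregister()` only ever runs after `move_host()` has completed.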
- All RPC calls now use the same fn to perform requests towards darkfid
- Moved all RPC-related Drk fns to rpc.rs
- Fixed subscribe: if darkfid went offline, the drk subscription errored and drk hung
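A rough sketch of the consolidation; the request method, RPC method name, and error type below are hypothetical, not drk's actual API:

```rust
use serde_json::{json, Value};

// Hypothetical error type standing in for drk's actual RPC error.
#[derive(Debug)]
struct RpcError(String);

struct Drk {
    endpoint: String,
}

impl Drk {
    // Single entry point for every request towards darkfid, so connection
    // handling and error propagation live in one place (rpc.rs).
    async fn darkfid_request(&self, method: &str, params: Value) -> Result<Value, RpcError> {
        // A real implementation opens a connection to self.endpoint, sends the
        // JSON-RPC request, and awaits the reply. Transport failures map into
        // RpcError so callers (including subscriptions) fail fast instead of hanging.
        Err(RpcError(format!("{}({}) -> {}: transport elided in this sketch", method, params, self.endpoint)))
    }

    // Every higher-level wrapper funnels through darkfid_request().
    async fn last_block_height(&self) -> Result<u64, RpcError> {
        let rep = self.darkfid_request("blockchain.last_known_block", json!([])).await?;
        rep.as_u64().ok_or_else(|| RpcError("unexpected reply".into()))
    }
}
```

With a single request path, the subscribe fix reduces to propagating connection errors out of that one place instead of letting the subscription hang.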
unregister() will get called when the refine session channel disconnects
in session::remove_sub_on_stop(). Calling it here is actually dangerous
and creates rare race conditions.
We organize this functionality into distinct methods which get called
higher up: for example, rather than manually resizing inside store(),
we call resize() after we call store().
This is about reducing the "critical section" where locks are held and
using function scopes to ensure locks are released as quickly as possible.
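A minimal sketch of the shape this takes; the Buffer type and eviction policy are illustrative, not the actual code:

```rust
use std::sync::Mutex;

struct Buffer {
    items: Mutex<Vec<u64>>,
    max_size: usize,
}

impl Buffer {
    // store() only appends now; it no longer resizes internally.
    fn store(&self, item: u64) {
        let mut items = self.items.lock().unwrap();
        items.push(item);
    } // guard dropped at the end of the function scope

    // resize() is a distinct method the caller invokes after store().
    fn resize(&self) {
        let mut items = self.items.lock().unwrap();
        let len = items.len();
        if len > self.max_size {
            // Drop the oldest entries to get back under the limit.
            items.drain(..len - self.max_size);
        }
    }
}

fn main() {
    let buf = Buffer { items: Mutex::new(Vec::new()), max_size: 2 };
    for i in 0..5 {
        buf.store(i);
        buf.resize(); // called higher up, after store()
    }
    assert_eq!(*buf.items.lock().unwrap(), vec![3, 4]);
}
```

Keeping store() and resize() separate means each method holds the lock only for its own short critical section, and each guard is released at the end of its function scope.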
Rationale: using a sync Mutex wherever possible is the recommended
method.
Additionally, using a sync Mutex here fixes some really weird fairness
behaviors we observed in the smol::lock::RwLock where writers in the
priority queue were occasionally ignored.
We don't need to use an AsyncMutex here since we're not holding across .await points or for long periods of time.
Using a sync Mutex here also fixes some really weird fairness behaviors we observed in the smol::lock::Mutex where writers in the priority queue were occasionally getting ignored. This was apparently not a deadlock since subsequent and prior readers and writers were able to access the data with no problems.
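The pattern, as a minimal sketch; the counter and the awaited task are stand-ins for the actual shared state and work:

```rust
use std::sync::{Arc, Mutex};

// A sync Mutex is fine in async code as long as the guard never
// lives across an .await point.
async fn record(counter: Arc<Mutex<u64>>) {
    {
        // Short critical section: lock, mutate, release immediately.
        let mut c = counter.lock().unwrap();
        *c += 1;
    } // guard dropped here, before any await point

    do_async_work().await; // no lock held while we yield
}

async fn do_async_work() {
    // Placeholder for I/O or other awaited work.
}

fn main() {
    let counter = Arc::new(Mutex::new(0));
    smol::block_on(record(counter.clone()));
    assert_eq!(*counter.lock().unwrap(), 1);
}
```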