We don't need to use an AsyncMutex here since we never hold the lock across an .await point or for long periods of time.
Using a sync Mutex here also fixes some odd fairness behavior we observed with smol::lock::Mutex, where writers in the priority queue were occasionally ignored. This was apparently not a deadlock, since subsequent and prior readers and writers were able to access the data without problems.
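A minimal sketch of the pattern, with hypothetical names (Node, peer_count, on_connect): the std::sync::Mutex guard is dropped before any .await point, so a sync lock never blocks the executor here.

```rust
use std::sync::Mutex;

struct Node {
    // Plain data, never held across an .await
    peer_count: Mutex<usize>,
}

impl Node {
    async fn on_connect(&self) {
        {
            // Short critical section: lock, mutate, drop the guard
            let mut count = self.peer_count.lock().unwrap();
            *count += 1;
        } // guard dropped here, before we await anything

        self.broadcast_update().await;
    }

    async fn broadcast_update(&self) {
        // notify peers (elided)
    }
}
```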
We check whether there are any remaining channels when we remove a
channel in remove_sub_on_stop(). If the channel list is empty,
we call notify() on disconnect_publisher and set its inner value to true.
Note that this only signals when we have no connections at all; the
value is not reset to false when new connections are formed.
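A hedged sketch of this signalling; remove_sub_on_stop() and disconnect_publisher come from the note above, while the surrounding types (P2p, Publisher, u64 channel IDs) are stand-ins for illustration.

```rust
use std::sync::Mutex;

struct Publisher {
    value: Mutex<bool>,
}

impl Publisher {
    fn notify(&self) {
        // wake anyone waiting on the disconnect signal (elided)
    }
}

struct P2p {
    channels: Mutex<Vec<u64>>, // channel IDs, stand-in type
    disconnect_publisher: Publisher,
}

impl P2p {
    fn remove_sub_on_stop(&self, channel_id: u64) {
        let mut channels = self.channels.lock().unwrap();
        channels.retain(|id| *id != channel_id);

        // Only signal when the last channel is gone; the flag is never
        // reset to false when new connections are formed.
        if channels.is_empty() {
            *self.disconnect_publisher.value.lock().unwrap() = true;
            self.disconnect_publisher.notify();
        }
    }
}
```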
Previously we locked the greylist to prevent the index from changing
while performing a handshake. This isn't necessary: it is acceptable
for the greylist index to change while the handshake is ongoing.
What we must avoid is the addr we are operating on being removed from
the list. This should never happen, since it has been marked as
"Refine" in the HostRegistry. Therefore, calling unwrap()
on get_index_at_addr() should succeed 100% of the time (and if it doesn't,
that indicates a logic failure elsewhere in the code).
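A rough sketch of that invariant under assumed types (Hosts, the (addr, last_seen) tuple); only get_index_at_addr() and the unwrap() come from the note above.

```rust
use std::sync::Mutex;

struct Hosts {
    greylist: Mutex<Vec<(String, u64)>>, // (addr, last_seen), stand-in
}

impl Hosts {
    fn get_index_at_addr(&self, addr: &str) -> Option<usize> {
        self.greylist
            .lock()
            .unwrap()
            .iter()
            .position(|(a, _)| a.as_str() == addr)
    }

    fn after_handshake(&self, addr: &str) -> usize {
        // No greylist lock is held during the handshake; we re-resolve
        // the index afterwards. The entry may have moved, but it cannot
        // have been removed while marked "Refine", so a None here is a
        // logic failure elsewhere, not a runtime condition: unwrap() is
        // the honest expression of that invariant.
        self.get_index_at_addr(addr).unwrap()
    }
}
```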
RwLock is overkill in this case since there is only ever one reader and
one writer.
For more info on why a sync Mutex is appropriate in this case, see commit 65a8e9a44fa3c835158550e7eb5b5e1946e3028f
Locks should be sync for simple data operations and async if:
1. the lock must be held across an .await point
2. the new data being stored in the lock is calculated from data already inside the lock
Since updating ipv6_available is a fast and simple data operation, it is more
appropriate to use a sync Mutex here.
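A small sketch applying this rule to ipv6_available (the Settings wrapper is an assumption): the new value comes from outside the lock and the guard is never held across an .await, so a sync Mutex suffices.

```rust
use std::sync::Mutex;

struct Settings {
    ipv6_available: Mutex<bool>,
}

impl Settings {
    fn set_ipv6_available(&self, available: bool) {
        // The new value is computed outside the lock, and the guard is
        // dropped immediately: neither async-lock criterion applies.
        *self.ipv6_available.lock().unwrap() = available;
    }

    fn ipv6_available(&self) -> bool {
        *self.ipv6_available.lock().unwrap()
    }
}
```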
unwrap() can panic in the rare case that a node disconnects while we are
in the middle of accepting a connection from it.
If that happens, we should instead exit with an error.
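A sketch of that fix under assumed types (Acceptor, Channel, and Error::ChannelStopped are hypothetical): the missing entry becomes an error return rather than a panic.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

#[derive(Debug)]
enum Error {
    ChannelStopped,
}

#[derive(Clone)]
struct Channel; // stand-in

struct Acceptor {
    channels: Mutex<HashMap<String, Channel>>,
}

impl Acceptor {
    fn accept(&self, addr: &str) -> Result<Channel, Error> {
        // The node may disconnect while we are mid-accept, so a missing
        // entry is a legitimate runtime condition: exit with an error
        // rather than calling unwrap() and panicking.
        self.channels
            .lock()
            .unwrap()
            .get(addr)
            .cloned()
            .ok_or(Error::ChannelStopped)
    }
}
```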
Selecting a random port allows us to run the test on the same machine
concurrently without conflicts.
Note that selecting a port for the seed node and seed sync session is
fairly safe, since we immediately start the node after the port is
selected, meaning most of the time the same port will not come up as
available if another test is being run concurrently.
However, there is slightly less certainty with the manual session, since
we first generate a list of available ports and then start the nodes in
a loop. This creates a slight delay and increases the probability that
another concurrently running test could select one of those ports as
available.
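One common way to pick a random available port for a test, sketched here rather than taken from the project's actual helper: bind to port 0 and let the OS assign a free port.

```rust
use std::net::TcpListener;

/// Ask the OS for a currently free port by binding to port 0.
fn random_available_port() -> u16 {
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind failed");
    let port = listener.local_addr().expect("no local addr").port();
    // Dropping the listener releases the port. Another concurrent test
    // can grab it in the gap before our node binds it, which is the
    // residual risk noted above.
    drop(listener);
    port
}

fn main() {
    let port = random_available_port();
    println!("selected port {port}");
}
```

The port is only reserved while the listener is alive, so the race window is the gap between dropping the listener and the node binding the port: starting the node immediately (the seed node case) keeps that window small, while the manual session's list-then-start loop widens it.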