This wraps the reader side in a BufReader, and read_from_stream now
reads byte-by-byte from the stream through an intermediate buffer.
The BufReader docs[0] note that buffering like this is advantageous
for small and repeated reads, such as these reads from a socket.
read_from_stream stops reading after reaching LF, and optionally pops
CR from the back of the buffer to handle CRLF endings.
[0] https://docs.rs/smol/latest/smol/io/struct.BufReader.html
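The read-until-LF logic described above can be sketched as follows. This is a hypothetical reconstruction (using std's BufReader in place of smol's, which exposes the same buffering behavior), not the actual code:

```rust
use std::io::{self, BufReader, Cursor, Read};

// Read from the (buffered) stream one byte at a time into `line`,
// stopping at LF; a trailing CR is popped so CRLF endings also work.
fn read_from_stream<R: Read>(reader: &mut R) -> io::Result<Vec<u8>> {
    let mut line = Vec::new();
    let mut byte = [0u8; 1]; // intermediate one-byte buffer
    loop {
        reader.read_exact(&mut byte)?;
        if byte[0] == b'\n' {
            break;
        }
        line.push(byte[0]);
    }
    if line.last() == Some(&b'\r') {
        line.pop(); // handle CRLF line endings
    }
    Ok(line)
}

fn main() -> io::Result<()> {
    // BufReader makes the repeated one-byte reads cheap: each read_exact
    // hits the in-memory buffer instead of the underlying socket.
    let mut reader = BufReader::new(Cursor::new(b"hello\r\nworld\n".to_vec()));
    assert_eq!(read_from_stream(&mut reader)?, b"hello");
    assert_eq!(read_from_stream(&mut reader)?, b"world");
    Ok(())
}
```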
* More explicit error message for users who have failed to generate their own key pair.
* Tweaked error message.
---------
Co-authored-by: dark-john <dark-john@fake-email.com>
There was an error in calculating the timestamp for the next "rotation",
i.e. when the Dag should be pruned.
This commit fixes:
- An underflow when calculating the sleep time for the Dag pruning
process
- An off-by-one error that caused next_rotation_timestamp to give a
timestamp in the past
Unit tests have also been added to prevent the above problems from
occurring again. They should be general enough to catch rotation periods
more complex than the period of `1` we are currently using.
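The corrected arithmetic can be illustrated with a sketch. The function names and seconds-based units here are assumptions for illustration, not the actual code:

```rust
// Hypothetical sketch of the corrected rotation arithmetic.
// `genesis` and `now` are UNIX timestamps; `period` is the rotation
// period, all in seconds.
fn next_rotation_timestamp(genesis: u64, period: u64, now: u64) -> u64 {
    // Count the full periods already elapsed since genesis, then add
    // one more: this guarantees the result is strictly in the future,
    // fixing the off-by-one that returned a timestamp in the past.
    let elapsed = now - genesis;
    genesis + period * (elapsed / period + 1)
}

// Sleep time until the next rotation. checked_sub guards against the
// underflow that occurred when the "next" rotation was in the past.
fn sleep_secs(next_rotation: u64, now: u64) -> u64 {
    next_rotation.checked_sub(now).unwrap_or(0)
}

fn main() {
    let genesis = 0;
    let period = 86_400; // one day
    let now = 90_000;
    let next = next_rotation_timestamp(genesis, period, now);
    assert_eq!(next, 172_800); // strictly in the future
    assert_eq!(sleep_secs(next, now), 82_800);
    // Exactly on a boundary, the next rotation is one full period away.
    assert_eq!(next_rotation_timestamp(0, period, 86_400), 172_800);
}
```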
Change approach to dynamically calculate Genesis rather than storing it
in the EventGraph struct.
Add error-level log messages for the case where a peer requests an
outdated Event and the node responds without an error. This means our
Dag still contains outdated events, which indicates that the previous
prune failed.
Added debug statements for when an EventPut occurs for an event whose
timestamp is older than the Genesis event of the EventGraph.
Ideally this shouldn't happen, but it could theoretically occur if a
node does not properly prune its DAG.
In order to do this, `genesis` and `days_rotation` were added as fields
to EventGraph and 'getter' methods were added to retrieve these values.
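A minimal sketch of what the getters and the dynamic Genesis calculation might look like. The rotation-boundary formula and field names below are assumptions for illustration, not the actual EventGraph implementation:

```rust
// Hypothetical sketch: EventGraph derives its Genesis timestamp on
// demand instead of storing it, and exposes getters for both values.
struct EventGraph {
    days_rotation: u64,
}

const SECS_PER_DAY: u64 = 86_400;

impl EventGraph {
    fn days_rotation(&self) -> u64 {
        self.days_rotation
    }

    // Genesis computed dynamically as the most recent rotation
    // boundary before `now` (an assumed formula for illustration).
    fn genesis_timestamp(&self, now: u64) -> u64 {
        let period = self.days_rotation * SECS_PER_DAY;
        now - (now % period)
    }
}

fn main() {
    let eg = EventGraph { days_rotation: 1 };
    assert_eq!(eg.days_rotation(), 1);
    assert_eq!(eg.genesis_timestamp(90_000), 86_400);
}
```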
Refactor the Tor Dialers to return an error instead of panicking via
unwrap(). In practice we don't expect these errors to occur, because
new dialers are instantiated through a macro that ensures endpoints
have valid hosts and ports.
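The unwrap-to-error refactor follows the usual Rust pattern; the endpoint parsing below is a simplified, hypothetical stand-in for the dialer code:

```rust
use std::fmt;

#[derive(Debug)]
struct InvalidEndpoint(String); // illustrative error type

impl fmt::Display for InvalidEndpoint {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "invalid endpoint: {}", self.0)
    }
}

// Instead of endpoint.host().unwrap() / endpoint.port().unwrap(),
// missing parts become an error the caller can handle gracefully.
fn host_port(endpoint: &str) -> Result<(String, u16), InvalidEndpoint> {
    let (host, port) = endpoint
        .rsplit_once(':')
        .ok_or_else(|| InvalidEndpoint(endpoint.into()))?;
    let port: u16 = port
        .parse()
        .map_err(|_| InvalidEndpoint(endpoint.into()))?;
    Ok((host.to_string(), port))
}

fn main() {
    assert_eq!(
        host_port("example.onion:9050").unwrap(),
        ("example.onion".to_string(), 9050)
    );
    // Previously these would have panicked; now they return Err.
    assert!(host_port("no-port").is_err());
    assert!(host_port("host:notanumber").is_err());
}
```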
This will initialize the sled tree on demand. Additionally, we make the
handling more robust and disconnect the client on any sled errors that
might occur.
We also make mark_seen() write atomically into the sled tree via sled::Batch.