TryLater runs periodically (around every 5 minutes), so if the user's suspension
expires within that window and they log in afterward (datetime.now > timeout date),
days_remaining_in_timeout will be 0 while account.in_timeout is still True, which
looks identical to a permanent suspension. As a result, the modal showed the user
as permanently suspended until the next TryLater run.
Adding a one-hour buffer makes the suspension show one day remaining rather than
appearing permanent, provided the TryLater queue isn't backed up by more than an
hour.
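The buffered calculation can be sketched roughly as below. This is a hypothetical illustration, not the actual r2 code; the function name, the `TIMEOUT_BUFFER` constant, and the rounding behavior are all assumptions:

```python
from datetime import datetime, timedelta

# Assumed one-hour pad covering the gap between suspension expiry and
# the next TryLater run (hypothetical constant name).
TIMEOUT_BUFFER = timedelta(hours=1)

def days_remaining_in_timeout(timeout_expiration):
    """Days left in a temporary suspension, padded by a one-hour buffer.

    Without the buffer, a user who logs in after their suspension has
    expired but before TryLater has run would see 0 days remaining,
    which renders the same as a permanent suspension.
    """
    remaining = (timeout_expiration + TIMEOUT_BUFFER) - datetime.now()
    if remaining <= timedelta(0):
        return 0
    # A freshly expired (but not yet processed) suspension still shows
    # one day remaining instead of 0.
    return max(remaining.days, 1)
```

With this sketch, an expiry five minutes in the past still reports one day remaining, while an expiry more than an hour old reports zero.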
The cids, depth, and parents will always be calculated on the fly from the tree.
This lets us store less data in permacache, so reading and writing should be
faster.
The tree takes roughly 25% as much space as the tree plus cids, depth,
and parents.
We will calculate cids, depth, and parents from the tree, but continue
to update and write these values. This will let us gather timings on
the calculation step and revert safely if needed.
In testing, calculating the cids, depth, and parents for an example
tree with 16,000 comments takes ~7ms.
This will allow us to stop storing cids, depth, and parents and
instead only store tree and calculate the rest.
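The on-the-fly calculation can be sketched as a single walk of the stored tree. This is a hypothetical sketch, assuming the tree is a dict mapping a parent comment id (None for top level) to the list of its child ids; the actual representation and function name in the code may differ:

```python
def compute_tree_metadata(tree):
    """Derive cids, depth, and parents from the tree alone.

    tree: dict mapping parent comment id -> list of child comment ids,
    with None as the key for top-level comments (assumed layout).
    """
    cids = []      # flat list of every comment id in the tree
    depth = {}     # comment id -> nesting level (top level is 0)
    parents = {}   # comment id -> parent comment id (None for roots)

    # Iterative depth-first walk starting at the top-level comments.
    stack = [(cid, None, 0) for cid in tree.get(None, [])]
    while stack:
        cid, parent, level = stack.pop()
        cids.append(cid)
        depth[cid] = level
        parents[cid] = parent
        for child in tree.get(cid, []):
            stack.append((child, cid, level + 1))
    return cids, depth, parents
```

Each comment is visited exactly once, so the walk is linear in the number of comments, consistent with the ~7ms figure above for a 16,000-comment tree.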
If a set of events is causing an HTTP 413 response from the actual event
collector, this takes them out of the main queue, preventing the processor
from repeatedly crashing on them, and moves them into a separate queue that
we can monitor and process on its own.
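The routing logic amounts to something like the following sketch. The queue names, the `post_to_collector` and `requeue` callables, and the exception handling are assumptions for illustration, not the actual r2 API:

```python
def process_event_batch(batch, post_to_collector, requeue):
    """Send a batch to the event collector, diverting oversize batches.

    post_to_collector: callable posting the batch, returning a response
    with a status_code attribute (assumed interface).
    requeue: callable(queue_name, batch) that re-enqueues the batch
    (assumed interface; queue names are hypothetical).
    """
    try:
        response = post_to_collector(batch)
    except ConnectionError:
        # Transient failure: put the batch back for a normal retry.
        requeue("event_collector", batch)
        return
    if response.status_code == 413:
        # Payload too large: move the batch to a side queue we can
        # inspect manually instead of crash-looping on it forever.
        requeue("event_collector_oversize", batch)
```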
There were a couple of errors in the way the maximum event size was being
limited:
* Only the maximum batch size was being accounted for, not the maximum
size of an individual event (which is 20% of the batch limit).
* When truncation was done, we were adding an is_truncated field to the
event, but the truncation didn't account for the size of this new
field.
This commit fixes these issues and moves the truncation-handling into a
new wrapper class called PublishableEvent, instead of doing it inside
the queue-processor itself. It also takes advantage of the
application_headers support on amqp messages to send info about which
field is truncatable separate from the actual event data. This lets us
avoid needing to deserialize the JSON unless truncation is actually
necessary (and supported).
Graphite events are also added so that we can more easily track how
often oversize events need to be truncated or dropped.
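The wrapper's truncation logic can be sketched as below. This is a hypothetical reconstruction: the batch limit value, method names, and internals are assumptions; only the class name `PublishableEvent`, the 20% individual cap, and the `is_truncated` field come from the commit:

```python
import json

# Assumed batch limit for illustration; the real value lives in config.
MAX_BATCH_SIZE = 500 * 1024
# Individual events are capped at 20% of the batch limit.
MAX_EVENT_SIZE = MAX_BATCH_SIZE // 5


class PublishableEvent:
    def __init__(self, raw_json, truncatable_field=None):
        # The truncatable field's name arrives via the amqp message's
        # application_headers, so we never deserialize raw_json unless
        # truncation is actually necessary (and supported).
        self.raw_json = raw_json
        self.truncatable_field = truncatable_field

    def for_publish(self):
        """Return JSON fitting MAX_EVENT_SIZE, or None to drop the event."""
        if len(self.raw_json) <= MAX_EVENT_SIZE:
            return self.raw_json
        if not self.truncatable_field:
            return None  # oversize and nothing we can safely truncate
        event = json.loads(self.raw_json)
        original = event[self.truncatable_field]
        # Measure the overhead with the field emptied *and* the
        # is_truncated flag already added, so the flag's own size is
        # accounted for (the second bug above).
        event["is_truncated"] = True
        event[self.truncatable_field] = ""
        overhead = len(json.dumps(event))
        event[self.truncatable_field] = original[:MAX_EVENT_SIZE - overhead]
        return json.dumps(event)
```

The key detail is that the size budget is computed after adding `is_truncated`, so the truncated event cannot overflow the limit by the size of the flag itself.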
Hypothesis: Threads with a high rate of new comments cause Cassandra
performance issues by constantly mutating their permacache entry and increasing
the number of compactions required. The majority of the problem is the velocity
of the updates, not their size.
Removing the comment from the depth dict isn't necessary, because deleted
comments are handled in the builder. Worse, deleting the comment from the
depth dict can leave the comment tree inconsistent when attempting to add
a child comment.
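A toy illustration of the inconsistency (not r2 code; the helper below is hypothetical). A child's depth is derived from its parent's entry, so dropping a deleted comment from the dict breaks insertion of any later reply to it:

```python
def add_comment(depth, cid, parent_id):
    # A child's depth comes from its parent's entry, so the parent must
    # still be present in the dict even if the comment was deleted.
    depth[cid] = depth[parent_id] + 1

depth = {1: 0, 2: 1}   # comment 2 is a reply to comment 1
del depth[2]           # comment 2 deleted and removed from depth

try:
    add_comment(depth, 3, parent_id=2)  # a reply to the deleted comment
except KeyError:
    # The tree is now inconsistent; keeping deleted comments in the
    # depth dict (and filtering in the builder) avoids this.
    pass
```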
New client-side screenview event. Event data is added explicitly to
`r.config.event_target` via `extra_js_config`, and via `render_params`
for listings. No URL parsing is done in JavaScript.
Using a sort value for sticky comments was causing a lot of pain in
rendering edge cases (specifically hiding children). With this approach
we special-case the sticky comment directly and explicitly do not render
its children when viewing top-level comments, which is much cleaner.
P.S. This is almost entirely @bsimpson63's code and idea, with some
extra docs. Thank you Brian!
If the data layer variable (`googleTagManager` in this case) isn't initialized
when GTM loads, then you cannot use page view triggers with data layer
variables.