timetext() deals with seconds, and it currently gives incorrect
results for any time period longer than the maximum 32-bit signed int
value of 2147483647 seconds (about 68 years). The total number of
seconds was already declared as a long, but one of the intermediate
variables used in the calculations was still an int, which caused the
overflow. This switches that variable to long as well, resolving the
problem.
The unused variable `count2` was also removed.
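The overflow can be reproduced with a minimal sketch, using ctypes to
emulate C-style fixed-width integers (the variable names are
illustrative, not from timetext() itself):

```python
import ctypes

INT32_MAX = 2 ** 31 - 1  # 2147483647

# ~70 years in seconds; fits in a long but not in a 32-bit int
total_seconds = 70 * 365 * 24 * 60 * 60

# a 32-bit intermediate silently wraps around...
as_int32 = ctypes.c_int32(total_seconds).value
# ...while a 64-bit (long) intermediate keeps the value intact
as_int64 = ctypes.c_int64(total_seconds).value

print(as_int32 < 0)                   # True: wrapped to negative garbage
print(as_int64 == total_seconds)      # True: value preserved
```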
The add() function uses a get() at the beginning to avoid creating
duplicate relations, but concurrent requests can both get past this
check, and the later of the two commits will then throw a
CreationError. This handles that case as well and returns None, the
same as when the initial get() determines that the rel already
exists.
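The resulting control flow can be sketched as follows (CreationError,
Rel, and the backing store are simplified stand-ins for the real
model, not the actual API):

```python
class CreationError(Exception):
    """Raised when committing a relation that already exists."""

_store = {}  # stand-in backing store, keyed by (user, thing)

class Rel(object):
    """Minimal stand-in for the real relation model."""
    def __init__(self, user, thing):
        self.key = (user, thing)

    @classmethod
    def get(cls, user, thing):
        return _store.get((user, thing))

    def commit(self):
        # like the real store, committing a duplicate raises
        if self.key in _store:
            raise CreationError(self.key)
        _store[self.key] = self

def add(user, thing):
    # fast path: the relation already exists
    if Rel.get(user, thing):
        return None
    rel = Rel(user, thing)
    try:
        rel.commit()
    except CreationError:
        # a concurrent request created the rel between get() and
        # commit(); treat it the same as the fast path above
        return None
    return rel
```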
Currently, passing an invalid hex string as a wiki revision ID will
cause a 500, due to an unhandled ValueError. This catches it so that a
404 will be returned instead, similar to incorrect (but valid hex)
revision IDs.
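A minimal sketch of the fix, assuming the revision ID gets parsed as
hex somewhere in the handler (lookup_revision and the None-means-404
convention are illustrative names, not the real code):

```python
def lookup_revision(revision_id):
    # hypothetical handler; the real code parses the ID similarly
    try:
        rev = int(revision_id, 16)
    except ValueError:
        # a malformed hex string used to bubble up as a 500;
        # returning None here lets the caller respond with a 404,
        # just like a well-formed but nonexistent revision ID
        return None
    return rev
```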
Storing a private IP address for a user indicates that they're using an
internal "app" (mweb, modmail, etc.) that isn't passing along the user's
actual IP correctly.
Storing these IPs is not useful at all, so this commit checks if an IP
is private before storing it. However, getting to that point signals an
issue that needs to be resolved, so this will send a graphite event that
we can alert on to know that we have an app that needs to be adjusted.
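The private-address check can be sketched with the stdlib ipaddress
module (Python 3 shown for runnability; should_store_ip and the event
call are hypothetical names, not the real helpers):

```python
import ipaddress

def should_store_ip(ip_str):
    ip = ipaddress.ip_address(ip_str)
    if ip.is_private:
        # don't store it, but do send a graphite event here (a
        # stats/event call in the real code) so we can alert on the
        # app that isn't forwarding the user's real IP
        return False
    return True
```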
There are a few URLs in the system that are extremely long (100,000+
chars). Cassandra's maximum key length is 64 KB, so the whole URL
can't be used as a key for these. This truncates the URL at 65,000
characters to prevent that from happening.
We should probably also implement a maximum URL length, but this
prevents crashes from the pre-existing URLs longer than that.
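The truncation itself is a one-liner; a sketch under assumed names
(MAX_CASSANDRA_KEY and url_key are illustrative):

```python
MAX_CASSANDRA_KEY = 65000  # comfortably under Cassandra's 64 KB limit

def url_key(url):
    # a 100,000+ char URL would exceed the maximum key length and
    # make the write fail, so truncate instead of crashing
    return url[:MAX_CASSANDRA_KEY]
```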
The previous attempt at this fix (296f378b) still did not resolve the
issue.
This commit falls back to g.locale if c.locale is not set, instead of
trying to fall back to the LC_NUMERIC env variable.
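A sketch of the fallback, with SimpleNamespace standing in for the
request-local c and application-global g objects:

```python
from types import SimpleNamespace

g = SimpleNamespace(locale="en_US")  # application-wide default
c = SimpleNamespace(locale=None)     # request-local; may be unset

def get_locale():
    # prefer the request locale, falling back to the global default
    # rather than the LC_NUMERIC environment variable
    return c.locale or g.locale
```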
This endpoint is currently crashing if nothing is supplied for the
flair_csv param. Returning immediately seems reasonable: no flair
updates were requested, so there's nothing to do.
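A sketch of the early return (process_flair_csv and the CSV handling
are illustrative stand-ins for the real endpoint):

```python
def process_flair_csv(flair_csv):
    # an empty or missing param means no updates were requested,
    # so return an empty result instead of crashing further down
    if not flair_csv:
        return []
    return [line.split(",") for line in flair_csv.splitlines()]
```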
Some pools are made read-only by setting g.disallow_db_writes to
True. If a write is attempted on one of these pools, an exception is
raised. Instead of handling the exception, we should immediately
abort in OAuth2AccessController, because it will always need to
write.
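The guard can be sketched like this (abort(), the 503 status, and the
Globals stand-in are assumptions; only the fail-fast placement in the
controller's pre-request hook is the point):

```python
class HTTPError(Exception):
    def __init__(self, code):
        self.code = code

def abort(code):
    # stand-in for the framework's abort() helper
    raise HTTPError(code)

class Globals(object):
    disallow_db_writes = True  # True for read-only pools

g = Globals()

class OAuth2AccessController(object):
    def pre(self):
        # this controller always needs to write (it issues access
        # tokens), so fail fast on read-only pools instead of
        # letting the write raise deep inside the request
        if g.disallow_db_writes:
            abort(503)
```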
Omitting `keep_blank_values` was dropping blank query parameters.
Furthermore, converting the output of `parse_qsl` to a dictionary
was unnecessarily modifying the order of parameters since dicts
are not ordered. Fortunately `urllib.urlencode` also accepts a
sequence of two-element tuples and the order of parameters in
the encoded string will match the order of parameter tuples in the
sequence.
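A sketch of the corrected round-trip (Python 3's urllib.parse shown
for runnability; the original code used the Python 2 urlparse/urllib
equivalents, where dicts did not preserve insertion order):

```python
from urllib.parse import parse_qsl, urlencode

query = "a=&b=2&c="

# keep_blank_values=True preserves "a=" and "c=" instead of
# silently dropping them
params = parse_qsl(query, keep_blank_values=True)

# keep the (name, value) tuples as a sequence; converting to a
# dict would have scrambled the order on Python 2's unordered dicts
rebuilt = urlencode(params)
print(rebuilt)  # "a=&b=2&c="
```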
There will never be a "subreddit" attribute already on the link.
That only gets added to Wrapped Link objects, and even if we called
wrapped_link.archived we'd fall through to Link.archived which no longer
has access to the "subreddit" attribute.
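The attribute lookup described above can be sketched as follows
(Wrapped and Link are heavily simplified stand-ins for the real
classes):

```python
class Link(object):
    # the unwrapped model: no "subreddit" attribute lives here
    @property
    def archived(self):
        # this property only sees attributes on the Link itself,
        # so the wrapper's "subreddit" is never visible from here
        return getattr(self, "subreddit", None)

class Wrapped(object):
    # template-layer wrapper that adds display attributes
    def __init__(self, thing, **extra):
        self._thing = thing
        self.__dict__.update(extra)

    def __getattr__(self, name):
        # anything not set on the wrapper falls through to the
        # underlying thing
        return getattr(self._thing, name)

link = Link()
wrapped = Wrapped(link, subreddit="pics")
print(wrapped.subreddit)  # "pics": set on the wrapper
print(wrapped.archived)   # None: Link.archived can't see "subreddit"
```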