Subreddit sticky posts (announcements) were originally implemented to
allow only a single sticky to be set. When we added support for a
second sticky, some transitional code was added to automatically
convert the old single-sticky value to the new storage format for
multiple stickies.
Everything has long since been converted, so this code is no longer
necessary to keep around.
This fed non-Exception events into /admin/errors. None of these have
been actively monitored for years. If we do care about monitoring
things like this, we should use Graphite or Sentry.
The deleted property previously wasn't defined as a bool, so setting it
to True or False ended up saving the strings "True" or "False" to the
backing CF. This meant that explicitly marking a client as not deleted
only worked by setting the value to an empty string (or removing the
property entirely), since setting it to False saved a string that still
evaluates as True in a boolean context.
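A minimal sketch of the failure mode and the _bool_props-style fix. The
serialize_bool/deserialize_bool helpers are simplified stand-ins for
the real serialization code, not the actual r2 implementation:

```python
# The bug: a non-bool-typed property round-trips through the backing
# store as a string, and the string "False" is truthy in Python.
stored = str(False)          # what got written to the CF: "False"
assert bool(stored) is True  # still truthy, so the client looks deleted

# With the property defined in _bool_props, True is serialized as "1"
# and only "1" deserializes back to True (simplified sketch).
def serialize_bool(value):
    return "1" if value else ""

def deserialize_bool(raw):
    return raw == "1"

assert deserialize_bool(serialize_bool(False)) is False
assert deserialize_bool(serialize_bool(True)) is True
```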
As part of this change, I also ran a backfill script that converted all
existing .deleted values in the database. Anything with a value of
"True" was converted to "1" (the only value that a property defined in
_bool_props will accept as True), and the property was removed entirely
from anything with an empty string as its value.
In addition, this adds .deleted to OAuth2Client._defaults so that it's
not necessary to use getattr() all the time to access it.
Reverting a special page that isn't stored as an attr on the Subreddit
(the automod config or the stylesheet) currently causes a crash here
(but with no other adverse side effects).
It should be valid to define a check for one of the account thresholds
similar to:

author:
    comment_karma: 0

to check for exactly that value. However, since this value is parsed in
as an int, trying to do a regex match on it (to check for the presence
of an operator: an equals, greater-than, or less-than sign) crashes.
This ensures that the comparison value is converted to a string so that
the rest of the code can handle it.
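A small illustration of the crash and the fix. The operator-detection
regex here is a hypothetical sketch, not AutoModerator's actual
pattern:

```python
import re

# A threshold parsed from YAML as a bare int, e.g. "comment_karma: 0"
threshold = 0

# Regex-matching the raw value crashes, since re expects a string
crashed = False
try:
    re.match(r"^\s*([<>]=?)?\s*(\d+)", threshold)
except TypeError:
    crashed = True
assert crashed

# Coercing to str first lets the same code handle both bare ints and
# operator-prefixed strings like "> 5" uniformly.
value = str(threshold)
match = re.match(r"^\s*([<>]=?)?\s*(\d+)", value)
operator, number = match.group(1), int(match.group(2))
assert operator is None and number == 0
```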
timetext() deals with seconds, and it currently gives incorrect results
for any time period longer than the maximum int value of 2147483647
seconds (about 68 years). The total number of seconds was already being
declared as a long, but one of the intermediate variables used in the
calculations was still an int, which caused this issue.
This switches that variable to long as well, which resolves the
problem.
The unused variable `count2` was also removed.
The add() function uses a get() at the beginning to avoid creating
duplicate relations, but concurrent requests can get past this check
and will throw a CreationError when the rel is committed. This handles
that case as well and returns None, the same as when the initial get()
determines that the rel already exists.
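A sketch of the pattern, assuming a thingdb-style API where committing
a duplicate rel raises CreationError. The get_rel/commit_rel helpers
are illustrative stand-ins, not the real r2 signatures:

```python
class CreationError(Exception):
    pass

_rels = {}

def get_rel(user, thing):
    return _rels.get((user, thing))

def commit_rel(user, thing):
    # Stand-in for the datastore commit that rejects duplicates
    if (user, thing) in _rels:
        raise CreationError("rel already exists")
    _rels[(user, thing)] = True

def add(user, thing):
    # Fast path: the rel already exists, nothing to create.
    if get_rel(user, thing):
        return None
    try:
        commit_rel(user, thing)
    except CreationError:
        # A concurrent request created the rel between our get() and
        # the commit; treat it the same as the fast path above.
        return None
    return _rels[(user, thing)]

assert add("alice", "sub") is True
assert add("alice", "sub") is None
```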
Currently, passing an invalid hex string as a wiki revision ID causes a
500 due to an unhandled ValueError. This catches it so that a 404 is
returned instead, matching the behavior for revision IDs that are valid
hex but don't exist.
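A sketch of the handling, assuming revision IDs are parsed as hex
UUIDs; lookup_revision is a hypothetical helper, not the actual
controller code, and the real fix would abort with a 404 where the
comment indicates:

```python
import uuid

def lookup_revision(revision_id):
    try:
        rev_uuid = uuid.UUID(revision_id)
    except ValueError:
        # Not valid hex at all: treat it like any other missing
        # revision (the controller would abort(404) here) instead of
        # letting the ValueError bubble up as a 500.
        return None
    return rev_uuid

assert lookup_revision("not-hex!") is None
assert lookup_revision("f" * 32) is not None
```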
Storing a private IP address for a user indicates that they're using an
internal "app" (mweb, modmail, etc.) that isn't passing along the user's
actual IP correctly.
Storing these IPs is not useful at all, so this commit checks if an IP
is private before storing it. However, getting to that point signals an
issue that needs to be resolved, so this will send a graphite event that
we can alert on to know that we have an app that needs to be adjusted.
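A sketch of the check using the stdlib ipaddress module.
should_store_ip is a hypothetical helper, and the real commit would
also bump a graphite counter where the comment indicates:

```python
import ipaddress

def should_store_ip(ip_string):
    addr = ipaddress.ip_address(ip_string)
    if addr.is_private:
        # A private address means an internal app isn't forwarding the
        # user's real IP; this is where the graphite event we can alert
        # on would be sent.
        return False
    return True

assert should_store_ip("10.0.0.5") is False    # internal app, skip + alert
assert should_store_ip("1.1.1.1") is True      # real client IP, store it
```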
There are a few URLs in the system that are extremely long (100,000+
characters). Cassandra's maximum key length is 64 KB, so the whole URL
can't be used as a key for these. This truncates the URL at 65,000
characters to prevent that from happening.
We should probably also implement a maximum URL length, but this
prevents crashes from the pre-existing URLs longer than that.
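A minimal sketch of the truncation, assuming the URL string itself is
used as the row key (url_to_key is an illustrative name, not the real
function):

```python
# 65,000 chars stays under Cassandra's 64 KB (65,536 byte) key limit
MAX_URL_KEY_LENGTH = 65000

def url_to_key(url):
    # Truncate pathological 100,000+ character URLs so the key fits
    return url[:MAX_URL_KEY_LENGTH]

long_url = "http://example.com/?q=" + "a" * 150000
assert len(url_to_key(long_url)) == MAX_URL_KEY_LENGTH
assert url_to_key("http://example.com/") == "http://example.com/"
```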
The previous attempt at this fix (296f378b) still did not resolve the
issue.
This commit falls back to g.locale if c.locale is not set, instead of
trying to fall back to the LC_NUMERIC env variable.
This endpoint currently crashes if nothing is supplied for the
flair_csv param. Returning immediately seems reasonable: no flair
updates were requested, so there's nothing to do.
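A sketch of the guard with a toy CSV parser; set_flair_csv is a
hypothetical stand-in for the controller, not the real endpoint code:

```python
def set_flair_csv(flair_csv):
    # Guard: missing or empty flair_csv means no updates were
    # requested, so return immediately instead of crashing.
    if not flair_csv:
        return []
    results = []
    for line in flair_csv.splitlines():
        # Each row is "user,flair text,css class"; pad missing fields
        user, text, css_class = (line.split(",") + ["", ""])[:3]
        results.append((user.strip(), text.strip(), css_class.strip()))
    return results

assert set_flair_csv(None) == []
assert set_flair_csv("") == []
assert set_flair_csv("alice,hello,blue") == [("alice", "hello", "blue")]
```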