This was causing all pages that use this validator to be unable to
handle leading/trailing spaces around the username, since chkuser()
will always return None when they are present. Changing this makes a
lot of pages more tolerant of usernames: the message compose page, the
pages for banning users, adding them as approved submitters or
moderators, etc.
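A minimal sketch of the idea, with a hypothetical stand-in for the real chkuser() validator and an assumed username pattern (the actual rules live in r2's validator code):

```python
import re

# Assumed username format for illustration: 3-20 word characters or dashes.
USERNAME_RE = re.compile(r"\A[\w-]{3,20}\Z")

def chkuser(username):
    """Return the cleaned username, or None if it is invalid."""
    if username is None:
        return None
    username = username.strip()  # the fix: tolerate surrounding whitespace
    if USERNAME_RE.match(username):
        return username
    return None
```

The strip happens before validation, so "  spez  " validates the same as "spez" instead of failing outright.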
Replacing "liked" and "disliked" with "upvoted" and "downvoted"
everywhere, respectively.
This keeps the /liked and /disliked paths for user pages working for API
clients, but does a 301 redirect for non-API clients.
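Roughly the routing behavior described, sketched with hypothetical names (the real dispatch lives in the controller layer):

```python
# Legacy listing names mapped to their replacements.
LEGACY_SORTS = {"liked": "upvoted", "disliked": "downvoted"}

def resolve_user_listing(username, where, is_api_client):
    """Return (status, path) for a /user/<name>/<where> request."""
    if where in LEGACY_SORTS:
        if is_api_client:
            # API clients keep working on the old path; serve the
            # renamed listing directly.
            where = LEGACY_SORTS[where]
        else:
            # Browsers get a permanent redirect to the new path.
            return (301, "/user/%s/%s" % (username, LEGACY_SORTS[where]))
    return (200, "/user/%s/%s" % (username, where))
```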
We use the [Open Graph protocol][0] to hint to Facebook and other sites what
images they should show as an on-site preview of reddit content. For most of
our comments pages, that ends up being the thumbnail image, which is pretty
small.
We're now collecting preview images that are sometimes larger than the
thumbnails, so let's start using them when we can!
[0]: http://ogp.me/
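The selection logic amounts to something like the following sketch (field names and the size cutoff are hypothetical):

```python
def og_image_url(link):
    """Pick the og:image for a link: prefer a larger preview image
    when one was collected, falling back to the small thumbnail."""
    preview = link.get("preview_image")    # e.g. {"url": ..., "width": ...}
    thumbnail = link.get("thumbnail_url")
    # Assumed cutoff: only use the preview if it beats thumbnail size.
    if preview and preview.get("width", 0) > 140:
        return preview["url"]
    return thumbnail
```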
This lets us take advantage of the lookups that were batched in
Link.add_props and Comment.add_props rather than calling
Subreddit.is_moderator() multiple times.
replace_render lets us stub out portions of the rendered output that change
frequently (e.g. score, rank in a listing, etc.) to make it more cacheable,
but only HTML-rendered output is cached.
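The mechanism in miniature, with a hypothetical token syntax standing in for replace_render's real one: the cached HTML keeps placeholder tokens for fast-changing fields, and they are filled with fresh values on each request.

```python
import re

# Assumed token syntax for illustration: <$>name<$>
TOKEN_RE = re.compile(r"<\$>(\w+)<\$>")

def render_cached(cached_html, fresh_values):
    """Substitute fresh values into an otherwise-cached HTML fragment."""
    return TOKEN_RE.sub(lambda m: str(fresh_values[m.group(1)]), cached_html)

cached = '<div class="score"><$>score<$></div><div class="rank"><$>rank<$></div>'
```

Because the volatile bits live outside the cached string, the expensive render is reused even as scores and ranks change.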
If a rule definition specifies "type: any" (or does not specify type at
all), we try to create two separate rules for both comments and
submissions using the rule definition. If a particular type fails to
create, we just throw it away silently, considering this to mean that
the rule definition was actually only intended to apply to one type or
the other. However, if there is an error in the rule that results in
*both* the submission and comment attempts being thrown away, this means
that the entire rule will just end up being silently ignored. This adds
an error for that case, so that the user knows they need to fix their
definition.
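A hedged sketch of that behavior (class and function names hypothetical): a "type: any" definition is tried as both a comment rule and a submission rule, and only errors out if *neither* type could be built.

```python
class RuleError(Exception):
    pass

def build_rules(definition, build_for_type):
    """Expand a rule definition into one rule per applicable type."""
    rule_type = definition.get("type", "any")
    if rule_type != "any":
        return [build_for_type(definition, rule_type)]

    rules = []
    for kind in ("comment", "submission"):
        try:
            rules.append(build_for_type(definition, kind))
        except RuleError:
            # Assume the definition was only meant for the other type.
            pass
    if not rules:
        # The new error case: both attempts failed, so the user must
        # have a genuinely broken definition.
        raise RuleError("rule definition isn't valid for any type")
    return rules
```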
A RuleTarget's matches dict (which stores the results of the search
checks on that item) was only being cleared in the __init__ function, so
it was actually being retained across checks against multiple different
items. This meant that {{match}} placeholders would occasionally be
replaced by the result of a match from an entirely different item. This
change makes it so that the matches will be cleared at the start of each
check instead.
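The shape of the bug and fix, sketched with a simplified stand-in for the real RuleTarget:

```python
class RuleTarget(object):
    def __init__(self):
        self.matches = {}

    def check_item(self, item, checks):
        # The fix: reset the match results for *this* item, instead of
        # relying on __init__ and letting stale matches leak across items.
        self.matches = {}
        for field, pattern in checks.items():
            value = item.get(field, "")
            if pattern in value:
                self.matches[field] = pattern
        return bool(self.matches)
```

Without the reset, a failed check against a second item would leave self.matches holding the first item's results, which is exactly the wrong-{{match}} symptom described.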
The {{permalink}} placeholder (which is auto-included in messages and
modmails) has been using relative links, but some apps don't handle
these properly, so it's better to go back to using a full link with the
domain.
In addition, this makes it so that non-top-level comments will have
context=3 added on their permalinks.
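Both changes together, sketched with an assumed domain constant (the real code builds this from site configuration):

```python
BASE_URL = "https://www.reddit.com"  # assumption: canonical site domain

def placeholder_permalink(relative_permalink, is_comment, is_top_level):
    """Build the {{permalink}} value: absolute URL, plus context=3 for
    comments that aren't top-level replies to the submission."""
    url = BASE_URL + relative_permalink
    if is_comment and not is_top_level:
        url += "?context=3"
    return url
```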
This action lets AutoMod behave more like reddit's spam filter, so it
can remove something but leave it in the modqueue (and unmoderated if
it's a Link) for easier review by moderators.
This also rearranges the init logic for RuleTargets slightly, which will
have a side-effect of strengthening the validation a little bit. For
example, after this change it will not be possible to set "is_top_level"
on a submission, since that only makes sense for comments. Previously it
would have validated but been silently ignored.
Remove feature flag for subreddit relevancy and expose the sort
parameter for subreddit search.
This restores the `activity` sort option that was broken when
search was providerized.
First, `UrlParser.update_query` didn't handle non-ASCII (7-bit unclean)
values; `unicode()` should work everywhere `str()` did.
Second, the check for embedded NBSPs in `UrlParser.is_web_safe_url`
could be bypassed, since `b'\xa0'` couldn't automatically be promoted
to unicode (thus `u'\xa0' != b'\xa0'`). The check was fixed to handle
the NBSP character in either unicode or byte strings.
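The fixed check, sketched here in Python 3 spelling: normalize the URL to text before looking for the NBSP, so neither a byte string nor a unicode string can slip one past the test. The decode codec is an assumption for illustration.

```python
NBSP = "\xa0"

def is_web_safe_url(url):
    """Reject URLs containing an embedded NBSP, bytes or unicode."""
    if isinstance(url, bytes):
        # Assumption: treat raw bytes as latin-1 so 0xa0 maps to U+00A0.
        url = url.decode("latin-1")
    return NBSP not in url
```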
The regex to find {{match}}-type placeholders is currently overly
greedy in cases with multiple placeholders in the same string.
For example, something like "Your post was removed because {{match}} is
in the title, please remove {{match}} from the title and resubmit." will
fail due to the regex spanning from the start of the first placeholder
until the end of the second. Replacing the "." in the regex with "[^}]"
will ensure that the matches stay confined to the same placeholder.
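The difference in one example (these patterns are simplified stand-ins for the real one, but show the same "." vs "[^}]" change):

```python
import re

# Greedy ".+" spans from the first "{{" to the *last* "}}".
greedy = re.compile(r"\{\{(.+)\}\}")
# "[^}]+" can't cross a "}", so each match stays inside one placeholder.
fixed = re.compile(r"\{\{([^}]+)\}\}")

text = ("Your post was removed because {{match}} is in the title, "
        "please remove {{match}} from the title and resubmit.")
```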
Race conditions between multiple automoderator_q consumers are causing
repeat actions (especially noticeable for things like leaving comments
or sending modmails) when multiple consumers end up processing the same
item at the same time (due to things like edits putting it in the queue
multiple times).
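One way to guard against the duplicate processing, sketched with an in-process registry standing in for whatever shared lock service the consumers actually use (scheme and names are assumptions):

```python
import threading

_active = set()
_active_lock = threading.Lock()

def process_once(item_id, action):
    """Run action(item_id) unless another consumer is already on it."""
    with _active_lock:
        if item_id in _active:
            return False  # another consumer is handling this item; skip
        _active.add(item_id)
    try:
        action(item_id)
        return True
    finally:
        with _active_lock:
            _active.discard(item_id)
```

In a real multi-process deployment the registry would need to live somewhere shared (e.g. a cache with short-lived lock keys) rather than in process memory.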
In the case of changes that cause previously-valid rule definitions to
start raising an exception, there may be some configurations that were
saved before the change was made, and so now contain an invalid
definition. This will cause the automod queue consumer(s) to hang if an
item from any of those subreddits needs to be checked, since it will be
unable to initialize the Ruleset to be able to process it. This commit
makes it so that items from subreddits with invalid configurations will
be skipped and an error logged.
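A sketch of the consumer-side guard (exception and helper names hypothetical): a subreddit whose saved rules no longer parse is logged and skipped instead of wedging the queue.

```python
import logging

log = logging.getLogger("automoderator")

class RulesetError(Exception):
    pass

def check_item(item, load_ruleset):
    """Apply a subreddit's ruleset to an item, skipping bad configs."""
    try:
        ruleset = load_ruleset(item["subreddit"])
    except RulesetError as e:
        log.error("invalid rules in %s, skipping: %s", item["subreddit"], e)
        return None
    return ruleset.apply(item)
```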
Previously, a rule would be considered to need media data if any checks
included a media check at all. For example, a check that looked at both
the submission's title and the media_title for a particular word would
not be applied to posts without media data. However, when in a compound
check like this, we should consider the media data optional, since there
are other non-media places that can be checked without it. This commit
does that, changing it so that a check will only cause the rule to
require media data if ALL of the fields being checked come from the
media, instead of if ANY of them do.
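The change in miniature (field naming is an assumption): a check only forces the rule to require media data when every field it looks at is a media field.

```python
def check_needs_media(fields):
    """True only if ALL checked fields come from media data, so a
    compound check like ["title", "media_title"] can still run on
    posts without media."""
    return all(field.startswith("media_") for field in fields)
```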
Not forcing these to unicode is causing occasional UnicodeDecodeErrors
when the replacement contains non-ascii characters (seems especially
common from the media_ placeholders from oembed data).
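The coercion, sketched in Python 3 spelling with hypothetical names: decode any byte-string replacement values before substituting them into the (unicode) message template.

```python
def fill_placeholders(template, values):
    """Substitute values into a unicode template, decoding any bytes
    (e.g. from oembed data) first so mixing never raises."""
    coerced = {}
    for key, value in values.items():
        if isinstance(value, bytes):
            value = value.decode("utf-8", "replace")
        coerced[key] = value
    return template.format(**coerced)
```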
See http://docs.python-requests.org/en/latest/api/#migrating-to-1-x
for the rationale.
`.json()` also differs from `.json` in that it raises an exception
instead of returning `None` on a decoding error, but that shouldn't
affect us anywhere.
Conflicts:
r2/r2/lib/media.py
Some proxies ignore `no-cache`. That's bad when the responses are
credentialed.
We work around that by sending `Cache-Control: private, no-cache`,
since proxies generally won't strip that.
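The workaround reduces to attaching the stronger directive to credentialed responses; a trivial sketch (helper name hypothetical):

```python
def cache_headers(is_credentialed):
    """Cache-Control for a response: mark credentialed responses
    private so shared caches that ignore no-cache still won't store
    them."""
    if is_credentialed:
        return {"Cache-Control": "private, no-cache"}
    return {"Cache-Control": "no-cache"}
```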
Thanks to a report by @pokechu022. Previously we were only checking
if the user was a moderator with permission to moderate the wiki.
This didn't affect anything until the automoderator config page was
added, as all other special pages are autocreated and don't pass
the `may_not_create` check for _any_ user.
The anonymous gilding message allows for replies to the user. This will let
users reply to gilders who have revealed themselves instead of making them
start a new message thread.
People could see who gilded them anonymously by adding the gilder as a
friend with a distinctive note. This removes friend notes from messages
that have a display_author.