Our throttling on the share dialog has been a bit odd. We didn't allow you to
share more than once every `RL_RESET_MINUTES` minutes, **unless** you had at
least `MIN_RATE_LIMIT_KARMA` karma in the subreddit you were sharing from. Why
was this tied to the subreddit? Who knows!
Rather than having a draconian limit and exempting some users from it, the rate
limits on sharing should now be at a reasonable level for everyone (and
configurable as we go with `RL_SHARE_AVG_PER_SEC`). In addition, we're now
using the fairly new ratelimit library, which allows for burstable usage, so a
user who wants to share several things in a row but then goes back to normal
browsing is much less likely to get throttled.
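Burstable limiting of this sort is typically a token bucket. Here's a minimal
sketch of the idea, with illustrative values for `RL_SHARE_AVG_PER_SEC` and a
hypothetical burst size; this is not the ratelimit library's actual API:

    import time

    RL_SHARE_AVG_PER_SEC = 0.1  # illustrative: one share per 10s on average
    SHARE_BURST_SIZE = 5        # hypothetical: shares allowed in a burst

    class TokenBucket(object):
        def __init__(self, rate, burst):
            self.rate = rate         # tokens refilled per second
            self.burst = burst       # maximum tokens the bucket holds
            self.tokens = float(burst)
            self.last = time.time()

        def allow(self):
            now = time.time()
            # refill based on elapsed time, capped at the burst size
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    share_limiter = TokenBucket(RL_SHARE_AVG_PER_SEC, SHARE_BURST_SIZE)

A user can fire off up to the burst size of shares back-to-back, then settles
down to the average rate.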
This commit contains a number of improvements to the message sent to
users when they are banned from a subreddit:
* The message is different for temporary bans, and includes the ban duration
* The message mentions that the user can reply to it, and that circumventing
  a ban is considered a site rules violation
* Nicer formatting, with the note from the moderators blockquoted
* Fixes the "phantom modmail notification" when someone is banned
* Refactors the message-building into a separate function (sketched below),
  which cleans up notify_user_added
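As a rough illustration of what that separate function might look like (the
name, signature, and wording here are hypothetical, not the actual code):

    def ban_message(subreddit, duration=None, mod_note=None):
        """Hypothetical sketch of the extracted message builder."""
        if duration:
            msg = ("You have been temporarily banned from r/%s. "
                   "This ban will last for %d days." % (subreddit, duration))
        else:
            msg = "You have been banned from r/%s." % subreddit
        if mod_note:
            # blockquote the note from the moderators for nicer formatting
            quoted = "\n".join("> " + line for line in mod_note.splitlines())
            msg += "\n\nNote from the moderators:\n\n" + quoted
        msg += ("\n\nYou can reply to this message to contact the moderators. "
                "Reminder: circumventing a ban is considered a violation of "
                "the site rules.")
        return msg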
This change primarily addresses an issue with duplicate username
mentions: each individual mention was being counted towards the
per-comment limit, rather than only counting mentions of distinct
users. So mentioning the same user 4 times in a comment would cause
butler to bail out instead of sending a mention to that user.
To fix this, extract_user_mentions now returns the mentioned usernames
as a set instead of a list, and the function no longer caps the number
of mentions it returns. It is now up to the caller to handle the number
of mentions returned (which actually simplifies things almost
everywhere the function is currently used).
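A simplified sketch of the new shape; the regex and the cap value are
illustrative, not the actual ones from the codebase:

    import re

    MENTION_RE = re.compile(r"/?u/([\w-]+)", re.IGNORECASE)

    def extract_user_mentions(text):
        # a set deduplicates repeats, so "u/alice u/alice u/alice u/alice"
        # counts as one mention of alice rather than four
        return {name.lower() for name in MENTION_RE.findall(text)}

    # the caller now enforces any per-comment cap itself
    MAX_MENTIONS = 3
    mentions = extract_user_mentions("thanks u/alice, u/alice, and u/bob!")
    if len(mentions) <= MAX_MENTIONS:
        print("notify: %s" % sorted(mentions))  # notify: ['alice', 'bob']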
This was mostly already covered by the check in should_check_item that
makes AutoMod skip over items that have been removed by moderators, but
it didn't cover rules that triggered on a comment and used that to
approve the parent submission. Rules using that sort of check/action
would cause parent submissions that had previously been removed by
moderators to be re-approved, which isn't desirable.
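Sketched roughly, with illustrative names (not AutoModerator's actual API),
the added guard looks something like this:

    class Item(object):
        """Illustrative stand-in for a submission or comment."""
        def __init__(self, removed_by_mod=False):
            self.removed_by_mod = removed_by_mod
            self.approved = False

    def perform_approve(target, triggering_item):
        # should_check_item() already skips items removed by moderators,
        # but when a comment triggers a rule that approves its *parent*
        # submission, the parent never went through that check, so
        # re-check it here before approving
        if target is not triggering_item and target.removed_by_mod:
            return  # leave the moderator's removal in place
        target.approved = True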
The variable that holds the result of attempting to match the search
check was not being reset for each individual check, so if a check was
skipped due to its fields being discarded, it would keep the same value
it had from a previous check.
In practice, this means that a rule like this will not match on a link
submission to baddomain.com:

    url+body: "baddomain.com"
    ~body: "I'm an exception"
The match from the url check would still be present after the ~body
check was skipped, so this would end up causing the rule to be
considered unsatisfied, since the ~body check would seem to have matched
successfully.
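In simplified form, the fix is just to reset the match result at the top of
each check's iteration; the names and structures here are illustrative:

    def rule_satisfied(checks, item):
        for check in checks:
            match = None  # the fix: reset per check instead of per rule
            fields = [f for f in check["fields"]
                      if getattr(item, f, None) is not None]
            if not fields:
                # check skipped (e.g. ~body on a link submission with no
                # body); a stale match from the previous check used to
                # leak through here
                continue
            match = any(check["pattern"].search(getattr(item, f))
                        for f in fields)
            satisfied = (not match) if check.get("negated") else match
            if not satisfied:
                return False
        return True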
[A BuzzFeed article][0] with a bunch of comment embeds has been causing
difficulty for the data team, since none of the embeds include the
`data-embed-created` attribute.
Figuring out why that's happening is another issue we should tackle, but in the
meantime, let's avoid sending a nonsense value if we don't have something
useful.
[0]: http://www.buzzfeed.com/sarahgalo/books-according-to-reddit
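Something like this is the shape of the defensive change (attribute and
event-field names are illustrative):

    def embed_event_fields(embed_attrs):
        fields = {"embed_url": embed_attrs.get("data-embed-media")}
        created = embed_attrs.get("data-embed-created")
        if created:
            # only send the timestamp when the embed actually has one,
            # rather than a nonsense placeholder value
            fields["embed_created"] = created
        return fields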
As a few users have pointed out, it doesn't make much sense for OP replies to
be hidden in Q&A sort due to downvotes, since the entire point of the sort is
to make it easier to find what the OP is saying.
So now we'll override the comment score threshold for those particular comments
while in Q&A sort.
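Conceptually, the override is small; a sketch with assumed names:

    def score_threshold(comment, sort, user_threshold, op_id):
        # in Q&A sort, never hide the OP's replies below the threshold:
        # surfacing what the OP says is the whole point of the sort
        if sort == "qa" and comment.author_id == op_id:
            return None  # no threshold applied
        return user_threshold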
From experimentation, it appears Facebook is taking `og:image:url` as only a
suggestion (as opposed to `og:image`, which it always uses), and thus people
posting reddit links on Facebook are ending up with thumbnails from other
submissions, courtesy of the read next box.
So, let's hack it back to the way it was before reddit/reddit@9470bb87.
Note that reddit's own scraper currently *also* ignores `og:image:url`. So,
yeah, there's blame to share around.
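In other words, we go back to emitting the thumbnail as a plain `og:image`
property; a trivial sketch (the helper name is made up):

    def og_image_tag(image_url):
        # back to a plain og:image property, since Facebook treats
        # og:image:url as only a suggestion
        return '<meta property="og:image" content="%s" />' % image_url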
Some scrapers don't like relative urls in the Open Graph tags. We fixed this
in most places in reddit/reddit@c1e2796da, but forgot one: subreddit listing
pages. Now they should all be good.
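The fix is the usual absolutize-before-render step; a sketch with an assumed
base URL:

    try:
        from urllib.parse import urljoin  # Python 3
    except ImportError:
        from urlparse import urljoin      # Python 2

    BASE_URL = "https://www.reddit.com"   # illustrative

    def absolute_og_url(path_or_url):
        # urljoin leaves absolute urls untouched and resolves relative
        # ones like "/r/pics/" against the site root
        return urljoin(BASE_URL, path_or_url)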
There was a hardcoded form id in the "413 Too Big" error response,
which was causing the error message to be rendered next to the wrong
upload form if the page contained multiple image upload forms.
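A sketch of the shape of the fix, with made-up names: echo back the id of the
form that actually submitted, rather than a hardcoded one:

    def too_big_error(form_id):
        # previously this always returned a fixed id like "image-upload",
        # so the error rendered next to the wrong form when a page had
        # several upload forms
        return {"error": "TOO_BIG", "form_id": form_id}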
The 'buttons' link breaks on the /subreddits page. Since the only place on
the page where it appears to be used is the header, just setting the
sr_path param to False looks like the easiest fix.
* Add ref tag to links
* Add GA events
* Reduce thumbnail size to prevent cropping
* Prepend title with NSFW stamp if NSFW
* Change 'also in' to 'discussions in'
* Move nav arrows to left side