This moves the cloudsearch and geoip timers into a common namespace to
clean up graphite and make it clearer what they are:
* cloudsearch_timer -> providers.cloudsearch
* geoip_service_timer -> providers.geoip.*
The same applies to /api/v1/revoke_token. These endpoints aren't 'via_oauth'
so they miss the standard cases in MinimalController.check_cors().
Additionally, we can be slightly more limiting for these requests.
It's annoying to use the common subreddit syntax of `/r/foo` and be told
you can't do that. We've already been normalizing this in a few places,
but now we should accept `/r/foo` or `r/foo` everywhere and silently
strip it down to just plain ol' `foo` for the rest of the code (a sketch
of the stripping follows the list below).
Here are the validators I found doing some sort of subreddit name check, and
where they're used:
- VSubredditName
* multi.py
+ PUT_multi_subreddit - add a subreddit to a multi
- VAvailableSubredditName
* api.py
+ POST_site_admin - create or configure a subreddit
- VMultiPath - already requires /r/
* multi.py - a bunch of places
- VSubredditList - already doing this
- VSRByName
* preferences.py - stylesheets everywhere
* api.py
+ POST_compose - when sending a PM from a subreddit
* multi.py
+ GET_list_sr_multis - getting a subreddit's multis
+ GET_multi_subreddit - get data about a subreddit in a multi
+ DELETE_multi_subreddit - remove a subreddit from a multi
- VSRByNames
* api.py
+ GET_subreddit_recommendations
+ POST_rec_feedback - recommender feedback
One important thing to note: we don't want to just modify
`Subreddit.is_valid_name()` because that's used in lower-level code, like when
creating a `Subreddit` object, and that could cause all sorts of problems.
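A sketch of the shared normalization (the helper name and regex here
are illustrative, not the actual implementation):

import re

SR_PATH_PREFIX = re.compile(r"\A/?r/")

def strip_sr_path(name):
    # Accept "/r/foo" or "r/foo" and hand plain "foo" to the rest of
    # the code; Subreddit.is_valid_name() itself stays untouched.
    return SR_PATH_PREFIX.sub("", name, count=1)

assert strip_sr_path("/r/foo") == "foo"
assert strip_sr_path("r/foo") == "foo"
assert strip_sr_path("foo") == "foo"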
There were several issues occurring with rules that require data from
the media embed (which is added to Link objects asynchronously by the
media scraper). Items are re-checked by AutoMod when the scraper
attaches the embed, but some rules that required that data were still
being checked and executed before the scrape had actually completed, so
they were not behaving correctly.
This change makes the check for whether a rule needs data from the
media object more robust, which should ensure that such rules are never
processed until the item re-enters the queue when the scraper attaches
the embed data.
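Roughly, the more robust check could look like this sketch
(rule.fields_used and item.media_object are assumed names, for
illustration only):

def needs_media_data(rule, item):
    # Does any check in this rule depend on the media embed?
    uses_media = any(f.startswith("media_") for f in rule.fields_used)
    # The embed is attached asynchronously by the scraper, so treat
    # the rule as not-yet-checkable until the data actually exists.
    return uses_media and getattr(item, "media_object", None) is None

Rules for which this returns True are skipped outright and only run
when the scraper re-queues the item with the embed attached.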
Previously, something like "combined_karma: 0" could not be validated
against the regex, because the yaml parser would have already converted
the value into an int, and the regex can only be applied to strings.
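The fix boils down to coercing scalar values to strings before the
regex is applied; a minimal illustration:

import re
import yaml

rule = yaml.safe_load("combined_karma: 0")
value = rule["combined_karma"]  # parsed as the int 0, not the string "0"

# re.search() raises TypeError when handed an int, so coerce first.
if not isinstance(value, str):
    value = str(value)
assert re.search(r"^-?\d+$", value)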
This will allow approval rules to react to things being removed by the
spam filter. Currently, AutoMod often ends up checking items before the
filter finishes processing them, so it's already determined that it
doesn't need to auto-approve something before the filter has even
decided to remove it.
The server side was correctly adding the box but the client side was
ignoring it because r.config.gold was false. This removes the
r.config.gold check and relies on the comment visits box being there to
indicate if the user should have access to the feature.
In reworking new comment highlighting I introduced a regression that
caused child comments to share the timestamp of their parent regardless
of their own time. This was caused by an insufficiently specific
selector.
The structure of a nested comment view looks like:
<div class="comment">
  <div class="entry">
    <p class="tagline">
      <time>
      <time class="edited-timestamp">
  <div class="child">
    <div class="listing">
      <div class="comment">
        ...
Selecting '.tagline time' from beneath '.comment' would pick up child
comment timestamps as well and we'd overwrite their timestamp cache. We
would also pick up edited timestamps, but that doesn't appear to do
anything bad since they're not live.
This fixes the bug by restricting the selector to direct descendants.
Previously, the server would check the user's previous visits when
rendering a comment page and add comment-period-N classes to comments
depending on where they fell in relation to those visits. The client
side would then add or remove a new-comment class to every comment with
the appropriate (or older) comment-period class on first load or when
the previous visit selection changed.
This removes that server-side addition of comment-period-N classes and
replaces it with ScrollUpdater-based updating of comments based on their
actual timestamps. The goal is to reduce some server-side ugliness and
extraneous memcached lookups.
Previously, this would cause silent crashes when trying to save if
someone attempted to use an unhashable type (generally a list) as the
value for standard, such as: "standard: [one, two]".
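For illustration, the crash and the guard look roughly like this (the
lookup table is a hypothetical stand-in for the real set of standard
conditions):

import yaml

STANDARD_CONDITIONS = {"image hosting sites": {"domain": ["imgur.com"]}}

rule = yaml.safe_load("standard: [one, two]")
value = rule["standard"]  # a list, which is unhashable

# STANDARD_CONDITIONS[value] would raise "TypeError: unhashable type",
# so check the type first and report a real validation error instead.
if not isinstance(value, str):
    raise ValueError("'standard' must be a single string value")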
Previously, an invalid search check key like
"body+title (includes)#name" would fail the regex, and
parse_match_fields_key() would then throw an error by trying to proceed
with the match object being None. This caused the wiki validation to
simply fail to save with no error displayed at all.
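The guard amounts to failing loudly when the regex doesn't match
instead of proceeding with None; a sketch (the pattern below is
illustrative, not the real one):

import re

MATCH_FIELDS_KEY = re.compile(r"^([a-z+_]+)(?:#(\w+))?(?: \((.+)\))?$")

def parse_match_fields_key(key):
    match = MATCH_FIELDS_KEY.match(key)
    if match is None:
        # Previously this fell through and called .groups() on None,
        # so the wiki page failed to save with no error shown at all.
        raise ValueError("invalid search check key: %r" % key)
    fields, name, modifiers = match.groups()
    return fields.split("+"), name, modifiers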
This controller wraps up common functionality for controllers
that only serve endpoints that require OAuth to access. This includes
appropriate pagecaching (or lack thereof) and forced authentication
methods.
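A rough sketch of the shape (the helper methods here are assumptions,
not the actual API):

class OAuthOnlyController(MinimalController):
    def pre(self):
        MinimalController.pre(self)
        # Responses depend on the access token presented, so these
        # endpoints are never page-cached.
        self.disable_pagecache()
        # Force authentication: requests without a valid OAuth token
        # are rejected outright instead of falling back to cookies.
        if not self.has_valid_oauth_token():
            self.abort403()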
For loading new ads, use the visibilitychange event if supported (it
generally is). This means that, in theory, a new ad should load in any
of the following cases:
1. Active tab changes.
2. Browser is minimized then maximized.
3. Browser window is covered up then uncovered.
4. OS goes to sleep/is locked then woken up/unlocked.
This makes a lot more sense than the current trigger, which is just focus.
Unfortunately support for cases 2-4 is spotty, but almost all browsers support
case 1.
Loads a new ad when the user re-focuses the window, under the following
conditions:
1. Ad must be the active item in the spotlight box.
2. Ad must be visible (in the viewport and not hidden).
3. More than 1.5 seconds must have elapsed since the last ad was loaded.
As reported in reddit/reddit#1291, we've been loading some images in our embed
widgets (the old ones, not the new comment embeds) over http. This causes
warnings in most browsers when the embedding page is loaded over https, since
we're dropping down to insecure elements.
Now we're always loading them over https. Alternatively, we could use
protocol-relative urls, but I figure there's no harm in always using https, and
it's simpler and causes fewer weird issues with browsers.
If we can't figure out a good image to hint as a thumbnail for a page via
`og:image`, we set it to the reddit snoo icon. However, we have been making
this a protocol-relative url. This doesn't appear to be against [the spec][0],
but it does create problems for some scrapers.
Now we force it to be an https url, which should resolve some of those issues.
[0]: http://opengraphprotocol.org/#url
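For illustration, forcing the scheme on a protocol-relative url is a
one-liner with urlparse (a sketch of the idea, not the actual helper):

from urlparse import urlparse, urlunparse  # urllib.parse on Python 3

def force_https(url):
    # "//www.redditstatic.com/icon.png" parses with an empty scheme;
    # filling it in gives scrapers an absolute https url.
    return urlunparse(urlparse(url)._replace(scheme="https"))

assert force_https("//example.com/snoo.png") == "https://example.com/snoo.png"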