A "ref tag" is a query parameter we may pass to ourselves to let
us know where a user came from, and more specifically, what part
of the page they clicked on to get there. This allows us to know,
for example, if users are clicking through from trending
subreddits or if they're just clicking on the subreddit from a
link. This helps us know if features are used/valuable or not.
A ref tag will be passed through and then removed from the URL via
JavaScript post-load, leaving a clean URL that's nice for
copy/pasting elsewhere and doesn't give false positives if users
do paste it somewhere else.
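The round trip can be sketched with a couple of URL helpers. This is an illustrative sketch only: the `ref` parameter name and the helper functions are assumptions, and in production the stripping happens client-side in JavaScript rather than in Python.

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def add_ref(url, ref):
    """Append a ref tag so the destination knows which UI element was clicked."""
    scheme, netloc, path, query, frag = urlsplit(url)
    params = parse_qsl(query)
    params.append(("ref", ref))
    return urlunsplit((scheme, netloc, path, urlencode(params), frag))

def strip_ref(url):
    """Remove the ref tag (done post-load) so copied URLs stay clean."""
    scheme, netloc, path, query, frag = urlsplit(url)
    params = [(k, v) for k, v in parse_qsl(query) if k != "ref"]
    return urlunsplit((scheme, netloc, path, urlencode(params), frag))
```

For example, `add_ref("https://www.reddit.com/r/pics/", "trending")` yields a tagged link, and `strip_ref` on the landing page restores the clean URL.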
If you go to a userpage and sort by top (in either the overview or comments
tabs), and restrict the time range to anything other than "all time", no
comments will be shown.
The data in these listings is built from functions in `lib/db/queries.py`
(specifically from `get_comments()` down). This ends up trying to pull the
query results from permacache (in `CachedResults.fetch_multi()`), defaulting to
an empty list if no cache entry is found.
Now, the cache entry is supposed to be populated periodically by a cronjob that
calls `scripts/compute_time_listings`. This script (and its Python helpers in
`lib/mr_top.py` and `lib/mr_tools/`) generates a dump of data from PostgreSQL,
then reads through it and builds up entries to insert into the cache. As
with many scripts of this sort, it expects to encounter some bad data, and so
performs some basic sanity checks.
The problem is that the sanity checks have been throwing out all comments.
With no new comments, there's nothing new to put into the cache!
The root of this was a refactoring in reddit/reddit@3511b08 that combined
several different scripts that were doing similar things. Unfortunately, we
ended up requiring the `url` field on comments, which doesn't exist because,
well, comments aren't links.
Now we have two sets of fields that we expect to get, one for comments and one
for links, and all is good.
The script now also prints a one-line summary of processed/skipped entries,
which should make a problem like this more obvious in the future.
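A minimal sketch of the per-type check and the summary line, assuming hypothetical field names and a hypothetical helper (this is not the actual `mr_top` code):

```python
# Links have a url; comments don't, which is what broke the combined check.
LINK_FIELDS = ("id", "author_id", "sr_id", "url", "score")
COMMENT_FIELDS = ("id", "author_id", "sr_id", "score")

def check_fields(thing_type, data):
    """Return True if the row has every field we expect for its type."""
    expected = LINK_FIELDS if thing_type == "link" else COMMENT_FIELDS
    return all(field in data for field in expected)

processed = skipped = 0
rows = [
    ("link", {"id": 1, "author_id": 2, "sr_id": 3, "url": "http://x", "score": 10}),
    ("comment", {"id": 4, "author_id": 2, "sr_id": 3, "score": 5}),
]
for thing_type, data in rows:
    if check_fields(thing_type, data):
        processed += 1
    else:
        skipped += 1
# One-line summary makes "everything skipped" jump out immediately.
print("processed: %d, skipped: %d" % (processed, skipped))
```

With a single combined field set requiring `url`, every comment row would fall into the `skipped` bucket; splitting the expected fields by type fixes that.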
If an email address is already unconfirmed, unsubscribed, or on
a block list, the API returns a 400-level response. Unconfirmed
addresses will be by far the most common case. Here we handle those cases.
Adds an opt-in checkbox to the registration flow for the
upvoted newsletter, a project Alexis and Heath are working on.
This does not associate any data with the user's account, it
just sends their email address to the campaign monitor API if
they opted in.
Previously the HTML fragment for a comment embed was defined
only in JS. This isn't ideal when we want to be able to return
that fragment in other contexts (like the forthcoming oEmbed).
We've launched the new mobile web experience - yay! Since we switched over
the mobile hint for Google in reddit/reddit@63f054d, I figured it was probably
time to do so in the footer as well.
Oh, and it redirects all http traffic to https, so we should just point to that
directly.
Crawlers will first go to http://www.reddit.com, see the alternate
link for a mobile page at http://m.reddit.com, go there and then
be redirected to https://m.reddit.com since m.reddit.com is HTTPS
only via a CDN page rule. This sidesteps that extra request.
The cached version of a CommentPane is valid only if the user is not the
author of any of the comments. Previously we would retrieve and build
the comment tree for the user to check if we could use the cache, and then
on a cache miss we'd re-retrieve and re-build the tree for a logged out user
to create a version suitable for caching.
This commit changes it so the CommentBuilder can retrieve the comments that
will be shown so we can check if the user authored any of them. Then if we
can use the cache and there's a cache miss we can build the rendered tree
for the logged out user without re-retrieving, and the building happens only
at this stage.
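The before/after flow can be sketched roughly as below. This is a hedged sketch under assumed names: the class layout, `get_shown_comments`, and the dict-backed cache are illustrative stand-ins, not the actual CommentBuilder/CommentPane API.

```python
class CommentBuilder(object):
    def __init__(self, comments):
        self.comments = comments
        self._shown = None

    def get_shown_comments(self):
        """Retrieve the comments that will be displayed (done only once)."""
        if self._shown is None:
            self._shown = list(self.comments)  # stand-in for the real retrieval
        return self._shown

    def build_tree(self, logged_out=True):
        """Render the tree; the comments are already retrieved, so no re-fetch."""
        return [c["id"] for c in self.get_shown_comments()]

def get_comment_pane(builder, user_id, cache):
    shown = builder.get_shown_comments()
    if any(c["author_id"] == user_id for c in shown):
        # User authored a comment: the shared cache isn't valid for them.
        return builder.build_tree(logged_out=False)
    pane = cache.get("pane")
    if pane is None:
        # Cache miss: build once for a logged-out user, reusing the
        # already-retrieved comments, and store the result.
        pane = builder.build_tree(logged_out=True)
        cache["pane"] = pane
    return pane
```

The key point is that retrieval happens once up front, and building happens at most once, only after the cache check.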
On the search page, the "limit my search to [subreddit]" checkbox
should always reflect the current search and not the stored preference.
In the sidebar, the checkbox will continue to be checked or not
checked according to the stored preference.
An approve action should only trigger on a reported item if the rule was
specifically looking for it being reported. Previously, any rule with an
approve action could end up triggering on an item when it was re-checked
due to a report.
When the safe search feature is enabled, NSFW links and subreddits
will not appear in search results unless the over18 preference has
been set by the user.
For easy access to the over18 preference, add a link to enable NSFW
search results on the main search page and the subreddit search page.
Clicking on the link will redirect to the /over18 confirmation page.
The link will not appear if the over18 preference was already enabled.
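The filtering rule itself is simple; a hedged sketch, where the function name and result shape are assumptions rather than the actual search code:

```python
def filter_search_results(results, over18_pref):
    """Hide NSFW links/subreddits unless the user enabled the over18 pref."""
    if over18_pref:
        return results
    return [r for r in results if not r.get("over_18")]

results = [
    {"title": "cute cats", "over_18": False},
    {"title": "questionable content", "over_18": True},
]
```

With `over18_pref` unset, only the safe result survives; enabling the preference (via the /over18 confirmation page) returns the full list.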
This was leading to strange behaviour like `<$>timesince</$>` in a
random string field rendering as `just now`. It might have been necessary
when we were caching `JsonTemplate`s, but we don't do that anymore.
Having the import here means we can't import anything from `filters`
with a clean build environment. `wrapped` won't exist yet, and we need
`scriptsafe_dumps` for the JS build!
This fixes an issue where we were parsing `request.cookies` before
checking if the cookie header was even valid:
86a3d262f2/r2/r2/controllers/reddit_base.py (L1477)
This led to a lot of hard-to-track-down issues like us ending up
in `<Controller.post>` with inconsistent state due to `pre` aborting.
Before we were throwing them out in `RedditController.pre`, but we
need `request.cookies` in `MinimalController.pre` as well:
86a3d262f2/r2/r2/controllers/reddit_base.py (L1099)
Since `RedditController.pre` uses `request.cookies` before calling
`MinimalController.pre`, I took the easy route and put the header
cleaning code in `BaseController.__before__`, before either `pre`
gets run.
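In spirit, the fix looks like this. A hedged sketch only: the validity rule (reject non-ASCII bytes) and the helper names are simplified stand-ins for the actual header-cleaning code.

```python
def valid_cookie_header(header):
    """Reject Cookie headers that the cookie parser would choke on."""
    return all(ord(ch) < 128 for ch in header)

def clean_cookie_header(headers):
    """Run from BaseController.__before__, before either pre() executes,
    so request.cookies is safe to parse in MinimalController.pre and
    RedditController.pre alike."""
    if not valid_cookie_header(headers.get("Cookie", "")):
        headers.pop("Cookie", None)
    return headers
```

A valid header passes through untouched; a garbage one is dropped before anything downstream tries to parse `request.cookies`.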