If the scrape of a link fails for some reason, this adds a "retry
thumb" button to the link, which re-adds it to the queue to try
again. The button appears to the same people as the "nsfw" button:
the moderators of the subreddit, the submitter, or the sponsor of a
promoted link.
Helpful because Amazon S3 doesn't support switching between gzipped and
non-gzipped content, so resources are often uploaded pre-compressed
with a gzip Content-Encoding header. Because all modern browsers support
gzip, S3 always serves the gzipped bytes and it's generally not a
problem. Urllib, however, doesn't decode gzip, so gzipped resources
currently fail to decode when scraped.
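A minimal sketch of the workaround: since urllib hands back the raw body, the scraper has to check the Content-Encoding header and decompress by hand. The `decode_body` and `fetch` names here are hypothetical, not the actual scraper functions.

```python
import gzip
import io
import urllib.request


def decode_body(body, content_encoding):
    """Decompress a response body by hand when it was served gzipped.

    urllib does not transparently decode Content-Encoding: gzip, so a
    resource stored pre-compressed on S3 arrives as raw gzip bytes.
    """
    if content_encoding == "gzip":
        return gzip.decompress(body)
    return body


def fetch(url):
    # Hypothetical scraper fetch: read the body, then decode it based
    # on the Content-Encoding header the server sent back.
    resp = urllib.request.urlopen(url)
    return decode_body(resp.read(), resp.headers.get("Content-Encoding"))
```

Browsers do this decoding automatically, which is why the problem only shows up in the scraper.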
The user may have bought the gold as a gift and not have gold
themselves. The user receiving the gold will get a PM once the
transaction completes, and that message will let them know about the
gold lounge.
The "explore-page" css class was removed at some point, but it's needed
for correct placement of the "Explore" title (since this page doesn't
have hot, new, etc. tabs).
get_recommendations was recently changed to accept id36s instead of subreddit
objects for the omit list. This change updates the recommender api endpoint
to match. It also makes the VSRByNames() validator return an empty dict in
case of error so it's safe to call .values() on the result.
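The validator change boils down to "return an empty dict on bad input instead of None," so callers never have to guard before calling `.values()`. A hypothetical sketch of that contract (`vsr_by_names` and the lookup table are illustrative, not the real validator):

```python
def vsr_by_names(sr_names, sr_id36s):
    """Map a comma-separated string of subreddit names to their id36s.

    Sketch of the VSRByNames contract: on any bad input (None, empty,
    or non-string) it returns {} rather than None, so callers can
    unconditionally call .values() on the result.
    """
    if not isinstance(sr_names, str):
        return {}
    names = [n.strip() for n in sr_names.split(",") if n.strip()]
    return {n: sr_id36s[n] for n in names if n in sr_id36s}


# The endpoint can then build the omit list of id36s without a None check:
def omit_id36s(sr_names, sr_id36s):
    return list(vsr_by_names(sr_names, sr_id36s).values())
```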
Note: Fixes the broken "more suggestions" link in multi recommendations.
I've received multiple requests from people asking why their site
isn't scraping properly despite being set up with embedly. The reason
is that the scraper_q process hadn't been restarted recently and was
still using a stale service list.
Since the service list is memoized, it costs little to "fetch" it on
every iteration. This also lets the lookup move down from the higher
layers of the queue, which shouldn't know about embedly, to the
correct place.
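A minimal sketch of why the per-iteration fetch is cheap: with a TTL-based memoizer, only the first call (or a call after expiry) does the expensive work, and every other call returns the cached list. The decorator, the TTL, and `get_embedly_services` are hypothetical names for illustration.

```python
import functools
import time


def memoize_with_ttl(ttl_seconds):
    """Cache a zero-argument fetch for ttl_seconds (hypothetical sketch)."""
    def decorator(fn):
        cache = {}

        @functools.wraps(fn)
        def wrapper():
            now = time.time()
            if "value" not in cache or now - cache["at"] > ttl_seconds:
                cache["value"] = fn()
                cache["at"] = now
            return cache["value"]
        return wrapper
    return decorator


@memoize_with_ttl(300)
def get_embedly_services():
    # The expensive fetch of the embedly service list would go here;
    # a placeholder list stands in for the real HTTP call.
    return ["service-a", "service-b"]
```

The queue loop can then call `get_embedly_services()` on every item it processes: a restart (or TTL expiry) picks up a fresh list, and no higher layer needs to know where the list comes from.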