I've received multiple requests from people asking why their site
isn't being scraped properly despite being set up with embedly. The
reason is that scraper_q wasn't restarted recently, so it was still
using an old service list.
Since the service list is memoized, it costs essentially nothing to
"fetch" it on every iteration. This also lets the lookup move out of
the higher layers of the queue, which shouldn't have knowledge of
embedly, down to the correct place.
Shows a mix of content from:
- subreddits recommended for the user (based on subscriptions and multis)
- rising threads
- items from discovery-focused subreddits
Listing items emphasize the subreddit name and have feedback controls.
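One plausible way to build such a mixed listing is round-robin
interleaving with deduplication; the function and argument names here
are hypothetical, and the real mixer may weight sources differently.

```python
from itertools import chain, zip_longest

def mix_listing(recommended, rising, discovery, limit=25):
    """Round-robin items from the three sources into one listing.

    `recommended`, `rising`, and `discovery` are assumed to be lists of
    thread identifiers; duplicates across sources are dropped.
    """
    interleaved = chain.from_iterable(
        zip_longest(recommended, rising, discovery))
    seen = set()
    out = []
    for item in interleaved:
        # zip_longest pads the shorter sources with None.
        if item is None or item in seen:
            continue
        seen.add(item)
        out.append(item)
        if len(out) == limit:
            break
    return out
```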
The AccountSRPrefs class builds a user preferences model on-the-fly from
subscriptions, multireddits, and a record of recent user feedback.
The AccountSRFeedback column family stores a user's recent interactions
with the recommendation UI. For example, it records which srs the user
dismissed as uninteresting and tracks which srs were recommended
recently, so we don't show the same ones too often. Each type of
feedback has a TTL, after which it expires from the db.
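To illustrate the TTL behavior, here's an in-memory stand-in: the class
name echoes AccountSRFeedback, but the API, feedback types, and TTL
values are all assumptions, not the real column family.

```python
import time

# Assumed TTLs per feedback type, in seconds; the real values live in
# the column family definition.
FEEDBACK_TTLS = {"dismissed": 3 * 86400, "recommended": 86400}

class AccountSRFeedbackSketch:
    """In-memory stand-in for the AccountSRFeedback column family."""

    def __init__(self):
        # (user, sr) -> (feedback_type, expiry timestamp)
        self._rows = {}

    def record(self, user, sr, feedback_type, now=None):
        now = time.time() if now is None else now
        ttl = FEEDBACK_TTLS[feedback_type]
        self._rows[(user, sr)] = (feedback_type, now + ttl)

    def recent(self, user, now=None):
        """Return srs with unexpired feedback for this user."""
        now = time.time() if now is None else now
        return {sr: ftype
                for (u, sr), (ftype, expires) in self._rows.items()
                if u == user and expires > now}
```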
Consolidate duplicated logic and make methods more direct. There were
many monolithic methods that did several things at once and were called
from all over the codebase when the desired action was just one piece
of the method.
`create_customer` performs address verification, and we choose not to
make any charges if verification fails. We need to wait until that
verification has passed before creating a subscription.
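The ordering constraint can be sketched as follows. The gateway methods,
the `verified` flag, and the stub gateway are all illustrative stand-ins
for the real payment API, not its actual interface.

```python
class VerificationError(Exception):
    pass

class FakeGateway:
    """Stub payment gateway used only for this sketch."""

    def create_customer(self, user, card):
        # Pretend address verification happens inside create_customer.
        return {"id": "cust_1", "verified": card == "good card"}

    def create_subscription(self, customer_id, plan):
        return {"customer": customer_id, "plan": plan}

def signup(gateway, user, card, plan):
    # Create the customer first; create_customer runs the address
    # verification, and no charges are made when it fails.
    customer = gateway.create_customer(user, card)
    if not customer["verified"]:
        raise VerificationError("address verification failed")
    # Only after verification passes do we create the subscription.
    return gateway.create_subscription(customer["id"], plan)
```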
Since iframes are "replaced elements", they don't automatically fill
the available space when left and right absolute positions are
specified. The hacky solution is to add a container <div> that we can
size appropriately and then fill with the <iframe>.