Previously, unicode domain names were only checked for validity, and the
result of the ASCII conversion was dropped. This can cause problems down
the line where we expect URLs to be ASCII.
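
A minimal sketch of the intended conversion, using Python's built-in
"idna" codec; the helper name is hypothetical, not the actual function
in the codebase:

    def domain_to_ascii(hostname):
        # The "idna" codec performs the unicode -> ASCII (punycode)
        # conversion and raises UnicodeError for invalid labels, so it
        # validates and converts in one step.
        return hostname.encode("idna").decode("ascii")

    # domain_to_ascii(u"bücher.example") -> "xn--bcher-kva.example"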
Previously, c.secure status was determined based on the domain used.
This change allows the secure status to vary independently of the
domain, for greater flexibility.
Note: it is critical that the load balancer strip any X-Forwarded-Proto
headers that may have been sent by the client.
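
A sketch of the idea, assuming a WSGI-style request object with a
headers mapping; the names here are illustrative:

    def request_is_secure(request):
        # The load balancer terminates SSL and sets X-Forwarded-Proto
        # on the way in; because it strips any client-supplied copy of
        # the header, the value can be trusted.
        return request.headers.get("X-Forwarded-Proto") == "https"

    # c.secure = request_is_secure(request)  # independent of domain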
These are media objects that can be embedded safely in an HTTPS page.
Only a handful of services support this through embedly right now, and
that list is hardcoded until embedly's service API is updated.
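
Roughly, the check amounts to an allowlist keyed on the oEmbed
provider; the service names below are illustrative, not the exact
shipped list:

    # Hardcoded until embedly's service API can report HTTPS support.
    _SECURE_EMBED_PROVIDERS = frozenset(["youtube", "vimeo", "soundcloud"])

    def is_safe_for_https(oembed):
        # "provider_name" is a standard field in oEmbed responses.
        provider = oembed.get("provider_name", "").lower()
        return provider in _SECURE_EMBED_PROVIDERS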
This is primarily about removing the dead old scrapers in favor of a
streamlined oEmbed-based system. The new embed.ly scraper uses their
Service API to determine which URLs can be scraped through them. This
removes the giant list of domains and regexes in the code and makes the
scraper_q proc always have the latest list of scrapable domains.
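
A sketch of how the scraper can refresh that list, assuming the JSON
response shape embedly documents for its Services API (a list of
services, each carrying the URL regexes it handles):

    import json
    import re
    from urllib.request import urlopen

    def fetch_scrapable_patterns():
        with urlopen("https://api.embed.ly/1/services/python") as resp:
            services = json.loads(resp.read())
        # compile every regex each service says it can scrape
        return [re.compile(regex)
                for service in services
                for regex in service.get("regex", [])]

    def is_scrapable(url, patterns):
        return any(pattern.match(url) for pattern in patterns)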
These offers change occasionally, are only available in the US, other
parts of these pages aren't internationalized, and phrases like "10% off"
cause issues with Babel, so it's just a lot simpler overall to drop this.
This level makes it so that all items of the chosen type submitted to
the subreddit will be filtered initially. This is useful for subreddits
that want to be extremely strict with their submissions, as well as for
many subreddits to be able to go into "lockdown" in the case of a raid
or something similar.
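
A minimal sketch of the effect at submission time; the setting and
attribute names are hypothetical:

    def maybe_filter(subreddit, item):
        # At this level, every new item of the given kind starts out
        # filtered and sits in the modqueue until a mod approves it.
        if subreddit.filter_level(item.kind) == "all":
            item.filtered = True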
Previously, if an invalid image name were passed to this endpoint, the
request would fail because VCssName would return an empty string and we
never checked the validity of the value.
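
A sketch of the fix; VCssName is the validator in question, while the
surrounding helper is illustrative:

    def validated_img_name(raw_name):
        name = VCssName().run(raw_name)  # "" when the name is invalid
        # Previously this empty value went unchecked; reject it instead.
        if not name:
            raise ValueError("invalid image name: %r" % raw_name)
        return name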
Fixes reddit/reddit#883.