set_live_promotions had a redundant "links" parameter
(a value that can be derived from its other parameter).
This removes it from the function's signature, as well
as from the return value of its partner, get_live_promotions.
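A minimal sketch of the idea, with hypothetical names and shapes (the actual reddit signature is not shown here): if the remaining parameter maps subreddit ids to the promoted links live in each one, the old "links" argument is just the union of those values and can be recomputed on demand.

```python
from itertools import chain


def links_from_srids(by_srid):
    """Derive the set of live links from a srid -> links mapping.

    `by_srid` is a hypothetical stand-in for the remaining parameter.
    """
    return set(chain.from_iterable(by_srid.values()))


def set_live_promotions(by_srid):
    # `links` used to be passed in alongside `by_srid`; it's now derived.
    links = links_from_srids(by_srid)
    ...
```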
Adds a callback to the subreddit selector on the promo edit page that updates
an available-inventory graph when the selected target changes. Shows available
front page inventory when no target is selected.
Right now this is set up in a new route. Eventually it should replace the old
promoted/edit_promo route.
TODO: change color of graph to show when inventory is running low
TODO: i18n of javascript strings
Lets admins and sponsors view ads that were scheduled to run on a certain
day or scheduled to launch on a certain day. Useful for doing damage
control when there are site problems.
The first time we tried to move /r/all/comments to the new query cache,
the row quickly grew to be massive because tombstones were piling up in
the row cache. The row started taking seconds to retrieve after only 12
hours.
We reverted.
We took gc_grace_seconds down to 30 minutes which is relatively safe in
the query cache (prunes can be re-executed without issue and lost
deletes of non-pruned things will be covered by keep_fns).
Additionally, we switched to the leveled compaction strategy for the
relevant column families.
Then we tried this again. This time, things ran fine for three days
before we started seeing out-of-memory issues on the nodes responsible
for this key. The row size was rather large again.
We reverted.
Now, we're trying again with three more changes, working on the
hypothesis that runaway growth of a hot query can happen because prunes
start failing after a small bad spike.
* tdb_cassandra.max_column_count has been drastically reduced in favor
  of xget for the models that actually need to fetch hugely wide rows.
  This reduces memory pressure from materialized thrift buffers in
  general and when the row grows large for whatever reason.
* The pruning behaviour has been tweaked to only prune a portion of the
extraneous columns if there are a large number. This should reduce
the likelihood that prunes will fail after a row has grown too much.
* This query is now in its own column family that is designed to have
its rowcache disabled.
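The capped-pruning change can be sketched as follows. The function name, batch size, and column shapes are illustrative assumptions, not the actual tdb_cassandra internals: the point is that a single prune pass deletes at most a bounded number of extraneous columns, so a prune stays small and likely to succeed even after a row has overgrown.

```python
# Illustrative cap on how many columns one prune pass may delete.
PRUNE_BATCH = 1000


def columns_to_prune(all_columns, keep_columns, batch=PRUNE_BATCH):
    """Return a bounded list of column names to delete this pass.

    Columns not in `keep_columns` are extraneous, but we only delete up
    to `batch` of them at a time; repeated passes converge eventually.
    """
    extraneous = [c for c in all_columns if c not in keep_columns]
    return extraneous[:batch]
```

A failed or skipped pass loses nothing permanently: non-pruned extraneous columns are covered by keep_fns, as noted above for the shortened gc_grace_seconds.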
Why bother shoehorning this query into this data model, you say? It's a
canary for extreme scaling of other queries. If we can't fix this
problem for this query, we should re-evaluate the whole data model.
We no longer write to the old last-modified system for stylesheets,
so there's no point in checking it. S3 handles last-modified for the
staticized stylesheets anyway.
This script determines whether changes from the upstream branch to the
current branch have introduced any additional reports from pep8, pep257,
or pyflakes.
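The comparison step of such a script can be sketched as below, assuming each tool's output has already been collected as a list of report messages per branch (invocation and parsing elided; line numbers should be stripped from messages first so unrelated shifts don't register as new reports). This is a sketch of the approach, not the script itself.

```python
from collections import Counter


def new_reports(upstream_reports, current_reports):
    """Return reports present on the current branch but not upstream.

    Multiplicity matters: if a message appears three times upstream and
    four times on the current branch, one occurrence counts as new.
    """
    remaining = Counter(upstream_reports)
    new = []
    for report in current_reports:
        if remaining[report] > 0:
            # already present upstream; consume one occurrence
            remaining[report] -= 1
        else:
            new.append(report)
    return new
```

A nonempty result would mean the branch added lint violations and the script should exit nonzero.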