Plugins must meet the following requirements to be installed on a Vagrant build:
1. Be listed in `plugins` in the Vagrantfile
2. Be cloned to the designated install directory
3. Adhere to plugin naming conventions
This applies to all open-source plugins.
With some experimentation, we've determined that the pagecache is no longer necessary for the site to function. There's no noticeable effect on p90 or p99 response times in the loggedout cache pool, and several large traffic events have occurred while the pagecache was off. By removing this layer, we reduce complexity, remove a possible source of cache-poisoning bugs, and make it easier to do things like logged-out A/B experiments.
This installs a ZooKeeper server in development and adds two configuration options: live config and secrets can now be sourced from either ZooKeeper or the local config. This allows local installs to keep the easy workflow of just modifying the INI file to test out changes, while also letting us develop ZK-backed features (such as throttles) locally.
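To make the dual sourcing concrete, here's a minimal sketch of the fallback pattern. The znode path, INI section, and function name are illustrative rather than reddit's actual API; only kazoo's client calls are real.

```python
"""Sketch: prefer ZooKeeper when it's configured, otherwise read the
local INI file. LIVE_CONFIG_ROOT and get_live_config_value are
hypothetical names for illustration only."""
import configparser

from kazoo.client import KazooClient

LIVE_CONFIG_ROOT = "/live-config"  # hypothetical znode layout


def get_live_config_value(key, ini_path="example.ini", zk_hosts=None):
    if zk_hosts:
        # ZK-enabled installs: read the value from ZooKeeper
        zk = KazooClient(hosts=zk_hosts)
        zk.start()
        try:
            data, _stat = zk.get(LIVE_CONFIG_ROOT + "/" + key)
            return data.decode("utf-8")
        finally:
            zk.stop()
    # plain local dev: the edit-the-INI-file workflow keeps working
    parser = configparser.RawConfigParser()
    parser.read(ini_path)
    return parser.get("live_config", key)
```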
`.drone.yml` is used by Drone CI to execute builds in response to pushes to the repo. `install/drone.sh` is similar to the existing `install/travis.sh` in that it does some final work to prep the environment for running tests.
In local dev we'll only ever have a single cache pool with all
keys routed to it. Set the wildcard fallback so that all keys are
routed even if the mcrouter config isn't fully up to date.
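For illustration, the dev routing amounts to something like the config below; the pool name and server address are made up, and this assumes mcrouter's `PrefixSelectorRoute` route handle, whose `wildcard` property catches any key that no prefix policy matches.

```python
"""Sketch: emit a single-pool mcrouter config in which the wildcard
route is the catch-all for every key."""
import json

dev_config = {
    "pools": {
        "local": {"servers": ["127.0.0.1:11211"]},
    },
    "route": {
        "type": "PrefixSelectorRoute",
        "policies": {},  # no prefix-specific routes in local dev
        "wildcard": "PoolRoute|local",  # everything falls through to here
    },
}

with open("mcrouter.json", "w") as config_file:
    json.dump(dev_config, config_file, indent=2)
```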
**Explanation**: `compute_time_listings` is slow. Really slow. At a quick glance, here are the jobs running right now:

```
date: Sun Jan 17 20:04:56 PST 2016
-rw-rw-r-- 1 ri ri 1.2G Jan 17 12:37 comment-week-data.dump
-rw-rw-r-- 1 ri ri 683M Jan 17 12:25 comment-week-thing.dump
-rw-rw-r-- 1 ri ri  53G Jan 16 07:13 comment-year-data.dump
-rw-rw-r-- 1 ri ri  31G Jan 16 04:37 comment-year-thing.dump
-rw-rw-r-- 1 ri ri 276M Jan 17 17:04 link-week-data.dump
-rw-rw-r-- 1 ri ri  70M Jan 17 17:03 link-week-thing.dump
```
So the currently running top-comments-by-year listing has been running for nearly 37 hours and isn't done. top-comments-by-week has been running for 8 hours. top-links-by-week has been running for 3 hours. And this is just me checking on currently running jobs, not actual completion times.
The slow bit is the actual writing to Cassandra in `write_permacache`. This is mostly because `write_permacache` is extremely naive and blocks waiting on each individual write, with no batching or parallelisation. There are a lot of ways to work around this, and some of them will become easier when we're no longer writing out to the permacache at all, but until then (and even after that) this approach lets us keep doing the simple-to-understand thing while parallelising some of the work.
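For context, the bottleneck has roughly this shape (a hypothetical sketch, not the actual `write_permacache`):

```python
def write_permacache_naive(permacache, items):
    """Hypothetical shape of the bottleneck: one synchronous Cassandra
    write per key, so total runtime is roughly len(items) multiplied by
    the per-write round-trip latency, with no overlap."""
    for key, value in items:
        permacache.set(key, value)  # blocks until this single write returns
```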
**The approach**: `compute_time_listings` is written as a mapreduce job in our `mr_tools` toolkit, with `write_permacache` as the final reducer. In `mr_tools`, you can run multiple reducers as long as each reducer is guaranteed to receive all of the lines for a given key. So this patch adds `hashdist.py`, a tool that runs multiple copies of a target job and distributes stdin lines among them by their first tab-delimited field to meet this promise (see the sketch below). (The same trick could apply to mappers and sorts too, but in my tests for this job the gains were minimal because `write_permacache` is still the bottleneck up to a large number of reducers.)
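Here's a toy version of the idea, not the real `hashdist.py`: fan stdin out to N copies of a command, routing each line by a stable hash of its first tab-delimited field so that all lines sharing a key reach the same copy.

```python
"""Illustrative sketch of distributing lines to child jobs by key hash."""
import subprocess
import sys
import zlib


def hashdist(num_jobs, command):
    jobs = [
        subprocess.Popen(command, shell=True, stdin=subprocess.PIPE)
        for _ in range(num_jobs)
    ]
    for line in sys.stdin.buffer:
        key = line.split(b"\t", 1)[0]
        # crc32 is stable across runs, unlike Python's builtin hash()
        jobs[zlib.crc32(key) % num_jobs].stdin.write(line)
    for job in jobs:
        job.stdin.close()
        job.wait()


if __name__ == "__main__":
    hashdist(int(sys.argv[1]), sys.argv[2])
```

An invocation like `... | python hashdist_sketch.py 2 'python reducer.py'` (the filenames here are placeholders) would then run two copies of the reducer in parallel.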
**Numbers**: A top-links-by-hour listing in prod right now takes 1m46.387s to run. This patch reduces that to 0m43.960s using 2 jobs (roughly a 60% savings). The top-links-by-week job mentioned above, which I killed after 3 hours, completed in 56m47.329s. The top-links-by-year job that I killed last week at over 36 hours finished in 19 hours.
**Downsides**: It costs some additional RAM: roughly 10MB for `hashdist.py` and 100MB for each additional copy of the job. It also multiplies the effective load on Cassandra by the number of jobs (though I have no reason to believe it's practical to overload Cassandra this way right now; I've tested up to 5 jobs).
**Further work**: With this we could easily do sort|reducer fusion to significantly reduce the work required by the sorter. `hashdist.py` as written is pretty slow and is only acceptable because `write_permacache` is even slower; a non-Python implementation would be straightforward and much faster.
The only way to test Travis is to deploy to master with a `.travis.yml` file and wait for Travis CI to pick up the change. To do some pre-testing here, the `Vagrantfile` has been split into two distinct machines:
* `default` is, as before, the full reddit build (we have an opportunity to rename this)
* `travis` is a minimal install with only the packages and services required to run `nosetests` against the codebase. It tries to mimic what will happen when we build this on Travis CI's workers.
As part of this addition, I've moved `install-reddit.sh` to `install/reddit.sh` and populated that `install` folder with common scripts used for both `default` and `travis` builds.