Merge remote-tracking branch 'origin/devel' into hash-login-tokens

Conflicts:
	History.md
This commit is contained in:
Emily Stark
2013-12-26 17:24:07 -06:00
71 changed files with 1314 additions and 309 deletions


@@ -8,9 +8,14 @@
# For any emails that show up in the shortlog that aren't in one of
# these lists, figure out their GitHub username and add them.
GITHUB: AlexeyMK <alexey@alexeymk.com>
GITHUB: ansman <nicklas@ansman.se>
GITHUB: awwx <andrew.wilcox@gmail.com>
GITHUB: codeinthehole <david.winterbottom@gmail.com>
GITHUB: dandv <ddascalescu+github@gmail.com>
GITHUB: DenisGorbachev <Denis.Gorbachev@faster-than-wind.ru>
GITHUB: emgee3 <hello@gravitronic.com>
GITHUB: FooBarWidget <honglilai@gmail.com>
GITHUB: jacott <geoffjacobsen@gmail.com>
GITHUB: Maxhodges <Max@whiterabbitpress.com>
GITHUB: meawoppl <meawoppl@gmail.com>
@@ -31,6 +36,7 @@ METEOR: estark37 <emily@meteor.com>
METEOR: estark37 <estark37@gmail.com>
METEOR: glasser <glasser@meteor.com>
METEOR: gschmidt <geoff@geoffschmidt.com>
METEOR: karayu <lele.yu@gmail.com>
METEOR: n1mmy <nim@meteor.com>
METEOR: sixolet <naomi@meteor.com>
METEOR: Slava <slava@meteor.com>


@@ -1,15 +1,82 @@
## v.NEXT
* Hash login tokens before storing them in the database.
## v0.7.0.1
* Two fixes to `meteor run` Mongo startup bugs that could lead to hangs with the
message "Initializing mongo database... this may take a moment.". #1696
* Apply the Node patch to 0.10.24 as well (see the 0.7.0 section for details).
* Fix gratuitous IE7 incompatibility. #1690
## v0.7.0
This version of Meteor contains a patch for a bug in Node 0.10 which
most commonly affects websockets. The patch is against Node version
0.10.22 and 0.10.23. We strongly recommend using one of these precise
versions of Node in production so that the patch will be applied. If you
use a newer version of Node with this version of Meteor, Meteor will not
apply the patch and will instead disable websockets.
* Rework how Meteor gets realtime database updates from MongoDB. Meteor
now reads the MongoDB "oplog" -- a special collection that records all
the write operations as they are applied to your database. This means
changes to the database are instantly noticed and reflected in Meteor,
whether they originated from Meteor or from an external database
client. Oplog tailing is automatically enabled in development mode
with `meteor run`, and can be enabled in production with the
`MONGO_OPLOG_URL` environment variable. Currently the only supported
selectors are equality checks; `$`-operators, `limit` and `skip`
queries fall back to the original poll-and-diff algorithm. See
https://github.com/meteor/meteor/wiki/Oplog-Observe-Driver
for details.
* Add `Meteor.onConnection` and add `this.connection` to method
invocations and publish functions. These can be used to store data
associated with individual clients between subscriptions and method
calls. See http://docs.meteor.com/#meteor_onconnection for details. #1611
* Rework hot code push. The new `autoupdate` package drives automatic
reloads on update using standard DDP messages instead of a hardcoded
message at DDP startup. Now the hot code push only triggers when
client code changes; server-only code changes will not cause the page
to reload.
* New `facts` package publishes internal statistics about Meteor.
* Add an explicit check that publish functions return a cursor, an array
of cursors, or a falsey value. This is a safety check to prevent
users from accidentally returning Collection.findOne() or some other
value and expecting it to be published.
* Implement `$each`, `$sort`, and `$slice` options for minimongo's `$push`
modifier. #1492
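The combined semantics (append the `$each` elements, then `$sort` the array, then trim it with `$slice`) can be modeled in a few lines of plain JavaScript — a simplified sketch, not minimongo's implementation; it assumes `$each` is present, a scalar sort direction of `1`/`-1`, and a numeric `$slice`:

```javascript
// Apply a {$push: {field: {$each, $sort, $slice}}} modifier to a plain
// object: append every $each element, sort the whole array, then trim to
// $slice (a negative $slice keeps the last N elements, as in MongoDB).
function applyPushModifier(doc, field, mod) {
  var arr = (doc[field] || []).concat(mod.$each);
  if (mod.$sort !== undefined) {
    arr.sort(function (a, b) {
      return mod.$sort * (a < b ? -1 : a > b ? 1 : 0);
    });
  }
  if (mod.$slice !== undefined) {
    arr = mod.$slice < 0 ? arr.slice(mod.$slice) : arr.slice(0, mod.$slice);
  }
  doc[field] = arr;
  return doc;
}
```

For example, `{$each: [9, 1], $sort: 1, $slice: -3}` on `{scores: [5, 3]}` keeps only the three highest scores.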
* Introduce `--raw-logs` option to `meteor run` to disable log
coloring and timestamps.
* Add `WebAppInternals.setBundledJsCssPrefix()` to control where the
client loads bundled JavaScript and CSS files. This allows serving
files from a CDN to decrease page load times and reduce server load.
* Attempt to exit cleanly on `SIGHUP`. Stop accepting incoming
connections, kill DDP connections, and finish all outstanding requests
for static assets.
* In the HTTP server, only keep sockets with no active HTTP requests alive for 5
seconds.
* Fix handling of `fields` option in minimongo when only `_id` is present. #1651
* Fix issue where setting `process.env.MAIL_URL` in app code would not
alter where mail was sent. This was a regression in 0.6.6 from 0.6.5. #1649
* Use stderr instead of stdout (for easier automation in shell scripts) when
prompting for passwords and when downloading the dev bundle. #1600
* Bundler failures cause non-zero exit code in `meteor run`. #1515
@@ -18,22 +85,40 @@
* Support `EJSON.clone` for `Meteor.Error`. As a result, they are properly
stringified in DDP even if thrown through a `Future`. #1482
* Fail explicitly when publishing non-cursors.
* Fix passing `transform: null` option to `collection.allow()` to disable
transformation in validators. #1659
* Fix livedata error on `this.removed` during session shutdown. #1540 #1553
* Fix incompatibility with Phusion Passenger by removing an unused line. #1613
* Ensure install script creates /usr/local on machines where it does not
exist (eg. fresh install of OSX Mavericks).
* Set x-forwarded-* headers in `meteor run`.
* Clean up package dirs containing only ".build".
* Check for matching hostname before doing end-of-oauth redirect.
* Only count files that actually go in the cache towards the `appcache`
size check. #1653
* Increase the maximum size spiderable will return for a page from 200kB
to 5MB.
* Upgraded dependencies:
* SockJS server from 0.3.7 to 0.3.8, including new faye-websocket module.
* Node from 0.10.21 to 0.10.22
* MongoDB from 2.4.6 to 2.4.8
* clean-css from 1.1.2 to 2.0.2
* uglify-js from a fork of 2.4.0 to 2.4.7
* handlebars npm module no longer available outside of handlebars package
Patches contributed by GitHub users AlexeyMK, awwx, dandv, DenisGorbachev,
emgee3, FooBarWidget, mitar, mcbain, rzymek, and sdarnell.
## v0.6.6.3


@@ -217,6 +217,7 @@ Copyright (c) 2011 Esa-Matti Suuronen esa-matti@suuronen.org
----------
node-gyp: https://github.com/TooTallNate/node-gyp
keypress: https://github.com/TooTallNate/keypress
bindings: https://github.com/TooTallNate/node-bindings
----------
Copyright (c) 2012 Nathan Rajlich <nathan@tootallnate.net>
@@ -287,7 +288,7 @@ archy: https://github.com/substack/node-archy
shell-quote: https://github.com/substack/node-shell-quote
deep-equal: https://github.com/substack/node-deep-equal
editor: https://github.com/substack/node-editor
minimist: https://github.com/substack/minimist
quotemeta: https://github.com/substack/quotemeta
----------
@@ -542,6 +543,14 @@ geojson-utils: https://github.com/maxogden/geojson-js-utils
Copyright (c) 2010 Max Ogden
----------
bcrypt: https://github.com/ncb000gt/node.bcrypt.js
----------
Copyright (c) 2010 Nicholas Campbell
==============
Apache License
==============
@@ -641,11 +650,10 @@ BSD Licenses
============
----------
uglify-js: https://github.com/mishoo/UglifyJS2
----------
Copyright 2012-2013 (c) Mihai Bazon <mihai.bazon@gmail.com>
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
@@ -1263,6 +1271,7 @@ IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
----------
npm-user-validate: https://github.com/robertkowalski/npm-user-validate
github-url-from-username-repo: https://github.com/robertkowalski/github-url-from-username-repo
----------
Copyright (c) Robert Kowalski
@@ -1738,7 +1747,9 @@ The externally maintained libraries used by libuv are:
----------
nodejs: http://nodejs.org/
readable-stream: https://github.com/isaacs/readable-stream
npm: https://github.com/isaacs/npm
init-package-json: https://github.com/isaacs/init-package-json
----------


@@ -40,7 +40,8 @@ can run Meteor directly from a git checkout.
If you're the sort of person who likes to build everything from scratch,
you can build all the Meteor dependencies (node.js, npm, mongodb, etc)
with the provided script. This requires git, a C and C++ compiler,
autotools, and scons. If you do not run this script, Meteor will
automatically download pre-compiled binaries when you first run it.
# OPTIONAL


@@ -1 +1 @@
sso-1


@@ -154,8 +154,7 @@ other packages. However sometimes load order dependencies in your
application are unavoidable. The JavaScript and CSS files in an
application are loaded according to these rules:
* Files in directories named `lib` are loaded first.
* Files that match `main.*` are loaded after everything else.
@@ -826,10 +825,10 @@ To get started, run
This command will generate a fully-contained Node.js application in the form of
a tarball. To run this application, you need to provide Node.js 0.10 and a
MongoDB server. (The current release of Meteor has been tested with Node
0.10.22, and is recommended for use with 0.10.22 through 0.10.24 only.) You can
then run the application by invoking node, specifying the HTTP port for the
application to listen on, and the MongoDB endpoint. If you don't already have a
MongoDB server, we can recommend our friends at [MongoHQ](http://mongohq.com).
$ PORT=3000 MONGO_URL=mongodb://localhost:27017/myapp node bundle/main.js


@@ -1,5 +1,5 @@
// While galaxy apps are on their own special meteor releases, override
// Meteor.release here.
if (Meteor.isClient) {
Meteor.release = Meteor.release ? "0.7.0.1" : undefined;
}


@@ -1 +1 @@
0.7.0.1


@@ -1 +1 @@
0.7.0.1


@@ -1 +1 @@
0.7.0.1


@@ -1 +1 @@
0.7.0.1


@@ -1,6 +1,6 @@
#!/bin/bash
BUNDLE_VERSION=0.3.26
# OS Check. Put here because here is where we download the precompiled
# bundles that are arch specific.


@@ -480,10 +480,10 @@ if (Meteor.isClient) (function () {
function (test, expect) {
var self = this;
// Test that deleting a user logs out that user's connections.
Meteor.loginWithPassword(this.username, this.password, expect(function (err) {
test.isFalse(err);
Meteor.call("removeUser", self.username);
}));
},
waitForLoggedOutStep
]);


@@ -13,15 +13,23 @@ AppConfig.findGalaxy = _.once(function () {
var ultra = AppConfig.findGalaxy();
var subFuture = new Future();
var subFutureJobs = new Future();
if (ultra) {
ultra.subscribe("oneApp", process.env.GALAXY_APP, subFuture.resolver());
ultra.subscribe("oneJob", process.env.GALAXY_JOB, subFutureJobs.resolver());
}
var Apps;
var Jobs;
var Services;
var collectionFuture = new Future();
Meteor.startup(function () {
if (ultra) {
Apps = new Meteor.Collection("apps", {
connection: ultra
});
Jobs = new Meteor.Collection("jobs", {
connection: ultra
});
Services = new Meteor.Collection('services', {
@@ -36,9 +44,15 @@ Meteor.startup(function () {
// places.
AppConfig._getAppCollection = function () {
collectionFuture.wait();
return Apps;
};
AppConfig._getJobsCollection = function () {
collectionFuture.wait();
return Jobs;
};
var staticAppConfig;
try {
@@ -72,25 +86,41 @@ AppConfig.getAppConfig = function () {
return staticAppConfig;
}
subFuture.wait();
var myApp = Apps.findOne(process.env.GALAXY_APP);
if (!myApp) {
throw new Error("there is no app config for this app");
}
var config = myApp.config;
return config;
};
AppConfig.getStarForThisJob = function () {
if (ultra) {
subFutureJobs.wait();
var job = Jobs.findOne(process.env.GALAXY_JOB);
if (job) {
return job.star;
}
}
return null;
};
AppConfig.configurePackage = function (packageName, configure) {
var appConfig = AppConfig.getAppConfig(); // Will either be based in the env var,
// or wait for galaxy to connect.
var lastConfig =
(appConfig && appConfig.packages &&
appConfig.packages[packageName]) || {};
// Always call the configure callback "soon" even if the initial configuration
// is empty (synchronously, though deferred would be OK).
// XXX make sure that all callers of configurePackage deal well with multiple
// callback invocations! eg, email does not
configure(lastConfig);
var configureIfDifferent = function (app) {
if (!EJSON.equals(
app.config && app.config.packages && app.config.packages[packageName],
lastConfig)) {
lastConfig = app.config.packages[packageName];
configure(lastConfig);
}
@@ -104,7 +134,7 @@ AppConfig.configurePackage = function (packageName, configure) {
// there's a Meteor.startup() that produces the various collections, make
// sure it runs first before we continue.
collectionFuture.wait();
subHandle = Apps.find(process.env.GALAXY_APP).observe({
added: configureIfDifferent,
changed: configureIfDifferent
});
@@ -119,7 +149,6 @@ AppConfig.configurePackage = function (packageName, configure) {
};
};
AppConfig.configureService = function (serviceName, configure) {
if (ultra) {
// there's a Meteor.startup() that produces the various collections, make


@@ -43,19 +43,48 @@ Autoupdate.newClientAvailable = function () {
};
var retry = new Retry({
// Unlike the stream reconnect use of Retry, which we want to be instant
// in normal operation, this is a wacky failure. We don't want to retry
// right away, we can start slowly.
//
// A better way than timeconstants here might be to use the knowledge
// of when we reconnect to help trigger these retries. Typically, the
// server fixing code will result in a restart and reconnect, but
// potentially the subscription could have a transient error.
minCount: 0, // don't do any immediate retries
baseTimeout: 30*1000 // start with 30s
});
var failures = 0;
Autoupdate._retrySubscription = function () {
Meteor.subscribe("meteor_autoupdate_clientVersions", {
onError: function (error) {
Meteor._debug("autoupdate subscription failed:", error);
failures++;
retry.retryLater(failures, function () {
// Just retry making the subscription, don't reload the whole
// page. While reloading would catch more cases (for example,
// the server went back a version and is now doing old-style hot
// code push), it would also be more prone to reload loops,
// which look really bad to the user. Just retrying the
// subscription over DDP means it is at least possible to fix by
// updating the server.
Autoupdate._retrySubscription();
});
},
onReady: function () {
if (Package.reload) {
Deps.autorun(function (computation) {
if (ClientVersions.findOne({current: true}) &&
(! ClientVersions.findOne({_id: autoupdateVersion}))) {
computation.stop();
Package.reload.Reload._reload();
}
});
}
}
});
};
Autoupdate._retrySubscription();
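The comments above spell out the intended schedule — no immediate retries, backing off from a 30-second base. One plausible way to compute such delays (an assumed sketch; the actual `retry` package's formula may differ, e.g. by adding jitter or a cap):

```javascript
// Compute the delay before retry number `count` (0-based): the first
// `minCount` retries are immediate, after that delays double starting
// from `baseTimeout`, capped at `maxTimeout` when given.
function retryDelay(count, options) {
  if (count < options.minCount) return 0;
  var delay = options.baseTimeout * Math.pow(2, count - options.minCount);
  return options.maxTimeout ? Math.min(delay, options.maxTimeout) : delay;
}
```

With `{minCount: 0, baseTimeout: 30 * 1000}` as in the code above, even the first failure waits 30 seconds before resubscribing.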


@@ -1,10 +1,11 @@
Package.describe({
summary: "Update the client when new client code is available",
internal: true
});
Package.on_use(function (api) {
api.use('webapp', 'server');
api.use(['deps', 'retry'], 'client');
api.use(['livedata', 'mongo-livedata'], ['client', 'server']);
api.use('deps', 'client');
api.use('reload', 'client', {weak: true});


@@ -0,0 +1,4 @@
# Normally, variables should be file-local, but this file is loaded with {bare:
# true}, so it should be readable by bare_tests.js
VariableSetByCoffeeBareTestSetup = 5678


@@ -0,0 +1,3 @@
Tinytest.add("coffeescript - bare", function (test) {
test.equal(VariableSetByCoffeeBareTestSetup, 5678);
});


@@ -14,6 +14,8 @@ Package._transitional_registerBuildPlugin({
Package.on_test(function (api) {
api.use(['coffeescript', 'tinytest']);
api.use(['coffeescript-test-helper'], ['client', 'server']);
api.add_files('bare_test_setup.coffee', ['client'], {bare: true});
api.add_files('bare_tests.js', ['client']);
api.add_files([
'coffeescript_test_setup.js',
'tests/coffeescript_tests.coffee',


@@ -149,7 +149,8 @@ var handler = function (compileStep, isLiterate) {
path: outputFile,
sourcePath: compileStep.inputPath,
data: sourceWithMap.source,
sourceMap: sourceWithMap.sourceMap
sourceMap: sourceWithMap.sourceMap,
bare: compileStep.fileOptions.bare
});
};


@@ -33,10 +33,11 @@ _.extend(Ctl, {
var numServers = Ctl.getJobsByApp(
Ctl.myAppName(), {program: program, done: false}).count();
if (numServers === 0) {
return Ctl.startServerlikeProgram(program, tags, admin);
} else {
console.log(program, "already running.");
}
return null;
},
startServerlikeProgram: function (program, tags, admin) {
@@ -47,6 +48,7 @@ _.extend(Ctl, {
var proxyConfig;
var bindPathPrefix = "";
var jobId = null;
if (admin) {
bindPathPrefix = "/" + encodeURIComponent(Ctl.myAppName()).replace(/\./g, '_');
}
@@ -60,7 +62,7 @@ _.extend(Ctl, {
});
// XXX args? env?
jobId = Ctl.prettyCall(Ctl.findGalaxy(), 'run', [Ctl.myAppName(), program, {
exitPolicy: 'restart',
env: {
ROOT_URL: "https://" + appConfig.sitename + bindPathPrefix,
@@ -80,6 +82,7 @@ _.extend(Ctl, {
tags: tags
}]);
console.log("Started", program);
return jobId;
},
findCommand: function (name) {
@@ -130,6 +133,35 @@ _.extend(Ctl, {
}
},
updateProxyActiveTags: function (tags) {
var proxy;
var proxyTagSwitchFuture = new Future;
AppConfig.configureService('proxy', function (proxyService) {
try {
proxy = Follower.connect(proxyService.providers.proxy, {
group: "proxy"
});
proxy.call('updateTags', Ctl.myAppName(), tags);
proxy.disconnect();
if (!proxyTagSwitchFuture.isResolved())
proxyTagSwitchFuture['return']();
} catch (e) {
if (!proxyTagSwitchFuture.isResolved())
proxyTagSwitchFuture['throw'](e);
}
});
var proxyTimeout = Meteor.setTimeout(function () {
if (!proxyTagSwitchFuture.isResolved())
proxyTagSwitchFuture['throw'](
new Error("timed out looking for a proxy " +
"or trying to change tags on it " +
proxy.status().status));
}, 10*1000);
proxyTagSwitchFuture.wait();
Meteor.clearTimeout(proxyTimeout);
},
jobsCollection: _.once(function () {
return new Meteor.Collection("jobs", {manager: Ctl.findGalaxy()});
}),


@@ -6,7 +6,7 @@ Package.describe({
Npm.depends({optimist: '0.6.0'});
Package.on_use(function (api) {
api.use(['underscore', 'livedata', 'mongo-livedata', 'follower-livedata', 'application-configuration'], 'server');
api.export('Ctl', 'server');
api.add_files('ctl-helper.js', 'server');
});


@@ -1,3 +1,5 @@
var Future = Npm.require("fibers/future");
Ctl.Commands.push({
name: "help",
func: function (argv) {
@@ -35,6 +37,10 @@ var startFun = function (argv) {
);
process.exit(1);
}
Ctl.subscribeToAppJobs(Ctl.myAppName());
var jobs = Ctl.jobsCollection();
var thisJob = jobs.findOne(Ctl.myJobId());
Ctl.updateProxyActiveTags(['', thisJob.star]);
if (Ctl.hasProgram("console")) {
console.log("starting console for app", Ctl.myAppName());
Ctl.startServerlikeProgramIfNotPresent("console", ["admin"], true);
@@ -89,58 +95,90 @@ Ctl.Commands.push({
func: stopFun
});
var waitForDone = function (jobCollection, jobId) {
var fut = new Future();
var found = false;
try {
var observation = jobCollection.find(jobId).observe({
added: function (doc) {
found = true;
if (doc.done)
fut['return']();
},
changed: function (doc) {
if (doc.done)
fut['return']();
},
removed: function (doc) {
fut['return']();
}
});
// if the document doesn't exist at all, it's certainly done.
if (!found)
fut['return']();
fut.wait();
} finally {
observation.stop();
}
};
Ctl.Commands.push({
name: "beginUpdate",
help: "Stop this app to begin an update",
func: stopFun
});
Ctl.Commands.push({
name: "scale",
help: "Scale jobs",
func: function (argv) {
if (argv.help || argv._.length === 0 || _.contains(argv._, 'ctl')) {
process.stderr.write(
"Usage: ctl scale program1=n [...] \n" +
"\n" +
"Scales some programs. Runs or kills jobs until there are n non-done jobs\n" +
"in that state.\n"
);
process.exit(1);
}
Ctl.subscribeToAppJobs(Ctl.myAppName());
var jobs = Ctl.jobsCollection();
var thisJob = jobs.findOne(Ctl.myJobId());
// Look at all the server jobs that are on the old star.
var oldJobSelector = {
app: Ctl.myAppName(),
star: {$ne: thisJob.star},
program: "server",
done: false
};
var oldServers = jobs.find(oldJobSelector).fetch();
// Start a new job for each of them.
var newServersAlreadyPresent = jobs.find({
app: Ctl.myAppName(),
star: thisJob.star,
program: "server",
done: false
}).count();
// discount any new servers we've already started.
oldServers.splice(0, newServersAlreadyPresent);
console.log("starting " + oldServers.length + " new servers to match old");
_.each(oldServers, function (oldServer) {
Ctl.startServerlikeProgram("server",
oldServer.tags,
oldServer.env.ADMIN_APP);
});
// Wait for them all to come up and bind to the proxy.
Meteor._sleepForMs(10000); // XXX: Eventually make sure they're proxy-bound.
Ctl.updateProxyActiveTags(['', thisJob.star]);
// (eventually) tell the proxy to switch over to using the new star
// One by one, kill all the old star's server jobs.
var jobToKill = jobs.findOne(oldJobSelector);
while (jobToKill) {
Ctl.kill("server", jobToKill._id);
// Wait for it to go down
waitForDone(jobs, jobToKill._id);
// Spend some time in between to allow any reconnect storm to die down.
Meteor._sleepForMs(5000);
jobToKill = jobs.findOne(oldJobSelector);
}
var scales = _.map(argv._, function (arg) {
var m = arg.match(/^(.+)=(\d+)$/);
if (!m) {
console.log("Bad scaling argument; should be program=number.");
process.exit(1);
}
return {program: m[1], scale: parseInt(m[2])};
});
_.each(scales, function (s) {
var jobs = Ctl.getJobsByApp(
Ctl.myAppName(), {program: s.program, done: false});
jobs.forEach(function (job) {
--s.scale;
// Is this an extraneous job, more than the number that we need? Kill
// it!
if (s.scale < 0) {
Ctl.kill(s.program, job._id);
}
});
// Now start any jobs that are necessary.
if (s.scale <= 0)
return;
console.log("Starting %d jobs for %s", s.scale, s.program);
_.times(s.scale, function () {
// XXX args? env?
Ctl.prettyCall(Ctl.findGalaxy(), 'run', [Ctl.myAppName(), s.program, {
exitPolicy: 'restart'
}]);
});
// Now kill all old non-server jobs. They're less important.
jobs.find({
app: Ctl.myAppName(),
star: {$ne: thisJob.star},
program: {$ne: "server"},
done: false
}).forEach(function (job) {
Ctl.kill(job.program, job._id);
});
// fin
process.exit(0);
}
});


@@ -4,7 +4,7 @@ Package.describe({
});
Package.on_use(function (api) {
api.use(['underscore', 'livedata', 'mongo-livedata', 'ctl-helper', 'application-configuration', 'follower-livedata'], 'server');
api.export('main', 'server');
api.add_files('ctl.js', 'server');
});


@@ -78,6 +78,8 @@ var devModeSend = function (mc) {
// This approach does not prevent other writers to stdout from interleaving.
stream.write("====== BEGIN MAIL #" + devmode_mail_id + " ======\n");
stream.write("(Mail not sent; to enable sending, set the MAIL_URL " +
"environment variable.)\n");
mc.streamMessage();
mc.pipe(stream, {end: false});
var future = new Future;


@@ -20,6 +20,8 @@ Tinytest.add("email - dev mode smoke test", function (test) {
// XXX brittle if mailcomposer changes header order, etc
test.equal(stream.getContentsAsString("utf8"),
"====== BEGIN MAIL #0 ======\n" +
"(Mail not sent; to enable sending, set the MAIL_URL " +
"environment variable.)\n" +
"MIME-Version: 1.0\r\n" +
"X-Meteor-Test: a custom header\r\n" +
"From: foo@example.com\r\n" +


@@ -1,6 +1,6 @@
Facts = {};
var serverFactsCollection = 'meteor_Facts_server';
if (Meteor.isServer) {
// By default, we publish facts to no user if autopublish is off, and to all
@@ -45,7 +45,7 @@ if (Meteor.isServer) {
// called?
Meteor.defer(function () {
// XXX Also publish facts-by-package.
Meteor.publish("meteor_facts", function () {
var sub = this;
if (!userIdFilter(this.userId)) {
sub.ready();
@@ -59,13 +59,10 @@ if (Meteor.isServer) {
activeSubscriptions = _.without(activeSubscriptions, sub);
});
sub.ready();
}, {is_auto: true});
});
} else {
Facts.server = new Meteor.Collection(serverFactsCollection);
Template.serverFacts.factsByPackage = function () {
return Facts.server.find();
@@ -78,4 +75,17 @@ if (Meteor.isServer) {
});
return factArray;
};
// Subscribe when the template is first made, and unsubscribe when it
// is removed. If for some reason puts two copies of the template on
// the screen at once, we'll subscribe twice. Meh.
Template.serverFacts.created = function () {
this._stopHandle = Meteor.subscribe("meteor_facts");
};
Template.serverFacts.destroyed = function () {
if (this._stopHandle) {
this._stopHandle.stop();
this._stopHandle = null;
}
};
}


@@ -1,5 +1,6 @@
Package.describe({
summary: "Publish internal app statistics",
internal: true
});
Package.on_use(function (api) {


@@ -1,7 +1,7 @@
{
"dependencies": {
"handlebars": {
"version": "https://github.com/meteor/handlebars.js/tarball/543ec6689cf663cfda2d8f26c3c27de40aad7bd5",
"dependencies": {
"optimist": {
"version": "0.3.7",


@@ -1,7 +1,7 @@
{
"dependencies": {
"esprima": {
"version": "https://github.com/ariya/esprima/tarball/2a41dbf0ddadade0b09a9a7cc9a0c8df9c434018"
},
"escope": {
"version": "1.0.0",


@@ -45,7 +45,7 @@ _.extend(MethodInvocation.prototype, {
throw new Error("Can't call setUserId in a method after calling unblock");
self.userId = userId;
self._setUserId(userId);
}
});
parseDDP = function (stringMessage) {


@@ -6,7 +6,8 @@ Package.describe({
Npm.depends({sockjs: "0.3.8", websocket: "1.0.8"});
Package.on_use(function (api) {
api.use(['check', 'random', 'ejson', 'json', 'underscore', 'deps',
'logging', 'retry'],
['client', 'server']);
// It is OK to use this package on a server architecture without making a
@@ -33,7 +34,6 @@ Package.on_use(function (api) {
// Transport
api.use('reload', 'client', {weak: true});
api.add_files('common.js');
api.add_files(['sockjs-0.3.4.js', 'stream_client_sockjs.js'], 'client');
api.add_files('stream_client_nodejs.js', 'server');
api.add_files('stream_client_common.js', ['client', 'server']);


@@ -74,7 +74,7 @@ StreamServer = function () {
// XXX COMPAT WITH 0.6.6. Send the old style welcome message, which
// will force old clients to reload. Remove this once we're not
// concerned about people upgrading from a pre-0.7.0 release. Also,
// remove the clause in the client that ignores the welcome message
// (livedata_connection.js)
socket.send(JSON.stringify({server_id: "0"}));


@@ -1,20 +1,37 @@
// Meteor._localStorage is not an ideal name, but we can change it later.
if (window.localStorage) {
// Let's test to make sure that localStorage actually works. For example, in
// Safari with private browsing on, window.localStorage exists but actually
// trying to use it throws.
var key = '_localstorage_test_' + Random.id();
var retrieved;
try {
window.localStorage.setItem(key, key);
retrieved = window.localStorage.getItem(key);
window.localStorage.removeItem(key);
} catch (e) {
// ... ignore
}
if (key === retrieved) {
Meteor._localStorage = {
getItem: function (key) {
return window.localStorage.getItem(key);
},
setItem: function (key, value) {
window.localStorage.setItem(key, value);
},
removeItem: function (key) {
window.localStorage.removeItem(key);
}
};
}
}
// XXX eliminate dependency on jQuery, detect browsers ourselves
// Else, if we are on IE, which support userData
if (!Meteor._localStorage && $.browser.msie) {
var userdata = document.createElement('span'); // could be anything
userdata.style.behavior = 'url("#default#userData")';
userdata.id = 'localstorage-helper';
@@ -40,7 +57,9 @@ else if ($.browser.msie) { // If we are on IE, which support userData
return userdata.getAttribute(key);
}
};
}
if (!Meteor._localStorage) {
Meteor._debug(
"You are running a browser with no localStorage or userData "
+ "support. Logging in from one tab will not cause another "


@@ -5,6 +5,7 @@ Package.describe({
Package.on_use(function (api) {
api.use('jquery', 'client'); // XXX only used for browser detection. remove.
api.use('random', 'client');
api.add_files('localstorage.js', 'client');
});


@@ -51,7 +51,7 @@ var META_COLOR = 'blue';
// XXX package
var RESTRICTED_KEYS = ['time', 'timeInexact', 'level', 'file', 'line',
'program', 'originApp', 'satellite', 'stderr'];
var FORMATTED_KEYS = RESTRICTED_KEYS.concat(['app', 'message']);
@@ -202,6 +202,7 @@ Log.format = function (obj, options) {
var originApp = obj.originApp;
var message = obj.message || '';
var program = obj.program || '';
var satellite = obj.satellite;
var stderr = obj.stderr || '';
_.each(FORMATTED_KEYS, function(key) {
@@ -239,6 +240,9 @@ Log.format = function (obj, options) {
['(', (program ? program + ':' : ''), file, ':', lineNumber, ') '].join('')
: '';
if (satellite)
sourceInfo += ['[', satellite, ']'].join('');
var stderrIndicator = stderr ? '(STDERR) ' : '';
var metaPrefix = [


@@ -92,7 +92,8 @@ Tinytest.add("logging - log", function (test) {
test.throws(function () {
log({level: 'not the right level'});
});
_.each(['file', 'line', 'program', 'originApp'], function (restrictedKey) {
_.each(['file', 'line', 'program', 'originApp', 'satellite'],
function (restrictedKey) {
test.throws(function () {
var obj = {};
obj[restrictedKey] = 'usage of restricted key';


@@ -0,0 +1,120 @@
// Temporary workaround for https://github.com/joyent/node/issues/6506
// Our fix involves replicating a bunch of functions in order to change
// a single line.
var PATCH_VERSIONS = ['v0.10.22', 'v0.10.23', 'v0.10.24'];
if (!_.contains(PATCH_VERSIONS, process.version)) {
if (!process.env.DISABLE_WEBSOCKETS) {
console.error("This version of Meteor contains a patch for a bug in Node v0.10.");
console.error("The patch is against only versions 0.10.22 through 0.10.24.");
console.error("You are using version " + process.version + " instead, so we cannot apply the patch.");
console.error("To mitigate the most common effect of the bug, websockets will be disabled.");
console.error("To enable websockets, use Node v0.10.22 through v0.10.24, or upgrade to a later version of Meteor (if available).");
process.env.DISABLE_WEBSOCKETS = 't';
}
} else {
// This code is all copied from Node's lib/_stream_writable.js, git tag
// v0.10.22, with one change (see "BUGFIX").
var Writable = Npm.require('_stream_writable');
var Duplex = Npm.require('_stream_duplex');
Writable.prototype.write = function(chunk, encoding, cb) {
var state = this._writableState;
var ret = false;
if (typeof encoding === 'function') {
cb = encoding;
encoding = null;
}
if (Buffer.isBuffer(chunk))
encoding = 'buffer';
else if (!encoding)
encoding = state.defaultEncoding;
if (typeof cb !== 'function')
cb = function() {};
if (state.ended)
writeAfterEnd(this, state, cb);
else if (validChunk(this, state, chunk, cb))
ret = writeOrBuffer(this, state, chunk, encoding, cb);
return ret;
};
// Duplex doesn't directly inherit from Writable: it copies over this function
// explicitly. So we have to do it too.
Duplex.prototype.write = Writable.prototype.write;
function writeAfterEnd(stream, state, cb) {
var er = new Error('write after end');
// TODO: defer error events consistently everywhere, not just the cb
stream.emit('error', er);
process.nextTick(function() {
cb(er);
});
}
function validChunk(stream, state, chunk, cb) {
var valid = true;
if (!Buffer.isBuffer(chunk) &&
'string' !== typeof chunk &&
chunk !== null &&
chunk !== undefined &&
!state.objectMode) {
var er = new TypeError('Invalid non-string/buffer chunk');
stream.emit('error', er);
process.nextTick(function() {
cb(er);
});
valid = false;
}
return valid;
}
function writeOrBuffer(stream, state, chunk, encoding, cb) {
chunk = decodeChunk(state, chunk, encoding);
if (Buffer.isBuffer(chunk))
encoding = 'buffer';
var len = state.objectMode ? 1 : chunk.length;
state.length += len;
var ret = state.length < state.highWaterMark;
// This next line is the BUGFIX:
state.needDrain = state.needDrain || !ret;
if (state.writing)
state.buffer.push(new WriteReq(chunk, encoding, cb));
else
doWrite(stream, state, len, chunk, encoding, cb);
return ret;
}
function decodeChunk(state, chunk, encoding) {
if (!state.objectMode &&
state.decodeStrings !== false &&
typeof chunk === 'string') {
chunk = new Buffer(chunk, encoding);
}
return chunk;
}
function WriteReq(chunk, encoding, cb) {
this.chunk = chunk;
this.encoding = encoding;
this.callback = cb;
}
function doWrite(stream, state, len, chunk, encoding, cb) {
state.writelen = len;
state.writecb = cb;
state.writing = true;
state.sync = true;
stream._write(chunk, encoding, state.onwrite);
state.sync = false;
}
}
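The one-line BUGFIX above (`state.needDrain = state.needDrain || !ret;` instead of the original `state.needDrain = !ret;`) can be illustrated with a minimal model of the flag logic. This is a simplified sketch, not the real stream internals: `write`, `complete`, and `shouldEmitDrain` stand in for `writeOrBuffer`, the write callback, and the deferred `'drain'` emission.

```javascript
// Minimal model of the needDrain bookkeeping from _stream_writable.js.
// `patched` selects between the buggy assignment (needDrain = !ret) and
// the fixed one (needDrain = needDrain || !ret).
function makeState(highWaterMark) {
  return { length: 0, needDrain: false, highWaterMark: highWaterMark };
}
function write(state, len, patched) {
  state.length += len;
  var ret = state.length < state.highWaterMark; // false => caller should wait for 'drain'
  state.needDrain = patched ? (state.needDrain || !ret) : !ret;
  return ret;
}
function complete(state, len) { state.length -= len; }
// A deferred 'drain' event only fires if the flag survived until the
// buffer emptied.
function shouldEmitDrain(state) {
  return state.length === 0 && state.needDrain;
}
```

The failure mode: a large write returns false (its caller waits for `'drain'`), the write completes, and then a small unrelated write sneaks in before the drain check runs. The buggy assignment clears the flag, so the waiting caller hangs — which is how the bug surfaced as stuck websockets.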


@@ -15,6 +15,9 @@ Package.on_use(function (api) {
api.export('Meteor');
// Workaround for https://github.com/joyent/node/issues/6506
api.add_files('node-issue-6506-workaround.js', 'server');
api.add_files('client_environment.js', 'client');
api.add_files('server_environment.js', 'server');
api.add_files('helpers.js', ['client', 'server']);


@@ -1,7 +1,7 @@
{
"dependencies": {
"clean-css": {
"version": "1.1.2",
"version": "2.0.2",
"dependencies": {
"commander": {
"version": "2.0.0"
@@ -9,16 +9,16 @@
}
},
"uglify-js": {
"from": "https://github.com/meteor/UglifyJS2/tarball/bb0a762d12d2ecd058b9d7b57f16b4c289378d9c",
"version": "2.4.7",
"dependencies": {
"async": {
"version": "0.2.9"
},
"source-map": {
"version": "0.1.30",
"version": "0.1.31",
"dependencies": {
"amdefine": {
"version": "0.0.8"
"version": "0.1.0"
}
}
},


@@ -1,2 +1,8 @@
CleanCSSProcess = Npm.require('clean-css').process;
var CleanCss = Npm.require('clean-css');
CleanCSSProcess = function (source, options) {
var instance = new CleanCss(options);
return instance.minify(source);
};
UglifyJSMinify = Npm.require('uglify-js').minify;
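The shim above exists because clean-css 2.x replaced the old `process` function with a constructor exposing `minify`. The adapter pattern can be shown self-contained; `StubCss` below is a stand-in for the real clean-css constructor (it just collapses whitespace), so nothing here reflects the actual minifier's behavior.

```javascript
// StubCss plays the role of the clean-css 2.x constructor so this sketch
// runs without the npm package. It is NOT a real minifier.
function StubCss(options) { this.options = options || {}; }
StubCss.prototype.minify = function (source) {
  // Toy transformation standing in for minification.
  return source.replace(/\s+/g, ' ').trim();
};

// Same shape as the shim in the diff: keep the old functional entry point
// by instantiating the new class per call.
var CleanCSSProcess = function (source, options) {
  var instance = new StubCss(options);
  return instance.minify(source);
};
```

Callers written against the 1.x `process(source, options)` signature keep working unchanged.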


@@ -4,9 +4,8 @@ Package.describe({
});
Npm.depends({
"clean-css": "1.1.2",
// Fork of 2.4.0 fixing https://github.com/mishoo/UglifyJS2/pull/308
"uglify-js": "https://github.com/meteor/UglifyJS2/tarball/bb0a762d12d2ecd058b9d7b57f16b4c289378d9c"
"clean-css": "2.0.2",
"uglify-js": "2.4.7"
});
Package.on_use(function (api) {


@@ -4,12 +4,25 @@ var isArray = function (x) {
return _.isArray(x) && !EJSON.isBinary(x);
};
// If x is an array, true if f(e) is true for some e in x
// (but never try f(x) directly)
// Otherwise, true if f(x) is true.
//
// Use this in cases where f(Array) should never be true...
// for example, equality comparisons to non-arrays,
// ordering comparisons (which should always be false if either side
// is an array), regexps (need string), mod (needs number)...
// XXX ensure comparisons are always false if LHS is an array
// XXX ensure comparisons among different types are false
var _anyIfArray = function (x, f) {
if (isArray(x))
return _.any(x, f);
return f(x);
};
// True if f(x) is true, or x is an array and f(e) is true for some e in x.
//
// Use this for most operators where an array could satisfy the predicate.
var _anyIfArrayPlus = function (x, f) {
if (f(x))
return true;

View File

@@ -1,7 +1,7 @@
{
"dependencies": {
"mongodb": {
"from": "https://github.com/meteor/node-mongodb-native/tarball/779bbac916a751f305d84c727a6cc7dfddab7924",
"version": "https://github.com/meteor/node-mongodb-native/tarball/779bbac916a751f305d84c727a6cc7dfddab7924",
"dependencies": {
"bson": {
"version": "0.2.2"


@@ -488,6 +488,12 @@ Meteor.Collection.prototype._dropIndex = function (index) {
throw new Error("Can only call _dropIndex on server collections");
self._collection._dropIndex(index);
};
Meteor.Collection.prototype._dropCollection = function () {
var self = this;
if (!self._collection.dropCollection)
throw new Error("Can only call _dropCollection on server collections");
self._collection.dropCollection();
};
Meteor.Collection.prototype._createCappedCollection = function (byteSize) {
var self = this;
if (!self._collection._createCappedCollection)


@@ -15,6 +15,9 @@ _.extend(DocFetcher.prototype, {
// If you make multiple calls to fetch() with the same cacheKey (a string),
// DocFetcher may assume that they all return the same document. (It does
// not check to see if collectionName/id match.)
//
// You may assume that callback is never called synchronously (and in fact
// OplogObserveDriver does so).
fetch: function (collectionName, id, cacheKey, callback) {
var self = this;


@@ -280,7 +280,7 @@ MongoConnection.prototype._insert = function (collection_name, document,
var write = self._maybeBeginWrite();
var refresh = function () {
Meteor.refresh({ collection: collection_name, id: document._id });
Meteor.refresh({collection: collection_name, id: document._id });
};
callback = bindEnvironmentForWrite(writeCallback(write, refresh, callback));
try {
@@ -341,6 +341,25 @@ MongoConnection.prototype._remove = function (collection_name, selector,
}
};
MongoConnection.prototype._dropCollection = function (collectionName, cb) {
var self = this;
var write = self._maybeBeginWrite();
var refresh = function () {
Meteor.refresh({collection: collectionName, id: null,
dropCollection: true});
};
cb = bindEnvironmentForWrite(writeCallback(write, refresh, cb));
try {
var collection = self._getCollection(collectionName);
collection.drop(cb);
} catch (e) {
write.committed();
throw e;
}
};
MongoConnection.prototype._update = function (collection_name, selector, mod,
options, callback) {
var self = this;
@@ -536,7 +555,7 @@ var simulateUpsertWithInsertedId = function (collection, selector, mod,
doUpdate();
};
_.each(["insert", "update", "remove"], function (method) {
_.each(["insert", "update", "remove", "dropCollection"], function (method) {
MongoConnection.prototype[method] = function (/* arguments */) {
var self = this;
return Meteor._wrapAsync(self["_" + method]).apply(self, arguments);
@@ -879,7 +898,7 @@ MongoConnection.prototype.tail = function (cursorDescription, docCallback) {
var stopped = false;
var lastTS = undefined;
Meteor.defer(function () {
var loop = function () {
while (true) {
if (stopped)
return;
@@ -911,9 +930,16 @@ MongoConnection.prototype.tail = function (cursorDescription, docCallback) {
cursorDescription.collectionName,
newSelector,
cursorDescription.options));
// Mongo failover takes many seconds. Retry in a bit. (Without this
// setTimeout, we peg the CPU at 100% and never notice the actual
// failover.)
Meteor.setTimeout(loop, 100);
break;
}
}
});
};
Meteor.defer(loop);
return {
stop: function () {
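The change above converts the tail loop from a bare `while (true)` into a named `loop` function that re-arms itself with `Meteor.setTimeout(loop, 100)` when the cursor dies, so failover doesn't spin the CPU. A hedged sketch of that pattern, with the scheduler injected (`setTimeout` in real code) so it can be driven synchronously:

```javascript
// `pollOnce` returns false when the tailable cursor has died and we
// should back off; `schedule(fn, ms)` is the injected timer.
function makeTailer(pollOnce, schedule) {
  var stopped = false;
  var loop = function () {
    while (true) {
      if (stopped) return;
      if (!pollOnce()) {
        // Cursor died (e.g. Mongo failover): retry after a delay instead
        // of immediately, so we don't busy-spin while failover completes.
        schedule(loop, 100);
        break;
      }
    }
  };
  schedule(loop, 0);
  return { stop: function () { stopped = true; } };
}
```

The key property is that a dead cursor yields exactly one rescheduled callback rather than a tight loop, and `stop()` makes the next wakeup a no-op.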
@@ -973,9 +999,8 @@ MongoConnection.prototype._observeChanges = function (
_testOnlyPollCallback: callbacks._testOnlyPollCallback
});
// This field is only set for the first ObserveHandle in an
// ObserveMultiplexer. It is only there for use tests.
observeHandle._observeDriver = observeDriver;
// This field is only set for use in tests.
multiplexer._observeDriver = observeDriver;
}
// Blocks until the initial adds have been sent.
@@ -993,10 +1018,6 @@ MongoConnection.prototype._observeChanges = function (
listenAll = function (cursorDescription, listenCallback) {
var listeners = [];
forEachTrigger(cursorDescription, function (trigger) {
// The "drop collection" event is used by the oplog crossbar, not the
// invalidation crossbar.
if (trigger.dropCollection)
return;
listeners.push(DDPServer._InvalidationCrossbar.listen(
trigger, listenCallback));
});
@@ -1018,7 +1039,7 @@ forEachTrigger = function (cursorDescription, triggerCallback) {
_.each(specificIds, function (id) {
triggerCallback(_.extend({id: id}, key));
});
triggerCallback(_.extend({dropCollection: true}, key));
triggerCallback(_.extend({dropCollection: true, id: null}, key));
} else {
triggerCallback(key);
}


@@ -23,6 +23,15 @@ if (Meteor.isServer) {
});
}
var runInFence = function (f) {
if (Meteor.isClient) {
f();
} else {
var fence = new DDPServer._WriteFence;
DDPServer._CurrentWriteFence.withValue(fence, f);
fence.armAndWait();
}
};
// Helpers for upsert tests
@@ -382,13 +391,9 @@ Tinytest.addAsync("mongo-livedata - fuzz test, " + idGeneration, function(test,
}
});
// XXX What if there are multiple observe handles on the ObserveMultiplexer?
// There shouldn't be because the collection has a name unique to this
// run.
if (Meteor.isServer) {
// For now, has to be polling (not oplog).
test.isTrue(obs._observeDriver);
test.isTrue(obs._observeDriver._suspendPolling);
// For now, has to be polling (not oplog) because it is an ordered observe.
test.isTrue(obs._multiplexer._observeDriver._suspendPolling);
}
var step = 0;
@@ -423,7 +428,7 @@ Tinytest.addAsync("mongo-livedata - fuzz test, " + idGeneration, function(test,
finishObserve(function () {
if (Meteor.isServer)
obs._observeDriver._suspendPolling();
obs._multiplexer._observeDriver._suspendPolling();
// Do a batch of 1-10 operations
var batch_count = rnd(10) + 1;
@@ -456,7 +461,7 @@ Tinytest.addAsync("mongo-livedata - fuzz test, " + idGeneration, function(test,
}
}
if (Meteor.isServer)
obs._observeDriver._resumePolling();
obs._multiplexer._observeDriver._resumePolling();
});
@@ -478,16 +483,6 @@ Tinytest.addAsync("mongo-livedata - fuzz test, " + idGeneration, function(test,
});
var runInFence = function (f) {
if (Meteor.isClient) {
f();
} else {
var fence = new DDPServer._WriteFence;
DDPServer._CurrentWriteFence.withValue(fence, f);
fence.armAndWait();
}
};
Tinytest.addAsync("mongo-livedata - scribbling, " + idGeneration, function (test, onComplete) {
var run = test.runId();
var coll;
@@ -1887,13 +1882,280 @@ Meteor.isServer && Tinytest.add("mongo-livedata - oplog - _disableOplog", functi
if (MongoInternals.defaultRemoteCollectionDriver().mongo._oplogHandle) {
var observeWithOplog = coll.find({x: 5})
.observeChanges({added: function () {}});
test.isTrue(observeWithOplog._observeDriver);
test.isTrue(observeWithOplog._observeDriver._usesOplog);
test.isTrue(observeWithOplog._multiplexer._observeDriver._usesOplog);
observeWithOplog.stop();
}
var observeWithoutOplog = coll.find({x: 6}, {_disableOplog: true})
.observeChanges({added: function () {}});
test.isTrue(observeWithoutOplog._observeDriver);
test.isFalse(observeWithoutOplog._observeDriver._usesOplog);
test.isFalse(observeWithoutOplog._multiplexer._observeDriver._usesOplog);
observeWithoutOplog.stop();
});
Meteor.isServer && Tinytest.add("mongo-livedata - oplog - include selector fields", function (test) {
var collName = "includeSelector" + Random.id();
var coll = new Meteor.Collection(collName);
var docId = coll.insert({a: 1, b: [3, 2], c: 'foo'});
test.isTrue(docId);
// Wait until we've processed the insert oplog entry. (If the insert shows up
// during the observeChanges, the bug in question is not consistently
// reproduced.) We don't have to do this for polling observe (eg
// --disable-oplog).
var oplog = MongoInternals.defaultRemoteCollectionDriver().mongo._oplogHandle;
oplog && oplog.waitUntilCaughtUp();
var output = [];
var handle = coll.find({a: 1, b: 2}, {fields: {c: 1}}).observeChanges({
added: function (id, fields) {
output.push(['added', id, fields]);
},
changed: function (id, fields) {
output.push(['changed', id, fields]);
},
removed: function (id) {
output.push(['removed', id]);
}
});
// Initially should match the document.
test.length(output, 1);
test.equal(output.shift(), ['added', docId, {c: 'foo'}]);
// Update in such a way that, if we only knew about the published field 'c'
// and the changed field 'b' (but not the field 'a'), we would think it didn't
// match any more. (This is a regression test for a bug that existed because
// we used to not use the shared projection in the initial query.)
runInFence(function () {
coll.update(docId, {$set: {'b.0': 2, c: 'bar'}});
});
test.length(output, 1);
test.equal(output.shift(), ['changed', docId, {c: 'bar'}]);
handle.stop();
});
Meteor.isServer && Tinytest.add("mongo-livedata - oplog - transform", function (test) {
var collName = "oplogTransform" + Random.id();
var coll = new Meteor.Collection(collName);
var docId = coll.insert({a: 25, x: {x: 5, y: 9}});
test.isTrue(docId);
// Wait until we've processed the insert oplog entry. (If the insert shows up
// during the observeChanges, the bug in question is not consistently
// reproduced.) We don't have to do this for polling observe (eg
// --disable-oplog).
var oplog = MongoInternals.defaultRemoteCollectionDriver().mongo._oplogHandle;
oplog && oplog.waitUntilCaughtUp();
var cursor = coll.find({}, {transform: function (doc) {
return doc.x;
}});
var changesOutput = [];
var changesHandle = cursor.observeChanges({
added: function (id, fields) {
changesOutput.push(['added', fields]);
}
});
// We should get untransformed fields via observeChanges.
test.length(changesOutput, 1);
test.equal(changesOutput.shift(), ['added', {a: 25, x: {x: 5, y: 9}}]);
changesHandle.stop();
var transformedOutput = [];
var transformedHandle = cursor.observe({
added: function (doc) {
transformedOutput.push(['added', doc]);
}
});
test.length(transformedOutput, 1);
test.equal(transformedOutput.shift(), ['added', {x: 5, y: 9}]);
transformedHandle.stop();
});
Meteor.isServer && Tinytest.add("mongo-livedata - oplog - drop collection", function (test) {
var collName = "dropCollection" + Random.id();
var coll = new Meteor.Collection(collName);
var doc1Id = coll.insert({a: 'foo', c: 1});
var doc2Id = coll.insert({b: 'bar'});
var doc3Id = coll.insert({a: 'foo', c: 2});
var tmp;
var output = [];
var handle = coll.find({a: 'foo'}).observeChanges({
added: function (id, fields) {
output.push(['added', id, fields]);
},
changed: function (id) {
output.push(['changed']);
},
removed: function (id) {
output.push(['removed', id]);
}
});
test.length(output, 2);
// make order consistent
if (output.length === 2 && output[0][1] === doc3Id) {
tmp = output[0];
output[0] = output[1];
output[1] = tmp;
}
test.equal(output.shift(), ['added', doc1Id, {a: 'foo', c: 1}]);
test.equal(output.shift(), ['added', doc3Id, {a: 'foo', c: 2}]);
// Wait until we've processed the insert oplog entry, so that we are in a
// steady state (and we don't see the dropped docs because we are FETCHING).
var oplog = MongoInternals.defaultRemoteCollectionDriver().mongo._oplogHandle;
oplog && oplog.waitUntilCaughtUp();
// Drop the collection. Should remove all docs.
runInFence(function () {
coll._dropCollection();
});
test.length(output, 2);
// make order consistent
if (output.length === 2 && output[0][1] === doc3Id) {
tmp = output[0];
output[0] = output[1];
output[1] = tmp;
}
test.equal(output.shift(), ['removed', doc1Id]);
test.equal(output.shift(), ['removed', doc3Id]);
// Put something back in.
var doc4Id;
runInFence(function () {
doc4Id = coll.insert({a: 'foo', c: 3});
});
test.length(output, 1);
test.equal(output.shift(), ['added', doc4Id, {a: 'foo', c: 3}]);
handle.stop();
});
var TestCustomType = function (head, tail) {
// use different field names on the object than in JSON, to ensure we are
// actually treating this as an opaque object.
this.myHead = head;
this.myTail = tail;
};
_.extend(TestCustomType.prototype, {
clone: function () {
return new TestCustomType(this.myHead, this.myTail);
},
equals: function (other) {
return other instanceof TestCustomType
&& EJSON.equals(this.myHead, other.myHead)
&& EJSON.equals(this.myTail, other.myTail);
},
typeName: function () {
return 'someCustomType';
},
toJSONValue: function () {
return {head: this.myHead, tail: this.myTail};
}
});
EJSON.addType('someCustomType', function (json) {
return new TestCustomType(json.head, json.tail);
});
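The custom-type plumbing this test relies on can be sketched with a tiny registry. This is illustrative only: real EJSON uses `$type`/`$value` keys on the wire, and the `EJSON$`-prefixed keys (visible later in the `'custom.EJSON$value.EJSONtail'` update) are the `$`-escaped form used when storing in Mongo; the `Pair` type below is hypothetical.

```javascript
// Map from typeName to a factory that revives the serialized form.
var customTypes = {};
function addType(name, factory) { customTypes[name] = factory; }

// Serialize a custom value into a tagged wrapper, and revive it back.
function toWire(value) {
  return { 'EJSON$type': value.typeName(), 'EJSON$value': value.toJSONValue() };
}
function fromWire(wire) {
  return customTypes[wire['EJSON$type']](wire['EJSON$value']);
}

// A toy custom type, analogous to TestCustomType: different field names
// on the object than in the serialized JSON, to keep it opaque.
function Pair(head, tail) { this.myHead = head; this.myTail = tail; }
Pair.prototype.typeName = function () { return 'pair'; };
Pair.prototype.toJSONValue = function () {
  return { head: this.myHead, tail: this.myTail };
};
addType('pair', function (json) { return new Pair(json.head, json.tail); });
```

Round-tripping through `toWire`/`fromWire` produces an equivalent instance, which is exactly what the observe callbacks in the test are checking across database writes.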
testAsyncMulti("mongo-livedata - oplog - update EJSON", [
function (test, expect) {
var self = this;
var collectionName = "ejson" + Random.id();
if (Meteor.isClient) {
Meteor.call('createInsecureCollection', collectionName);
Meteor.subscribe('c-' + collectionName);
}
self.collection = new Meteor.Collection(collectionName);
self.date = new Date;
self.objId = new Meteor.Collection.ObjectID;
self.id = self.collection.insert(
{d: self.date, oi: self.objId,
custom: new TestCustomType('a', 'b')},
expect(function (err, res) {
test.isFalse(err);
test.equal(self.id, res);
}));
},
function (test, expect) {
var self = this;
self.changes = [];
self.handle = self.collection.find({}).observeChanges({
added: function (id, fields) {
self.changes.push(['a', id, fields]);
},
changed: function (id, fields) {
self.changes.push(['c', id, fields]);
},
removed: function (id) {
self.changes.push(['r', id]);
}
});
test.length(self.changes, 1);
test.equal(self.changes.shift(),
['a', self.id,
{d: self.date, oi: self.objId,
custom: new TestCustomType('a', 'b')}]);
// First, replace the entire custom object.
// (runInFence is useful for the server, using expect() is useful for the
// client)
runInFence(function () {
self.collection.update(
self.id, {$set: {custom: new TestCustomType('a', 'c')}},
expect(function (err) {
test.isFalse(err);
}));
});
},
function (test, expect) {
var self = this;
test.length(self.changes, 1);
test.equal(self.changes.shift(),
['c', self.id, {custom: new TestCustomType('a', 'c')}]);
// Now, sneakily replace just a piece of it. Meteor won't do this, but
// perhaps you are accessing Mongo directly.
runInFence(function () {
self.collection.update(
self.id, {$set: {'custom.EJSON$value.EJSONtail': 'd'}},
expect(function (err) {
test.isFalse(err);
}));
});
},
function (test, expect) {
var self = this;
test.length(self.changes, 1);
test.equal(self.changes.shift(),
['c', self.id, {custom: new TestCustomType('a', 'd')}]);
// Update a date and an ObjectID too.
self.date2 = new Date(self.date.valueOf() + 1000);
self.objId2 = new Meteor.Collection.ObjectID;
runInFence(function () {
self.collection.update(
self.id, {$set: {d: self.date2, oi: self.objId2}},
expect(function (err) {
test.isFalse(err);
}));
});
},
function (test, expect) {
var self = this;
test.length(self.changes, 1);
test.equal(self.changes.shift(),
['c', self.id, {d: self.date2, oi: self.objId2}]);
self.handle.stop();
}
]);


@@ -2,7 +2,7 @@ var Fiber = Npm.require('fibers');
var Future = Npm.require('fibers/future');
var PHASE = {
INITIALIZING: 1,
QUERYING: 1,
FETCHING: 2,
STEADY: 3
};
@@ -14,7 +14,6 @@ var PHASE = {
// it by calling the stop() method.
OplogObserveDriver = function (options) {
var self = this;
self._usesOplog = true; // tests look at this
self._cursorDescription = options.cursorDescription;
@@ -27,9 +26,9 @@ OplogObserveDriver = function (options) {
self._stopHandles = [];
Package.facts && Package.facts.Facts.incrementServerFact(
"mongo-livedata", "oplog-observers", 1);
"mongo-livedata", "observe-drivers-oplog", 1);
self._phase = PHASE.INITIALIZING;
self._phase = PHASE.QUERYING;
self._published = new LocalCollection._IdMap;
var selector = self._cursorDescription.selector;
@@ -39,30 +38,31 @@ OplogObserveDriver = function (options) {
self._projectionFn = LocalCollection._compileProjection(projection);
// Projection function, result of combining important fields for selector and
// existing fields projection
var sharedProjection = LocalCollection._combineSelectorAndProjection(
self._sharedProjection = LocalCollection._combineSelectorAndProjection(
selector, projection);
self._sharedProjectionFn = LocalCollection._compileProjection(
sharedProjection);
self._sharedProjection);
self._needToFetch = new LocalCollection._IdMap;
self._currentlyFetching = null;
self._fetchGeneration = 0;
self._requeryWhenDoneThisQuery = false;
self._writesToCommitWhenWeReachSteady = [];
forEachTrigger(self._cursorDescription, function (trigger) {
self._stopHandles.push(self._mongoHandle._oplogHandle.onOplogEntry(
trigger, function (notification) {
var op = notification.op;
if (op.op === 'c') {
// XXX actually, drop collection needs to be handled by doing a
// re-query
self._published.forEach(function (fields, id) {
self._remove(id);
});
if (notification.dropCollection) {
// Note: this call is not allowed to block on anything (especially on
// waiting for oplog entries to catch up) because that will block
// onOplogEntry!
self._needToPollQuery();
} else {
// All other operators should be handled depending on phase
if (self._phase === PHASE.INITIALIZING)
self._handleOplogEntryInitializing(op);
if (self._phase === PHASE.QUERYING)
self._handleOplogEntryQuerying(op);
else
self._handleOplogEntrySteadyOrFetching(op);
}
@@ -82,21 +82,23 @@ OplogObserveDriver = function (options) {
var write = fence.beginWrite();
// This write cannot complete until we've caught up to "this point" in the
// oplog, and then made it back to the steady state.
Meteor.defer(complete);
self._mongoHandle._oplogHandle.waitUntilCaughtUp();
if (self._stopped) {
// We're stopped, so just immediately commit.
write.committed();
} else if (self._phase === PHASE.STEADY) {
// Make sure that all of the callbacks have made it through the
// multiplexer and been delivered to ObserveHandles before committing
// writes.
self._multiplexer.onFlush(function () {
Meteor.defer(function () {
self._mongoHandle._oplogHandle.waitUntilCaughtUp();
if (self._stopped) {
// We're stopped, so just immediately commit.
write.committed();
});
} else {
self._writesToCommitWhenWeReachSteady.push(write);
}
} else if (self._phase === PHASE.STEADY) {
// Make sure that all of the callbacks have made it through the
// multiplexer and been delivered to ObserveHandles before committing
// writes.
self._multiplexer.onFlush(function () {
write.committed();
});
} else {
self._writesToCommitWhenWeReachSteady.push(write);
}
});
complete();
}
));
@@ -125,11 +127,18 @@ _.extend(OplogObserveDriver.prototype, {
self._published.remove(id);
self._multiplexer.removed(id);
},
_handleDoc: function (id, newDoc) {
_handleDoc: function (id, newDoc, mustMatchNow) {
var self = this;
newDoc = _.clone(newDoc);
var matchesNow = newDoc && self._selectorFn(newDoc);
if (mustMatchNow && !matchesNow) {
throw Error("expected " + EJSON.stringify(newDoc) + " to match "
+ EJSON.stringify(self._cursorDescription));
}
var matchedBefore = self._published.has(id);
if (matchesNow && !matchedBefore) {
self._add(newDoc);
} else if (matchedBefore && !matchesNow) {
@@ -154,6 +163,7 @@ _.extend(OplogObserveDriver.prototype, {
throw new Error("phase in fetchModifiedDocuments: " + self._phase);
self._currentlyFetching = self._needToFetch;
var thisGeneration = ++self._fetchGeneration;
self._needToFetch = new LocalCollection._IdMap;
var waiting = 0;
var anyError = null;
@@ -168,17 +178,27 @@ _.extend(OplogObserveDriver.prototype, {
if (err) {
if (!anyError)
anyError = err;
} else if (!self._stopped) {
} else if (!self._stopped && self._phase === PHASE.FETCHING
&& self._fetchGeneration === thisGeneration) {
// We re-check the generation in case we've had an explicit
// _pollQuery call which should effectively cancel this round of
// fetches. (_pollQuery increments the generation.)
self._handleDoc(id, doc);
}
waiting--;
if (waiting == 0)
// Because fetch() never calls its callback synchronously, this is
// safe (ie, we won't call fut.return() before the forEach is done).
if (waiting === 0)
fut.return();
});
});
fut.wait();
// XXX do this even if we've switched to PHASE.QUERYING?
if (anyError)
throw anyError;
// Exit now if we've had a _pollQuery call.
if (self._phase === PHASE.QUERYING)
return;
self._currentlyFetching = null;
}
self._beSteady();
@@ -194,7 +214,7 @@ _.extend(OplogObserveDriver.prototype, {
});
});
},
_handleOplogEntryInitializing: function (op) {
_handleOplogEntryQuerying: function (op) {
var self = this;
self._needToFetch.set(idForOp(op), op.ts.toString());
},
@@ -226,18 +246,25 @@ _.extend(OplogObserveDriver.prototype, {
// replacement (in which case we can just directly re-evaluate the
// selector)?
var isReplace = !_.has(op.o, '$set') && !_.has(op.o, '$unset');
// If this modifier modifies something inside an EJSON custom type (ie,
// anything with EJSON$), then we can't try to use
// LocalCollection._modify, since that just mutates the EJSON encoding,
// not the actual object.
var canDirectlyModifyDoc =
!isReplace && modifierCanBeDirectlyApplied(op.o);
if (isReplace) {
self._handleDoc(id, _.extend({_id: id}, op.o));
} else if (self._published.has(id)) {
} else if (self._published.has(id) && canDirectlyModifyDoc) {
// Oh great, we actually know what the document is, so we can apply
// this directly.
var newDoc = EJSON.clone(self._published.get(id));
newDoc._id = id;
LocalCollection._modify(newDoc, op.o);
self._handleDoc(id, self._sharedProjectionFn(newDoc));
} else if (LocalCollection._canSelectorBecomeTrueByModifier(
self._cursorDescription.selector, op.o)) {
} else if (!canDirectlyModifyDoc ||
LocalCollection._canSelectorBecomeTrueByModifier(
self._cursorDescription.selector, op.o)) {
self._needToFetch.set(id, op.ts.toString());
if (self._phase === PHASE.STEADY)
self._fetchModifiedDocuments();
@@ -251,7 +278,7 @@ _.extend(OplogObserveDriver.prototype, {
if (self._stopped)
throw new Error("oplog stopped surprisingly early");
var initialCursor = new Cursor(self._mongoHandle, self._cursorDescription);
var initialCursor = self._cursorForQuery();
initialCursor.forEach(function (initialDoc) {
self._add(initialDoc);
});
@@ -261,21 +288,143 @@ _.extend(OplogObserveDriver.prototype, {
// stop() to be called.)
self._multiplexer.ready();
self._doneQuerying();
},
// In various circumstances, we may just want to stop processing the oplog and
// re-run the initial query, just as if we were a PollingObserveDriver.
//
// This function may not block, because it is called from an oplog entry
// handler.
//
// XXX We should call this when we detect that we've been in FETCHING for "too
// long".
//
// XXX We should call this when we detect Mongo failover (since that might
// mean that some of the oplog entries we have processed have been rolled
// back). The Node Mongo driver is in the middle of a bunch of huge
// refactorings, including the way that it notifies you when primary
// changes. Will put off implementing this until driver 1.4 is out.
_pollQuery: function () {
var self = this;
if (self._stopped)
return;
// Yay, we get to forget about all the things we thought we had to fetch.
self._needToFetch = new LocalCollection._IdMap;
self._currentlyFetching = null;
++self._fetchGeneration; // ignore any in-flight fetches
self._phase = PHASE.QUERYING;
// Defer so that we don't block.
Meteor.defer(function () {
// subtle note: _published does not contain _id fields, but newResults
// does
var newResults = new LocalCollection._IdMap;
var cursor = self._cursorForQuery();
cursor.forEach(function (doc) {
newResults.set(doc._id, doc);
});
self._publishNewResults(newResults);
self._doneQuerying();
});
},
// Transitions to QUERYING and runs another query, or (if already in QUERYING)
// ensures that we will query again later.
//
// This function may not block, because it is called from an oplog entry
// handler.
_needToPollQuery: function () {
var self = this;
if (self._stopped)
return;
// If we're not already in the middle of a query, we can query now (possibly
// pausing FETCHING).
if (self._phase !== PHASE.QUERYING) {
self._pollQuery();
return;
}
// We're currently in QUERYING. Set a flag to ensure that we run another
// query when we're done.
self._requeryWhenDoneThisQuery = true;
},
_doneQuerying: function () {
var self = this;
if (self._stopped)
return;
self._mongoHandle._oplogHandle.waitUntilCaughtUp();
if (self._stopped)
return;
if (self._phase !== PHASE.INITIALIZING)
if (self._phase !== PHASE.QUERYING)
throw Error("Phase unexpectedly " + self._phase);
if (self._needToFetch.empty()) {
if (self._requeryWhenDoneThisQuery) {
self._requeryWhenDoneThisQuery = false;
self._pollQuery();
} else if (self._needToFetch.empty()) {
self._beSteady();
} else {
self._fetchModifiedDocuments();
}
},
_cursorForQuery: function () {
var self = this;
// The query we run is almost the same as the cursor we are observing, with
// a few changes. We need to read all the fields that are relevant to the
// selector, not just the fields we are going to publish (that's the
// "shared" projection). And we don't want to apply any transform in the
// cursor, because observeChanges shouldn't use the transform.
var options = _.clone(self._cursorDescription.options);
options.fields = self._sharedProjection;
delete options.transform;
// We are NOT deep cloning fields or selector here, which should be OK.
var description = new CursorDescription(
self._cursorDescription.collectionName,
self._cursorDescription.selector,
options);
return new Cursor(self._mongoHandle, description);
},
// Replace self._published with newResults (both are IdMaps), invoking observe
// callbacks on the multiplexer.
//
// XXX This is very similar to LocalCollection._diffQueryUnorderedChanges. We
// should really: (a) unify IdMap and OrderedDict into Unordered/OrderedDict;
// (b) rewrite diff.js to use these classes instead of arrays and objects.
_publishNewResults: function (newResults) {
var self = this;
// First remove anything that's gone. Be careful not to modify
// self._published while iterating over it.
var idsToRemove = [];
self._published.forEach(function (doc, id) {
if (!newResults.has(id))
idsToRemove.push(id);
});
_.each(idsToRemove, function (id) {
self._remove(id);
});
// Now do adds and changes.
newResults.forEach(function (doc, id) {
// "true" here means to throw if we think this doc doesn't match the
// selector.
self._handleDoc(id, doc, true);
});
},
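A standalone sketch of the two passes `_publishNewResults` performs, using ES `Map` in place of Meteor's IdMap (the names here are illustrative, not the real multiplexer API):

```javascript
// First collect ids to remove so we never mutate the map while iterating
// over it; then walk the new results doing adds and changes.
function diffResults(published, newResults, callbacks) {
  var idsToRemove = [];
  published.forEach(function (doc, id) {
    if (!newResults.has(id))
      idsToRemove.push(id);
  });
  idsToRemove.forEach(function (id) {
    published.delete(id);
    callbacks.removed(id);
  });
  newResults.forEach(function (doc, id) {
    if (published.has(id))
      callbacks.changed(id, doc);
    else
      callbacks.added(id, doc);
    published.set(id, doc);
  });
}
```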
// This stop function is invoked from the onStop of the ObserveMultiplexer, so
// it shouldn't actually be possible to call it until the multiplexer is
// ready.
@@ -306,7 +455,7 @@ _.extend(OplogObserveDriver.prototype, {
self._listenersHandle = null;
Package.facts && Package.facts.Facts.incrementServerFact(
-"mongo-livedata", "oplog-observers", -1);
+"mongo-livedata", "observe-drivers-oplog", -1);
}
});
@@ -358,5 +507,12 @@ OplogObserveDriver.cursorSupported = function (cursorDescription) {
});
};
var modifierCanBeDirectlyApplied = function (modifier) {
return _.all(modifier, function (fields, operation) {
return _.all(fields, function (value, field) {
return !/EJSON\$/.test(field);
});
});
};
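The check above can be written without underscore; a plain-ES5 sketch of the same rule (a modifier like `{$set: {...}}` can be applied directly only if no field name contains the EJSON escape marker):

```javascript
// Equivalent of the _.all/_.all nesting above, using Object.keys/every.
function modifierCanBeDirectlyApplied(modifier) {
  return Object.keys(modifier).every(function (operation) {
    return Object.keys(modifier[operation]).every(function (field) {
      return !/EJSON\$/.test(field);
    });
  });
}
```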
MongoTest.OplogObserveDriver = OplogObserveDriver;


@@ -134,22 +134,15 @@ _.extend(OplogHandle.prototype, {
return;
}
+// Insert the future into our list. Almost always, this will be at the end,
+// but it's conceivable that if we fail over from one primary to another,
+// the oplog entries we see will go backwards.
var insertAfter = self._catchingUpFutures.length;
while (insertAfter - 1 > 0
&& self._catchingUpFutures[insertAfter - 1].ts.greaterThan(ts)) {
insertAfter--;
}
-// XXX this can occur if we fail over from one primary to another. so this
-// check needs to be removed before we merge oplog. that said, it has been
-// helpful so far at proving that we are properly using poolSize 1. Also, we
-// could keep something like it if we could actually detect failover; see
-// https://github.com/mongodb/node-mongodb-native/issues/1120
-if (insertAfter !== self._catchingUpFutures.length) {
-throw Error("found misordered oplog: "
-+ showTS(_.last(self._catchingUpFutures).ts) + " vs "
-+ showTS(ts));
-}
var f = new Future;
self._catchingUpFutures.splice(insertAfter, 0, {ts: ts, future: f});
f.wait();
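The waiter bookkeeping above can be sketched standalone, with plain numbers standing in for oplog timestamps and callbacks standing in for futures (a hypothetical simplification):

```javascript
// Keep waiters sorted by timestamp. Almost always a new waiter lands at the
// end; scan backwards in case entries went backwards (e.g. after failover).
// Note the `insertAfter - 1 > 0` guard mirrors the original: element 0 is
// never displaced.
function insertWaiter(waiters, ts, callback) {
  var insertAfter = waiters.length;
  while (insertAfter - 1 > 0 && waiters[insertAfter - 1].ts > ts)
    insertAfter--;
  waiters.splice(insertAfter, 0, {ts: ts, callback: callback});
}

// Resolve every waiter whose timestamp has been caught up to.
function wakeUpTo(waiters, processedTs) {
  while (waiters.length && waiters[0].ts <= processedTs)
    waiters.shift().callback();
}
```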


@@ -72,7 +72,7 @@ PollingObserveDriver = function (options) {
self._unthrottledEnsurePollIsScheduled();
Package.facts && Package.facts.Facts.incrementServerFact(
-"mongo-livedata", "mongo-pollsters", 1);
+"mongo-livedata", "observe-drivers-polling", 1);
};
_.extend(PollingObserveDriver.prototype, {
@@ -174,6 +174,6 @@ _.extend(PollingObserveDriver.prototype, {
self._stopped = true;
_.each(self._stopCallbacks, function (c) { c(); });
Package.facts && Package.facts.Facts.incrementServerFact(
-"mongo-livedata", "mongo-pollsters", -1);
+"mongo-livedata", "observe-drivers-polling", -1);
}
});


@@ -10,7 +10,8 @@ _.extend(MongoInternals.RemoteCollectionDriver.prototype, {
var ret = {};
_.each(
['find', 'findOne', 'insert', 'update', 'upsert',
-'remove', '_ensureIndex', '_dropIndex', '_createCappedCollection'],
+'remove', '_ensureIndex', '_dropIndex', '_createCappedCollection',
+'dropCollection'],
function (m) {
ret[m] = _.bind(self.mongo[m], self.mongo, name);
});

packages/retry/.gitignore (new file)

@@ -0,0 +1 @@
.build*

packages/retry/package.js (new file)

@@ -0,0 +1,10 @@
Package.describe({
summary: "Retry logic with exponential backoff",
internal: true
});
Package.on_use(function (api) {
api.use('underscore', ['client', 'server']);
api.export('Retry');
api.add_files('retry.js', ['client', 'server']);
});


@@ -1,24 +1,23 @@
+// Retry logic with an exponential backoff.
+//
+// options:
+//  baseTimeout: time for initial reconnect attempt (ms).
+//  exponent: exponential factor to increase timeout each attempt.
+//  maxTimeout: maximum time between retries (ms).
+//  minCount: how many times to reconnect "instantly".
+//  minTimeout: time to wait for the first `minCount` retries (ms).
+//  fuzz: factor to randomize retry times by (to avoid retry storms).
Retry = function (options) {
var self = this;
_.extend(self, _.defaults(_.clone(options || {}), {
-// time for initial reconnect attempt.
-baseTimeout: 1000,
-// exponential factor to increase timeout each attempt.
+baseTimeout: 1000, // 1 second
exponent: 2.2,
-// maximum time between reconnects. keep this intentionally
-// high-ish to ensure a server can recover from a failure caused
-// by load
+// The default is high-ish to ensure a server can recover from a
+// failure caused by load.
maxTimeout: 5 * 60000, // 5 minutes
-// time to wait for the first 2 retries. this helps page reload
-// speed during dev mode restarts, but doesn't hurt prod too
-// much (due to CONNECT_TIMEOUT)
minTimeout: 10,
-// how many times to try to reconnect 'instantly'
minCount: 2,
-// fuzz factor to randomize reconnect times by. avoid reconnect
-// storms.
fuzz: 0.5 // +- 25%
}));
self.retryTimer = null;
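Given the defaults above, the per-attempt timeout can be sketched as follows. `retryTimeout` here is a hypothetical stand-in written from the documented options, not the package's actual private method:

```javascript
// count is the zero-based attempt number. The first minCount attempts use
// the short minTimeout; after that, exponential backoff capped at
// maxTimeout, then fuzzed (fuzz 0.5 => +- 25%) to avoid retry storms.
function retryTimeout(count, opts) {
  if (count < opts.minCount)
    return opts.minTimeout;
  var timeout = Math.min(opts.maxTimeout,
                         opts.baseTimeout * Math.pow(opts.exponent, count));
  timeout = timeout * ((Math.random() * opts.fuzz) + (1 - opts.fuzz / 2));
  return timeout;
}
```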


@@ -132,7 +132,7 @@ _.extend(TestCaseResults.prototype, {
this.equal(actual[i], expected[i]);
}
} else {
-matched = _.isEqual(expected, actual);
+matched = EJSON.equals(expected, actual);
}
if (matched === !!not) {
@@ -309,7 +309,16 @@ _.extend(TestCase.prototype, {
return true;
};
-var results = new TestCaseResults(self, onEvent,
+var wrappedOnEvent = function (e) {
+// If this trace prints, it means you ran some test.* function after the
+// test finished! Another symptom will be that the test will display as
+// "waiting" even when it counts as passed or failed.
+if (completed)
+console.trace("event after complete!");
+return onEvent(e);
+};
+var results = new TestCaseResults(self, wrappedOnEvent,
function (e) {
if (markComplete())
onException(e);


@@ -0,0 +1,3 @@
._meteor_detect_css {
width: 0px;
}


@@ -9,6 +9,7 @@ Npm.depends({connect: "2.9.0",
Package.on_use(function (api) {
api.use(['logging', 'underscore', 'routepolicy'], 'server');
api.use(['underscore'], 'client');
api.use(['application-configuration', 'follower-livedata'], {
unordered: true
});
@@ -18,5 +19,8 @@ Package.on_use(function (api) {
// way on browser-policy here, but we use it when it is loaded, and it can be
// loaded after webapp.
api.export(['WebApp', 'main', 'WebAppInternals'], 'server');
api.export(['WebApp'], 'client');
api.add_files('webapp_server.js', 'server');
api.add_files('webapp_client.js', 'client');
api.add_files('css_detect.css', 'client');
});


@@ -0,0 +1,12 @@
WebApp = {
_isCssLoaded: function () {
return _.find(document.styleSheets, function (sheet) {
if (sheet.cssText && !sheet.cssRules) // IE8
return sheet.cssText.match(/_meteor_detect_css/);
return _.find(sheet.cssRules, function (rule) {
return rule.selectorText === '._meteor_detect_css';
});
});
}
};
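The same detection logic can be exercised against mocked stylesheet objects; this is a hypothetical test harness (in the browser, the sheets come from `document.styleSheets` and the marker rule from css_detect.css):

```javascript
// Returns true if any sheet contains the ._meteor_detect_css marker rule.
// Mirrors the underscore-based version above using plain ES5.
function isCssLoaded(styleSheets) {
  return styleSheets.some(function (sheet) {
    if (sheet.cssText && !sheet.cssRules) // IE8 exposes raw text only
      return /_meteor_detect_css/.test(sheet.cssText);
    return Array.prototype.some.call(sheet.cssRules || [], function (rule) {
      return rule.selectorText === '._meteor_detect_css';
    });
  });
}
```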


@@ -20,6 +20,19 @@ WebAppInternals = {};
var bundledJsCssPrefix;
// The reload safetybelt is some js that will be loaded after everything else in
// the HTML. In some multi-server deployments, when you update, you have a
// chance of hitting an old server for the HTML and the new server for the JS or
// CSS. This prevents you from displaying the page in that case, and instead
// reloads it, presumably all on the new version now.
var RELOAD_SAFETYBELT = "\n" +
"if (typeof Package === 'undefined' || \n" +
" ! Package.webapp || \n" +
" ! Package.webapp.WebApp || \n" +
" ! Package.webapp.WebApp._isCssLoaded()) \n" +
" document.location.reload(); \n";
var makeAppNamePathPrefix = function (appName) {
return encodeURIComponent(appName).replace(/\./g, '_');
};
@@ -290,6 +303,7 @@ var runWebAppServer = function () {
}
});
// Serve static files from the manifest.
// This is inspired by the 'static' middleware.
app.use(function (req, res, next) {
@@ -306,12 +320,20 @@ var runWebAppServer = function () {
return;
}
var serveStaticJs = function (s) {
res.writeHead(200, { 'Content-type': 'application/javascript' });
res.write(s);
res.end();
};
if (pathname === "/meteor_runtime_config.js" &&
! WebAppInternals.inlineScriptsAllowed()) {
-res.writeHead(200, { 'Content-type': 'application/javascript' });
-res.write("__meteor_runtime_config__ = " +
-JSON.stringify(__meteor_runtime_config__) + ";");
-res.end();
+serveStaticJs("__meteor_runtime_config__ = " +
+JSON.stringify(__meteor_runtime_config__) + ";");
return;
} else if (pathname === "/meteor_reload_safetybelt.js" &&
! WebAppInternals.inlineScriptsAllowed()) {
serveStaticJs(RELOAD_SAFETYBELT);
return;
}
@@ -519,11 +541,18 @@ var runWebAppServer = function () {
/##RUNTIME_CONFIG##/,
"<script type='text/javascript'>__meteor_runtime_config__ = " +
JSON.stringify(__meteor_runtime_config__) + ";</script>");
boilerplateHtml = boilerplateHtml.replace(
/##RELOAD_SAFETYBELT##/,
"<script type='text/javascript'>"+RELOAD_SAFETYBELT+"</script>");
} else {
boilerplateHtml = boilerplateHtml.replace(
/##RUNTIME_CONFIG##/,
"<script type='text/javascript' src='##ROOT_URL_PATH_PREFIX##/meteor_runtime_config.js'></script>"
);
boilerplateHtml = boilerplateHtml.replace(
/##RELOAD_SAFETYBELT##/,
"<script type='text/javascript' src='##ROOT_URL_PATH_PREFIX##/meteor_reload_safetybelt.js'></script>");
}
boilerplateHtml = boilerplateHtml.replace(
/##ROOT_URL_PATH_PREFIX##/g,
@@ -566,7 +595,6 @@ var runWebAppServer = function () {
proxyConf = configuration.proxy;
}
Log("Attempting to bind to proxy at " + proxyService.providers.proxy);
console.log(proxyConf);
WebAppInternals.bindToProxy(_.extend({
proxyEndpoint: proxyService.providers.proxy
}, proxyConf), proxyServiceName);
@@ -661,10 +689,16 @@ WebAppInternals.bindToProxy = function (proxyConfig, proxyServiceName) {
};
};
var version = "";
if (!process.env.ADMIN_APP) {
var AppConfig = Package["application-configuration"].AppConfig;
version = AppConfig.getStarForThisJob() || "";
}
proxy.call('bindDdp', {
pid: pid,
bindTo: ddpBindTo,
proxyTo: {
tags: [version],
host: host,
port: port,
pathPrefix: bindPathPrefix + '/websocket'
@@ -678,6 +712,7 @@ WebAppInternals.bindToProxy = function (proxyConfig, proxyServiceName) {
pathPrefix: bindPathPrefix
},
proxyTo: {
tags: [version],
host: host,
port: port,
pathPrefix: bindPathPrefix
@@ -693,6 +728,7 @@ WebAppInternals.bindToProxy = function (proxyConfig, proxyServiceName) {
ssl: true
},
proxyTo: {
tags: [version],
host: host,
port: port,
pathPrefix: bindPathPrefix


@@ -1,5 +1,4 @@
-=> Meteor 0.6.6.3: Fix CPU runaway while watching files in large
-projects and occasional server crash on session disconnect.
+=> Meteor 0.7.0.1: Fix failure to initialize local MongoDB server.
This release is being downloaded in the background. Update your
-project to Meteor 0.6.6.3 by running 'meteor update'.
+project to Meteor 0.7.0.1 by running 'meteor update'.


@@ -72,6 +72,12 @@
{
"release": "0.6.6.3"
},
{
"release": "0.7.0"
},
{
"release": "0.7.0.1"
},
{
"release": "NEXT"
}


@@ -71,13 +71,12 @@ umask 022
mkdir build
cd build
-# Temporarily use a fork of 0.10.21 plus a change to fix websockets.
-git clone git://github.com/meteor/node.git
+git clone git://github.com/joyent/node.git
cd node
# When upgrading node versions, also update the values of MIN_NODE_VERSION at
# the top of tools/meteor.js and tools/server/boot.js, and the text in
# docs/client/concepts.html and the README in tools/bundler.js.
-git checkout dev-bundle-0.3.24
+git checkout v0.10.22
./configure --prefix="$DIR"
make -j4
@@ -110,6 +109,7 @@ npm install shell-quote@0.0.1 # now at 1.3.3, which adds plenty of options to
npm install eachline@2.3.3
npm install source-map@0.1.30
npm install source-map-support@0.2.3
npm install bcrypt@0.7.7
# Using the unreleased "caronte" branch rewrite of http-proxy (which will become
# 1.0.0), plus this PR:
@@ -160,7 +160,7 @@ make install
# click 'changelog' under the current version, then 'release notes' in
# the upper right.
cd "$DIR/build"
-MONGO_VERSION="2.4.6"
+MONGO_VERSION="2.4.8"
# We use Meteor fork since we added some changes to the building script.
# Our patches allow us to link most of the libraries statically.


@@ -792,6 +792,7 @@ _.extend(ClientTarget.prototype, {
html.push(_.escape(js.url));
html.push('"></script>\n');
});
html.push('\n\n##RELOAD_SAFETYBELT##');
html.push('\n\n');
html.push(self.head.join('\n')); // unescaped!
html.push('\n' +
@@ -1473,7 +1474,8 @@ var writeSiteArchive = function (targets, outputPath, options) {
builder.write('README', { data: new Buffer(
"This is a Meteor application bundle. It has only one dependency:\n" +
"Node.js 0.10 (with the 'fibers' package). The current release of Meteor\n" +
-"has been tested with Node 0.10.21. To run the application:\n" +
+"has been tested with Node 0.10.22 and works best with 0.10.22 through\n" +
+"0.10.24. To run the application:\n" +
"\n" +
" $ rm -r programs/server/node_modules/fibers\n" +
" $ npm install fibers@1.0.1\n" +
@@ -1674,6 +1676,16 @@ exports.bundle = function (appDir, outputPath, options) {
// Recover by ignoring this program
return;
}
// Programs must (for now) contain a `package.js` file. If not, then
// perhaps the directory we are seeing is left over from another git
// branch or something and we should ignore it. We don't actually parse
// the package.js file here, though (but we do restart if it is later
// added or changed).
if (watch.readAndWatchFile(
watchSet, path.join(programsDir, item, 'package.js')) === null) {
return;
}
targets[item] = true; // will be overwritten with actual target later
// Read attributes.json, if it exists


@@ -23,8 +23,8 @@ Fiber(function () {
var cleanup = require('./cleanup.js');
var Future = require('fibers/future');
-// This code is duplicated in app/server/server.js.
-var MIN_NODE_VERSION = 'v0.10.21';
+// This code is duplicated in tools/server/boot.js.
+var MIN_NODE_VERSION = 'v0.10.22';
if (require('semver').lt(process.version, MIN_NODE_VERSION)) {
process.stderr.write(
'Meteor requires Node ' + MIN_NODE_VERSION + ' or later.\n');


@@ -543,11 +543,16 @@ _.extend(exports, {
var topLevel = self._shrinkwrappedDependenciesTree(dir);
var minimizeModule = function (module) {
-var minimized = {};
-if (self._isGitHubTarball(module.from))
-minimized.from = module.from;
-else
-minimized.version = module.version;
+var version;
+if (module.resolved &&
+!module.resolved.match(/^https:\/\/registry.npmjs.org\//)) {
+version = module.resolved;
+} else if (self._isGitHubTarball(module.from)) {
+version = module.from;
+} else {
+version = module.version;
+}
+var minimized = {version: version};
if (module.dependencies) {
minimized.dependencies = {};
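The version-selection rule above, sketched standalone. `isGitHubTarball` here is a simplified stand-in for the real `_isGitHubTarball` helper, which is not shown in this hunk:

```javascript
// Prefer a non-registry `resolved` URL, then a GitHub tarball `from`,
// otherwise the plain version string.
function isGitHubTarball(from) {
  return /^https:\/\/github.com\/.+\/tarball\//.test(from || '');
}

function minimizedVersion(module) {
  if (module.resolved &&
      !module.resolved.match(/^https:\/\/registry.npmjs.org\//))
    return module.resolved;
  if (isGitHubTarball(module.from))
    return module.from;
  return module.version;
}
```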


@@ -25,7 +25,7 @@ var find_mongo_pids = function (app_dir, port, callback) {
_.each(stdout.split('\n'), function (ps_line) {
// matches mongos we start.
-var m = ps_line.match(/^\s*(\d+).+mongod .+--port (\d+) --dbpath (.+)(?:\/|\\)\.meteor(?:\/|\\)local(?:\/|\\)db --replSet /);
+var m = ps_line.match(/^\s*(\d+).+mongod .+--port (\d+) --dbpath (.+)(?:\/|\\)\.meteor(?:\/|\\)local(?:\/|\\)db(?: |$)/);
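For reference, the loosened pattern (the variant without the trailing `--replSet`) extracts pid, port, and project directory like so; the ps line below is fabricated for illustration:

```javascript
var mongodPsRe = /^\s*(\d+).+mongod .+--port (\d+) --dbpath (.+)(?:\/|\\)\.meteor(?:\/|\\)local(?:\/|\\)db(?: |$)/;
var line = ' 1234 ?? 0:01 /usr/bin/mongod --bind_ip 127.0.0.1 ' +
  '--port 3001 --dbpath /home/me/app/.meteor/local/db --replSet meteor';
var m = line.match(mongodPsRe);
// m[1] is the pid, m[2] the port, m[3] the project directory.
```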
if (m && m.length === 4) {
var found_pid = parseInt(m[1]);
var found_port = parseInt(m[2]);
@@ -161,9 +161,11 @@ exports.launchMongo = function (options) {
}
var portFile = path.join(dbPath, 'METEOR-PORT');
var portFileExists = false;
var createReplSet = true;
try {
createReplSet = +(fs.readFileSync(portFile)) !== options.port;
portFileExists = true;
} catch (e) {
if (!e || e.code !== 'ENOENT')
throw e;
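The decision above (initiate a new replica set only when the port file is missing or records a different port) can be sketched standalone; `readPortFile` is a hypothetical callback standing in for `fs.readFileSync(portFile)`:

```javascript
// Unary + coerces the file's contents (a string, possibly with a trailing
// newline) to a number, matching the original's `+(fs.readFileSync(...))`.
function shouldCreateReplSet(readPortFile, port) {
  try {
    return +readPortFile() !== port;
  } catch (e) {
    if (!e || e.code !== 'ENOENT')
      throw e;
    return true; // no port file yet: fresh db, so initiate a replSet
  }
}
```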
@@ -176,6 +178,11 @@ exports.launchMongo = function (options) {
// replSet configuration. It's also a little slow to initiate a new replSet,
// thus the attempt to not do it unless the port changes.)
if (createReplSet) {
// Delete the port file, so we don't mistakenly believe that the DB is
// still configured.
if (portFileExists)
fs.unlinkSync(portFile);
try {
var dbFiles = fs.readdirSync(dbPath);
} catch (e) {
@@ -199,13 +206,16 @@ exports.launchMongo = function (options) {
var child_process = require('child_process');
var replSetName = 'meteor';
var proc = child_process.spawn(mongod_path, [
-// nb: cli-test.sh and find_mongo_pids assume that the next four arguments
-// exist in this order without anything in between
+// nb: cli-test.sh and find_mongo_pids make strong assumptions about the
+// order of the arguments! Check them before changing any arguments.
'--bind_ip', '127.0.0.1',
'--smallfiles',
'--nohttpinterface',
'--port', options.port,
'--dbpath', dbPath,
// Use an 8MB oplog rather than 256MB. Uses less space on disk and
// initializes faster. (Not recommended for production!)
'--oplogSize', '8',
'--replSet', replSetName
]);
@@ -231,36 +241,65 @@ exports.launchMongo = function (options) {
proc.stdout.setEncoding('utf8');
var listening = false;
var replSetReady = false;
+var replSetReadyToBeInitiated = false;
+var alreadyInitiatedReplSet = false;
+var alreadyCalledOnListen = false;
var maybeCallOnListen = function () {
-if (listening && replSetReady) {
+if (listening && replSetReady && !alreadyCalledOnListen) {
if (createReplSet)
fs.writeFileSync(portFile, options.port);
+alreadyCalledOnListen = true;
onListen();
}
};
var maybeInitiateReplset = function () {
// We need to want to create a replset, be confident that the server is
// listening, be confident that the server's replset implementation is
// ready to be initiated, and have not already done it.
if (!(createReplSet && listening && replSetReadyToBeInitiated
&& !alreadyInitiatedReplSet)) {
return;
}
alreadyInitiatedReplSet = true;
// Connect to it and start a replset.
var db = new mongoNpmModule.Db(
'meteor', new mongoNpmModule.Server('127.0.0.1', options.port),
{safe: true});
db.open(function(err, db) {
if (err)
throw err;
db.admin().command({
replSetInitiate: {
_id: replSetName,
members: [{_id : 0, host: '127.0.0.1:' + options.port}]
}
}, function (err, result) {
if (err)
throw err;
// Oddly, this errmsg arrives in the result rather than via the err
// callback argument.
if (result && result.documents && result.documents[0]
&& result.documents[0].errmsg) {
throw new Error(result.documents[0].errmsg);
}
db.close(true);
});
});
};
proc.stdout.on('data', function (data) {
+// note: don't use "else ifs" in this, because 'data' can have multiple
+// lines
+if (/config from self or any seed \(EMPTYCONFIG\)/.test(data)) {
+replSetReadyToBeInitiated = true;
+maybeInitiateReplset();
+}
if (/ \[initandlisten\] waiting for connections on port/.test(data)) {
-if (createReplSet) {
-// Connect to it and start a replset.
-var db = new mongoNpmModule.Db(
-'meteor', new mongoNpmModule.Server('127.0.0.1', options.port),
-{safe: true});
-db.open(function(err, db) {
-if (err)
-throw err;
-db.admin().command({
-replSetInitiate: {
-_id: replSetName,
-members: [{_id : 0, host: '127.0.0.1:' + options.port}]
-}
-}, function (err, result) {
-if (err)
-throw err;
-db.close(true);
-});
-});
-}
listening = true;
+maybeInitiateReplset();
maybeCallOnListen();
}


@@ -677,7 +677,7 @@ exports.run = function (context, options) {
mongoStartupPrintTimer = setTimeout(function () {
process.stdout.write("Initializing mongo database... this may take a moment.\n");
-}, 3000);
+}, 5000);
updater.startUpdateChecks(context);
launch();


@@ -5,8 +5,8 @@ var Future = require(path.join("fibers", "future"));
var _ = require('underscore');
var sourcemap_support = require('source-map-support');
-// This code is duplicated in tools/server/server.js.
-var MIN_NODE_VERSION = 'v0.10.21';
+// This code is duplicated in tools/meteor.js.
+var MIN_NODE_VERSION = 'v0.10.22';
if (require('semver').lt(process.version, MIN_NODE_VERSION)) {
process.stderr.write(


@@ -58,7 +58,7 @@ var _assertCorrectPackageNpmDir = function(deps) {
// copy fields with values generated by shrinkwrap that can't be known to the
// test author. We set keys on val always in this order so that comparison works well.
var val = {};
-_.each(['version', 'from', 'resolved', 'dependencies'], function(key) {
+_.each(['version', 'dependencies'], function(key) {
if (expected[key])
val[key] = expected[key];
else if (actualMeteorNpmShrinkwrapDependencies[name] && actualMeteorNpmShrinkwrapDependencies[name][key])
@@ -264,7 +264,13 @@ assert.doesNotThrow(function () {
var tmpOutputDir = tmpDir();
var result = bundler.bundle(appWithPackageDir, tmpOutputDir, {nodeModulesMode: 'skip', releaseStamp: 'none', library: lib});
assert.strictEqual(result.errors, false, result.errors && result.errors[0]);
try {
_assertCorrectPackageNpmDir(deps);
} catch (e) {
console.log("ACTUAL", e.actual);
console.log("EXPECTED", e.expected);
throw e;
}
_assertCorrectBundleNpmContents(tmpOutputDir, deps);
// Check that a string introduced by our fork is in the source.
assert(/clientMaxAge = 604800000/.test(


@@ -34,8 +34,8 @@ assert.doesNotThrow(function () {
// verify that contents are minified
var appHtml = fs.readFileSync(path.join(tmpOutputDir, "programs",
"client", "app.html"), 'utf8');
-assert(/src=\"##ROOT_URL_PATH_PREFIX##\/[0-9a-f]{40,40}.js\"/.test(appHtml));
-assert(!(/src=\"##ROOT_URL_PATH_PREFIX##\/packages/.test(appHtml)));
+assert(/src=\"##BUNDLED_JS_CSS_PREFIX##\/[0-9a-f]{40,40}.js\"/.test(appHtml));
+assert(!(/src=\"##BUNDLED_JS_CSS_PREFIX##\/packages/.test(appHtml)));
});
console.log("nodeModules: 'skip', no minify");
@@ -50,11 +50,11 @@ assert.doesNotThrow(function () {
// verify that contents are not minified
var appHtml = fs.readFileSync(path.join(tmpOutputDir, "programs",
"client", "app.html"), 'utf8');
-assert(!(/src=\"##ROOT_URL_PATH_PREFIX##\/[0-9a-f]{40,40}.js\"/.test(appHtml)));
-assert(/src=\"##ROOT_URL_PATH_PREFIX##\/packages\/meteor/.test(appHtml));
-assert(/src=\"##ROOT_URL_PATH_PREFIX##\/packages\/deps/.test(appHtml));
+assert(!(/src=\"##BUNDLED_JS_CSS_PREFIX##\/[0-9a-f]{40,40}.js\"/.test(appHtml)));
+assert(/src=\"##BUNDLED_JS_CSS_PREFIX##\/packages\/meteor/.test(appHtml));
+assert(/src=\"##BUNDLED_JS_CSS_PREFIX##\/packages\/deps/.test(appHtml));
// verify that tests aren't included
-assert(!(/src=\"##ROOT_URL_PATH_PREFIX##\/package-tests\/meteor/.test(appHtml)));
+assert(!(/src=\"##BUNDLED_JS_CSS_PREFIX##\/package-tests\/meteor/.test(appHtml)));
});
console.log("nodeModules: 'skip', no minify, testPackages: ['meteor']");
@@ -70,7 +70,7 @@ assert.doesNotThrow(function () {
// verify that tests for the meteor package are included
var appHtml = fs.readFileSync(path.join(tmpOutputDir, "programs",
"client", "app.html"));
-assert(/src=\"##ROOT_URL_PATH_PREFIX##\/packages\/meteor:tests\.js/.test(appHtml));
+assert(/src=\"##BUNDLED_JS_CSS_PREFIX##\/packages\/meteor:tests\.js/.test(appHtml));
});
console.log("nodeModules: 'copy'");