This does not mean that Meteor.call or Meteor.apply now return a Promise.
Completion of the method call is merely delayed until the Promise is
resolved or rejected, at which point the calling code asynchronously
receives the resulting value or exception.
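The mechanism can be sketched as follows (illustrative names, not Meteor's actual internals): if a method handler returns a thenable, delivery of the result is deferred until it settles, while the caller keeps using the ordinary callback API.

```javascript
// Minimal sketch: complete the method call with the handler's return
// value, unless that value is a Promise, in which case completion is
// deferred until the Promise resolves or rejects.
function invokeMethod(handler, args, callback) {
  let result;
  try {
    result = handler(...args);
  } catch (err) {
    return callback(err);
  }
  if (result && typeof result.then === "function") {
    // Promise returned: complete the call asynchronously.
    result.then(
      (value) => callback(null, value),
      (err) => callback(err)
    );
  } else {
    // Plain value: complete immediately.
    callback(null, result);
  }
}
```

Either way, the calling code sees only the usual (error, result) callback; whether a Promise was involved is invisible to it.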
These changes were inspired by this forum thread:
https://forums.meteor.com/t/fibers-and-meteor-promise-npm-pacakge/6531/7
We've been shipping the `logging` package to the client even though it
isn't used on the client by any core packages. Now that the `logging`
package is removable from your app, let's make it actually removable
by deleting totally worthless dependencies that exist for bizarre
historical reasons.
For example, some packages, like `reload` and `mongo`, depend on
`logging` because that's where `Meteor._debug` used to be, before it
was moved to the `meteor` package and `logging` was repurposed for
something else. The `ddp-server` package had a crazy overreaching set
of dependencies, pulling in a bunch of client-side libraries even though
it only has server-side code of its own.
Native JSON support exists in every browser we could conceivably care
about: it's part of ES5 and even IE 8 supports it fully. There's no need
to ship a library as a core package, and no need to specify dependencies
on it.
We could conceivably publish a new version of the `json` package that
is empty, so that apps using packages that still say `use("json")`
also get the code size reduction, but we'll wait until someone requests
that.
This included removing some internal version constraints. It would be
nice if package A could say "use B@2.0.0" (when both have changed), but
when both are in the same release, we would need to publish a release
containing B@2.0.0-rc, which doesn't match that constraint. Fortunately,
constraints aren't necessary within a release anyway.
Made the tests extensible, easier to modify, and more thorough.
Refactored common code into ddp-rate-limiter-tests-common.js
Explicitly defined timeToReset, which livedata_server.js now passes back to the user by appending it to the error object.
Refactored the rate-limit package to have a new Rule class that organizes rule attributes appropriately.
Moved all the Rule-specific methods from RateLimiter to the Rule prototype. Reformatted the code to match
Meteor code style.
Cleaned up the rate-limit package by removing old pre-refactor methods. Renamed private
variables inside the rate-limit package. Updated livedata_server.js to rate-limit
both methods and subscriptions in their respective protocol_handlers. There is currently
no default rule for subscriptions in the global DDPRateLimiter. Fixed the ddp-rate-limiter tests to work as well.
Still need to add checks and throw errors on malformed input to the rate-limit package. Updated the default rule in
ddp-rate-limiter to reflect the new generic rate-limit design, and moved the location of the DDPRateLimiter in
livedata_server.js. Still need to fix the tests for both the ddp-rate-limiter and rate-limit packages against the
updated code, and to clean up the rate-limit package.
Changed the rate-limit package to take generic rules and match generic inputs against those rules.
Users can now use the rate-limit package for whatever they like. Tests still need to
be updated, and the ddp-rate-limiter needs to be moved so that it also covers subscriptions.
Also need to remove duplicate code left over from the previous implementation, which hardcoded
DDP types into the rate-limiting package by assuming the input was a DDPCommon.MethodInvocation object.
DDPRateLimiter is a global rate limiter with a public API for adding rules, setting
the default error message, and passing in a configuration of rules. It is already
integrated into DDP and is checked on every method invocation.
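The generic rule design above can be sketched as follows. This is a hypothetical illustration, not the package's actual internals (the `Rule` class name comes from the refactor described above, but the fields and methods here are assumptions): a rule matches an input object field-by-field, counts hits within a time window, and reports a `timeToReset` when denying.

```javascript
// Hypothetical sketch of a generic rate-limiting rule.
class Rule {
  constructor(matcher, limit, intervalMs) {
    this.matcher = matcher;       // e.g. { type: "method", name: "login" }
    this.limit = limit;           // max hits allowed per interval
    this.intervalMs = intervalMs; // window length in milliseconds
    this.hits = 0;
    this.windowStart = null;
  }
  // A matcher field may be a literal value or a predicate function;
  // the input matches when every matcher field agrees.
  matches(input) {
    return Object.keys(this.matcher).every((key) => {
      const m = this.matcher[key];
      return typeof m === "function" ? m(input[key]) : m === input[key];
    });
  }
  // Record a hit; report whether it is allowed and when the window resets.
  check(now = Date.now()) {
    if (this.windowStart === null || now - this.windowStart >= this.intervalMs) {
      this.hits = 0;
      this.windowStart = now;     // start a fresh window
    }
    this.hits++;
    return {
      allowed: this.hits <= this.limit,
      timeToReset: this.windowStart + this.intervalMs - now,
    };
  }
}
```

A limiter built on this would keep a list of such rules and, for each incoming invocation, run `check` on every rule whose `matches` returns true, denying the call if any rule denies it.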
Before this change, the number of catch-up attempts was N*M, where N is the
number of writes inside the fence and M is the number of active observers on
the affected collections. Every catch-up issued yet another query to find the
latest oplog entry. This was extremely inefficient, in terms of both CPU usage
and added latency: after executing write-heavy methods, the application process
was occupied for many seconds doing the same thing over and over again.
This change provides a performance improvement for all kinds of workloads.
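To make the pathology concrete, here is an illustrative sketch only (not the actual change): the waste comes from issuing the latest-oplog-entry query once per (write, observer) pair, i.e. N*M times; sharing a single lookup across observers reduces that to one query per catch-up pass. The function and method names below are hypothetical.

```javascript
// Hypothetical helper: catch up all observers using one shared query
// for the latest oplog entry, instead of one query per catch-up.
function catchUpObservers(observers, fetchLatestOplogEntry) {
  const latest = fetchLatestOplogEntry(); // one query, reused for all observers
  for (const observer of observers) {
    observer.catchUpTo(latest);           // no additional queries here
  }
  return latest;
}
```

With N = 50 writes in a fence and M = 10 observers, the old scheme issued 500 queries; a shared lookup needs one per pass.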
The change to use a regularly scheduled timer, rather than tearing timers
down and setting them back up again, means that the following can occur:
- ping/pong occurs
- server sends packet, acked (so `_seenPacket` is true)
- connection dies
- interval timer picks up, notices `_seenPacket` is true, sets it to
false, continues
- interval timer picks up, finally notices `_seenPacket` is false.
That is, it can take up to two interval cycles to detect that a
connection has gone away. Accordingly, I halve the `heartbeatInterval`
so that we detect the connection has gone away, in the worst possible
case, in the same amount of time.
`heartbeatTimeout`, being a `setTimeout` based function, does not
require similar adjustment.
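The worst case walked through above can be sketched as follows (a hypothetical helper, not the actual heartbeat code): count how many interval ticks pass before the timer notices a dead connection, given whether a packet was seen just before the connection died.

```javascript
// Hypothetical model of the interval-timer logic: each tick checks
// whether any packet arrived since the previous tick.
function intervalsToDetectDeath(seenPacketBeforeDeath) {
  let seenPacket = seenPacketBeforeDeath;
  let ticks = 0;
  for (;;) {
    ticks++;
    if (!seenPacket) return ticks; // no traffic since last tick: dead
    seenPacket = false;            // reset the flag and wait one more interval
  }
}
```

With a packet acked just before death, detection takes two intervals; with no packet, one. Halving `heartbeatInterval` therefore restores the original worst-case detection time.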