We now throw more specific errors, consistent between both tools and
package-version-parser (copy-and-pasted code, sadly, but we really do
have to make this check before uniload-from-checkout).
It no longer uses N nested 2-element hashmaps; the 2-element split
happens at the top level in the JS object, and each nested hashmap is
now just a set (it's great that we can rely on the interned-ness of
constraints!)
8% benchmark speedup and less code.
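A minimal sketch of why interning allows a set to replace a nested
hashmap (names invented for illustration; the real solver uses
mori-based structures, per the notes below about lazily loading mori):

```javascript
// Illustrative only: interning means an identical constraint spec always
// yields the same object, so an object-identity Set can replace a nested
// hashmap keyed by constraint fields.
const internedConstraints = new Map();
function getConstraint(spec) {
  // Reuse the existing object for an identical spec (interning).
  if (!internedConstraints.has(spec)) {
    internedConstraints.set(spec, { spec: spec });
  }
  return internedConstraints.get(spec);
}

const applied = new Set();
applied.add(getConstraint("foo@1.2.3"));
applied.add(getConstraint("foo@1.2.3")); // same interned object: no duplicate
console.log(applied.size); // 1
```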
Instead of recalculating the "edge versions", just keep the list of
alternatives for each dep sorted and look at the first and last.
Provides another 25% speedup to the benchmark, deletes a lot of
code, *AND* removes the one part of the constraint solver that tries to
have a deep understanding of the different constraint types other than
the "is this constraint satisfied" function (ie, makes it easier to add
more constraint types later).
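A hedged sketch of the sorted-alternatives idea (function and variable
names are hypothetical, not the real code):

```javascript
// Hypothetical sketch: if each dependency's list of alternative versions
// is kept sorted, the "edge versions" are simply the first and last
// entries -- no per-constraint-type logic needed.
function edgeVersions(sortedAlternatives) {
  return {
    earliest: sortedAlternatives[0],
    latest: sortedAlternatives[sortedAlternatives.length - 1]
  };
}

const alternatives = ["1.0.0", "1.2.0", "2.0.1"]; // assumed pre-sorted
const edges = edgeVersions(alternatives);
console.log(edges.earliest, edges.latest); // 1.0.0 2.0.1
```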
Instead of having two different spots where we do special checks to see
if a piece of the state might have only one alternative, we just ensure
that the ResolverState itself always eagerly converts one-alternative
dependencies into choices.
Factor out the "state" into its own class, ResolverState. The big
difference from the previous state object: it actually explicitly tracks
the set of potential UnitVersions for every active dependency. This
essentially replaces the DependencyList class.
Because we always know exactly how many options there are for a given
dependency, we can both generalize and simplify the "propagate
transitive exact deps" optimization. That optimization only worked on
"foo@=1.2.3" dependencies, which meant it didn't apply in any other
situation where there was only one possible package to choose. But there
are a whole lot of other situations like that: local packages, packages
that just don't have many versions, packages that already have a lot of
constraints applied to them, etc. By tracking the set of potential
alternatives, we can just make sure to always expand 1-alternative units
first. We also maintain the aspect of the optimization where we don't
need to call the cost function until we've actually gotten to a state
with multiple neighbors.
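A toy sketch of the "expand 1-alternative units first" rule (the state
shape and names here are invented for illustration):

```javascript
// Toy illustration: any dependency with exactly one remaining alternative
// is promoted to a choice immediately, so the cost function only ever
// sees states that still have a real branching decision.
function promoteForcedChoices(state) {
  for (const dep of Object.keys(state.alternatives)) {
    const alts = state.alternatives[dep];
    if (alts.length === 1 && !(dep in state.choices)) {
      state.choices[dep] = alts[0]; // forced: no cost function needed
    }
  }
  return state;
}

const state = {
  choices: {},
  alternatives: {
    "local-pkg": ["1.0.0"],            // only one option: choose eagerly
    "foo": ["1.0.0", "1.2.3", "2.0.0"] // real branching: defer to search
  }
};
promoteForcedChoices(state);
console.log(state.choices); // { 'local-pkg': '1.0.0' }
```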
This keeps #2410 fixed as well.
I've removed the constraintAncestor support as part of this refactoring,
so some error messages may be worse than they were before. But this
should set me up pretty well to improve error messages tomorrow.
We haven't yet decided how we want to do versioning for packages that
mostly just wrap non-Meteor code that has its own version numbers. We
might stick to totally-unrelated version numbers (and maybe add a
"wrapped version" field that gets displayed in the upgrade/downgrade
messages?), or change to matching upstream versions (with techniques for
dealing with changes to packaging, a la debian_revision), or something
different.
But since changing to match upstream versions is a possibility, let's
make sure that that operation won't be viewed as a "downgrade" by
updating the wrapped packages whose upstream versions are 0.*.
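To make the downgrade concern concrete (version numbers invented): a
plain version comparison treats a switch from a wrapper's own 1.x number
to a 0.x upstream number as going backwards.

```javascript
// Illustrative only: naive three-part version comparison. Moving from
// "1.0.0" (wrapper's own numbering) to "0.6.1" (upstream's numbering)
// compares as a downgrade, which is what the update above guards against.
function cmpVersions(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] - pb[i];
  }
  return 0;
}

console.log(cmpVersions("0.6.1", "1.0.0") < 0); // true: reads as a downgrade
```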
Introduces a "Patience" class which lets CPU-bound operations like the
constraint solver yield every so often, and which prints messages when
an operation (CPU-bound or not) is taking a long time.
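A rough async sketch of what such a class might look like. The real one
is fiber-based and its API is not shown here; the class body, option
names, and thresholds below are all assumptions.

```javascript
// Hypothetical approximation of a Patience-style helper: CPU-bound loops
// call check() periodically, which yields to the event loop and logs a
// message once the operation has been running longer than a threshold.
class Patience {
  constructor(options) {
    options = options || {};
    this.start = Date.now();
    this.warned = false;
    this.messageAfterMs =
      options.messageAfterMs === undefined ? 1000 : options.messageAfterMs;
    this.message = options.message || "Still working...";
  }
  async check() {
    if (!this.warned && Date.now() - this.start > this.messageAfterMs) {
      this.warned = true;
      console.log(this.message);
    }
    // Yield so other work (and I/O) can run.
    await new Promise((resolve) => setImmediate(resolve));
  }
}

async function crunch() {
  const patience = new Patience({ messageAfterMs: 500 });
  let sum = 0;
  for (let i = 0; i < 1e6; i++) {
    sum += i;
    if (i % 1e5 === 0) await patience.check(); // yield every 100k steps
  }
  return sum;
}

crunch().then((sum) => console.log(sum)); // 499999500000
```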
In Chrome, the built-in, non-standard property `e.stack` starts with
`"Error: " + e.message`, so we said `(e.stack || e.message)`. However,
in Cordova, `e.stack` does not include `e.message`. So detect whether
`e.stack` includes `e.message` and act accordingly (to avoid losing the
message or printing it twice).
Also add some comments and stop printing "Exception from Deps afterFlush
function function".
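A sketch of the detection logic described above (the function name and
error shapes are illustrative, not the actual patch):

```javascript
// Illustrative version of the fix: only fall back to e.message when the
// stack doesn't already contain it, so the message is neither lost
// (Cordova-style stacks) nor printed twice (Chrome-style stacks).
function fullErrorText(e) {
  if (typeof e.stack === "string" && e.message &&
      e.stack.indexOf(e.message) !== -1) {
    return e.stack; // stack already embeds the message (Chrome)
  }
  return (e.message ? e.message + "\n" : "") + (e.stack || "");
}

const chromeLike = { message: "boom", stack: "Error: boom\n    at f" };
const cordovaLike = { message: "boom", stack: "    at f" };
console.log(fullErrorText(chromeLike) === chromeLike.stack); // true
console.log(fullErrorText(cordovaLike)); // message kept, then the stack
```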
Now ServiceConnection's guarantee is that once a DDP connection is
successfully negotiated, it won't restart. This relies on the assumption
that the only use of reconnect({_force: true}) is DDP protocol
negotiation!
Drop some unnecessary (and flawed, for this application) `disconnect`
stream events.
Also, remove some unnecessary `new` calls.
Fixes 'meteor mongo some-galaxy app'.
- ServiceConnection should never try to reconnect. It's already the case
that we don't hold open ServiceConnections over long periods while
idle; it makes the class much simpler if it corresponds to a single
TCP connection. This also means that as soon as we have one connection
failure (eg you're offline) we can fail instantly instead of retrying
pointlessly.
- Drop the explicit timeout code in ServiceConnection. There's already
timeout handling in stream_client, and now that we don't retry, it
actually takes effect.
- Be more rigorous about uses of Future in ServiceConnection. Ensure
that each Future is only used once (ie, avoid "Future resolved more
than once" errors). Hopefully fixes #2390.
- ServiceConnection constructor now blocks until it's connected (and
throws if there's a connection failure). Maybe this introduces a tiny
bit more latency to the connection, but it makes it much easier to
handle errors properly.
- In packageClient.handlePackageServerConnectionError, show the error
message corresponding to the connection failure.
- In Node, the (newish) error passed to the Stream callback is now a
"DDP.ConnectionError" object. We can detect this in the tool (and we
don't even need to do some complex uniload/instanceof dance, since
error classes made with Meteor.makeErrorType label themselves with a
string errorType). We also no longer have a special
ServiceConnection.ConnectionTimeoutError.
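A sketch of the errorType check; only the "DDP.ConnectionError" string
comes from the notes above, the surrounding code is illustrative:

```javascript
// Errors built with Meteor.makeErrorType carry a string errorType, so the
// tool can recognize a DDP.ConnectionError by that tag alone -- no
// instanceof against a uniloaded class required. Sketch only.
function isConnectionError(err) {
  return !!err && err.errorType === "DDP.ConnectionError";
}

console.log(isConnectionError({
  errorType: "DDP.ConnectionError",
  message: "network is unreachable"
})); // true
console.log(isConnectionError(new Error("something else"))); // false
```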
We test event capturing using the <video> "play" event, because it is a
non-bubbling event native to modern browsers. We previously had the src
URL be a video on the Internet, but the test still seemed to pass even
when the video could not be accessed.
So now set the "src" to "". Seems to work in IE 9, Firefox, Safari,
Chrome.
They are still used internally by the constraint solver (to implement
update --breaking) but cannot be externally specified.
Also, stop supporting "@none", whatever that was.
This resolves #2403. Specifically, if you implement some form of
two-way databinding, and you modify an input field in some way
other than adding characters to the end, the insertion point
jumps to the end.
Still need to write a test for this.
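For context, a hedged sketch of the usual shape of such a fix. The
`input` below is a plain stand-in object rather than a real DOM element,
and this is not the actual patch:

```javascript
// Illustrative pattern: skip the write when nothing changed, and restore
// the selection afterwards, since assigning input.value in a real browser
// moves the caret to the end of the field.
function setValuePreservingSelection(input, newValue) {
  if (input.value === newValue) return; // no-op writes must not move the caret
  const start = input.selectionStart;
  const end = input.selectionEnd;
  input.value = newValue;
  input.selectionStart = start;
  input.selectionEnd = end;
}

const input = { value: "helo world", selectionStart: 4, selectionEnd: 4 };
setValuePreservingSelection(input, "hello world");
console.log(input.value, input.selectionStart); // hello world 4
```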
This should be a performance win (no need to load constraint-solver
unless you actually need to use it!), and it's what I wanted to do
initially instead of lazily loading mori, but it wasn't feasible with
the old super-recursive catalog.
This fixes an issue where running 'meteor rebuild' twice (!) died the
second time with a "Can't load npm module 'mori'" error. This is because
uniload (when run from a checkout) sets up Npm.require to read directly
from PACKAGE/.build.PACKAGE/npm/node_modules, which might get deleted
later in the process! There are some complex and maybe slow ways to
resolve this general issue (copy the node_modules somewhere else?) but
for now, the easiest way to avoid the issue is just to load Npm modules
immediately inside packages which need to be uniloaded.
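A toy model of the failure mode and the workaround. `fakeRequire` stands
in for Npm.require, and deleting the "module" after load simulates the
.build directory being removed mid-process; nothing here is the real
implementation.

```javascript
// Illustrative only: eager loading resolves the module before the files
// can be deleted out from under the require machinery; lazy loading
// defers resolution and fails once they are gone.
const modules = { mori: { name: "mori" } };
const fakeRequire = (name) => {
  if (!(name in modules)) {
    throw new Error("Can't load npm module '" + name + "'");
  }
  return modules[name];
};

// Eager: resolved immediately, survives later deletion.
const mori = fakeRequire("mori");

// Lazy: resolution deferred until first use.
const lazyMori = () => fakeRequire("mori");

delete modules.mori; // simulate .build.PACKAGE/npm being deleted
console.log(mori.name); // mori -- eager load still works
try {
  lazyMori();
} catch (e) {
  console.log(e.message); // Can't load npm module 'mori'
}
```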
Actually verify that uniloaded packages are in the list. Add missing
'ejson'. Remove (ah well) test that relies on ability to uniload an app
package (which shouldn't work anyway, but it would be nice to test
uniload Assets...)