(still outstanding: changes to package publication workflow)
A package marked debugOnly in the package source is not to be bundled in production.
Moreover, if a package/app depends on a debugOnly package, that entire tree should
not be bundled. (But we should take it into account when figuring out versions!)
Does the following:
- In the catalog, we have a function that takes in a set of versions and a set of original
constraints and traverses it, recursively, to build a subset of versions that we *should*
bundle, and the corresponding subset of versions that we shouldn't (because they are either
debugOnly themselves or pulled in by debugOnly packages). (We do this in the catalog because
it is an addon onto the results of the constraint solver, tied deeply into our representation
of data)
- In the packageLoader, we keep track of the packages & versions that we should bundle, and also
the packages that we should exclude. We do this in the package-loader because, essentially, that's the
object that we use to keep the results of the constraint solver, and we already propagate it to all
functions that care about it. (Possibly we should rename it later.)
- In the compiler, when we figure out buildTimeDependencies, we ask whether we need to bundle debug
builds. If not, we filter them out (see above). Also, when we actually build unibuilds together,
we skip the ones that the packageLoader tells us to exclude (which ensures that they don't make
it into the final product).
- In the project, we keep track of whether this project is building in debug mode. That's because the project
is where we keep the state of the current project that we are building, and if we are ever in the state of
building multiple things, then that's the code we would need to touch (note that we make a similar
assumption when solving constraints).
- Adds the option to keep the project debug-build-free and calls it in commands when appropriate.
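The catalog-side traversal described above can be sketched roughly as follows. This is a minimal illustration with hypothetical names (partitionForBundle, getDeps, isDebugOnly are not the tool's actual API): starting from the app's top-level dependencies, walk the resolved dependency graph but never step into a debugOnly package; everything reached gets bundled, and everything else in the version map is excluded from the production bundle.

```javascript
// Hypothetical sketch of the bundle/exclude partition. Everything
// reachable from the top-level deps without crossing a debugOnly
// package is bundled; the rest of the resolved versions are excluded.
function partitionForBundle(versions, topLevelDeps, getDeps, isDebugOnly) {
  const bundle = new Set();
  // Seed with top-level deps that are not themselves debugOnly.
  const stack = topLevelDeps.filter((pkg) => !isDebugOnly(pkg));
  while (stack.length) {
    const pkg = stack.pop();
    if (bundle.has(pkg)) continue;
    bundle.add(pkg);
    for (const dep of getDeps(pkg, versions[pkg])) {
      // Don't recurse into debugOnly packages: their whole subtree is
      // excluded unless it is reachable some other, non-debug way.
      if (!isDebugOnly(dep)) stack.push(dep);
    }
  }
  const exclude = Object.keys(versions).filter((pkg) => !bundle.has(pkg));
  return { bundle: [...bundle].sort(), exclude: exclude.sort() };
}
```

Note that the full version map still goes through the constraint solver first; the partition only decides what ends up in the bundle, which matches the "we should take it into account when figuring out versions" requirement.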
Two things:
- to determine if two versions are compatible, we need to know their ECVs
(earliest compatible versions). If the version that we have is local, then we don't have access to the
version record of the server version, so we can't figure out its ECV. That's why, in the olden
days, there was a hack to store ECVs separately ('forgotten ECVs'). The new catalog didn't
have that function implemented -- it might not need it, but doing without it would require
changes to the constraint solver that might be risky at this point. In any case, implementing this
function in the new world is pretty easy and solves the problem for now.
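To make the 'forgotten ECVs' idea concrete, here is a rough sketch under stated assumptions: the class and function names (ForgottenECVs, compatible, ecvOf) are illustrative, not the catalog's real API, and it assumes the usual rule that two versions of a package are compatible iff they share the same ECV.

```javascript
// Hypothetical side table: before a local build shadows a server
// version's record, stash that version's ECV here so compatibility
// checks still have something to consult.
class ForgottenECVs {
  constructor() {
    this.table = new Map(); // "pkg@version" -> ecv
  }
  remember(pkg, version, ecv) {
    this.table.set(pkg + "@" + version, ecv);
  }
  lookup(pkg, version) {
    const ecv = this.table.get(pkg + "@" + version);
    return ecv === undefined ? null : ecv;
  }
}

// Assumed compatibility rule: same package, same ECV.
function compatible(ecvOf, pkg, a, b) {
  return ecvOf(pkg, a) === ecvOf(pkg, b);
}
```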
- when we look for things, we look in the local catalog, then the server catalog, and if the
server catalog can't find it, it will refresh. However, sometimes we are looking for something that the
server catalog cannot POSSIBLY have (i.e., it has a build ID). That's fine, actually, but it causes
an extra refresh on the server catalog that we don't need. I put in an early return to make sure that, if we
know for a fact that the server catalog does not have a version record (i.e., the version has a build ID), we don't
bother looking in it and just return null to begin with. That should help.
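The lookup order above can be sketched like this. The names (getVersionRecord, getVersion, getVersionWithRefresh) are illustrative, and it assumes, as in semver, that a build ID appears after a '+' in the version string:

```javascript
// Sketch: local catalog first; bail out early for build-ID versions,
// which the server catalog cannot possibly have; otherwise fall through
// to the server catalog, which may refresh on a miss.
function getVersionRecord(localCatalog, serverCatalog, pkg, version) {
  const local = localCatalog.getVersion(pkg, version);
  if (local) return local;
  if (version.includes("+")) {
    // Build-ID versions are local-only; skip the server lookup
    // entirely so a miss doesn't trigger a pointless refresh.
    return null;
  }
  return serverCatalog.getVersionWithRefresh(pkg, version);
}
```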
This is a horrible, horrible hack. If for no other reason than that (without
EJSON.clone around) it leaves internal pieces of catalog.official and
catalog.complete intertwined.
But! It does mean that every single tool startup now only has to read
and parse packages.data.json once instead of twice. Which speeds up
'meteor --version' from 1 second to 0.5.
We'll solve this for real with the sqlite refactoring. But this is fast
and easy for now.
Replace catalog.getLatestVersion with catalog.getLatestMainlineVersion,
which skips prerelease versions (those with dashes in the
version). Ensure that this function is only used by high-level commands
like 'meteor list'. Replace other uses of that function with other
equivalent functions.
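The skip-prereleases rule is simple enough to sketch. This is illustrative only (the real tool compares versions with proper semver logic; the naive numeric comparison below is a stand-in):

```javascript
// Sketch of getLatestMainlineVersion: drop any version containing a
// dash (a prerelease) and return the greatest remaining version.
function getLatestMainlineVersion(versions) {
  // Naive dotted-number comparison, illustrative only.
  const cmp = (a, b) => {
    const pa = a.split(".").map(Number);
    const pb = b.split(".").map(Number);
    for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
      const d = (pa[i] || 0) - (pb[i] || 0);
      if (d) return d;
    }
    return 0;
  };
  const mainline = versions.filter((v) => !v.includes("-"));
  return mainline.length ? mainline.sort(cmp).pop() : null;
}
```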
Also, don't stack trace on 'meteor add' constraint failure.
Drop the "at-least" constraint type entirely. It was not user-accessible
and was only used in the form ">=0.0.0" to represent a constraint with
no version constraint at all. This type of constraint is now called
"any-reasonable".
The definition of "any-reasonable" is:
- Any version that is not a pre-release (has no dash)
- Or a pre-release version that is explicitly mentioned in a TOP-LEVEL
constraint passed to the constraint solver
For example, constraints from .meteor/packages, constraints from the
release, and constraints from the command line of "meteor add" end up
being top-level.
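The definition above amounts to a small predicate. A minimal sketch, with illustrative names (isReasonable, topLevelPrereleases -- a map from package to the set of prerelease versions mentioned in top-level constraints):

```javascript
// "any-reasonable": mainline versions always qualify; a prerelease
// qualifies only if that exact version appears in some top-level
// constraint handed to the constraint solver.
function isReasonable(pkg, version, topLevelPrereleases) {
  if (!version.includes("-")) return true; // not a prerelease
  const allowed = topLevelPrereleases[pkg]; // pkg -> Set of versions
  return !!allowed && allowed.has(version);
}
```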
Why only top-level-constrained pre-release versions, and not versions we
find explicitly desired by some other desired version while walking the
graph?
The constraint solver assumes that adding a constraint to the resolver
state can't make previously impossible choices now possible. If
pre-releases mentioned anywhere worked, then applying the constraints
"any reasonable" followed by "1.2.3-rc1" would result in "1.2.3-rc1"
ruled first impossible and then possible again. That's no good, so we
have to fix the meaning based on something at the start. (We could try
to apply our prerelease-avoidance tactics solely in the cost functions,
but then it becomes a much less strict rule.)
At the very least, this change should allow you to run meteor on a
preview branch like cordova-hcp without getting a conflict between the
prerelease package on the branch/release and the lack of an explicit
constraint in .meteor/packages on that package, because we are
reinterpreting the .meteor/packages constraint as meaning "anything
reasonable" and the in-the-release version counts as reasonable.
Provide good error messages when you pass invalid things to api.use and
api.imply.
Provide a better message when you pass invalid things to
api.versionsFrom.
Drop "notInitialized" hack from catalog: now that we load things in
order, it's not necessary. (This means life will break if you use
api.versionsFrom in a uniloaded package. So don't do that.)