(still outstanding: changes to package publication workflow)
A package marked debugOnly in the package source is not to be bundled in production.
Moreover, if a package/app depends on a debugOnly package, that entire tree should
not be bundled. (But we should take it into account when figuring out versions!)
Does the following:
- In the catalog, we have a function that takes in a set of versions and a set of original
constraints and traverses them recursively to build the subset of versions that we *should*
bundle, and the corresponding subset of versions that we shouldn't (because they are either
debugOnly themselves or pulled in only by debugOnly packages). (We do this in the catalog
because it is an addon onto the results of the constraint solver, tied deeply into our
representation of data.)
- In the packageLoader, we keep track of the packages & versions that we should bundle, and
also of the packages that we should exclude. We do this in the package-loader because,
essentially, that's the object that we use to keep the results of the constraint solver, and
we already propagate it to all functions that care about it. (Possibly we should rename it later.)
- In the compiler, when we figure out buildTimeDependencies, we ask if we need to bundle debug
builds. If not, we filter them out (see above). Also, when we actually build unibuilds together,
we don't touch the ones that the packageLoader tells us to exclude (which ensures that they don't
make it into the final product).
- In the project, we keep track of whether this project is building in debug mode. That's because
the project is where we keep the state of the current project that we are building, and if we are
ever in the state of building multiple things, then that's the code that we would need to touch
(see also that we make a similar assumption when solving constraints).
- Adds the option to keep the project debug-build-free and calls it in commands when appropriate.
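The catalog-side traversal described in the first bullet can be sketched roughly as follows. All names and data shapes here are illustrative assumptions, not the tool's actual API: `versions` maps package name to chosen version, `roots` are the packages named in the original constraints, `getDeps` returns a package's dependencies, and `isDebugOnly` says whether a package is marked debugOnly.

```javascript
// Hypothetical sketch, not the real catalog code: split resolved versions
// into a set to bundle and a set to exclude (packages that are debugOnly
// themselves, or reachable only through debugOnly packages).
function splitDebugOnly(versions, roots, getDeps, isDebugOnly) {
  const bundle = {};
  const visit = (name) => {
    // Stop at debugOnly packages: nothing reachable only through them
    // should be bundled in production.
    if (bundle[name] || isDebugOnly(name)) return;
    bundle[name] = versions[name];
    getDeps(name).forEach(visit);
  };
  roots.forEach(visit);

  // Everything the walk did not reach is debugOnly, or was pulled in
  // only by debugOnly packages; it still took part in version solving,
  // but it goes in the exclude set for bundling.
  const exclude = {};
  Object.keys(versions).forEach((name) => {
    if (!(name in bundle)) exclude[name] = versions[name];
  });
  return { bundle, exclude };
}
```

Note that a package reachable through both a normal and a debugOnly path still ends up in the bundle set, which matches the rule above: only packages pulled in *exclusively* by debugOnly packages are excluded.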
When we publish things to the test packaging server, we use the versionsFrom
argument, which is very difficult to set up so that it actually works. Most of the time
we don't really set it up and just agree that those tests should fail. As such, I
am going to mark most of those tests as checkout-only for now (we still usually check
that we can publish manually anyway as part of poking at the release in QA, and there is
not a lot of reason that I can think of why publishing from a release would be different from
publishing from checkout). So, marking these as checkout-only until we can get better
testing infrastructure for this in place.
Rewrote parts of the update server package data unit test to compare a pre-sync and a
post-sync catalog, rather than the output of the package client's attempts to contact the server.
This is because, in the new world, there is no accurate output; instead, the function
modifies the catalog in place. Additionally, removed the old function that read from
data.json, since it was not used anymore, and cleaned up some comments and return values in
package-client. We no longer claim to return the contents of data.json; instead we return
an object that signals whether we should reset the entire catalog, and/or whether our connection to the
server flat-out failed. I am not sure that this is the best abstraction for this piece of code.
(Why does package-client modify the catalog, but not reset it, for example? Since resetting
has consequences, in the ideal world the package-client would only have the logic to get data
from the server, and it would be up to the catalog to figure out how to insert it into SQLite,
I think. Regardless, now is not the time to do that refactoring.)
The test is a little odd in the following ways. First, comparing every record ever published is
already significantly harder than it used to be, and will only get harder from here. (However,
the test claims to check that no data has been erased, so we do need to check it.)
We check the rough existence of most records, on the theory that it is unlikely that
we only got a portion of a record in, rather than the entire thing. Second, it doesn't check the
actual contents on disk, because I wasn't sure about writing another interface to SQLite this
late in the game. There are some ways to get around this: we could be sneaky and submit a non-blank
syncToken in some way (faking a previous sync), so we only get the packages touched since (time X).
Usually, that might violate some internal consistency, but we only care about the contents at this
stage. We should also probably write some method on the catalogs to compare themselves, instead of
making and querying a copy. For now, though, I think that this is sufficiently expedient.
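The "no data has been erased" property boils down to a subset check between the two catalogs. A minimal sketch, assuming each catalog has been flattened to a map from record key to record (the real test queries a copied catalog rather than plain maps):

```javascript
// Illustrative only: a post-sync catalog passes if it still contains
// every record key the pre-sync catalog had. This checks existence,
// not full contents, matching the "rough existence" approach above.
function nothingErased(preSync, postSync) {
  return Object.keys(preSync).every((key) => key in postSync);
}
```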
Also, the test publishes 5 packages. That's a lot of packages, so I marked it as slow.
Many of these (mostly in top level commands in commands-packages.js) are
not super well thought out: they use a new "doOrDie" helper to run some
function in a capture and exit if there are any messages. We really
need to get a little more thoughtful about the big picture of error
handling (combining "build" errors, network errors, catalog errors,
etc). But this at least allows the addition of more buildmessage
assertions.
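The doOrDie pattern might look roughly like this. The `capture` here is a simplified stand-in for the tool's buildmessage capture, and all names are illustrative, not the actual helper's signature:

```javascript
// Simplified stand-in for a buildmessage-style capture: run f, collect
// any messages it reports, and return them.
function capture(f) {
  const messages = [];
  // The real capture routes errors raised during f() into the current
  // message set; here f just gets a reporting callback directly.
  f((msg) => messages.push(msg));
  return messages;
}

// Run a function in a capture; if anything was reported, print it all
// and exit non-zero instead of letting the tool crash.
function doOrDie(f) {
  const messages = capture(f);
  if (messages.length) {
    messages.forEach((m) => console.error(m));
    process.exit(1);
  }
}
```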
At the very least, this ensures that if you edit a package.js in a local
package while "meteor run" is running, that instead of crashing the tool
it properly shows the buildmessage and lets you fix the issue.
Also, in self-test, only set $METEOR_PACKAGE_SERVER_URL for the specific
runs that actually want the test server (using a tag), rather than
more or less always by accident.
Hopefully this isn't ignoring real error cases. The whole
previousSolution data-rewriting hack probably needs to be fixed
anyway. But this seems to work?