On the server, Meteor attempts to avoid bundling node_modules code by
replacing entry point modules with a stub that calls module.useNode() (see
packages/modules-runtime/server.js). This trick allows evaluating server
node_modules natively in Node.js, faithfully preserving all Node-specific
behaviors, such as module.id being an absolute file system path, the
__dirname and __filename variables, the ability to import binary .node
modules, and so on.
However, starting in Node.js 12.16.0 (Meteor 1.9.1+), modules evaluated
natively by Node are considered ECMAScript modules (ESM) if the nearest
package.json file has "type": "module" (or if the module itself has an
.mjs file extension).
This poses a problem for the module.useNode() trick, because ESM modules
cannot be imported synchronously using require (which is currently how
module.useNode() works).
To work around this new error, this commit checks package.json for "type":
"module" in ImportScanner#shouldUseNode to determine whether it's safe to
use the module.useNode() trick.
The good news is that ESM modules don't have access to nearly as many
Node.js-specific quirks: no module, require, or exports variables; no
__dirname, no __filename; no ability to import JSON or other non-ESM file
types (at least right now). So it seems somewhat less important for ESM
code (compared to CommonJS code) to bail out into native Node.js execution
using module.useNode(). In other words, bundling server code should not
affect its execution in nearly as many cases, if that code is ESM rather
than legacy CommonJS.
If this good news turns out to be overly optimistic, we can consider using
a different kind of bailout stub that's capable of importing ESM using
dynamic import(). For now, making sure we avoid bailing out for ESM code
like @babel/runtime/helpers/esm/* is the priority.
Commit 646fa4e3ee fixed #10547 by
restricting optimisticLookupPackageJson to package.json files with a
"name" property, which effectively skipped over intermediate package.json
files with additional properties.
However, in Node.js 12.16.0 (Meteor 1.9.1+), modules evaluated natively by
Node are considered ECMAScript modules if the nearest package.json file
has "type": "module" (or if the module itself has an .mjs file extension).
This poses a problem
for the module.useNode() trick (see packages/modules-runtime/server.js),
because ESM modules cannot be imported using require.
For example, recent versions of the @babel/runtime package have a
@babel/runtime/helpers/esm/package.json file for the ESM versions of its
helpers (which specifies "type": "module"), but that package.json file
does not have a "name" property, because it is not the root package.json
file representing the entire @babel/runtime package.
I considered making the "name" restriction configurable, but that would
have fragmented the caching of optimisticLookupPackageJson. Instead, I
made it return an array of all potentially relevant package.json objects,
which can be safely cached.
This means that the caller has to iterate over the array, but there is
only one call site for this function (in tools/isobuild/package-source.js)
right now, so that wasn't too much work.
We haven't always updated this minimum version when we've changed the
Node.js version bundled with Meteor, which is fine because most deployment
strategies (including Galaxy) use the right version of Node.js
automatically. With Meteor 1.9 and Node.js 12.14.0, however, it seems
important that we make absolutely sure new Meteor apps are not getting run
in production with an end-of-life'd version of Node.js (v8).
Although the Meteor jquery package is no longer a core package (and thus
is not tied to the Meteor release), it seems like a good idea to nudge folks
towards installing jquery from npm, instead of relying on the very old
version (1.12.1) residing in meteor/packages/non-core/jquery/jquery.js.
Closes #10289.
source-map 0.7.0+ has a much faster implementation, compiled from Rust to
WebAssembly. Because the WASM code must be loaded first, the
SourceMapConsumer constructor is now asynchronous (it returns a Promise),
and each consumer should be destroy()ed once it's no longer needed.
https://github.com/meteor/meteor/pull/10772#issuecomment-553517459
The assertion in tools/fs/optimistic.ts was failing if I passed a relative
path for --test-app-path, and passing the path as a second argument when
calling assert made it easier to tell what was going on, so I decided to
keep that change.
Now that files.rename uses Promise.prototype.await on Windows, it's
important to be sure it never gets called outside of a Fiber. Though we
don't run our full test suite on Windows, we can validate this expectation
by wrapping files.rename on all platforms. This commit should be reverted
once the validation is complete.
Falling back to a full recursive copy was MUCH more expensive than waiting
a short amount of time before retrying the rename.
This aligns with the way graceful-fs handles EPERM and EACCES errors on
Windows: https://www.npmjs.com/package/graceful-fs#improvements-over-fs-module
Using fs.writeFileSync in a serial style becomes especially costly when
we're writing a lot of files. In a recent profiling exercise I did on
Windows, nearly 80% of the time taken by Builder#_copyDirectory was spent
just closing the written files. By using the asynchronous fs.writeFile
function, we should be able to parallelize at least some of this work, and
await all the promises at the very end of copying the directory.
* Avoid modifying unibuild.watchSet in PackageSourceBatch._watchOutputFiles.
Should fix#10736 by preventing old hashes from remaining in
unibuild.watchSet, which was sometimes causing isUpToDate to fail
immediately upon restart.
* Regression test for issue #10736.
In PR #10720, we introduced the makeCheapPathFunction in an effort to
reduce the caching overhead for very frequently called (and already pretty
quick) operations like files.stat.
However, the default maximum LRU cache size of Math.pow(2, 16) can cause
quite a bit of cache eviction churn for large applications, which @veered
has identified as a potential source of build performance problems.
By setting the maximum cache size to Math.pow(2, 20) instead, I am no
longer seeing any files.stat calls in the profiling output for rebuilding
a large internal app, saving several seconds of rebuild time. The obvious
downside is that this cache may consume more memory over time, which
is why I didn't just set the max to Infinity, though that might be a
viable option if the total set of paths ever stat'd is small enough to fit
into the available memory.
In the future, I hope to find ways of managing LRU cache size that respond
to actual memory pressure (relative to available memory), rather than
pruning the cache after an arbitrary numeric threshold is reached.
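For reference, the eviction behavior being tuned is the standard LRU policy, which a Map (with its insertion-order iteration) can sketch in a few lines. This toy class is only an illustration of the tradeoff; the build tool uses its own LRU implementation with max = Math.pow(2, 20) entries.

```javascript
// Minimal LRU cache: a Map iterates in insertion order, so the first key
// is always the least recently used entry and gets evicted when the cache
// exceeds its maximum size.
class LRU {
  constructor(max) {
    this.max = max;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // re-insert to refresh recency
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      // Evict the least recently used entry (the oldest key).
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```

A too-small max causes exactly the churn described above: hot entries get evicted and recomputed on every rebuild.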