conflict: Applied collation parameters to Minimongo.Matcher and Minimongo.Sorter calls in both the changeStreams and oplog sections of the new structure

Michael Vogt
2026-03-18 14:39:24 -05:00
103 changed files with 13772 additions and 3920 deletions


@@ -13,7 +13,7 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 22.x
node-version: 24.x
- run: npm ci
- name: Run ESLint@8
run: npx eslint@8 "./npm-packages/meteor-installer/**/*.js"


@@ -8,7 +8,7 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 22.x
node-version: 24.x
- run: cd scripts/admin/check-legacy-syntax && npm ci
- name: Check syntax
run: cd scripts/admin/check-legacy-syntax && node check-syntax.js


@@ -32,7 +32,8 @@ jobs:
- Babel
- Blaze
- Coffeescript
- Library
- Full Skeleton
- Tailwind Skeleton
- Monorepo
- React
- R.Router
@@ -71,7 +72,7 @@ jobs:
run: npm install
- name: Install test deps
run: npm run install:modern
run: npm run install:e2e
- name: Prepare Meteor
run: ./meteor --get-ready
@@ -83,4 +84,4 @@ jobs:
retry_on: error
timeout_minutes: 15
retry_wait_seconds: 90
command: npm run test:modern -- -t="${{ matrix.category }}"
command: npm run test:e2e -- -t="${{ matrix.category }}"


@@ -13,7 +13,7 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 22.x
node-version: 24.x
- name: Build the Guide
run: npm ci && npm run build
- name: Deploy to Netlify for preview


@@ -21,7 +21,7 @@ jobs:
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v4
with:
node-version: 22.x
node-version: 24.x
cache: npm
- run: npm ci
- run: npm test

.github/workflows/test-packages.yml vendored Normal file

@@ -0,0 +1,60 @@
name: Test Packages
on:
pull_request:
jobs:
test-packages:
strategy:
fail-fast: false
matrix:
reactivity_order:
- 'changeStreams,polling'
- 'oplog,polling'
runs-on: ubuntu-22.04
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.reactivity_order }}
cancel-in-progress: true
timeout-minutes: 90
env:
CXX: g++-12
phantom: false
PUPPETEER_DOWNLOAD_PATH: /home/runner/.npm/chromium
TEST_PACKAGES_EXCLUDE: stylus
METEOR_MODERN: true
NODE_ENV: CI
METEOR_REACTIVITY_ORDER: ${{ matrix.reactivity_order }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: 22.17.0
- name: Restore caches
uses: actions/cache@v4
with:
path: |
~/.npm
.meteor
.babel-cache
dev_bundle
/home/runner/.npm/chromium
key: ${{ runner.os }}-node-22.17-${{ hashFiles('meteor', '**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-22.17-
- name: Install system dependencies
run: |
sudo apt-get update
sudo apt-get install -y g++-12 libnss3
- name: Install npm dependencies
run: npm install
- name: Run test-in-console suite
run: ./packages/test-in-console/run.sh
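When debugging a single matrix leg of the workflow above, the same environment can be reproduced locally. A sketch, with values copied from the workflow's `env` and `matrix` blocks:

```shell
# Reproduce one matrix leg of the Test Packages workflow locally.
# Values are copied from the workflow; pick either matrix entry.
export METEOR_MODERN=true
export METEOR_REACTIVITY_ORDER='changeStreams,polling'   # or 'oplog,polling'
export TEST_PACKAGES_EXCLUDE=stylus
export PUPPETEER_DOWNLOAD_PATH="$HOME/.npm/chromium"
# Then run the same entry point the workflow uses:
# ./packages/test-in-console/run.sh
```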

.github/workflows/unit-tests.yml vendored Normal file

@@ -0,0 +1,36 @@
name: Unit Tests
on:
pull_request:
paths:
- 'tools/**'
- 'scripts/**'
- 'package.json'
- '.github/workflows/unit-tests.yml'
push:
branches:
- devel
concurrency:
group: unit-tests-${{ github.ref }}
cancel-in-progress: true
jobs:
test:
name: Unit Tests
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v6
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 22.x
- name: Install unit test deps
run: npm run install:unit
- name: Run unit tests
run: npm run test:unit


@@ -47,7 +47,7 @@ jobs:
- name: Setup Node.js
uses: actions/setup-node@v2
with:
node-version: 22.x
node-version: 24.x
- name: Cache dependencies
id: meteor-cache


@@ -1,34 +0,0 @@
language: node_js
os: linux
dist: jammy
sudo: required
services: xvfb
node_js:
- "22.17.0"
cache:
directories:
- ".meteor"
- ".babel-cache"
script:
- travis_retry ./packages/test-in-console/run.sh
env:
global:
- CXX=g++-12
- phantom=false
- PUPPETEER_DOWNLOAD_PATH=~/.npm/chromium
- TEST_PACKAGES_EXCLUDE=stylus
- METEOR_MODERN=true
addons:
apt:
sources:
- ubuntu-toolchain-r-test
packages:
- g++-12
- libnss3
before_install:
- cat /etc/apt/sources.list
- python3 --version
- echo "deb http://archive.ubuntu.com/ubuntu jammy main universe" | sudo tee -a /etc/apt/sources.list
- sudo apt-get update
- sudo apt-get install -y libnss3


@@ -9,7 +9,8 @@ Full-stack JavaScript platform for modern web and mobile applications.
./meteor create my-app # Create app
./meteor self-test # CLI tests
./meteor test-packages ./packages/<name> # Package tests
npm run test:modern # E2E tests (Jest + Playwright)
npm run test:unit # Unit tests (Jest)
npm run test:e2e # E2E tests (Jest + Playwright)
```
## Structure


@@ -115,79 +115,98 @@ For the rest, try looking nearby for a `README.md`. For example, [`isobuild`](t
## Tests
### Test against the local meteor copy
When running any tests, be sure to run them against the checked-out copy of Meteor instead of
the globally-installed version. This means ensuring that the command is `path-to-meteor-checkout/meteor` and not just `meteor`.
This is important so that tests are run against your local development version and not the stable (installed) Meteor release.
### Running tests on Meteor core
When you are working with code in the core Meteor packages, you will want to make sure you run the
full test-suite (including the tests you added) to ensure you haven't broken anything in Meteor. The
`test-packages` command will do just that for you:
    ./meteor test-packages
Exactly in the same way that [`test-packages` works in standalone Meteor apps](https://guide.meteor.com/writing-atmosphere-packages.html#testing), the `test-packages` command will start up a Meteor app with [TinyTest](./packages/tinytest/README.md). To view the results, just connect to `http://localhost:3000`.
If you want to see results in the console you can use:
    PUPPETEER_DOWNLOAD_PATH=~/.npm/chromium ./packages/test-in-console/run.sh
> [PUPPETEER_DOWNLOAD_PATH](https://github.com/dfernandez79/puppeteer/blob/main/README.md#q-chromium-gets-downloaded-on-every-npm-ci-run-how-can-i-cache-the-download) is optional, but it is useful to skip downloading Chromium on every run.
> We run our tests on Travis like above.
#### Running specific tests
Specific package tests can be run by passing a `<package name>` or `<package path>` to the `test-packages` command. For example, to run the `mongo` tests:
    ./meteor test-packages mongo
For more fine-grained control, if you're interested in running only the specific tests that relate to the functionality you're working on, you can filter individual tests using the `TINYTEST_FILTER` environment variable (which supports regexes). For example, to run only the package tests that verify `new Mongo.Collection` behavior, try:
    TINYTEST_FILTER="collection - call new Mongo.Collection" ./meteor test-packages
You can also provide the same filters for `./packages/test-in-console/run.sh` explained above.
### Running Meteor Tool self-tests
While TinyTest and the `test-packages` command can be used to test internal Meteor packages, they cannot be used to test the Meteor Tool itself. The Meteor Tool is a node app that uses a home-grown "self test" system.
#### Listing available tests
To see a list of tests included in the self-test system, use the `--list` option:
    ./meteor self-test --list
#### Running specific tests
The self-test command supports a regular-expression syntax to specify or search for specific tests. For example, to search for tests starting with `a` or `b`, run:
    ./meteor self-test "^[a-b]" --list
Simply remove the `--list` flag to actually run the matching tests.
#### Excluding specific tests
In a similar way to specifying which tests TO run, you can specify which tests should NOT run. Again using regular expressions, this command will NOT list any tests which start with `a` or `b`:
    ./meteor self-test --exclude "^[a-b]" --list
Simply remove the `--list` flag to actually run the matching tests.
#### Avoiding retries
On CI we retry tests to avoid false failures, but retrying every failing test during development wastes time. To disable retries, use:
    ./meteor self-test --retries 0
#### More reading
For even more details on how to run Meteor Tool "self tests", please refer to the [Testing section of the Meteor Tool README](https://github.com/meteor/meteor/blob/master/tools/README.md#testing).

When running tests that use `./meteor`, be sure to run them against the checked-out copy of Meteor instead of the globally-installed version. This ensures tests run against your local development version.
The repository has four test layers, each covering a different scope:
| Command | Layer | Scope |
|---------|-------|-------|
| `npm run test:unit` | **Unit** (Jest) | Pure logic in `tools/`, `scripts/`, and helpers: fast, no Meteor runtime needed |
| `npm run test:e2e` | **E2E** (Jest + Playwright) | Bundler integration and skeleton apps: creates real Meteor projects, launches a browser |
| `./meteor self-test` | **Self-test** (custom) | Meteor CLI tool itself, spawns sandboxed Meteor processes to verify commands end-to-end |
| `./meteor test-packages` | **Package** (TinyTest) | Atmosphere packages in `packages/`, runs inside a Meteor app with the full reactive runtime |
### Unit tests (Jest)
Unit tests cover pure helpers, scripts, and tool logic that do not require the Meteor runtime. They use [Jest](https://jestjs.io/) configured in `tools/unit-tests/`, targeting `tools/**/*.test.js` and `scripts/**/*.test.js`.
```sh
# Install dependencies (first time)
npm run install:unit
# Run all unit tests
npm run test:unit
# Run a specific test file
npm run test:unit -- tools/path/to/file.test.js
# Run tests matching a name pattern
npm run test:unit -- -t "my test name"
```
Place test files next to the module they test using the `*.test.js` naming convention. Jest will pick them up automatically.
### E2E tests (Jest + Playwright)
End-to-end tests in `tools/modern-tests/` validate that Meteor skeletons and bundler integrations work correctly. They create real Meteor apps, start dev servers, and assert behavior in a headless Chromium browser.
```sh
# Install dependencies (first time)
npm run install:e2e
# Run all E2E tests
npm run test:e2e
# Run a specific suite
npm run test:e2e -- -t="React"
```
Each test has a corresponding app fixture in `tools/modern-tests/apps/`. See that directory for examples when adding new E2E tests.
### Self-tests (Meteor tool)
The Meteor CLI has its own "self-test" framework that spawns sandboxed Meteor processes. It tests commands like `create`, `build`, `deploy`, and `publish`.
```sh
# List all self-tests
./meteor self-test --list
# Run all self-tests
./meteor self-test
# Run tests matching a regex
./meteor self-test "^[a-b]"
# Exclude tests matching a regex
./meteor self-test --exclude "^[a-b]"
# Skip retries during development
./meteor self-test --retries 0
```
### Package tests (TinyTest)
When working with core Atmosphere packages, use `test-packages` to run their tests via [TinyTest](./packages/tinytest/README.md). This starts a Meteor app; view the results at `http://localhost:3000`.
```sh
# Test all packages
./meteor test-packages
# Test a specific package
./meteor test-packages mongo
# Filter by test name (supports regex), using --filter or -f
./meteor test-packages --filter "collection - call new Mongo.Collection"
# Equivalent using the environment variable
TINYTEST_FILTER="collection - call new Mongo.Collection" ./meteor test-packages
```
For headless console output:
```sh
PUPPETEER_DOWNLOAD_PATH=~/.npm/chromium ./packages/test-in-console/run.sh
```
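The unit-test layer expects co-located `*.test.js` files picked up by Jest. A minimal sketch of such a file (the helper and file name are hypothetical, not actual Meteor internals):

```javascript
// tools/utils/parse-node-version.test.js — hypothetical helper plus its
// co-located Jest test. In the real suite the helper would live in its own
// module and be require()'d here.
function parseNodeVersion(v) {
  const [major, minor, patch] = v.replace(/^v/, '').split('.').map(Number);
  return { major, minor, patch };
}

// `test` and `expect` are Jest globals when run via `npm run test:unit`;
// the guard lets the file also load outside Jest without errors.
if (typeof test === 'function' && typeof expect === 'function') {
  test('parses a dotted version string', () => {
    expect(parseNodeVersion('v24.14.0')).toEqual({ major: 24, minor: 14, patch: 0 });
  });
}
```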
### Continuous integration


@@ -241,8 +241,8 @@ Here's example of defining a rule and adding it into the `DDPRateLimiter`:
```js
// Define a rule that matches login attempts by non-admin users.
const loginRule = {
userId(userId) {
const user = Meteor.users.findOne(userId);
async userId(userId) {
const user = await Meteor.users.findOneAsync(userId);
return user && user.type !== 'admin';
},
@@ -266,8 +266,8 @@ default English error message.
Here is an example with a custom error message:
```js
const setupGoogleAuthenticatorRule = {
userId(userId) {
const user = Meteor.users.findOne(userId);
async userId(userId) {
const user = await Meteor.users.findOneAsync(userId);
return user;
},
type: 'method',

meteor

@@ -1,6 +1,6 @@
#!/usr/bin/env bash
BUNDLE_VERSION=22.22.0.3
BUNDLE_VERSION=24.14.0.4
# OS Check. Put here because here is where we download the precompiled
# bundles that are arch specific.


@@ -5,10 +5,10 @@ set -u
UNAME=$(uname)
ARCH=$(uname -m)
NODE_VERSION=14.21.3
NODE_VERSION=24.14.0
MONGO_VERSION_64BIT=6.0.3
MONGO_VERSION_32BIT=3.2.22
NPM_VERSION=6.14.18
NPM_VERSION=11.10.1
if [ "$UNAME" == "Linux" ] ; then
if [ "$ARCH" != "i686" -a "$ARCH" != "x86_64" ] ; then


@@ -10,7 +10,7 @@ var packageJson = {
dependencies: {
// Explicit dependency because we are replacing it with a bundled version
// and we want to make sure there are no dependencies on a higher version
npm: "10.9.4",
npm: "11.10.1",
pacote: "https://github.com/meteor/pacote/tarball/a81b0324686e85d22c7688c47629d4009000e8b8",
"node-gyp": "9.4.0",
"@mapbox/node-pre-gyp": "1.0.11",


@@ -1,6 +1,6 @@
{
"name": "meteor",
"version": "3.4.0",
"version": "3.5.0-beta",
"description": "Install Meteor",
"main": "install.js",
"scripts": {

package-lock.json generated

File diff suppressed because it is too large


@@ -12,32 +12,34 @@
},
"homepage": "https://www.meteor.com/",
"devDependencies": {
"@babel/core": "^7.21.3",
"@babel/eslint-parser": "^7.21.3",
"@babel/eslint-plugin": "^7.19.1",
"@babel/preset-react": "^7.18.6",
"@babel/core": "^7.29.0",
"@babel/eslint-parser": "^7.28.6",
"@babel/eslint-plugin": "^7.27.1",
"@babel/preset-react": "^7.28.5",
"@types/lodash.isempty": "^4.4.9",
"@types/node": "^18.16.18",
"@types/node": "^24.10.13",
"@types/sockjs": "^0.3.36",
"@types/sockjs-client": "^1.5.4",
"@typescript-eslint/eslint-plugin": "^5.56.0",
"@typescript-eslint/parser": "^5.56.0",
"eslint": "^8.36.0",
"eslint-config-prettier": "^8.8.0",
"eslint-config-vazco": "^7.1.0",
"@typescript-eslint/eslint-plugin": "^6.21.0",
"@typescript-eslint/parser": "^6.21.0",
"eslint": "^8.57.1",
"eslint-config-prettier": "^9.1.2",
"eslint-config-vazco": "^7.4.0",
"eslint-plugin-eslint-comments": "^3.2.0",
"eslint-plugin-import": "^2.27.5",
"eslint-plugin-jsx-a11y": "^6.7.1",
"eslint-plugin-prettier": "^4.2.1",
"eslint-plugin-react": "^7.32.2",
"eslint-plugin-react-hooks": "^4.6.0",
"prettier": "^2.8.8",
"typescript": "^5.4.5"
"eslint-plugin-import": "^2.32.0",
"eslint-plugin-jsx-a11y": "^6.10.2",
"eslint-plugin-prettier": "^5.5.5",
"eslint-plugin-react": "^7.37.5",
"eslint-plugin-react-hooks": "^4.6.2",
"prettier": "^3.8.1",
"typescript": "^5.9.3"
},
"scripts": {
"install:modern": "cd tools/modern-tests && npm install && npx playwright install --with-deps chromium chromium-headless-shell",
"install:unit": "cd tools/unit-tests && npm install",
"test:unit": "cd tools/unit-tests && npm test",
"test:idle-bot": "node --test .github/scripts/__tests__/inactive-issues.test.js",
"test:modern": "cd tools/modern-tests && npm test -- "
"install:e2e": "cd tools/modern-tests && npm install && npx playwright install --with-deps chromium chromium-headless-shell",
"test:e2e": "cd tools/modern-tests && npm test -- "
},
"jshintConfig": {
"esversion": 11


@@ -101,7 +101,10 @@ export namespace Accounts {
collection?: string | undefined;
loginTokenExpirationHours?: number | undefined;
tokenSequenceLength?: number | undefined;
clientStorage?: 'session' | 'local';
// Storage strategy for client tokens: 'local' (persist), 'session' (per-tab), or 'none' (in-memory only)
clientStorage?: 'session' | 'local' | 'none';
// Enable hybrid HttpOnly cookie + short-lived token flow
useHttpOnlyCookies?: boolean | undefined;
}): void;
function onLogin(


@@ -9,7 +9,8 @@ import {AccountsCommon} from "./accounts_common.js";
* @param {Object} options an object with fields:
* @param {Object} options.connection Optional DDP connection to reuse.
* @param {String} options.ddpUrl Optional URL for creating a new DDP connection.
* @param {'session' | 'local'} options.clientStorage Optional Define what kind of storage you want for credentials on the client. Default is 'local' to use `localStorage`. Set to 'session' to use session storage.
* @param {'session' | 'local' | 'none'} options.clientStorage Optional. Defines what kind of storage you want for credentials on the client. Default is 'local' to use `localStorage`. Set to 'session' to use session storage. Use 'none' to avoid persisting tokens.
* @param {Boolean} options.useHttpOnlyCookies Optional Enable HttpOnly cookie flow for auth resume. When enabled, the client will try to refresh a login token from a server HttpOnly cookie during startup, and will sync the cookie after logins/logouts.
*/
export class AccountsClient extends AccountsCommon {
constructor(options) {
@@ -29,9 +30,15 @@ export class AccountsClient extends AccountsCommon {
this.initStorageLocation();
// Read HttpOnly cookie setting from options or public settings
this._useHttpOnlyCookies = !!(options?.useHttpOnlyCookies || Meteor.settings?.public?.packages?.accounts?.useHttpOnlyCookies);
// Defined in localstorage_token.js.
this._initLocalStorage();
// Try to resume via HttpOnly cookie if enabled
this._initHttpOnlyCookieLogin();
// This is for .registerClientLoginFunction & .callLoginFunction.
this._loginFuncs = {};
@@ -42,13 +49,29 @@ export class AccountsClient extends AccountsCommon {
initStorageLocation(options) {
// Determine whether to use local or session storage to store credentials and anything else.
this.storageLocation = (options?.clientStorage === 'session' || Meteor.settings?.public?.packages?.accounts?.clientStorage === 'session') ? window.sessionStorage : Meteor._localStorage;
const desired = options?.clientStorage || Meteor.settings?.public?.packages?.accounts?.clientStorage;
if (desired === 'session') {
this.storageLocation = window.sessionStorage;
} else if (desired === 'none') {
// In-memory, non-persistent storage shim
const mem = new Map();
this.storageLocation = {
getItem: (k) => mem.get(k) || null,
setItem: (k, v) => { mem.set(k, String(v)); },
removeItem: (k) => { mem.delete(k); },
};
} else {
this.storageLocation = Meteor._localStorage;
}
}
config(options) {
super.config(options);
this.initStorageLocation(options);
if (Object.prototype.hasOwnProperty.call(options || {}, 'useHttpOnlyCookies')) {
this._useHttpOnlyCookies = !!options.useHttpOnlyCookies;
}
}
///
@@ -142,11 +165,11 @@ export class AccountsClient extends AccountsCommon {
this._loggingOut.set(false);
this._loginCallbacksCalled = false;
this.makeClientLoggedOut();
callback && callback();
callback?.();
})
.catch((e) => {
this._loggingOut.set(false);
callback && callback(e);
callback?.(e);
});
}
@@ -166,11 +189,11 @@ export class AccountsClient extends AccountsCommon {
this._loggingOut.set(false);
this._loginCallbacksCalled = false;
this.makeClientLoggedOut();
callback && callback();
callback?.();
})
.catch((e) => {
this._loggingOut.set(false);
callback && callback(e);
callback?.(e);
});
}
@@ -215,7 +238,7 @@ export class AccountsClient extends AccountsCommon {
'removeOtherTokens',
[],
{ wait: true },
err => callback && callback(err)
err => callback?.(err)
);
}
@@ -441,11 +464,19 @@ export class AccountsClient extends AccountsCommon {
this._unstoreLoginToken();
this.connection.setUserId(null);
this._reconnectStopper && this._reconnectStopper.stop();
// Clear HttpOnly cookie if enabled
if (this._useHttpOnlyCookies) {
this._clearHttpOnlyCookie();
}
}
makeClientLoggedIn(userId, token, tokenExpires) {
this._storeLoginToken(userId, token, tokenExpires);
this.connection.setUserId(userId);
// Sync HttpOnly cookie if enabled
if (this._useHttpOnlyCookies) {
this._setHttpOnlyCookie(token, tokenExpires);
}
}
///
@@ -532,6 +563,29 @@ export class AccountsClient extends AccountsCommon {
});
};
// Attempt startup login using an HttpOnly cookie by requesting a
// short-lived resume token from the server.
async loginWithCookie() {
try {
const res = await fetch('/_accounts/cookie/refresh', {
method: 'GET',
credentials: 'include',
headers: { 'Accept': 'application/json' },
});
if (!res.ok) return;
const body = await res.json();
if (body && body.token) {
this.loginWithToken(body.token, (err) => {
if (err) {
this.makeClientLoggedOut();
}
});
}
} catch (_e) {
// ignore
}
}
// Semi-internal API. Call this function to re-enable auto login after
// if it was disabled at startup.
_enableAutoLogin() {
@@ -573,6 +627,26 @@ export class AccountsClient extends AccountsCommon {
this._lastLoginTokenWhenPolled = null;
};
async _setHttpOnlyCookie(token, tokenExpires) {
try {
await fetch('/_accounts/cookie/set', {
method: 'POST',
credentials: 'include',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ token, tokenExpires }),
});
} catch (_e) {}
}
async _clearHttpOnlyCookie() {
try {
await fetch('/_accounts/cookie/clear', {
method: 'POST',
credentials: 'include'
});
} catch (_e) {}
}
// This is private, but it is exported for now because it is used by a
// test in accounts-password.
_storedLoginToken() {
@@ -667,6 +741,15 @@ export class AccountsClient extends AccountsCommon {
}, 3000);
};
_initHttpOnlyCookieLogin() {
if (!this._useHttpOnlyCookies) return;
// Only attempt cookie resume if we didn't find a local token
const hasLocalToken = !!this._storedLoginToken();
if (!hasLocalToken) {
this.loginWithCookie();
}
}
_pollStoredLoginToken() {
if (! this._autoLoginEnabled) {
return;
@@ -710,6 +793,22 @@ export class AccountsClient extends AccountsCommon {
attemptToMatchHash(this, this.savedHash, defaultSuccessHandler);
};
/**
* @summary Shared implementation for registering account link callbacks.
* @param {String} type The callback type (e.g. 'reset-password', 'verify-email', 'enroll-account').
* @param {Function} callback The function to call when the link is clicked.
* @locus Client
*/
_registerLinkCallback(type, callback) {
if (this._accountsCallbacks[type]) {
Meteor._debug(
`Accounts callback for "${type}" was registered more than once. ` +
"Only one callback added will be executed."
);
}
this._accountsCallbacks[type] = callback;
};
/**
* @summary Register a function to call when a reset password link is clicked
* in an email sent by
@@ -728,12 +827,7 @@ export class AccountsClient extends AccountsCommon {
* @locus Client
*/
onResetPasswordLink(callback) {
if (this._accountsCallbacks["reset-password"]) {
Meteor._debug("Accounts.onResetPasswordLink was called more than once. " +
"Only one callback added will be executed.");
}
this._accountsCallbacks["reset-password"] = callback;
this._registerLinkCallback("reset-password", callback);
};
/**
@@ -755,12 +849,7 @@ export class AccountsClient extends AccountsCommon {
* @locus Client
*/
onEmailVerificationLink(callback) {
if (this._accountsCallbacks["verify-email"]) {
Meteor._debug("Accounts.onEmailVerificationLink was called more than once. " +
"Only one callback added will be executed.");
}
this._accountsCallbacks["verify-email"] = callback;
this._registerLinkCallback("verify-email", callback);
};
/**
@@ -782,12 +871,7 @@ export class AccountsClient extends AccountsCommon {
* @locus Client
*/
onEnrollmentLink(callback) {
if (this._accountsCallbacks["enroll-account"]) {
Meteor._debug("Accounts.onEnrollmentLink was called more than once. " +
"Only one callback added will be executed.");
}
this._accountsCallbacks["enroll-account"] = callback;
this._registerLinkCallback("enroll-account", callback);
};
}
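The `clientStorage: 'none'` branch in `initStorageLocation` installs a Map-backed object implementing just the Web Storage subset the accounts client uses. As a standalone sketch of that shim's behavior (mirroring the object built in the diff above):

```javascript
// Standalone version of the in-memory storage shim added for
// clientStorage: 'none'. Like Storage, absent keys read as null and
// values are coerced to strings on write.
function makeMemoryStorage() {
  const mem = new Map();
  return {
    getItem: (k) => mem.get(k) || null,
    setItem: (k, v) => { mem.set(k, String(v)); },
    removeItem: (k) => { mem.delete(k); },
  };
}

const storage = makeMemoryStorage();
storage.setItem('Meteor.loginToken', 'abc123');
console.log(storage.getItem('Meteor.loginToken')); // 'abc123'
storage.removeItem('Meteor.loginToken');
console.log(storage.getItem('Meteor.loginToken')); // null
```

Unlike `localStorage`, nothing here survives a page reload, which is the point of the 'none' option.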


@@ -24,6 +24,7 @@ const VALID_CONFIG_KEYS = [
'loginTokenExpirationHours',
'tokenSequenceLength',
'clientStorage',
'useHttpOnlyCookies',
'ddpUrl',
'connection',
];
@@ -260,7 +261,7 @@ export class AccountsCommon {
// We need to validate the oauthSecretKey option at the time
// Accounts.config is called. We also deliberately don't store the
// oauthSecretKey in Accounts._options.
if (Object.prototype.hasOwnProperty.call(options, 'oauthSecretKey')) {
if (Object.hasOwn(options, 'oauthSecretKey')) {
if (Meteor.isClient) {
throw new Error(
'The oauthSecretKey option may only be specified on the server'


@@ -0,0 +1,64 @@
// Client-side tests for HttpOnly cookie auth flow
// Ensures token is not accessible via JS (HttpOnly) and resume works.
if (Meteor.isClient) {
Tinytest.addAsync('accounts cookie - login with password sets HttpOnly cookie and not readable via document.cookie', (test, done) => {
// Enable cookie auth
Accounts.config({ useHttpOnlyCookies: true });
Accounts._isolateLoginTokenForTest();
const username = `u_${Random.id()}`;
const password = `p_${Random.id()}`;
Accounts.createUser({ username, password }, (err) => {
test.isUndefined(err, 'error creating user');
// After login, a token is stored in storageLocation, but cookie should be HttpOnly.
// We cannot directly read HttpOnly cookie, so assert it does NOT appear in document.cookie substring
const token = Accounts._storedLoginToken();
test.isTrue(!!token, 'token stored locally');
const cookieStr = document.cookie || '';
test.isFalse(/meteor_login_token=/.test(cookieStr), 'HttpOnly cookie not exposed to JS');
Meteor.logout(() => done());
});
});
Tinytest.addAsync('accounts cookie - clearing cookie on logout makes refresh return 204', async (test, done) => {
Accounts.config({ useHttpOnlyCookies: true });
Accounts._isolateLoginTokenForTest();
const username = `u3_${Random.id()}`;
const password = `p3_${Random.id()}`;
await new Promise((resolve, reject) => Accounts.createUser({ username, password }, (e) => e ? reject(e) : resolve()));
test.isTrue(!!Meteor.userId());
// Poll refresh until cookie is set (200), because _setHttpOnlyCookie is async and may not
// have completed yet — a stale cookie from a previous test could also cause 401. will be fixed after https://github.com/meteor/meteor/pull/14069
let r;
const loginStart = Date.now();
while (true) {
r = await fetch('/_accounts/cookie/refresh', { credentials: 'include' });
if (r.status === 200 || Date.now() - loginStart > 4000) break;
await new Promise(res => setTimeout(res, 100));
}
test.equal(r.status, 200, 'cookie set after login');
await new Promise(res => Meteor.logout(() => res()));
test.isFalse(!!Meteor.userId());
// Poll refresh until 204 or timeout because _clearHttpOnlyCookie is async
const start = Date.now();
const check = async () => {
const resp = await fetch('/_accounts/cookie/refresh', { credentials: 'include' });
if (resp.status === 204) {
test.equal(resp.status, 204, 'cookie cleared after logout');
done();
} else if (Date.now() - start > 4000) {
test.fail(`Expected 204 after logout, got ${resp.status}`);
done();
} else {
setTimeout(check, 100);
}
};
check();
});
}
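The tests above exercise the refresh endpoint's three outcomes: 200 with a token, 204 when no cookie is present, 401 when the cookie is invalid or expired. A hedged standalone sketch of that client-side handshake, with `fetch` injected so it can run outside a browser (the path and status codes come from this commit; the function itself is illustrative):

```javascript
// Illustrative resume handshake against /_accounts/cookie/refresh.
async function tryCookieResume(fetchImpl) {
  const res = await fetchImpl('/_accounts/cookie/refresh', {
    method: 'GET',
    credentials: 'include',
    headers: { Accept: 'application/json' },
  });
  if (res.status === 204) return null;  // no cookie present
  if (!res.ok) return null;             // e.g. 401 invalid or expired cookie
  const body = await res.json();
  // A short-lived token to hand to Accounts.loginWithToken.
  return (body && body.token) || null;
}

// Stubbed responses standing in for the server:
const ok = async () => ({ ok: true, status: 200, json: async () => ({ token: 'abc' }) });
const noCookie = async () => ({ ok: true, status: 204, json: async () => undefined });
tryCookieResume(ok).then((t) => console.log(t));        // 'abc'
tryCookieResume(noCookie).then((t) => console.log(t));  // null
```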


@@ -0,0 +1,197 @@
if (Meteor.isServer) {
const COOKIE_NAME = 'meteor_login_token';
const REFRESH_PATH = '/_accounts/cookie/refresh';
const SET_PATH = '/_accounts/cookie/set';
const CLEAR_PATH = '/_accounts/cookie/clear';
// Utility: simple HTTP request using Node's http/https depending on absoluteUrl
const request = async (method, path, { headers, body } = {}) => {
const url = Meteor.absoluteUrl(path.replace(/^\//,''));
const { URL } = Npm.require('url');
const u = new URL(url);
const opts = {
protocol: u.protocol,
hostname: u.hostname,
port: u.port,
path: u.pathname + (u.search||''),
method,
headers: headers || {}
};
const httpLib = Npm.require(u.protocol === 'https:' ? 'https' : 'http');
return new Promise((resolve, reject) => {
const req = httpLib.request(opts, (res) => {
let data = '';
res.setEncoding('utf8');
res.on('data', chunk => data += chunk);
res.on('end', () => {
let json;
try { json = data ? JSON.parse(data) : undefined; } catch (_e) {}
resolve({ status: res.statusCode, headers: res.headers, body: data, json });
});
});
req.on('error', reject);
if (body) req.write(body);
req.end();
});
};
Tinytest.addAsync('accounts cookie - set endpoint sets HttpOnly cookie with flags', async (test, done) => {
// Create a user & token
const userId = await Accounts.insertUserDoc({}, { username: Random.id() });
const stamped = Accounts._generateStampedLoginToken();
await Accounts._insertLoginToken(userId, stamped);
const res = await request('POST', SET_PATH, {
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ token: stamped.token })
});
test.equal(res.status, 200);
const setCookie = res.headers['set-cookie'] && res.headers['set-cookie'][0];
test.isTrue(/meteor_login_token=/.test(setCookie), 'cookie name');
test.isTrue(/HttpOnly/i.test(setCookie), 'HttpOnly flag');
test.isTrue(/SameSite=Lax/i.test(setCookie), 'SameSite flag');
// Secure might be absent in test (http) environment, accept either
done();
});
Tinytest.addAsync('accounts cookie - refresh returns token & expiry when cookie valid', async (test, done) => {
const userId = await Accounts.insertUserDoc({}, { username: Random.id() });
const stamped = Accounts._generateStampedLoginToken();
await Accounts._insertLoginToken(userId, stamped);
const setRes = await request('POST', SET_PATH, {
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ token: stamped.token })
});
const cookieHeader = setRes.headers['set-cookie'][0].split(';')[0];
const refreshRes = await request('GET', REFRESH_PATH, { headers: { 'Cookie': cookieHeader } });
test.equal(refreshRes.status, 200);
test.equal(refreshRes.json.token, stamped.token);
test.isTrue(!!refreshRes.json.tokenExpires, 'tokenExpires present');
done();
});
Tinytest.addAsync('accounts cookie - refresh 204 when no cookie', async (test, done) => {
const res = await request('GET', REFRESH_PATH);
test.equal(res.status, 204);
done();
});
Tinytest.addAsync('accounts cookie - refresh 401 for invalid token', async (test, done) => {
const fakeCookie = 'meteor_login_token=faketoken123';
const res = await request('GET', REFRESH_PATH, { headers: { 'Cookie': fakeCookie } });
test.equal(res.status, 401);
done();
});
Tinytest.addAsync('accounts cookie - clear removes cookie (expires in past)', async (test, done) => {
const userId = await Accounts.insertUserDoc({}, { username: Random.id() });
const stamped = Accounts._generateStampedLoginToken();
await Accounts._insertLoginToken(userId, stamped);
const setRes = await request('POST', SET_PATH, { headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ token: stamped.token }) });
const cookieToSend = setRes.headers['set-cookie'][0].split(';')[0];
const clearRes = await request('POST', CLEAR_PATH, { headers: { 'Cookie': cookieToSend } });
test.equal(clearRes.status, 200);
const cleared = clearRes.headers['set-cookie'][0];
test.isTrue(/Expires=Thu, 01 Jan 1970|Expires=Wed, 31 Dec 1969/.test(cleared) || /1970 GMT/.test(cleared), 'expired date');
done();
});
Tinytest.addAsync('accounts cookie - refresh 204 body empty & no Set-Cookie', async (test, done) => {
const res = await request('GET', REFRESH_PATH);
test.equal(res.status, 204);
test.equal(res.body, '');
test.isUndefined(res.headers['set-cookie']);
done();
});
Tinytest.addAsync('accounts cookie - expired token yields 401 expired', async (test, done) => {
const userId = await Accounts.insertUserDoc({}, { username: Random.id() });
const stamped = Accounts._generateStampedLoginToken();
// Force past expiry by moving earlier than the configured lifetime
const lifetimeMs = Accounts._getTokenLifetimeMs ? Accounts._getTokenLifetimeMs() : (90 * 24 * 60 * 60 * 1000);
stamped.when = new Date(Date.now() - lifetimeMs - 1000);
await Accounts._insertLoginToken(userId, stamped);
const setRes = await request('POST', SET_PATH, { headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ token: stamped.token }) });
const cookieHeader = setRes.headers['set-cookie'][0].split(';')[0];
const refreshRes = await request('GET', REFRESH_PATH, { headers: { 'Cookie': cookieHeader } });
test.equal(refreshRes.status, 401);
test.equal(refreshRes.json && refreshRes.json.error, 'expired');
done();
});
Tinytest.addAsync('accounts cookie - revoked token yields 401 invalid_cookie', async (test, done) => {
const userId = await Accounts.insertUserDoc({}, { username: Random.id() });
const stamped = Accounts._generateStampedLoginToken();
await Accounts._insertLoginToken(userId, stamped);
const setRes = await request('POST', SET_PATH, { headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ token: stamped.token }) });
  // Remove all login tokens, simulating revocation
await Meteor.users.updateAsync(userId, { $set: { 'services.resume.loginTokens': [] } });
const cookieHeader = setRes.headers['set-cookie'][0].split(';')[0];
const refreshRes = await request('GET', REFRESH_PATH, { headers: { 'Cookie': cookieHeader } });
test.equal(refreshRes.status, 401);
test.equal(refreshRes.json && refreshRes.json.error, 'invalid_cookie');
done();
});
Tinytest.addAsync('accounts cookie - clear idempotent (second clear still expired)', async (test, done) => {
const userId = await Accounts.insertUserDoc({}, { username: Random.id() });
const stamped = Accounts._generateStampedLoginToken();
await Accounts._insertLoginToken(userId, stamped);
const setRes = await request('POST', SET_PATH, { headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ token: stamped.token }) });
const cookie = setRes.headers['set-cookie'][0].split(';')[0];
const first = await request('POST', CLEAR_PATH, { headers: { 'Cookie': cookie } });
const second = await request('POST', CLEAR_PATH, { headers: { 'Cookie': cookie } });
test.equal(first.status, 200);
test.equal(second.status, 200);
const hdr = second.headers['set-cookie'][0];
test.isTrue(/Expires=Thu, 01 Jan 1970|Expires=Wed, 31 Dec 1969/.test(hdr) || /1970 GMT/.test(hdr));
done();
});
Tinytest.addAsync('accounts cookie - cookie path and no domain leakage', async (test, done) => {
const userId = await Accounts.insertUserDoc({}, { username: Random.id() });
const stamped = Accounts._generateStampedLoginToken();
await Accounts._insertLoginToken(userId, stamped);
const res = await request('POST', SET_PATH, { headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ token: stamped.token }) });
const setCookie = res.headers['set-cookie'][0];
test.isTrue(/Path=\//.test(setCookie), 'Path=/ present');
test.isFalse(/Domain=/.test(setCookie), 'no Domain attribute by default');
done();
});
Tinytest.addAsync('accounts cookie - invalid HTTP methods return 405', async (test, done) => {
const postRefresh = await request('POST', REFRESH_PATH); // should be GET
const getSet = await request('GET', SET_PATH); // should be POST
const getClear = await request('GET', CLEAR_PATH); // should be POST
test.equal(postRefresh.status, 405);
test.equal(getSet.status, 405);
test.equal(getClear.status, 405);
done();
});
Tinytest.addAsync('accounts cookie - set missing token returns 400', async (test, done) => {
const res = await request('POST', SET_PATH, { headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({}) });
test.equal(res.status, 400);
test.equal(res.json && res.json.error, 'invalid_token');
done();
});
Tinytest.addAsync('accounts cookie - set invalid JSON returns 400', async (test, done) => {
// send malformed JSON: request helper will send body as-is
const res = await request('POST', SET_PATH, { headers: { 'Content-Type': 'application/json' }, body: '{bad' });
test.equal(res.status, 400);
test.equal(res.json && res.json.error, 'invalid_token');
done();
});
Tinytest.addAsync('accounts cookie - set accepts long token (current behavior)', async (test, done) => {
const longToken = Array(5000).fill('a').join('');
// Expect 200 with current implementation (no length enforcement)
const res = await request('POST', SET_PATH, { headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ token: longToken }) });
test.equal(res.status, 200);
done();
});
}

View File

@@ -7,7 +7,6 @@ import {
} from './accounts_common.js';
import { URL } from 'meteor/url';
const hasOwn = Object.prototype.hasOwnProperty;
/**
* @summary Constructor for the `Accounts` namespace on the server.
@@ -801,7 +800,7 @@ export class AccountsServer extends AccountsCommon {
if (Package["oauth-encryption"]) {
const { OAuthEncryption } = Package["oauth-encryption"]
if (hasOwn.call(options, 'secret') && OAuthEncryption.keyIsLoaded())
if (Object.hasOwn(options, 'secret') && OAuthEncryption.keyIsLoaded())
options.secret = OAuthEncryption.seal(options.secret);
}
@@ -904,10 +903,8 @@ export class AccountsServer extends AccountsCommon {
// - forLoggedInUser {Array} Array of fields published to the logged-in user
// - forOtherUsers {Array} Array of fields published to users that aren't logged in
addAutopublishFields(opts) {
this._autopublishFields.loggedInUser.push.apply(
this._autopublishFields.loggedInUser, opts.forLoggedInUser);
this._autopublishFields.otherUsers.push.apply(
this._autopublishFields.otherUsers, opts.forOtherUsers);
this._autopublishFields.loggedInUser.push(...(opts.forLoggedInUser || []));
this._autopublishFields.otherUsers.push(...(opts.forOtherUsers || []));
};
// Replaces the fields to be automatically
@@ -1008,7 +1005,7 @@ export class AccountsServer extends AccountsCommon {
// the observe that we started when we associated the connection with
// this token.
_removeTokenFromConnection(connectionId) {
if (hasOwn.call(this._userObservesForConnections, connectionId)) {
if (Object.hasOwn(this._userObservesForConnections, connectionId)) {
const observe = this._userObservesForConnections[connectionId];
if (typeof observe === 'number') {
// We're in the process of setting up an observe for this connection. We
@@ -1214,13 +1211,13 @@ export class AccountsServer extends AccountsCommon {
};
// @override from accounts_common.js
config(options) {
config(...args) {
// Call the overridden implementation of the method.
const superResult = AccountsCommon.prototype.config.apply(this, arguments);
const superResult = AccountsCommon.prototype.config.apply(this, args);
// If the user set loginExpirationInDays to null, then we need to clear the
// timer that periodically expires tokens.
if (hasOwn.call(this._options, 'loginExpirationInDays') &&
if (Object.hasOwn(this._options, 'loginExpirationInDays') &&
this._options.loginExpirationInDays === null &&
this.expireTokenInterval) {
Meteor.clearInterval(this.expireTokenInterval);
@@ -1377,7 +1374,7 @@ export class AccountsServer extends AccountsCommon {
"Can't use updateOrCreateUserFromExternalService with internal service "
+ serviceName);
}
if (!hasOwn.call(serviceData, 'id')) {
if (!Object.hasOwn(serviceData, 'id')) {
throw new Error(
`Service data for service ${serviceName} must include id`);
}
@@ -1527,7 +1524,7 @@ export class AccountsServer extends AccountsCommon {
) {
// Some tests need the ability to add users with the same case insensitive
// value, hence the _skipCaseInsensitiveChecksForTest check
const skipCheck = Object.prototype.hasOwnProperty.call(
const skipCheck = Object.hasOwn(
this._skipCaseInsensitiveChecksForTest,
fieldValue
);

View File

@@ -1,3 +1,4 @@
import "./accounts_url_tests.js";
import "./accounts_reconnect_tests.js";
import "./accounts_client_tests.js";
import "./accounts_cookie_client_tests.js";

View File

@@ -1,6 +1,6 @@
Package.describe({
summary: "A user account system",
version: "3.2.0",
version: '3.3.0-beta350.7',
});
Package.onUse((api) => {
@@ -14,6 +14,8 @@ Package.onUse((api) => {
api.use("callback-hook", ["client", "server"]);
api.use("reactive-var", "client");
api.use("url", ["client", "server"]);
api.use("webapp", "server");
api.use("routepolicy", "server");
// needed for getting the currently logged-in user and handling reconnects
api.use("ddp", ["client", "server"]);

View File

@@ -0,0 +1,160 @@
import RoutePolicy from 'meteor/routepolicy';
import { WebApp } from 'meteor/webapp';
// Declare these routes as network to avoid clashes with static assets
const COOKIE_BASE_PATH = '/_accounts/cookie';
try {
RoutePolicy.declare(COOKIE_BASE_PATH + '/', 'network');
} catch (_e) {
// ignore duplicate declarations
}
const COOKIE_NAME = 'meteor_login_token';
function parseCookies(req) {
const header = req.headers && req.headers.cookie;
const cookies = {};
if (!header) return cookies;
header.split(';').forEach((part) => {
const idx = part.indexOf('=');
if (idx === -1) return;
const k = part.slice(0, idx).trim();
const v = decodeURIComponent(part.slice(idx + 1).trim());
cookies[k] = v;
});
return cookies;
}
function isSecureRequest(req) {
// honor proxies that set x-forwarded-proto
const xfp = (req.headers['x-forwarded-proto'] || '').split(',')[0];
return req.connection?.encrypted || xfp === 'https' || req.protocol === 'https';
}
function serializeCookie(name, value, options = {}) {
const parts = [`${name}=${encodeURIComponent(String(value))}`];
if (options.maxAge != null) parts.push(`Max-Age=${Math.floor(options.maxAge)}`);
if (options.domain) parts.push(`Domain=${options.domain}`);
parts.push(`Path=${options.path || '/'}`);
if (options.expires) parts.push(`Expires=${options.expires.toUTCString()}`);
if (options.httpOnly !== false) parts.push('HttpOnly');
if (options.secure) parts.push('Secure');
if (options.sameSite) parts.push(`SameSite=${options.sameSite}`);
return parts.join('; ');
}
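The two cookie helpers above are private to this file; the sketch below mirrors their bodies purely for illustration of the round trip (what the `/set` endpoint emits and what `parseCookies` recovers from the browser's echo). The helper copies here are not importable from the package:

```javascript
// Mirrored sketches of serializeCookie/parseCookies above, for illustration only.
function serializeCookie(name, value, options = {}) {
  const parts = [`${name}=${encodeURIComponent(String(value))}`];
  if (options.maxAge != null) parts.push(`Max-Age=${Math.floor(options.maxAge)}`);
  if (options.domain) parts.push(`Domain=${options.domain}`);
  parts.push(`Path=${options.path || '/'}`);
  if (options.expires) parts.push(`Expires=${options.expires.toUTCString()}`);
  if (options.httpOnly !== false) parts.push('HttpOnly');
  if (options.secure) parts.push('Secure');
  if (options.sameSite) parts.push(`SameSite=${options.sameSite}`);
  return parts.join('; ');
}

function parseCookieHeader(header) {
  const cookies = {};
  if (!header) return cookies;
  header.split(';').forEach((part) => {
    const idx = part.indexOf('=');
    if (idx === -1) return;
    cookies[part.slice(0, idx).trim()] = decodeURIComponent(part.slice(idx + 1).trim());
  });
  return cookies;
}

// The /set endpoint emits something like:
const header = serializeCookie('meteor_login_token', 'abc 123', { secure: true, sameSite: 'Lax' });
console.log(header);
// → meteor_login_token=abc%20123; Path=/; HttpOnly; Secure; SameSite=Lax

// The browser later sends back only the name=value pair:
console.log(parseCookieHeader('meteor_login_token=abc%20123'));
// → { meteor_login_token: 'abc 123' }
```

Omitting `expires` yields a session cookie, which is why the `/set` handler only looks up the stamped token's `when` on a best-effort basis.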
async function readJson(req) {
return await new Promise((resolve) => {
let data = '';
req.setEncoding('utf8');
req.on('data', (chunk) => { data += chunk; });
req.on('end', () => {
try {
resolve(JSON.parse(data || '{}'));
} catch (_e) {
resolve({});
}
});
});
}
function sendJson(res, code, body) {
const payload = JSON.stringify(body || {});
res.writeHead(code, {
'Content-Type': 'application/json; charset=utf-8',
'Content-Length': Buffer.byteLength(payload)
});
res.end(payload);
}
// POST /_accounts/cookie/set
// Body: { token: string, tokenExpires?: string|number }
WebApp.handlers.use(async (req, res, next) => {
if (!req.url.startsWith(COOKIE_BASE_PATH + '/set')) return next();
if (req.method !== 'POST') {
res.writeHead(405);
return res.end();
}
const body = await readJson(req);
const token = body && body.token;
if (!token || typeof token !== 'string') {
return sendJson(res, 400, { error: 'invalid_token' });
}
// Try to find a matching token to get expiration
let expires;
try {
const hashed = Accounts._hashLoginToken(token);
const user = await Accounts.users.findOneAsync({
$or: [
{ 'services.resume.loginTokens.hashedToken': hashed },
{ 'services.resume.loginTokens.token': token },
]
}, { fields: { 'services.resume.loginTokens': 1 } });
if (user) {
const t = (user.services.resume.loginTokens || []).find((st) => st.hashedToken === hashed || st.token === token);
if (t && t.when) {
expires = Accounts._tokenExpiration(t.when);
}
}
  } catch (_e) { /* ignore lookup errors; fall back to a session cookie (no Expires) */ }
const secure = isSecureRequest(req);
// Default cookie opts; prefer Lax to allow same-site navigations
const cookie = serializeCookie(COOKIE_NAME, token, {
path: '/',
httpOnly: true,
secure,
sameSite: 'Lax',
expires: expires instanceof Date ? expires : undefined,
});
res.setHeader('Set-Cookie', cookie);
return sendJson(res, 200, { ok: true });
});
// GET /_accounts/cookie/refresh
// Returns { token, tokenExpires, id } if cookie is valid
WebApp.handlers.use(async (req, res, next) => {
if (!req.url.startsWith(COOKIE_BASE_PATH + '/refresh')) return next();
if (req.method !== 'GET') {
res.writeHead(405);
return res.end();
}
const cookies = parseCookies(req);
const token = cookies[COOKIE_NAME];
if (!token) return sendJson(res, 204, {});
try {
const hashed = Accounts._hashLoginToken(token);
// Find user and token
const user = await Accounts.users.findOneAsync({
$or: [
{ 'services.resume.loginTokens.hashedToken': hashed },
{ 'services.resume.loginTokens.token': token },
]
}, { fields: { 'services.resume.loginTokens': 1 } });
if (!user) return sendJson(res, 401, { error: 'invalid_cookie' });
const stamped = (user.services.resume.loginTokens || []).find((st) => st.hashedToken === hashed || st.token === token);
if (!stamped) return sendJson(res, 401, { error: 'invalid_cookie' });
const tokenExpires = Accounts._tokenExpiration(stamped.when);
if (new Date() >= tokenExpires) return sendJson(res, 401, { error: 'expired' });
return sendJson(res, 200, { token, tokenExpires });
} catch (e) {
return sendJson(res, 500, { error: 'server_error' });
}
});
// POST /_accounts/cookie/clear
WebApp.handlers.use(async (req, res, next) => {
if (!req.url.startsWith(COOKIE_BASE_PATH + '/clear')) return next();
if (req.method !== 'POST') {
res.writeHead(405);
return res.end();
}
const secure = isSecureRequest(req);
const expired = new Date(0);
const cookie = serializeCookie(COOKIE_NAME, '', {
path: '/', httpOnly: true, secure, sameSite: 'Lax', expires: expired
});
res.setHeader('Set-Cookie', cookie);
return sendJson(res, 200, { ok: true });
});

View File

@@ -8,6 +8,9 @@ Accounts = new AccountsServer(Meteor.server, { ...Meteor.settings.packages?.acco
// TODO[FIBERS]: I need TLA
Accounts.init().then();
// Register HttpOnly cookie endpoints and helpers
import './server_http_cookies.js';
// Users table. Don't use the normal autopublish, since we want to hide
// some fields. Code to autopublish this is in accounts_server.js.
// XXX Allow users to configure this collection name.

View File

@@ -1,2 +1,3 @@
import "./accounts_tests.js";
import "./accounts_reconnect_tests.js";
import "./accounts_cookie_server_tests.js";

View File

@@ -5,7 +5,7 @@ Package.describe({
// 2.2.x in the future. The version was also bumped to 2.0.0 temporarily
// during the Meteor 1.5.1 release process, so versions 2.0.0-beta.2
// through -beta.5 and -rc.0 have already been published.
version: "3.2.2",
version: '3.2.3-beta350.7',
});
Npm.depends({

View File

@@ -1,4 +1,6 @@
Accounts._connectionCloseDelayMsForTests = 1000;
Accounts._options.ambiguousErrorMessages = false;
const makeTestConnAsync =
(test) =>
new Promise((resolve, reject) => {

View File

@@ -1,28 +1,91 @@
import { DDP } from '../common/namespace.js';
import { Meteor } from 'meteor/meteor';
import { loadAsyncStubHelpers } from "./queue_stub_helpers";
import { DDP } from '../common/namespace.js';
import { loadAsyncStubHelpers } from './queue_stub_helpers';
const normalizeRuntimePrefix = runtimePrefix => {
if (!runtimePrefix) {
return '';
}
const withLeadingSlash = runtimePrefix.startsWith('/')
? runtimePrefix
: `/${runtimePrefix}`;
return withLeadingSlash.endsWith('/')
? withLeadingSlash
: `${withLeadingSlash}/`;
};
const extractPathPrefix = (absoluteUrl, runtimeConfig) => {
const pathFromAbsoluteUrl = (() => {
if (!absoluteUrl) {
return '';
}
try {
return new URL(absoluteUrl).pathname || '/';
} catch {
return '';
}
})();
const normalizedRuntimePrefix = normalizeRuntimePrefix(runtimeConfig.ROOT_URL_PATH_PREFIX);
if (pathFromAbsoluteUrl && pathFromAbsoluteUrl !== '/') {
return pathFromAbsoluteUrl.startsWith('/')
? pathFromAbsoluteUrl
: `/${pathFromAbsoluteUrl}`;
}
if (normalizedRuntimePrefix) {
return normalizedRuntimePrefix;
}
if (pathFromAbsoluteUrl) {
return pathFromAbsoluteUrl.startsWith('/')
? pathFromAbsoluteUrl
: `/${pathFromAbsoluteUrl}`;
}
return '/';
};
export const _calculateDDPUrl = ({
absoluteUrl,
runtimeConfig = Object.create(null),
browserHost,
browserProtocol,
}) => {
if (runtimeConfig.DDP_DEFAULT_CONNECTION_URL) {
return runtimeConfig.DDP_DEFAULT_CONNECTION_URL;
}
const protocol = (absoluteUrl && absoluteUrl.split('//')[0]) || browserProtocol;
const pathPrefix = extractPathPrefix(absoluteUrl, runtimeConfig);
return `${protocol}//${browserHost}${pathPrefix}`;
};
const getDDPUrl = () => {
const runtimeConfig = typeof __meteor_runtime_config__ !== 'undefined'
? __meteor_runtime_config__
: Object.create(null);
return _calculateDDPUrl({
absoluteUrl: Meteor.absoluteUrl(),
runtimeConfig,
browserHost: window.location.host,
browserProtocol: window.location.protocol,
});
};
// Meteor.refresh can be called on the client (if you're in common code) but it
// only has an effect on the server.
Meteor.refresh = () => {};
// By default, try to connect back to the same endpoint as the page
// was served from.
//
// XXX We should be doing this a different way. Right now we don't
// include ROOT_URL_PATH_PREFIX when computing ddpUrl. (We don't
// include it on the server when computing
// DDP_DEFAULT_CONNECTION_URL, and we don't include it in our
// default, '/'.) We get by with this because DDP.connect then
// forces the URL passed to it to be interpreted relative to the
// app's deploy path, even if it is absolute. Instead, we should
// make DDP_DEFAULT_CONNECTION_URL, if set, include the path prefix;
// make the default ddpUrl be '' rather that '/'; and make
// _translateUrl in stream_client_common.js not force absolute paths
// to be treated like relative paths. See also
// stream_client_common.js #RationalizingRelativeDDPURLs
const runtimeConfig = typeof __meteor_runtime_config__ !== 'undefined' ? __meteor_runtime_config__ : Object.create(null);
const ddpUrl = runtimeConfig.DDP_DEFAULT_CONNECTION_URL || '/';
// By default, connect to the current browser host so mirrored domains
// establish their websocket connection against the same host the user
// loaded. Keep the protocol and app path from Meteor.absoluteUrl() to
// preserve force-ssl and deploy-path behavior.
const ddpUrl = getDDPUrl() || '/';
const retry = new Retry();

View File

@@ -33,6 +33,11 @@ export class ConnectionStreamHandlers {
return;
}
// Track received message count for session resumption (excluding ping/pong)
if (!this._connection._ignoredMsgsForSessionOutOfDateCheck.includes(msg.msg)) {
this._connection._receivedCount++;
}
// Important: this was missing from the previous version.
// We need to set the current version before routing the message.
if (msg.msg === 'connected') {
@@ -139,6 +144,7 @@ export class ConnectionStreamHandlers {
const msg = { msg: 'connect' };
if (this._connection._lastSessionId) {
msg.session = this._connection._lastSessionId;
msg.receivedCount = this._connection._receivedCount;
}
msg.version = this._connection._versionSuggestion || this._connection._supportedDDPVersions[0];
this._connection._versionSuggestion = msg.version;

View File

@@ -93,6 +93,10 @@ export class Connection {
}
self._lastSessionId = null;
// How many messages we've received (excluding ping/pong). When we try to
// reconnect, the server checks this against the number of messages it sent;
// if there is a mismatch, our info is out of date and we need a clean session.
self._receivedCount = 0;
self._versionSuggestion = null; // The last proposed DDP version.
self._version = null; // The DDP version agreed on by client and server.
self._stores = Object.create(null); // name -> object with methods
@@ -102,6 +106,7 @@ export class Connection {
self._heartbeatInterval = options.heartbeatInterval;
self._heartbeatTimeout = options.heartbeatTimeout;
self._ignoredMsgsForSessionOutOfDateCheck = ['ping', 'pong'];
// Tracks methods which the user has tried to call but which have not yet
// called their user callback (ie, they are waiting on their result or for all
@@ -1081,11 +1086,13 @@ export class Connection {
* @locus Client
*/
disconnect(...args) {
this._send({ msg: 'disconnect' });
return this._stream.disconnect(...args);
}
close() {
return this._stream.disconnect({ _permanent: true });
// _permanent is used by the underlying stream to prevent reconnection attempts
return this.disconnect({ _permanent: true });
}
///

View File

@@ -43,10 +43,15 @@ export class MessageProcessors {
if (reconnectedToPreviousSession) {
// Successful reconnection -- pick up where we left off.
// Don't reset stores since we're continuing the same session.
self._resetStores = false;
return;
}
// Server doesn't have our data anymore. Re-sync a new session.
// Reset the received count since we're starting a new session.
// Set to 1 because the 'connected' message itself counts.
self._receivedCount = 1;
// Forget about messages we were buffering for unknown collections. They'll
// be resent if still relevant.

View File

@@ -1,6 +1,6 @@
Package.describe({
summary: "Meteor's latency-compensated distributed data client",
version: "3.1.1",
version: '3.2.0-beta350.7',
documentation: null,
});
@@ -68,4 +68,5 @@ Package.onTest((api) => {
api.addFiles("test/async_stubs/server_setup.js", "server");
api.addFiles("test/livedata_callAsync_tests.js");
api.addFiles("test/allow_deny_setup.js");
api.addFiles("test/client_convenience_tests.js", "client");
});

View File

@@ -0,0 +1,91 @@
import { _calculateDDPUrl } from '../client/client_convenience.js';
Tinytest.add(
'ddp-client - client convenience uses DDP_DEFAULT_CONNECTION_URL when configured',
function(test) {
const ddpUrl = _calculateDDPUrl({
absoluteUrl: 'https://example.com/',
runtimeConfig: {
DDP_DEFAULT_CONNECTION_URL: 'https://example.net/'
},
browserHost: 'example.net',
browserProtocol: 'https:',
});
test.equal(ddpUrl, 'https://example.net/');
}
);
Tinytest.add(
'ddp-client - client convenience fallback uses current browser host for mirror domains',
function(test) {
const ddpUrl = _calculateDDPUrl({
absoluteUrl: 'https://example.com/',
runtimeConfig: Object.create(null),
browserHost: 'example.net',
browserProtocol: 'https:',
});
test.equal(ddpUrl, 'https://example.net/');
}
);
Tinytest.add(
'ddp-client - client convenience fallback keeps app path prefix (subdirectory)',
function(test) {
const ddpUrl = _calculateDDPUrl({
absoluteUrl: 'https://example.com/my-app/',
runtimeConfig: {
ROOT_URL_PATH_PREFIX: '/my-app'
},
browserHost: 'example.net',
browserProtocol: 'https:',
});
test.equal(ddpUrl, 'https://example.net/my-app/');
}
);
Tinytest.add(
'ddp-client - client convenience fallback uses ROOT_URL_PATH_PREFIX when absoluteUrl is root',
function(test) {
const ddpUrl = _calculateDDPUrl({
absoluteUrl: 'https://example.com/',
runtimeConfig: {
ROOT_URL_PATH_PREFIX: '/my-app'
},
browserHost: 'example.net',
browserProtocol: 'https:',
});
test.equal(ddpUrl, 'https://example.net/my-app/');
}
);
Tinytest.add(
'ddp-client - client convenience fallback keeps browser host port',
function(test) {
const ddpUrl = _calculateDDPUrl({
absoluteUrl: 'https://example.com/',
runtimeConfig: Object.create(null),
browserHost: 'example.net:3443',
browserProtocol: 'https:',
});
test.equal(ddpUrl, 'https://example.net:3443/');
}
);
Tinytest.add(
'ddp-client - client convenience fallback keeps protocol from absoluteUrl',
function(test) {
const ddpUrl = _calculateDDPUrl({
absoluteUrl: 'https://example.com/',
runtimeConfig: Object.create(null),
browserHost: 'example.net',
browserProtocol: 'http:',
});
test.equal(ddpUrl, 'https://example.net/');
}
);

View File

@@ -20,14 +20,16 @@ const newConnection = function(stream, options) {
);
};
const makeConnectMessage = function(session) {
const makeConnectMessage = function(session, receivedCount) {
const msg = {
msg: 'connect',
version: DDPCommon.SUPPORTED_DDP_VERSIONS[0],
support: DDPCommon.SUPPORTED_DDP_VERSIONS
support: DDPCommon.SUPPORTED_DDP_VERSIONS,
};
if (session) msg.session = session;
if (receivedCount) msg.receivedCount = receivedCount;
return msg;
};
@@ -869,7 +871,7 @@ Tinytest.addAsync('livedata stub - reconnect', async function(test, onComplete)
// sub. The wait method still is blocked.
await stream.reset();
testGotMessage(test, stream, makeConnectMessage(SESSION_ID));
testGotMessage(test, stream, makeConnectMessage(SESSION_ID, conn._receivedCount));
testGotMessage(test, stream, methodMessage);
testGotMessage(test, stream, subMessage);
@@ -990,7 +992,7 @@ if (Meteor.isClient) {
await stream.reset();
// verify that a reconnect message was sent.
testGotMessage(test, stream, makeConnectMessage(SESSION_ID));
testGotMessage(test, stream, makeConnectMessage(SESSION_ID, conn._receivedCount));
// Make sure that the stream triggers connection.
await stream.receive({ msg: 'connected', session: SESSION_ID + 1 });
@@ -1114,7 +1116,7 @@ if (Meteor.isClient) {
// in. Reconnect quiescence happens as soon as 'connected' is received because
// there are no pending methods or subs in need of revival.
await stream.reset();
testGotMessage(test, stream, makeConnectMessage(SESSION_ID));
testGotMessage(test, stream, makeConnectMessage(SESSION_ID, conn._receivedCount));
// Still holding out hope for session resumption, so nothing updated yet.
test.equal(coll.find().count(), 1);
test.equal(await coll.findOneAsync(stubWrittenId), {
@@ -1209,7 +1211,7 @@ if (Meteor.isClient) {
// but slowMethod gets called via onReconnect. Reconnect quiescence is now
// blocking on slowMethod.
await stream.reset();
testGotMessage(test, stream, makeConnectMessage(SESSION_ID + 1));
testGotMessage(test, stream, makeConnectMessage(SESSION_ID + 1, conn._receivedCount));
const slowMethodId = testGotMessage(test, stream, {
msg: 'method',
method: 'slowMethod',
@@ -1330,7 +1332,7 @@ Tinytest.addAsync('livedata stub - reconnect method which only got data', async
// Reset stream. Method gets resent (with same ID), and blocks reconnect
// quiescence.
await stream.reset();
testGotMessage(test, stream, makeConnectMessage(SESSION_ID));
testGotMessage(test, stream, makeConnectMessage(SESSION_ID, conn._receivedCount));
testGotMessage(test, stream, {
msg: 'method',
method: 'doLittle',
@@ -1807,7 +1809,7 @@ addReconnectTests(
// reconnect
stream.sent = [];
await stream.reset();
testGotMessage(test, stream, makeConnectMessage(conn._lastSessionId));
testGotMessage(test, stream, makeConnectMessage(conn._lastSessionId, conn._receivedCount));
// Test that we sent what we expect to send, and we're blocked on
// what we expect to be blocked. The subsequent logic to correctly
@@ -2033,7 +2035,7 @@ addReconnectTests(
// reconnect
stream.sent = [];
await stream.reset();
testGotMessage(test, stream, makeConnectMessage(conn._lastSessionId));
testGotMessage(test, stream, makeConnectMessage(conn._lastSessionId, conn._receivedCount));
// Test that we sent what we expect to send, and we're blocked on
// what we expect to be blocked. The subsequent logic to correctly
@@ -2084,7 +2086,7 @@ addReconnectTests(
// initial connect
stream.sent = [];
await stream.reset();
testGotMessage(test, stream, makeConnectMessage(conn._lastSessionId));
testGotMessage(test, stream, makeConnectMessage(conn._lastSessionId, conn._receivedCount));
// Test that we sent just the login message.
const loginId = testGotMessage(test, stream, {
@@ -2152,7 +2154,7 @@ addReconnectTests('livedata stub - reconnect double wait method', async function
// Reset stream. halfwayMethod does NOT get resent, but reconnectMethod does!
// Reconnect quiescence happens when reconnectMethod is done.
await stream.reset();
testGotMessage(test, stream, makeConnectMessage(SESSION_ID));
testGotMessage(test, stream, makeConnectMessage(SESSION_ID, conn._receivedCount));
const reconnectId = testGotMessage(test, stream, {
msg: 'method',
method: 'reconnectMethod',
@@ -2257,7 +2259,7 @@ Tinytest.addAsync('livedata stub - subscribe errors', async function(test) {
// stream reset: reconnect!
await stream.reset();
// We send a connect.
testGotMessage(test, stream, makeConnectMessage(SESSION_ID));
testGotMessage(test, stream, makeConnectMessage(SESSION_ID, conn._receivedCount));
// We should NOT re-sub to the sub, because we processed the error.
test.length(stream.sent, 0);
test.isFalse(onReadyFired);
@@ -2376,7 +2378,7 @@ if (Meteor.isClient) {
// Initiate reconnect.
await stream.reset();
testGotMessage(test, stream, makeConnectMessage(SESSION_ID));
testGotMessage(test, stream, makeConnectMessage(SESSION_ID, conn._receivedCount));
testGotMessage(test, stream, subMessage);
await stream.receive({ msg: 'connected', session: SESSION_ID + 1 });
@@ -2559,8 +2561,137 @@ if (Meteor.isClient) {
);
}
// ============================================================================
// DDP Session Resumption Tests (Client-side)
// ============================================================================
Tinytest.addAsync('livedata connection - receivedCount tracking', async function(test) {
const stream = new StubStream();
const conn = newConnection(stream);
// Initially receivedCount should be 0
test.equal(conn._receivedCount, 0);
await startAndConnect(test, stream);
// After receiving 'connected', receivedCount should be 1
// (the 'connected' message itself is counted)
test.equal(conn._receivedCount, 1);
// Receive some data messages
await stream.receive({ msg: 'added', collection: 'test', id: '1', fields: { a: 1 } });
test.equal(conn._receivedCount, 2);
await stream.receive({ msg: 'added', collection: 'test', id: '2', fields: { b: 2 } });
test.equal(conn._receivedCount, 3);
// Ping/pong should NOT increment receivedCount
await stream.receive({ msg: 'ping', id: 'ping1' });
test.equal(conn._receivedCount, 3, "ping should not increment receivedCount");
await stream.receive({ msg: 'pong', id: 'pong1' });
test.equal(conn._receivedCount, 3, "pong should not increment receivedCount");
// More data messages should continue incrementing
await stream.receive({ msg: 'changed', collection: 'test', id: '1', fields: { a: 2 } });
test.equal(conn._receivedCount, 4);
});
Tinytest.addAsync('livedata connection - receivedCount sent on reconnect', async function(test) {
const stream = new StubStream();
const conn = newConnection(stream);
await startAndConnect(test, stream);
// Receive some messages to build up receivedCount
await stream.receive({ msg: 'added', collection: 'test', id: '1', fields: {} });
await stream.receive({ msg: 'added', collection: 'test', id: '2', fields: {} });
await stream.receive({ msg: 'ready', subs: ['sub1'] });
const expectedReceivedCount = conn._receivedCount;
test.equal(expectedReceivedCount, 4); // connected + 3 messages
// Simulate disconnect and reconnect
await stream.reset();
// The connect message should include the receivedCount
const connectMsg = JSON.parse(stream.sent.shift());
test.equal(connectMsg.msg, 'connect');
test.equal(connectMsg.session, SESSION_ID);
test.equal(connectMsg.receivedCount, expectedReceivedCount,
"Connect message should include receivedCount for session resumption");
});
Tinytest.addAsync('livedata connection - receivedCount reset on new session', async function(test) {
const stream = new StubStream();
const conn = newConnection(stream);
await startAndConnect(test, stream);
// Build up some receivedCount
await stream.receive({ msg: 'added', collection: 'test', id: '1', fields: {} });
await stream.receive({ msg: 'added', collection: 'test', id: '2', fields: {} });
test.equal(conn._receivedCount, 3);
// Simulate reconnect
await stream.reset();
stream.sent.shift(); // consume connect message
// Server responds with a DIFFERENT session (new session, not resumed)
const newSessionId = SESSION_ID + '_new';
await stream.receive({ msg: 'connected', session: newSessionId });
// receivedCount should be reset to 1 (counting the new connected message)
test.equal(conn._receivedCount, 1,
"receivedCount should be reset to 1 when getting a new session");
test.equal(conn._lastSessionId, newSessionId);
});
Tinytest.addAsync('livedata connection - receivedCount preserved on session resume', async function(test) {
const stream = new StubStream();
const conn = newConnection(stream);
await startAndConnect(test, stream);
// Build up some receivedCount
await stream.receive({ msg: 'added', collection: 'test', id: '1', fields: {} });
await stream.receive({ msg: 'added', collection: 'test', id: '2', fields: {} });
const countBeforeDisconnect = conn._receivedCount;
test.equal(countBeforeDisconnect, 3);
// Simulate reconnect
await stream.reset();
stream.sent.shift(); // consume connect message
// Server responds with the SAME session (resumed)
await stream.receive({ msg: 'connected', session: SESSION_ID });
// receivedCount should continue from where it was (plus the connected message)
test.equal(conn._receivedCount, countBeforeDisconnect + 1,
"receivedCount should continue incrementing on session resume");
test.equal(conn._lastSessionId, SESSION_ID);
});
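The counting semantics exercised by the two tests above can be sketched as a small pure function. This is a hypothetical model distilled from the tests, not the actual `Connection` implementation: every non-ping/pong message increments the count, and a `connected` message for a *different* session resets it to 1 (counting the connected message itself).

```javascript
// Hypothetical model of the client's receivedCount bookkeeping
// (names and shape are illustrative, not Meteor's actual internals).
function updateReceivedCount(state, msg) {
  if (msg.msg === 'ping' || msg.msg === 'pong') {
    return state; // heartbeats are excluded from the count
  }
  if (msg.msg === 'connected' && msg.session !== state.sessionId) {
    // New session: start over, counting this connected message as 1.
    return { count: 1, sessionId: msg.session };
  }
  // Resumed session or ordinary data message: keep counting.
  return { count: state.count + 1, sessionId: state.sessionId };
}
```

Walking this model through the same message sequence as the tests reproduces the expected counts (3 before disconnect, 4 on resume, reset to 1 on a new session).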
Tinytest.addAsync('livedata connection - disconnect sends disconnect message', async function(test) {
const stream = new StubStream();
const conn = newConnection(stream);
await startAndConnect(test, stream);
// Clear any pending messages
stream.sent.length = 0;
// Call disconnect
conn.disconnect();
// Should have sent a disconnect message
test.isTrue(stream.sent.length > 0, "Should have sent at least one message");
const disconnectMsg = JSON.parse(stream.sent.shift());
test.equal(disconnectMsg.msg, 'disconnect',
"disconnect() should send a disconnect message to the server");
});
// XXX also test:
// - reconnect, with session resume.
// - restart on update flag
// - on_update event
// - reloading when the app changes, including session migration

View File

@@ -27,6 +27,10 @@ Object.assign(StubStream.prototype, {
// no-op
},
disconnect: function() {
// no-op - for testing Connection.disconnect()
},
_lostConnection: function() {
// no-op
},

View File

@@ -30,7 +30,11 @@ Meteor.methods({
},
userId(userId) {
connection.lastRateLimitEvent.userId = userId;
return true;
return new Promise((resolve) => {
setTimeout(() => {
resolve(true);
}, 2);
});
},
type(type) {
// Special check to return proper name since 'getLastRateLimitEvent'

View File

@@ -1,10 +1,10 @@
export namespace DDPRateLimiter {
interface Matcher {
type?: string | ((type: string) => boolean) | undefined;
name?: string | ((name: string) => boolean) | undefined;
userId?: string | ((userId: string) => boolean) | undefined;
connectionId?: string | ((connectionId: string) => boolean) | undefined;
clientAddress?: string | ((clientAddress: string) => boolean) | undefined;
type?: string | ((type: string) => boolean) | ((type: string) => Promise<boolean>) | undefined;
name?: string | ((name: string) => boolean) | ((name: string) => Promise<boolean>) | undefined;
userId?: string | ((userId: string) => boolean) | ((userId: string) => Promise<boolean>) | undefined;
connectionId?: string | ((connectionId: string) => boolean) | ((connectionId: string) => Promise<boolean>) | undefined;
clientAddress?: string | ((clientAddress: string) => boolean) | ((clientAddress: string) => Promise<boolean>) | undefined;
}
function addRule(

View File

@@ -114,12 +114,15 @@ DDPRateLimiter.printRules = () => rateLimiter.rules;
*/
DDPRateLimiter.removeRule = id => rateLimiter.removeRule(id);
// This is accessed inside livedata_server.js, but shouldn't be called by any
// user.
DDPRateLimiter._increment = (input) => {
rateLimiter.increment(input);
};
// This is accessed inside livedata_server.js, but shouldn't be called by any
// user.
DDPRateLimiter._incrementRules = (rules, input) => rateLimiter.incrementRules(rules, input);
DDPRateLimiter._check = input => rateLimiter.check(input);
DDPRateLimiter.findAllMatchingRulesAsync = (input) => rateLimiter._findAllMatchingRulesAsync(input);
DDPRateLimiter._checkRules = (rules, input) => rateLimiter.checkRules(rules, input);
export { DDPRateLimiter };

View File

@@ -1,6 +1,6 @@
Package.describe({
name: 'ddp-rate-limiter',
version: '1.2.2',
version: '1.3.0-beta350.7',
// Brief, one-line summary of the package.
summary: 'The DDPRateLimiter allows users to add rate limits to DDP' +
' methods and subscriptions.',

View File

@@ -81,11 +81,16 @@ var Session = function (server, version, socket, options) {
var self = this;
self.id = Random.id();
// How many messages we've actually sent (not just queued to send), excluding ping/pong.
// We use this to detect a sent/received count mismatch (i.e. lost data) on reconnect.
self.sentCount = 0;
self.server = server;
self.version = version;
self.initialized = false;
self.socket = socket;
self.options = options;
// Set to null when the session is destroyed. Multiple places below
// use this to determine if the session is alive or not.
@@ -134,6 +139,8 @@ var Session = function (server, version, socket, options) {
self.connectionHandle = {
id: self.id,
close: function () {
// Server-initiated close should not be resumable
self._expectingDisconnect = true;
self.close();
},
onClose: function (fn) {
@@ -175,6 +182,8 @@ var Session = function (server, version, socket, options) {
"livedata", "sessions", 1);
};
const ignoredMsgsForSessionOutOfDateCheck = ['ping', 'pong'];
Object.assign(Session.prototype, {
sendReady: function (subscriptionIds) {
var self = this;
@@ -269,77 +278,103 @@ Object.assign(Session.prototype, {
},
startUniversalSubs: function () {
var self = this;
const self = this;
// Make a shallow copy of the set of universal handlers and start them. If
// additional universal publishers start while we're running them (due to
// yielding), they will run separately as part of Server.publish.
var handlers = [...self.server.universal_publish_handlers];
handlers.forEach(function (handler) {
for (const handler of [...self.server.universal_publish_handlers]) {
self._startSubscription(handler);
});
}
},
// Stop heartbeat if running
_stopHeartbeat: function () {
if (this.heartbeat) {
this.heartbeat.stop();
this.heartbeat = null;
}
},
// Destroy this session and unregister it at the server.
close: function () {
var self = this;
const self = this;
// Destroy this session, even if it's not registered at the
// server. Stop all processing and tear everything down. If a socket
// was attached, close it.
// Already destroyed.
if (! self.inQueue)
// Already closing or closed - prevent multiple close() calls
if (self._isClosing) {
return;
}
self._isClosing = true;
// Drop the merge box data immediately.
self.inQueue = null;
self.collectionViews = new Map();
if (self.heartbeat) {
self.heartbeat.stop();
self.heartbeat = null;
if (self._removeTimeoutHandle) {
Meteor.clearTimeout(self._removeTimeoutHandle);
self._removeTimeoutHandle = null;
}
if (self.socket) {
self.socket.close();
self.socket._meteorSession = null;
self.socket = null;
}
Package['facts-base'] && Package['facts-base'].Facts.incrementServerFact(
"livedata", "sessions", -1);
// Stop heartbeat immediately - we don't need it during the grace period
// since we have no socket to send pings on anyway.
self._stopHeartbeat();
Meteor.defer(function () {
// Stop callbacks can yield, so we defer this on close.
// sub._isDeactivated() detects that we set inQueue to null and
// treats it as semi-deactivated (it will ignore incoming callbacks, etc).
self._deactivateAllSubscriptions();
self.server._removeSession(self, () => {
Package['facts-base'] && Package['facts-base'].Facts.incrementServerFact(
"livedata", "sessions", -1);
// Defer calling the close callbacks, so that the caller closing
// the session isn't waiting for all the callbacks to complete.
self._closeCallbacks.forEach(function (callback) {
callback();
self.inQueue = null;
self.collectionViews = new Map();
self._stopHeartbeat();
Meteor.defer(function () {
// stop callbacks can yield, so we defer this on close.
// sub._isDeactivated() detects that we set inQueue to null and
// treats it as semi-deactivated (it will ignore incoming callbacks, etc).
self._deactivateAllSubscriptions();
// Defer calling the close callbacks, so that the caller closing
// the session isn't waiting for all the callbacks to complete.
self._closeCallbacks.forEach(callback => {
callback();
});
});
});
// Unregister the session.
self.server._removeSession(self);
},
// Send a message (doing nothing if no socket is connected right now).
// It should be a JSON object (it will be stringified).
send: function (msg) {
const self = this;
const isIgnoredMsg = ignoredMsgsForSessionOutOfDateCheck.includes(msg.msg);
if (self.messageQueue && !isIgnoredMsg) {
self.messageQueue.push(msg);
if (self.messageQueue.length > self.options.maxMessageQueueLength) {
Meteor.clearTimeout(self._removeTimeoutHandle);
self._pendingRemoveFunction();
}
return;
}
if (self.socket) {
if (Meteor._printSentDDP)
Meteor._debug("Sent DDP", DDPCommon.stringifyDDP(msg));
if (!isIgnoredMsg) {
self.sentCount++;
}
self.socket.send(DDPCommon.stringifyDDP(msg));
}
},
// Send a connection error.
sendError: function (reason, offendingMessage) {
var self = this;
var msg = {msg: 'error', reason: reason};
const self = this;
const msg = {msg: 'error', reason: reason};
if (offendingMessage)
msg.offendingMessage = offendingMessage;
self.send(msg);
@@ -379,7 +414,7 @@ Object.assign(Session.prototype, {
// the client is still alive.
if (self.heartbeat) {
self.heartbeat.messageReceived();
};
}
if (self.version !== 'pre1' && msg_in.msg === 'ping') {
if (self._respondToPings)
@@ -391,6 +426,11 @@ Object.assign(Session.prototype, {
return;
}
if (msg_in.msg === 'disconnect') {
// Pre-empt the queue - a disconnect is imminent.
return self.protocol_handlers.disconnect.call(self, msg_in, () => {});
}
self.inQueue.push(msg_in);
if (self.workerRunning)
return;
@@ -444,6 +484,9 @@ Object.assign(Session.prototype, {
},
protocol_handlers: {
disconnect: function(msg) {
this._expectingDisconnect = true;
},
sub: async function (msg, unblock) {
var self = this;
@@ -487,8 +530,9 @@ Object.assign(Session.prototype, {
connectionId: self.id
};
DDPRateLimiter._increment(rateLimiterInput);
var rateLimitResult = DDPRateLimiter._check(rateLimiterInput);
const rules = await DDPRateLimiter.findAllMatchingRulesAsync(rateLimiterInput);
DDPRateLimiter._incrementRules(rules, rateLimiterInput);
const rateLimitResult = DDPRateLimiter._checkRules(rules, rateLimiterInput);
if (!rateLimitResult.allowed) {
self.send({
msg: 'nosub', id: msg.id,
@@ -568,44 +612,6 @@ Object.assign(Session.prototype, {
fence,
});
const promise = new Promise((resolve, reject) => {
// XXX It'd be better if we could hook into method handlers better but
// for now, we need to check if the ddp-rate-limiter exists since we
// have a weak requirement for the ddp-rate-limiter package to be added
// to our application.
if (Package['ddp-rate-limiter']) {
var DDPRateLimiter = Package['ddp-rate-limiter'].DDPRateLimiter;
var rateLimiterInput = {
userId: self.userId,
clientAddress: self.connectionHandle.clientAddress,
type: "method",
name: msg.method,
connectionId: self.id
};
DDPRateLimiter._increment(rateLimiterInput);
var rateLimitResult = DDPRateLimiter._check(rateLimiterInput)
if (!rateLimitResult.allowed) {
reject(new Meteor.Error(
"too-many-requests",
DDPRateLimiter.getErrorMessage(rateLimitResult),
{timeToReset: rateLimitResult.timeToReset}
));
return;
}
}
resolve(DDPServer._CurrentWriteFence.withValue(
fence,
() => DDP._CurrentMethodInvocation.withValue(
invocation,
() => maybeAuditArgumentChecks(
handler, invocation, msg.params,
"call to '" + msg.method + "'"
)
)
));
});
async function finish() {
await fence.arm();
unblock();
@@ -615,20 +621,57 @@ Object.assign(Session.prototype, {
msg: "result",
id: msg.id
};
return promise.then(async result => {
try {
// XXX It'd be better if we could hook into method handlers better but
// for now, we need to check if the ddp-rate-limiter exists since we
// have a weak requirement for the ddp-rate-limiter package to be added
// to our application.
if (Package['ddp-rate-limiter']) {
const DDPRateLimiter = Package['ddp-rate-limiter'].DDPRateLimiter;
const rateLimiterInput = {
userId: self.userId,
clientAddress: self.connectionHandle.clientAddress,
type: "method",
name: msg.method,
connectionId: self.id
};
const rules = await DDPRateLimiter.findAllMatchingRulesAsync(rateLimiterInput);
DDPRateLimiter._incrementRules(rules, rateLimiterInput);
const rateLimitResult = DDPRateLimiter._checkRules(rules, rateLimiterInput);
if (!rateLimitResult.allowed) {
throw new Meteor.Error(
"too-many-requests",
DDPRateLimiter.getErrorMessage(rateLimitResult),
{timeToReset: rateLimitResult.timeToReset}
);
}
}
const result = await DDPServer._CurrentWriteFence.withValue(
fence,
() => DDP._CurrentMethodInvocation.withValue(
invocation,
() => maybeAuditArgumentChecks(
handler, invocation, msg.params,
"call to '" + msg.method + "'"
)
)
);
await finish();
if (result !== undefined) {
payload.result = result;
}
self.send(payload);
}, async (exception) => {
} catch (exception) {
await finish();
payload.error = wrapInternalException(
exception,
`while invoking method '${msg.method}'`
);
self.send(payload);
});
};
}
},
@@ -1252,6 +1295,19 @@ Server = function (options = {}) {
// For testing, allow responding to pings to be disabled.
respondToPings: true,
defaultPublicationStrategy: publicationStrategies.SERVER_MERGE,
/**
* @summary The maximum number of messages to queue for a session during a non-graceful disconnect before destroying it, to help prevent memory leaks.
* @type {Number}
* @locus Server
*/
maxMessageQueueLength: 100,
/**
* @summary How long (in milliseconds) to maintain a session after a non-graceful disconnect before killing it.
* Sessions that reconnect within this time will be resumed with minimal performance impact.
* @type {Number}
* @locus Server
*/
disconnectGracePeriod: 15000,
...options,
};
@@ -1421,16 +1477,66 @@ Object.assign(Server.prototype, {
return;
}
// Yay, version matches! Create a new session.
// Yay, version matches! Resume existing session if possible, otherwise create a new one.
// Note: Troposphere depends on the ability to mutate
// Meteor.server.options.heartbeatTimeout! This is a hack, but it's life.
socket._meteorSession = new Session(self, version, socket, self.options);
self.sessions.set(socket._meteorSession.id, socket._meteorSession);
self.onConnectionHook.each(function (callback) {
if (socket._meteorSession)
callback(socket._meteorSession.connectionHandle);
return true;
});
const existingSession = self.sessions.get(msg.session);
// We've found a session with:
// - the right ID,
// - a matching sent/received count, and
// - a pending removal (it was disconnected and hasn't been reconnected to yet).
if (existingSession && existingSession.sentCount === msg.receivedCount && existingSession._removeTimeoutHandle) {
Meteor.clearTimeout(existingSession._removeTimeoutHandle);
existingSession._removeTimeoutHandle = undefined;
existingSession._pendingRemoveFunction = undefined;
existingSession._isClosing = false; // Reset so session can be closed again later
socket._meteorSession = existingSession;
const messageQueue = existingSession.messageQueue;
existingSession.messageQueue = undefined;
existingSession.socket = socket;
// Restart heartbeat for the resumed session
if (existingSession.version !== 'pre1' && self.options.heartbeatInterval !== 0) {
socket.setWebsocketTimeout(0);
existingSession.heartbeat = new DDPCommon.Heartbeat({
heartbeatInterval: self.options.heartbeatInterval,
heartbeatTimeout: self.options.heartbeatTimeout,
onTimeout: function () {
existingSession.close();
},
sendPing: function () {
existingSession.send({msg: 'ping'});
}
});
existingSession.heartbeat.start();
}
// Send connected message so client can restart heartbeat and confirm resumption
existingSession.send({ msg: 'connected', session: existingSession.id });
if (messageQueue) {
Meteor.defer(() => {
messageQueue.forEach(msg => existingSession.send(msg));
});
}
// Note: onConnectionHook is NOT called on session resume - the connection
// is considered to be the same logical connection as before.
}
else {
// Immediately remove the old session; its state is out of date, so it can't be resumed.
if (existingSession && existingSession._pendingRemoveFunction) {
Meteor.clearTimeout(existingSession._removeTimeoutHandle);
existingSession._pendingRemoveFunction();
}
socket._meteorSession = new Session(self, version, socket, self.options);
self.sessions.set(socket._meteorSession.id, socket._meteorSession);
self.onConnectionHook.each(function (callback) {
if (socket._meteorSession)
callback(socket._meteorSession.connectionHandle);
return true;
});
}
},
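The three-part resumption condition checked in the hunk above can be read as a single predicate. This is a hypothetical restatement using the same field names as the diff, not code from the package:

```javascript
// Hypothetical restatement of the resume-vs-new-session check above.
// A session is resumable only if it exists, the server's sent count
// matches the client's received count (no messages were lost), and it
// has a pending removal timeout (disconnected, still in the grace period).
function canResumeSession(existingSession, connectMsg) {
  return Boolean(
    existingSession &&
    existingSession.sentCount === connectMsg.receivedCount &&
    existingSession._removeTimeoutHandle
  );
}
```

Any failed clause falls through to the `else` branch: the stale session (if any) is removed and a fresh `Session` is created, which is also when `onConnection` hooks fire.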
/**
* Register a publish handler function.
@@ -1520,9 +1626,31 @@ Object.assign(Server.prototype, {
}
},
_removeSession: function (session) {
_removeSession: function (session, callback = () => {}) {
var self = this;
self.sessions.delete(session.id);
const sessionRemoveFunction = () => {
// Guard against being called multiple times (e.g., from both overflow and timeout)
if (!self.sessions.has(session.id)) {
return;
}
// Clear timeout handle if it exists to prevent double execution
if (session._removeTimeoutHandle) {
Meteor.clearTimeout(session._removeTimeoutHandle);
session._removeTimeoutHandle = null;
}
session._pendingRemoveFunction = null;
self.sessions.delete(session.id);
callback();
};
if (session._expectingDisconnect) {
return sessionRemoveFunction();
}
session.messageQueue = [];
session._pendingRemoveFunction = sessionRemoveFunction;
if (session._removeTimeoutHandle) {
Meteor.clearTimeout(session._removeTimeoutHandle);
}
session._removeTimeoutHandle = Meteor.setTimeout(sessionRemoveFunction, self.options.disconnectGracePeriod);
},
/**

View File

@@ -168,9 +168,24 @@ Tinytest.addAsync('livedata server - async publish cursor', function(
connection: clientConn,
});
clientConn.subscribe('asyncPublishCursor', async () => {
const actual = await remoteCollection.find().fetch();
test.equal(actual[0].name, 'async');
onComplete();
// Wait for data to arrive - the subscription is ready but data may still be in transit
// This can happen when a previous test run was interrupted (page reload) and the
// server is still processing the old session's grace period
let attempts = 0;
const maxAttempts = 50; // 5 seconds max wait
const checkData = async () => {
const actual = await remoteCollection.find().fetch();
if (actual.length > 0) {
test.equal(actual[0].name, 'async');
onComplete();
} else if (attempts++ < maxAttempts) {
setTimeout(checkData, 100);
} else {
test.fail('Timed out waiting for data in async publish cursor test');
onComplete();
}
};
await checkData();
});
});
});

View File

@@ -1,3 +1,38 @@
// Helper to temporarily set disconnectGracePeriod for DDP resumption tests
// This ensures test isolation - other tests run with the default grace period
const DEFAULT_GRACE_PERIOD = Meteor.server.options.disconnectGracePeriod;
const TEST_GRACE_PERIOD = 5000; // Short grace period for fast tests (ms)
// Derived timing constants to avoid hardcoding throughout tests
const WITHIN_GRACE_PERIOD_MS = Math.floor(TEST_GRACE_PERIOD / 4); // Well within grace period
const AFTER_GRACE_PERIOD_MS = Math.ceil(TEST_GRACE_PERIOD * 1.5); // After grace period expires
const POLL_TIMEOUT_MS = TEST_GRACE_PERIOD * 2; // Max time to wait for async operations before failing
async function withTestGracePeriod(fn) {
const previous = Meteor.server.options.disconnectGracePeriod;
Meteor.server.options.disconnectGracePeriod = TEST_GRACE_PERIOD;
try {
await fn();
} finally {
Meteor.server.options.disconnectGracePeriod = previous ?? DEFAULT_GRACE_PERIOD;
}
}
// Helper to poll for a condition with timeout to prevent hanging tests
function pollUntil(conditionFn, timeoutMs = POLL_TIMEOUT_MS) {
return new Promise((resolve, reject) => {
const startTime = Date.now();
const interval = setInterval(() => {
if (conditionFn()) {
clearInterval(interval);
resolve();
} else if (Date.now() - startTime > timeoutMs) {
clearInterval(interval);
reject(new Error(`Timed out after ${timeoutMs}ms waiting for condition`));
}
}, 10);
});
}
Tinytest.addAsync(
"livedata server - connectionHandle.onClose()",
function (test, onComplete) {
@@ -593,4 +628,344 @@ function getTestConnections(test) {
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
}
// ============================================================================
// DDP Session Resumption Tests
// ============================================================================
// Test that unexpected disconnects allow session resumption within grace period
Tinytest.addAsync(
"livedata server - DDP resumption: unexpected disconnect preserves session",
async function (test) {
await withTestGracePeriod(async () => {
const { clientConn, serverConn } = await getTestConnections(test);
const originalSessionId = serverConn.id;
// Verify the session exists
test.isTrue(Meteor.server.sessions.has(originalSessionId));
// Simulate unexpected disconnect by forcing the stream to close
// without sending a disconnect message
clientConn._stream._lostConnection();
// Wait a bit but less than the grace period
await sleep(WITHIN_GRACE_PERIOD_MS);
// Session should still exist during grace period
test.isTrue(
Meteor.server.sessions.has(originalSessionId),
"Session should be preserved during grace period"
);
// Wait for grace period to expire
await sleep(AFTER_GRACE_PERIOD_MS);
// Session should be removed after grace period
test.isFalse(
Meteor.server.sessions.has(originalSessionId),
"Session should be removed after grace period expires"
);
});
}
);
// Test that graceful disconnects (client sends disconnect message) remove session immediately
Tinytest.addAsync(
"livedata server - DDP resumption: graceful disconnect removes session immediately",
async function (test) {
await withTestGracePeriod(async () => {
const { clientConn, serverConn } = await getTestConnections(test);
const originalSessionId = serverConn.id;
// Verify the session exists
test.isTrue(Meteor.server.sessions.has(originalSessionId));
// Graceful disconnect - this sends the disconnect message
clientConn.disconnect();
// Wait a moment for the disconnect to process
await sleep(WITHIN_GRACE_PERIOD_MS);
// Session should be removed immediately (not waiting for grace period)
test.isFalse(
Meteor.server.sessions.has(originalSessionId),
"Session should be removed immediately after graceful disconnect"
);
});
}
);
// Test that server-initiated close removes session immediately (not resumable)
Tinytest.addAsync(
"livedata server - DDP resumption: server-initiated close removes session immediately",
async function (test) {
await withTestGracePeriod(async () => {
const { serverConn } = await getTestConnections(test);
const originalSessionId = serverConn.id;
// Verify the session exists
test.isTrue(Meteor.server.sessions.has(originalSessionId));
// Server-initiated close via connectionHandle.close()
serverConn.close();
// Wait a moment for the close to process
await sleep(WITHIN_GRACE_PERIOD_MS);
// Session should be removed immediately (server kicks should not be resumable)
test.isFalse(
Meteor.server.sessions.has(originalSessionId),
"Session should be removed immediately after server-initiated close"
);
});
}
);
// Test that onConnection hook is NOT called on session resume
Tinytest.addAsync(
"livedata server - DDP resumption: onConnection not called on resume",
async function (test) {
await withTestGracePeriod(async () => {
let onConnectionCallCount = 0;
let lastConnectionId = null;
const handle = Meteor.onConnection(function (conn) {
onConnectionCallCount++;
lastConnectionId = conn.id;
});
// Create initial connection
const clientConn = DDP.connect(Meteor.absoluteUrl(), { retry: false });
// Wait for connection with timeout
await pollUntil(() => clientConn._lastSessionId);
const originalSessionId = clientConn._lastSessionId;
test.equal(onConnectionCallCount, 1, "onConnection should be called once on initial connect");
test.equal(lastConnectionId, originalSessionId);
// Get the server session and verify it exists
const serverSession = Meteor.server.sessions.get(originalSessionId);
test.isTrue(serverSession, "Server session should exist");
// Simulate unexpected disconnect
clientConn._stream._lostConnection();
// Wait a bit (less than grace period)
await sleep(WITHIN_GRACE_PERIOD_MS);
// Session should still exist
test.isTrue(
Meteor.server.sessions.has(originalSessionId),
"Session should still exist during grace period"
);
// Reconnect - this should resume the session
clientConn._stream.reconnect();
// Wait for reconnection with timeout
await pollUntil(() => clientConn.status().connected);
// Give it a moment to process
await sleep(WITHIN_GRACE_PERIOD_MS);
// IMPORTANT: Assert that session was actually resumed (same session ID)
// If this fails, the test is not actually testing resumption
test.equal(
clientConn._lastSessionId,
originalSessionId,
"Session should be resumed with same session ID"
);
// onConnection should NOT have been called again for a resumed session
test.equal(
onConnectionCallCount,
1,
"onConnection should not be called again on session resume"
);
handle.stop();
clientConn.disconnect();
});
}
);
// Test that server-initiated close prevents session resumption
Tinytest.addAsync(
"livedata server - DDP resumption: server close prevents resumption",
async function (test) {
await withTestGracePeriod(async () => {
let onConnectionCallCount = 0;
const handle = Meteor.onConnection(function (conn) {
onConnectionCallCount++;
});
// Create initial connection
const clientConn = DDP.connect(Meteor.absoluteUrl(), { retry: true });
// Wait for connection with timeout
await pollUntil(() => clientConn._lastSessionId);
const originalSessionId = clientConn._lastSessionId;
test.equal(onConnectionCallCount, 1, "onConnection should be called once on initial connect");
// Get the server session
const serverSession = Meteor.server.sessions.get(originalSessionId);
test.isTrue(serverSession, "Server session should exist");
// Server-initiated close (kick the client)
serverSession.connectionHandle.close();
// Wait for client to reconnect with new session (retry is enabled)
await pollUntil(() =>
clientConn.status().connected && clientConn._lastSessionId !== originalSessionId
);
// Should have a NEW session (not resumed)
test.notEqual(
clientConn._lastSessionId,
originalSessionId,
"Should have a new session ID after server-initiated close"
);
// onConnection should have been called again (new session, not resumed)
test.equal(
onConnectionCallCount,
2,
"onConnection should be called again after server-initiated close"
);
handle.stop();
clientConn.disconnect();
});
}
);
// Test that graceful client disconnect prevents session resumption
Tinytest.addAsync(
"livedata server - DDP resumption: graceful disconnect prevents resumption",
async function (test) {
await withTestGracePeriod(async () => {
let onConnectionCallCount = 0;
const handle = Meteor.onConnection(function (conn) {
onConnectionCallCount++;
});
// Create initial connection with retry enabled
const clientConn = DDP.connect(Meteor.absoluteUrl(), { retry: true });
// Wait for connection with timeout
await pollUntil(() => clientConn._lastSessionId);
const originalSessionId = clientConn._lastSessionId;
test.equal(onConnectionCallCount, 1, "onConnection should be called once on initial connect");
// Graceful disconnect (sends disconnect message)
clientConn.disconnect();
// Wait for session to be removed
await sleep(WITHIN_GRACE_PERIOD_MS);
// Session should be removed immediately
test.isFalse(
Meteor.server.sessions.has(originalSessionId),
"Session should be removed after graceful disconnect"
);
// Reconnect
clientConn.reconnect();
// Wait for reconnection with timeout
await pollUntil(() => clientConn.status().connected);
// Should have a NEW session (not resumed, because we gracefully disconnected)
test.notEqual(
clientConn._lastSessionId,
originalSessionId,
"Should have a new session ID after graceful disconnect and reconnect"
);
// onConnection should have been called again
test.equal(
onConnectionCallCount,
2,
"onConnection should be called again after graceful disconnect"
);
handle.stop();
clientConn.disconnect();
});
}
);
// Test that receivedCount mismatch causes new session (not resume)
Tinytest.addAsync(
"livedata server - DDP resumption: count mismatch creates new session",
async function (test) {
await withTestGracePeriod(async () => {
let onConnectionCallCount = 0;
const handle = Meteor.onConnection(function (conn) {
onConnectionCallCount++;
});
// Create initial connection
const clientConn = DDP.connect(Meteor.absoluteUrl(), { retry: false });
// Wait for connection with timeout
await pollUntil(() => clientConn._lastSessionId);
const originalSessionId = clientConn._lastSessionId;
test.equal(onConnectionCallCount, 1, "onConnection should be called once on initial connect");
// Get the server session
const serverSession = Meteor.server.sessions.get(originalSessionId);
test.isTrue(serverSession, "Server session should exist");
// Artificially increment sentCount to create a mismatch
// This simulates messages sent by server that client didn't receive
serverSession.sentCount += 5;
// Simulate unexpected disconnect
clientConn._stream._lostConnection();
// Wait a bit (less than grace period)
await sleep(WITHIN_GRACE_PERIOD_MS);
// Session should still exist during grace period
test.isTrue(
Meteor.server.sessions.has(originalSessionId),
"Session should still exist during grace period"
);
// Reconnect - this should NOT resume due to count mismatch
clientConn._stream.reconnect();
// Wait for reconnection with timeout
await pollUntil(() => clientConn.status().connected);
// Give it a moment to process
await sleep(WITHIN_GRACE_PERIOD_MS);
// Should have a NEW session (counts didn't match)
test.notEqual(
clientConn._lastSessionId,
originalSessionId,
"Should have a new session ID when counts mismatch"
);
// onConnection should have been called again (new session)
test.equal(
onConnectionCallCount,
2,
"onConnection should be called again when counts mismatch"
);
handle.stop();
clientConn.disconnect();
});
}
);

View File

@@ -1,10 +1,11 @@
Package.describe({
summary: "Meteor's latency-compensated distributed data server",
version: "3.1.2",
version: '3.2.0-beta350.7',
documentation: null,
});
Npm.depends({
"faye-websocket": "0.11.4",
"permessage-deflate2": "0.1.8",
sockjs: "0.3.24",
"lodash.once": "4.1.1",
@@ -77,4 +78,5 @@ Package.onTest(function (api) {
api.addFiles("livedata_server_async_tests.js", "server");
api.addFiles("session_view_tests.js", ["server"]);
api.addFiles("crossbar_tests.js", ["server"]);
api.addFiles("raw_websocket_tests.js", "server");
});

View File

@@ -0,0 +1,155 @@
// Tests for DISABLE_SOCKJS raw WebSocket mode and general /websocket endpoint.
// These tests verify that DDP connections work correctly regardless of
// the transport mode (SockJS or raw WebSocket).
const http = Npm.require('http');
const disableSockJS = !!process.env.DISABLE_SOCKJS;
// Helper: make an HTTP GET request to the given URL, return { statusCode, headers, body }
function httpGet(url) {
return new Promise((resolve, reject) => {
http.get(url, (res) => {
let body = '';
res.on('data', (chunk) => { body += chunk; });
res.on('end', () => {
resolve({ statusCode: res.statusCode, headers: res.headers, body });
});
}).on('error', reject);
});
}
// --- Tests that run in BOTH modes ---
Tinytest.addAsync(
'stream server - DDP connection works over /websocket',
function (test, onComplete) {
makeTestConnection(
test,
function (clientConn, serverConn) {
test.isTrue(clientConn.status().connected, 'client should be connected');
test.isTrue(typeof serverConn.id === 'string', 'server connection should have an id');
test.isTrue(serverConn.id.length > 0, 'server connection id should not be empty');
clientConn.disconnect();
onComplete();
},
onComplete
);
}
);
Tinytest.addAsync(
'stream server - connection supports method calls',
function (test, onComplete) {
makeTestConnection(
test,
function (clientConn, serverConn) {
clientConn.callAsync('livedata_server_test_inner').then((res) => {
test.equal(res, serverConn.id,
'method should see the correct connection id');
clientConn.disconnect();
onComplete();
});
},
onComplete
);
}
);
Tinytest.addAsync(
'stream server - plain HTTP to /websocket returns 400',
async function (test) {
// In both modes, a plain HTTP GET to /websocket should not serve the app.
// In DISABLE_SOCKJS mode, our middleware returns 400.
// In SockJS mode, SockJS handles it (returns non-200 or redirect).
const url = Meteor.absoluteUrl('websocket');
const result = await httpGet(url);
if (disableSockJS) {
test.equal(result.statusCode, 400,
'DISABLE_SOCKJS: /websocket should return 400 for plain HTTP');
test.equal(result.body, 'Not a valid websocket request',
'DISABLE_SOCKJS: /websocket should return clear error message');
} else {
// In SockJS mode, /websocket gets rewritten to /sockjs/websocket
// which returns a non-200 for plain HTTP. Just verify it's not
// serving the app HTML.
test.isTrue(
result.statusCode !== 200 || !result.body.includes('<!DOCTYPE'),
'SockJS: /websocket should not serve app HTML'
);
}
}
);
// --- Tests specific to DISABLE_SOCKJS mode ---
if (disableSockJS) {
Tinytest.addAsync(
'stream server - DISABLE_SOCKJS: /sockjs/info is not available',
async function (test) {
const url = Meteor.absoluteUrl('sockjs/info');
const result = await httpGet(url);
// Without SockJS, there's no /sockjs/info endpoint.
// The request falls through to the app handler (returns HTML, not JSON).
const ct = result.headers['content-type'] || '';
test.isFalse(
ct.includes('application/json'),
'/sockjs/info should not return JSON when DISABLE_SOCKJS is set'
);
}
);
Tinytest.addAsync(
'stream server - DISABLE_SOCKJS: connection has websocket-raw protocol',
function (test, onComplete) {
// In raw WebSocket mode, the server-side connection object should
// have protocol 'websocket-raw' (set by RawWebSocketConnection).
//
// We verify this by checking the server connection's internal state
// via the Meteor.onConnection hook.
const handle = Meteor.onConnection(function (conn) {
handle.stop();
// The connection object exposed via onConnection doesn't directly
// expose the protocol, but we can verify it has the expected shape.
test.isTrue(typeof conn.id === 'string');
test.isTrue(typeof conn.close === 'function');
test.isTrue(typeof conn.onClose === 'function');
test.isNotUndefined(conn.clientAddress,
'connection should have clientAddress');
});
makeTestConnection(
test,
function (clientConn, serverConn) {
clientConn.disconnect();
onComplete();
},
onComplete
);
}
);
}
// --- Tests specific to SockJS mode (default) ---
if (!disableSockJS) {
Tinytest.addAsync(
'stream server - SockJS: /sockjs/info returns JSON',
async function (test) {
const url = Meteor.absoluteUrl('sockjs/info');
const result = await httpGet(url);
test.equal(result.statusCode, 200,
'/sockjs/info should return 200 in SockJS mode');
const ct = result.headers['content-type'] || '';
test.isTrue(ct.includes('application/json'),
'/sockjs/info should return application/json');
const info = JSON.parse(result.body);
test.isTrue('websocket' in info,
'/sockjs/info response should contain websocket field');
}
);
}

View File

@@ -100,7 +100,10 @@ export class SessionCollectionView {
const docView = this.documents.get(id);
if (!docView) {
throw new Error(`Could not find element with id ${id} to change`);
// Document was already removed. This can happen in high-concurrency scenarios
// where the cache is updated synchronously but callbacks are processed
// asynchronously, and a remove was processed before this change.
return;
}
Object.entries(changed).forEach(([key, value]) => {
@@ -118,7 +121,10 @@ export class SessionCollectionView {
const docView = this.documents.get(id);
if (!docView) {
throw new Error(`Removed nonexistent document ${id}`);
// Document was already removed. This can happen in high-concurrency scenarios
// where the cache is updated synchronously but callbacks are processed
// asynchronously, causing duplicate removal attempts.
return;
}
docView.existsIn.delete(subscriptionHandle);
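Both hunks apply the same pattern: a mutation callback that used to assert the document's presence now degrades to a no-op, because with synchronous cache updates and asynchronous callback delivery a remove can legitimately win the race. A minimal sketch of the tolerant variant (a toy `TolerantView`, not the real SessionCollectionView):

```javascript
// Toy document view: changed/removed tolerate ids that are already gone
// instead of throwing, mirroring the high-concurrency fix above.
class TolerantView {
  constructor() {
    this.documents = new Map();
  }
  added(id, fields) {
    this.documents.set(id, { ...fields });
  }
  changed(id, changed) {
    const doc = this.documents.get(id);
    if (!doc) return; // already removed by a racing callback
    Object.assign(doc, changed);
  }
  removed(id) {
    if (!this.documents.has(id)) return; // duplicate removal attempt
    this.documents.delete(id);
  }
}

const view = new TolerantView();
view.added('a', { x: 1 });
view.removed('a');
view.changed('a', { x: 2 }); // no-op; the strict version would throw here
view.removed('a');           // no-op as well
```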

View File

@@ -1,4 +1,5 @@
import once from 'lodash.once';
import { EventEmitter } from 'events';
import zlib from 'node:zlib';
// By default, we use the permessage-deflate extension with default
@@ -33,69 +34,94 @@ var websocketExtensions = once(function () {
});
var pathPrefix = __meteor_runtime_config__.ROOT_URL_PATH_PREFIX || "";
var disableSockJS = !!process.env.DISABLE_SOCKJS;
// Wrapper around a raw faye-websocket connection that provides the same
// interface as a SockJS connection (as expected by _onConnection and
// livedata_server.js).
class RawWebSocketConnection extends EventEmitter {
constructor(ws, req, rawSocket) {
super();
this._ws = ws;
this._rawSocket = rawSocket;
this.protocol = 'websocket-raw';
this.id = Random.id();
// Copy relevant headers (same set as SockJS transport.js)
this.headers = {};
const headerKeys = [
'referer', 'x-client-ip', 'x-forwarded-for',
'x-forwarded-host', 'x-forwarded-port', 'x-cluster-client-ip',
'via', 'x-real-ip', 'x-forwarded-proto', 'x-ssl', 'dnt',
'host', 'user-agent', 'accept-language'
];
for (const key of headerKeys) {
if (req.headers[key]) this.headers[key] = req.headers[key];
}
this.remoteAddress = rawSocket.remoteAddress;
this.remotePort = rawSocket.remotePort;
this.url = req.url;
// Compatibility with SockJS internals that stream_server accesses
this._session = {
recv: {
connection: rawSocket,
protocol: 'websocket-raw'
}
};
ws.on('message', (event) => {
this.emit('data', event.data);
});
ws.on('close', () => {
this.emit('close');
this._ws = null;
});
ws.on('error', () => {
this.emit('close');
this._ws = null;
});
}
write(data) {
if (this._ws) this._ws.send(data);
}
send(data) {
this.write(data);
}
close() {
if (this._ws) this._ws.close();
}
setWebsocketTimeout(timeout) {
if (this._rawSocket) {
this._rawSocket.setTimeout(timeout);
}
}
}
StreamServer = function () {
var self = this;
self.registration_callbacks = [];
self.open_sockets = [];
// Because we are installing directly onto WebApp.httpServer instead of using
// WebApp.app, we have to process the path prefix ourselves.
self.prefix = pathPrefix + '/sockjs';
RoutePolicy.declare(self.prefix + '/', 'network');
// set up sockjs
var sockjs = Npm.require('sockjs');
var serverOptions = {
prefix: self.prefix,
log: function() {},
// this is the default, but we code it explicitly because we depend
// on it in stream_client:HEARTBEAT_TIMEOUT
heartbeat_delay: 45000,
// The default disconnect_delay is 5 seconds, but if the server ends up CPU
// bound for that much time, SockJS might not notice that the user has
// reconnected because the timer (of disconnect_delay ms) can fire before
// SockJS processes the new connection. Eventually we'll fix this by not
// combining CPU-heavy processing with SockJS termination (eg a proxy which
// converts to Unix sockets) but for now, raise the delay.
disconnect_delay: 60 * 1000,
// Allow disabling of CORS requests to address
// https://github.com/meteor/meteor/issues/8317.
disable_cors: !!process.env.DISABLE_SOCKJS_CORS,
// Set the USE_JSESSIONID environment variable to enable setting the
// JSESSIONID cookie. This is useful for setting up proxies with
// session affinity.
jsessionid: !!process.env.USE_JSESSIONID
};
// If you know your server environment (eg, proxies) will prevent websockets
// from ever working, set $DISABLE_WEBSOCKETS and SockJS clients (ie,
// browsers) will not waste time attempting to use them.
// (Your server will still have a /websocket endpoint.)
if (process.env.DISABLE_WEBSOCKETS) {
serverOptions.websocket = false;
if (disableSockJS) {
self._setupRawWebSocket();
} else {
serverOptions.faye_server_options = {
extensions: websocketExtensions()
};
self._setupSockJS();
}
};
self.server = sockjs.createServer(serverOptions);
Object.assign(StreamServer.prototype, {
// Shared connection handler used by both SockJS and raw WebSocket paths.
_onConnection(socket) {
var self = this;
// Install the sockjs handlers, but we want to keep around our own particular
// request handler that adjusts idle timeouts while we have an outstanding
// request. This compensates for the fact that sockjs removes all listeners
// for "request" to add its own.
WebApp.httpServer.removeListener(
'request', WebApp._timeoutAdjustmentRequestCallback);
self.server.installHandlers(WebApp.httpServer);
WebApp.httpServer.addListener(
'request', WebApp._timeoutAdjustmentRequestCallback);
// Support the /websocket endpoint
self._redirectWebsocketEndpoint();
self.server.on('connection', function (socket) {
// sockjs sometimes passes us null instead of a socket object
// so we need to guard against that. see:
// https://github.com/sockjs/sockjs-node/issues/121
@@ -112,18 +138,23 @@ StreamServer = function () {
// by explicitly setting the socket timeout to a relatively large time here,
// and setting it back to zero when we set up the heartbeat in
// livedata_server.js.
socket.setWebsocketTimeout = function (timeout) {
if ((socket.protocol === 'websocket' ||
socket.protocol === 'websocket-raw')
&& socket._session.recv) {
socket._session.recv.connection.setTimeout(timeout);
}
};
if (!socket.setWebsocketTimeout) {
socket.setWebsocketTimeout = function (timeout) {
if ((socket.protocol === 'websocket' ||
socket.protocol === 'websocket-raw')
&& socket._session.recv) {
socket._session.recv.connection.setTimeout(timeout);
}
};
}
socket.setWebsocketTimeout(45 * 1000);
socket.send = function (data) {
socket.write(data);
};
if (!socket.send) {
socket.send = function (data) {
socket.write(data);
};
}
socket.on('close', function () {
self.open_sockets = self.open_sockets.filter(function(value) {
return value !== socket;
@@ -142,11 +173,124 @@ StreamServer = function () {
self.registration_callbacks.forEach(function (callback) {
callback(socket);
});
});
},
};
// Set up the traditional SockJS transport (default when DISABLE_SOCKJS is
// not set). This is the original code path, moved here verbatim.
_setupSockJS() {
var self = this;
// Because we are installing directly onto WebApp.httpServer instead of using
// WebApp.app, we have to process the path prefix ourselves.
self.prefix = pathPrefix + '/sockjs';
RoutePolicy.declare(self.prefix + '/', 'network');
// set up sockjs
var sockjs = Npm.require('sockjs');
var serverOptions = {
prefix: self.prefix,
log: function() {},
// this is the default, but we code it explicitly because we depend
// on it in stream_client:HEARTBEAT_TIMEOUT
heartbeat_delay: 45000,
// The default disconnect_delay is 5 seconds, but if the server ends up CPU
// bound for that much time, SockJS might not notice that the user has
// reconnected because the timer (of disconnect_delay ms) can fire before
// SockJS processes the new connection. Eventually we'll fix this by not
// combining CPU-heavy processing with SockJS termination (eg a proxy which
// converts to Unix sockets) but for now, raise the delay.
disconnect_delay: 60 * 1000,
// Allow disabling of CORS requests to address
// https://github.com/meteor/meteor/issues/8317.
disable_cors: !!process.env.DISABLE_SOCKJS_CORS,
// Set the USE_JSESSIONID environment variable to enable setting the
// JSESSIONID cookie. This is useful for setting up proxies with
// session affinity.
jsessionid: !!process.env.USE_JSESSIONID
};
// If you know your server environment (eg, proxies) will prevent websockets
// from ever working, set $DISABLE_WEBSOCKETS and SockJS clients (ie,
// browsers) will not waste time attempting to use them.
// (Your server will still have a /websocket endpoint.)
if (process.env.DISABLE_WEBSOCKETS) {
serverOptions.websocket = false;
} else {
serverOptions.faye_server_options = {
extensions: websocketExtensions()
};
}
self.server = sockjs.createServer(serverOptions);
// Install the sockjs handlers, but we want to keep around our own particular
// request handler that adjusts idle timeouts while we have an outstanding
// request. This compensates for the fact that sockjs removes all listeners
// for "request" to add its own.
WebApp.httpServer.removeListener(
'request', WebApp._timeoutAdjustmentRequestCallback);
self.server.installHandlers(WebApp.httpServer);
WebApp.httpServer.addListener(
'request', WebApp._timeoutAdjustmentRequestCallback);
// Support the /websocket endpoint
self._redirectWebsocketEndpoint();
self.server.on('connection', function (socket) {
self._onConnection(socket);
});
},
// Set up raw WebSocket transport (when DISABLE_SOCKJS=1). No SockJS server,
// no polling transports, no /sockjs/info endpoint. Direct WebSocket only.
_setupRawWebSocket() {
var self = this;
var FayeWebSocket = Npm.require('faye-websocket');
RoutePolicy.declare(pathPrefix + '/websocket/', 'network');
// Reject plain HTTP requests to /websocket with a clear error message
// (same behavior as SockJS). Without this, they'd fall through to the
// app and return the main HTML page.
WebApp.rawConnectHandlers.use(function (req, res, next) {
var pathname = new URL(req.url, 'http://localhost').pathname;
if (pathname === pathPrefix + '/websocket' ||
pathname === pathPrefix + '/websocket/') {
res.writeHead(400, { 'Content-Type': 'text/plain' });
res.end('Not a valid websocket request');
} else {
next();
}
});
// We must take over existing 'upgrade' listeners (similar to what SockJS
// does via overshadowListeners) so that our handler runs first for the
// /websocket path, and other handlers (HMR, etc.) get the rest.
var httpServer = WebApp.httpServer;
var oldUpgradeListeners = httpServer.listeners('upgrade').slice(0);
httpServer.removeAllListeners('upgrade');
httpServer.on('upgrade', function (req, rawSocket, head) {
// req.url is a relative path — new URL() requires a base to parse it.
var pathname = new URL(req.url, 'http://localhost').pathname;
if (FayeWebSocket.isWebSocket(req) &&
(pathname === pathPrefix + '/websocket' ||
pathname === pathPrefix + '/websocket/')) {
var wsOptions = { extensions: websocketExtensions() };
var ws = new FayeWebSocket(req, rawSocket, head, null, wsOptions);
var meteorSocket = new RawWebSocketConnection(ws, req, rawSocket);
self._onConnection(meteorSocket);
} else {
// Pass to other upgrade handlers (HMR, etc.)
for (var i = 0; i < oldUpgradeListeners.length; i++) {
oldUpgradeListeners[i].call(httpServer, req, rawSocket, head);
}
}
});
},
Object.assign(StreamServer.prototype, {
// call my callback when a new socket connects.
// also call it for all current connections.
register: function (callback) {
@@ -201,4 +345,4 @@ Object.assign(StreamServer.prototype, {
httpServer.addListener(event, newListener);
});
}
});
});

View File

@@ -2,7 +2,7 @@ import {
isFunction,
isObject,
keysOf,
lengthOf,
lengthOfWithLimit,
hasOwn,
convertMapToObject,
isArguments,
@@ -98,7 +98,7 @@ EJSON.addType = (name, factory) => {
const builtinConverters = [
{ // Date
matchJSONValue(obj) {
return hasOwn(obj, '$date') && lengthOf(obj) === 1;
return hasOwn(obj, '$date') && lengthOfWithLimit(obj, 1) === 1;
},
matchObject(obj) {
return obj instanceof Date;
@@ -114,7 +114,7 @@ const builtinConverters = [
matchJSONValue(obj) {
return hasOwn(obj, '$regexp')
&& hasOwn(obj, '$flags')
&& lengthOf(obj) === 2;
&& lengthOfWithLimit(obj, 2) === 2;
},
matchObject(obj) {
return obj instanceof RegExp;
@@ -140,7 +140,7 @@ const builtinConverters = [
{ // NaN, Inf, -Inf. (These are the only objects with typeof !== 'object'
// which we match.)
matchJSONValue(obj) {
return hasOwn(obj, '$InfNaN') && lengthOf(obj) === 1;
return hasOwn(obj, '$InfNaN') && lengthOfWithLimit(obj, 1) === 1;
},
matchObject: isInfOrNaN,
toJSONValue(obj) {
@@ -160,7 +160,7 @@ const builtinConverters = [
},
{ // Binary
matchJSONValue(obj) {
return hasOwn(obj, '$binary') && lengthOf(obj) === 1;
return hasOwn(obj, '$binary') && lengthOfWithLimit(obj, 1) === 1;
},
matchObject(obj) {
return typeof Uint8Array !== 'undefined' && obj instanceof Uint8Array
@@ -175,12 +175,12 @@ const builtinConverters = [
},
{ // Escaping one level
matchJSONValue(obj) {
return hasOwn(obj, '$escape') && lengthOf(obj) === 1;
return hasOwn(obj, '$escape') && lengthOfWithLimit(obj, 1) === 1;
},
matchObject(obj) {
let match = false;
if (obj) {
const keyCount = lengthOf(obj);
const keyCount = lengthOfWithLimit(obj, 2);
if (keyCount === 1 || keyCount === 2) {
match =
builtinConverters.some(converter => converter.matchJSONValue(obj));
@@ -206,7 +206,7 @@ const builtinConverters = [
{ // Custom
matchJSONValue(obj) {
return hasOwn(obj, '$type')
&& hasOwn(obj, '$value') && lengthOf(obj) === 2;
&& hasOwn(obj, '$value') && lengthOfWithLimit(obj, 2) === 2;
},
matchObject(obj) {
return EJSON._isCustomType(obj);
@@ -288,25 +288,72 @@ const adjustTypesToJSONValue = obj => {
EJSON._adjustTypesToJSONValue = adjustTypesToJSONValue;
// Copy-on-write recursive EJSON→JSON converter.
// Only allocates new objects/arrays along paths that actually change,
// returning the original reference when nothing needs conversion.
const toJSONValueDeep = value => {
if (value === null || value === undefined) {
return value;
}
// Atom-level conversion (Date, Binary, custom types, etc.)
const replaced = toJSONValueHelper(value);
if (replaced !== undefined) {
return replaced;
}
// Primitives that aren't Inf/NaN pass through unchanged.
if (typeof value !== 'object') {
// Inf/NaN are the only non-object values that need conversion,
// and toJSONValueHelper already handled them.
return value;
}
const isArray = Array.isArray(value);
let result = null; // stays null until first change detected
if (isArray) {
for (let i = 0; i < value.length; i++) {
const child = value[i];
const converted = toJSONValueDeep(child);
if (converted !== child) {
result ??= value.slice(0, i);
result.push(converted);
} else if (result !== null) {
result.push(child);
}
}
} else {
const keys = keysOf(value);
for (let i = 0; i < keys.length; i++) {
const key = keys[i];
const child = value[key];
const converted = toJSONValueDeep(child);
if (converted !== child) {
if (result === null) {
result = {};
// backfill preceding keys
for (let j = 0; j < i; j++) {
result[keys[j]] = value[keys[j]];
}
}
result[key] = converted;
} else if (result !== null) {
result[key] = child;
}
}
}
return result ?? value;
};
/**
* @summary Serialize an EJSON-compatible value into its plain JSON
* representation.
* @locus Anywhere
* @param {EJSON} val A value to serialize to plain JSON.
*/
EJSON.toJSONValue = item => {
const changed = toJSONValueHelper(item);
if (changed !== undefined) {
return changed;
}
let newItem = item;
if (isObject(item)) {
newItem = EJSON.clone(item);
adjustTypesToJSONValue(newItem);
}
return newItem;
};
EJSON.toJSONValue = item => toJSONValueDeep(item);
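The copy-on-write strategy generalizes beyond EJSON. This standalone sketch (a hypothetical `mapDeep` helper, not part of the package) isolates the key property: containers are allocated only along paths where a leaf actually changed, so untouched subtrees keep their original references and an entirely unchanged input comes back as the same object.

```javascript
// Copy-on-write deep map: applies `atom` to leaf values, allocating new
// containers only along paths where something actually changed.
const mapDeep = (value, atom) => {
  if (value === null || typeof value !== 'object') return atom(value);
  const isArray = Array.isArray(value);
  const keys = isArray ? null : Object.keys(value);
  const len = isArray ? value.length : keys.length;
  let result = null; // stays null until the first change is detected
  for (let i = 0; i < len; i++) {
    const key = isArray ? i : keys[i];
    const child = value[key];
    const converted = mapDeep(child, atom);
    if (converted !== child) {
      if (result === null) {
        // Backfill everything before the first changed entry.
        result = isArray ? value.slice(0, i) : {};
        if (!isArray) {
          for (let j = 0; j < i; j++) result[keys[j]] = value[keys[j]];
        }
      }
      if (isArray) result.push(converted); else result[key] = converted;
    } else if (result !== null) {
      if (isArray) result.push(child); else result[key] = child;
    }
  }
  return result ?? value; // unchanged input: same reference back
};

const input = { a: { n: 1 }, b: { n: 2 } };
// Identity atom: nothing changes, so the exact same object is returned.
const same = mapDeep(input, v => v);
// Doubling atom that only touches numbers > 1: only the `b` subtree changes.
const out = mapDeep(input, v => (typeof v === 'number' && v > 1 ? v * 2 : v));
```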
// Either return the argument changed to have the non-json
// rep of itself (the Object version) or the argument itself.
@@ -364,19 +411,62 @@ const adjustTypesFromJSONValue = obj => {
EJSON._adjustTypesFromJSONValue = adjustTypesFromJSONValue;
// Copy-on-write recursive JSON→EJSON converter.
// Same lazy-allocation strategy as toJSONValueDeep.
const fromJSONValueDeep = value => {
if (value === null || typeof value !== 'object') {
return value;
}
// Check if this value itself is a JSON-encoded EJSON type (e.g. {$date: ...})
const replaced = fromJSONValueHelper(value);
if (replaced !== value) {
return replaced;
}
const isArray = Array.isArray(value);
let result = null;
if (isArray) {
for (let i = 0; i < value.length; i++) {
const child = value[i];
const converted = fromJSONValueDeep(child);
if (converted !== child) {
result ??= value.slice(0, i);
result.push(converted);
} else if (result !== null) {
result.push(child);
}
}
} else {
const keys = keysOf(value);
for (let i = 0; i < keys.length; i++) {
const key = keys[i];
const child = value[key];
const converted = fromJSONValueDeep(child);
if (converted !== child) {
if (result === null) {
result = {};
for (let j = 0; j < i; j++) {
result[keys[j]] = value[keys[j]];
}
}
result[key] = converted;
} else if (result !== null) {
result[key] = child;
}
}
}
return result ?? value;
};
/**
* @summary Deserialize an EJSON value from its plain JSON representation.
* @locus Anywhere
* @param {JSONCompatible} val A value to deserialize into EJSON.
*/
EJSON.fromJSONValue = item => {
let changed = fromJSONValueHelper(item);
if (changed === item && isObject(item)) {
changed = EJSON.clone(item);
adjustTypesFromJSONValue(changed);
}
return changed;
};
EJSON.fromJSONValue = item => fromJSONValueDeep(item);
/**
* @summary Serialize a value to a string. For EJSON values, the serialization
@@ -447,18 +537,21 @@ EJSON.equals = (a, b, options) => {
return true;
}
// This differs from the IEEE spec for NaN equality, b/c we don't want
// anything ever with a NaN to be poisoned from becoming equal to anything.
if (Number.isNaN(a) && Number.isNaN(b)) {
return true;
}
// if either one is falsy, they'd have to be === to be equal
if (!a || !b) {
// If types differ, they can't be equal.
// This also handles mixed null/primitive cases since typeof null is 'object'.
if (typeof a !== typeof b) {
return false;
}
if (!(isObject(a) && isObject(b))) {
// Same-type primitives that aren't === can only be equal if both are NaN.
// This skips the NaN check entirely for strings, booleans, etc.
if (typeof a !== 'object') {
return Number.isNaN(a) && Number.isNaN(b);
}
// Both are typeof 'object' — but either could be null.
// (If both were null, a === b would have caught it above.)
if (a === null || b === null) {
return false;
}
@@ -518,6 +611,9 @@ EJSON.equals = (a, b, options) => {
let ret;
const aKeys = keysOf(a);
const bKeys = keysOf(b);
if (aKeys.length !== bKeys.length) {
return false;
}
if (keyOrderSensitive) {
i = 0;
ret = aKeys.every(key => {

View File

@@ -63,6 +63,109 @@ Tinytest.add('ejson - equality and falsiness', test => {
test.isFalse(EJSON.equals({foo: 'foo'}, undefined));
});
Tinytest.add('ejson - equals type-mismatch early exit', test => {
// Cross-type primitives: typeof a !== typeof b → false
test.isFalse(EJSON.equals('hello', 42));
test.isFalse(EJSON.equals(42, 'hello'));
test.isFalse(EJSON.equals(1, true));
test.isFalse(EJSON.equals(true, 1));
test.isFalse(EJSON.equals('true', true));
test.isFalse(EJSON.equals(true, 'true'));
test.isFalse(EJSON.equals('1', 1));
test.isFalse(EJSON.equals(1, '1'));
// Falsy cross-type: both are falsy but different types
test.isFalse(EJSON.equals(0, false));
test.isFalse(EJSON.equals(false, 0));
test.isFalse(EJSON.equals('', 0));
test.isFalse(EJSON.equals(0, ''));
test.isFalse(EJSON.equals('', false));
test.isFalse(EJSON.equals(false, ''));
// null/undefined vs primitives (typeof null is 'object', differs from 'number'/'string')
test.isFalse(EJSON.equals(null, 0));
test.isFalse(EJSON.equals(0, null));
test.isFalse(EJSON.equals(null, ''));
test.isFalse(EJSON.equals('', null));
test.isFalse(EJSON.equals(null, false));
test.isFalse(EJSON.equals(false, null));
test.isFalse(EJSON.equals(undefined, 0));
test.isFalse(EJSON.equals(0, undefined));
test.isFalse(EJSON.equals(undefined, ''));
test.isFalse(EJSON.equals('', undefined));
test.isFalse(EJSON.equals(undefined, false));
test.isFalse(EJSON.equals(false, undefined));
test.isFalse(EJSON.equals(null, undefined));
test.isFalse(EJSON.equals(undefined, null));
});
Tinytest.add('ejson - equals same-type primitives', test => {
// Same-type, same-value → caught by a === b
test.isTrue(EJSON.equals(0, 0));
test.isTrue(EJSON.equals(1, 1));
test.isTrue(EJSON.equals(-1, -1));
test.isTrue(EJSON.equals('', ''));
test.isTrue(EJSON.equals('hello', 'hello'));
test.isTrue(EJSON.equals(true, true));
test.isTrue(EJSON.equals(false, false));
// Same-type, different-value → typeof a !== 'object', then NaN check returns false
test.isFalse(EJSON.equals(1, 2));
test.isFalse(EJSON.equals('a', 'b'));
test.isFalse(EJSON.equals(true, false));
test.isFalse(EJSON.equals(false, true));
test.isFalse(EJSON.equals(0, 1));
test.isTrue(EJSON.equals(0, -0)); // 0 === -0 in JS, caught by a === b
});
Tinytest.add('ejson - equals null vs object', test => {
// Both typeof 'object', but one is null
test.isFalse(EJSON.equals(null, {}));
test.isFalse(EJSON.equals({}, null));
test.isFalse(EJSON.equals(null, []));
test.isFalse(EJSON.equals([], null));
test.isFalse(EJSON.equals(null, new Date()));
test.isFalse(EJSON.equals(new Date(), null));
});
Tinytest.add('ejson - equals nested falsy and type-mismatch fields', test => {
// Objects with falsy fields of different types
test.isFalse(EJSON.equals({a: 0}, {a: false}));
test.isFalse(EJSON.equals({a: ''}, {a: 0}));
test.isFalse(EJSON.equals({a: ''}, {a: false}));
test.isFalse(EJSON.equals({a: null}, {a: undefined}));
test.isFalse(EJSON.equals({a: null}, {a: 0}));
test.isFalse(EJSON.equals({a: null}, {a: ''}));
test.isFalse(EJSON.equals({a: null}, {a: false}));
// Objects with same falsy values should be equal
test.isTrue(EJSON.equals({a: 0}, {a: 0}));
test.isTrue(EJSON.equals({a: ''}, {a: ''}));
test.isTrue(EJSON.equals({a: false}, {a: false}));
test.isTrue(EJSON.equals({a: null}, {a: null}));
test.isTrue(EJSON.equals({a: undefined}, {a: undefined}));
// Deeply nested type mismatches
test.isFalse(EJSON.equals(
{a: {b: {c: 0}}},
{a: {b: {c: false}}}
));
test.isFalse(EJSON.equals(
{a: {b: {c: null}}},
{a: {b: {c: undefined}}}
));
test.isTrue(EJSON.equals(
{a: {b: {c: 0}}},
{a: {b: {c: 0}}}
));
// Arrays with type-mismatched elements
test.isFalse(EJSON.equals([0, 1, 2], [false, 1, 2]));
test.isFalse(EJSON.equals([0, '', 2], [0, false, 2]));
test.isFalse(EJSON.equals([null], [undefined]));
test.isTrue(EJSON.equals([0, '', null], [0, '', null]));
});
Tinytest.add('ejson - NaN and Inf', test => {
test.equal(EJSON.parse('{"$InfNaN": 1}'), Infinity);
test.equal(EJSON.parse('{"$InfNaN": -1}'), -Infinity);
@@ -88,6 +191,200 @@ Tinytest.add('ejson - NaN and Inf', test => {
));
});
Tinytest.add('ejson - toJSONValue primitives pass through unchanged', test => {
test.equal(EJSON.toJSONValue(42), 42);
test.equal(EJSON.toJSONValue('hello'), 'hello');
test.equal(EJSON.toJSONValue(true), true);
test.equal(EJSON.toJSONValue(false), false);
test.equal(EJSON.toJSONValue(null), null);
test.equal(EJSON.toJSONValue(undefined), undefined);
test.equal(EJSON.toJSONValue(0), 0);
test.equal(EJSON.toJSONValue(''), '');
});
Tinytest.add('ejson - toJSONValue converts Date to {$date}', test => {
const d = new Date('2024-06-15T12:00:00Z');
const result = EJSON.toJSONValue(d);
test.equal(result, {$date: d.getTime()});
});
Tinytest.add('ejson - toJSONValue converts NaN and Infinity', test => {
test.equal(EJSON.toJSONValue(NaN), {$InfNaN: 0});
test.equal(EJSON.toJSONValue(Infinity), {$InfNaN: 1});
test.equal(EJSON.toJSONValue(-Infinity), {$InfNaN: -1});
});
Tinytest.add('ejson - toJSONValue handles pure-primitive objects', test => {
const obj = {a: 1, b: 'hello', c: true, d: null};
const result = EJSON.toJSONValue(obj);
test.equal(result, {a: 1, b: 'hello', c: true, d: null});
});
Tinytest.add('ejson - toJSONValue converts nested Dates', test => {
const d = new Date('2024-01-01');
const obj = {name: 'test', createdAt: d, meta: {updatedAt: d}};
const result = EJSON.toJSONValue(obj);
test.equal(result.name, 'test');
test.equal(result.createdAt, {$date: d.getTime()});
test.equal(result.meta, {updatedAt: {$date: d.getTime()}});
});
Tinytest.add('ejson - toJSONValue handles arrays', test => {
// Pure-primitive array
const arr = [1, 'two', true, null];
const result = EJSON.toJSONValue(arr);
test.equal(result, [1, 'two', true, null]);
// Array with a Date
const d = new Date();
const arrWithDate = ['a', d, 'b'];
const result2 = EJSON.toJSONValue(arrWithDate);
test.equal(result2[0], 'a');
test.equal(result2[1], {$date: d.getTime()});
test.equal(result2[2], 'b');
test.length(result2, 3);
// Empty array
test.equal(EJSON.toJSONValue([]), []);
});
Tinytest.add('ejson - toJSONValue handles NaN/Infinity inside objects and arrays', test => {
const obj = {a: 1, b: NaN, c: Infinity, d: -Infinity, e: 'normal'};
const result = EJSON.toJSONValue(obj);
test.equal(result.a, 1);
test.equal(result.b, {$InfNaN: 0});
test.equal(result.c, {$InfNaN: 1});
test.equal(result.d, {$InfNaN: -1});
test.equal(result.e, 'normal');
const arr = [NaN, 42, Infinity];
const result2 = EJSON.toJSONValue(arr);
test.equal(result2[0], {$InfNaN: 0});
test.equal(result2[1], 42);
test.equal(result2[2], {$InfNaN: 1});
});
Tinytest.add('ejson - toJSONValue escapes $-prefixed keys that look like EJSON types', test => {
const obj = {$date: 12345};
const result = EJSON.toJSONValue(obj);
// Should be wrapped in $escape to prevent misinterpretation
test.isTrue('$escape' in result);
test.equal(result.$escape.$date, 12345);
});
Tinytest.add('ejson - fromJSONValue primitives pass through unchanged', test => {
test.equal(EJSON.fromJSONValue(42), 42);
test.equal(EJSON.fromJSONValue('hello'), 'hello');
test.equal(EJSON.fromJSONValue(true), true);
test.equal(EJSON.fromJSONValue(false), false);
test.equal(EJSON.fromJSONValue(null), null);
test.equal(EJSON.fromJSONValue(0), 0);
test.equal(EJSON.fromJSONValue(''), '');
});
Tinytest.add('ejson - fromJSONValue converts {$date} to Date', test => {
const ts = 1718452800000;
const result = EJSON.fromJSONValue({$date: ts});
test.instanceOf(result, Date);
test.equal(result.getTime(), ts);
});
Tinytest.add('ejson - fromJSONValue converts {$InfNaN} back', test => {
test.isTrue(Number.isNaN(EJSON.fromJSONValue({$InfNaN: 0})));
test.equal(EJSON.fromJSONValue({$InfNaN: 1}), Infinity);
test.equal(EJSON.fromJSONValue({$InfNaN: -1}), -Infinity);
});
Tinytest.add('ejson - fromJSONValue handles pure-primitive objects', test => {
const obj = {a: 1, b: 'hello', c: true, d: null};
const result = EJSON.fromJSONValue(obj);
test.equal(result, {a: 1, b: 'hello', c: true, d: null});
});
Tinytest.add('ejson - fromJSONValue converts nested {$date} values', test => {
const ts = Date.now();
const obj = {name: 'test', createdAt: {$date: ts}, meta: {updatedAt: {$date: ts}}};
const result = EJSON.fromJSONValue(obj);
test.equal(result.name, 'test');
test.instanceOf(result.createdAt, Date);
test.equal(result.createdAt.getTime(), ts);
test.instanceOf(result.meta.updatedAt, Date);
test.equal(result.meta.updatedAt.getTime(), ts);
});
Tinytest.add('ejson - fromJSONValue handles arrays with EJSON types', test => {
const ts = Date.now();
const arr = ['a', {$date: ts}, 'b'];
const result = EJSON.fromJSONValue(arr);
test.equal(result[0], 'a');
test.instanceOf(result[1], Date);
test.equal(result[1].getTime(), ts);
test.equal(result[2], 'b');
test.length(result, 3);
// Pure-primitive array
test.equal(EJSON.fromJSONValue([1, 2, 3]), [1, 2, 3]);
// Empty array
test.equal(EJSON.fromJSONValue([]), []);
});
Tinytest.add('ejson - fromJSONValue unescapes $escape wrapper', test => {
const input = {$escape: {$date: 12345}};
const result = EJSON.fromJSONValue(input);
test.equal(result, {$date: 12345});
test.isFalse('$escape' in result);
});
Tinytest.add('ejson - toJSONValue/fromJSONValue round-trip', test => {
const d = new Date();
const cases = [
42,
'hello',
true,
null,
{a: 1, b: 'two'},
[1, 2, 3],
d,
NaN,
Infinity,
-Infinity,
{name: 'test', ts: d, scores: [1, 2, 3]},
{nested: {deep: {date: d, val: 42}}},
[d, 'a', {x: d}],
{$date: 12345}, // $-prefixed key → escape/unescape round-trip
{a: NaN, b: Infinity, c: -Infinity, d: 'normal'},
{}, // empty object
[], // empty array
];
cases.forEach(original => {
const json = EJSON.toJSONValue(original);
const restored = EJSON.fromJSONValue(json);
test.isTrue(
EJSON.equals(original, restored),
`Round-trip failed for: ${EJSON.stringify(original)}`
);
});
});
Tinytest.add('ejson - toJSONValue does not mutate the input', test => {
const d = new Date();
const obj = {name: 'test', createdAt: d, tags: ['a', 'b']};
const originalName = obj.name;
const originalDate = obj.createdAt;
const originalTags = obj.tags;
EJSON.toJSONValue(obj);
// Original object must be untouched
test.equal(obj.name, originalName);
test.equal(obj.createdAt, originalDate);
test.equal(obj.tags, originalTags);
test.instanceOf(obj.createdAt, Date);
test.equal(obj.tags[0], 'a');
});
Tinytest.add('ejson - clone', test => {
const cloneTest = (x, identical) => {
const y = EJSON.clone(x);

View File

@@ -1,6 +1,6 @@
Package.describe({
summary: 'Extended and Extensible JSON library',
version: '1.1.5',
version: '1.2.0-beta350.7',
});
Package.onUse(function onUse(api) {

View File

@@ -4,7 +4,29 @@ export const isObject = (fn) => typeof fn === 'object';
export const keysOf = (obj) => Object.keys(obj);
export const lengthOf = (obj) => Object.keys(obj).length;
export const lengthOf = (obj) => {
let count = 0;
for (const key in obj) {
if (hasOwn(obj, key)) count++;
}
return count;
};
/**
* Counts own properties of obj, but stops early once count exceeds limit.
* Useful for hot-path checks like `lengthOfWithLimit(obj, 1) === 1`
* without iterating all keys of large objects.
* @param {Object} obj
* @param {number} limit - stop counting beyond this value
* @returns {number} exact count if <= limit, otherwise limit + 1
*/
export const lengthOfWithLimit = (obj, limit) => {
let count = 0;
for (const key in obj) {
if (hasOwn(obj, key) && ++count > limit) return count;
}
return count;
};
export const hasOwn = (obj, prop) => Object.prototype.hasOwnProperty.call(obj, prop);
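The early-exit counter can be exercised in isolation. This sketch re-declares both helpers locally (the real ones live in the ejson package's utils module) and shows the hot-path check it exists for:

```javascript
// Local re-declarations for illustration; the package exports these itself.
const hasOwn = (obj, prop) => Object.prototype.hasOwnProperty.call(obj, prop);

const lengthOfWithLimit = (obj, limit) => {
  let count = 0;
  for (const key in obj) {
    // Stop as soon as the count exceeds the limit; callers only need to
    // know whether the exact count is <= limit, not the true total.
    if (hasOwn(obj, key) && ++count > limit) return count;
  }
  return count;
};

// Hot-path check from the converters: "does this object have exactly one key?"
const isSingleKey = obj => lengthOfWithLimit(obj, 1) === 1;
```

For a large object, `Object.keys(obj).length === 1` walks every key before answering; the limited version bails out after the second key it finds.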

View File

@@ -259,6 +259,20 @@ Email.sendAsync = async function (options) {
);
}
// Check if 'from' address is still the unconfigured default.
// The default "example.com" domain (RFC 2606) will always fail to send emails
// since no SPF/DKIM/DMARC records can exist for it.
const isDefaultFrom = !email.from ||
email.from.includes('@example.com');
if (isDefaultFrom) {
console.warn(
'[Email] Warning: "from" address is not configured. ' +
'Using default "example.com" which will fail in production. ' +
'Set Accounts.emailTemplates.from to a valid email address.'
);
}
if (mailUrlEnv || mailUrlSettings) {
return getTransport().sendMail(email);
}

View File

@@ -1,6 +1,6 @@
Package.describe({
summary: "Send email messages",
version: "3.1.2",
version: "3.2.0-beta350.7",
});
Npm.depends({

View File

@@ -1,6 +1,6 @@
Package.describe({
summary: "The Meteor command-line tool",
version: "3.4.0",
version: '3.5.0-beta.7',
});
Package.includeTool();

View File

@@ -130,7 +130,7 @@ export const ASYNC_CURSOR_METHODS = [
*/
'forEach',
/**
* @summary Map callback over all matching documents. Returns an Array.
* @summary Map callback over all matching documents. Returns a Promise<Array>.
* @locus Anywhere
* @method mapAsync
* @instance
@@ -141,7 +141,7 @@ export const ASYNC_CURSOR_METHODS = [
* itself.
* @param {Any} [thisArg] An object which will be the value of `this` inside
* `callback`.
* @returns {Promise}
* @returns {Promise<Array>}
*/
'map',
];

View File

@@ -136,25 +136,34 @@ export default class Cursor {
* `callback`.
*/
forEach(callback, thisArg) {
if (this.reactive) {
this._depend({
addedBefore: true,
removed: true,
changed: true,
movedBefore: true,
});
}
let i = 0;
for (const doc of this) {
callback.call(thisArg, doc, i++, this);
}
}
/**
* @summary Call `callback` once for each matching document, sequentially,
* awaiting each invocation before starting the next.
* @locus Anywhere
* @method forEachAsync
* @instance
* @memberOf Mongo.Cursor
* @param {IterationCallback} callback Function to call. It will be called
* with three arguments: the document, a
* 0-based index, and <em>cursor</em>
* itself.
* @param {Any} [thisArg] An object which will be the value of `this` inside
* `callback`.
* @returns {Promise}
*/
async forEachAsync(callback, thisArg) {
let i = 0;
for await (const doc of this) {
await callback.call(thisArg, doc, i++, this);
}
}
getTransform() {
@@ -162,7 +171,7 @@ export default class Cursor {
}
/**
* @summary Map callback over all matching documents. Returns an Array.
* @locus Anywhere
* @method map
* @instance
@@ -184,6 +193,30 @@ export default class Cursor {
return result;
}
/**
* @summary Map callback over all matching documents. Returns a Promise<Array>.
* @locus Anywhere
* @method mapAsync
* @instance
* @memberOf Mongo.Cursor
* @param {IterationCallback} callback Function to call. It will be called
* with three arguments: the document, a
* 0-based index, and <em>cursor</em>
* itself.
* @param {Any} [thisArg] An object which will be the value of `this` inside
* `callback`.
* @returns {Promise<Array>}
*/
async mapAsync(callback, thisArg) {
const result = [];
await this.forEachAsync(async (doc, i) => {
result.push(await callback.call(thisArg, doc, i, this));
});
return result;
}
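A hedged standalone sketch of the sequential-await semantics `mapAsync` provides (a plain array stands in for the cursor; `mapAsyncOver` is a hypothetical helper, not Meteor API):

```javascript
// Minimal model of mapAsync: awaits each callback before moving to the
// next item, collecting resolved values in document order.
async function mapAsyncOver(items, callback) {
  const result = [];
  let i = 0;
  for (const item of items) {
    result.push(await callback(item, i++));
  }
  return result;
}

mapAsyncOver([1, 2, 3], async (n) => n * 2).then((out) => {
  console.log(out); // logs [ 2, 4, 6 ]
});
```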
// options to contain:
// * callbacks for observe():
// - addedAt (document, atIndex)
@@ -564,6 +597,11 @@ export default class Cursor {
// Implements async version of cursor methods to keep collections isomorphic
ASYNC_CURSOR_METHODS.forEach(method => {
const asyncName = getAsyncMethodName(method);
if (Cursor.prototype[asyncName]) {
return;
}
Cursor.prototype[asyncName] = function(...args) {
try {
return Promise.resolve(this[method].apply(this, args));

View File

@@ -1720,7 +1720,7 @@ LocalCollection._removeFromResultsSync = (query, doc) => {
} else {
const id = doc._id; // in case callback mutates doc
query.removed(doc._id);
query.removed(id);
query.results.remove(id);
}
};
@@ -1734,7 +1734,7 @@ LocalCollection._removeFromResultsAsync = async (query, doc) => {
} else {
const id = doc._id; // in case callback mutates doc
await query.removed(doc._id);
await query.removed(id);
query.results.remove(id);
}
};

View File

@@ -4011,6 +4011,29 @@ Tinytest.addAsync('minimongo - asyncIterator', async (test) => {
test.equal(itemIds, ['a', 'b']);
});
['forEachAsync', 'mapAsync'].forEach((methodName) => {
Tinytest.addAsync(`minimongo - awaiting ${methodName} item callbacks`, async (test) => {
const collection = new LocalCollection();
collection.insert({ _id: 'a' });
collection.insert({ _id: 'b' });
const result = ['before'];
await collection.find()[methodName](async function(item) {
await Meteor._sleepForMs(0);
result.push(item._id + '1');
await Meteor._sleepForMs(0);
result.push(item._id + '2');
});
result.push('after');
// Verify that each item callback was awaited correctly, in order
test.equal(result, ['before', 'a1', 'a2', 'b1', 'b2', 'after']);
});
});
Tinytest.add('minimongo - operation result fields (sync)', test => {
const c = new LocalCollection();

View File

@@ -1,6 +1,6 @@
Package.describe({
summary: "Meteor's client-side datastore: a port of MongoDB to Javascript",
version: "2.0.5",
version: '2.1.0-beta350.7',
});
Package.onUse((api) => {

View File

@@ -0,0 +1,586 @@
import { Meteor } from 'meteor/meteor';
import { LocalCollection } from 'meteor/minimongo';
import { Random } from 'meteor/random';
import { MongoID } from 'meteor/mongo-id';
import { DDPServer } from 'meteor/ddp-server';
import { DiffSequence } from 'meteor/diff-sequence';
import { listenAll } from './mongo_driver';
import { replaceTypes, replaceMongoAtomWithMeteor } from './mongo_common';
import { compareOperationTimes } from './mongo_common';
const SUPPORTED_OPERATIONS = ['insert', 'update', 'replace', 'delete'];
/**
* ChangeStreamObserveDriver - MongoDB Change Streams based observe driver
*
* Uses MongoDB Change Streams to watch for real-time changes to a collection.
* Implements a stop callback system similar to PollingObserveDriver for proper
* resource cleanup when the driver is stopped.
*/
export class ChangeStreamObserveDriver {
constructor(options) {
this._usesChangeStreams = true;
this._cursorDescription = options.cursorDescription;
this._mongoHandle = options.mongoHandle;
this._multiplexer = options.multiplexer;
this._changeStream = null;
this._stopped = false;
this._stopCallbacks = [];
this._pendingWrites = [];
this._writesToCommitWhenReady = [];
this._isReady = false;
this._lastProcessedOperationTime = null;
this._catchingUpResolvers = [];
this._resolveTimeout = null;
this._matcher = options.matcher;
this._id = options.id || Random.id();
// Projection function similar to oplog driver
const projection = this._cursorDescription.options.projection || this._cursorDescription.options.fields;
if (projection) {
const baseProjectionFn = LocalCollection._compileProjection(projection);
this._projectionFn = (doc) => {
const projected = baseProjectionFn(doc);
if (projected && typeof projected === 'object') {
const { _id, ...fields } = projected;
return fields;
}
return projected;
};
} else {
this._projectionFn = (doc) => {
const { _id, ...fields } = doc;
return fields;
};
}
this._startListening();
this._startWatching();
}
_sendMultiplexerAdded(id, projectedDoc) {
// Apply EJSON transformation before sending to client
projectedDoc = replaceTypes(projectedDoc, replaceMongoAtomWithMeteor);
try {
this._multiplexer.added(id, projectedDoc);
} catch (error) {
console.error('[ChangeStreams] Error sending added document:', error);
}
}
async _startListening() {
// Register a listener to be notified when writes happen
// This follows the same pattern as OplogObserveDriver
const stopHandle = await listenAll(
this._cursorDescription,
() => {
// If we're not in a pre-fire write fence, we don't have to do anything.
const fence = DDPServer._getCurrentFence();
if (!fence || fence.fired)
return;
if (fence._changeStreamObserveDrivers) {
fence._changeStreamObserveDrivers[this._id] = this;
return;
}
fence._changeStreamObserveDrivers = {};
fence._changeStreamObserveDrivers[this._id] = this;
fence.onBeforeFire(async () => {
const drivers = fence._changeStreamObserveDrivers;
delete fence._changeStreamObserveDrivers;
// Process each driver that needs to be synchronized with the fence
for (const driver of Object.values(drivers)) {
if (driver._stopped) continue;
const write = await fence.beginWrite();
// Wait for the change stream to catch up with any pending operations
await driver._waitUntilCaughtUp();
// Process any pending writes immediately
driver._flushPendingWrites();
// If the driver is ready (initial adds complete), ensure all writes are committed
if (driver._isReady) {
await driver._multiplexer.onFlush(async () => {
await write.committed();
});
} else {
// If not ready yet, queue the write for later
driver._writesToCommitWhenReady.push(write);
}
}
});
}
);
// Register the stop handle
this._addStopCallback(() => stopHandle.stop());
}
_addStopCallback(callback) {
if (typeof callback !== 'function') {
throw new Error('Stop callback must be a function');
}
this._stopCallbacks.push(callback);
}
async _startWatching() {
if (this._stopped) return;
try {
const collection = this._mongoHandle.rawCollection(this._cursorDescription.collectionName);
// First, get all existing documents that match our selector
await this._sendInitialAdds(collection);
// Signal initial adds are complete (but delay being 'ready' for commits
// until the change stream is attached to avoid fence ordering gaps)
this._multiplexer.ready();
// Then start watching for changes
const pipeline = this._buildPipeline();
// Create change stream with appropriate options
const changeStreamOptions = {
fullDocument: 'updateLookup',
fullDocumentBeforeChange: 'whenAvailable'
};
this._changeStream = collection.watch(pipeline, changeStreamOptions);
// Register stop callback for the change stream
this._stopCallbacks.push(async () => {
if (this._changeStream) {
try {
await this._changeStream.close();
} catch (error) {
// Ignore errors when closing
}
this._changeStream = null;
}
});
// Handle change events
this._changeStream.on('change', Meteor.bindEnvironment((change) => {
if (this._stopped) return;
// Update last processed op time early so fences can unblock promptly
if (change && change.clusterTime) {
this._setLastProcessedOperationTime(change.clusterTime);
}
this._handleChange(change);
// Check if we're in a fence
const fence = DDPServer._getCurrentFence();
if (fence && !fence.fired) {
// Process immediately if we're in a fence
this._flushPendingWrites();
} else {
// Otherwise defer processing (similar to polling cycle)
Meteor.defer(() => {
if (!this._stopped) {
this._flushPendingWrites();
}
});
}
}));
// Handle errors and reconnection
this._changeStream.on('error', Meteor.bindEnvironment((error) => {
if (this._stopped) return;
console.error('ChangeStream error:', error);
// Attempt to restart after a delay
const timeoutId = setTimeout(() => {
if (!this._stopped) {
this._restartChangeStream();
}
}, Meteor?.settings?.packages?.mongo?.changeStream?.delay?.error || 100);
// Register timeout cleanup
this._addStopCallback(() => {
clearTimeout(timeoutId);
});
}));
this._changeStream.on('close', Meteor.bindEnvironment(() => {
if (!this._stopped) {
// Unexpected close, attempt restart
const timeoutId = setTimeout(() => {
if (!this._stopped) {
this._restartChangeStream();
}
}, Meteor?.settings?.packages?.mongo?.changeStream?.delay?.close || 100);
// Register timeout cleanup
this._addStopCallback(() => {
clearTimeout(timeoutId);
});
}
}));
// Now we can allow queued fence writes to commit safely
this._isReady = true;
await this._flushWritesToCommit();
} catch (error) {
console.error('Failed to start ChangeStream:', error);
throw error;
}
}
async _sendInitialAdds(collection) {
if (this._stopped) return;
try {
// Build the same selector and options that the cursor would use
const selector = this._cursorDescription.selector || {};
const options = { ...this._cursorDescription.options };
// Find all existing documents
const cursor = collection.find(selector, options);
// Follow oplog driver pattern: get current fence and store write for later commit
const fence = DDPServer._getCurrentFence();
if (fence) {
// beginWrite() is async (it is awaited elsewhere in this file), so await
// it here to store the write object itself rather than a bare Promise
this._writesToCommitWhenReady.push(await fence.beginWrite());
}
// Send 'added' for each existing document that matches our matcher
let docCount = 0;
for await (const doc of cursor) {
if (this._stopped) return;
const id = typeof doc._id !== 'string' ? new MongoID.ObjectID(doc._id.toHexString()) : doc._id;
const projectedDoc = this._projectionFn ? this._projectionFn(doc) : doc;
this._sendMultiplexerAdded(id, projectedDoc);
docCount++;
}
// DON'T call ready() or flush here - let _startWatching handle it
} catch (error) {
console.error('Error sending initial adds for ChangeStream:', error);
throw error;
}
}
async _restartChangeStream() {
try {
// Close the current stream directly. The registered stop callback is a
// plain function with no `_changeStream` property set on it, so it cannot
// be looked up by that key; closing here mirrors what that callback does.
if (this._changeStream) {
try {
await this._changeStream.close();
} catch (error) {
// Ignore errors when closing
}
this._changeStream = null;
}
await this._startWatching();
} catch (error) {
console.error('Failed to restart ChangeStream:', error);
}
}
_buildPipeline() {
// For now, use a simple pipeline that watches all operations
// We'll filter using our matcher in _handleChange
const selector = this._cursorDescription.selector;
if (!selector || Object.keys(selector).length === 0) {
// No selector, watch all changes
return [];
}
// Simple pipeline that just filters by operation type
// More complex selector filtering will be done in _handleChange
return [
{
$match: {
operationType: { $in: ['insert', 'update', 'replace', 'delete'] }
}
}
];
}
async _handleChange(change) {
if (this._stopped) return;
const { operationType, documentKey, fullDocument, fullDocumentBeforeChange, clusterTime } = change;
if (!SUPPORTED_OPERATIONS.includes(operationType)) {
return; // Ignore unsupported operations
}
let id = documentKey._id;
if (typeof documentKey._id?.toHexString === 'function') {
id = new MongoID.ObjectID(documentKey._id.toHexString());
}
// Update last processed operation time (redundant with early update, but safe)
if (clusterTime) {
this._setLastProcessedOperationTime(clusterTime);
}
// Store callback to be executed later when fence processes writes
// Don't try to capture fence here - it will be handled in onBeforeFire
const callbackData = {
operationType,
id,
fullDocument,
fullDocumentBeforeChange,
change
};
this._pendingWrites.push(callbackData);
}
_setLastProcessedOperationTime(ts) {
this._lastProcessedOperationTime = ts;
// Resolve any waiters whose target is <= current processed time
while (this._catchingUpResolvers.length > 0) {
const first = this._catchingUpResolvers[0];
if (compareOperationTimes(ts, first.ts) >= 0) {
this._catchingUpResolvers.shift();
try { first.resolver(); } catch (e) { /* ignore resolver errors */ }
} else {
break;
}
}
}
async _getServerOperationTime() {
const db = this._mongoHandle.db;
const admin = db.admin();
const commands = [
() => db.command({ ping: 1 }),
() => admin.command({ hello: 1 }),
() => admin.command({ ismaster: 1 })
];
const runCommandRecursive = async (index = 0) => {
if (index >= commands.length) {
return null;
}
try {
const res = await commands[index]();
return res?.operationTime || res?.$clusterTime?.clusterTime || null;
} catch (error) {
if (!error) {
return null;
}
// CommandNotFound https://www.mongodb.com/pt-br/docs/manual/reference/error-codes/
const isUnsupportedCommandError = error.code === 59;
if (isUnsupportedCommandError) {
return runCommandRecursive(index + 1);
}
throw error;
}
};
try {
return await runCommandRecursive();
} catch (error) {
console.error(`[ChangeStream ${this._id}] Failed to fetch server operation time:`, error);
return null;
}
}
async _flushPendingWrites() {
const callbacksToFlush = this._pendingWrites;
this._pendingWrites = [];
if (callbacksToFlush.length > 0) {
for (const callbackData of callbacksToFlush) {
try {
const { operationType, id, fullDocument, fullDocumentBeforeChange, change } = callbackData;
switch (operationType) {
case 'insert':
this._handleInsert(id, fullDocument);
break;
case 'update':
case 'replace':
this._handleUpdate(id, fullDocument, fullDocumentBeforeChange);
break;
case 'delete':
this._handleDelete(id, change);
break;
}
} catch (error) {
console.error(`[ChangeStream ${this._id}] Error processing callback:`, error);
}
}
}
}
async _flushWritesToCommit() {
// Similar to oplog driver's _beSteady method
const writes = this._writesToCommitWhenReady;
this._writesToCommitWhenReady = [];
if (writes.length > 0) {
await this._multiplexer.onFlush(async () => {
for (const write of writes) {
await write.committed();
}
});
}
}
_handleInsert(id, doc) {
// Apply projection and check if document matches our criteria
const matches = this._matcher.documentMatches(doc).result;
if (matches) {
const projectedDoc = this._projectionFn ? this._projectionFn(doc) : doc;
this._sendMultiplexerAdded(id, projectedDoc);
}
}
_handleUpdate(id, newDoc, oldDoc) {
// Determine which state (before/after) matches the cursor selector
const matchesAfter = this._matcher.documentMatches(newDoc || {}).result;
// Use the multiplexer cache (now updated synchronously) to check if we've seen this doc
const cachedDoc = this._multiplexer?._cache?.docs.get(id);
const matchesBefore = oldDoc
? (this._matcher.documentMatches(oldDoc).result)
: !!cachedDoc;
if (matchesAfter) {
if (!matchesBefore) {
// Document wasn't previously in the result set and now matches; emit added
const projectedDoc = this._projectionFn ? this._projectionFn(newDoc) : newDoc;
this._sendMultiplexerAdded(id, projectedDoc);
return;
}
if (newDoc) {
// Compute the changed fields using the available pre-image or the cached doc
const oldDocForDiff = oldDoc || (cachedDoc ? { ...cachedDoc } : null);
if (oldDocForDiff) {
const projectedNew = this._projectionFn ? this._projectionFn(newDoc) : newDoc;
const projectedOld = this._projectionFn ? this._projectionFn(oldDocForDiff) : oldDocForDiff;
const changedFields = DiffSequence.makeChangedFields(projectedNew, projectedOld);
if (Object.keys(changedFields).length > 0) {
const transformedDoc = replaceTypes(changedFields, replaceMongoAtomWithMeteor);
this._multiplexer.changed(id, transformedDoc);
}
return;
}
// Without a pre-image we can't diff reliably; fall back to sending full doc
const projectedDoc = this._projectionFn ? this._projectionFn(newDoc) : newDoc;
const transformedDoc = replaceTypes(projectedDoc, replaceMongoAtomWithMeteor);
this._multiplexer.changed(id, transformedDoc);
}
return;
}
if (matchesBefore) {
// Document left the result set
this._multiplexer.removed(id);
}
// Otherwise the document didn't match before or after, so no-op
}
_handleDelete(id) {
if (this._multiplexer._cache?.docs.has(id)) {
this._multiplexer.removed(id);
}
}
async _waitUntilCaughtUp() {
// Wait until our change stream has processed events up to the
// server's current operation time. Mirrors oplog's wait logic.
if (this._stopped) return;
const targetTs = await this._getServerOperationTime();
if (!targetTs) {
// Best-effort fallback: yield to I/O but don't artificially delay
await new Promise((r) => setImmediate(r));
return;
}
if (this._lastProcessedOperationTime && compareOperationTimes(this._lastProcessedOperationTime, targetTs) >= 0) {
return;
}
// Insert in order so we can resolve from the front efficiently
let insertIdx = this._catchingUpResolvers.length;
while (insertIdx - 1 >= 0 && compareOperationTimes(this._catchingUpResolvers[insertIdx - 1]?.ts, targetTs) > 0) {
insertIdx--;
}
// Wait with an upper bound: release if it takes too long
let timeoutId = null;
const entry = { ts: targetTs, resolver: null };
const timeoutMs = Meteor?.settings?.packages?.mongo?.changeStream?.waitUntilCaughtUpTimeoutMs ?? 1000;
await new Promise((resolve) => {
entry.resolver = () => {
if (timeoutId) clearTimeout(timeoutId);
resolve();
};
// Insert our entry to be resolved when we process >= targetTs
this._catchingUpResolvers.splice(insertIdx, 0, entry);
// Safety valve: if it takes more than timeoutMs, just release
timeoutId = setTimeout(() => {
// Remove our entry if still pending
const idx = this._catchingUpResolvers.indexOf(entry);
if (idx !== -1) this._catchingUpResolvers.splice(idx, 1);
resolve();
}, timeoutMs);
});
}
async stop() {
if (this._stopped) return;
this._stopped = true;
// Execute all stop callbacks
for (const callback of this._stopCallbacks) {
try {
await callback();
} catch (error) {
console.error('Error in stop callback:', error);
}
}
// Handle any remaining pending writes (following oplog driver pattern)
for (const write of this._pendingWrites) {
if (!write || typeof write.committed !== 'function') continue;
await write.committed();
}
this._pendingWrites = [];
// Handle any remaining writes to commit
for (const write of this._writesToCommitWhenReady) {
await write.committed();
}
this._writesToCommitWhenReady = [];
// Clear callbacks array
this._stopCallbacks = [];
}
}
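The safety valve in `_waitUntilCaughtUp` above boils down to a race between an event resolver and a timeout, with the loser cleaned up. A minimal sketch of that pattern (`waitForOrTimeout` is an illustrative helper, not part of this driver):

```javascript
// Resolve on the "caught up" signal or on the safety timeout, whichever
// fires first; the timeout is cleared when the signal wins.
function waitForOrTimeout(register, timeoutMs) {
  return new Promise((resolve) => {
    const timeoutId = setTimeout(() => resolve('timeout'), timeoutMs);
    register(() => {
      clearTimeout(timeoutId);
      resolve('caught-up');
    });
  });
}

// Usage: the resolver fires before the valve, so we resolve promptly.
let pending;
waitForOrTimeout((resolver) => { pending = resolver; }, 1000)
  .then((how) => console.log(how)); // logs 'caught-up'
pending();
```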

View File

@@ -168,3 +168,34 @@ export function replaceNames(filter, thing) {
}
return thing;
}
/**
* Compares two MongoDB operation times (opTimes).
*
* Both parameters accept any value accepted by the `MongoDB.Timestamp` constructor:
* - a `Long` (e.g., `new Timestamp(Long)`),
* - an object of the form `{ t: number, i: number }`,
* - or the legacy two-number form `low, high` (via `Timestamp(low, high)`), which is deprecated;
* prefer `{ t, i }` or a `Long`.
*
* The function constructs a `MongoDB.Timestamp` from `opTime1` and compares it to `opTime2`
* using `Timestamp#compare`.
*
* @param {MongoDB.Long|{t:number,i:number}|Array<number>|number} opTime1 - Operation time 1; any value accepted by `MongoDB.Timestamp`.
* For the two-number form you may provide an array `[low, high]`, but passing two separate numbers to the constructor is deprecated.
* @param {MongoDB.Long|{t:number,i:number}|Array<number>|number} opTime2 - Operation time 2; same accepted forms as `opTime1`.
* @returns {number} Comparison result: negative if `opTime1` < `opTime2`, zero if equal, positive if `opTime1` > `opTime2`.
*/
export function compareOperationTimes(opTime1, opTime2) {
return (new MongoDB.Timestamp(opTime1)).compare(opTime2);
}
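As an illustration of the ordering this relies on (BSON timestamps compare by seconds `t` first, then by the increment counter `i`), a driver-free sketch assuming plain `{ t, i }` objects rather than real `MongoDB.Timestamp` instances:

```javascript
// Hypothetical plain-object model of BSON Timestamp ordering: compare
// the seconds component first, then the per-second increment.
function compareOpTimes(a, b) {
  if (a.t !== b.t) return a.t < b.t ? -1 : 1;
  if (a.i !== b.i) return a.i < b.i ? -1 : 1;
  return 0;
}

console.log(compareOpTimes({ t: 10, i: 1 }, { t: 10, i: 2 })); // -1
console.log(compareOpTimes({ t: 11, i: 0 }, { t: 10, i: 9 })); // 1
console.log(compareOpTimes({ t: 10, i: 5 }, { t: 10, i: 5 })); // 0
```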

View File

@@ -12,12 +12,24 @@ import { ObserveMultiplexer } from './observe_multiplex';
import { OplogObserveDriver } from './oplog_observe_driver';
import { OPLOG_COLLECTION, OplogHandle } from './oplog_tailing';
import { PollingObserveDriver } from './polling_observe_driver';
import { ChangeStreamObserveDriver } from './changestream_observe_driver';
const FILE_ASSET_SUFFIX = 'Asset';
const ASSETS_FOLDER = 'assets';
const APP_FOLDER = 'app';
const oplogCollectionWarnings = [];
const availableDrivers = ['changeStreams', 'oplog', 'polling'];
const DEFAULT_REACTIVITY_ORDER = process.env.METEOR_REACTIVITY_ORDER ? process.env.METEOR_REACTIVITY_ORDER.split(',') : availableDrivers;
const reactivitySetting = Meteor.settings?.packages?.mongo?.reactivity;
if (Array.isArray(reactivitySetting)) {
for (const method of reactivitySetting) {
if (!availableDrivers.includes(method)) {
throw new Error(`Invalid Mongo reactivity method in settings: ${method}`);
}
}
}
export const MongoConnection = function (url, options) {
var self = this;
@@ -35,7 +47,6 @@ export const MongoConnection = function (url, options) {
}, userOptions);
// Internally the oplog connections specify their own maxPoolSize
// which we don't want to overwrite with any user defined value
if ('maxPoolSize' in options) {
@@ -89,7 +100,6 @@ export const MongoConnection = function (url, options) {
self._oplogHandle = new OplogHandle(options.oplogUrl, self.db.databaseName);
self._docFetcher = new DocFetcher(self);
}
};
MongoConnection.prototype._close = async function() {
@@ -801,22 +811,96 @@ MongoConnection.prototype.tail = function (cursorDescription, docCallback, timeo
};
};
Object.assign(MongoConnection.prototype, {
_observeChanges: async function (
const driverClasses = {
changeStreams: ChangeStreamObserveDriver,
oplog: OplogObserveDriver,
polling: PollingObserveDriver,
};
function _getConfiguredReactivityOrder () {
const reactivitySetting = Meteor.settings?.packages?.mongo?.reactivity;
const isArraySetting = Array.isArray(reactivitySetting);
const isStringSetting = typeof reactivitySetting === 'string';
const hasCustomDriverOrder = isArraySetting || isStringSetting;
if (reactivitySetting && !hasCustomDriverOrder) {
throw new Error('Meteor.settings.packages.mongo.reactivity must be a string or an array of observer drivers');
}
let configuredOrder = DEFAULT_REACTIVITY_ORDER;
if (hasCustomDriverOrder) {
if (isStringSetting) {
configuredOrder = [reactivitySetting];
} else {
configuredOrder = [];
for (const name of reactivitySetting) {
if (!configuredOrder.includes(name)) {
configuredOrder.push(name);
}
}
}
}
const invalidDriverNames = configuredOrder.filter(name => !driverClasses[name]);
if (invalidDriverNames.length) {
throw new Error(`Invalid Mongo reactivity driver(s): ${invalidDriverNames.join(', ')}`);
}
if (hasCustomDriverOrder && configuredOrder.length === 0) {
throw new Error('Meteor.settings.packages.mongo.reactivity must specify at least one observer driver');
}
return configuredOrder;
};
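The normalization above (string or array in, deduplicated and validated order out) can be sketched standalone; `normalizeReactivityOrder` is an illustrative helper, not the function in this file:

```javascript
// Standalone sketch: dedupe while preserving first occurrence, then
// reject any driver names that are not known.
const KNOWN = ['changeStreams', 'oplog', 'polling'];

function normalizeReactivityOrder(setting) {
  const names = typeof setting === 'string' ? [setting] : setting;
  const order = [];
  for (const name of names) {
    if (!order.includes(name)) order.push(name);
  }
  const invalid = order.filter((name) => !KNOWN.includes(name));
  if (invalid.length) {
    throw new Error(`Invalid Mongo reactivity driver(s): ${invalid.join(', ')}`);
  }
  return order;
}

console.log(normalizeReactivityOrder(['oplog', 'polling', 'oplog']));
// logs [ 'oplog', 'polling' ]
```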
MongoConnection.prototype._selectReactivityDriver = async function (configuredOrder, driverChecks) {
const availabilityErrors = [];
let driverClass;
let matcher;
let sorter;
for (const driverName of configuredOrder) {
const checker = driverChecks[driverName];
if (!checker) {
availabilityErrors.push(`Unknown driver "${driverName}"`);
continue;
}
const result = await checker();
if (result.available) {
matcher = result.matcher;
sorter = result.sorter;
driverClass = driverClasses[driverName];
break;
}
if (result.reason) {
availabilityErrors.push(`${driverName}: ${result.reason}`);
}
}
return {
driverClass,
matcher,
sorter,
};
};
MongoConnection.prototype._observeChanges = async function (
cursorDescription, ordered, callbacks, nonMutatingCallbacks) {
var self = this;
const collectionName = cursorDescription.collectionName;
if (cursorDescription.options.tailable) {
return self._observeChangesTailable(cursorDescription, ordered, callbacks);
return this._observeChangesTailable(cursorDescription, ordered, callbacks);
}
// You may not filter out _id when observing changes, because the id is a core
// part of the observeChanges API.
const fieldsOptions = cursorDescription.options.projection || cursorDescription.options.fields;
if (fieldsOptions &&
(fieldsOptions._id === 0 ||
fieldsOptions._id === false)) {
if (fieldsOptions?._id === 0 ||
fieldsOptions?._id === false) {
throw Error("You may not observe a cursor with {fields: {_id: 0}}");
}
@@ -829,15 +913,15 @@ Object.assign(MongoConnection.prototype, {
// Find a matching ObserveMultiplexer, or create a new one. This next block is
// guaranteed to not yield (and it doesn't call anything that can observe a
// new query), so no other calls to this function can interleave with it.
if (observeKey in self._observeMultiplexers) {
multiplexer = self._observeMultiplexers[observeKey];
if (observeKey in this._observeMultiplexers) {
multiplexer = this._observeMultiplexers[observeKey];
} else {
firstHandle = true;
// Create a new ObserveMultiplexer.
multiplexer = new ObserveMultiplexer({
ordered: ordered,
onStop: function () {
delete self._observeMultiplexers[observeKey];
onStop: () => {
delete this._observeMultiplexers[observeKey];
return observeDriver.stop();
}
});
@@ -848,88 +932,190 @@ Object.assign(MongoConnection.prototype, {
nonMutatingCallbacks,
);
const oplogOptions = self?._oplogHandle?._oplogOptions || {};
const oplogOptions = (this._oplogHandle && this._oplogHandle._oplogOptions) || {};
const { includeCollections, excludeCollections } = oplogOptions;
if (firstHandle) {
var matcher, sorter;
var canUseOplog = [
function () {
// At a bare minimum, using the oplog requires us to have an oplog, to
// want unordered callbacks, and to not want a callback on the polls
// that won't happen.
return self._oplogHandle && !ordered &&
!callbacks._testOnlyPollCallback;
},
function () {
// We also need to check, if the collection of this Cursor is actually being "watched" by the Oplog handle
// if not, we have to fallback to long polling
if (excludeCollections?.length && excludeCollections.includes(collectionName)) {
if (!oplogCollectionWarnings.includes(collectionName)) {
console.warn(`Meteor.settings.packages.mongo.oplogExcludeCollections includes the collection ${collectionName} - your subscriptions will only use long polling!`);
oplogCollectionWarnings.push(collectionName); // we only want to show the warnings once per collection!
const configuredOrder = _getConfiguredReactivityOrder();
const driverChecks = {
changeStreams: async () => {
let localMatcher;
const reasons = [];
if (this._supportsChangeStreams === undefined) {
const serverReasons = [];
try {
// Change Streams require MongoDB 3.6+ and replica set or sharded cluster
const admin = this.db.admin();
const serverInfo = await admin.serverInfo();
const isMasterPromise = admin.command({ isMaster: 1 });
const versionString = serverInfo.version || 'unknown';
const versionParts = versionString.split('.').map(Number);
const major = Number.isFinite(versionParts[0]) ? versionParts[0] : 0;
const minor = Number.isFinite(versionParts[1]) ? versionParts[1] : 0;
// Check MongoDB version (3.6+)
const hasMinVersion = major > 3 || (major === 3 && minor >= 6);
if (!hasMinVersion) {
serverReasons.push(`Change Streams require MongoDB 3.6+ (current ${versionString})`);
} else {
// Check if we're running on a replica set or sharded cluster
const isMaster = await isMasterPromise;
// A standalone mongod also reports ismaster: true, so setName is the
// reliable replica-set signal here
const isReplicaSet = Boolean(isMaster.setName);
const isSharded = isMaster.msg === 'isdbgrid';
if (!(isReplicaSet || isSharded)) {
serverReasons.push('Change Streams require a replica set or sharded cluster');
}
}
} catch (error) {
Meteor._debug("Error checking Change Stream support:", error);
serverReasons.push(`Error checking Change Stream support: ${error.message}`);
}
return false;
this._changeStreamServerReasons = serverReasons;
this._supportsChangeStreams = serverReasons.length === 0;
}
if (includeCollections?.length && !includeCollections.includes(collectionName)) {
if (!oplogCollectionWarnings.includes(collectionName)) {
console.warn(`Meteor.settings.packages.mongo.oplogIncludeCollections does not include the collection ${collectionName} - your subscriptions will only use long polling!`);
oplogCollectionWarnings.push(collectionName); // we only want to show the warnings once per collection!
if (!this._supportsChangeStreams) {
if (this._changeStreamServerReasons?.length) {
reasons.push(...this._changeStreamServerReasons);
} else {
reasons.push('Change Streams not supported by MongoDB deployment');
}
}
if (ordered) {
reasons.push('Change Streams only supports unordered observeChanges');
}
if (callbacks._testOnlyPollCallback) {
reasons.push('Change Streams cannot be used with _testOnlyPollCallback');
}
if (reasons.length) {
return {
available: false,
reason: reasons.join('; '),
};
}
try {
localMatcher = new Minimongo.Matcher(
cursorDescription.selector,
undefined,
cursorDescription.options.collation
);
} catch (e) {
// XXX make all compilation errors MinimongoError or something
// so that this doesn't ignore unrelated exceptions
if (Meteor.isClient && e instanceof MiniMongoQueryError) {
throw e;
}
return {
available: false,
reason: `Selector not supported for Change Streams: ${e.message}`,
};
}
return {
available: true,
matcher: localMatcher,
};
},
oplog: () => {
const reasons = [];
let localMatcher;
let localSorter;
if (!(this._oplogHandle && !ordered && !callbacks._testOnlyPollCallback)) {
reasons.push('Oplog tailing not available for this cursor');
}
if (!reasons.length) {
if (excludeCollections?.length && excludeCollections.includes(collectionName)) {
if (!oplogCollectionWarnings.includes(collectionName)) {
Meteor._debug(`Meteor.settings.packages.mongo.oplogExcludeCollections includes the collection ${collectionName} - your subscriptions will only use long polling!`);
oplogCollectionWarnings.push(collectionName); // we only want to show the warnings once per collection!
}
reasons.push('Collection is excluded from oplog tailing');
} else if (includeCollections?.length && !includeCollections.includes(collectionName)) {
if (!oplogCollectionWarnings.includes(collectionName)) {
Meteor._debug(`Meteor.settings.packages.mongo.oplogIncludeCollections does not include the collection ${collectionName} - your subscriptions will only use long polling!`);
oplogCollectionWarnings.push(collectionName); // we only want to show the warnings once per collection!
}
reasons.push('Collection is not included in oplog tailing');
}
}
if (!reasons.length) {
try {
localMatcher = new Minimongo.Matcher(
cursorDescription.selector,
undefined,
cursorDescription.options.collation
);
} catch (e) {
// XXX make all compilation errors MinimongoError or something
// so that this doesn't ignore unrelated exceptions
if (Meteor.isClient && e instanceof MiniMongoQueryError) {
throw e;
}
reasons.push(`Selector not supported for oplog: ${e.message}`);
}
}
if (!reasons.length && !OplogObserveDriver.cursorSupported(cursorDescription, localMatcher)) {
reasons.push('Cursor not supported by oplog');
}
if (!reasons.length && cursorDescription.options.sort) {
try {
localSorter = new Minimongo.Sorter(
cursorDescription.options.sort,
cursorDescription.options.collation
);
} catch (e) {
// XXX make all compilation errors MinimongoError or something
// so that this doesn't ignore unrelated exceptions
reasons.push('Sort not supported by oplog');
}
}
return {
available: reasons.length === 0,
matcher: localMatcher,
sorter: localSorter,
reason: reasons.join('; ')
};
},
polling: () => ({ available: true }),
};
let {
driverClass,
matcher: selectedMatcher,
sorter: selectedSorter,
} = await this._selectReactivityDriver(configuredOrder, driverChecks);
// Fallback to polling if no driver is available
if (!driverClass) {
Meteor._debug('No reactivity driver available for cursor, falling back to polling');
driverClass = PollingObserveDriver;
}
matcher = selectedMatcher;
sorter = selectedSorter;
observeDriver = new driverClass({
cursorDescription,
mongoHandle: this,
multiplexer,
ordered,
matcher, // ignored by polling
sorter, // ignored by polling
_testOnlyPollCallback: callbacks._testOnlyPollCallback
});
@@ -940,11 +1126,9 @@ Object.assign(MongoConnection.prototype, {
// This field is only set for use in tests.
multiplexer._observeDriver = observeDriver;
}
this._observeMultiplexers[observeKey] = multiplexer;
// Blocks until the initial adds have been sent.
await multiplexer.addHandleAndSendInitialAdds(observeHandle);
return observeHandle;
},
});
}
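The driver-selection change above walks a configured order of checkers, takes the first one that reports itself available, and falls back to polling otherwise. A minimal standalone sketch of that flow (`selectDriver` and the checker results are illustrative, not the real `_selectReactivityDriver` implementation):

```javascript
// Simplified model of the driver-selection loop: each checker returns
// { available, reason?, matcher?, sorter? }; the first available driver wins.
function selectDriver(configuredOrder, driverChecks) {
  const reasons = [];
  for (const name of configuredOrder) {
    const check = driverChecks[name];
    if (!check) continue;
    const result = check();
    if (result.available) {
      return { driver: name, matcher: result.matcher, sorter: result.sorter };
    }
    if (result.reason) reasons.push(`${name}: ${result.reason}`);
  }
  // Mirrors the fallback in the patch: polling is always available.
  return { driver: 'polling', reasons };
}

const picked = selectDriver(['changeStreams', 'oplog'], {
  changeStreams: () => ({ available: false, reason: 'requires a replica set' }),
  oplog: () => ({ available: false, reason: 'collection excluded' }),
});
console.log(picked.driver); // 'polling'
```

Collecting the per-driver reasons matches the patch's approach of reporting why each driver was skipped before falling back.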

View File

@@ -1,4 +1,5 @@
import isEmpty from "lodash.isempty";
import { EJSON } from "meteor/ejson";
import { ObserveHandle } from "./observe_handle";
interface ObserveMultiplexerOptions {
@@ -166,10 +167,15 @@ export class ObserveMultiplexer {
}
_applyCallback(callbackName: string, args: any[]) {
// Update cache SYNCHRONOUSLY so it's immediately available for subsequent
// operations. This prevents race conditions where an update event arrives
// before the insert has been recorded in the cache.
this._cache.applyChange[callbackName].apply(null, args);
// Queue the callback notifications asynchronously
this._queue.queueTask(async () => {
if (!this._handles) return;
if (
!this._ready() &&
callbackName !== "added" &&

View File

@@ -9,7 +9,7 @@
Package.describe({
summary: "Adaptor for using MongoDB and Minimongo over DDP",
version: "2.2.0",
version: '2.3.0-beta350.7',
});
Npm.depends({
@@ -96,6 +96,7 @@ Package.onUse(function (api) {
"mongo_common.js",
"asynchronous_cursor.js",
"cursor.ts",
"changestream_observe_driver.js",
],
"server"
);
@@ -134,6 +135,7 @@ Package.onTest(function (api) {
api.addFiles("tests/observe_changes_tests.js", ["client", "server"]);
api.addFiles("tests/collection_extensions_tests.js", ["client", "server"]);
api.addFiles("tests/oplog_tests.js", "server");
api.addFiles("tests/changestream_observe_driver_tests.js", "server");
api.addFiles("tests/oplog_v2_converter_tests.js", "server");
api.addFiles("tests/doc_fetcher_tests.js", "server");
api.addFiles("tests/collation_tests.js", ["client", "server"]);

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -309,17 +309,18 @@ if (Meteor.isServer) {
fooid,
{ noodles: 'alright', bacon: undefined },
]);
// Doesn't get update event, since modifies only hidden fields
const observerDriver = handle._multiplexer._observeDriver;
if (!observerDriver._usesChangeStreams) {
await logger.expectNoResult(async () => {
await c.updateAsync(fooid, {
noodles: 'alright',
potatoes: 'meh',
apples: 'ok',
mac: 1,
cheese: 2,
})
});
}
await c.removeAsync(fooid);
await logger.expectResultOnly('removed', [fooid]);
@@ -418,7 +419,6 @@ Tinytest.addAsync(
);
Tinytest.addAsync(
'observeChanges - unordered - enters and exits result set through change',
async function(test, onComplete) {
@@ -519,8 +519,8 @@ if (Meteor.isServer) {
const [resolver1, promise1] = getPromiseAndResolver();
const [resolver2, promise2] = getPromiseAndResolver();
await self.insert({x: 2, y: 3});
self.expects.push(resolver1, resolver2);
await self.insert({x: 2, y: 3});
await self.insert({x: 3, y: 7}); // filtered out by the query
await self.insert({x: 4});
// Expect two added calls to happen.

View File

@@ -3,6 +3,11 @@ import { MiniMongoQueryError } from 'meteor/minimongo/common';
var randomId = Random.id();
var OplogCollection = new Mongo.Collection("oplog-" + randomId);
const DEFAULT_REACTIVITY = process.env.METEOR_REACTIVITY_ORDER ? process.env.METEOR_REACTIVITY_ORDER.split(',') : undefined;
var IS_OPLOG = DEFAULT_REACTIVITY && DEFAULT_REACTIVITY[0] === 'oplog';
if (!IS_OPLOG) return;
Tinytest.addAsync('mongo-livedata - oplog - cursorSupported', async function(
test
) {

View File

@@ -1,6 +1,6 @@
Package.describe({
name: 'rate-limit',
version: '1.1.2',
version: '1.2.0-beta350.7',
// Brief, one-line summary of the package.
summary: 'An algorithm for rate limiting anything',
// URL to the Git repository containing the source code for this package.
@@ -10,14 +10,14 @@ Package.describe({
documentation: 'README.md',
});
Package.onUse(function(api) {
Package.onUse(function (api) {
api.use('random');
api.use('ecmascript');
api.mainModule('rate-limit.js');
api.export('RateLimiter');
});
Package.onTest(function(api) {
Package.onTest(function (api) {
api.use('test-helpers', ['client', 'server']);
api.use('ecmascript');
api.use('random');

View File

@@ -57,6 +57,23 @@ class Rule {
});
}
async matchAsync(input) {
for (const [key, matcher] of Object.entries(this._matchers)) {
if (matcher !== null) {
if (!hasOwn.call(input, key)) {
return false;
} else if (typeof matcher === 'function') {
if (!(await matcher(input[key]))) {
return false;
}
} else if (matcher !== input[key]) {
return false;
}
}
}
return true;
}
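The `matchAsync` semantics above (a missing key fails, function matchers are awaited, anything else must compare strictly equal, and `null` matches anything) can be modeled standalone; the `matchAsync` function below is a sketch of those rules, not the package's `Rule` class:

```javascript
// Standalone model of Rule#matchAsync: every non-null matcher must pass.
async function matchAsync(matchers, input) {
  for (const [key, matcher] of Object.entries(matchers)) {
    if (matcher === null) continue;       // null matches anything
    if (!(key in input)) return false;    // required key missing from input
    if (typeof matcher === 'function') {
      if (!(await matcher(input[key]))) return false; // async predicate
    } else if (matcher !== input[key]) {
      return false;                       // strict-equality matcher
    }
  }
  return true;
}
```

Because function matchers are awaited one at a time, an async predicate (e.g. a database lookup) can participate in rule matching without blocking the event loop.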
// Generates unique key string for provided input by concatenating all the
// keys in the matcher with the corresponding values in the input.
// Only called if rule matches input.
@@ -143,45 +160,63 @@ class RateLimiter {
const matchedRules = this._findAllMatchingRules(input);
matchedRules.forEach((rule) => {
const ruleResult = rule.apply(input);
this._handleRuleResult(rule, ruleResult, reply, input);
});
return reply;
}
checkRules(rules, input) {
const reply = {
allowed: true,
timeToReset: 0,
numInvocationsLeft: Infinity,
};
rules.forEach((rule) => {
const ruleResult = rule.apply(input);
this._handleRuleResult(rule, ruleResult, reply, input);
});
return reply;
}
_handleRuleResult(rule, ruleResult, reply, input) {
let numInvocations = rule.counters[ruleResult.key];
if (ruleResult.timeToNextReset < 0) {
// Reset all the counters since the rule has reset
rule.resetCounter();
ruleResult.timeSinceLastReset = new Date().getTime() -
rule._lastResetTime;
ruleResult.timeToNextReset = rule.options.intervalTime;
numInvocations = 0;
}
if (numInvocations > rule.options.numRequestsAllowed) {
// Only update timeToReset if the new time would be longer than the
// previously set time. This is to ensure that if this input triggers
// multiple rules, we return the longest period of time until they can
// successfully make another call
if (reply.timeToReset < ruleResult.timeToNextReset) {
reply.timeToReset = ruleResult.timeToNextReset;
}
reply.allowed = false;
reply.numInvocationsLeft = 0;
reply.ruleId = rule.id;
rule._executeCallback(reply, input);
} else {
// If this is an allowed attempt and we haven't failed on any of the
// other rules that match, update the reply field.
if (rule.options.numRequestsAllowed - numInvocations <
reply.numInvocationsLeft && reply.allowed) {
reply.timeToReset = ruleResult.timeToNextReset;
reply.numInvocationsLeft = rule.options.numRequestsAllowed -
numInvocations;
}
reply.ruleId = rule.id;
rule._executeCallback(reply, input);
}
}
/**
* Adds a rule to dictionary of rules that are checked against on every call.
* Only inputs that pass all of the rules will be allowed. Returns unique rule
@@ -228,22 +263,36 @@ class RateLimiter {
increment(input) {
// Only increment rule counters that match this input
const matchedRules = this._findAllMatchingRules(input);
const _incrementForInput = (rule) => this._incrementRule(rule, input);
matchedRules.forEach(_incrementForInput);
}
/**
* Increment counters in every rule that match to this input
* @param {array} rules Array of rules to increment
* @param {object} input Dictionary object containing attributes that may
* match to rules
*/
incrementRules(rules, input) {
const _incrementForInput = (rule) => this._incrementRule(rule, input);
rules.forEach(_incrementForInput);
}
_incrementRule(rule, input) {
const ruleResult = rule.apply(input);
if (ruleResult.timeSinceLastReset > rule.options.intervalTime) {
// Reset all the counters since the rule has reset
rule.resetCounter();
}
// Check whether the key exists, incrementing it if so or otherwise
// adding the key and setting its value to 1
if (hasOwn.call(rule.counters, ruleResult.key)) {
rule.counters[ruleResult.key]++;
} else {
rule.counters[ruleResult.key] = 1;
}
}
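The increment path extracted into `_incrementRule` above has two steps: reset the window when the interval has elapsed, then bump the per-key counter. A toy version of that logic (the flat `rule` shape here is an assumption for illustration, not the package's `Rule` API):

```javascript
// Toy model of _incrementRule: reset the window when the interval has
// elapsed, then increment the counter for the input's key.
function incrementCounter(rule, key, now) {
  if (now - rule.lastResetTime > rule.intervalTime) {
    rule.counters = {};      // window elapsed: start a fresh window
    rule.lastResetTime = now;
  }
  rule.counters[key] = (rule.counters[key] || 0) + 1;
  return rule.counters[key];
}

const rule = { intervalTime: 1000, lastResetTime: 0, counters: {} };
incrementCounter(rule, 'user-1', 100);  // 1st hit in window
incrementCounter(rule, 'user-1', 200);  // 2nd hit in window
incrementCounter(rule, 'user-1', 1500); // window elapsed, counter resets to 1
```

Extracting this into one helper is what lets both `increment` and the new `incrementRules` share the same reset-then-count behavior.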
// Returns an array of all rules that apply to provided input
@@ -251,6 +300,16 @@ class RateLimiter {
return Object.values(this.rules).filter(rule => rule.match(input));
}
async _findAllMatchingRulesAsync(input) {
const matches = [];
for (const rule of Object.values(this.rules)) {
if (await rule.matchAsync(input)) {
matches.push(rule);
}
}
return matches;
}
/**
* Provides a mechanism to remove rules from the rate limiter. Returns boolean
* about success.

View File

@@ -5,9 +5,10 @@ import {
import { StreamClientCommon } from "./common.js";
// Statically importing SockJS here will prevent native WebSocket usage
// below (in favor of SockJS), but will ensure maximum compatibility for
// clients stuck in unusual networking environments.
// SockJS is imported statically to avoid the startup latency introduced by
// dynamic import() in _launchConnection(). When DISABLE_SOCKJS=1, SockJS
// remains bundled in the client, but the connection uses the native
// WebSocket path directly with no import-time delay.
import SockJS from "./sockjs-1.6.1-min-.js";
export class ClientStream extends StreamClientCommon {
@@ -157,20 +158,18 @@ export class ClientStream extends StreamClientCommon {
_launchConnection() {
this._cleanup(); // cleanup the old socket, if there was one.
if (__meteor_runtime_config__.DISABLE_SOCKJS) {
this.socket = new WebSocket(toWebsocketUrl(this.rawUrl));
} else {
const options = {
transports: this._sockjsProtocolsWhitelist(),
...this.options._sockjsOptions
};
// Convert raw URL to SockJS URL each time we open a connection, so
// that we can connect to random hostnames and get around browser
// per-host connection limits.
this.socket = new SockJS(toSockjsUrl(this.rawUrl), undefined, options);
}
this.socket.onopen = data => {
this.lastError = null;

View File

@@ -1,6 +1,6 @@
Package.describe({
name: "socket-stream-client",
version: '0.6.1',
version: '0.7.0-beta350.7',
summary: "Provides the ClientStream abstraction used by ddp-client",
documentation: "README.md"
});
@@ -12,7 +12,7 @@ Npm.depends({
"lodash.once": "4.1.1"
});
Package.onUse(function(api) {
Package.onUse(function (api) {
api.use("ecmascript");
api.use("modern-browsers");
api.use("retry"); // TODO Try to remove this.
@@ -23,7 +23,7 @@ Package.onUse(function(api) {
api.mainModule("node.js", "server", { lazy: true });
});
Package.onTest(function(api) {
Package.onTest(function (api) {
api.use("ecmascript");
api.use("tinytest");
api.use("test-helpers");

View File

@@ -1,11 +1,12 @@
Package.describe({
summary: 'Run tests noninteractively, with results going to the console.',
version: '2.0.1',
version: '2.0.2-beta350.7',
});
Package.onUse(function(api) {
Package.onUse(function (api) {
api.use(['tinytest', 'random', 'ejson', 'check', 'ecmascript']);
api.use('fetch', 'server');
api.use('jquery', 'client');
api.export('TEST_STATUS', 'client');

View File

@@ -15,6 +15,7 @@ export PATH=$METEOR_HOME:$PATH
export URL='http://127.0.0.1:4096/'
export METEOR_PACKAGE_DIRS='packages/deprecated'
export METEOR_NO_DEPRECATION=true
exec 3< <(./meteor test-packages --driver-package test-in-console -p 4096 --exclude ${TEST_PACKAGES_EXCLUDE:-''} $1)
EXEC_PID=$!

View File

@@ -1,6 +1,6 @@
Package.describe({
summary: "Serves a Meteor app over HTTP",
version: "2.1.0",
version: '2.2.0-beta350.7',
});
Npm.depends({

View File

@@ -26,6 +26,49 @@ const isMacOS = () => {
return platform() === 'darwin';
};
const getGroupNameForGid = (gid) => {
try {
const data = readFileSync('/etc/group', 'utf8');
const line = data
.trim()
.split('\n')
.find((groupLine) => {
const [, , groupGid] = groupLine.trim().split(':');
return Number(groupGid) === gid;
});
if (!line) return null;
const [name] = line.trim().split(':');
return name || null;
} catch {
return null;
}
};
const getWritableGroupName = () => {
const { gid, uid } = userInfo();
const gidsToTry = new Set();
if (typeof gid === 'number') {
gidsToTry.add(gid);
}
if (typeof process.getgroups === 'function') {
process.getgroups().forEach((groupId) => gidsToTry.add(groupId));
}
for (const groupId of gidsToTry) {
const groupName = getGroupNameForGid(groupId);
if (groupName) {
return groupName;
}
}
if (Boolean(process.env.TRAVIS)) return 'travis';
if (isMacOS()) return 'staff';
return uid === 0 ? 'root' : null;
};
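The `/etc/group` lookup above relies on the file's `name:passwd:gid:members` line format: split each line on `:` and compare the third field to the gid. A standalone check of that parsing (the sample group data is made up for illustration):

```javascript
// Find the group name for a gid in /etc/group-style data.
function groupNameForGid(data, gid) {
  const line = data
    .trim()
    .split('\n')
    .find((groupLine) => Number(groupLine.trim().split(':')[2]) === gid);
  if (!line) return null;
  return line.trim().split(':')[0] || null;
}

const sample = 'root:x:0:\nstaff:x:20:alice,bob\n';
console.log(groupNameForGid(sample, 20)); // 'staff'
```

Returning `null` on a miss lets the caller fall through to the next candidate gid, as `getWritableGroupName` does.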
const removeTestSocketFile = () => {
try {
unlinkSync(testSocketFile);
@@ -131,9 +174,16 @@ testAsyncMulti(
},
async (test) => {
// use UNIX_SOCKET_PATH and UNIX_SOCKET_GROUP
const groupToUse = getWritableGroupName();
if (!groupToUse) {
// Skip when no writable group could be determined for the current user.
test.isTrue(true);
return;
}
const { httpServer, server } = prepareServer();
process.env.UNIX_SOCKET_PATH = testSocketFile;
process.env.UNIX_SOCKET_GROUP = groupToUse;
process.env.UNIX_SOCKET_PERMISSIONS = '777';

View File

@@ -36,7 +36,7 @@ fi
echo Found build $DIRNAME
trap "echo Found surprising number of tarballs." EXIT
trap "echo 'Found surprising number of tarballs.'; aws s3 ls s3://com.meteor.jenkins/$DIRNAME/" EXIT
# Check to make sure the proper number of each kind of file is there.
aws s3 ls s3://com.meteor.jenkins/$DIRNAME/ | \
perl -nle 'if (/\.tar\.gz/) { ++$TAR } else { die "something weird" } END { exit !($TAR == 4) }'

View File

@@ -1,7 +1,7 @@
{
"track": "METEOR",
"version": "3.4-rc.4",
"version": "3.5-beta.7",
"recommended": false,
"official": false,
"description": "Meteor experimental release"
}

View File

@@ -1,6 +1,6 @@
{
"track": "METEOR",
"version": "3.4",
"version": "3.5",
"recommended": false,
"official": true,
"description": "The Official Meteor Distribution"

View File

@@ -5,10 +5,10 @@ set -u
UNAME=$(uname)
ARCH=$(uname -m)
NODE_VERSION=22.22.0
NODE_VERSION=24.14.0
MONGO_VERSION_64BIT=7.0.16
MONGO_VERSION_32BIT=3.2.22
NPM_VERSION=10.9.4
NPM_VERSION=11.10.1
if [ "$UNAME" == "Linux" ] ; then

View File

@@ -10,7 +10,7 @@ var packageJson = {
dependencies: {
// Explicit dependency because we are replacing it with a bundled version
// and we want to make sure there are no dependencies on a higher version
npm: "10.9.4",
npm: "11.10.1",
"node-gyp": "10.2.0",
"node-gyp-build": "4.8.4",
"@mapbox/node-pre-gyp": "1.0.11",

View File

@@ -0,0 +1,20 @@
# E2E Tests
Isolated Jest + Playwright environment for end-to-end testing Meteor skeletons and bundler integrations.
The repo root `node_modules/` is used to build the dev bundle, which becomes the Meteor tool itself. Installing test deps (jest, playwright, swc, cheerio, semver, underscore) there could pull in incompatible transitive versions (e.g. lru-cache v10 vs v5) and silently break the dev bundle build or a published Meteor release. This subfolder keeps test dependencies fully isolated so they never affect how Meteor is built or shipped.
Tests create real Meteor projects, start dev servers, and assert behavior in a headless Chromium browser.
All commands below should be run from the repo root:
```sh
# Install dependencies (first time)
npm run install:e2e
# Run all E2E tests
npm run test:e2e
# Run a specific suite
npm run test:e2e -- --testPathPattern skeleton
```

View File

@@ -1,14 +0,0 @@
module.exports = {
presets: [
[
'@babel/preset-env',
{
targets: {
node: 'current',
},
},
],
],
// This is needed to handle ES modules
sourceType: 'unambiguous',
};

View File

@@ -11,9 +11,14 @@ module.exports = {
transformIgnorePatterns: [
"/node_modules/(?!(execa|wait-on|is-docker|is-stream|human-signals|merge-stream|npm-run-path|onetime|mimic-fn|strip-final-newline|path-key|shebug-command|shebug-regex)/)"
],
// Use Babel to transform JavaScript files
transform: {
"^.+\\.js$": "babel-jest"
"^.+\\.js$": ["@swc/jest", {
jsc: {
parser: { syntax: "ecmascript" },
target: "es2022",
},
module: { type: "commonjs" },
}],
},
// Playwright configuration
globals: {

File diff suppressed because it is too large Load Diff

View File

@@ -1,19 +1,21 @@
{
"name": "meteor-modern-tests",
"version": "1.0.0",
"description": "Modern tests for Meteor",
"description": "Isolated Jest + Playwright environment for Meteor E2E tests",
"scripts": {
"test": "jest --config jest.config.js"
},
"devDependencies": {
"@babel/preset-env": "^7.21.3",
"babel-jest": "^29.0.0",
"@swc/core": "^1.15.18",
"@swc/jest": "^0.2.39",
"cheerio": "^1.0.0-rc.12",
"execa": "^5.1.1",
"fs-extra": "^11.3.1",
"jest": "^29.0.0",
"jest-playwright-preset": "^3.0.1",
"playwright": "1.58.0",
"semver": "^7.7.4",
"underscore": "^1.13.8",
"wait-on": "^7.0.0"
}
}

View File

@@ -93,7 +93,7 @@ describe('Meteor Skeletons /', () => {
);
describe(
'Full Library Skeleton /',
'Full Skeleton /',
testMeteorSkeleton({
skeletonName: 'full',
port: 3204,
@@ -116,8 +116,9 @@ describe('Meteor Skeletons /', () => {
test: 'tests/main.js',
},
bodyStyles: {
'font-family':
'Inter, -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif',
'font-family': process.platform === 'darwin'
? 'Inter, -apple-system, "system-ui", "Segoe UI", Roboto, sans-serif'
: 'Inter, -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif',
padding: '10px',
},
}),
@@ -150,7 +151,7 @@ describe('Meteor Skeletons /', () => {
);
describe(
'Tailwind Library Skeleton /',
'Tailwind Skeleton /',
testMeteorSkeleton({
skeletonName: 'tailwind',
port: 3208,

View File

@@ -130,7 +130,16 @@ export default class Matcher {
}
matchEmpty() {
if (this.buf.length > 0) {
if (this.buf.length === 0) return;
// Strip Node.js runtime warning lines before checking for unexpected output.
// These originate from third-party packages (e.g. http-proxy using the
// deprecated url.parse() API) and should not cause test failures.
// Pattern covers: "(node:NNNN) Warning: ...\n(Use `node --trace-warnings ...`)\n"
const nodeWarningRe = /\(node:\d+\) \w+: [^\n]+\n(?:\(Use [^\n]+\)\n)?/g;
const stripped = this.buf.replace(nodeWarningRe, '');
if (stripped.length > 0) {
Console.info("Extra junk is :", this.buf);
throw new TestFailure('junk-at-end', { run: this.run });
}
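The warning-stripping regex above can be exercised on a representative buffer (the sample output below is fabricated for illustration; it mimics the Node.js deprecation warning the comment describes):

```javascript
// Same pattern as matchEmpty(): strips "(node:PID) Warning: ..." lines,
// optionally followed by the "(Use `node --trace-warnings ...`)" hint line.
const nodeWarningRe = /\(node:\d+\) \w+: [^\n]+\n(?:\(Use [^\n]+\)\n)?/g;

const buf =
  '(node:12345) DeprecationWarning: url.parse() is deprecated\n' +
  '(Use `node --trace-warnings ...` to show where the warning was created)\n' +
  'real output\n';

console.log(buf.replace(nodeWarningRe, '')); // 'real output\n'
```

Only after this replacement does `matchEmpty()` treat remaining text as unexpected junk, so third-party runtime warnings no longer fail the test run.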

View File

@@ -0,0 +1,23 @@
# Unit Tests
Isolated Jest environment for unit-testing Meteor `tools/` and `scripts/`.
The repo root `node_modules/` is used to build the dev bundle, which becomes the Meteor tool itself. Installing test deps (jest, swc, semver, underscore) there could pull in incompatible transitive versions (e.g. lru-cache v10 vs v5) and silently break the dev bundle build or a published Meteor release. This subfolder keeps test dependencies fully isolated so they never affect how Meteor is built or shipped.
Test files use `*.test.js` next to their source.
All commands below should be run from the repo root:
```sh
# Install dependencies (first time)
npm run install:unit
# Run all unit tests
npm run test:unit
# Run a specific test file
npm run test:unit -- tools/path/to/file.test.js
# Run tests matching a name pattern
npm run test:unit -- -t "my test name"
```

View File

@@ -0,0 +1,42 @@
const path = require('path');
const repoRoot = path.resolve(__dirname, '../..');
module.exports = {
rootDir: repoRoot,
testMatch: [
"<rootDir>/tools/**/*.test.js",
"<rootDir>/scripts/**/*.test.js",
],
testPathIgnorePatterns: [
"/node_modules/",
"<rootDir>/tools/modern-tests/",
"<rootDir>/tools/tests/",
"<rootDir>/packages/",
"<rootDir>/.github/",
],
modulePathIgnorePatterns: [
"<rootDir>/tools/modern-tests/",
"<rootDir>/tools/tests/",
"<rootDir>/tools/static-assets/",
"<rootDir>/npm-packages/",
"<rootDir>/scripts/admin/",
"<rootDir>/docs/",
"<rootDir>/packages/non-core/",
],
modulePaths: [
path.resolve(__dirname, 'node_modules'),
],
transform: {
"^.+\\.js$": [require.resolve("@swc/jest"), {
jsc: {
parser: { syntax: "ecmascript" },
target: "es2022",
},
module: { type: "commonjs" },
}],
},
transformIgnorePatterns: ["/node_modules/"],
testTimeout: 10_000,
verbose: true,
};

tools/unit-tests/package-lock.json generated Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,16 @@
{
"name": "meteor-unit-tests",
"version": "1.0.0",
"private": true,
"description": "Isolated Jest environment for Meteor unit tests",
"scripts": {
"test": "jest --config jest.config.js --passWithNoTests"
},
"devDependencies": {
"@swc/core": "^1.15.18",
"@swc/jest": "^0.2.39",
"jest": "^30.2.0",
"semver": "^7.7.2",
"underscore": "^1.13.7"
}
}

tools/utils/utils.test.js Normal file
View File

@@ -0,0 +1,210 @@
jest.mock('./archinfo', () => ({
host: jest.fn(() => 'os.osx.x86_64'),
matches: jest.fn((host, pattern) => host.startsWith(pattern)),
}));
jest.mock('./buildmessage.js', () => ({
error: jest.fn(),
}));
jest.mock('../fs/files', () => ({
stat: jest.fn(),
inCheckout: jest.fn(() => true),
getToolsVersion: jest.fn(() => '3.0.0'),
getCurrentToolsDir: jest.fn(() => '/mock/tools'),
convertToOSPath: jest.fn(p => p),
pathJoin: jest.fn((...args) => args.join('/')),
}));
jest.mock('../packaging/package-version-parser.js', () => ({
parsePackageConstraint: jest.fn(),
validatePackageName: jest.fn((name) => {
if (name === 'INVALID') {
const err = new Error('bad package name');
err.versionParserError = true;
throw err;
}
}),
parse: jest.fn((version) => {
if (version === 'bad') {
const err = new Error('bad version');
err.versionParserError = true;
throw err;
}
return version;
}),
}));
const utils = require('./utils');
const buildmessage = require('./buildmessage.js');
describe('parseUrl', () => {
test.each([
['3000', {}, { port: '3000', hostname: undefined, protocol: undefined }],
['4000', { hostname: 'h', protocol: 'https' }, { port: '4000', hostname: 'h', protocol: 'https' }],
['localhost', {}, { hostname: 'localhost' }],
['localhost:3000', {}, { hostname: 'localhost', port: '3000', protocol: undefined }],
['https://ex.com:8080/path', {}, { protocol: 'https', hostname: 'ex.com', port: '8080', pathname: '/path' }],
['ex.com:3000', { protocol: 'https' }, { protocol: 'https', hostname: 'ex.com', port: '3000' }],
['http://ex.com', { protocol: 'https' }, { protocol: 'http', hostname: 'ex.com' }],
['http://ex.com', { port: '9999' }, { protocol: 'http', hostname: 'ex.com', port: '9999' }],
])('parseUrl(%s) with defaults %j', (input, defaults, expected) => {
const result = utils.parseUrl(input, defaults);
expect(result).toMatchObject(expected);
});
test('excludes pathname for root path', () => {
expect(utils.parseUrl('http://ex.com/').pathname).toBeUndefined();
});
});
describe('hasScheme', () => {
test.each([
['http://x', true], ['https://x', true], ['git+ssh://x', true],
['my2proto://x', true], ['example.com', false], ['3000', false],
['http:x', false], ['2http://x', false],
])('(%s) = %s', (input, expected) => {
expect(!!utils.hasScheme(input)).toBe(expected);
});
});
describe('isIPv4Address', () => {
test.each([
['192.168.1.1', true], ['0.0.0.0', true], ['255.255.255.255', true],
['localhost', false], ['192.168.1', false], ['::1', false], ['1.2.3.4.5', false],
])('(%s) = %s', (input, expected) => {
expect(!!utils.isIPv4Address(input)).toBe(expected);
});
});
describe('validEmail', () => {
test.each([
['user@example.com', true], ['a.b@mail.co.uk', true],
['user+tag@example.com', true], ['a@my-host.com', true],
['userexample.com', false], ['user@', false], ['@example.com', false],
['us er@x.com', false], ['', false], ['u@x.c', false],
])('(%s) = %s', (input, expected) => {
expect(utils.validEmail(input)).toBe(expected);
});
});
describe('quotemeta', () => {
test.each([
['a.b*c+d?e', 'a\\.b\\*c\\+d\\?e'],
['[a](b)\\c', '\\[a\\]\\(b\\)\\\\c'],
['abc123', 'abc123'],
])('(%s) = %s', (input, expected) => {
expect(utils.quotemeta(input)).toBe(expected);
});
test('escaped string works as literal RegExp', () => {
const s = 'price: $100 (USD)';
expect(new RegExp(utils.quotemeta(s)).test(s)).toBe(true);
});
});
describe('defaultOrderKeyForReleaseVersion', () => {
test.each([
['1.2.3', '0001.0002.0003$'],
['5', '0005$'],
['1.2.3.4', '0001.0002.0003.0004$'],
['1.0-beta', '0001.0000!beta!!!!!!!!!!!$'],
['1.0-beta.rc3', '0001.0000!beta.rc!!!!!!!!0003$'],
])('(%s) = %s', (input, expected) => {
expect(utils.defaultOrderKeyForReleaseVersion(input)).toBe(expected);
});
test('prerelease key contains ! and tag, ends with $', () => {
const key = utils.defaultOrderKeyForReleaseVersion('1.0-rc1');
expect(key).toMatch(/!.*rc.*\$$/);
});
test('sort order: prerelease < release, 1.2 < 1.2.3, 2 < 10', () => {
const k = (v) => utils.defaultOrderKeyForReleaseVersion(v);
expect(k('1.0-rc1') < k('1.0')).toBe(true);
expect(k('1.2') < k('1.2.3')).toBe(true);
expect(k('2') < k('10')).toBe(true);
});
test.each([
'abc', '01.2.3', '1.02.3', '1.0-rc01', '12345', '',
])('returns null for invalid input: %s', (input) => {
expect(utils.defaultOrderKeyForReleaseVersion(input)).toBeNull();
});
});
describe('generateSubsetsOfIncreasingSize', () => {
test('enumerates all subsets in order and supports early stop', () => {
const all = [];
utils.generateSubsetsOfIncreasingSize([1, 2, 3], (s) => { all.push([...s]); });
expect(all).toEqual([[], [1], [2], [3], [1, 2], [1, 3], [2, 3], [1, 2, 3]]);
const stopped = [];
utils.generateSubsetsOfIncreasingSize([1, 2, 3], (s) => {
stopped.push([...s]);
return s.length === 2;
});
expect(stopped).toEqual([[], [1], [2], [3], [1, 2]]);
});
test('empty array yields only the empty subset', () => {
const r = [];
utils.generateSubsetsOfIncreasingSize([], (s) => { r.push([...s]); });
expect(r).toEqual([[]]);
});
});
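A hedged sketch of an enumerator that satisfies both tests — subsets visited in order of increasing size, with a truthy callback return value stopping the walk (structure is illustrative, not the actual source):

```javascript
// Visit all subsets of `items` in order of increasing size, calling
// `cb` with each subset; stop early if `cb` returns a truthy value.
function generateSubsetsOfIncreasingSize(items, cb) {
  for (let size = 0; size <= items.length; size++) {
    // Recursively pick `size` indices in lexicographic order.
    const walk = (start, chosen) => {
      if (chosen.length === size) return cb(chosen.map((i) => items[i]));
      for (let i = start; i <= items.length - (size - chosen.length); i++) {
        if (walk(i + 1, [...chosen, i])) return true;
      }
      return false;
    };
    if (walk(0, [])) return; // early stop requested by the callback
  }
}
```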
describe('URL scheme matchers', () => {
test.each([
['isUrlWithFileScheme', 'file:///path', true],
['isUrlWithFileScheme', 'file://host/path', true],
['isUrlWithFileScheme', 'file://', false],
['isUrlWithFileScheme', 'http://x', false],
['isUrlWithSha', `https://x/${'a'.repeat(40)}`, true],
['isUrlWithSha', `http://x/${'b'.repeat(40)}`, true],
['isUrlWithSha', 'https://x/abc123', false],
['isUrlWithSha', 'not-a-url', false],
['isNpmUrl', 'git://github.com/r', true],
['isNpmUrl', 'git+ssh://git@github.com/r', true],
['isNpmUrl', 'git+http://github.com/r', true],
['isNpmUrl', 'git+https://github.com/r', true],
['isNpmUrl', 'https://x/pkg', true],
['isNpmUrl', 'http://x/pkg', true],
['isNpmUrl', 'lodash', false],
])('%s(%s) = %s', (fn, input, expected) => {
expect(!!utils[fn](input)).toBe(expected);
});
});
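One plausible reading of these matchers as regular expressions — hedged sketches inferred from the table above, not the actual predicates:

```javascript
// A file URL must have something after 'file://' -- either a third
// slash plus a path, or a host; bare 'file://' is rejected.
const isUrlWithFileScheme = (s) => /^file:\/\/.+/.test(s);

// A URL "with SHA" ends in a full 40-character lowercase hex git SHA.
const isUrlWithSha = (s) => /^https?:\/\/.*\/[0-9a-f]{40}$/.test(s);

// npm-installable URLs: git, git+ssh/http/https, or plain http(s).
const isNpmUrl = (s) =>
  /^(git|git\+ssh|git\+http|git\+https|https?):\/\//.test(s);
```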
describe('sourceMapLength', () => {
test.each([
[null, 0],
[undefined, 0],
[{ mappings: 'AAAA' }, 4],
[{ mappings: 'ABC', sourcesContent: ['hello', 'world'] }, 13],
[{ mappings: 'AB', sourcesContent: [null, 'code', null] }, 6],
])('sourceMapLength(%j) = %s', (input, expected) => {
expect(utils.sourceMapLength(input)).toBe(expected);
});
});
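The cases imply a simple size metric: the length of the `mappings` string plus the lengths of any non-null `sourcesContent` entries, with falsy maps counting as zero. A hedged sketch:

```javascript
// Approximate "size" of a source map: mappings length plus the total
// length of non-null sourcesContent strings; null/undefined maps are 0.
function sourceMapLength(sm) {
  if (!sm) return 0;
  const contentLength = (sm.sourcesContent || []).reduce(
    (acc, src) => acc + (src ? src.length : 0),
    0,
  );
  return (sm.mappings ? sm.mappings.length : 0) + contentLength;
}
```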
describe('parsePackageAndVersion', () => {
test.each([
['my-pkg 1.0.0', { package: 'my-pkg', version: '1.0.0' }],
['my-pkg@2.0.0', { package: 'my-pkg', version: '2.0.0' }],
['user:pkg 1.0.0', { package: 'user:pkg', version: '1.0.0' }],
])('parses %s', (input, expected) => {
expect(utils.parsePackageAndVersion(input)).toEqual(expected);
});
test('throws for missing separator or invalid version', () => {
expect(() => utils.parsePackageAndVersion('noseparator')).toThrow('Malformed package version');
expect(() => utils.parsePackageAndVersion('pkg bad')).toThrow();
});
test('returns null with useBuildmessage on malformed input', () => {
buildmessage.error.mockClear();
expect(utils.parsePackageAndVersion('noseparator', { useBuildmessage: true })).toBeNull();
expect(buildmessage.error).toHaveBeenCalled();
});
});
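A hedged sketch of the parser these tests describe: a space or `@` splits package from version, scoped names like `user:pkg` stay intact, and malformed input throws. The `useBuildmessage` path and full semver validation are omitted here:

```javascript
// Split "<package> <version>" or "<package>@<version>"; throw on
// malformed input. Version validation is simplified -- the real code
// performs a stricter semver check.
function parsePackageAndVersion(str) {
  const match = /^(.+?)[\s@](.+)$/.exec(str);
  if (!match) throw new Error(`Malformed package version: ${str}`);
  const [, pkg, version] = match;
  if (!/^\d+\.\d+(\.\d+)?$/.test(version)) {
    throw new Error(`Malformed package version: ${str}`);
  }
  return { package: pkg, version };
}
```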

View File

@@ -616,6 +616,10 @@ export default defineConfig({
{
text: "Performance",
items: [
{
text: "Change Streams Observer Driver",
link: "/performance/change-streams-observer-driver",
},
{
text: "Performance Improvements",
link: "/performance/performance-improvement",

View File

@@ -35,6 +35,41 @@ By default, Meteor uses Local Storage to store, among other things, login tokens
}
```
### Accounts with HttpOnly Cookies {#accounts-httponly-cookies}
Meteor 3.3 introduces a native flow to keep the persistent resume token in an HttpOnly cookie instead of in Web Storage. This protects the token from malicious scripts and pairs nicely with in-memory client storage. Enable the feature with two small changes:
1. On the server, call `Accounts.config` during startup and set both options:
```ts
import { Accounts } from "meteor/accounts-base";
import { Meteor } from "meteor/meteor";
Meteor.startup(() => {
Accounts.config({
clientStorage: "none",
useHttpOnlyCookies: true,
});
});
```
2. Surface the same flags to the client via settings so the browser-side Accounts instance starts with the right defaults:
```json
{
"public": {
"packages": {
"accounts": {
"clientStorage": "none",
"useHttpOnlyCookies": true
}
}
}
}
```
After restarting the app and logging in, `Meteor.loginToken*` keys should no longer appear in `localStorage`. Instead, the browser receives an HttpOnly `meteor_login_token` cookie and the client keeps credentials in memory only for the active tab. If you later disable the feature, remember to revert both the server configuration and the public settings so that Accounts resumes using Web Storage.
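To make that check repeatable, a small helper can scan a Web Storage-like object for leftover resume-token keys. This is a hypothetical helper (not part of the accounts API); the key prefix matches the `Meteor.loginToken*` names mentioned above:

```javascript
// Return any Web Storage keys that still look like Meteor resume
// tokens; with HttpOnly cookies enabled this should be empty after
// a fresh login. (Hypothetical helper, not part of the accounts API.)
function findLeakedTokenKeys(storage) {
  return Object.keys(storage).filter((k) =>
    k.startsWith('Meteor.loginToken'),
  );
}

// In the browser console: findLeakedTokenKeys(localStorage)
```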
<ApiBox name="Meteor.user" hasCustomExample/>
Retrieves the user record for the current user from

View File

@@ -993,16 +993,19 @@ contains the following fields:
security risk for this transport. For details and alternatives, see
the [SockJS documentation](https://github.com/sockjs/sockjs-node#authorisation).
> Currently when a client reconnects to the server (such as after
> temporarily losing its Internet connection), it will get a new
> connection each time. The `onConnection` callbacks will be called
> again, and the new connection will have a new connection `id`.
## Reconnection
> In the future, when client reconnection is fully implemented,
> reconnecting from the client will reconnect to the same connection on
> the server: the `onConnection` callback won't be called for that
> connection again, and the connection will still have the same
> connection `id`.
Meteor 3.5+ supports [DDP session resumption](https://github.com/meteor/meteor/pull/14051), allowing clients to automatically resume their previous connection after a temporary network disconnect. When a client reconnects within the grace period, the `onConnection` callback is not called again and the connection retains its original `id`.
This behavior is controlled by the following server options:
### Meteor.server.options.disconnectGracePeriod
Defines how long (in milliseconds) a session is kept alive after a non-graceful disconnect before it is destroyed. Sessions that reconnect within this window are resumed with minimal performance impact. Defaults to `15000`.
### Meteor.server.options.maxMessageQueueLength
Determines how many messages to queue for a non-gracefully disconnected session before the session is destroyed, to help prevent memory leaks. Defaults to `100`.
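A hedged sketch of tuning both options at server startup. The option names come from this section, but how your Meteor version expects them to be set may differ — treat the mutation point as an assumption to verify:

```javascript
import { Meteor } from "meteor/meteor";

Meteor.startup(() => {
  // Assumed mutation point: the server options object described above.
  Meteor.server.options.disconnectGracePeriod = 30000; // keep sessions for 30s
  Meteor.server.options.maxMessageQueueLength = 200;   // queue up to 200 messages
});
```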
<ApiBox name="DDP.connect" hasCustomExample/>

View File

@@ -24,6 +24,10 @@ Please bear in mind if you are adding a package to this list, try providing as m
## List of Community Packages
#### AI/LLM helpers
- [Wormhole](./wormhole.md), Meteor Wormhole, an MCP and REST API endpoint creator
#### Method/Subscription helpers
- [`meteor-rpc`](./meteor-rpc.md), Meteor Methods Evolved with type checking and runtime validation

Some files were not shown because too many files have changed in this diff