Compare commits


33 Commits

Author SHA1 Message Date
trop[bot]
7be2386a5f fix: trigger ShipIt Mach service after SMJobSubmit to unblock on-demand-only mode (#51212)
* fix: trigger ShipIt Mach service to unblock on-demand-only mode

When a macOS system update is pending, launchd puts the user domain
into on-demand-only mode, preventing ShipIt from starting. The
MachServices endpoint in the job dictionary was registered but never
connected to (a leftover from the XPC removal in 2013).

Instead of removing MachServices, fire a lightweight XPC connection
to the Mach port after SMJobSubmit. This satisfies launchd's
on-demand trigger, starting ShipIt immediately while preserving
KeepAlive retry behavior.

Co-Authored-By: Claude <svc-devxp-claude@slack-corp.com>

Co-authored-by: Keeley Hammond <khammond@slack-corp.com>

* fix: add ResetAtClose to ShipIt MachServices to prevent standing demand

The XPC trigger message sent after SMJobSubmit sits in the Mach port's
kernel queue unread. Without ResetAtClose, this creates standing demand
that causes launchd to respawn ShipIt after a successful exit(0),
defeating KeepAlive.SuccessfulExit = NO.

Set ResetAtClose on the MachServices registration so launchd tears down
and recreates the port when ShipIt exits, flushing the stale trigger.

Co-Authored-By: Claude <svc-devxp-claude@slack-corp.com>

Co-authored-by: Keeley Hammond <khammond@slack-corp.com>

* fix: drain Mach port before exit(0) instead of using ResetAtClose

ResetAtClose blocks KeepAlive.SuccessfulExit retries in on-demand-only
mode because it removes demand when the port resets. Instead, have
ShipIt drain its own Mach service port (via bootstrap_check_in +
mach_msg) before each exit(EXIT_SUCCESS). This clears the standing
demand from the trigger message so launchd won't respawn after a
successful exit, while leaving the message in place on failure exits
so KeepAlive retries remain demand-backed.

Tested in on-demand-only mode (pending macOS update):
- exit(0) + drain: 1 run, no respawn ✓
- exit(1) + no drain: continuous respawn every 2s ✓

Co-Authored-By: Claude <svc-devxp-claude@slack-corp.com>

Co-authored-by: Keeley Hammond <khammond@slack-corp.com>

* chore: update patch

Co-authored-by: Keeley Hammond <khammond@slack-corp.com>

* chore: harden ShipIt Mach trigger and simplify port drain

Scope the XPC trigger to the unprivileged path and add a send barrier
so the connection cannot be released before the message is on the wire.
Reduce drainMachServicePort to bootstrap_check_in (process exit flushes
the queue), dropping the mach_msg loop whose buffer/dealloc usage was
incorrect, and remove the no-op drain from the posix_spawn'd launch
helper. Patch filename regenerated to match the commit subject.

Co-authored-by: Samuel Attard <sattard@anthropic.com>

* fix: restore explicit mach_msg drain in drainMachServicePort

bootstrap_check_in alone does not prevent respawn: launchd tracks
outstanding demand independently of the receive right's lifetime, so the
queued trigger message must be explicitly dequeued with mach_msg before
exit(0). Verified empirically (check-in-only: 5 respawns in 10s; full
drain: 1 run). Keep the correctness fixes from the previous commit
(4K buffer, mach_msg_destroy on each receive, no mach_port_deallocate).

Co-authored-by: Samuel Attard <sattard@anthropic.com>

---------

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Keeley Hammond <khammond@slack-corp.com>
Co-authored-by: Samuel Attard <sattard@anthropic.com>
2026-04-21 18:52:22 +00:00
trop[bot]
e9d5977bde fix: add crash diagnostics for ARM64 power notification crash (#51205)
On ARM64 Windows, UnregisterSuspendResumeNotification (user32) forwards
to PowerUnregisterSuspendResumeNotification (powrprof), which treats the
HPOWERNOTIFY handle as a pointer and dereferences it. The user32 API
returns an opaque handle, not a pointer-backed allocation, causing an
access violation at shutdown.

Add crash keys (pm-reg-handle, pm-reg-memstate, pm-unreg-memstate) to capture:
- The handle value
- VirtualQuery memory state at both registration and unregistration

If the handle address is MEM_FREE, it confirms the handle is an opaque
index and powrprof is incorrectly dereferencing it. If MEM_COMMIT, it
would indicate a use-after-free of the underlying allocation.

Refs https://github.com/MicrosoftDocs/sdk-api/blob/docs/sdk-api-src/content/powerbase/nf-powerbase-powerunregistersuspendresumenotification.md

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: deepak1556 <hop2deep@gmail.com>
2026-04-21 18:11:21 +02:00
electron-roller[bot]
687b9da1ad chore: bump node to v24.15.0 (42-x-y) (#51090)
* chore: bump node in DEPS to v24.15.0

* fix(patch): adapt V8 sandboxed pointers for buffer kMaxLength

Upstream replaced the hardcoded buffer length limit with a runtime
kMaxLength variable, making the patch's regex workaround for sandbox
vs non-sandbox limits unnecessary. Dropped the test-buffer-concat.js
hunk.

Ref: https://github.com/nodejs/node/pull/61721

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(patch): adapt deprecated GetIsolate for upstream refactors

Upstream removed Uint32ToName from node_contextify.cc and
node_webstorage.cc, and renamed LookupAndCompile to
LookupAndCompileFunction in node_builtins.cc. Updated the
GetIsolate deprecation patch to match.

Ref: https://github.com/nodejs/node/pull/60846
Ref: https://github.com/nodejs/node/pull/60518

* chore: remove upstreamed patch

The fix_generate_config_gypi_needs_to_generate_valid_json patch
applied with "No changes -- Patch already applied", confirming
the fix has been incorporated upstream.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* node#60518: src: build v8 tick processor as built-in source text modules

Upstream restructured BuiltinLoader to auto-detect parameters by
source type, removing the custom parameters overload. Added a new
LookupAndCompileFunction overload for embedder scripts and updated
node_util.cc to use it. Also suppressed exit-time-destructors
warning from builtin_info.h in node_includes.h.

Ref: https://github.com/nodejs/node/pull/60518

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(patch): add LookupAndCompileFunction overload for embedder scripts

Ref: https://github.com/nodejs/node/pull/60518

* fix(patch): stop using v8::PropertyCallbackInfo<T>::This() in sqlite

Ref: https://github.com/nodejs/node/issues/60616

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(patch): adapt new crypto tests for BoringSSL

Guard aes-128-ccm test in test-crypto-authenticated.js behind cipher
availability check. Skip Ed448/X448/DSA tests in
test-crypto-key-objects-raw.js. Skip AES-KW tests in
test-webcrypto-promise-prototype-pollution.mjs.

Ref: https://github.com/nodejs/node/pull/62240
Ref: https://github.com/nodejs/node/pull/62455

* fix(patch): guard DH key test for BoringSSL

BoringSSL does not support loading DH private keys from PEM, causing
createPrivateKey to throw UNSUPPORTED_ALGORITHM.

Ref: https://github.com/nodejs/node/pull/62240

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(patch): correct thenable snapshot for Chromium V8

The snapshot used `*` wildcards which don't match the actual output.
Regenerated with NODE_REGENERATE_SNAPSHOTS=1 to capture the correct
concrete frame + <node-internal-frames> output.

Ref: https://chromium-review.googlesource.com/c/v8/v8/+/6826001

* fix(patch): GN build files for new merve dep

Ref: https://github.com/nodejs/node/pull/61984

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(patch): adapt fileExists patch to resolve.js module reorg

Ref: https://github.com/nodejs/node/pull/61769

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: update patches (trivial only)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: electron-roller[bot] <84116207+electron-roller[bot]@users.noreply.github.com>
Co-authored-by: Shelley Vohr <shelley.vohr@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-21 16:03:33 +02:00
trop[bot]
4a128fc887 ci: don't upload build stats on Windows if build fails (#51203)
Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: David Sanders <dsanders11@ucsbalum.com>
2026-04-21 12:53:45 +00:00
trop[bot]
1cdbebbd4b chore: add Phase Three (node smoke tests) to Node.js upgrade skill (#51185)
Adds test suite workflow, BoringSSL incompatibility reference table,
snapshot regeneration instructions, and commit guidelines.

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Shelley Vohr <shelley.vohr@gmail.com>
2026-04-21 10:46:56 +02:00
trop[bot]
abd47d40cb fix: intermittent CI failure is-not-alwaysOnTop (#51133)
* fix: intermittent CI failure is-not-alwaysOnTop

Ensure that the `always-on-top-changed` event always fires with the
right 'alwaysOnTop' boolean, regardless of interaction between
SetZOrderLevel() and MoveBehindTaskBarIfNeeded(). We know what the
value will be when all of the HWND events settle, so use that value.

Co-authored-by: Charles Kerr <charles@charleskerr.com>

* test: temporary commit to torture-test the new change with 1000 iterations

Co-authored-by: Charles Kerr <charles@charleskerr.com>

* test: keep eventually-becomes-consistent test but do not loop 1000 times

Co-authored-by: Charles Kerr <charles@charleskerr.com>

---------

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Charles Kerr <charles@charleskerr.com>
2026-04-20 15:25:53 -07:00
trop[bot]
36a610a9d0 docs: update versioning references (#51172)
* docs: update versioning references

Co-authored-by: Erick Zhao <erick@hotmail.ca>

* fixups

Co-authored-by: Erick Zhao <erick@hotmail.ca>

---------

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Erick Zhao <erick@hotmail.ca>
2026-04-20 13:25:58 -07:00
trop[bot]
30d5e51a0e build: update ANGLE repository URL to GitHub mirror (#51168)
Clone angle from github.com/google/angle in fix-sync action

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
2026-04-20 12:36:43 -07:00
trop[bot]
7790ade15a build: resolve electron_version from git when building in a worktree (#51166)
BUILD.gn previously hard-coded read_file(".git/packed-refs", ...) and
".git/HEAD" to derive electron_version. In a `git worktree` checkout
.git is a file containing a gitdir: pointer, not a directory, so GN's
read_file() fails and gn gen aborts unless override_electron_version is
set manually.

Ask git itself for the real locations via `git rev-parse --git-dir` /
`--git-common-dir` in a small helper script, and feed those resolved
paths to read_file() and the exec_script dependency list. Behaviour in
a plain clone is unchanged (both resolve to electron/.git/...), and the
tarball case still fails loudly with a pointer to
override_electron_version.

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Sam Attard <sattard@anthropic.com>
2026-04-20 08:24:33 -05:00
trop[bot]
bffc44fae9 fix: linux test shutdown error "AttributeError: type object 'DBusTestCase' has no attribute 'stop_dbus'" (#51149)
stop_dbus() was removed on 2025-09-14 by 99c4800e9e.

I think CI isn't seeing this yet because its image has an older version.

This patched script should work on old & new versions of python-dbusmock.

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Charles Kerr <charles@charleskerr.com>
2026-04-19 00:52:58 -07:00
Samuel Attard
6da0023312 chore: cherry-pick 10 changes from chromium, dawn, pdfium (#51136)
* chore: cherry-pick b173791bf402 from chromium

* chore: cherry-pick be87466afecb from chromium

* chore: cherry-pick c0390bcd64ba from chromium

* chore: cherry-pick 7c11e1188705 from dawn

* chore: cherry-pick 1b69067db7d2 from chromium

* chore: cherry-pick d513cd2fe668 from chromium

* chore: cherry-pick dc5e20c4c055 from chromium

* chore: cherry-pick 847b11ad2fa3 from chromium

* chore: cherry-pick bce2e6728279 from pdfium

* chore: cherry-pick fc79e8cc2dfc from chromium

* chore: add patches/config.json entries for dawn and pdfium

* chore: restore compact patches/config.json formatting

* chore: update patches

---------

Co-authored-by: PatchUp <73610968+patchup[bot]@users.noreply.github.com>
2026-04-18 17:02:53 -07:00
Samuel Attard
88cc5da6cf refactor: attach translator holder via v8::Function data slot (#51119)
refactor: attach translator holder via v8::Function data slot (#50867)

(cherry picked from commit bfa5c93332)
2026-04-17 17:13:04 -05:00
Samuel Attard
d91a5e78e5 fix: use fresh LazyNow for OnEndWorkItemImpl to fix TimeKeeper DCHECK (#51101)
fix: use fresh LazyNow for OnEndWorkItemImpl to fix TimeKeeper DCHECK (#50418)
2026-04-17 15:16:10 +02:00
trop[bot]
b38c88664e fix: remove vestigial MachServices from ShipIt launchd job (#51111)
Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Keeley Hammond <khammond@slack-corp.com>
2026-04-17 00:20:13 +00:00
trop[bot]
687bd0a1f0 fix: fix types in devtools console for release (#51108)
Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Keeley Hammond <khammond@slack-corp.com>
2026-04-16 15:15:58 -07:00
trop[bot]
9d194e28ab ci: build a patched siso for Windows builds (#51093)
* ci: build a patched siso for Windows builds

The Windows Chromium builds intermittently fail during manifest load
with 'The parameter is incorrect.' (ERROR_INVALID_PARAMETER) out of
bindflt.sys. Root cause is a handle-relative NtCreateFile race in
siso/toolsupport/ninjautil/file_parser.go, which opens each subninja
twice — once in the outer goroutine and once more per chunk for
ReadAt. (*os.File).ReadAt is documented as safe for concurrent use,
so the extra open is redundant and removing it both halves the
CreateFileW calls per subninja and sidesteps the race.

Add a new build-siso-windows job on ubuntu-latest (runs in parallel
with checkout-windows) that:

- reads chromium_version from DEPS and pulls the matching siso_version
  SHA from the Chromium mirror's DEPS at that ref
- shallow-clones chromium.googlesource.com/build at that SHA
- applies the in-tree patches under .github/siso-patches/ via git am
- cross-compiles siso.exe for windows/amd64
- caches the binary keyed on siso SHA + sha256 of the patches, so
  subsequent runs hit the cache and skip the clone/patch/build steps
- uploads the result as a siso-windows-amd64 artifact

The Windows build jobs now depend on build-siso-windows, download the
artifact into $RUNNER_TEMP/siso, and export SISO_PATH, which
depot_tools/siso.py already honors. Mirrored into windows-publish.yml
and the regenerated pipeline-segment-electron-publish.yml so release
builds pick it up too.

Notes: none

Co-authored-by: Sam Attard <sattard@anthropic.com>

* ci: extract siso build into a reusable workflow segment

Move the build-siso-windows job body into
pipeline-segment-build-siso-windows.yml and call it from both build.yml
and windows-publish.yml via workflow_call. Also pin actions/cache to
v5.0.5 and add version comments next to the action SHAs introduced by
this change.

Co-authored-by: Sam Attard <sattard@anthropic.com>

---------

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Sam Attard <sattard@anthropic.com>
2026-04-16 15:11:57 -07:00
trop[bot]
b6de8acc8a chore: add Node.js skill to settings (#51106)
Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Shelley Vohr <shelley.vohr@gmail.com>
2026-04-16 15:10:59 -07:00
trop[bot]
b51e62e560 test: add tab source ID tests for media handler (#51095)
* test: add getMediaSourceId tab source coverage

Co-authored-by: Charles Kerr <charles@charleskerr.com>

* chore: move captureWithTabSourceId() to a shared helper

Co-authored-by: Charles Kerr <charles@charleskerr.com>

* test: improve "webContents module getMediaSourceId()" testing

Co-authored-by: Charles Kerr <charles@charleskerr.com>

---------

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Charles Kerr <charles@charleskerr.com>
2026-04-16 15:25:33 -04:00
trop[bot]
f1958d838c fix: show 'Electron Isolated Context' in Dev Tools (#51079)
Because of a bug introduced by the [upstream refactor][0], Dev Tools stopped
showing 'Electron Isolated Context' in the execution context selector.
'Electron Isolated Context' runs with its origin set to `file://`. Since the
domain name is empty for that origin, the corresponding UI item in the
context selector is created with an empty `subtitle`. However, after the
upstream change, items with an empty `title` or `subtitle` are omitted
from rendering.

Here we float an [in-review patch][1] until it is fixed upstream.

[0]: dbb61cf4b2
[1]: https://chromium-review.googlesource.com/c/devtools/devtools-frontend/+/7761316

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Fedor Indutny <indutny@signal.org>
2026-04-16 15:22:49 -04:00
trop[bot]
8b2dba3726 fix: prevent uaf when destroying guest WebContents during event emission (#51082)
fix: prevent use-after-free when destroying guest WebContents during event emission

Multiple event emission sites in WebContents destroy the underlying C++
object via a JavaScript event handler calling webContents.destroy(), then
continue to dereference the freed `this` pointer. This is exploitable
through <webview> guest WebContents because Destroy() calls `delete this`
synchronously for guests, unlike non-guests which safely defer deletion.

The fix has two layers:

1. A new `is_emitting_event_` flag is checked in Destroy() — when true,
   guest deletion is deferred to a posted task instead of executing
   synchronously. This is separate from `is_safe_to_delete_` (which
   gates LoadURL re-entrancy) to avoid rejecting legitimate loadURL
   calls from event handlers.

2. AutoReset<bool> guards on `is_emitting_event_` are added to
   CloseContents, RenderViewDeleted, DidFinishNavigation, and
   SetContentsBounds, preventing synchronous destruction while their
   Emit() calls are on the stack.

Destroy() now requires both `is_safe_to_delete_` (navigation re-entrancy)
and `!is_emitting_event_` (event emission) to allow synchronous guest
deletion. The existing AutoReset guards on `is_safe_to_delete_` in
DidStartNavigation, DidRedirectNavigation, and ReadyToCommitNavigation
are also now effective for guests.

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Shelley Vohr <shelley.vohr@gmail.com>
2026-04-16 13:01:31 -04:00
trop[bot]
4ac50292d5 fix: use CreateDataProperty when copying objects across contextBridge (#51086)
Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Sam Attard <sattard@anthropic.com>
2026-04-16 12:42:49 -04:00
trop[bot]
81e76165ae fix: allow PDF viewer to show save file picker (#51072)
The PDF viewer's "save with changes" feature uses
`window.showSaveFilePicker()`, but the PDF extension runs in a
cross-origin iframe (chrome-extension:// inside the app's origin).
Chromium's File System Access API blocks cross-origin subframes from
showing file pickers unless the embedder explicitly allows them via
`ContentClient::IsFilePickerAllowedForCrossOriginSubframe()`.

Chrome overrides this in `ChromeContentClient` to allowlist the PDF
extension origin, but Electron never did — so the picker was always
blocked with a SecurityError.

This adds the same override to `ElectronContentClient`, allowing the
built-in PDF extension origin to bypass the cross-origin check.

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Shelley Vohr <shelley.vohr@gmail.com>
2026-04-15 21:55:17 -05:00
trop[bot]
acf615229d build: fail gha-done check when required job fails (#51067)
fix: fail gha-done when any required job failed

Previously, the `gha-done` gate job used an `if:` expression that
evaluated to false whenever any needed job reported a failure, which
caused the job to be *skipped* rather than *failed*. GitHub branch
protection treats skipped required checks as non-blocking, so a PR
could be marked mergeable even though one of its test jobs had failed.

Keep the job always running and move the failure check into a step
that explicitly exits 1 when any dependency failed or was cancelled,
so the "GitHub Actions Completed" required check actually blocks the
merge in that case.
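A sketch of the resulting job shape (job and dependency names hypothetical, not the actual workflow):

```yaml
gha-done:
  # Always run, even when a needed job failed, so the required check
  # can fail instead of being skipped (skipped checks don't block merges).
  if: always()
  needs: [build, test]
  runs-on: ubuntu-latest
  steps:
    - name: Fail if any dependency failed or was cancelled
      run: |
        if [[ "${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}" == "true" ]]; then
          exit 1
        fi
```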

Notes: none

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Samuel Attard <samuel.r.attard@gmail.com>
2026-04-15 18:02:13 +02:00
trop[bot]
d5cea60ac7 chore: remove unused parts of chore_provide_iswebcontentscreationoverridden_with_full_params.patch (#51043)
chore: remove dead patches from chore_provide_iswebcontentscreationoverridden_with_full_params.patch

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Charles Kerr <charles@charleskerr.com>
2026-04-15 11:00:49 +02:00
Samuel Attard
60ec7cd0fb build: authenticate sudowoodo /token exchange via Actions OIDC (42-x-y) (#51052)
build: authenticate sudowoodo /token exchange via Actions OIDC
2026-04-14 20:44:06 -07:00
John Kleinschmidt
3d7d676bee test: fixup autoupdater tests failures (#51050) 2026-04-14 19:54:44 -05:00
trop[bot]
c5b0ee8a9b docs: mention pre-release installation (#51044)
* docs: pre-release installation

Co-authored-by: Erick Zhao <erick@hotmail.ca>

* Update installation.md

Co-authored-by: Niklas Wenzel <dev@nikwen.de>

Co-authored-by: Erick Zhao <erick@hotmail.ca>

---------

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Erick Zhao <erick@hotmail.ca>
2026-04-14 11:53:13 -07:00
trop[bot]
3ea2c9c760 fix: crash when closing devtools after focus (#51036)
Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Shelley Vohr <shelley.vohr@gmail.com>
2026-04-14 09:23:33 -07:00
trop[bot]
067fe3d1f1 ci: don't login to RBE for clang-tidy and gn-check (#51038)
* ci: don't login to RBE for clang-tidy

Co-authored-by: John Kleinschmidt <kleinschmidtorama@gmail.com>

* ci: don't login to RBE for gn check

Co-authored-by: John Kleinschmidt <kleinschmidtorama@gmail.com>

---------

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: John Kleinschmidt <kleinschmidtorama@gmail.com>
2026-04-14 16:05:52 +02:00
trop[bot]
75a7ebc7c0 refactor: migrate api::Extensions to cppgc (#50956)
* refactor: migrate api::Extensions to cppgc

Co-authored-by: Charles Kerr <charles@charleskerr.com>

* chore: update patch indices

Co-authored-by: Charles Kerr <charles@charleskerr.com>

---------

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Charles Kerr <charles@charleskerr.com>
2026-04-14 01:04:24 -05:00
trop[bot]
64055f27e7 feat: add id, groupId, and groupTitle support for Windows notifications (#50895)
* feat: allow to set id and groupId

Co-authored-by: Jan Hannemann <jan.hannemann@outlook.com>

* feat: use Id's without hash but check length

Co-authored-by: Jan Hannemann <jan.hannemann@outlook.com>

* feat: adds visual grouping via groupTitle

Co-authored-by: Jan Hannemann <jan.hannemann@outlook.com>

* test: tests added for id, groupId and groupTitle

Co-authored-by: Jan Hannemann <jan.hannemann@outlook.com>

* fix: unused vars on Mac and Linux

Co-authored-by: Jan Hannemann <jan.hannemann@outlook.com>

* fix: remove redundant parameter

Co-authored-by: Jan Hannemann <jan.hannemann@outlook.com>

* fix: add doc links for id and group

Co-authored-by: Jan Hannemann <jan.hannemann@outlook.com>

* fix: throw if groupId is missing

Co-authored-by: Jan Hannemann <jan.hannemann@outlook.com>

* fix: test

Co-authored-by: Jan Hannemann <jan.hannemann@outlook.com>

---------

Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Jan Hannemann <jan.hannemann@outlook.com>
2026-04-13 16:01:01 -07:00
trop[bot]
09156151c4 fix: dangling raw_ptr api::Protocol::protocol_registry_ (#50951)
Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: Charles Kerr <charles@charleskerr.com>
2026-04-13 16:18:25 -05:00
trop[bot]
b07503d468 ci: capture fatal errors in clang problem matcher (#50998)
Co-authored-by: trop[bot] <37223003+trop[bot]@users.noreply.github.com>
Co-authored-by: David Sanders <dsanders11@ucsbalum.com>
2026-04-13 14:22:59 -05:00
128 changed files with 3636 additions and 965 deletions

View File

@@ -11,6 +11,7 @@
 "Bash(e patches:*)",
 "Bash(e sync:*)",
 "Skill(electron-chromium-upgrade)",
+"Skill(electron-node-upgrade)",
 "Read(*)",
 "Bash(echo:*)",
 "Bash(e build:*)",

View File

@@ -1,6 +1,6 @@
 ---
 name: electron-node-upgrade
-description: Guide for performing Node.js version upgrades in the Electron project. Use when working on the roller/node/main branch to fix patch conflicts during `e sync --3`. Covers the patch application workflow, conflict resolution, analyzing upstream Node.js changes, and proper commit formatting for patch fixes.
+description: Guide for performing Node.js version upgrades in the Electron project. Use when working on the roller/node/main branch to fix patch conflicts during `e sync --3`. Covers the patch application workflow, conflict resolution, analyzing upstream Node.js changes, building, running the Node.js test suite, and proper commit formatting for patch fixes.
 ---
# Electron Node.js Upgrade: Phase One
@@ -174,10 +174,127 @@ When the error is in Electron's own source code:
1. Edit files directly in the electron repo
2. Commit directly (no patch export needed)
# Electron Node.js Upgrade: Phase Three
## Summary
Run the Node.js test suite via `script/node-spec-runner.js`, fix failing tests, and commit fixes until all tests pass. Certain tests are permanently disabled (listed in `script/node-disabled-tests.json`) and should not be run.
Run Phase Three immediately after Phase Two is complete.
## Success Criteria
Phase Three is complete when:
- `node script/node-spec-runner.js --default` exits with zero failures
- All changes are committed per the commit guidelines
Do not stop until these criteria are met.
## Context
Electron runs a subset of Node.js's upstream test suite using a custom runner (`script/node-spec-runner.js`). Tests are executed with the built Electron binary via `ELECTRON_RUN_AS_NODE=true`. Many tests need adaptation because Electron uses BoringSSL (not OpenSSL) and Chromium's V8 (which may differ from Node.js's bundled V8).
**Key files:**
- `script/node-spec-runner.js` — Test runner script
- `script/node-disabled-tests.json` — Permanently disabled tests (do not try to fix these)
- `../third_party/electron_node/test/` — Node.js test files (where patches apply)
- `patches/node/fix_crypto_tests_to_run_with_bssl.patch` — BoringSSL crypto test adaptations
- `patches/node/test_formally_mark_some_tests_as_flaky.patch` — Flaky test list
## Workflow
1. Run `node script/node-spec-runner.js --default` from the electron repo
2. If all tests pass → Phase Three is complete
3. If tests fail:
- Identify the failing test file(s) from the output
- Analyze each failure (see "Common Failure Patterns" below)
- Fix the test in `../third_party/electron_node/test/...`
- Re-run the specific failing test to verify: `node script/node-spec-runner.js {test-path}`
- The test path is relative to the node `test/` directory, e.g. `test/parallel/test-crypto-key-objects-raw.js`
- Do NOT use `--default` when running specific tests — it adds the full suite flags
- Do NOT run tests directly with `ELECTRON_RUN_AS_NODE` — the runner handles environment setup (e.g. temporarily switching `package.json` from ESM to CommonJS)
- Commit the fix using the fixup workflow and commit guidelines
- Return to step 1
## Commands Reference
| Command | Purpose |
|---------|---------|
| `node script/node-spec-runner.js --default` | Run full Node.js test suite |
| `node script/node-spec-runner.js test/parallel/test-foo.js` | Run a single test |
| `NODE_REGENERATE_SNAPSHOTS=1 node script/node-spec-runner.js test/test-runner/test-foo.mjs` | Regenerate snapshot for a snapshot-based test |
## Common Failure Patterns
### BoringSSL incompatibilities
Electron uses BoringSSL (via Chromium) instead of OpenSSL. Many crypto features are missing or behave differently:
| Unsupported in BoringSSL | Guard pattern |
|--------------------------|---------------|
| ChaCha20-Poly1305 | `if (!process.features.openssl_is_boringssl)` |
| AES-CCM (aes-128-ccm, aes-256-ccm) | `if (ciphers.includes('aes-128-ccm'))` |
| AES-KW (key wrapping) | `if (!process.features.openssl_is_boringssl)` |
| DSA keys | `if (!process.features.openssl_is_boringssl)` |
| Ed448 / X448 curves | `if (!process.features.openssl_is_boringssl)` |
| DH key PEM loading | `if (!process.features.openssl_is_boringssl)` |
| PQC algorithms (ML-KEM, ML-DSA, SLH-DSA) | `if (hasOpenSSL(3, 5))` (already guards these) |
When guarding tests, prefer checking cipher availability (`ciphers.includes(algo)`) over blanket BoringSSL checks where possible, as it's more precise and self-documenting.
New upstream tests that exercise these features will need guards added to the `fix_crypto_tests_to_run_with_bssl` patch.
### Snapshot test mismatches
Some tests compare output against committed `.snapshot` files using `assert.strictEqual` — these are NOT wildcard comparisons. When Chromium's V8 produces different output (e.g. different stack traces due to V8 enhancements), the snapshot must be regenerated:
```bash
NODE_REGENERATE_SNAPSHOTS=1 node script/node-spec-runner.js test/test-runner/test-foo.mjs
```
Then inspect the diff to verify the changes are expected, and commit the updated snapshot into the appropriate patch.
### V8 behavioral differences
Chromium's V8 may be ahead of Node.js's bundled V8. This can cause:
- Different stack trace formats (e.g. thenable async stack frames)
- Different error messages
- Features available in Chromium V8 that aren't in stock Node.js V8 (or vice versa)
## Two Types of Test Fixes
### A. Patch Fixes (most common for test failures)
Most test fixes go into existing patches in `patches/node/`. Use the fixup workflow:
1. Edit the test file in `../third_party/electron_node/test/...`
2. Find the relevant patch commit: `git log --oneline | grep -i "keyword"`
- Crypto/BoringSSL tests → `fix crypto tests to run with bssl`
- Snapshot tests → the specific snapshot patch (e.g. `test: accomodate V8 thenable`)
- Flaky tests → `test: formally mark some tests as flaky`
3. Create a fixup commit:
```bash
cd ../third_party/electron_node
git add test/path/to/test.js
git commit --fixup=<patch-commit-hash>
GIT_SEQUENCE_EDITOR=: git rebase --autosquash --autostash -i <commit>^
```
4. Export: `e patches node`
5. **Read `references/phase-three-commit-guidelines.md` NOW**, then commit the updated patch file.
### B. New Patches (rare)
Only create a new patch when the fix doesn't belong in any existing patch. The new patch commit in `../third_party/electron_node` must include a description explaining why the patch exists and when it can be removed — the lint check enforces this.
## Adding to Disabled Tests
Only add a test to `script/node-disabled-tests.json` as a **last resort** — when the test is fundamentally incompatible with Electron's architecture (not just a BoringSSL difference that can be guarded). Tests disabled here are completely skipped and never run.
# Critical: Read Before Committing
- Before ANY Phase One commits: Read `references/phase-one-commit-guidelines.md`
- Before ANY Phase Two commits: Read `references/phase-two-commit-guidelines.md`
- Before ANY Phase Three commits: Read `references/phase-three-commit-guidelines.md`
# High-Churn Patches
@@ -201,5 +318,6 @@ This skill has additional reference files in `references/`:
- patch-analysis.md - How to analyze patch failures
- phase-one-commit-guidelines.md - Commit format for Phase One
- phase-two-commit-guidelines.md - Commit format for Phase Two
- phase-three-commit-guidelines.md - Commit format for Phase Three
Read these when referenced in the workflow steps.


@@ -0,0 +1,80 @@
# Phase Three Commit Guidelines
Only follow these instructions if there are uncommitted changes after fixing a test failure during Phase Three.
Ignore other instructions about making commit messages; our guidelines are CRITICALLY IMPORTANT and must be followed.
## Commit Message Style
**Titles** follow the 60/80-character guideline: simple changes fit within 60 characters, otherwise the limit is 80 characters.
Always include a `Co-Authored-By` trailer identifying the AI model that assisted (e.g., `Co-Authored-By: <AI model attribution>`).
## Commit Types
### Patch updates (most test fixes)
Test fixes go into existing patches via the fixup workflow. Use `fix(patch):` prefix with a descriptive topic:
```
fix(patch): {topic headline}
Ref: {Node.js commit or issue link}
Co-Authored-By: <AI model attribution>
```
Examples:
- `fix(patch): guard DH key test for BoringSSL`
- `fix(patch): adapt new crypto tests for BoringSSL`
- `fix(patch): correct thenable snapshot for Chromium V8`
- `fix(patch): skip AES-KW tests with BoringSSL`
Group related test fixes into a single commit when they address the same root cause (e.g., multiple crypto tests all needing BoringSSL guards for the same missing cipher). Don't create one commit per test file if they share the same fix pattern.
### Snapshot regeneration
When a snapshot test fails because Chromium's V8 produces different output, regenerate it:
```bash
NODE_REGENERATE_SNAPSHOTS=1 node script/node-spec-runner.js test/test-runner/test-foo.mjs
```
Then commit the updated snapshot patch with a title describing what changed:
```
fix(patch): correct {name} snapshot for Chromium V8
Ref: {V8 CL or issue link if known}
Co-Authored-By: <AI model attribution>
```
### Trivial patch updates
After any patch modification, check for dependent patches that only have index/hunk header changes:
```bash
git status
# If other .patch files show as modified with only trivial changes:
git add patches/
git commit -m "chore: update patches (trivial only)"
```
## Finding References
For BoringSSL-related test fixes, the reference is typically the upstream Node.js PR that added the new test:
```bash
cd ../third_party/electron_node
git log --oneline -5 -- test/parallel/test-crypto-foo.js
git log -1 <commit> --format="%B" | grep "PR-URL"
```
For V8 behavioral differences, reference the Chromium CL:
```
Ref: https://chromium-review.googlesource.com/c/v8/v8/+/NNNNNNN
```
If no reference is found after searching, use: `Ref: Unable to locate reference`


@@ -92,6 +92,10 @@ runs:
} else {
e build --target electron:testing_build
}
if ($LASTEXITCODE -ne 0) {
Write-Host "e build failed with exit code $LASTEXITCODE"
exit $LASTEXITCODE
}
Copy-Item out\Default\.ninja_log out\electron_ninja_log
node electron\script\check-symlinks.js


@@ -133,7 +133,7 @@ runs:
run : |
cd src/third_party/angle
rm -f .git/objects/info/alternates
git remote set-url origin https://chromium.googlesource.com/angle/angle.git
git remote set-url origin https://github.com/google/angle.git
cp .git/config .git/config.backup
git remote remove origin
mv .git/config.backup .git/config


@@ -5,7 +5,7 @@
"fromPath": "src/out/Default/args.gn",
"pattern": [
{
"regexp": "^(.+)[(:](\\d+)[:,](\\d+)\\)?:\\s+(warning|error):\\s+(.*)$",
"regexp": "^(.+)[(:](\\d+)[:,](\\d+)\\)?:\\s+(warning|fatal error|error):\\s+(.*)$",
"file": 1,
"line": 2,
"column": 3,


@@ -0,0 +1,47 @@
From 85b561ea4dbc76ba98af020b970f3aa6b20fdb9e Mon Sep 17 00:00:00 2001
From: Samuel Attard <sam@electronjs.org>
Date: Wed, 8 Apr 2026 23:24:15 -0700
Subject: [PATCH] siso: reuse the outer *os.File for chunked ReadAt in
fileParser.readFile
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
The per-chunk goroutine currently re-opens fname to get its own handle
for ReadAt. (*os.File).ReadAt is documented as safe for concurrent
calls on the same File (on Windows it is ReadFile with an OVERLAPPED
offset, so there is no shared seek state), so the extra open is
redundant — the goroutines can share the outer f.
Besides halving the CreateFileW calls per subninja, this avoids an
intermittent 'The parameter is incorrect.' (ERROR_INVALID_PARAMETER)
from bindflt.sys when out/ is a mapped directory inside a Windows
container: bindflt's handle-relative NtCreateFile path races when a
second relative open arrives while the first handle to the same target
is still being set up. Absolute paths and single opens do not trigger
it; see microsoft/Windows-Containers#<tbd>.
---
siso/toolsupport/ninjautil/file_parser.go | 7 -------
1 file changed, 7 deletions(-)
diff --git a/siso/toolsupport/ninjautil/file_parser.go b/siso/toolsupport/ninjautil/file_parser.go
index 8c18d084..63116662 100644
--- a/siso/toolsupport/ninjautil/file_parser.go
+++ b/siso/toolsupport/ninjautil/file_parser.go
@@ -111,13 +111,6 @@ func (p *fileParser) readFile(ctx context.Context, fname string) ([]byte, error)
eg.Go(func() error {
p.sema <- struct{}{}
defer func() { <-p.sema }()
- f, err := os.Open(fname)
- if err != nil {
- return err
- }
- defer func() {
- _ = f.Close()
- }()
for len(chunkBuf) > 0 {
n, err := f.ReadAt(chunkBuf, pos)
if err != nil {
--
2.53.0


@@ -200,6 +200,15 @@ jobs:
generate-sas-token: 'true'
target-platform: win
# Build a patched siso binary for Windows CI in parallel with checkout-windows.
# The Windows build jobs download the resulting artifact and use it via SISO_PATH.
build-siso-windows:
needs: setup
if: ${{ needs.setup.outputs.src == 'true' && !inputs.skip-windows }}
uses: ./.github/workflows/pipeline-segment-build-siso-windows.yml
permissions:
contents: read
# GN Check Jobs
macos-gn-check:
uses: ./.github/workflows/pipeline-segment-electron-gn-check.yml
@@ -384,7 +393,7 @@ jobs:
issues: read
pull-requests: read
uses: ./.github/workflows/pipeline-electron-build-and-test.yml
needs: checkout-windows
needs: [checkout-windows, build-siso-windows]
if: ${{ needs.setup.outputs.src == 'true' && !inputs.skip-windows }}
with:
build-runs-on: electron-arc-centralus-windows-amd64-16core
@@ -403,7 +412,7 @@ jobs:
issues: read
pull-requests: read
uses: ./.github/workflows/pipeline-electron-build-and-test.yml
needs: checkout-windows
needs: [checkout-windows, build-siso-windows]
if: ${{ needs.setup.outputs.src == 'true' && !inputs.skip-windows }}
with:
build-runs-on: electron-arc-centralus-windows-amd64-16core
@@ -422,7 +431,7 @@ jobs:
issues: read
pull-requests: read
uses: ./.github/workflows/pipeline-electron-build-and-test.yml
needs: checkout-windows
needs: [checkout-windows, build-siso-windows]
if: ${{ needs.setup.outputs.src == 'true' && !inputs.skip-windows }}
with:
build-runs-on: electron-arc-centralus-windows-amd64-16core
@@ -440,9 +449,12 @@ jobs:
runs-on: ubuntu-latest
permissions:
contents: read
needs: [docs-only, macos-x64, macos-arm64, linux-x64, linux-x64-asan, linux-arm, linux-arm64, windows-x64, windows-x86, windows-arm64]
if: always() && github.repository == 'electron/electron' && !contains(needs.*.result, 'failure')
needs: [docs-only, macos-x64, macos-arm64, linux-x64, linux-x64-asan, linux-arm, linux-arm64, build-siso-windows, windows-x64, windows-x86, windows-arm64]
if: always() && github.repository == 'electron/electron'
steps:
- name: Fail if any needed job failed or was cancelled
if: contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled')
run: exit 1
- name: GitHub Actions Jobs Done
run: |
echo "All GitHub Actions Jobs are done"


@@ -0,0 +1,98 @@
name: Pipeline Segment - Build Siso (Windows)
# Builds a patched siso binary for Windows CI. Reads the siso revision from
# the Chromium DEPS file at the pinned chromium_version, shallow-clones
# chromium.googlesource.com/build at that revision, applies the patches under
# .github/siso-patches/, cross-compiles siso.exe for windows/amd64, and
# publishes it as the `siso-windows-amd64` artifact. The Windows build jobs
# download it and use it via SISO_PATH. The built binary is cached keyed on
# the siso revision + sha256 of the patch contents, so subsequent runs just
# restore it.
on:
workflow_call: {}
permissions: {}
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- name: Checkout Electron
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
ref: ${{ github.event.pull_request.head.sha }}
sparse-checkout: |
DEPS
.github/siso-patches
- name: Resolve siso revision from Chromium DEPS
id: resolve
run: |
set -euo pipefail
CHROMIUM_VERSION=$(python3 -c "import re; print(re.search(r\"'chromium_version':\s*\n\s*'([^']+)'\", open('DEPS').read()).group(1))")
if ! [[ "$CHROMIUM_VERSION" =~ ^[0-9]+(\.[0-9]+){1,3}$ ]]; then
echo "error: unexpected chromium_version format: $CHROMIUM_VERSION" >&2
exit 1
fi
curl -sfL "https://raw.githubusercontent.com/chromium/chromium/${CHROMIUM_VERSION}/DEPS" -o /tmp/chromium-DEPS
SISO_SHA=$(python3 -c "import re; print(re.search(r\"'siso_version':\s*'git_revision:([0-9a-f]+)'\", open('/tmp/chromium-DEPS').read()).group(1))")
if ! [[ "$SISO_SHA" =~ ^[0-9a-f]{40}$ ]]; then
echo "error: unexpected siso_version SHA: $SISO_SHA" >&2
exit 1
fi
PATCHES_HASH=$(find .github/siso-patches -type f -name '*.patch' | sort | xargs sha256sum | sha256sum | awk '{print $1}')
echo "siso-sha=${SISO_SHA}" >> "$GITHUB_OUTPUT"
echo "patches-hash=${PATCHES_HASH}" >> "$GITHUB_OUTPUT"
echo "Chromium ${CHROMIUM_VERSION} pins siso at ${SISO_SHA}"
echo "Patches hash: ${PATCHES_HASH}"
- name: Restore cached siso binary
id: cache-siso
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: siso-out/siso.exe
key: siso-windows-amd64-${{ steps.resolve.outputs.siso-sha }}-${{ steps.resolve.outputs.patches-hash }}
- name: Shallow clone chromium build repo at pinned revision
if: steps.cache-siso.outputs.cache-hit != 'true'
env:
SISO_SHA: ${{ steps.resolve.outputs.siso-sha }}
run: |
set -euo pipefail
mkdir chromium-build
cd chromium-build
git init -q
git remote add origin https://chromium.googlesource.com/build
git -c protocol.version=2 fetch --depth=1 origin "$SISO_SHA"
git checkout --detach FETCH_HEAD
- name: Apply in-tree siso patches
if: steps.cache-siso.outputs.cache-hit != 'true'
run: |
set -euo pipefail
cd chromium-build
git -c user.name=electron-ci -c user.email=ci@electronjs.org \
am --3way "${GITHUB_WORKSPACE}/.github/siso-patches"/*.patch
- name: Set up Go
if: steps.cache-siso.outputs.cache-hit != 'true'
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
with:
go-version-file: chromium-build/siso/go.mod
cache: false
- name: Build siso (windows/amd64)
if: steps.cache-siso.outputs.cache-hit != 'true'
working-directory: chromium-build/siso
env:
CGO_ENABLED: '0'
GOOS: windows
GOARCH: amd64
run: |
mkdir -p "${GITHUB_WORKSPACE}/siso-out"
go build -trimpath -o "${GITHUB_WORKSPACE}/siso-out/siso.exe" .
- name: Upload siso artifact
uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
with:
name: siso-windows-amd64
path: siso-out/siso.exe
if-no-files-found: error
retention-days: 1


@@ -77,7 +77,6 @@ env:
ELECTRON_ARTIFACTS_BLOB_STORAGE: ${{ secrets.ELECTRON_ARTIFACTS_BLOB_STORAGE }}
ELECTRON_RBE_JWT: ${{ secrets.ELECTRON_RBE_JWT }}
SUDOWOODO_EXCHANGE_URL: ${{ secrets.SUDOWOODO_EXCHANGE_URL }}
SUDOWOODO_EXCHANGE_TOKEN: ${{ secrets.SUDOWOODO_EXCHANGE_TOKEN }}
GCLIENT_EXTRA_ARGS: ${{ inputs.target-platform == 'macos' && '--custom-var=checkout_mac=True --custom-var=host_os=mac' || inputs.target-platform == 'win' && '--custom-var=checkout_win=True' || '--custom-var=checkout_arm=True --custom-var=checkout_arm64=True' }}
ELECTRON_OUT_DIR: Default
ACTIONS_STEP_DEBUG: ${{ secrets.ACTIONS_STEP_DEBUG }}
@@ -195,6 +194,22 @@ jobs:
- name: Free up space (macOS)
if: ${{ inputs.target-platform == 'macos' }}
uses: ./src/electron/.github/actions/free-space-macos
- name: Download custom siso binary (Windows)
if: ${{ inputs.target-platform == 'win' }}
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: siso-windows-amd64
path: ${{ runner.temp }}/siso
- name: Set SISO_PATH (Windows)
if: ${{ inputs.target-platform == 'win' }}
run: |
SISO_BIN="${RUNNER_TEMP}/siso/siso.exe"
if [ ! -f "$SISO_BIN" ]; then
echo "error: expected siso binary at $SISO_BIN" >&2
exit 1
fi
echo "SISO_PATH=$SISO_BIN" >> "$GITHUB_ENV"
echo "Using custom siso binary at $SISO_BIN"
- name: Build Electron
if: ${{ inputs.target-platform != 'macos' || (inputs.target-variant == 'all' || inputs.target-variant == 'darwin') }}
uses: ./src/electron/.github/actions/build-electron


@@ -135,7 +135,7 @@ jobs:
run: echo "::add-matcher::src/electron/.github/problem-matchers/clang.json"
- name: Run Clang-Tidy
run: |
e init -f --root=$(pwd) --out=${ELECTRON_OUT_DIR} testing --target-cpu ${TARGET_ARCH}
e init -f --root=$(pwd) --out=${ELECTRON_OUT_DIR} testing --target-cpu ${TARGET_ARCH} --remote-build none
export GN_EXTRA_ARGS="target_cpu=\"${TARGET_ARCH}\""
if [ "${{ inputs.target-platform }}" = "win" ]; then


@@ -130,7 +130,7 @@ jobs:
run: |
for target_cpu in ${{ inputs.target-archs }}
do
e init -f --root=$(pwd) --out=Default ${{ inputs.gn-build-type }} --import ${{ inputs.gn-build-type }} --target-cpu $target_cpu
e init -f --root=$(pwd) --out=Default ${{ inputs.gn-build-type }} --import ${{ inputs.gn-build-type }} --target-cpu $target_cpu --remote-build none
cd src
export GN_EXTRA_ARGS="target_cpu=\"$target_cpu\""
if [ "${{ inputs.target-platform }}" = "linux" ]; then


@@ -79,7 +79,6 @@ env:
ELECTRON_ARTIFACTS_BLOB_STORAGE: ${{ secrets.ELECTRON_ARTIFACTS_BLOB_STORAGE }}
ELECTRON_RBE_JWT: ${{ secrets.ELECTRON_RBE_JWT }}
SUDOWOODO_EXCHANGE_URL: ${{ secrets.SUDOWOODO_EXCHANGE_URL }}
SUDOWOODO_EXCHANGE_TOKEN: ${{ secrets.SUDOWOODO_EXCHANGE_TOKEN }}
GCLIENT_EXTRA_ARGS: ${{ inputs.target-platform == 'macos' &&
'--custom-var=checkout_mac=True --custom-var=host_os=mac' ||
inputs.target-platform == 'win' && '--custom-var=checkout_win=True' ||
@@ -208,6 +207,22 @@ jobs:
- name: Free up space (macOS)
if: ${{ inputs.target-platform == 'macos' }}
uses: ./src/electron/.github/actions/free-space-macos
- name: Download custom siso binary (Windows)
if: ${{ inputs.target-platform == 'win' }}
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c
with:
name: siso-windows-amd64
path: ${{ runner.temp }}/siso
- name: Set SISO_PATH (Windows)
if: ${{ inputs.target-platform == 'win' }}
run: |
SISO_BIN="${RUNNER_TEMP}/siso/siso.exe"
if [ ! -f "$SISO_BIN" ]; then
echo "error: expected siso binary at $SISO_BIN" >&2
exit 1
fi
echo "SISO_PATH=$SISO_BIN" >> "$GITHUB_ENV"
echo "Using custom siso binary at $SISO_BIN"
- name: Build Electron
if: ${{ inputs.target-platform != 'macos' || (inputs.target-variant == 'all' ||
inputs.target-variant == 'darwin') }}


@@ -51,6 +51,14 @@ jobs:
generate-sas-token: 'true'
target-platform: win
# Build the patched siso binary in parallel with checkout-windows; the
# publish-*-win jobs consume it via SISO_PATH.
build-siso-windows:
if: github.repository == 'electron/electron'
uses: ./.github/workflows/pipeline-segment-build-siso-windows.yml
permissions:
contents: read
publish-x64-win:
uses: ./.github/workflows/pipeline-segment-electron-publish.yml
permissions:
@@ -58,7 +66,7 @@ jobs:
attestations: write
contents: read
id-token: write
needs: checkout-windows
needs: [checkout-windows, build-siso-windows]
with:
environment: production-release
build-runs-on: electron-arc-centralus-windows-amd64-16core
@@ -77,7 +85,7 @@ jobs:
attestations: write
contents: read
id-token: write
needs: checkout-windows
needs: [checkout-windows, build-siso-windows]
with:
environment: production-release
build-runs-on: electron-arc-centralus-windows-amd64-16core
@@ -96,7 +104,7 @@ jobs:
attestations: write
contents: read
id-token: write
needs: checkout-windows
needs: [checkout-windows, build-siso-windows]
with:
environment: production-release
build-runs-on: electron-arc-centralus-windows-amd64-16core


@@ -105,21 +105,25 @@ electron_mac_bundle_id = branding.mac_bundle_id
if (override_electron_version != "") {
electron_version = override_electron_version
} else {
# When building from source code tarball there is no git tag available and
# When building from a source code tarball there is no git tag available and
# builders must explicitly pass override_electron_version in gn args.
#
# Resolve the real locations of packed-refs and HEAD via git so that this
# also works when electron/ is a `git worktree` (where .git is a file, not a
# directory, and GN's read_file cannot follow the gitdir indirection).
electron_git_ref_paths =
exec_script("script/get-git-ref-paths.py", [], "list lines")
# This read_file call will assert if there is no git information, without it
# gn will generate a malformed build configuration and ninja will get into
# infinite loop.
read_file(".git/packed-refs", "string")
read_file(electron_git_ref_paths[0], "string")
# Set electron version from git tag.
electron_version = exec_script("script/get-git-version.py",
[],
"trim string",
[
".git/packed-refs",
".git/HEAD",
])
electron_git_ref_paths)
}
if (is_mas_build) {

DEPS

@@ -4,7 +4,7 @@ vars = {
'chromium_version':
'148.0.7778.5',
'node_version':
'v24.14.1',
'v24.15.0',
'nan_version':
'675cefebca42410733da8a454c8d9391fcebfbc2',
'squirrel.mac_version':


@@ -79,8 +79,9 @@ app.whenReady().then(() => {
### `new Notification([options])`
* `options` Object (optional)
* `id` string (optional) _macOS_ - A unique identifier for the notification, mapping to `UNNotificationRequest`'s [`identifier`](https://developer.apple.com/documentation/usernotifications/unnotificationrequest/identifier) property. Defaults to a random UUID if not provided or if an empty string is passed. This can be used to remove or update previously delivered notifications.
* `groupId` string (optional) _macOS_ - A string identifier used to visually group notifications together in Notification Center. Maps to `UNNotificationContent`'s [`threadIdentifier`](https://developer.apple.com/documentation/usernotifications/unnotificationcontent/threadidentifier) property.
* `id` string (optional) _macOS_ _Windows_ - A unique identifier for the notification. On macOS, maps to `UNNotificationRequest`'s [`identifier`](https://developer.apple.com/documentation/usernotifications/unnotificationrequest/identifier) property. On Windows, maps to the toast notification's [`Tag`](https://learn.microsoft.com/en-us/uwp/api/windows.ui.notifications.toastnotification.tag) property. Defaults to a random UUID if not provided or if an empty string is passed. This can be used to remove or update previously delivered notifications.
* `groupId` string (optional) _macOS_ _Windows_ - A string identifier used to visually group notifications together in Notification Center / Action Center. On macOS, maps to `UNNotificationContent`'s [`threadIdentifier`](https://developer.apple.com/documentation/usernotifications/unnotificationcontent/threadidentifier) property. On Windows, maps to the toast notification's [`Group`](https://learn.microsoft.com/en-us/uwp/api/windows.ui.notifications.toastnotification.group) property.
* `groupTitle` string (optional) _Windows_ - A title for the notification group header. When both `groupId` and `groupTitle` are specified, Windows will display a header above the notification that groups related notifications together. Maps to the toast notification's [`header`](https://learn.microsoft.com/en-us/windows/apps/design/shell/tiles-and-notifications/toast-headers) element.
* `title` string (optional) - A title for the notification, which will be displayed at the top of the notification window when it is shown.
* `subtitle` string (optional) _macOS_ - A subtitle for the notification, which will be displayed below the title.
* `body` string (optional) - The body text of the notification, which will be displayed below the title or subtitle.
@@ -329,13 +330,17 @@ app.whenReady().then(() => {
### Instance Properties
#### `notification.id` _macOS_ _Readonly_
#### `notification.id` _macOS_ _Windows_ _Readonly_
A `string` property representing the unique identifier of the notification. This is set at construction time — either from the `id` option or as a generated UUID if none was provided.
#### `notification.groupId` _macOS_ _Readonly_
#### `notification.groupId` _macOS_ _Windows_ _Readonly_
A `string` property representing the group identifier of the notification. Notifications with the same `groupId` will be visually grouped together in Notification Center.
A `string` property representing the group identifier of the notification. Notifications with the same `groupId` will be visually grouped together in Notification Center (macOS) or Action Center (Windows).
#### `notification.groupTitle` _Windows_ _Readonly_
A `string` property representing the title of the notification group header.
#### `notification.title`

(8 binary image files deleted; contents not shown)

@@ -2,28 +2,53 @@
Electron frequently releases major versions alongside every other Chromium release.
This document focuses on the release cadence and version support policy.
For a more in-depth guide on our git branches and how Electron uses semantic versions,
check out our [Electron Versioning](./electron-versioning.md) doc.
> [!TIP]
> See the [Electron Versioning](./electron-versioning.md) document for more details
> on how Electron is versioned.
## Timeline
[Electron's Release Schedule](https://releases.electronjs.org/schedule) lists a schedule of Electron major releases showing key milestones including alpha, beta, and stable release dates, as well as end-of-life dates and dependency versions.
:::info Official support dates may change
> [!IMPORTANT]
> Electron's official support policy is the latest 3 stable releases. Our stable
> release and end-of-life dates are determined by Chromium, and may be subject to
> change. While we try to keep our planned release and end-of-life dates frequently
> updated here, future dates may change if affected by upstream scheduling changes,
> and may not always be accurately reflected.
>
> See [Chromium's public release schedule](https://chromiumdash.appspot.com/schedule) for
> definitive information about Chromium's scheduled release dates.
Electron's official support policy is the latest 3 stable releases. Our stable
release and end-of-life dates are determined by Chromium, and may be subject to
change. While we try to keep our planned release and end-of-life dates frequently
updated here, future dates may change if affected by upstream scheduling changes,
and may not always be accurately reflected.
Electron's cadence between major version releases is 8 weeks long. Before each major
version hits stable, it goes through a four-week **alpha** phase and a four-week
**beta** phase.
See [Chromium's public release schedule](https://chromiumdash.appspot.com/schedule) for
definitive information about Chromium's scheduled release dates.
:::
```mermaid
gantt
title Electron release cycle
dateFormat YYYY-MM-DD
axisFormat Week %W
todayMarker off
section v41
Alpha phase :a1, 2026-01-19, 4w
M146 enters Chrome beta :milestone, bm1, after a1, 0d
Beta phase :b1, after a1, 4w
M146 enters Chrome stable :milestone, s1, after b1, 0d
Supported until v44 release :active, after b1, 12w
section v42
Alpha phase :a2, after b1, 4w
M148 enters Chrome beta :milestone, bm2, after a2, 0d
Beta phase :b2, after a2, 4w
M148 enters Chrome stable :milestone, s2, after b2, 0d
Supported until v45 release :active, after b2, 4w
```
**Notes:**
* Alphas are generally less stable than beta releases. The cutoff between the two
corresponds to when the underlying Chromium version enters Chrome's Beta channel.
* The `-alpha.1`, `-beta.1`, and `stable` dates are our solid release dates.
* We strive for weekly alpha/beta releases, but we often release more than scheduled.
* All dates are our goals but there may be reasons for adjusting the stable deadline, such as security bugs.
@@ -38,10 +63,11 @@ and may not always be accurately reflected.
## Version support policy
The latest three _stable_ major versions are supported by the Electron team.
For example, if the latest release is 6.1.x, then the 5.0.x as well
as the 4.2.x series are supported. We only support the latest minor release
For example, if the latest release is 42.1.x, then the 41.0.x as well
as the 40.2.x series are supported. We only support the latest minor release
for each stable release series. This means that in the case of a security fix,
6.1.x will receive the fix, but we will not release a new version of 6.0.x.
42.1.x will receive the fix, but we will not release a new version of 42.0.x.
The latest stable release unilaterally receives all fixes from `main`,
and the version prior to that receives the vast majority of those fixes
@@ -50,11 +76,8 @@ only security fixes directly.
### Chromium version support
:::info Chromium release schedule
Chromium's public release schedule is [here](https://chromiumdash.appspot.com/schedule).
:::
> [!TIP]
> Chromium's public release schedule is [here](https://chromiumdash.appspot.com/schedule).
Electron targets Chromium even-number versions, releasing every 8 weeks in concert
with Chromium's 4-week release schedule. For example, Electron 26 uses Chromium 116, while Electron 27 uses Chromium 118.
@@ -82,3 +105,7 @@ and that number is reduced to two in major version 10, the three-argument versio
continue to work until, at minimum, major version 12. Past the minimum two-version
threshold, we will attempt to support backwards compatibility beyond two versions
until the maintainers feel the maintenance burden is too high to continue doing so.
> [!TIP]
> For a canonical list of breaking changes, see the [Breaking Changes](../breaking-changes.md)
> document.


@@ -14,18 +14,6 @@ To update an existing project to use the latest stable version:
npm install --save-dev electron@latest
```
## Versioning scheme
There are several major changes from our 1.x strategy outlined below. Each change is intended to satisfy the needs and priorities of developers/maintainers and app developers.
1. Strict use of the [SemVer](#semver) spec
2. Introduction of semver-compliant `-beta` tags
3. Introduction of [conventional commit messages](https://conventionalcommits.org/)
4. Well-defined stabilization branches
5. The `main` branch is versionless; only stabilization branches contain version information
We will cover in detail how git branching works, how npm tagging works, what developers should expect to see, and how one can backport changes.
## SemVer
Below is a table explicitly mapping types of changes to their corresponding category of SemVer (e.g. Major, Minor, Patch).
@@ -34,7 +22,7 @@ Below is a table explicitly mapping types of changes to their corresponding cate
| ------------------------------- | ---------------------------------- | ----------------------------- |
| Electron breaking API changes | Electron non-breaking API changes | Electron bug fixes |
| Node.js major version updates | Node.js minor version updates | Node.js patch version updates |
| Chromium version updates | | fix-related chromium patches |
| Chromium version updates | | fix-related Chromium patches |
For more information, see the [Semantic Versioning 2.0.0](https://semver.org/) spec.
@@ -44,68 +32,189 @@ Note that most Chromium updates will be considered breaking. Fixes that can be b
Stabilization branches are branches that run parallel to `main`, taking in only cherry-picked commits that are related to security or stability. These branches are never merged back to `main`.
![Stabilization Branches](../images/versioning-sketch-1.png)
Since Electron 8, stabilization branches are always **major** version lines, and named against the following template `$MAJOR-x-y` e.g. `8-x-y`. Prior to that we used **minor** version lines and named them as `$MAJOR-$MINOR-x` e.g. `2-0-x`.
We allow for multiple stabilization branches to exist simultaneously, one for each supported version. For more details on which versions are supported, see our [Electron Releases](./electron-timelines.md) doc.
![Multiple Stability Branches](../images/versioning-sketch-2.png)
Older lines will not be supported by the Electron project, but other groups can take ownership and backport stability and security fixes on their own. We discourage this, but recognize that it makes life easier for many app developers.
## Beta releases and bug fixes
Developers want to know which releases are _safe_ to use. Even seemingly innocent features can introduce regressions in complex applications. At the same time, locking to a fixed version is dangerous because you're ignoring security patches and bug fixes that may have come out since your version. Our goal is to allow the following standard semver ranges in `package.json`:
* Use `~2.0.0` to admit only stability or security related fixes to your `2.0.0` release.
* Use `^2.0.0` to admit non-breaking _reasonably stable_ feature work as well as security and bug fixes.
What's important about the second point is that apps using `^` should still be able to expect a reasonable level of stability. To accomplish this, SemVer allows for a _pre-release identifier_ to indicate a particular version is not yet _safe_ or _stable_.
Whatever you choose, you will periodically have to bump the version in your `package.json` as breaking changes are a fact of Chromium life.
The process is as follows:
1. All new major and minor releases lines begin with a beta series indicated by SemVer prerelease tags of `beta.N`, e.g. `2.0.0-beta.1`. After the first beta, subsequent beta releases must meet all of the following conditions:
1. The change is backwards API-compatible (deprecations are allowed)
2. The risk to meeting our stability timeline must be low.
2. If allowed changes need to be made once a release is beta, they are applied and the prerelease tag is incremented, e.g. `2.0.0-beta.2`.
3. If a particular beta release is _generally regarded_ as stable, it will be re-released as a stable build, changing only the version information. e.g. `2.0.0`. After the first stable, all changes must be backwards-compatible bug or security fixes.
4. If future bug fixes or security patches need to be made once a release is stable, they are applied and the _patch_ version is incremented
e.g. `2.0.1`.
Specifically, the above means:
1. Admitting non-breaking-API changes before Week 3 in the beta cycle is okay, even if those changes have the potential to cause moderate side-effects.
2. Admitting feature-flagged changes, that do not otherwise alter existing code paths, at most points in the beta cycle is okay. Users can explicitly enable those flags in their apps.
3. Admitting features of any sort after Week 3 in the beta cycle is 👎 without a very good reason.
For each major and minor bump, you should expect to see something like the following:
```plaintext
2.0.0-beta.1
2.0.0-beta.2
2.0.0-beta.3
2.0.0
2.0.1
2.0.2
```

```mermaid
gitGraph
commit
commit
branch N-x-y
checkout main
commit id:"fix-1"
checkout N-x-y
cherry-pick id:"fix-1"
checkout main
commit id:"fix-2"
checkout N-x-y
cherry-pick id:"fix-2"
checkout main
commit
commit
```
Since Electron 8, stabilization branches are always **major** version lines, named using the template `$MAJOR-x-y`, e.g. `8-x-y`. (Prior to that, we used **minor** version lines named `$MAJOR-$MINOR-x`, e.g. `2-0-x`.)

An example lifecycle in pictures:
* A new release branch is created that includes the latest set of features. It is published as `2.0.0-beta.1`.
![New Release Branch](../images/versioning-sketch-3.png)
* A bug fix comes into master that can be backported to the release branch. The patch is applied, and a new beta is published as `2.0.0-beta.2`.
![Bugfix Backport to Beta](../images/versioning-sketch-4.png)
* The beta is considered _generally stable_ and it is published again as a non-beta under `2.0.0`.
![Beta to Stable](../images/versioning-sketch-5.png)
* Later, a zero-day exploit is revealed and a fix is applied to master. We backport the fix to the `2-0-x` line and release `2.0.1`.
![Security Backports](../images/versioning-sketch-6.png)
We allow for multiple stabilization branches to exist simultaneously, one for each supported version.
> [!TIP]
> For more details on which versions are supported, see our [Electron Releases](./electron-timelines.md) doc.

A few examples of how various SemVer ranges will pick up new releases:
![Semvers and Releases](../images/versioning-sketch-7.png)
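To make those ranges concrete, here is a rough sketch of how caret and tilde ranges select stable releases. It assumes a major version ≥ 1 and is an illustration only — the npm `semver` package also handles `0.x` carets, prerelease opt-in, and compound ranges:

```javascript
// Which releases do common npm ranges pick up? (illustrative sketch;
// real resolution is done by the npm `semver` package)
function matches(range, version) {
  // npm ranges skip prerelease versions by default.
  if (version.includes('-')) return false;
  const [maj, min, pat] = version.split('.').map(Number);
  const [rMaj, rMin, rPat] = range.slice(1).split('.').map(Number);
  if (range.startsWith('^')) {
    // ^2.0.0 → >=2.0.0 <3.0.0 (for major >= 1)
    return maj === rMaj && (min > rMin || (min === rMin && pat >= rPat));
  }
  if (range.startsWith('~')) {
    // ~2.0.0 → >=2.0.0 <2.1.0
    return maj === rMaj && min === rMin && pat >= rPat;
  }
  return range === version; // exact pin
}

console.log(matches('^2.0.0', '2.0.1')); // true  – picks up patches and minors
console.log(matches('^2.0.0', '3.0.0')); // false – new majors are opt-in
console.log(matches('~2.0.0', '2.1.0')); // false – tilde locks the minor
```

Note the first line: because ranges exclude prereleases by default, apps on `^2.0.0` are never upgraded onto a `2.1.0-beta.N` build, which is what lets the beta series exist without destabilizing `^` users.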
```mermaid
gitGraph
commit
branch "41-x-y"
checkout main
commit
commit
commit id:"fix-a"
checkout "41-x-y"
cherry-pick id:"fix-a"
checkout main
commit
commit id:"fix-b"
checkout "41-x-y"
cherry-pick id:"fix-b"
checkout main
commit
branch "42-x-y"
checkout main
commit
commit id:"fix-c"
checkout "41-x-y"
cherry-pick id:"fix-c"
checkout "42-x-y"
cherry-pick id:"fix-c"
checkout main
commit
commit id:"fix-d"
checkout "41-x-y"
cherry-pick id:"fix-d"
checkout "42-x-y"
cherry-pick id:"fix-d"
checkout main
commit
```
Older lines will not be supported by the Electron project.
## Release cycle
Electron follows an **8-week regular release cycle** where key milestones correspond to
matching dates in the Chromium release cycle.
```mermaid
gantt
title Electron release cycle
dateFormat YYYY-MM-DD
axisFormat Week %W
todayMarker off
section v41
Alpha phase :a1, 2026-01-19, 4w
M146 enters Chrome beta :milestone, bm1, after a1, 0d
Beta phase :b1, after a1, 4w
M146 enters Chrome stable :milestone, s1, after b1, 0d
Supported until v44 release :active, after b1, 12w
section v42
Alpha phase :a2, after b1, 4w
M148 enters Chrome beta :milestone, bm2, after a2, 0d
Beta phase :b2, after a2, 4w
M148 enters Chrome stable :milestone, s2, after b2, 0d
Supported until v45 release :active, after b2, 4w
```
### Example
When Electron 41 hits its stable release, the release line for Electron 42 is branched off of `main`.
Its first alpha release is created with all the changes contained on `main`:
```mermaid
gitGraph
commit
commit
commit
branch "42-x-y"
checkout "42-x-y"
commit tag:"v42.0.0-alpha.1"
```
A bug fix comes into `main` that can be backported to the release branch. The patch is applied,
and it is published in the next `v42.0.0-alpha.2` release.
```mermaid
gitGraph
commit
commit
commit
branch "42-x-y"
checkout "42-x-y"
commit id:"42.0.0-alpha.1" tag:"v42.0.0-alpha.1"
checkout "main"
commit
commit id:"fix-1"
checkout "42-x-y"
cherry-pick id:"fix-1" tag:"v42.0.0-alpha.2"
```
The version of Chromium that powers Electron 42 hits Chrome's beta channel. The `alpha` line is
promoted to `beta`.
```mermaid
gitGraph
commit
commit
commit
branch "42-x-y"
checkout "42-x-y"
commit id:"42.0.0-alpha.1" tag:"v42.0.0-alpha.1"
checkout "main"
commit
commit id:"fix-1"
checkout "42-x-y"
cherry-pick id:"fix-1" tag:"v42.0.0-alpha.2"
checkout "main"
commit
commit
commit id:"fix-2"
checkout "42-x-y"
cherry-pick id:"fix-2" tag:"v42.0.0-beta.1"
```
Beta releases continue weekly until Electron 42 is promoted to stable and the same cycle starts again
with `43-x-y`. Later, a zero-day exploit is revealed and a fix is applied to `main`. We backport the
fix to the `42-x-y` line and release `42.0.1`.
```mermaid
gitGraph
commit
commit
commit
branch "42-x-y"
checkout "42-x-y"
commit id:"42.0.0-alpha.1" tag:"v42.0.0-alpha.1"
checkout "main"
commit
commit id:"fix-1"
checkout "42-x-y"
cherry-pick id:"fix-1" tag:"v42.0.0-alpha.2"
checkout "main"
commit
commit
commit id:"fix-2"
checkout "42-x-y"
cherry-pick id:"fix-2" tag:"v42.0.0-beta.1"
checkout "main"
commit id:"fix-3"
checkout "42-x-y"
cherry-pick id:"fix-3" tag:"v42.0.0"
checkout "main"
branch "43-x-y"
checkout "43-x-y"
commit id:"43.0.0-alpha.1" tag:"v43.0.0-alpha.1"
checkout "main"
commit id:"security-fix"
checkout "42-x-y"
cherry-pick id:"security-fix" tag:"v42.0.1"
checkout "43-x-y"
cherry-pick id:"security-fix" tag:"v43.0.0-alpha.2"
```
### Backport request process
@@ -136,10 +245,11 @@ The `electron/electron` repository also enforces squash merging, so you only nee
## Versioned `main` branch
* The `main` branch will always contain the next major version `X.0.0-nightly.DATE` in its `package.json`.
* The `main` branch always corresponds to the major version above the current pre-release line.
* Unstable nightly releases of `main` are released under the [`electron-nightly`](https://www.npmjs.com/package/electron-nightly)
package on npm.
* Release branches are never merged back to `main`.
* Release branches _do_ contain the correct version in their `package.json`.
* As soon as a release branch is cut for a major, `main` must be bumped to the next major (i.e. `main` is always versioned as the next theoretical release branch).
## Historical versioning (Electron 1.X)
@@ -147,6 +257,29 @@ Electron versions _< 2.0_ did not conform to the [SemVer](https://semver.org) sp
Here is an example of the 1.x strategy:
![1.x Versioning](../images/versioning-sketch-0.png)
```mermaid
---
config:
gitGraph:
mainBranchName: 'master'
---
gitGraph
commit
branch "bugfix-1"
checkout "bugfix-1"
commit
checkout master
merge "bugfix-1" tag:"1.8.1"
branch "feature"
checkout "feature"
commit
checkout master
merge "feature" tag:"1.8.2"
branch "bugfix-2"
checkout "bugfix-2"
commit
checkout master
merge "bugfix-2" tag:"1.8.3"
```
An app developed with `1.8.1` cannot take the `1.8.3` bug fix without either absorbing the `1.8.2` feature or backporting the fix and maintaining a new release line.


@@ -25,6 +25,27 @@ included in the `electron` package:
npm install electron --save-dev
```
## Installing prereleases
Electron [distributes experimental releases of future major versions](./electron-timelines.md)
via npm as well.
Nightly builds contain the latest changes from the `main` branch:
```sh
npm install electron-nightly --save-dev
```
Alpha and beta builds contain changes slated for the next major version:
```sh
npm install electron@alpha --save-dev
npm install electron@beta --save-dev
```
> [!TIP]
> For more information on available Electron releases, see the [Release Status dashboard](https://releases.electronjs.org).
## Running Electron ad-hoc
If you're in a pinch and would prefer to not use `npm install` in your local


@@ -148,3 +148,12 @@ fix_restore_sdk_inputs_cross-toolchain_deps_for_macos.patch
fix_fire_menu_popup_start_for_dynamically_created_aria_menus.patch
feat_allow_enabling_extensions_on_custom_protocols.patch
fix_initialize_com_on_desktopmedialistcapturethread_on_windows.patch
fix_use_fresh_lazynow_for_onendworkitemimpl_after_didruntask.patch
cherry-pick-b173791bf402.patch
cherry-pick-be87466afecb.patch
cherry-pick-c0390bcd64ba.patch
cherry-pick-1b69067db7d2.patch
cherry-pick-d513cd2fe668.patch
cherry-pick-dc5e20c4c055.patch
cherry-pick-847b11ad2fa3.patch
cherry-pick-fc79e8cc2dfc.patch


@@ -0,0 +1,103 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Vasilii Sukhanov <vasilii@chromium.org>
Date: Wed, 8 Apr 2026 07:48:21 -0700
Subject: Fix cross-domain password leak via manual-fallback preview
In PasswordManualFallbackFlow::DidSelectSuggestion, when a user selects
a password suggestion, the browser process sends the cleartext password
to the renderer for previewing. If the suggestion is cross-domain, this
leak happens without consent or auth.
This CL fixes this by omitting the password in the preview message for
all the cases by sending the fake string.
Fixed: 498269651
Change-Id: Ic9546114c453f05de1030f05c7a9830b39d73038
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/7735152
Commit-Queue: Vasilii Sukhanov <vasilii@chromium.org>
Reviewed-by: Anna Tsvirchkova <atsvirchkova@google.com>
Cr-Commit-Position: refs/heads/main@{#1611490}
diff --git a/components/password_manager/core/browser/password_manual_fallback_flow.cc b/components/password_manager/core/browser/password_manual_fallback_flow.cc
index 6fd5468b061949c7f4a45a29b07e1325bde629e3..47bd86d5fd70a849a13b6837a98f3009fbc10ea6 100644
--- a/components/password_manager/core/browser/password_manual_fallback_flow.cc
+++ b/components/password_manager/core/browser/password_manual_fallback_flow.cc
@@ -213,12 +213,13 @@ void PasswordManualFallbackFlow::DidSelectSuggestion(
if (!form) {
return;
}
+ const auto payload =
+ suggestion.GetPayload<Suggestion::PasswordSuggestionDetails>();
password_manager_driver_->PreviewSuggestionById(
form->username_element_renderer_id,
form->password_element_renderer_id,
GetUsernameFromLabel(suggestion.labels[0][0].value),
- suggestion.GetPayload<Suggestion::PasswordSuggestionDetails>()
- .password);
+ std::u16string(payload.password.length(), '*'));
break;
}
case autofill::SuggestionType::kPasswordFieldByFieldFilling:
diff --git a/components/password_manager/core/browser/password_manual_fallback_flow_unittest.cc b/components/password_manager/core/browser/password_manual_fallback_flow_unittest.cc
index 8b51bbcab5ec65562eed443ea9ba80dbaf8cba63..b99c6531aeb2d4c46e0a88cc0478e18c117c2bb6 100644
--- a/components/password_manager/core/browser/password_manual_fallback_flow_unittest.cc
+++ b/components/password_manager/core/browser/password_manual_fallback_flow_unittest.cc
@@ -656,7 +656,7 @@ TEST_F(PasswordManualFallbackFlowTest,
EXPECT_CALL(driver(), PreviewSuggestionById(form.username_element_renderer_id,
form.password_element_renderer_id,
std::u16string(u"username"),
- std::u16string(u"password")));
+ std::u16string(u"********")));
Suggestion suggestion = autofill::test::CreateAutofillSuggestion(
SuggestionType::kPasswordEntry, u"google.com",
CreateTestPasswordDetails());
@@ -667,6 +667,40 @@ TEST_F(PasswordManualFallbackFlowTest,
flow().DidSelectSuggestion(suggestion);
}
+// Test that password manual fallback suggestion is previewed without password
+// if the suggestion is cross-domain.
+TEST_F(PasswordManualFallbackFlowTest,
+ SelectFillFullFormSuggestion_CrossDomain_TriggeredOnAPasswordForm) {
+ InitializeFlow();
+ ProcessPasswordStoreUpdates();
+
+ PasswordForm form;
+ form.username_element_renderer_id = MakeFieldRendererId();
+ form.password_element_renderer_id = MakeFieldRendererId();
+ // Simulate that the field is/isn't classified as target filling password.
+ EXPECT_CALL(password_form_cache(),
+ GetPasswordForm(_, form.username_element_renderer_id))
+ .WillRepeatedly(Return(&form));
+
+ flow().RunFlow(form.username_element_renderer_id, gfx::RectF{},
+ TextDirection::LEFT_TO_RIGHT);
+
+ // Expect that the password is empty in the preview call.
+ EXPECT_CALL(driver(), PreviewSuggestionById(form.username_element_renderer_id,
+ form.password_element_renderer_id,
+ std::u16string(u"username"),
+ std::u16string(u"********")));
+ Suggestion suggestion = autofill::test::CreateAutofillSuggestion(
+ SuggestionType::kPasswordEntry, u"google.com",
+ Suggestion::PasswordSuggestionDetails(u"username", u"password",
+ "https://cross-domain.com/",
+ u"cross-domain.com",
+ /*is_cross_domain=*/true));
+ suggestion.labels = {{Suggestion::Text(u"username")}};
+ suggestion.acceptability = Suggestion::Acceptability::kAcceptable;
+ flow().DidSelectSuggestion(suggestion);
+}
+
// Test that only password field is previewed if the credential doesn't have
// a username saved for it.
TEST_F(PasswordManualFallbackFlowTest,
@@ -687,7 +721,7 @@ TEST_F(PasswordManualFallbackFlowTest,
EXPECT_CALL(driver(), PreviewSuggestionById(FieldRendererId(),
form.password_element_renderer_id,
std::u16string(),
- std::u16string(u"password")));
+ std::u16string(u"********")));
Suggestion suggestion = autofill::test::CreateAutofillSuggestion(
SuggestionType::kPasswordEntry, u"google.com",
CreateTestPasswordDetails());


@@ -0,0 +1,216 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Joey Arhar <jarhar@chromium.org>
Date: Fri, 10 Apr 2026 12:19:11 -0700
Subject: Fix crashes when restoring <selectedcontent> with <input>
When restoring form control state, the DOM could be modified to add or
remove more listed elements to the form if a select element is being
restored which has a selectedcontent element.
Fixed: 499384399
Change-Id: I18f69c30ae25396c53625f7a3172626b79de3ae3
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/7732030
Reviewed-by: Joey Arhar <jarhar@chromium.org>
Commit-Queue: Joey Arhar <jarhar@chromium.org>
Reviewed-by: Dominic Farolino <dom@chromium.org>
Cr-Commit-Position: refs/heads/main@{#1613032}
diff --git a/third_party/blink/renderer/core/html/forms/form_controller.cc b/third_party/blink/renderer/core/html/forms/form_controller.cc
index abacf373d30debc17498eaeb7cd940972aed2a3a..c4cb59678eeacc2d3fb9f69a30df8674154dbf37 100644
--- a/third_party/blink/renderer/core/html/forms/form_controller.cc
+++ b/third_party/blink/renderer/core/html/forms/form_controller.cc
@@ -492,8 +492,10 @@ void FormController::RestoreControlStateIn(HTMLFormElement& form) {
if (!document_->HasFinishedParsing())
return;
EventQueueScope scope;
- const ListedElement::List& elements = form.ListedElements();
- for (const auto& control : elements) {
+ // Make a copy of the list because the DOM could be modified during
+ // restoration of a <select> with a <selectedcontent> element.
+ ListedElement::List elements_copy(form.ListedElements());
+ for (const auto& control : elements_copy) {
if (!control->ClassSupportsStateRestore())
continue;
if (OwnerFormForState(*control) != &form)
@@ -550,7 +552,11 @@ void FormController::RestoreAllControlsInDocumentOrder() {
return;
HeapHashSet<Member<HTMLFormElement>> finished_forms;
EventQueueScope scope;
- for (auto& control : document_state_->GetControlList()) {
+ // Make a copy of the list because the DOM could be modified during
+ // restoration of a <select> with a <selectedcontent> element.
+ DocumentState::ControlList control_list_copy(
+ document_state_->GetControlList());
+ for (auto& control : control_list_copy) {
auto* owner = OwnerFormForState(*control);
if (!owner)
RestoreControlStateFor(*control);
diff --git a/third_party/blink/web_tests/VirtualTestSuites b/third_party/blink/web_tests/VirtualTestSuites
index ef47961c57cd15f8c59a4554165568c7f13c0d0a..54331107dd22c6b534327ffd5c282ca3220345de 100644
--- a/third_party/blink/web_tests/VirtualTestSuites
+++ b/third_party/blink/web_tests/VirtualTestSuites
@@ -3067,7 +3067,7 @@
],
"bases": [
"external/wpt/html/semantics/forms/the-select-element/customizable-select/selectedcontent-in-option-crash.html",
- "external/wpt/html/semantics/forms/the-select-element/customizable-select/selectedcontent-restore.html",
+ "external/wpt/html/semantics/forms/the-select-element/customizable-select/selectedcontent-restore.optional.html",
"external/wpt/html/semantics/forms/the-select-element/customizable-select/selectedcontent-color.html",
"external/wpt/html/semantics/forms/the-select-element/customizable-select/selectedcontent-nested.html",
"external/wpt/html/semantics/forms/the-select-element/customizable-select/selectedcontentelement-attr.tentative.html"
diff --git a/third_party/blink/web_tests/external/wpt/html/semantics/forms/the-select-element/customizable-select/resources/selectedcontent-input.html b/third_party/blink/web_tests/external/wpt/html/semantics/forms/the-select-element/customizable-select/resources/selectedcontent-input.html
new file mode 100644
index 0000000000000000000000000000000000000000..847f42ac304835c2049cf434a4dec68814ad533c
--- /dev/null
+++ b/third_party/blink/web_tests/external/wpt/html/semantics/forms/the-select-element/customizable-select/resources/selectedcontent-input.html
@@ -0,0 +1,27 @@
+<!DOCTYPE html>
+<style>
+select,::picker(select) {
+ appearance: base-select;
+}
+</style>
+<form action="blank.html">
+ <select>
+ <button>
+ <selectedcontent></selectedcontent>
+ </button>
+ <option id=one>one</option>
+ <option id=two>two</option>
+ </select>
+</form>
+
+<script>
+window.createInput = () => {
+ const selectedcontent = document.querySelector('selectedcontent');
+ const input = document.createElement('input');
+ window.input = input;
+ input.name = 'input';
+ selectedcontent.innerHTML = '';
+ selectedcontent.appendChild(input);
+};
+window.createInput();
+</script>
diff --git a/third_party/blink/web_tests/external/wpt/html/semantics/forms/the-select-element/customizable-select/selectedcontent-restore.html b/third_party/blink/web_tests/external/wpt/html/semantics/forms/the-select-element/customizable-select/selectedcontent-restore.html
deleted file mode 100644
index da5fe450abbae0d19826021f114cc6388f97bc57..0000000000000000000000000000000000000000
--- a/third_party/blink/web_tests/external/wpt/html/semantics/forms/the-select-element/customizable-select/selectedcontent-restore.html
+++ /dev/null
@@ -1,35 +0,0 @@
-<!DOCTYPE html>
-<link rel=author href="mailto:jarhar@chromium.org">
-<link rel=help href="https://github.com/whatwg/html/issues/9799">
-<script src="/resources/testharness.js"></script>
-<script src="/resources/testharnessreport.js"></script>
-<script src="/resources/testdriver.js"></script>
-<script src="/resources/testdriver-vendor.js"></script>
-
-<iframe src="resources/selectedcontent-restore-iframe.html"></iframe>
-
-<script>
-const iframe = document.querySelector('iframe');
-promise_test(async () => {
- await new Promise(resolve => iframe.onload = resolve);
- await test_driver.bless();
-
- iframe.contentDocument.querySelector('select').value = 'two';
- assert_equals(iframe.contentDocument.querySelector('select').value, 'two',
- 'Assining two to select.value should work.');
- iframe.contentDocument.querySelector('form').submit();
- await new Promise(resolve => iframe.onload = resolve);
-
- await test_driver.bless();
- iframe.contentWindow.history.back();
- await new Promise(resolve => iframe.onload = resolve);
- await new Promise(requestAnimationFrame);
- await new Promise(requestAnimationFrame);
-
- assert_equals(iframe.contentDocument.querySelector('select').value, 'two',
- 'The selects value should be restored after navigating back.');
- assert_equals(iframe.contentDocument.querySelector('selectedcontent').innerHTML,
- iframe.contentDocument.querySelector('option[value=two]').innerHTML,
- 'selectedcontent.innerHTML should match the selected <option>');
-}, '<selectedcontent> should be up to date after form restoration.');
-</script>
diff --git a/third_party/blink/web_tests/external/wpt/html/semantics/forms/the-select-element/customizable-select/selectedcontent-restore.optional.html b/third_party/blink/web_tests/external/wpt/html/semantics/forms/the-select-element/customizable-select/selectedcontent-restore.optional.html
new file mode 100644
index 0000000000000000000000000000000000000000..1d0064659cd9df06d6267261bf0b39b3fb29aeef
--- /dev/null
+++ b/third_party/blink/web_tests/external/wpt/html/semantics/forms/the-select-element/customizable-select/selectedcontent-restore.optional.html
@@ -0,0 +1,76 @@
+<!DOCTYPE html>
+<link rel=author href="mailto:jarhar@chromium.org">
+<link rel=help href="https://github.com/whatwg/html/issues/9799">
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script src="/resources/testdriver.js"></script>
+<script src="/resources/testdriver-vendor.js"></script>
+
+<!-- This test is marked optional because form control restoration is not explicitly specified. -->
+
+<iframe id=iframe1 src="resources/selectedcontent-restore-iframe.html"></iframe>
+<iframe id=iframe2 src="resources/selectedcontent-input.html"></iframe>
+
+<script>
+const iframe1 = document.getElementById('iframe1');
+const iframe2 = document.getElementById('iframe2');
+const iframe1load = new Promise(resolve => iframe1.onload = resolve);
+const iframe2load = new Promise(resolve => iframe2.onload = resolve);
+
+promise_test(async () => {
+ await iframe1load;
+ await test_driver.bless();
+
+ iframe1.contentDocument.querySelector('select').value = 'two';
+ assert_equals(iframe1.contentDocument.querySelector('select').value, 'two',
+ 'Assigning two to select.value should work.');
+ iframe1.contentDocument.querySelector('form').submit();
+ await new Promise(resolve => iframe1.onload = resolve);
+
+ await test_driver.bless();
+ iframe1.contentWindow.history.back();
+ // Form controls are restored immediately after the load event is fired, so
+ // one rAF is added after awaiting the load event. See
+ // LocalDOMWindow::DispatchLoadAndPageshowEvents.
+ await new Promise(resolve => iframe1.onload = resolve);
+ await new Promise(requestAnimationFrame);
+
+ assert_equals(iframe1.contentDocument.querySelector('select').value, 'two',
+ 'The selects value should be restored after navigating back.');
+ assert_equals(iframe1.contentDocument.querySelector('selectedcontent').innerHTML,
+ iframe1.contentDocument.querySelector('option[value=two]').innerHTML,
+ 'selectedcontent.innerHTML should match the selected <option>');
+}, '<selectedcontent> should be up to date after form restoration.');
+
+promise_test(async () => {
+ await iframe2load;
+ await test_driver.bless();
+
+ iframe2.contentDocument.querySelector('select').value = 'two';
+ iframe2.contentWindow.createInput();
+ iframe2.contentDocument.querySelector('input').value = 'value';
+ iframe2.contentDocument.querySelector('form').submit();
+ await new Promise(resolve => iframe2.onload = resolve);
+
+ await test_driver.bless();
+ iframe2.contentWindow.history.back();
+ // Form controls are restored immediately after the load event is fired, so
+ // one rAF is added after awaiting the load event. See
+ // LocalDOMWindow::DispatchLoadAndPageshowEvents.
+ await new Promise(resolve => iframe2.onload = resolve);
+ await new Promise(requestAnimationFrame);
+
+ // A crash would happen here because the form restoration code would iterate
+ // over all of the form controls and remove an input element to restore during
+ // restoration of the selectedcontent element, then try to restore the
+ // disconnected input.
+
+ assert_equals(iframe2.contentDocument.querySelector('select').value, 'two',
+ 'The selects value should be restored after navigating back.');
+ assert_equals(iframe2.contentDocument.querySelector('selectedcontent').innerHTML,
+ iframe2.contentDocument.getElementById('two').innerHTML,
+ 'selectedcontent.innerHTML should match the selected <option>');
+ assert_equals(iframe2.contentWindow.input.value, '',
+ 'The text inputs value should not be restored because it was removed before restoring.');
+}, '<input> inside <selectedcontent> should be restored after form submission.');
+</script>


@@ -0,0 +1,53 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: p0-tato <smartphonewithbear@gmail.com>
Date: Mon, 13 Apr 2026 14:50:07 -0700
Subject: Fix dangling pointers in OpenXrSpatialFrameworkManager
Pointers to vector elements were collected during emplace_back,
which invalidates them on reallocation. Split into two loops
and reserve the correct capacity.
Bug: 497724498
Change-Id: I204534bc1bd1522fe03db86f03c2c3e0d285631c
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/7735242
Commit-Queue: Brian Sheedy <bsheedy@chromium.org>
Reviewed-by: Brian Sheedy <bsheedy@chromium.org>
Reviewed-by: Brandon Jones <bajones@chromium.org>
Cr-Commit-Position: refs/heads/main@{#1613990}
diff --git a/AUTHORS b/AUTHORS
index 700ad1f495549eac39242e5d3b489245eb1b633d..5ef10597084e19b283209ee9833c56a7a0346187 100644
--- a/AUTHORS
+++ b/AUTHORS
@@ -736,6 +736,7 @@ Jihoon Chung <jihoon@gmail.com>
Jihun Brent Kim <devgrapher@gmail.com>
Jihwan Marc Kim <bluewhale.marc@gmail.com>
Jihye Hyun <jijinny26@gmail.com>
+Jihyeon Jeong <smartphonewithbear@gmail.com>
Jihyeon Lee <wlgus7464@gmail.com>
Jim Wu <lofoz.tw@gmail.com>
Jin Yang <jin.a.yang@intel.com>
diff --git a/device/vr/openxr/openxr_spatial_framework_manager.cc b/device/vr/openxr/openxr_spatial_framework_manager.cc
index 2fd3609f277dc425d38a9acf9895b1ad02d64c72..b6f82c5c2999073aad9c43811fe9561c670992f9 100644
--- a/device/vr/openxr/openxr_spatial_framework_manager.cc
+++ b/device/vr/openxr/openxr_spatial_framework_manager.cc
@@ -74,12 +74,15 @@ OpenXrSpatialFrameworkManager::OpenXrSpatialFrameworkManager(
// to help abstract some of the details of creating the child structs, even
// though at present we only have a configuration base.
std::vector<OpenXrSpatialCapabilityConfigurationBase> capability_configs;
- std::vector<XrSpatialCapabilityConfigurationBaseHeaderEXT*>
- capability_config_ptrs;
+ capability_configs.reserve(capability_configuration.size());
for (auto& [capability, components] : capability_configuration) {
capability_configs.emplace_back(capability, components);
- capability_config_ptrs.push_back(
- capability_configs.back().GetAsBaseHeader());
+ }
+
+ std::vector<XrSpatialCapabilityConfigurationBaseHeaderEXT*>
+ capability_config_ptrs;
+ for (auto& config : capability_configs) {
+ capability_config_ptrs.push_back(config.GetAsBaseHeader());
}
XrSpatialContextCreateInfoEXT create_info = {


@@ -0,0 +1,224 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Jonathan Ross <jonross@chromium.org>
Date: Wed, 8 Apr 2026 17:15:45 -0700
Subject: gl: Make DCOMPSurfaceRegistry thread-safe
DCOMPSurfaceRegistry is accessed from both the GPU IO thread (via
GpuServiceImpl) and the GPU main scheduler thread (via DCOMPTexture).
The underlying base::flat_map is not thread-safe, leading to potential
container corruption and crashes (UAF, BOf) during concurrent access.
This CL adds a base::Lock to protect all accesses to the map and
includes a new multi-threaded stress test to verify the fix.
Bug: 493315759
Change-Id: Ibb7ef5e602f222410fde06a61fb3f5e571e7a70f
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/7737061
Reviewed-by: Sunny Sachanandani <sunnyps@chromium.org>
Commit-Queue: Jonathan Ross <jonross@chromium.org>
Cr-Commit-Position: refs/heads/main@{#1611867}
diff --git a/ui/gl/BUILD.gn b/ui/gl/BUILD.gn
index 3584b693370b5199456608a26ceb763f6e9c3446..1cb66199a0b8adf2035a05fecc411c67180f7e80 100644
--- a/ui/gl/BUILD.gn
+++ b/ui/gl/BUILD.gn
@@ -552,6 +552,7 @@ test("gl_unittests") {
if (is_win) {
sources += [
"dcomp_presenter_unittest.cc",
+ "dcomp_surface_registry_unittest.cc",
"delegated_ink_point_renderer_gpu_unittest.cc",
"gl_fence_win_unittest.cc",
"hdr_metadata_helper_win_unittest.cc",
diff --git a/ui/gl/dcomp_surface_registry.cc b/ui/gl/dcomp_surface_registry.cc
index 352cc298b9ea97361ae2a7d668b7d7e9eb455cd5..410f76f8980438abae32b6c89e7083ae48cf1699 100644
--- a/ui/gl/dcomp_surface_registry.cc
+++ b/ui/gl/dcomp_surface_registry.cc
@@ -3,8 +3,11 @@
// found in the LICENSE file.
#include "ui/gl/dcomp_surface_registry.h"
+
+#include "base/check.h"
#include "base/logging.h"
#include "base/no_destructor.h"
+#include "base/synchronization/lock.h"
namespace gl {
@@ -20,8 +23,11 @@ base::UnguessableToken DCOMPSurfaceRegistry::RegisterDCOMPSurfaceHandle(
base::win::ScopedHandle surface) {
DVLOG(1) << __func__;
base::UnguessableToken token = base::UnguessableToken::Create();
- DCHECK(surface_handle_map_.find(token) == surface_handle_map_.end());
- surface_handle_map_[token] = std::move(surface);
+ {
+ base::AutoLock lock(lock_);
+ DCHECK(surface_handle_map_.find(token) == surface_handle_map_.end());
+ surface_handle_map_[token] = std::move(surface);
+ }
DVLOG(1) << __func__ << ": Surface handle registered with token " << token;
return token;
}
@@ -29,12 +35,14 @@ base::UnguessableToken DCOMPSurfaceRegistry::RegisterDCOMPSurfaceHandle(
void DCOMPSurfaceRegistry::UnregisterDCOMPSurfaceHandle(
const base::UnguessableToken& token) {
DVLOG(1) << __func__;
+ base::AutoLock lock(lock_);
surface_handle_map_.erase(token);
}
base::win::ScopedHandle DCOMPSurfaceRegistry::TakeDCOMPSurfaceHandle(
const base::UnguessableToken& token) {
DVLOG(1) << __func__;
+ base::AutoLock lock(lock_);
auto surface_iter = surface_handle_map_.find(token);
if (surface_iter != surface_handle_map_.end()) {
// Take ownership.
diff --git a/ui/gl/dcomp_surface_registry.h b/ui/gl/dcomp_surface_registry.h
index 803a3cc6398f0777504063118920998869086d7f..7cd9fdbfe8669bc97d4b664fdb29573ec2ea26de 100644
--- a/ui/gl/dcomp_surface_registry.h
+++ b/ui/gl/dcomp_surface_registry.h
@@ -7,6 +7,7 @@
#include "base/containers/flat_map.h"
#include "base/no_destructor.h"
+#include "base/synchronization/lock.h"
#include "base/unguessable_token.h"
#include "base/win/scoped_handle.h"
#include "ui/gl/gl_export.h"
@@ -44,7 +45,9 @@ class GL_EXPORT DCOMPSurfaceRegistry {
~DCOMPSurfaceRegistry();
base::flat_map<base::UnguessableToken, base::win::ScopedHandle>
- surface_handle_map_;
+ surface_handle_map_ GUARDED_BY(lock_);
+
+ base::Lock lock_;
};
} // namespace gl
diff --git a/ui/gl/dcomp_surface_registry_unittest.cc b/ui/gl/dcomp_surface_registry_unittest.cc
new file mode 100644
index 0000000000000000000000000000000000000000..595e2388e9f50df33214359ecef0c135d94610b8
--- /dev/null
+++ b/ui/gl/dcomp_surface_registry_unittest.cc
@@ -0,0 +1,118 @@
+// Copyright 2026 The Chromium Authors
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "ui/gl/dcomp_surface_registry.h"
+
+#include <windows.h>
+
+#include <atomic>
+#include <thread>
+#include <vector>
+
+#include "base/memory/raw_ptr.h"
+#include "base/synchronization/lock.h"
+#include "base/unguessable_token.h"
+#include "base/win/scoped_handle.h"
+#include "testing/gtest/include/gtest/gtest.h"
+
+namespace gl {
+
+namespace {
+
+class DCOMPSurfaceRegistryTest : public testing::Test {
+ public:
+ void SetUp() override { registry_ = DCOMPSurfaceRegistry::GetInstance(); }
+
+ protected:
+ raw_ptr<DCOMPSurfaceRegistry> registry_;
+};
+
+} // namespace
+
+// Stress test for concurrent access to DCOMPSurfaceRegistry using the
+// barrier pattern to ensure TSAN consistently catches data races.
+//
+// Without proper synchronization (e.g., base::Lock), this test would likely
+// fail in the following ways:
+// 1. Memory Corruption (UAF/HeapBOf): base::flat_map uses a contiguous
+// std::vector. If one thread triggers a reallocation during an insertion
+// while another thread is searching or erasing, the latter will hold an
+// invalidated iterator or pointer.
+// 2. Container Inconsistency: Concurrent insertions and erasures can leave
+// the map in an unsorted or corrupted state, leading to failed lookups
+// for valid tokens.
+// 3. Sanitizer Triggers: ASan would detect container-overflow or
+// heap-use-after-free, and TSan would flag a data race.
+TEST_F(DCOMPSurfaceRegistryTest, ConcurrentRegisterAndTake) {
+ const int kOpsPerThread = 100;
+
+ std::vector<base::UnguessableToken> tokens;
+ base::Lock tokens_lock;
+
+ std::atomic<bool> start_flag{false};
+ std::atomic<int> threads_ready{0};
+
+ auto register_worker = [&]() {
+ threads_ready++;
+ while (!start_flag.load(std::memory_order_acquire)) {
+ std::this_thread::yield();
+ }
+
+ for (int i = 0; i < kOpsPerThread; ++i) {
+ base::win::ScopedHandle handle(
+ ::CreateEvent(nullptr, FALSE, FALSE, nullptr));
+ base::UnguessableToken token =
+ registry_->RegisterDCOMPSurfaceHandle(std::move(handle));
+ {
+ base::AutoLock lock(tokens_lock);
+ tokens.push_back(token);
+ }
+ }
+ };
+
+ auto take_worker = [&]() {
+ threads_ready++;
+ while (!start_flag.load(std::memory_order_acquire)) {
+ std::this_thread::yield();
+ }
+
+ int taken = 0;
+ while (taken < kOpsPerThread) {
+ base::UnguessableToken token;
+ {
+ base::AutoLock lock(tokens_lock);
+ if (!tokens.empty()) {
+ token = tokens.back();
+ tokens.pop_back();
+ }
+ }
+ if (!token.is_empty()) {
+ base::win::ScopedHandle handle =
+ registry_->TakeDCOMPSurfaceHandle(token);
+ taken++;
+ } else {
+ std::this_thread::yield();
+ }
+ }
+ };
+
+ // With the barrier pattern, two threads are sufficient to trigger
+ // the race condition for TSan.
+ std::thread t1(register_worker);
+ std::thread t2(take_worker);
+
+ // Wait until both threads are ready at the starting line.
+ while (threads_ready.load(std::memory_order_relaxed) < 2) {
+ std::this_thread::yield();
+ }
+
+ // Signal the start flag to allow both threads to race from the initialized
+ // state.
+ start_flag.store(true, std::memory_order_release);
+
+ t1.join();
+ t2.join();
+}
+
+} // namespace gl

View File

@@ -0,0 +1,210 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Fergal Daly <fergal@chromium.org>
Date: Mon, 6 Apr 2026 19:49:06 -0700
Subject: Fix UAF in FileSystemAccessChangeSource.
`DidInitialize` calls any outstanding initialization callbacks, but a
callback can delete `this`. The code guards against this in its access
of `initialization_callbacks_` but not `initialization_result_`.
This fix keeps a copy of the result on the stack.
This also adds a test which fails with ASAN before the fix is applied
and passes after.
The basic test code was written by Gemini.
Fixed: 497880137
Change-Id: I046831db23cb4b8e41964910e2aede9b1be0db7f
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/7728464
Auto-Submit: Fergal Daly <fergal@chromium.org>
Reviewed-by: Ming-Ying Chung <mych@chromium.org>
Commit-Queue: Ming-Ying Chung <mych@chromium.org>
Cr-Commit-Position: refs/heads/main@{#1610499}
diff --git a/content/browser/file_system_access/file_system_access_change_source.cc b/content/browser/file_system_access/file_system_access_change_source.cc
index 566dc1ea40b43a54b33d70e82a20ff5695b57b5e..48bd867a9d3d140eaf515ea7bc1613231f7e79e9 100644
--- a/content/browser/file_system_access/file_system_access_change_source.cc
+++ b/content/browser/file_system_access/file_system_access_change_source.cc
@@ -71,13 +71,14 @@ void FileSystemAccessChangeSource::DidInitialize(
CHECK(!initialization_result_.has_value());
CHECK(!initialization_callbacks_.empty());
- initialization_result_ = std::move(result);
+ // The callbacks may cause |this| to be deleted, so we should only use
+ // stack-based objects below.
+ initialization_result_ = result->Clone();
- // Move the callbacks to the stack since they may cause |this| to be deleted.
auto initialization_callbacks = std::move(initialization_callbacks_);
initialization_callbacks_.clear();
for (auto& callback : initialization_callbacks) {
- std::move(callback).Run(initialization_result_->Clone());
+ std::move(callback).Run(result->Clone());
}
}
diff --git a/content/browser/file_system_access/file_system_access_change_source_unittest.cc b/content/browser/file_system_access/file_system_access_change_source_unittest.cc
new file mode 100644
index 0000000000000000000000000000000000000000..b0f15909bebda29fc2ec689a6d3b15d797dcc722
--- /dev/null
+++ b/content/browser/file_system_access/file_system_access_change_source_unittest.cc
@@ -0,0 +1,146 @@
+// Copyright 2026 The Chromium Authors
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "content/browser/file_system_access/file_system_access_change_source.h"
+
+#include "base/files/scoped_temp_dir.h"
+#include "base/functional/bind.h"
+#include "base/memory/scoped_refptr.h"
+#include "base/task/sequenced_task_runner.h"
+#include "base/test/task_environment.h"
+#include "base/test/test_future.h"
+#include "content/browser/file_system_access/file_system_access_watch_scope.h"
+#include "storage/browser/file_system/file_system_context.h"
+#include "storage/browser/file_system/file_system_url.h"
+#include "storage/browser/quota/quota_manager_proxy.h"
+#include "storage/browser/test/test_file_system_context.h"
+#include "testing/gmock/include/gmock/gmock.h"
+#include "testing/gtest/include/gtest/gtest.h"
+#include "third_party/blink/public/mojom/file_system_access/file_system_access_error.mojom.h"
+
+namespace content {
+
+namespace {
+
+class MockRawChangeObserver
+ : public FileSystemAccessChangeSource::RawChangeObserver {
+ public:
+ MOCK_METHOD(void,
+ OnRawChange,
+ (const storage::FileSystemURL& changed_url,
+ bool error,
+ const FileSystemAccessChangeSource::ChangeInfo& change_info,
+ const FileSystemAccessWatchScope& scope),
+ (override));
+ MOCK_METHOD(void,
+ OnUsageChange,
+ (size_t old_usage,
+ size_t new_usage,
+ const FileSystemAccessWatchScope& scope),
+ (override));
+ MOCK_METHOD(void,
+ OnSourceBeingDestroyed,
+ (FileSystemAccessChangeSource * source),
+ (override));
+};
+
+class FakeChangeSource : public FileSystemAccessChangeSource {
+ public:
+ FakeChangeSource(
+ FileSystemAccessWatchScope scope,
+ scoped_refptr<storage::FileSystemContext> file_system_context)
+ : FileSystemAccessChangeSource(std::move(scope),
+ std::move(file_system_context)) {}
+ ~FakeChangeSource() override = default;
+
+ // FileSystemAccessChangeSource:
+ void Initialize(
+ base::OnceCallback<void(blink::mojom::FileSystemAccessErrorPtr)>
+ on_source_initialized) override {
+ base::SequencedTaskRunner::GetCurrentDefault()->PostTask(
+ FROM_HERE, base::BindOnce(std::move(on_source_initialized),
+ blink::mojom::FileSystemAccessError::New(
+ blink::mojom::FileSystemAccessStatus::kOk,
+ base::File::FILE_OK, "")));
+ }
+
+ void Signal(const storage::FileSystemURL& changed_url,
+ bool error = false,
+ ChangeInfo change_info = ChangeInfo()) {
+ NotifyOfChange(changed_url, error, change_info);
+ }
+};
+
+} // namespace
+
+class FileSystemAccessChangeSourceTest : public testing::Test {
+ public:
+ FileSystemAccessChangeSourceTest()
+ : task_environment_(base::test::TaskEnvironment::MainThreadType::IO) {}
+
+ void SetUp() override {
+ ASSERT_TRUE(dir_.CreateUniqueTempDir());
+ file_system_context_ = storage::CreateFileSystemContextForTesting(
+ /*quota_manager_proxy=*/nullptr, dir_.GetPath());
+ }
+
+ protected:
+ base::test::TaskEnvironment task_environment_;
+ base::ScopedTempDir dir_;
+ scoped_refptr<storage::FileSystemContext> file_system_context_;
+};
+
+TEST_F(FileSystemAccessChangeSourceTest, CreateAndInitialize) {
+ auto file_path = dir_.GetPath().AppendASCII("file");
+ auto file_url = file_system_context_->CreateCrackedFileSystemURL(
+ blink::StorageKey(), storage::kFileSystemTypeLocal, file_path);
+
+ auto scope = FileSystemAccessWatchScope::GetScopeForFileWatch(file_url);
+ FakeChangeSource source(scope, file_system_context_);
+
+ base::test::TestFuture<blink::mojom::FileSystemAccessErrorPtr> future;
+ source.EnsureInitialized(future.GetCallback());
+ EXPECT_EQ(future.Get()->status, blink::mojom::FileSystemAccessStatus::kOk);
+}
+
+TEST_F(FileSystemAccessChangeSourceTest, NotifyOfChange) {
+ auto file_path = dir_.GetPath().AppendASCII("file");
+ auto file_url = file_system_context_->CreateCrackedFileSystemURL(
+ blink::StorageKey(), storage::kFileSystemTypeLocal, file_path);
+
+ auto scope = FileSystemAccessWatchScope::GetScopeForFileWatch(file_url);
+ FakeChangeSource source(scope, file_system_context_);
+
+ MockRawChangeObserver observer;
+ source.AddObserver(&observer);
+
+ EXPECT_CALL(observer, OnRawChange(testing::Eq(file_url), testing::IsFalse(),
+ testing::_, testing::Eq(scope)));
+ source.Signal(file_url);
+
+ source.RemoveObserver(&observer);
+}
+
+// A callback passed to `EnsureInitialized` may result in `this` being
+// destroyed. This tests that `DidInitialize` (which calls the callbacks) is
+// robust to that situation. See https://crbug.com/497880137.
+TEST_F(FileSystemAccessChangeSourceTest, TestDestroyFromInitializeCallback) {
+ auto file_path = dir_.GetPath().AppendASCII("file");
+ auto file_url = file_system_context_->CreateCrackedFileSystemURL(
+ blink::StorageKey(), storage::kFileSystemTypeLocal, file_path);
+
+ auto scope = FileSystemAccessWatchScope::GetScopeForFileWatch(file_url);
+ FakeChangeSource* source = new FakeChangeSource(scope, file_system_context_);
+
+ source->EnsureInitialized(base::BindOnce(
+ [](FakeChangeSource* source, blink::mojom::FileSystemAccessErrorPtr) {
+ delete source;
+ },
+ base::Unretained(source)));
+ base::test::TestFuture<blink::mojom::FileSystemAccessErrorPtr> future;
+ source->EnsureInitialized(future.GetCallback());
+ EXPECT_EQ(future.Get()->status, blink::mojom::FileSystemAccessStatus::kOk);
+}
+
+} // namespace content
diff --git a/content/test/BUILD.gn b/content/test/BUILD.gn
index 4521cc9e247c44248627c12b9eda0961f837d744..6ea61eb47fcde420261a9dd7e7e3feefa31f87d2 100644
--- a/content/test/BUILD.gn
+++ b/content/test/BUILD.gn
@@ -2680,6 +2680,7 @@ test("content_unittests") {
"../browser/fenced_frame/redacted_fenced_frame_config_mojom_traits_unittest.cc",
"../browser/file_system/browser_file_system_helper_unittest.cc",
"../browser/file_system/file_system_operation_runner_unittest.cc",
+ "../browser/file_system_access/file_system_access_change_source_unittest.cc",
"../browser/file_system_access/file_system_access_directory_handle_impl_unittest.cc",
"../browser/file_system_access/file_system_access_file_handle_impl_unittest.cc",
"../browser/file_system_access/file_system_access_file_modification_host_impl_unittest.cc",

View File

@@ -0,0 +1,59 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Kenichi Ishibashi <bashi@chromium.org>
Date: Fri, 10 Apr 2026 17:14:24 -0700
Subject: [CORS] Block forbidden methods for no-cors requests
Previously, forbidden methods like TRACE and TRACK were allowed when
the request mode was no-cors, and only CONNECT was unconditionally
blocked.
This CL updates CorsURLLoaderFactory::IsValidRequest to block all
forbidden methods regardless of the request mode. The unit test is
also updated to reflect this new restriction.
Bug: 498765210
Change-Id: Ie451a3c2b8fa7aafdebade8b3ba517be3ce255f8
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/7743444
Reviewed-by: mmenke <mmenke@chromium.org>
Commit-Queue: Kenichi Ishibashi <bashi@chromium.org>
Cr-Commit-Position: refs/heads/main@{#1613186}
diff --git a/services/network/cors/cors_url_loader_factory.cc b/services/network/cors/cors_url_loader_factory.cc
index 6a1eb075b0ed581bf81c43a0439da12eff20664c..bf02d6663b47413e47ebfe9ae4ba5799787f69ae 100644
--- a/services/network/cors/cors_url_loader_factory.cc
+++ b/services/network/cors/cors_url_loader_factory.cc
@@ -910,13 +910,8 @@ bool CorsURLLoaderFactory::IsValidRequest(
return false;
}
- // Don't allow forbidden methods for any requests except RequestMode::kNoCors.
- // Don't allow CONNECT method for any request.
- if ((request.mode != mojom::RequestMode::kNoCors &&
- cors::IsForbiddenMethod(request.method)) ||
- (request.mode == mojom::RequestMode::kNoCors &&
- base::EqualsCaseInsensitiveASCII(
- request.method, net::HttpRequestHeaders::kConnectMethod))) {
+ // Don't allow forbidden methods.
+ if (cors::IsForbiddenMethod(request.method)) {
mojo::ReportBadMessage("CorsURLLoaderFactory: Forbidden method");
return false;
}
diff --git a/services/network/cors/cors_url_loader_unittest.cc b/services/network/cors/cors_url_loader_unittest.cc
index e9bbbc2013e6fb9498bec0982c045ea8b937a207..23a9e8093aa8d6cafb9a949d4a1dae86bd52a99d 100644
--- a/services/network/cors/cors_url_loader_unittest.cc
+++ b/services/network/cors/cors_url_loader_unittest.cc
@@ -109,11 +109,10 @@ TEST_F(CorsURLLoaderTest, ForbiddenMethods) {
std::string forbidden_method;
bool expect_allowed_for_no_cors;
} kTestCases[] = {
- // CONNECT is never allowed, while TRACE and TRACK are allowed only with
- // RequestMode::kNoCors.
+ // CONNECT, TRACE and TRACK are not allowed for any mode.
{"CONNECT", false},
- {"TRACE", true},
- {"TRACK", true},
+ {"TRACE", false},
+ {"TRACK", false},
};
for (const auto& test_case : kTestCases) {
SCOPED_TRACE(test_case.forbidden_method);

View File

@@ -0,0 +1,78 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Sunny Sachanandani <sunnyps@chromium.org>
Date: Wed, 8 Apr 2026 16:29:57 -0700
Subject: [gpu] Fix OOB write due to unvalidated get_offset
A compromised GPU process can provide an invalid get_offset to the
CommandBufferHelper (e.g., via shared memory). This offset is used to
calculate available space and could lead to out-of-bounds writes in the
Browser process if not validated.
This change adds a bounds check in
CommandBufferHelper::UpdateCachedState to ensure that the cached
get_offset is within the valid range [0, total_entry_count_]. If an
invalid offset is detected, it forces a context loss, frees the ring
buffer, and marks the helper as unusable, preventing further operations.
Bug: 498782145
Test: CommandBufferHelperTest.*
Change-Id: I8c64e546ecdc90a5a22d15e57ff762a86a6a6964
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/7739951
Reviewed-by: Vasiliy Telezhnikov <vasilyt@chromium.org>
Auto-Submit: Sunny Sachanandani <sunnyps@chromium.org>
Commit-Queue: Sunny Sachanandani <sunnyps@chromium.org>
Cr-Commit-Position: refs/heads/main@{#1611853}
diff --git a/gpu/command_buffer/client/cmd_buffer_helper.cc b/gpu/command_buffer/client/cmd_buffer_helper.cc
index ccda45b133c6a9f2ee60ccc8900bd4a4ce328394..5aea0c81b29b3507099f399c374f3cb372a3100e 100644
--- a/gpu/command_buffer/client/cmd_buffer_helper.cc
+++ b/gpu/command_buffer/client/cmd_buffer_helper.cc
@@ -158,6 +158,17 @@ void CommandBufferHelper::UpdateCachedState(const CommandBuffer::State& state) {
service_on_old_buffer_ =
(state.set_get_buffer_count != set_get_buffer_count_);
cached_get_offset_ = service_on_old_buffer_ ? 0 : state.get_offset;
+
+ if (!service_on_old_buffer_ &&
+ (cached_get_offset_ < 0 || cached_get_offset_ > total_entry_count_)) {
+ command_buffer_->ForceLostContext(error::kGuilty);
+ FreeRingBuffer();
+ usable_ = false;
+ context_lost_ = true;
+ cached_get_offset_ = 0; // Safe fallback
+ return;
+ }
+
cached_last_token_read_ = state.token;
// Don't transition from a lost context to a working context.
context_lost_ |= error::IsError(state.error);
diff --git a/gpu/command_buffer/client/cmd_buffer_helper_test.cc b/gpu/command_buffer/client/cmd_buffer_helper_test.cc
index 5b1e5fae133ef75a99dab4ba1f8d2beddef68138..31e46714756ee30bf2fc3353693b6d49be8f6076 100644
--- a/gpu/command_buffer/client/cmd_buffer_helper_test.cc
+++ b/gpu/command_buffer/client/cmd_buffer_helper_test.cc
@@ -70,6 +70,8 @@ class CommandBufferHelperTest : public testing::Test {
return helper_->immediate_entry_count_;
}
+ int32_t TotalEntryCount() const { return helper_->total_entry_count_; }
+
// Adds a command to the buffer through the helper, while adding it as an
// expected call on the API mock.
void AddCommandWithExpect(error::Error _return,
@@ -655,6 +657,17 @@ TEST_F(CommandBufferHelperTest, IsContextLost) {
EXPECT_TRUE(helper_->IsContextLost());
}
+TEST_F(CommandBufferHelperTest, TestInvalidGetOffset) {
+ EXPECT_FALSE(helper_->IsContextLost());
+ EXPECT_TRUE(helper_->usable());
+
+ command_buffer_->SetGetOffsetForTest(TotalEntryCount() + 1);
+ helper_->RefreshCachedToken(); // calls UpdateCachedState internally.
+
+ EXPECT_TRUE(helper_->IsContextLost());
+ EXPECT_FALSE(helper_->usable());
+}
+
// Checks helper's 'flush generation' updates.
TEST_F(CommandBufferHelperTest, TestFlushGeneration) {
// Explicit flushing only.

View File

@@ -0,0 +1,368 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Eugene Zemtsov <eugene@chromium.org>
Date: Wed, 8 Apr 2026 18:32:31 -0700
Subject: media: Zero-copy VP9 alpha decoding in VpxVideoDecoder
Configures the VP9 alpha decoder to use `memory_pool_` for external
frame buffers, eliminating the need for `libyuv::CopyPlane`.
The `VideoFrame` now wraps the alpha data directly from the pool using
a second destruction observer. `AllocateAlphaPlaneForFrameBuffer` and
`alpha_data` tracking are removed from `FrameBufferPool`.
Bug: 500066234
Change-Id: I6e7cf13bcc8a5a1759acfd51961859c4c57fcbf2
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/7737984
Reviewed-by: Ted (Chromium) Meyer <tmathmeyer@chromium.org>
Commit-Queue: Eugene Zemtsov <eugene@chromium.org>
Reviewed-by: Dale Curtis <dalecurtis@chromium.org>
Cr-Commit-Position: refs/heads/main@{#1611919}
diff --git a/media/base/frame_buffer_pool.cc b/media/base/frame_buffer_pool.cc
index ceb0313c1fa36483ac545a6440737da035a1224c..59bc0790178aa2eb2e6960ef396c96ae18bbba09 100644
--- a/media/base/frame_buffer_pool.cc
+++ b/media/base/frame_buffer_pool.cc
@@ -56,7 +56,6 @@ struct FrameBufferPool::FrameBuffer {
// Not using std::vector<uint8_t> as resize() calls take a really long time
// for large buffers.
BytesArray data;
- BytesArray alpha_data;
bool held_by_library = false;
// Needs to be a counter since a frame buffer might be used multiple times.
int held_by_frame = 0;
@@ -155,24 +154,6 @@ void FrameBufferPool::ReleaseFrameBuffer(void* fb_priv) {
}
}
-base::span<uint8_t> FrameBufferPool::AllocateAlphaPlaneForFrameBuffer(
- size_t min_size,
- void* fb_priv) {
- base::AutoLock lock(lock_);
- DCHECK(fb_priv);
-
- auto* frame_buffer = static_cast<FrameBuffer*>(fb_priv);
- DCHECK(IsUsedLocked(frame_buffer));
- if (frame_buffer->alpha_data.size() < min_size) {
- // Free the existing |alpha_data| first so that the memory can be reused,
- // if possible.
- frame_buffer->alpha_data = {};
- frame_buffer->alpha_data = AllocateMemory(min_size, zero_initialize_memory_,
- force_allocation_error_);
- }
- return frame_buffer->alpha_data;
-}
-
base::OnceClosure FrameBufferPool::CreateFrameCallback(void* fb_priv) {
base::AutoLock lock(lock_);
@@ -210,10 +191,9 @@ bool FrameBufferPool::OnMemoryDump(
size_t bytes_reserved = 0;
for (const auto& frame_buffer : frame_buffers_) {
if (IsUsedLocked(frame_buffer.get())) {
- bytes_used += frame_buffer->data.size() + frame_buffer->alpha_data.size();
+ bytes_used += frame_buffer->data.size();
}
- bytes_reserved +=
- frame_buffer->data.size() + frame_buffer->alpha_data.size();
+ bytes_reserved += frame_buffer->data.size();
}
memory_dump->AddScalar(base::trace_event::MemoryAllocatorDump::kNameSize,
diff --git a/media/base/frame_buffer_pool.h b/media/base/frame_buffer_pool.h
index ac839b8e8bfa00d2fea203be5248a56f04cecc71..2ccb01676b0e8e1e3ca1b3cb60f2883538f2f13c 100644
--- a/media/base/frame_buffer_pool.h
+++ b/media/base/frame_buffer_pool.h
@@ -48,11 +48,6 @@ class MEDIA_EXPORT FrameBufferPool
// Called when a frame buffer allocation is no longer needed.
void ReleaseFrameBuffer(void* fb_priv);
- // Allocates (or reuses) room for an alpha plane on a given frame buffer.
- // |fb_priv| must be a value previously returned by GetFrameBuffer().
- base::span<uint8_t> AllocateAlphaPlaneForFrameBuffer(size_t min_size,
- void* fb_priv);
-
// Generates a "no_longer_needed" closure that holds a reference to this pool;
// |fb_priv| must be a value previously returned by GetFrameBuffer(). The
// callback may be called on any thread.
diff --git a/media/base/frame_buffer_pool_unittest.cc b/media/base/frame_buffer_pool_unittest.cc
index 893e941f9f9b6d7eaff98b3e9ae4278861a2b0fd..8b50896e7544e34589614216373b566598b30ec6 100644
--- a/media/base/frame_buffer_pool_unittest.cc
+++ b/media/base/frame_buffer_pool_unittest.cc
@@ -32,12 +32,6 @@ TEST(FrameBufferPool, BasicFunctionality) {
EXPECT_NE(buf1.data(), buf2.data());
std::ranges::fill(buf2, 0);
- auto alpha = pool->AllocateAlphaPlaneForFrameBuffer(kBufferSize, priv1);
- ASSERT_FALSE(alpha.empty());
- EXPECT_NE(alpha.data(), buf1.data());
- EXPECT_NE(alpha.data(), buf2.data());
- std::ranges::fill(alpha, 0);
-
EXPECT_EQ(2u, pool->get_pool_size_for_testing());
// Frames are not released immediately, so this should still show two frames.
@@ -52,7 +46,6 @@ TEST(FrameBufferPool, BasicFunctionality) {
EXPECT_EQ(1u, pool->get_pool_size_for_testing());
std::ranges::fill(buf1, 0);
- std::ranges::fill(alpha, 0);
// This will release all memory since we're in the shutdown state.
std::move(frame_release_cb).Run();
@@ -132,13 +125,6 @@ TEST(FrameBufferPool, DoesClearAllocations) {
}
EXPECT_FALSE(nonzero);
- auto alpha_buf = pool->AllocateAlphaPlaneForFrameBuffer(kBufferSize, priv1);
- nonzero = false;
- for (size_t i = 0; i < kBufferSize; i++) {
- nonzero |= !!alpha_buf[i];
- }
- EXPECT_FALSE(nonzero);
-
pool->Shutdown();
}
diff --git a/media/filters/vpx_video_decoder.cc b/media/filters/vpx_video_decoder.cc
index ca1e45ed3cddcbf878f03c233f982b04425287f5..fe1b8b9b0686ed0b89737097087fca00cb1677c6 100644
--- a/media/filters/vpx_video_decoder.cc
+++ b/media/filters/vpx_video_decoder.cc
@@ -250,7 +250,21 @@ bool VpxVideoDecoder::ConfigureDecoder(const VideoDecoderConfig& config) {
DCHECK(!vpx_codec_alpha_);
vpx_codec_alpha_ = InitializeVpxContext(config);
- return !!vpx_codec_alpha_;
+ if (!vpx_codec_alpha_) {
+ return false;
+ }
+
+ if (config.codec() == VideoCodec::kVP9) {
+ if (vpx_codec_set_frame_buffer_functions(
+ vpx_codec_alpha_.get(), &GetVP9FrameBuffer, &ReleaseVP9FrameBuffer,
+ memory_pool_.get())) {
+ DLOG(ERROR) << "Failed to configure external buffers for alpha. "
+ << vpx_codec_error(vpx_codec_alpha_.get());
+ return false;
+ }
+ }
+
+ return true;
}
void VpxVideoDecoder::CloseDecoder() {
@@ -546,20 +560,13 @@ bool VpxVideoDecoder::CopyVpxImageToVideoFrame(
if (memory_pool_) {
DCHECK_EQ(VideoCodec::kVP9, config_.codec());
if (vpx_image_alpha) {
+ CHECK_GT(vpx_image_alpha->stride[VPX_PLANE_Y], 0);
size_t alpha_plane_size =
vpx_image_alpha->stride[VPX_PLANE_Y] * vpx_image_alpha->d_h;
- auto alpha_plane = memory_pool_->AllocateAlphaPlaneForFrameBuffer(
- alpha_plane_size, vpx_image->fb_priv);
- if (alpha_plane.empty()) {
- error_status_ = DecoderStatus::Codes::kOutOfMemory;
- // In case of OOM, abort copy.
- return false;
- }
- libyuv::CopyPlane(vpx_image_alpha->planes[VPX_PLANE_Y],
- vpx_image_alpha->stride[VPX_PLANE_Y],
- alpha_plane.data(),
- vpx_image_alpha->stride[VPX_PLANE_Y],
- vpx_image_alpha->d_w, vpx_image_alpha->d_h);
+ // SAFETY: libvpx guarantees that the Y plane has at least `stride * d_h`
+ // bytes available.
+ auto alpha_plane = UNSAFE_BUFFERS(base::span<uint8_t>(
+ vpx_image_alpha->planes[VPX_PLANE_Y], alpha_plane_size));
*video_frame = VideoFrame::WrapExternalYuvaData(
codec_format, coded_size, gfx::Rect(visible_size), natural_size,
vpx_image->stride[VPX_PLANE_Y], vpx_image->stride[VPX_PLANE_U],
@@ -575,8 +582,14 @@ bool VpxVideoDecoder::CopyVpxImageToVideoFrame(
if (!(*video_frame))
return false;
- video_frame->get()->AddDestructionObserver(
- memory_pool_->CreateFrameCallback(vpx_image->fb_priv));
+ (*video_frame)
+ ->AddDestructionObserver(
+ memory_pool_->CreateFrameCallback(vpx_image->fb_priv));
+ if (vpx_image_alpha) {
+ (*video_frame)
+ ->AddDestructionObserver(
+ memory_pool_->CreateFrameCallback(vpx_image_alpha->fb_priv));
+ }
return true;
}
diff --git a/media/filters/vpx_video_decoder.h b/media/filters/vpx_video_decoder.h
index f53da976ba4ba9dc39c6f03c99d5937b82650399..8f8f07e419b7d16c52f550edd97af6f235fbd2ac 100644
--- a/media/filters/vpx_video_decoder.h
+++ b/media/filters/vpx_video_decoder.h
@@ -102,8 +102,8 @@ class MEDIA_EXPORT VpxVideoDecoder : public OffloadableVideoDecoder {
std::unique_ptr<vpx_codec_ctx> vpx_codec_;
std::unique_ptr<vpx_codec_ctx> vpx_codec_alpha_;
- // |memory_pool_| is a single-threaded memory pool used for VP9 decoding
- // with no alpha. |frame_pool_| is used for all other cases.
+ // |memory_pool_| is a thread-safe memory pool used for zero-copy VP9 decoding
+ // (both with and without alpha). |frame_pool_| is used for VP8.
scoped_refptr<FrameBufferPool> memory_pool_;
VideoFramePool frame_pool_;
diff --git a/media/filters/vpx_video_decoder_unittest.cc b/media/filters/vpx_video_decoder_unittest.cc
index 8fba2b469656c7bd60b555bf6dea02b7b37ee701..bb32fa8e7d59f5a3136a48a2ef14dda08a95fff0 100644
--- a/media/filters/vpx_video_decoder_unittest.cc
+++ b/media/filters/vpx_video_decoder_unittest.cc
@@ -175,6 +175,28 @@ class VpxVideoDecoderTest : public testing::Test {
output_frames_.push_back(std::move(frame));
}
+ // Extracts the compressed video data from the AVPacket and also checks for
+ // side data containing an alpha channel. If found, it copies the alpha data
+ // into the DecoderBuffer's side data. This is necessary because FFmpeg
+ // demuxes alpha channel data as side data associated with the video packet.
+ static scoped_refptr<DecoderBuffer> CreateBufferWithAlphaFromPacket(
+ const AVPacket* packet) {
+ auto buffer = DecoderBuffer::CopyFrom(AVPacketData(*packet));
+ size_t side_data_size = 0;
+ uint8_t* side_data_ptr = av_packet_get_side_data(
+ packet, AV_PKT_DATA_MATROSKA_BLOCKADDITIONAL, &side_data_size);
+ if (side_data_size > 8) {
+ // SAFETY: The best we can do here is trust the size reported by ffmpeg.
+ auto side_data =
+ UNSAFE_BUFFERS(base::span(side_data_ptr, side_data_size));
+ if (base::U64FromBigEndian(side_data.first<8u>()) == 1) {
+ buffer->WritableSideData().alpha_data =
+ base::HeapArray<uint8_t>::CopiedFrom(side_data.subspan(8u));
+ }
+ }
+ return buffer;
+ }
+
MOCK_METHOD1(DecodeDone, void(DecoderStatus));
base::test::TaskEnvironment task_env_;
@@ -292,6 +314,68 @@ TEST_F(VpxVideoDecoderTest, SimpleFrameReuse) {
EXPECT_EQ(old_y_data, output_frames_.back()->data(VideoFrame::Plane::kY));
}
+TEST_F(VpxVideoDecoderTest, SimpleAlphaFrameReuse) {
+ VideoDecoderConfig config = TestVideoConfig::Normal(VideoCodec::kVP9);
+ config.Initialize(
+ config.codec(), config.profile(),
+ VideoDecoderConfig::AlphaMode::kHasAlpha, config.color_space_info(),
+ config.video_transformation(), config.coded_size(), config.visible_rect(),
+ config.natural_size(), config.extra_data(), config.encryption_scheme());
+ InitializeWithConfig(config);
+ scoped_refptr<DecoderBuffer> alpha_frame = ReadTestDataFile("bear-vp9a.webm");
+
+ // Read frames from the webm file.
+ InMemoryUrlProtocol protocol(*alpha_frame, false);
+ FFmpegGlue glue(&protocol);
+ ASSERT_TRUE(glue.OpenContext());
+
+ auto packet = ScopedAVPacket::Allocate();
+
+ // Decode first frame
+ ASSERT_GE(av_read_frame(glue.format_context(), packet.get()), 0);
+ auto buffer = CreateBufferWithAlphaFromPacket(packet.get());
+ Decode(buffer);
+ av_packet_unref(packet.get());
+
+ ASSERT_EQ(1u, output_frames_.size());
+ scoped_refptr<VideoFrame> frame = std::move(output_frames_.front());
+ EXPECT_EQ(PIXEL_FORMAT_I420A, frame->format());
+ const uint8_t* old_y_data = frame->data(VideoFrame::Plane::kY);
+ const uint8_t* old_a_data = frame->data(VideoFrame::Plane::kA);
+ output_frames_.pop_back();
+
+ // Clear frame reference to return the frame to the pool.
+ frame = nullptr;
+
+ // Decode second frame.
+ Decode(buffer);
+ const uint8_t* mid_y_data =
+ output_frames_.front()->data(VideoFrame::Plane::kY);
+ const uint8_t* mid_a_data =
+ output_frames_.front()->data(VideoFrame::Plane::kA);
+ output_frames_.clear();
+
+ // Issuing another decode should reuse buffers from the pool.
+ Decode(buffer);
+
+ ASSERT_EQ(1u, output_frames_.size());
+ const uint8_t* new_y_data =
+ output_frames_.back()->data(VideoFrame::Plane::kY);
+ const uint8_t* new_a_data =
+ output_frames_.back()->data(VideoFrame::Plane::kA);
+
+ // The pool is shared, so buffers might be reused in a different order (e.g. Y
+ // might get the buffer previously used for A). Because libvpx allocates the
+ // new frame before releasing the old reference frame, we need to check across
+ // all previously allocated buffers.
+ bool reused_y = new_y_data == old_y_data || new_y_data == old_a_data ||
+ new_y_data == mid_y_data || new_y_data == mid_a_data;
+ bool reused_a = new_a_data == old_y_data || new_a_data == old_a_data ||
+ new_a_data == mid_y_data || new_a_data == mid_a_data;
+ EXPECT_TRUE(reused_y);
+ EXPECT_TRUE(reused_a);
+}
+
TEST_F(VpxVideoDecoderTest, SimpleFormatChange) {
scoped_refptr<DecoderBuffer> large_frame =
ReadTestDataFile("vp9-I-frame-1280x720");
@@ -311,10 +395,41 @@ TEST_F(VpxVideoDecoderTest, FrameValidAfterPoolDestruction) {
// Write to the Y plane. The memory tools should detect a
// use-after-free if the storage was actually removed by pool destruction.
- UNSAFE_TODO(
- memset(output_frames_.front()->writable_data(VideoFrame::Plane::kY), 0xff,
- output_frames_.front()->rows(VideoFrame::Plane::kY) *
- output_frames_.front()->stride(VideoFrame::Plane::kY)));
+ std::ranges::fill(
+ output_frames_.front()->writable_span(VideoFrame::Plane::kY), 0xff);
+}
+
+TEST_F(VpxVideoDecoderTest, AlphaFrameValidAfterPoolDestruction) {
+ VideoDecoderConfig config = TestVideoConfig::Normal(VideoCodec::kVP9);
+ config.Initialize(
+ config.codec(), config.profile(),
+ VideoDecoderConfig::AlphaMode::kHasAlpha, config.color_space_info(),
+ config.video_transformation(), config.coded_size(), config.visible_rect(),
+ config.natural_size(), config.extra_data(), config.encryption_scheme());
+ InitializeWithConfig(config);
+ scoped_refptr<DecoderBuffer> alpha_frame = ReadTestDataFile("bear-vp9a.webm");
+
+ InMemoryUrlProtocol protocol(*alpha_frame, false);
+ FFmpegGlue glue(&protocol);
+ ASSERT_TRUE(glue.OpenContext());
+
+ auto packet = ScopedAVPacket::Allocate();
+ ASSERT_GE(av_read_frame(glue.format_context(), packet.get()), 0);
+ auto buffer = CreateBufferWithAlphaFromPacket(packet.get());
+ Decode(std::move(buffer));
+ av_packet_unref(packet.get());
+
+ ASSERT_EQ(1u, output_frames_.size());
+ EXPECT_EQ(PIXEL_FORMAT_I420A, output_frames_.front()->format());
+
+ Destroy();
+
+ // Write to the Y and A planes. The memory tools should detect a
+ // use-after-free if the storage was actually removed by pool destruction.
+ std::ranges::fill(
+ output_frames_.front()->writable_span(VideoFrame::Plane::kY), 0xff);
+ std::ranges::fill(
+ output_frames_.front()->writable_span(VideoFrame::Plane::kA), 0xff);
}
// The test stream uses profile 2, which needs high bit depth support in libvpx.
@@ -362,8 +477,7 @@ TEST_F(VpxVideoDecoderTest, MemoryPoolAllowsMultipleDisplay) {
Destroy();
// ASAN will be very unhappy with this line if the above is incorrect.
- UNSAFE_TODO(memset(last_frame->writable_data(VideoFrame::Plane::kY), 0,
- last_frame->row_bytes(VideoFrame::Plane::kY)));
+ std::ranges::fill(last_frame->writable_span(VideoFrame::Plane::kY), 0);
}
#endif // !defined(LIBVPX_NO_HIGH_BIT_DEPTH) && !defined(ARCH_CPU_ARM_FAMILY)

View File

@@ -8,10 +8,10 @@ electron objects that extend gin::Wrappable and gets
allocated on the cpp heap
diff --git a/gin/public/wrappable_pointer_tags.h b/gin/public/wrappable_pointer_tags.h
-index fee622ebde42211de6f702b754cfa38595df5a1c..7bc00b2941cc4aaf0ae02a0db8722f74a0c228d9 100644
+index fee622ebde42211de6f702b754cfa38595df5a1c..0f649fa562cef261b127dca7769dd5687a916342 100644
--- a/gin/public/wrappable_pointer_tags.h
+++ b/gin/public/wrappable_pointer_tags.h
-@@ -77,7 +77,22 @@ enum WrappablePointerTag : uint16_t {
+@@ -77,7 +77,23 @@ enum WrappablePointerTag : uint16_t {
kWebAXObjectProxy, // content::WebAXObjectProxy
kWrappedExceptionHandler, // extensions::WrappedExceptionHandler
kIndigoContext, // indigo::IndigoContext
@@ -20,6 +20,7 @@ index fee622ebde42211de6f702b754cfa38595df5a1c..7bc00b2941cc4aaf0ae02a0db8722f74
+ kElectronDataPipeHolder, // electron::api::DataPipeHolder
+ kElectronDebugger, // electron::api::Debugger
+ kElectronEvent, // gin_helper::internal::Event
+ kElectronExtensions, // electron::api::Extensions
+ kElectronMenu, // electron::api::Menu
+ kElectronNetLog, // electron::api::NetLog
+ kElectronPowerMonitor, // electron::api::PowerMonitor

View File

@@ -6,79 +6,6 @@ Subject: chore: provide IsWebContentsCreationOverridden with full params
Pending upstream patch, this gives us fuller access to the window.open params
so that we will be able to decide whether to cancel it or not.
diff --git a/chrome/browser/media/offscreen_tab.cc b/chrome/browser/media/offscreen_tab.cc
index 047f1258f951f763df2ca0ba355b19d19337826b..9fc7114312212fbe38ddec740b4aebbcd72cb0f8 100644
--- a/chrome/browser/media/offscreen_tab.cc
+++ b/chrome/browser/media/offscreen_tab.cc
@@ -285,8 +285,7 @@ bool OffscreenTab::IsWebContentsCreationOverridden(
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) {
+ const content::mojom::CreateNewWindowParams& params) {
// Disallow creating separate WebContentses. The WebContents implementation
// uses this to spawn new windows/tabs, which is also not allowed for
// offscreen tabs.
diff --git a/chrome/browser/media/offscreen_tab.h b/chrome/browser/media/offscreen_tab.h
index 231e3595f218aeebe28d0b13ce6182e7a4d6f4e1..609bd205d1cd0404cab3471765bef8b0e053d061 100644
--- a/chrome/browser/media/offscreen_tab.h
+++ b/chrome/browser/media/offscreen_tab.h
@@ -108,8 +108,7 @@ class OffscreenTab final : public ProfileObserver,
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) final;
+ const content::mojom::CreateNewWindowParams& params) override;
void EnterFullscreenModeForTab(
content::RenderFrameHost* requesting_frame,
const blink::mojom::FullscreenOptions& options) final;
diff --git a/chrome/browser/ui/ash/keyboard/chrome_keyboard_web_contents.cc b/chrome/browser/ui/ash/keyboard/chrome_keyboard_web_contents.cc
index 33edb0a90d886dd44956046e03fcc182a0f6bc8e..5b5edd3da3d9f7a248ea3affd195c36bfd64a38e 100644
--- a/chrome/browser/ui/ash/keyboard/chrome_keyboard_web_contents.cc
+++ b/chrome/browser/ui/ash/keyboard/chrome_keyboard_web_contents.cc
@@ -80,8 +80,7 @@ class ChromeKeyboardContentsDelegate : public content::WebContentsDelegate,
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) override {
+ const content::mojom::CreateNewWindowParams& params) override {
return true;
}
diff --git a/chrome/browser/ui/ash/web_view/ash_web_view_impl.cc b/chrome/browser/ui/ash/web_view/ash_web_view_impl.cc
index e4e42249c476ccae58f0ba42e7dbae299f1e36bd..670c30ed4b7f1a07eb4b8abaa95e5a8a9d94bd8d 100644
--- a/chrome/browser/ui/ash/web_view/ash_web_view_impl.cc
+++ b/chrome/browser/ui/ash/web_view/ash_web_view_impl.cc
@@ -121,10 +121,9 @@ bool AshWebViewImpl::IsWebContentsCreationOverridden(
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) {
+ const content::mojom::CreateNewWindowParams& params) {
if (params_.suppress_navigation) {
- NotifyDidSuppressNavigation(target_url,
+ NotifyDidSuppressNavigation(params.target_url,
WindowOpenDisposition::NEW_FOREGROUND_TAB,
/*from_user_gesture=*/true);
return true;
diff --git a/chrome/browser/ui/ash/web_view/ash_web_view_impl.h b/chrome/browser/ui/ash/web_view/ash_web_view_impl.h
index 39fa45f0a0f9076bd7ac0be6f455dd540a276512..3d0381d463eed73470b28085830f2a23751659a7 100644
--- a/chrome/browser/ui/ash/web_view/ash_web_view_impl.h
+++ b/chrome/browser/ui/ash/web_view/ash_web_view_impl.h
@@ -60,8 +60,7 @@ class AshWebViewImpl : public ash::AshWebView,
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) override;
+ const content::mojom::CreateNewWindowParams& params) override;
content::WebContents* OpenURLFromTab(
content::WebContents* source,
const content::OpenURLParams& params,
diff --git a/chrome/browser/ui/browser.cc b/chrome/browser/ui/browser.cc
index 8f8852b2af1acfa4ec985fd1c8b50563b991b12a..c2f2903545b191c5ab13462bf330efce37d7d08c 100644
--- a/chrome/browser/ui/browser.cc
@@ -116,112 +43,6 @@ index acdb28d61badaf549c47e107f4795e1e2adc37c9..b6aca0bf802f2146d09d2a872ff9e091
content::WebContents* CreateCustomWebContents(
content::RenderFrameHost* opener,
content::SiteInstance* source_site_instance,
diff --git a/chrome/browser/ui/media_router/presentation_receiver_window_controller.cc b/chrome/browser/ui/media_router/presentation_receiver_window_controller.cc
index a05510eadf5c9ff24bb7999aa76229946319280f..a80ecc46f8a6b84de83d608257d45ae61ccc2170 100644
--- a/chrome/browser/ui/media_router/presentation_receiver_window_controller.cc
+++ b/chrome/browser/ui/media_router/presentation_receiver_window_controller.cc
@@ -206,8 +206,7 @@ bool PresentationReceiverWindowController::IsWebContentsCreationOverridden(
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) {
+ const content::mojom::CreateNewWindowParams& params) {
// Disallow creating separate WebContentses. The WebContents implementation
// uses this to spawn new windows/tabs, which is also not allowed for
// local presentations.
diff --git a/chrome/browser/ui/media_router/presentation_receiver_window_controller.h b/chrome/browser/ui/media_router/presentation_receiver_window_controller.h
index 3fc06be01f20e8cd314d95d73a3f58c2f0742fe9..c07910ae59a185442f37ea6e7b96fdf3a33aba82 100644
--- a/chrome/browser/ui/media_router/presentation_receiver_window_controller.h
+++ b/chrome/browser/ui/media_router/presentation_receiver_window_controller.h
@@ -106,8 +106,7 @@ class PresentationReceiverWindowController final
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) override;
+ const content::mojom::CreateNewWindowParams& params) override;
// The profile used for the presentation.
raw_ptr<Profile, DanglingUntriaged> otr_profile_;
diff --git a/chrome/browser/ui/views/hats/hats_next_web_dialog.cc b/chrome/browser/ui/views/hats/hats_next_web_dialog.cc
index 783d05c39ecbe5e556af2e5fd8b4f30fb5aa1196..e1895bcf9d3c6818aa64b1479774a143dae74d53 100644
--- a/chrome/browser/ui/views/hats/hats_next_web_dialog.cc
+++ b/chrome/browser/ui/views/hats/hats_next_web_dialog.cc
@@ -104,8 +104,7 @@ class HatsNextWebDialog::HatsWebView : public views::WebView {
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) override {
+ const content::mojom::CreateNewWindowParams& params) override {
return true;
}
content::WebContents* CreateCustomWebContents(
diff --git a/components/embedder_support/android/delegate/web_contents_delegate_android.cc b/components/embedder_support/android/delegate/web_contents_delegate_android.cc
index a82c39208a2709d9e292dac5c89bd2c9bf529a98..d578299501e15815ac615528610889d270aaf6ad 100644
--- a/components/embedder_support/android/delegate/web_contents_delegate_android.cc
+++ b/components/embedder_support/android/delegate/web_contents_delegate_android.cc
@@ -214,15 +214,14 @@ bool WebContentsDelegateAndroid::IsWebContentsCreationOverridden(
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) {
+ const content::mojom::CreateNewWindowParams& params) {
JNIEnv* env = AttachCurrentThread();
ScopedJavaLocalRef<jobject> obj = GetJavaDelegate(env);
if (obj.is_null()) {
return false;
}
ScopedJavaLocalRef<jobject> java_gurl =
- url::GURLAndroid::FromNativeGURL(env, target_url);
+ url::GURLAndroid::FromNativeGURL(env, params.target_url.spec());
return !Java_WebContentsDelegateAndroid_shouldCreateWebContents(env, obj,
java_gurl);
}
diff --git a/components/embedder_support/android/delegate/web_contents_delegate_android.h b/components/embedder_support/android/delegate/web_contents_delegate_android.h
index 5754a774852d53a99d34568d0b98aa19171add2a..a75d85c97a75fffa5dba6ac427d7608e345c02ef 100644
--- a/components/embedder_support/android/delegate/web_contents_delegate_android.h
+++ b/components/embedder_support/android/delegate/web_contents_delegate_android.h
@@ -82,8 +82,7 @@ class WebContentsDelegateAndroid : public content::WebContentsDelegate {
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) override;
+ const content::mojom::CreateNewWindowParams& params) override;
void CloseContents(content::WebContents* source) override;
bool DidAddMessageToConsole(content::WebContents* source,
blink::mojom::ConsoleMessageLevel log_level,
diff --git a/components/offline_pages/content/background_loader/background_loader_contents.cc b/components/offline_pages/content/background_loader/background_loader_contents.cc
index 12b38ddee62e3af915083830703a4c2e8e249f00..bf4e8dcbdecd46712c48107cfee554b7bb1e0277 100644
--- a/components/offline_pages/content/background_loader/background_loader_contents.cc
+++ b/components/offline_pages/content/background_loader/background_loader_contents.cc
@@ -85,8 +85,7 @@ bool BackgroundLoaderContents::IsWebContentsCreationOverridden(
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) {
+ const content::mojom::CreateNewWindowParams& params) {
// Background pages should not create other webcontents/tabs.
return true;
}
diff --git a/components/offline_pages/content/background_loader/background_loader_contents.h b/components/offline_pages/content/background_loader/background_loader_contents.h
index b969f1d97b7e3396119b579cfbe61e19ff7d2dd4..b8d6169652da28266a514938b45b39c58df53573 100644
--- a/components/offline_pages/content/background_loader/background_loader_contents.h
+++ b/components/offline_pages/content/background_loader/background_loader_contents.h
@@ -66,8 +66,7 @@ class BackgroundLoaderContents : public content::WebContentsDelegate {
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) override;
+ const content::mojom::CreateNewWindowParams& params) override;
content::WebContents* AddNewContents(
content::WebContents* source,
diff --git a/content/browser/web_contents/web_contents_impl.cc b/content/browser/web_contents/web_contents_impl.cc
index d43e75c20aca09080f4223d339c88381f030c504..8cd59445bae73ff0193e4512d7c36740cbad847f 100644
--- a/content/browser/web_contents/web_contents_impl.cc
@@ -356,48 +177,6 @@ index f459dddeb3f8f3a33ffead0e96fba791d18a0108..f7a229b186774ca3a01f2d747eab139a
content::WebContents* CreateCustomWebContents(
content::RenderFrameHost* opener,
content::SiteInstance* source_site_instance,
diff --git a/fuchsia_web/webengine/browser/frame_impl.cc b/fuchsia_web/webengine/browser/frame_impl.cc
index 9c1fb0b2ed4f013ef6108a9844b22f6bfe697621..ef4991adc766d53b03d280395630b83ced38c2e8 100644
--- a/fuchsia_web/webengine/browser/frame_impl.cc
+++ b/fuchsia_web/webengine/browser/frame_impl.cc
@@ -585,8 +585,7 @@ bool FrameImpl::IsWebContentsCreationOverridden(
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) {
+ const content::mojom::CreateNewWindowParams& params) {
// Specify a generous upper bound for unacknowledged popup windows, so that we
// can catch bad client behavior while not interfering with normal operation.
constexpr size_t kMaxPendingWebContentsCount = 10;
diff --git a/fuchsia_web/webengine/browser/frame_impl.h b/fuchsia_web/webengine/browser/frame_impl.h
index 756d4192271d6a65cfe8e1511737c565b543cb1f..5688f6f745056565c3c01947f741c4d13e27b6ae 100644
--- a/fuchsia_web/webengine/browser/frame_impl.h
+++ b/fuchsia_web/webengine/browser/frame_impl.h
@@ -308,8 +308,7 @@ class WEB_ENGINE_EXPORT FrameImpl : public fuchsia::web::Frame,
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) override;
+ const content::mojom::CreateNewWindowParams& params) override;
void WebContentsCreated(content::WebContents* source_contents,
int opener_render_process_id,
int opener_render_frame_id,
diff --git a/headless/lib/browser/headless_web_contents_impl.cc b/headless/lib/browser/headless_web_contents_impl.cc
index ae616fa9c352413e23fb509b3e12e0e4fab5a094..0efa65f7d4346cfe78d2f27ba55a0526202315ff 100644
--- a/headless/lib/browser/headless_web_contents_impl.cc
+++ b/headless/lib/browser/headless_web_contents_impl.cc
@@ -232,8 +232,7 @@ class HeadlessWebContentsImpl::Delegate : public content::WebContentsDelegate {
content::SiteInstance* source_site_instance,
content::mojom::WindowContainerType window_container_type,
const GURL& opener_url,
- const std::string& frame_name,
- const GURL& target_url) override {
+ const content::mojom::CreateNewWindowParams& params) override {
return headless_web_contents_->browser_context()
->options()
->block_new_web_contents();
diff --git a/ui/views/controls/webview/web_dialog_view.cc b/ui/views/controls/webview/web_dialog_view.cc
index a7d370220136f2c31afd70644ada26f1768b2e0d..e08dd61b20c68398b0532f5ae74e0ffd5968c19b 100644
--- a/ui/views/controls/webview/web_dialog_view.cc


@@ -0,0 +1,52 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Sam Attard <sattard@anthropic.com>
Date: Sun, 22 Mar 2026 19:21:45 +0000
Subject: fix: use fresh LazyNow for OnEndWorkItemImpl after DidRunTask
DidRunTask can cache the LazyNow early (via RecordTaskEnd in
BrowserUIThreadScheduler::OnTaskCompleted) before running task observers.
If a task observer's DidProcessTask triggers nested pump activity (nested
RunLoop, sync IPC, etc.), TimeKeeper's last_phase_end_ advances past the
cached value. The subsequent OnEndWorkItemImpl then computes a negative
delta and hits DCHECK(!delta.is_negative()) in RecordTimeInPhase.
Using a fresh LazyNow for OnEndWorkItemImpl samples the time after all
observers have run, which matches the existing comment's intent that
microtasks are extensions of the RunTask and the work item ends after them.
This is upstreamable: the bug exists whenever any TaskObserver::DidProcessTask
triggers nested pump activity, which is not forbidden by the contract.
diff --git a/base/task/sequence_manager/thread_controller_with_message_pump_impl.cc b/base/task/sequence_manager/thread_controller_with_message_pump_impl.cc
index bb09c99ea0b37a139440d0fe98c7f2f5e9c147e0..d27c34f8090ff54d20d8339c0ad56d37d6d61ab2 100644
--- a/base/task/sequence_manager/thread_controller_with_message_pump_impl.cc
+++ b/base/task/sequence_manager/thread_controller_with_message_pump_impl.cc
@@ -481,15 +481,22 @@ std::optional<WakeUp> ThreadControllerWithMessagePumpImpl::DoWorkImpl(
// `PendingTask` reference dangling.
selected_task.reset();
- LazyNow lazy_now_after_run_task(time_source_);
- main_thread_only().task_source->DidRunTask(lazy_now_after_run_task);
+ {
+ LazyNow lazy_now_did_run_task(time_source_);
+ main_thread_only().task_source->DidRunTask(lazy_now_did_run_task);
+ }
// End the work item scope after DidRunTask() as it can process microtasks
- // (which are extensions of the RunTask).
+ // (which are extensions of the RunTask). Use a fresh LazyNow here because
+ // DidRunTask may cache the LazyNow (via RecordTaskEnd) before running task
+ // observers, and those observers may trigger nested pump activity that
+ // advances TimeKeeper's last_phase_end_ past the cached value, resulting
+ // in a negative delta in RecordTimeInPhase.
+ LazyNow lazy_now_after_run_task(time_source_);
OnEndWorkItemImpl(lazy_now_after_run_task, run_depth);
- // If DidRunTask() read the clock (lazy_now_after_run_task.has_value()) or
- // if |batch_duration| > 0, store the clock value in `recent_time` so it can
- // be reused by SelectNextTask() at the next loop iteration.
+ // If OnEndWorkItemImpl() read the clock (lazy_now_after_run_task.has_value())
+ // or if |batch_duration| > 0, store the clock value in `recent_time` so it
+ // can be reused by SelectNextTask() at the next loop iteration.
if (lazy_now_after_run_task.has_value() || !batch_duration.is_zero()) {
recent_time = lazy_now_after_run_task.Now();
} else {


@@ -11,5 +11,7 @@
{ "patch_dir": "src/electron/patches/ReactiveObjC", "repo": "src/third_party/squirrel.mac/vendor/ReactiveObjC" },
{ "patch_dir": "src/electron/patches/webrtc", "repo": "src/third_party/webrtc" },
{ "patch_dir": "src/electron/patches/reclient-configs", "repo": "src/third_party/engflow-reclient-configs" },
-{ "patch_dir": "src/electron/patches/sqlite", "repo": "src/third_party/sqlite/src" }
+{ "patch_dir": "src/electron/patches/sqlite", "repo": "src/third_party/sqlite/src" },
+{ "patch_dir": "src/electron/patches/dawn", "repo": "src/third_party/dawn" },
+{ "patch_dir": "src/electron/patches/pdfium", "repo": "src/third_party/pdfium" }
]

patches/dawn/.patches (new file)

@@ -0,0 +1 @@
+cherry-pick-7c11e1188705.patch


@@ -0,0 +1,118 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Lokbondo Kung <lokokung@google.com>
Date: Tue, 7 Apr 2026 19:22:22 -0700
Subject: [dawn][native] Check for waiting for idle before updating serials.
- In the ExecutionQueue, we need to make sure to check whether a thread
is waiting for idle prior to updating the completed serial. Otherwise,
as the bug below points out, it's possible for the thread that's
waiting for idle (which just waits for the completed serial to reach
a certain value) to complete and destroy the Queue before the rest of
the UpdateSerial call completes.
Bug: 497969820
Change-Id: I7b9dba50f4ccb1aa8dfced122801e17db3ee4e0e
Reviewed-on: https://dawn-review.googlesource.com/c/dawn/+/300595
Reviewed-by: Kai Ninomiya <kainino@chromium.org>
Commit-Queue: Loko Kung <lokokung@google.com>
diff --git a/src/dawn/native/ExecutionQueue.cpp b/src/dawn/native/ExecutionQueue.cpp
index 8be4e707e92ce1b3d94e4bb8fec3e332e8f17fb3..2bb5a6af253f17178332da4088fefbf922463b08 100644
--- a/src/dawn/native/ExecutionQueue.cpp
+++ b/src/dawn/native/ExecutionQueue.cpp
@@ -253,20 +253,21 @@ void ExecutionQueueBase::UpdateCompletedSerialTo(QueuePriority priority,
}
void ExecutionQueueBase::UpdateCompletedSerialToInternal(QueuePriority priority,
- ExecutionSerial completedSerial,
- bool forceTasks) {
+ ExecutionSerial newCompletedSerial,
+ bool forceTasksForDestroy) {
QueuePriorityArray<std::vector<Ref<SerialProcessor>>>* processors = nullptr;
std::vector<Task> tasks;
- // We update the completed serial as soon as possible before waiting for callback rights so
- // that we almost always process as many callbacks as possible.
- ExecutionSerial serial = mCompletedSerial.Use([&](auto old) {
- *old = std::max(*old, static_cast<uint64_t>(completedSerial));
- return ExecutionSerial(*old);
- });
-
- mState.Use<NotifyType::None>([&](auto state) {
- if (state->mWaitingForIdle && !forceTasks) {
+ // Note that we need to determine whether we are waiting for idle before updating the completed
+ // serial because some backends WaitForIdleForDestructionImpl may be implemented via a call to
+ // WaitForQueueSerial which (by default without overrides), waits on the completed serial value.
+ // If we updated the serial value before checking the other pieces of state, a thread destroying
+ // the Queue calling WaitForIdleForDestruction, could end up being woken up and destroying the
+ // Queue device before the rest of this function completes. By checking the state first before
+ // updating the serial, however, we avoid waking up the thread that's waiting for idle until we
+ // have completed using the queue.
+ bool waitingForIdle = mState.Use<NotifyType::None>([&](auto state) {
+ if (state->mWaitingForIdle && !forceTasksForDestroy) {
// If we are waiting for idle, then the callbacks will be fired there. It is currently
// necessary to avoid calling the callbacks in this function and doing it in the
// |WaitForIdleForDestruction| call because |WaitForIdleForDestruction| is called while
@@ -274,8 +275,11 @@ void ExecutionQueueBase::UpdateCompletedSerialToInternal(QueuePriority priority,
// device lock. As a result, if the main thread is waiting for idle, and another thread
// is trying to update the completed serial and call callbacks, it could deadlock. Once
// we update |WaitForIdleForDestruction| to release the device lock on the wait, we may
- // be able to simplify the code here.
- return;
+ // be able to simplify the code here. Note that skipping this when
+ // |forceTasksForDestroy| is currently ok because that branch is only called when we are
+ // also holding the device lock, either via a Destroy or via an error that is being
+ // handled.
+ return true;
}
// Wait until we can exclusively call callbacks.
@@ -284,16 +288,22 @@ void ExecutionQueueBase::UpdateCompletedSerialToInternal(QueuePriority priority,
// Call all callbacks that for the given priority and anything of higher priority as well.
processors = &state->mWaitingProcessors;
for (QueuePriority p = QueuePriority::Highest; p >= priority; p -= 1) {
- PopWaitingTasksInto(serial, state->mWaitingTasks[p], tasks);
+ PopWaitingTasksInto(newCompletedSerial, state->mWaitingTasks[p], tasks);
}
state->mCallingCallbacks = true;
+ return false;
+ });
+
+ // Update the serial now that we know whether we are waiting for idle.
+ mCompletedSerial.Use([&](auto completedSerial) {
+ *completedSerial = std::max(*completedSerial, static_cast<uint64_t>(newCompletedSerial));
});
// Always call the processors before processing individual tasks.
if (processors) {
for (QueuePriority p = QueuePriority::Highest; p >= priority; p -= 1) {
for (auto& processor : (*processors)[p]) {
- processor->UpdateCompletedSerialTo(serial);
+ processor->UpdateCompletedSerialTo(newCompletedSerial);
}
}
}
@@ -304,7 +314,9 @@ void ExecutionQueueBase::UpdateCompletedSerialToInternal(QueuePriority priority,
task();
}
- mState->mCallingCallbacks = false;
+ if (!waitingForIdle) {
+ mState->mCallingCallbacks = false;
+ }
}
MaybeError ExecutionQueueBase::EnsureCommandsFlushed(ExecutionSerial serial) {
diff --git a/src/dawn/native/ExecutionQueue.h b/src/dawn/native/ExecutionQueue.h
index 9140f9eb08458fd7163dfa724e5452f800c7dc6f..c0bfd75e2d9e6701e7f61a09c3d10b1ef5c3c200 100644
--- a/src/dawn/native/ExecutionQueue.h
+++ b/src/dawn/native/ExecutionQueue.h
@@ -183,7 +183,7 @@ class ExecutionQueueBase : public ApiObjectBase {
void UpdateCompletedSerialToInternal(QueuePriority priority,
ExecutionSerial completedSerial,
- bool forceTasks = false);
+ bool forceTasksForDestroy = false);
// |mCompletedSerial| tracks the last completed command serial that the fence has returned.
// |mLastSubmittedSerial| tracks the last submitted command serial.


@@ -1,2 +1,3 @@
chore_expose_ui_to_allow_electron_to_set_dock_side.patch
feat_allow_enabling_extension_panels_on_custom_protocols.patch
+fix_context_selector_not_showing_execution_contexts.patch


@@ -0,0 +1,25 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Fedor Indutny <indutny@signal.org>
Date: Tue, 14 Apr 2026 19:33:23 -0700
Subject: Fix context selector not showing execution contexts
When an execution context's origin is `file://`, `url.domain()` is an
empty string and the item's subtitle becomes falsy. Since we predicated
rendering of an item on the presence of both title and subtitle, the items
were not rendered while still taking space in the dropdown.
Pending CL: https://chromium-review.googlesource.com/c/devtools/devtools-frontend/+/7761316
diff --git a/front_end/panels/console/ConsoleContextSelector.ts b/front_end/panels/console/ConsoleContextSelector.ts
index f933dbd6dbdfadd2ea8b78e473d9506350825037..d8aa3f9811f1857a9b30231ed8ff59e709e3383c 100644
--- a/front_end/panels/console/ConsoleContextSelector.ts
+++ b/front_end/panels/console/ConsoleContextSelector.ts
@@ -303,7 +303,7 @@ interface ViewInput {
type View = (input: ViewInput, output: undefined, target: HTMLElement) => void;
const DEFAULT_VIEW: View = (input, _output, target): void => {
- if (!input.title || !input.subtitle) {
+ if (!input.title) {
render(nothing, target);
return;
}


@@ -44,5 +44,4 @@ src_refactor_module_wrap_cc_to_update_fixedarray_get_params.patch
src_refactor_wasmstreaming_finish_to_accept_a_callback.patch
src_stop_using_v8_propertycallbackinfo_t_this.patch
build_restore_macos_deployment_target_to_12_0.patch
fix_generate_config_gypi_needs_to_generate_valid_json.patch
fix_add_externalpointertypetag_to_v8_external_api_calls.patch


@@ -6,7 +6,7 @@ Subject: Delete deprecated fields on v8::Isolate
https://chromium-review.googlesource.com/c/v8/v8/+/7081397
diff --git a/src/api/environment.cc b/src/api/environment.cc
-index 2111ee63a6ace438c1a143c90a807ed9fc2bcc9d..ce6426a1bf2dadb1a642874a05718724ef0f3d7c 100644
+index 6873a9f203ccace7d5c501b62bed56732332d060..7ba7943c666d9cf3de47bf9a18b8e6e0e1196886 100644
--- a/src/api/environment.cc
+++ b/src/api/environment.cc
@@ -218,8 +218,6 @@ void SetIsolateCreateParamsForNode(Isolate::CreateParams* params) {


@@ -44,7 +44,7 @@ index 37d83e41b618a07aca98118260abe9618f11256d..26d5c1bd3c8191fce1d22b969996b6bf
template <typename T>
diff --git a/src/base_object.cc b/src/base_object.cc
-index 404e2aa8c88d0cc0e6717c01e0df68899c64cc32..16462f305a2ac6b6c3d7b85024f2e52648c4300c 100644
+index 8a14a8c32626f98c2afa402786e1b6b2fbbb0988..946e60ea2d0ffe984328185a0e7762603be8e0dd 100644
--- a/src/base_object.cc
+++ b/src/base_object.cc
@@ -45,7 +45,8 @@ BaseObject::~BaseObject() {
@@ -72,7 +72,7 @@ index 74bbb9fb83246a90bc425e259150f0868020ac9e..a4b3a1c0907c9d50baf6c8cd473cb4c7
inline Environment* Environment::GetCurrent(
diff --git a/src/histogram.cc b/src/histogram.cc
-index 836a51b0e5aa4b1910604537c8b380038c27a7db..c4634e42fd2e5a27b0139a9b1716bc04875be469 100644
+index 2bcfcbdd9547db136c5992335a76dd3190886d6d..c34a83c9ccc83ec2973228a32cf0a51d29b4d681 100644
--- a/src/histogram.cc
+++ b/src/histogram.cc
@@ -136,7 +136,8 @@ HistogramBase::HistogramBase(
@@ -116,7 +116,7 @@ index 836a51b0e5aa4b1910604537c8b380038c27a7db..c4634e42fd2e5a27b0139a9b1716bc04
std::unique_ptr<worker::TransferData>
diff --git a/src/js_udp_wrap.cc b/src/js_udp_wrap.cc
-index 51e4f8c45ffd38fcf925ab8d283b3b88f2a35832..0c30c8b4609e4870c0ccfc5e9e465248c20763b8 100644
+index 6b35871d92ceb1b1eead21e328088cf87a10fa6e..57c96645fa0fdb7ed77c0a36c877b2628b7f0571 100644
--- a/src/js_udp_wrap.cc
+++ b/src/js_udp_wrap.cc
@@ -55,8 +55,9 @@ JSUDPWrap::JSUDPWrap(Environment* env, Local<Object> obj)
@@ -207,10 +207,10 @@ index 0af487a9abc9ee1783367ac86b0016ab89e02006..c01cef477aaba52f9894943acede7fb2
inline Realm* Realm::GetCurrent(
diff --git a/src/node_snapshotable.cc b/src/node_snapshotable.cc
-index e34d24d51d5c090b560d06f727043f20924e6f46..07615933c858a17515836a29f7e27ace4b81e6ff 100644
+index c3e995adf437413fe956e45c2ccc704d8737de59..8561933060ac30e3559d7ce2ac633d25f1e4d6ec 100644
--- a/src/node_snapshotable.cc
+++ b/src/node_snapshotable.cc
-@@ -1400,7 +1400,8 @@ StartupData SerializeNodeContextInternalFields(Local<Object> holder,
+@@ -1461,7 +1461,8 @@ StartupData SerializeNodeContextInternalFields(Local<Object> holder,
// For the moment we do not set any internal fields in ArrayBuffer
// or ArrayBufferViews, so just return nullptr.
if (holder->IsArrayBuffer() || holder->IsArrayBufferView()) {
@@ -220,7 +220,7 @@ index e34d24d51d5c090b560d06f727043f20924e6f46..07615933c858a17515836a29f7e27ace
return StartupData{nullptr, 0};
}
-@@ -1420,7 +1421,8 @@ StartupData SerializeNodeContextInternalFields(Local<Object> holder,
+@@ -1481,7 +1482,8 @@ StartupData SerializeNodeContextInternalFields(Local<Object> holder,
*holder);
BaseObject* object_ptr = static_cast<BaseObject*>(
@@ -296,7 +296,7 @@ index 29a4c29f3d3822394d23c453899cdd6aae280f3f..2a953d6390d5e4e251e54c1e847d4e5e
} // namespace node
diff --git a/src/udp_wrap.cc b/src/udp_wrap.cc
index 150ef0f0400bed9df4f4b1a4c20ec4045ef7a5f6..2ca2ac177c6b5edc3b40712a40ff4a36e96904dc 100644
index 6f3e68b79f1c7089b8c17a6ed5cd33eee4078b02..e09cd7a553464060619ecf4ac51e028382e2e148 100644
--- a/src/udp_wrap.cc
+++ b/src/udp_wrap.cc
@@ -126,8 +126,8 @@ void UDPWrapBase::set_listener(UDPListener* listener) {


@@ -6,7 +6,7 @@ Subject: Remove deprecated `GetIsolate`
https://chromium-review.googlesource.com/c/v8/v8/+/6905244
diff --git a/src/api/environment.cc b/src/api/environment.cc
-index 8974bac7dca43294cc5cc4570f8e2e78f42aefaa..2111ee63a6ace438c1a143c90a807ed9fc2bcc9d 100644
+index ec1496467f5071a810a3d7a76d80f3d12a8582dc..6873a9f203ccace7d5c501b62bed56732332d060 100644
--- a/src/api/environment.cc
+++ b/src/api/environment.cc
@@ -795,7 +795,7 @@ std::unique_ptr<MultiIsolatePlatform> MultiIsolatePlatform::Create(
@@ -85,7 +85,7 @@ index cc60ddddb037e0279615bbe24821eb20fd8da677..37d83e41b618a07aca98118260abe961
return handle;
diff --git a/src/crypto/crypto_context.cc b/src/crypto/crypto_context.cc
-index 2e3f31e1765024373c3fc2acd33fc3bfb352a906..ca62d3001bf51193d78caac0cccd93c188a8410c 100644
+index cb586936a904e7b9a017732e993a35ef1115ff9a..fd2dfa9fcf444fe705a2d42cd0963531cea9a74c 100644
--- a/src/crypto/crypto_context.cc
+++ b/src/crypto/crypto_context.cc
@@ -1045,7 +1045,7 @@ bool ArrayOfStringsToX509s(Local<Context> context,
@@ -98,21 +98,21 @@ index 2e3f31e1765024373c3fc2acd33fc3bfb352a906..ca62d3001bf51193d78caac0cccd93c1
uint32_t array_length = cert_array->Length();
diff --git a/src/crypto/crypto_x509.cc b/src/crypto/crypto_x509.cc
-index 4c5427596d1c90d3a413cdd9ff4f1151e657073d..70135a6be65e41fcb3564ddf6d1e8083a59ef8bb 100644
+index 6b7e4211a8969351168fc982fe3466a2096bed3a..76c84dd516719849a44e7d67f42ea16dd315b190 100644
--- a/src/crypto/crypto_x509.cc
+++ b/src/crypto/crypto_x509.cc
-@@ -107,7 +107,7 @@ MaybeLocal<Value> ToV8Value(Local<Context> context, BIOPointer&& bio) {
+@@ -109,7 +109,7 @@ MaybeLocal<Value> ToV8Value(Local<Context> context, BIOPointer&& bio) {
if (!mem) [[unlikely]]
return {};
BUF_MEM* mem = bio;
Local<Value> ret;
- if (!String::NewFromUtf8(context->GetIsolate(),
+ if (!String::NewFromUtf8(Isolate::GetCurrent(),
mem->data,
NewStringType::kNormal,
mem->length)
-@@ -121,7 +121,7 @@ MaybeLocal<Value> ToV8Value(Local<Context> context, const BIOPointer& bio) {
+@@ -125,7 +125,7 @@ MaybeLocal<Value> ToV8Value(Local<Context> context, const BIOPointer& bio) {
if (!mem) [[unlikely]]
return {};
BUF_MEM* mem = bio;
Local<Value> ret;
- if (!String::NewFromUtf8(context->GetIsolate(),
+ if (!String::NewFromUtf8(Isolate::GetCurrent(),
@@ -133,10 +133,10 @@ index 6fe4f0492dc1f3eaf576c8ff7866080a54cb81c1..41e8e052ff81df78ece87163b0499966
// Recreate the buffer in the constructor.
InternalFieldInfo* casted_info = static_cast<InternalFieldInfo*>(info);
diff --git a/src/env.cc b/src/env.cc
-index b5cf58cc953590493beb52abf249e33e486ffc46..347ec5c42e098186ff489dff199ac5989961f6e3 100644
+index 82aee7e38bbd859e1a76eedcc3a51278a1b3a793..a09603573c02466c0d25431fe6168ca33ee4692e 100644
--- a/src/env.cc
+++ b/src/env.cc
-@@ -1765,10 +1765,10 @@ void AsyncHooks::Deserialize(Local<Context> context) {
+@@ -1764,10 +1764,10 @@ void AsyncHooks::Deserialize(Local<Context> context) {
context->GetDataFromSnapshotOnce<Array>(
info_->js_execution_async_resources).ToLocalChecked();
} else {
@@ -149,7 +149,7 @@ index b5cf58cc953590493beb52abf249e33e486ffc46..347ec5c42e098186ff489dff199ac598
// The native_execution_async_resources_ field requires v8::Local<> instances
// for async calls whose resources were on the stack as JS objects when they
@@ -1808,7 +1808,7 @@ AsyncHooks::SerializeInfo AsyncHooks::Serialize(Local<Context> context,
@@ -1807,7 +1807,7 @@ AsyncHooks::SerializeInfo AsyncHooks::Serialize(Local<Context> context,
info.async_id_fields = async_id_fields_.Serialize(context, creator);
if (!js_execution_async_resources_.IsEmpty()) {
info.js_execution_async_resources = creator->AddData(
@@ -353,10 +353,10 @@ index 52483740bb377a2bc2a16af701615d9a4e448eae..84d17a46efe146c1794a43963c41a446
CHECK(!env->temporary_required_module_facade_original.IsEmpty());
return env->temporary_required_module_facade_original.Get(isolate);
diff --git a/src/node.h b/src/node.h
index bbe35c7a8f1bc0bcddf628af42b71efaef8a7759..102bcc0b3400fd334bdf259a076a3ac3b5d4a266 100644
index 8aac774805a002f5af9e9aca62abc56e8f986bab..da87773ba7f0d38f04a7b3851d8a1a6df0eca489 100644
--- a/src/node.h
+++ b/src/node.h
@@ -1142,7 +1142,7 @@ NODE_DEPRECATED("Use v8::Date::ValueOf() directly",
@@ -1144,7 +1144,7 @@ NODE_DEPRECATED("Use v8::Date::ValueOf() directly",
#define NODE_DEFINE_CONSTANT(target, constant) \
do { \
@@ -365,7 +365,7 @@ index bbe35c7a8f1bc0bcddf628af42b71efaef8a7759..102bcc0b3400fd334bdf259a076a3ac3
v8::Local<v8::Context> context = isolate->GetCurrentContext(); \
v8::Local<v8::String> constant_name = v8::String::NewFromUtf8Literal( \
isolate, #constant, v8::NewStringType::kInternalized); \
@@ -1158,7 +1158,7 @@ NODE_DEPRECATED("Use v8::Date::ValueOf() directly",
@@ -1160,7 +1160,7 @@ NODE_DEPRECATED("Use v8::Date::ValueOf() directly",
#define NODE_DEFINE_HIDDEN_CONSTANT(target, constant) \
do { \
@@ -375,10 +375,10 @@ index bbe35c7a8f1bc0bcddf628af42b71efaef8a7759..102bcc0b3400fd334bdf259a076a3ac3
v8::Local<v8::String> constant_name = v8::String::NewFromUtf8Literal( \
isolate, #constant, v8::NewStringType::kInternalized); \
diff --git a/src/node_blob.cc b/src/node_blob.cc
index 4311d71bb0526f9a83a16525243446a590092910..417cd8cbd307b9bfc498ad2df24ed193616ac512 100644
index 40407527800075b6afec5b6c7d98de2c6229e85a..6371aad07beb8514ca2a3acfd30d7c68969e8e20 100644
--- a/src/node_blob.cc
+++ b/src/node_blob.cc
@@ -562,7 +562,7 @@ void BlobBindingData::Deserialize(Local<Context> context,
@@ -561,7 +561,7 @@ void BlobBindingData::Deserialize(Local<Context> context,
int index,
InternalFieldInfoBase* info) {
DCHECK_IS_SNAPSHOT_SLOT(index);
@@ -388,37 +388,37 @@ index 4311d71bb0526f9a83a16525243446a590092910..417cd8cbd307b9bfc498ad2df24ed193
BlobBindingData* binding = realm->AddBindingData<BlobBindingData>(holder);
CHECK_NOT_NULL(binding);
diff --git a/src/node_builtins.cc b/src/node_builtins.cc
index 3377d697615ee168e49e83c4202bc227581f1aaf..1a9a57b73e635ac61016598687167a08b073f84a 100644
index 7922f2f936f64cbb7bd08f0d367f66f0b9eb083b..04d76fdd3d170a7c501bd773b698d380b8bb2426 100644
--- a/src/node_builtins.cc
+++ b/src/node_builtins.cc
@@ -260,7 +260,7 @@ MaybeLocal<Function> BuiltinLoader::LookupAndCompileInternal(
const char* id,
LocalVector<String>* parameters,
@@ -274,7 +274,7 @@ MaybeLocal<Data> BuiltinLoader::LookupAndCompile(
Local<Context> context,
const BuiltinSource* builtin_source,
Realm* optional_realm) {
- Isolate* isolate = context->GetIsolate();
+ Isolate* isolate = Isolate::GetCurrent();
EscapableHandleScope scope(isolate);
Local<String> source;
@@ -382,7 +382,7 @@ void BuiltinLoader::SaveCodeCache(const char* id, Local<Function> fun) {
MaybeLocal<Function> BuiltinLoader::LookupAndCompile(Local<Context> context,
const char* id,
Realm* optional_realm) {
BuiltinCodeCacheData cached_data{};
@@ -442,7 +442,7 @@ void BuiltinLoader::SaveCodeCache(const std::string& id, Local<Data> data) {
MaybeLocal<Function> BuiltinLoader::LookupAndCompileFunction(
Local<Context> context, const char* id, Realm* optional_realm) {
- Isolate* isolate = context->GetIsolate();
+ Isolate* isolate = Isolate::GetCurrent();
LocalVector<String> parameters(isolate);
// Detects parameters of the scripts based on module ids.
// internal/bootstrap/realm: process, getLinkedBinding,
@@ -436,7 +436,7 @@ MaybeLocal<Function> BuiltinLoader::LookupAndCompile(Local<Context> context,
Local<Data> data;
@@ -483,7 +483,7 @@ MaybeLocal<Function> BuiltinLoader::LookupAndCompileFunction(
MaybeLocal<Value> BuiltinLoader::CompileAndCall(Local<Context> context,
const char* id,
Realm* realm) {
- Isolate* isolate = context->GetIsolate();
+ Isolate* isolate = Isolate::GetCurrent();
// Detects parameters of the scripts based on module ids.
// internal/bootstrap/realm: process, getLinkedBinding,
// getInternalBinding, primordials
@@ -492,7 +492,7 @@ MaybeLocal<Value> BuiltinLoader::CompileAndCall(Local<Context> context,
const BuiltinSource* builtin_source = LoadBuiltinSource(isolate, id);
if (builtin_source == nullptr) {
THROW_ERR_MODULE_NOT_FOUND(isolate, "Cannot find module %s", id);
@@ -555,7 +555,7 @@ MaybeLocal<Value> BuiltinLoader::CompileAndCallWith(Local<Context> context,
if (!maybe_fn.ToLocal(&fn)) {
return MaybeLocal<Value>();
}
@@ -427,17 +427,17 @@ index 3377d697615ee168e49e83c4202bc227581f1aaf..1a9a57b73e635ac61016598687167a08
return fn->Call(context, undefined, argc, argv);
}
@@ -530,14 +530,14 @@ bool BuiltinLoader::CompileAllBuiltinsAndCopyCodeCache(
@@ -579,14 +579,14 @@ bool BuiltinLoader::CompileAllBuiltinsAndCopyCodeCache(
to_eager_compile_.emplace(id);
}
- TryCatch bootstrapCatch(context->GetIsolate());
+ TryCatch bootstrapCatch(Isolate::GetCurrent());
auto fn = LookupAndCompile(context, id.data(), nullptr);
auto data = LookupAndCompile(context, id.data(), nullptr);
if (bootstrapCatch.HasCaught()) {
per_process::Debug(DebugCategory::CODE_CACHE,
"Failed to compile code cache for %s\n",
id.data());
id);
all_succeeded = false;
- PrintCaughtException(context->GetIsolate(), context, bootstrapCatch);
+ PrintCaughtException(Isolate::GetCurrent(), context, bootstrapCatch);
@@ -458,19 +458,19 @@ index fea0426496978c0003fe1481afcf93fc9c23edca..c9588880d05435ab9f4e23fcff74c933
CHECK(
diff --git a/src/node_contextify.cc b/src/node_contextify.cc
index 3c234205e89be7e976dae5c3fcc73ca67953e034..e66d4fcb0c064f96cdb819c783027d864fe88d12 100644
index d3568da72a0f99419b4029b93d9eb1e9328d6bd5..cd8b64d58413914e72c32df6a2f192143e85ac46 100644
--- a/src/node_contextify.cc
+++ b/src/node_contextify.cc
@@ -113,7 +113,7 @@ namespace {
@@ -108,6 +108,8 @@ using v8::Value;
// For every `set` of a global property, the interceptor callback defines or
// changes the property both on the sandbox and the global proxy.
// Convert an int to a V8 Name (String or Symbol).
MaybeLocal<String> Uint32ToName(Local<Context> context, uint32_t index) {
- return Uint32::New(context->GetIsolate(), index)->ToString(context);
+ return Uint32::New(Isolate::GetCurrent(), index)->ToString(context);
}
} // anonymous namespace
@@ -677,7 +677,7 @@ Intercepted ContextifyContext::PropertyDefinerCallback(
+
+
ContextifyContext* ContextifyContext::New(Environment* env,
Local<Object> sandbox_obj,
ContextOptions* options) {
@@ -667,7 +669,7 @@ Intercepted ContextifyContext::PropertyDefinerCallback(
}
Local<Context> context = ctx->context();
@@ -479,7 +479,7 @@ index 3c234205e89be7e976dae5c3fcc73ca67953e034..e66d4fcb0c064f96cdb819c783027d86
PropertyAttribute attributes = PropertyAttribute::None;
bool is_declared =
@@ -1666,7 +1666,7 @@ static MaybeLocal<Function> CompileFunctionForCJSLoader(
@@ -1641,7 +1643,7 @@ static MaybeLocal<Function> CompileFunctionForCJSLoader(
bool* cache_rejected,
bool is_cjs_scope,
ScriptCompiler::CachedData* cached_data) {
@@ -489,7 +489,7 @@ index 3c234205e89be7e976dae5c3fcc73ca67953e034..e66d4fcb0c064f96cdb819c783027d86
Local<Symbol> symbol = env->vm_dynamic_import_default_internal();
diff --git a/src/node_env_var.cc b/src/node_env_var.cc
index 6aad252eb5681bb9ab9890812602b43c418e7a7f..5f7ef8cc58f589ba30a44abaaaaaf1514458c3f0 100644
index 5550a4bee3ce9ec8759d216335a9b2b96e20c96e..b38c2f75a92f2d6c1b9c6e7b7aca924653f7494d 100644
--- a/src/node_env_var.cc
+++ b/src/node_env_var.cc
@@ -311,7 +311,7 @@ std::shared_ptr<KVStore> KVStore::CreateMapKVStore() {
@@ -502,10 +502,10 @@ index 6aad252eb5681bb9ab9890812602b43c418e7a7f..5f7ef8cc58f589ba30a44abaaaaaf151
Local<Array> keys;
if (!entries->GetOwnPropertyNames(context).ToLocal(&keys))
diff --git a/src/node_errors.cc b/src/node_errors.cc
index 55a0c986c5b6989ee9ce277bb6a9778abb2ad2ee..809d88f21e5572807e38132d40ee75870ab8de07 100644
index c6404e00d04e61b675a8c4a02139b36da25bd2a8..ea90e6501bb58260f06d6720cc2fc4989752a347 100644
--- a/src/node_errors.cc
+++ b/src/node_errors.cc
@@ -631,7 +631,7 @@ v8::ModifyCodeGenerationFromStringsResult ModifyCodeGenerationFromStrings(
@@ -629,7 +629,7 @@ v8::ModifyCodeGenerationFromStringsResult ModifyCodeGenerationFromStrings(
v8::Local<v8::Context> context,
v8::Local<v8::Value> source,
bool is_code_like) {
@@ -514,7 +514,7 @@ index 55a0c986c5b6989ee9ce277bb6a9778abb2ad2ee..809d88f21e5572807e38132d40ee7587
if (context->GetNumberOfEmbedderDataFields() <=
ContextEmbedderIndex::kAllowCodeGenerationFromStrings) {
@@ -1037,7 +1037,7 @@ const char* errno_string(int errorno) {
@@ -1035,7 +1035,7 @@ const char* errno_string(int errorno) {
}
void PerIsolateMessageListener(Local<Message> message, Local<Value> error) {
@@ -523,7 +523,7 @@ index 55a0c986c5b6989ee9ce277bb6a9778abb2ad2ee..809d88f21e5572807e38132d40ee7587
switch (message->ErrorLevel()) {
case Isolate::MessageErrorLevel::kMessageWarning: {
Environment* env = Environment::GetCurrent(isolate);
@@ -1197,7 +1197,7 @@ void Initialize(Local<Object> target,
@@ -1195,7 +1195,7 @@ void Initialize(Local<Object> target,
SetMethod(
context, target, "getErrorSourcePositions", GetErrorSourcePositions);
@@ -533,10 +533,10 @@ index 55a0c986c5b6989ee9ce277bb6a9778abb2ad2ee..809d88f21e5572807e38132d40ee7587
READONLY_PROPERTY(target, "exitCodes", exit_codes);
diff --git a/src/node_file.cc b/src/node_file.cc
index 96aac2d86695732bf6805f2ad2168a62241b5045..547455bb5011677719a8de1f98cb447561bce6aa 100644
index bf202f5e2bf5eaf2dd9192dfd701e621126c492c..56b6cd5c39d5e72efd24b7aba1f28dab91a6144e 100644
--- a/src/node_file.cc
+++ b/src/node_file.cc
@@ -3850,7 +3850,7 @@ void BindingData::Deserialize(Local<Context> context,
@@ -3891,7 +3891,7 @@ void BindingData::Deserialize(Local<Context> context,
int index,
InternalFieldInfoBase* info) {
DCHECK_IS_SNAPSHOT_SLOT(index);
@@ -647,10 +647,10 @@ index ba2dd7e676bfdfe7da66a4a79db3c791a505c9a8..28e6cfac682e301b605c00c4ef2eaf01
if (!error_obj->GetOwnPropertyNames(context).ToLocal(&keys)) {
return writer->json_objectend(); // the end of 'errorProperties'
diff --git a/src/node_snapshotable.cc b/src/node_snapshotable.cc
index c2e24b4645e7903e08c80aead1c18c7bcff1bd89..e34d24d51d5c090b560d06f727043f20924e6f46 100644
index 41b0773f4c37a016cfa55aff6bb03baf50905b32..c3e995adf437413fe956e45c2ccc704d8737de59 100644
--- a/src/node_snapshotable.cc
+++ b/src/node_snapshotable.cc
@@ -1614,7 +1614,7 @@ void BindingData::Deserialize(Local<Context> context,
@@ -1675,7 +1675,7 @@ void BindingData::Deserialize(Local<Context> context,
int index,
InternalFieldInfoBase* info) {
DCHECK_IS_SNAPSHOT_SLOT(index);
@@ -660,10 +660,10 @@ index c2e24b4645e7903e08c80aead1c18c7bcff1bd89..e34d24d51d5c090b560d06f727043f20
// Recreate the buffer in the constructor.
InternalFieldInfo* casted_info = static_cast<InternalFieldInfo*>(info);
diff --git a/src/node_sqlite.cc b/src/node_sqlite.cc
index 050d779bdcd2b3129abddc3fefa1e852831df236..3f4749286406e03e77de6567b667c0098fbc2a18 100644
index d7c5bc5514044aa1ed39dd4e1c0cef346498c96f..91b80b4fb44c26e95503556064e7429b8cbf4639 100644
--- a/src/node_sqlite.cc
+++ b/src/node_sqlite.cc
@@ -2162,7 +2162,7 @@ bool StatementSync::BindParams(const FunctionCallbackInfo<Value>& args) {
@@ -2436,7 +2436,7 @@ bool StatementSync::BindParams(const FunctionCallbackInfo<Value>& args) {
if (args[0]->IsObject() && !args[0]->IsArrayBufferView()) {
Local<Object> obj = args[0].As<Object>();
@@ -699,7 +699,7 @@ index 9b676a0156ab8ef47f62627be953c23d4fcbf4f4..6294cd03667980e2ad23cae9e7961262
BindingData* binding = realm->AddBindingData<BindingData>(holder);
CHECK_NOT_NULL(binding);
diff --git a/src/node_v8.cc b/src/node_v8.cc
index 8dd32dad262679444c10878299eb6bb8fb04e120..935ea2cf5157c3a2fbdf142fc7024ec6b6d5de26 100644
index 4ee452d5bc6b67da52e91d98531ac35a7af155c7..b226d6fe60f4fdf5a237c336b9101c5a974ce837 100644
--- a/src/node_v8.cc
+++ b/src/node_v8.cc
@@ -163,7 +163,7 @@ void BindingData::Deserialize(Local<Context> context,
@@ -736,10 +736,10 @@ index 370221d3cddc201180260ecb3a222bc831c91093..f5aff2f65fe6b9f48cf970ab3e7c57cf
THROW_ERR_WASI_NOT_STARTED(isolate);
return EinvalError<R>();
diff --git a/src/node_webstorage.cc b/src/node_webstorage.cc
index 5c7d268d38ff55ce4db07463b1ea0bcb2f4e63ea..bd83654012442195866e57173b6e5d4d25fecf0f 100644
index 013322e8fb6cb76074326c2a45a04eb0f8e133f1..0a169a8dcf27eeb5b5b0c1b00ac8b79ed43d551b 100644
--- a/src/node_webstorage.cc
+++ b/src/node_webstorage.cc
@@ -57,7 +57,7 @@ using v8::Value;
@@ -56,7 +56,7 @@ using v8::Value;
} while (0)
static void ThrowQuotaExceededException(Local<Context> context) {
@@ -748,17 +748,17 @@ index 5c7d268d38ff55ce4db07463b1ea0bcb2f4e63ea..bd83654012442195866e57173b6e5d4d
auto dom_exception_str = FIXED_ONE_BYTE_STRING(isolate, "DOMException");
auto err_name = FIXED_ONE_BYTE_STRING(isolate, "QuotaExceededError");
auto err_message =
@@ -437,7 +437,7 @@ Maybe<void> Storage::Store(Local<Name> key, Local<Value> value) {
}
static MaybeLocal<String> Uint32ToName(Local<Context> context, uint32_t index) {
- return Uint32::New(context->GetIsolate(), index)->ToString(context);
+ return Uint32::New(Isolate::GetCurrent(), index)->ToString(context);
@@ -435,6 +435,8 @@ Maybe<void> Storage::Store(Local<Name> key, Local<Value> value) {
return JustVoid();
}
+
+
static void Clear(const FunctionCallbackInfo<Value>& info) {
Storage* storage;
ASSIGN_OR_RETURN_UNWRAP(&storage, info.This());
diff --git a/src/node_worker.cc b/src/node_worker.cc
index 1acc61af0c995ddefbc00fe232b2454de77a84a3..3041746fc8a132f68cc1d801bb1700634699828d 100644
index a2631a96371becb0f4ea4f47a52313f4f02477da..4866c7ff589825d41fe84786ed8f9b3fccd3d1b7 100644
--- a/src/node_worker.cc
+++ b/src/node_worker.cc
@@ -1465,8 +1465,6 @@ void GetEnvMessagePort(const FunctionCallbackInfo<Value>& args) {
@@ -784,10 +784,10 @@ index da4206187f7c7d2becb8a101c1ff5346a10e13f4..03f0910926f3d403121e227cee32a546
// Recreate the buffer in the constructor.
BindingData* binding = realm->AddBindingData<BindingData>(holder);
diff --git a/src/util-inl.h b/src/util-inl.h
index aae5956742f195279ab6af04029d76dee6af2e84..6898e8ea794675e903e13e2b45524d572a3f68bb 100644
index f42c7b1250d1eb2e4d9f8e10c5ea9a9ca310924b..d59e30a635b08b97d255ed2e5540a66db54b068f 100644
--- a/src/util-inl.h
+++ b/src/util-inl.h
@@ -336,14 +336,14 @@ v8::Maybe<void> FromV8Array(v8::Local<v8::Context> context,
@@ -337,14 +337,14 @@ v8::Maybe<void> FromV8Array(v8::Local<v8::Context> context,
std::vector<v8::Global<v8::Value>>* out) {
uint32_t count = js_array->Length();
out->reserve(count);
@@ -804,7 +804,7 @@ index aae5956742f195279ab6af04029d76dee6af2e84..6898e8ea794675e903e13e2b45524d57
if (str.size() >= static_cast<size_t>(v8::String::kMaxLength)) [[unlikely]] {
// V8 only has a TODO comment about adding an exception when the maximum
// string size is exceeded.
@@ -359,7 +359,7 @@ v8::MaybeLocal<v8::Value> ToV8Value(v8::Local<v8::Context> context,
@@ -360,7 +360,7 @@ v8::MaybeLocal<v8::Value> ToV8Value(v8::Local<v8::Context> context,
v8::MaybeLocal<v8::Value> ToV8Value(v8::Local<v8::Context> context,
std::u16string_view str,
v8::Isolate* isolate) {
@@ -813,7 +813,7 @@ index aae5956742f195279ab6af04029d76dee6af2e84..6898e8ea794675e903e13e2b45524d57
if (str.length() >= static_cast<size_t>(v8::String::kMaxLength))
[[unlikely]] {
// V8 only has a TODO comment about adding an exception when the maximum
@@ -379,7 +379,7 @@ v8::MaybeLocal<v8::Value> ToV8Value(v8::Local<v8::Context> context,
@@ -380,7 +380,7 @@ v8::MaybeLocal<v8::Value> ToV8Value(v8::Local<v8::Context> context,
v8::MaybeLocal<v8::Value> ToV8Value(v8::Local<v8::Context> context,
v8_inspector::StringView str,
v8::Isolate* isolate) {
@@ -822,7 +822,7 @@ index aae5956742f195279ab6af04029d76dee6af2e84..6898e8ea794675e903e13e2b45524d57
if (str.length() >= static_cast<size_t>(v8::String::kMaxLength))
[[unlikely]] {
// V8 only has a TODO comment about adding an exception when the maximum
@@ -406,7 +406,7 @@ template <typename T>
@@ -407,7 +407,7 @@ template <typename T>
v8::MaybeLocal<v8::Value> ToV8Value(v8::Local<v8::Context> context,
const std::vector<T>& vec,
v8::Isolate* isolate) {
@@ -831,7 +831,7 @@ index aae5956742f195279ab6af04029d76dee6af2e84..6898e8ea794675e903e13e2b45524d57
v8::EscapableHandleScope handle_scope(isolate);
MaybeStackBuffer<v8::Local<v8::Value>, 128> arr(vec.size());
@@ -423,7 +423,7 @@ template <typename T>
@@ -424,7 +424,7 @@ template <typename T>
v8::MaybeLocal<v8::Value> ToV8Value(v8::Local<v8::Context> context,
const std::set<T>& set,
v8::Isolate* isolate) {
@@ -840,7 +840,7 @@ index aae5956742f195279ab6af04029d76dee6af2e84..6898e8ea794675e903e13e2b45524d57
v8::Local<v8::Set> set_js = v8::Set::New(isolate);
v8::HandleScope handle_scope(isolate);
@@ -442,7 +442,7 @@ template <typename T, std::size_t U>
@@ -443,7 +443,7 @@ template <typename T, std::size_t U>
v8::MaybeLocal<v8::Value> ToV8Value(v8::Local<v8::Context> context,
const std::ranges::elements_view<T, U>& vec,
v8::Isolate* isolate) {
@@ -849,7 +849,7 @@ index aae5956742f195279ab6af04029d76dee6af2e84..6898e8ea794675e903e13e2b45524d57
v8::EscapableHandleScope handle_scope(isolate);
MaybeStackBuffer<v8::Local<v8::Value>, 128> arr(vec.size());
@@ -461,7 +461,7 @@ template <typename T, typename U>
@@ -462,7 +462,7 @@ template <typename T, typename U>
v8::MaybeLocal<v8::Value> ToV8Value(v8::Local<v8::Context> context,
const std::unordered_map<T, U>& map,
v8::Isolate* isolate) {
@@ -858,7 +858,7 @@ index aae5956742f195279ab6af04029d76dee6af2e84..6898e8ea794675e903e13e2b45524d57
v8::EscapableHandleScope handle_scope(isolate);
v8::Local<v8::Map> ret = v8::Map::New(isolate);
@@ -504,7 +504,7 @@ template <typename T, typename>
@@ -505,7 +505,7 @@ template <typename T, typename>
v8::MaybeLocal<v8::Value> ToV8Value(v8::Local<v8::Context> context,
const T& number,
v8::Isolate* isolate) {
@@ -867,7 +867,7 @@ index aae5956742f195279ab6af04029d76dee6af2e84..6898e8ea794675e903e13e2b45524d57
return ConvertNumberToV8Value(isolate, number);
}
@@ -517,7 +517,7 @@ v8::Local<v8::Array> ToV8ValuePrimitiveArray(v8::Local<v8::Context> context,
@@ -518,7 +518,7 @@ v8::Local<v8::Array> ToV8ValuePrimitiveArray(v8::Local<v8::Context> context,
std::is_floating_point_v<T>,
"Only primitive types (bool, integral, floating-point) are supported.");
@@ -876,7 +876,7 @@ index aae5956742f195279ab6af04029d76dee6af2e84..6898e8ea794675e903e13e2b45524d57
v8::EscapableHandleScope handle_scope(isolate);
v8::LocalVector<v8::Value> elements(isolate);
@@ -803,7 +803,7 @@ inline v8::MaybeLocal<v8::Object> NewDictionaryInstanceNullProto(
@@ -811,7 +811,7 @@ inline v8::MaybeLocal<v8::Object> NewDictionaryInstanceNullProto(
if (value.IsEmpty()) return v8::MaybeLocal<v8::Object>();
}
v8::Local<v8::Object> obj = tmpl->NewInstance(context, property_values);
@@ -935,7 +935,7 @@ index 660cfff6b8a0c583be843e555e7a06cd09e0d279..c4b39450c5b7f91c46f7027db367c30d
context, that, OneByteString(isolate, name), tmpl, flag);
}
diff --git a/src/util.h b/src/util.h
index 81d08c27fb7037d16e12843dc03c3d8f9caee723..52e6a149d6760640d93c56ce91a759ae9207a8c7 100644
index 51f0b6463ab6bc33aa4e66bd55e0ab3822840ab0..ee05fa017f48c5b03e7179d6fef39b6e32e488a5 100644
--- a/src/util.h
+++ b/src/util.h
@@ -753,7 +753,7 @@ inline v8::MaybeLocal<v8::Value> ToV8Value(


@@ -11,10 +11,10 @@ really in 20/21. We have to wait until 22 is released to be able to
build with upstream GN files.
diff --git a/configure.py b/configure.py
index 98a8b147e4cbfd5957c35688f2b37ae0ca52a818..fd13970ae73bbe5db186f81faed792a5597bbcd0 100755
index fa25de8c316b71d3ad5b55b5ce398b69a5d4a965..fc48438060e0dd84edc60d1aebf3d0946be98ea9 100755
--- a/configure.py
+++ b/configure.py
@@ -1821,7 +1821,7 @@ def configure_v8(o, configs):
@@ -1838,7 +1838,7 @@ def configure_v8(o, configs):
# Until we manage to get rid of all those, v8_enable_sandbox cannot be used.
# Note that enabling pointer compression without enabling sandbox is unsupported by V8,
# so this can be broken at any time.
@@ -23,54 +23,8 @@ index 98a8b147e4cbfd5957c35688f2b37ae0ca52a818..fd13970ae73bbe5db186f81faed792a5
# We set v8_enable_pointer_compression_shared_cage to 0 always, even when
# pointer compression is enabled so that we don't accidentally enable shared
# cage mode when pointer compression is on.
diff --git a/deps/merve/BUILD.gn b/deps/merve/BUILD.gn
new file mode 100644
index 0000000000000000000000000000000000000000..7bb318f8835dba6f4a6f211d8534bb6923958747
--- /dev/null
+++ b/deps/merve/BUILD.gn
@@ -0,0 +1,14 @@
+##############################################################################
+# #
+# DO NOT EDIT THIS FILE! #
+# #
+##############################################################################
+
+# This file is used by GN for building, which is NOT the build system used for
+# building official binaries.
+# Please modify the gyp files if you are making changes to build system.
+
+import("unofficial.gni")
+
+merve_gn_build("merve") {
+}
diff --git a/deps/merve/unofficial.gni b/deps/merve/unofficial.gni
new file mode 100644
index 0000000000000000000000000000000000000000..dfb508d1d22f84accb146620ed07d89715b367e6
--- /dev/null
+++ b/deps/merve/unofficial.gni
@@ -0,0 +1,20 @@
+# This file is used by GN for building, which is NOT the build system used for
+# building official binaries.
+# Please edit the gyp files if you are making changes to build system.
+
+# The actual configurations are put inside a template in unofficial.gni to
+# prevent accidental edits from contributors.
+template("merve_gn_build") {
+ config("merve_config") {
+ include_dirs = [ "." ]
+ }
+ gypi_values = exec_script("../../tools/gypi_to_gn.py",
+ [ rebase_path("merve.gyp") ],
+ "scope",
+ [ "merve.gyp" ])
+ source_set(target_name) {
+ forward_variables_from(invoker, "*")
+ public_configs = [ ":merve_config" ]
+ sources = gypi_values.merve_sources
+ }
+}
diff --git a/node.gni b/node.gni
index d4438f7fd61598afac2c1e3184721a759d22b10c..156fee33b3813fe4d94a1c9585f217a99dbfbd5f 100644
index 41f200189a34e150e4c8f25da2a72c2108259720..156fee33b3813fe4d94a1c9585f217a99dbfbd5f 100644
--- a/node.gni
+++ b/node.gni
@@ -5,10 +5,10 @@
@@ -86,16 +40,7 @@ index d4438f7fd61598afac2c1e3184721a759d22b10c..156fee33b3813fe4d94a1c9585f217a9
# The location of OpenSSL - use the one from node's deps by default.
node_openssl_path = "$node_path/deps/openssl"
@@ -26,8 +26,6 @@ declare_args() {
# TODO(zcbenz): This is currently copied from configure.py, we should share
# the list between configure.py and GN configurations.
node_builtin_shareable_builtins = [
- "deps/cjs-module-lexer/lexer.js",
- "deps/cjs-module-lexer/dist/lexer.js",
"deps/undici/undici.js",
"deps/amaro/dist/index.js",
]
@@ -50,7 +48,7 @@ declare_args() {
@@ -48,7 +48,7 @@ declare_args() {
node_openssl_system_ca_path = ""
# Initialize v8 platform during node.js startup.
@@ -104,7 +49,7 @@ index d4438f7fd61598afac2c1e3184721a759d22b10c..156fee33b3813fe4d94a1c9585f217a9
# Custom build tag.
node_tag = ""
@@ -70,10 +68,16 @@ declare_args() {
@@ -68,10 +68,16 @@ declare_args() {
# TODO(zcbenz): There are few broken things for now:
# 1. cross-os compilation is not supported.
# 2. node_mksnapshot crashes when cross-compiling for x64 from arm64.
@@ -123,11 +68,51 @@ index d4438f7fd61598afac2c1e3184721a759d22b10c..156fee33b3813fe4d94a1c9585f217a9
assert(!node_enable_inspector || node_use_openssl,
diff --git a/src/node_builtins.cc b/src/node_builtins.cc
index f25ca01d6ef016489371a3a1c9d8500da65e8023..2c816bef8d64f3e0ba2993c4885641620ee64272 100644
index 6506dcea3f4f88a7781975fae1ee5f8b87d4dfb2..a077ad673fdf7eab61878940e5fef43921c2e453 100644
--- a/src/node_builtins.cc
+++ b/src/node_builtins.cc
@@ -760,6 +760,7 @@ void BuiltinLoader::RegisterExternalReferences(
registry->Register(GetNatives);
@@ -455,6 +455,30 @@ MaybeLocal<Function> BuiltinLoader::LookupAndCompileFunction(
return value.As<Function>();
}
+MaybeLocal<Function> BuiltinLoader::LookupAndCompileFunction(
+ Local<Context> context,
+ const char* id,
+ LocalVector<String>* parameters,
+ Realm* optional_realm) {
+ Isolate* isolate = Isolate::GetCurrent();
+ const BuiltinSource* builtin_source = LoadBuiltinSource(isolate, id);
+ if (builtin_source == nullptr) {
+ THROW_ERR_MODULE_NOT_FOUND(isolate, "Cannot find module %s", id);
+ return MaybeLocal<Function>();
+ }
+ std::string filename_s = std::string("node:") + builtin_source->id;
+ Local<String> filename = OneByteString(isolate, filename_s);
+ Local<String> source = builtin_source->source.ToStringChecked(isolate);
+ ScriptOrigin origin(filename, 0, 0, true);
+ ScriptCompiler::Source script_source(source, origin);
+ return ScriptCompiler::CompileFunction(context,
+ &script_source,
+ parameters->size(),
+ parameters->data(),
+ 0,
+ nullptr);
+}
+
MaybeLocal<Value> BuiltinLoader::CompileAndCall(Local<Context> context,
const char* id,
Realm* realm) {
@@ -741,7 +765,7 @@ MaybeLocal<Module> BuiltinLoader::LoadBuiltinSourceTextModule(Realm* realm,
// Pre-fetch all dependencies.
if (requests->Length() > 0) {
for (int i = 0; i < requests->Length(); i++) {
- Local<ModuleRequest> req = requests->Get(context, i).As<ModuleRequest>();
+ Local<ModuleRequest> req = requests->Get(i).As<ModuleRequest>();
std::string specifier =
Utf8Value(isolate, req->GetSpecifier()).ToString();
std::string resolved_id = ResolveRequestForBuiltin(specifier);
@@ -900,6 +924,7 @@ void BuiltinLoader::RegisterExternalReferences(
registry->Register(ImportBuiltinSourceTextModule);
RegisterExternalReferencesForInternalizedBuiltinCode(registry);
+ EmbedderRegisterExternalReferencesForInternalizedBuiltinCode(registry);
@@ -135,10 +120,10 @@ index f25ca01d6ef016489371a3a1c9d8500da65e8023..2c816bef8d64f3e0ba2993c488564162
} // namespace builtins
diff --git a/src/node_builtins.h b/src/node_builtins.h
index 7a7b84337feb67960819472e43192dbdc151e299..bcdd50f635757f41287c87df1db9cd3b55c4b6b9 100644
index e4af1f42f4442b4c1ec94cf25d8d811f0e82d89e..490f429986e43653e0dd2048d9e3bd2e99ae44b2 100644
--- a/src/node_builtins.h
+++ b/src/node_builtins.h
@@ -75,6 +75,8 @@ using BuiltinCodeCacheMap =
@@ -82,6 +82,8 @@ using BuiltinCodeCacheMap =
// Generated by tools/js2c.cc as node_javascript.cc
void RegisterExternalReferencesForInternalizedBuiltinCode(
ExternalReferenceRegistry* registry);
@@ -147,13 +132,27 @@ index 7a7b84337feb67960819472e43192dbdc151e299..bcdd50f635757f41287c87df1db9cd3b
// Handles compilation and caching of built-in JavaScript modules and
// bootstrap scripts, whose source are bundled into the binary as static data.
@@ -104,6 +106,13 @@ class NODE_EXTERN_PRIVATE BuiltinLoader {
v8::MaybeLocal<v8::Function> LookupAndCompileFunction(
v8::Local<v8::Context> context, const char* id, Realm* optional_realm);
+ // Overload that accepts custom parameters for embedder scripts.
+ v8::MaybeLocal<v8::Function> LookupAndCompileFunction(
+ v8::Local<v8::Context> context,
+ const char* id,
+ v8::LocalVector<v8::String>* parameters,
+ Realm* optional_realm);
+
v8::MaybeLocal<v8::Value> CompileAndCallWith(v8::Local<v8::Context> context,
const char* id,
int argc,
diff --git a/tools/js2c.cc b/tools/js2c.cc
old mode 100644
new mode 100755
index 9c2f70de4e00834ff448e573743898072dc14c5d..71a12c606f4da7165cc41a295a278b2e504af1b6
index 2cb09f8e1d7ba6ba389f70cdfc6300458f469caa..1a7b6ec6e6c51cf947694fac5dfd860b345d478f
--- a/tools/js2c.cc
+++ b/tools/js2c.cc
@@ -28,6 +28,7 @@ namespace js2c {
@@ -29,6 +29,7 @@ namespace js2c {
int Main(int argc, char* argv[]);
static bool is_verbose = false;
@@ -161,7 +160,7 @@ index 9c2f70de4e00834ff448e573743898072dc14c5d..71a12c606f4da7165cc41a295a278b2e
void Debug(const char* format, ...) {
va_list arguments;
@@ -175,6 +176,7 @@ const char* kTemplate = R"(
@@ -176,6 +177,7 @@ const char* kTemplate = R"(
#include "node_builtins.h"
#include "node_external_reference.h"
#include "node_internals.h"
@@ -169,7 +168,7 @@ index 9c2f70de4e00834ff448e573743898072dc14c5d..71a12c606f4da7165cc41a295a278b2e
namespace node {
@@ -190,7 +192,11 @@ const ThreadsafeCopyOnWrite<BuiltinSourceMap> global_source_map {
@@ -191,7 +193,11 @@ const ThreadsafeCopyOnWrite<BuiltinSourceMap> global_source_map {
} // anonymous namespace
void BuiltinLoader::LoadJavaScriptSource() {
@@ -182,7 +181,7 @@ index 9c2f70de4e00834ff448e573743898072dc14c5d..71a12c606f4da7165cc41a295a278b2e
}
void RegisterExternalReferencesForInternalizedBuiltinCode(
@@ -207,6 +213,45 @@ UnionBytes BuiltinLoader::GetConfig() {
@@ -208,6 +214,45 @@ UnionBytes BuiltinLoader::GetConfig() {
} // namespace node
)";
@@ -228,7 +227,7 @@ index 9c2f70de4e00834ff448e573743898072dc14c5d..71a12c606f4da7165cc41a295a278b2e
Fragment Format(const Fragments& definitions,
const Fragments& initializers,
const Fragments& registrations) {
@@ -216,13 +261,12 @@ Fragment Format(const Fragments& definitions,
@@ -217,13 +262,12 @@ Fragment Format(const Fragments& definitions,
size_t init_size = init_buf.size();
std::vector<char> reg_buf = Join(registrations, "\n");
size_t reg_size = reg_buf.size();
@@ -245,7 +244,7 @@ index 9c2f70de4e00834ff448e573743898072dc14c5d..71a12c606f4da7165cc41a295a278b2e
static_cast<int>(def_buf.size()),
def_buf.data(),
static_cast<int>(init_buf.size()),
@@ -836,12 +880,15 @@ int JS2C(const FileList& js_files,
@@ -848,12 +892,15 @@ int JS2C(const FileList& js_files,
}
}
@@ -261,7 +260,7 @@ index 9c2f70de4e00834ff448e573743898072dc14c5d..71a12c606f4da7165cc41a295a278b2e
Fragment out = Format(definitions, initializers, registrations);
return WriteIfChanged(out, dest);
}
@@ -867,6 +914,8 @@ int Main(int argc, char* argv[]) {
@@ -879,6 +926,8 @@ int Main(int argc, char* argv[]) {
std::string arg(argv[i]);
if (arg == "--verbose") {
is_verbose = true;
@@ -270,7 +269,7 @@ index 9c2f70de4e00834ff448e573743898072dc14c5d..71a12c606f4da7165cc41a295a278b2e
} else if (arg == "--root") {
if (i == argc - 1) {
fprintf(stderr, "--root must be followed by a path\n");
@@ -915,6 +964,14 @@ int Main(int argc, char* argv[]) {
@@ -927,6 +976,14 @@ int Main(int argc, char* argv[]) {
}
}
@@ -285,7 +284,7 @@ index 9c2f70de4e00834ff448e573743898072dc14c5d..71a12c606f4da7165cc41a295a278b2e
// Should have exactly 3 types: `.js`, `.mjs` and `.gypi`.
assert(file_map.size() == 3);
auto gypi_it = file_map.find(".gypi");
@@ -941,6 +998,7 @@ int Main(int argc, char* argv[]) {
@@ -953,6 +1010,7 @@ int Main(int argc, char* argv[]) {
std::sort(mjs_it->second.begin(), mjs_it->second.end());
return JS2C(js_it->second, mjs_it->second, gypi_it->second[0], output);
@@ -306,10 +305,10 @@ index 856878c33681a73d41016729dabe48b0a6a80589..91a11852d206b65485fe90fd037a0bd1
if sys.platform == 'win32':
files = [ x.replace('\\', '/') for x in files ]
diff --git a/unofficial.gni b/unofficial.gni
index c742b62c484e9dd205eff63dcffad78c76828375..bff7b0650cfe8578a044e45d0f9e352859909695 100644
index aa78f9ce60c0439536eaf6e23880e30ebef0e1a9..df0ae804a5338d8f2ec4d331a1e2ed053c3c3955 100644
--- a/unofficial.gni
+++ b/unofficial.gni
@@ -147,31 +147,42 @@ template("node_gn_build") {
@@ -147,32 +147,42 @@ template("node_gn_build") {
public_configs = [
":node_external_config",
"deps/googletest:googletest_config",
@@ -328,7 +327,7 @@ index c742b62c484e9dd205eff63dcffad78c76828375..bff7b0650cfe8578a044e45d0f9e3528
"deps/cares",
"deps/histogram",
"deps/llhttp",
+ "deps/merve",
"deps/merve",
"deps/nbytes",
"deps/nghttp2",
- "deps/ngtcp2",
@@ -355,7 +354,7 @@ index c742b62c484e9dd205eff63dcffad78c76828375..bff7b0650cfe8578a044e45d0f9e3528
"$target_gen_dir/node_javascript.cc",
] + gypi_values.node_sources
@@ -194,7 +205,7 @@ template("node_gn_build") {
@@ -195,7 +205,7 @@ template("node_gn_build") {
}
if (node_use_openssl) {
deps += [ "deps/ncrypto" ]
@@ -364,7 +363,7 @@ index c742b62c484e9dd205eff63dcffad78c76828375..bff7b0650cfe8578a044e45d0f9e3528
sources += gypi_values.node_crypto_sources
}
if (node_use_sqlite) {
@@ -223,6 +234,10 @@ template("node_gn_build") {
@@ -224,6 +234,10 @@ template("node_gn_build") {
}
}
@@ -375,7 +374,7 @@ index c742b62c484e9dd205eff63dcffad78c76828375..bff7b0650cfe8578a044e45d0f9e3528
executable(target_name) {
forward_variables_from(invoker, "*")
@@ -314,6 +329,7 @@ template("node_gn_build") {
@@ -315,6 +329,7 @@ template("node_gn_build") {
}
executable("node_js2c") {
@@ -383,9 +382,9 @@ index c742b62c484e9dd205eff63dcffad78c76828375..bff7b0650cfe8578a044e45d0f9e3528
deps = [
"deps/uv",
"$node_simdutf_path",
@@ -324,26 +340,75 @@ template("node_gn_build") {
"src/embedded_data.cc",
"src/embedded_data.h",
@@ -327,26 +342,75 @@ template("node_gn_build") {
"src/builtin_info.cc",
"src/builtin_info.h",
]
- include_dirs = [ "src" ]
+ include_dirs = [ "src", "tools" ]
@@ -469,7 +468,7 @@ index c742b62c484e9dd205eff63dcffad78c76828375..bff7b0650cfe8578a044e45d0f9e3528
outputs = [ "$target_gen_dir/node_javascript.cc" ]
# Get the path to node_js2c executable of the host toolchain.
@@ -357,11 +422,11 @@ template("node_gn_build") {
@@ -360,11 +424,11 @@ template("node_gn_build") {
get_label_info(":node_js2c($host_toolchain)", "name") +
host_executable_suffix


@@ -14,7 +14,7 @@ We don't need to do this for zlib, as the existing gn workflow uses the same
Upstreamed at https://github.com/nodejs/node/pull/55903
diff --git a/unofficial.gni b/unofficial.gni
index bff7b0650cfe8578a044e45d0f9e352859909695..4ab316e45bd84e43a53335df60f847b17fe6c2fa 100644
index df0ae804a5338d8f2ec4d331a1e2ed053c3c3955..07ebc4706c6b3208dc9136e6ba0e4d795c238d61 100644
--- a/unofficial.gni
+++ b/unofficial.gni
@@ -199,7 +199,17 @@ template("node_gn_build") {


@@ -64,10 +64,10 @@ index f8b4fd7c4ca5a0907806c7e804de8c951675a36a..209e3bcf8be5a23ac528dcd673bed82c
function ipToInt(ip) {
diff --git a/node.gyp b/node.gyp
index f5cd416b5fe7a51084bc4af9a4427a8e62599fd8..5eb70ce3820f2b82121bc102c5182ab768cbef36 100644
index 0620850e0872cfea9a1241b2a56f7bede7fa291a..cdd977a2fb0a9a2debd75631c4c691b673c1e544 100644
--- a/node.gyp
+++ b/node.gyp
@@ -182,7 +182,6 @@
@@ -184,7 +184,6 @@
'src/timers.cc',
'src/timer_wrap.cc',
'src/tracing/agent.cc',
@@ -75,7 +75,7 @@ index f5cd416b5fe7a51084bc4af9a4427a8e62599fd8..5eb70ce3820f2b82121bc102c5182ab7
'src/tracing/node_trace_writer.cc',
'src/tracing/trace_event.cc',
'src/tracing/traced_value.cc',
@@ -314,7 +313,6 @@
@@ -318,7 +317,6 @@
'src/tcp_wrap.h',
'src/timers.h',
'src/tracing/agent.h',


@@ -7,7 +7,7 @@ Subject: build: ensure native module compilation fails if not using a new
This should not be upstreamed, it is a quality-of-life patch for downstream module builders.
diff --git a/common.gypi b/common.gypi
index d9eb9527e3cbb3b101274ab19e6d6ace42f0e022..a1243ad39b8fcf564285ace0b51b1482bd85071b 100644
index d398e33d5acc9acd096e2c263a6f8ad9d947d1b0..f760bb6e6a498c3f9786b46b60fbbf38521ab60a 100644
--- a/common.gypi
+++ b/common.gypi
@@ -89,6 +89,8 @@
@@ -42,19 +42,19 @@ index d9eb9527e3cbb3b101274ab19e6d6ace42f0e022..a1243ad39b8fcf564285ace0b51b1482
# list in v8/BUILD.gn.
['v8_enable_v8_checks == 1', {
diff --git a/configure.py b/configure.py
index fd13970ae73bbe5db186f81faed792a5597bbcd0..162e3b09c92b49cd39d32a87ff97a54555d3e47b 100755
index fc48438060e0dd84edc60d1aebf3d0946be98ea9..4e30f58c3f33ed400301ed08a365a738b49f530f 100755
--- a/configure.py
+++ b/configure.py
@@ -1802,6 +1802,7 @@ def configure_library(lib, output, pkgname=None):
@@ -1810,6 +1810,7 @@ def configure_library(lib, output, pkgname=None):
def configure_v8(o, configs):
set_configuration_variable(configs, 'v8_enable_v8_checks', release=1, debug=0)
set_configuration_variable(configs, 'v8_enable_v8_checks', release=0, debug=1)
+ o['variables']['using_electron_config_gypi'] = 1
o['variables']['v8_enable_webassembly'] = 0 if options.v8_lite_mode else 1
o['variables']['v8_enable_javascript_promise_hooks'] = 1
o['variables']['v8_enable_lite_mode'] = 1 if options.v8_lite_mode else 0
diff --git a/src/node.h b/src/node.h
index ebfd7229b5f0044b628fbe0b03ac211f0c6ed9a6..b92a9d42da8419741c435643b7401efcb21a9e8b 100644
index 2087509f7961dcacf02c634f2c4940e45f374072..0225ff4f43e8b82c08e8ec5492df73223a82066c 100644
--- a/src/node.h
+++ b/src/node.h
@@ -22,6 +22,12 @@


@@ -34,10 +34,10 @@ index a493c9579669072d97c7caa9049e846bda36f8b9..334ffaa6f2d955125ca8b427ace1442c
let kResistStopPropagation;
diff --git a/src/node_builtins.cc b/src/node_builtins.cc
index 2c816bef8d64f3e0ba2993c4885641620ee64272..3377d697615ee168e49e83c4202bc227581f1aaf 100644
index a077ad673fdf7eab61878940e5fef43921c2e453..7922f2f936f64cbb7bd08f0d367f66f0b9eb083b 100644
--- a/src/node_builtins.cc
+++ b/src/node_builtins.cc
@@ -39,6 +39,7 @@ using v8::Value;
@@ -46,6 +46,7 @@ using v8::Value;
BuiltinLoader::BuiltinLoader()
: config_(GetConfig()), code_cache_(std::make_shared<BuiltinCodeCache>()) {
LoadJavaScriptSource();
@@ -46,10 +46,10 @@ index 2c816bef8d64f3e0ba2993c4885641620ee64272..3377d697615ee168e49e83c4202bc227
AddExternalizedBuiltin("internal/deps/undici/undici",
STRINGIFY(NODE_SHARED_BUILTIN_UNDICI_UNDICI_PATH));
diff --git a/src/node_builtins.h b/src/node_builtins.h
index bcdd50f635757f41287c87df1db9cd3b55c4b6b9..e908f3c0e314b90ff7b6c599940ea8f4e657c709 100644
index 490f429986e43653e0dd2048d9e3bd2e99ae44b2..05b1c5bbc38f851b11383b7e3e48c1c92f47aba1 100644
--- a/src/node_builtins.h
+++ b/src/node_builtins.h
@@ -141,6 +141,7 @@ class NODE_EXTERN_PRIVATE BuiltinLoader {
@@ -150,6 +150,7 @@ class NODE_EXTERN_PRIVATE BuiltinLoader {
// Generated by tools/js2c.cc as node_javascript.cc
void LoadJavaScriptSource(); // Loads data into source_


@@ -11,7 +11,7 @@ node-gyp will use the result of `process.config` that reflects the environment
in which the binary got built.
diff --git a/common.gypi b/common.gypi
index a1243ad39b8fcf564285ace0b51b1482bd85071b..60ac7a50718fd8239fd96b811cdccd1c73b2d606 100644
index f760bb6e6a498c3f9786b46b60fbbf38521ab60a..d9ba0816ae424e5eb77fa742f8fd3d2d1c9845fc 100644
--- a/common.gypi
+++ b/common.gypi
@@ -128,6 +128,7 @@


@@ -10,7 +10,7 @@ M151, and so we should allow for building until then.
This patch can be removed at the M151 branch point.
diff --git a/common.gypi b/common.gypi
index 60ac7a50718fd8239fd96b811cdccd1c73b2d606..709eb83801eeed81f79c4305a86d1a19710298c2 100644
index d9ba0816ae424e5eb77fa742f8fd3d2d1c9845fc..537332b67dc7a45ef2b9ca4e439081a9f39a5f69 100644
--- a/common.gypi
+++ b/common.gypi
@@ -677,7 +677,7 @@


@@ -8,10 +8,10 @@ they use themselves as the entry point. We should try to upstream some form
of this.
diff --git a/lib/internal/process/pre_execution.js b/lib/internal/process/pre_execution.js
index 8ed8802adcda308166d700e463c8d6cbcb26d94a..9a99ff6d44320c0e28f4a787d24ea98ae1c96196 100644
index e9ab2b2a8958e99a6cd9d4280641c05eedec18aa..4a42dd66f451eece98fded909a72dd5ffcfd9708 100644
--- a/lib/internal/process/pre_execution.js
+++ b/lib/internal/process/pre_execution.js
@@ -276,12 +276,14 @@ function patchProcessObject(expandArgv1) {
@@ -281,12 +281,14 @@ function patchProcessObject(expandArgv1) {
// the entry point.
if (expandArgv1 && process.argv[1] && process.argv[1][0] !== '-') {
// Expand process.argv[1] into a full path.


@@ -14,10 +14,10 @@ and
This patch can be removed once this is fixed upstream in simdjson.
diff --git a/deps/simdjson/simdjson.h b/deps/simdjson/simdjson.h
index 1d6560e80fab0458b22f0ac2437056bce4873e8f..c3dbe2b6fc08c36a07ced5e29a814f7bcd85b748 100644
index 3a413a7d1e046e93babfdda9982bd602f35ba3fc..76a6d4a30efcf0f5c9d14a314f1be670e8184524 100644
--- a/deps/simdjson/simdjson.h
+++ b/deps/simdjson/simdjson.h
@@ -4215,12 +4215,17 @@ inline std::ostream& operator<<(std::ostream& out, simdjson_result<padded_string
@@ -4408,12 +4408,17 @@ private:
} // namespace simdjson
@@ -35,7 +35,7 @@ index 1d6560e80fab0458b22f0ac2437056bce4873e8f..c3dbe2b6fc08c36a07ced5e29a814f7b
namespace simdjson {
namespace internal {
@@ -4729,6 +4734,9 @@ inline simdjson_result<padded_string> padded_string::load(std::wstring_view file
@@ -5105,6 +5110,9 @@ simdjson_inline bool padded_memory_map::is_valid() const noexcept {
} // namespace simdjson
@@ -45,16 +45,16 @@ index 1d6560e80fab0458b22f0ac2437056bce4873e8f..c3dbe2b6fc08c36a07ced5e29a814f7b
inline simdjson::padded_string operator ""_padded(const char *str, size_t len) {
return simdjson::padded_string(str, len);
}
@@ -4737,6 +4745,8 @@ inline simdjson::padded_string operator ""_padded(const char8_t *str, size_t len
@@ -5113,6 +5121,8 @@ inline simdjson::padded_string operator ""_padded(const char8_t *str, size_t len
return simdjson::padded_string(reinterpret_cast<const char *>(str), len);
}
#endif
+#pragma clang diagnostic pop
+
#endif // SIMDJSON_PADDED_STRING_INL_H
/* end file simdjson/padded_string-inl.h */
/* skipped duplicate #include "simdjson/padded_string_view.h" */
@@ -44745,12 +44755,16 @@ simdjson_inline simdjson_warn_unused std::unique_ptr<ondemand::parser>& parser::
@@ -72412,12 +72422,16 @@ simdjson_inline simdjson_warn_unused std::unique_ptr<ondemand::parser>& parser::
return parser_instance;
}
@@ -71,7 +71,7 @@ index 1d6560e80fab0458b22f0ac2437056bce4873e8f..c3dbe2b6fc08c36a07ced5e29a814f7b
} // namespace ondemand
} // namespace arm64
@@ -59221,12 +59235,16 @@ simdjson_inline simdjson_warn_unused std::unique_ptr<ondemand::parser>& parser::
@@ -85564,12 +85578,16 @@ simdjson_inline simdjson_warn_unused std::unique_ptr<ondemand::parser>& parser::
return parser_instance;
}


@@ -20,7 +20,7 @@ index ab7dc27de3e304f6d912d5834da47e3b4eb25495..b6c0fd4ceee989dac55c7d54e52fef18
}
}
diff --git a/unofficial.gni b/unofficial.gni
index 4ab316e45bd84e43a53335df60f847b17fe6c2fa..def9a302830e493e51cc2b3588816fcbd3a1bb51 100644
index 07ebc4706c6b3208dc9136e6ba0e4d795c238d61..8844e2c3916541b62418c4b891b5a834b910bea4 100644
--- a/unofficial.gni
+++ b/unofficial.gni
@@ -143,7 +143,10 @@ template("node_gn_build") {
@@ -35,8 +35,8 @@ index 4ab316e45bd84e43a53335df60f847b17fe6c2fa..def9a302830e493e51cc2b3588816fcb
public_configs = [
":node_external_config",
"deps/googletest:googletest_config",
@@ -364,6 +367,7 @@ template("node_gn_build") {
"src/embedded_data.h",
@@ -366,6 +369,7 @@ template("node_gn_build") {
"src/builtin_info.h",
]
include_dirs = [ "src", "tools" ]
+ configs += [ "//build/config/compiler:no_exit_time_destructors" ]


@@ -11,7 +11,7 @@ its own blended handler between Node and Blink.
Not upstreamable.
diff --git a/lib/internal/modules/esm/utils.js b/lib/internal/modules/esm/utils.js
index 0af25ebbf6c3f2b790238e32f01addfb648e4e52..bd726088f7480853b8507c39668cc4716c4ce61f 100644
index 5019477c55e5ff1121a2b51168f12e008d54d59e..8eb390fbe16401a7abf836030233832925ab5c9e 100644
--- a/lib/internal/modules/esm/utils.js
+++ b/lib/internal/modules/esm/utils.js
@@ -35,7 +35,7 @@ const {


@@ -9,10 +9,10 @@ modules to sandboxed renderers.
TODO(codebytere): remove and replace with a public facing API.
diff --git a/src/node_binding.cc b/src/node_binding.cc
index 740706e917b7d28c520abdbd743605bf73274f30..9ab30b3c9bc663d2947fcbfaac6f06d2c8f8a5b1 100644
index b76ecc8cab47dfb96adce17294eb0191a60a2efd..0ddc080f6f1e1b3d0aaa0e55c9aa5ddb7409b11b 100644
--- a/src/node_binding.cc
+++ b/src/node_binding.cc
@@ -656,6 +656,10 @@ void GetInternalBinding(const FunctionCallbackInfo<Value>& args) {
@@ -657,6 +657,10 @@ void GetInternalBinding(const FunctionCallbackInfo<Value>& args) {
args.GetReturnValue().Set(exports);
}
@@ -24,10 +24,10 @@ index 740706e917b7d28c520abdbd743605bf73274f30..9ab30b3c9bc663d2947fcbfaac6f06d2
Environment* env = Environment::GetCurrent(args);
diff --git a/src/node_binding.h b/src/node_binding.h
index a55a9c6a5787983c0477cb268ef1355162e72911..3455eb3d223a49cd73d80c72c209c26d49b769dc 100644
index bb6547e5dac4086e29fd588d46e1b69d4e5dbc15..58f68485c298dce116a6a8ee6960c85edcb65e93 100644
--- a/src/node_binding.h
+++ b/src/node_binding.h
@@ -154,6 +154,8 @@ void GetInternalBinding(const v8::FunctionCallbackInfo<v8::Value>& args);
@@ -155,6 +155,8 @@ void GetInternalBinding(const v8::FunctionCallbackInfo<v8::Value>& args);
void GetLinkedBinding(const v8::FunctionCallbackInfo<v8::Value>& args);
void DLOpen(const v8::FunctionCallbackInfo<v8::Value>& args);


@@ -7,7 +7,7 @@ common.gypi is a file that's included in the node header bundle, despite
the fact that we do not build node with gyp.
diff --git a/common.gypi b/common.gypi
index 283c60eab356a5befc15027cd186ea0416914ee6..d9eb9527e3cbb3b101274ab19e6d6ace42f0e022 100644
index 576d5057d988fca43a604a6db6a0b5723e960e2e..d398e33d5acc9acd096e2c263a6f8ad9d947d1b0 100644
--- a/common.gypi
+++ b/common.gypi
@@ -91,6 +91,23 @@


@@ -8,7 +8,7 @@ an ExternalPointerTypeTag parameter. Use kExternalPointerTypeTagDefault
for all existing call sites.
diff --git a/src/crypto/crypto_context.cc b/src/crypto/crypto_context.cc
index d6af2460c3745901415d4e785cf210da8a730a8d..4b5b892b81727c8f93e3041d33902c31d3776a52 100644
index ff7a5917b554fc0c67edf4e36f567e7223e70577..23aaa364e955753a92c3cb575646955c40cdfd55 100644
--- a/src/crypto/crypto_context.cc
+++ b/src/crypto/crypto_context.cc
@@ -2336,7 +2336,7 @@ int SecureContext::TicketCompatibilityCallback(SSL* ssl,
@@ -100,7 +100,7 @@ index d067b47e7e30a95740fe0275c70445707dec426b..391c57eed9058602bd8311d885cf5fc6
env->compile_cache_handler()->MaybeSave(cache_entry, utf8.ToStringView());
}
diff --git a/src/node_util.cc b/src/node_util.cc
index e9f4c1cdb60c03dce210f49e18dda57a4934a8b5..263edfd92e38c66f7912c602b306d420b503a839 100644
index 065ed602b314f367c2e7dec94019521fd5d23bf4..8be8a8b5726a265a838841a21bb023fa41ceeb13 100644
--- a/src/node_util.cc
+++ b/src/node_util.cc
@@ -93,7 +93,7 @@ static void GetExternalValue(


@@ -9,10 +9,10 @@ conflict with Blink's in renderer and worker processes.
We should try to upstream some version of this.
diff --git a/doc/api/cli.md b/doc/api/cli.md
index f05686608297e538f0a6f65abb389281bced4291..c8da076f80a559b9ee6d2ffed831b088c15c8e88 100644
index ba9fcc3a3abf48f7ab4416a7ec13e689fd5802c7..4311a6e35ab85876b24c03036e5d91d5adb5c369 100644
--- a/doc/api/cli.md
+++ b/doc/api/cli.md
@@ -1820,6 +1820,14 @@ changes:
@@ -1802,6 +1802,14 @@ changes:
Disable using [syntax detection][] to determine module type.
@@ -27,7 +27,7 @@ index f05686608297e538f0a6f65abb389281bced4291..c8da076f80a559b9ee6d2ffed831b088
### `--no-experimental-global-navigator`
<!-- YAML
@@ -3499,6 +3507,7 @@ one is included in the list below.
@@ -3516,6 +3524,7 @@ one is included in the list below.
* `--no-addons`
* `--no-async-context-frame`
* `--no-deprecation`
@@ -36,7 +36,7 @@ index f05686608297e538f0a6f65abb389281bced4291..c8da076f80a559b9ee6d2ffed831b088
* `--no-experimental-repl-await`
* `--no-experimental-sqlite`
diff --git a/doc/node.1 b/doc/node.1
index 9a0f5beb5b995fb92b31514c166e1c76e18d8ca9..fab0b24b630e755658b58a7281df1dacc4b7f32a 100644
index bed0774b43a21a75eac7466905d7c06d6e9f4f8a..ebc364bbd79972f012a7489853530adc87b580a6 100644
--- a/doc/node.1
+++ b/doc/node.1
@@ -201,6 +201,9 @@ Enable transformation of TypeScript-only syntax into JavaScript code.
@@ -50,7 +50,7 @@ index 9a0f5beb5b995fb92b31514c166e1c76e18d8ca9..fab0b24b630e755658b58a7281df1dac
Disable experimental support for the WebSocket API.
.
diff --git a/lib/internal/process/pre_execution.js b/lib/internal/process/pre_execution.js
index 9a99ff6d44320c0e28f4a787d24ea98ae1c96196..8f5810267d1ba430bae02be141f087f2a5d3cf9f 100644
index 4a42dd66f451eece98fded909a72dd5ffcfd9708..a693e35135fc8f7e918e1410a104d4607ece5489 100644
--- a/lib/internal/process/pre_execution.js
+++ b/lib/internal/process/pre_execution.js
@@ -117,6 +117,7 @@ function prepareExecution(options) {
@@ -61,7 +61,7 @@ index 9a99ff6d44320c0e28f4a787d24ea98ae1c96196..8f5810267d1ba430bae02be141f087f2
setupWebsocket();
setupEventsource();
setupCodeCoverage();
@@ -345,6 +346,16 @@ function setupWarningHandler() {
@@ -350,6 +351,16 @@ function setupWarningHandler() {
}
}
@@ -79,10 +79,10 @@ index 9a99ff6d44320c0e28f4a787d24ea98ae1c96196..8f5810267d1ba430bae02be141f087f2
function setupWebsocket() {
if (getOptionValue('--no-experimental-websocket')) {
diff --git a/src/node_options.cc b/src/node_options.cc
index f6f81f50c8bd91a72ca96093dc64c183bd58039b..aa18dab6e4171d8a7f0af4b7db1b8c2c07191859 100644
index 79d7c8cac002ba85b95c1d58a8e7cbf84fc35e89..cf66678164f4fca371f853c8afe3c0b7e7e3bc82 100644
--- a/src/node_options.cc
+++ b/src/node_options.cc
@@ -545,7 +545,11 @@ EnvironmentOptionsParser::EnvironmentOptionsParser() {
@@ -551,7 +551,11 @@ EnvironmentOptionsParser::EnvironmentOptionsParser() {
&EnvironmentOptions::experimental_eventsource,
kAllowedInEnvvar,
false);


@@ -11,19 +11,16 @@ We can fix this by allowing the C++ implementation of legacyMainResolve to use
a fileExists function that does take Asar into account.
diff --git a/lib/internal/modules/esm/resolve.js b/lib/internal/modules/esm/resolve.js
index 81799fc159cf20344aac64cd7129240deb9a4fe8..12b476ff97603718186dd25b1f435d377841bd89 100644
index b01eafd3c476c065a16701317aca2a1def559376..a7471f3c55d94b7572061f5a04cb67cef120fe79 100644
--- a/lib/internal/modules/esm/resolve.js
+++ b/lib/internal/modules/esm/resolve.js
@@ -28,14 +28,13 @@ const { BuiltinModule } = require('internal/bootstrap/realm');
@@ -28,11 +28,10 @@ const { BuiltinModule } = require('internal/bootstrap/realm');
const fs = require('fs');
const { getOptionValue } = require('internal/options');
// Do not eagerly grab .manifest, it may be in TDZ
-const { sep, posix: { relative: relativePosixPath }, resolve } = require('path');
+const { sep, posix: { relative: relativePosixPath }, toNamespacedPath, resolve } = require('path');
const preserveSymlinks = getOptionValue('--preserve-symlinks');
const preserveSymlinksMain = getOptionValue('--preserve-symlinks-main');
const inputTypeFlag = getOptionValue('--input-type');
-const { URL, pathToFileURL, fileURLToPath, isURL, URLParse } = require('internal/url');
+const { sep, posix: { relative: relativePosixPath }, toNamespacedPath, resolve } = require('path');
+const { URL, pathToFileURL, fileURLToPath, isURL, URLParse, toPathIfFileURL } = require('internal/url');
const { getCWDURL, setOwnProperty } = require('internal/util');
const { canParse: URLCanParse } = internalBinding('url');
@@ -31,7 +28,7 @@ index 81799fc159cf20344aac64cd7129240deb9a4fe8..12b476ff97603718186dd25b1f435d37
const {
ERR_INPUT_TYPE_NOT_ALLOWED,
ERR_INVALID_ARG_TYPE,
@@ -184,6 +183,11 @@ const legacyMainResolveExtensionsIndexes = {
@@ -180,6 +179,11 @@ const legacyMainResolveExtensionsIndexes = {
kResolvedByPackageAndNode: 9,
};
@@ -43,7 +40,7 @@ index 81799fc159cf20344aac64cd7129240deb9a4fe8..12b476ff97603718186dd25b1f435d37
/**
* Legacy CommonJS main resolution:
* 1. let M = pkg_url + (json main field)
@@ -202,7 +206,7 @@ function legacyMainResolve(packageJSONUrl, packageConfig, base) {
@@ -198,7 +202,7 @@ function legacyMainResolve(packageJSONUrl, packageConfig, base) {
const baseStringified = isURL(base) ? base.href : base;
@@ -53,10 +50,10 @@ index 81799fc159cf20344aac64cd7129240deb9a4fe8..12b476ff97603718186dd25b1f435d37
const maybeMain = resolvedOption <= legacyMainResolveExtensionsIndexes.kResolvedByMainIndexNode ?
packageConfig.main || './' : '';
diff --git a/src/node_file.cc b/src/node_file.cc
index c69b4eb461cab79906833152d02f76f81149ad7e..96aac2d86695732bf6805f2ad2168a62241b5045 100644
index 65bd2661bbceba2f1ab76f662d8a25f3419ed723..bf202f5e2bf5eaf2dd9192dfd701e621126c492c 100644
--- a/src/node_file.cc
+++ b/src/node_file.cc
@@ -3599,13 +3599,25 @@ static void CpSyncCopyDir(const FunctionCallbackInfo<Value>& args) {
@@ -3640,13 +3640,25 @@ static void CpSyncCopyDir(const FunctionCallbackInfo<Value>& args) {
}
BindingData::FilePathIsFileReturnType BindingData::FilePathIsFile(
@@ -83,7 +80,7 @@ index c69b4eb461cab79906833152d02f76f81149ad7e..96aac2d86695732bf6805f2ad2168a62
uv_fs_t req;
int rc = uv_fs_stat(env->event_loop(), &req, file_path.c_str(), nullptr);
@@ -3663,6 +3675,11 @@ void BindingData::LegacyMainResolve(const FunctionCallbackInfo<Value>& args) {
@@ -3704,6 +3716,11 @@ void BindingData::LegacyMainResolve(const FunctionCallbackInfo<Value>& args) {
std::optional<std::string> initial_file_path;
std::string file_path;
@@ -95,7 +92,7 @@ index c69b4eb461cab79906833152d02f76f81149ad7e..96aac2d86695732bf6805f2ad2168a62
if (args.Length() >= 2 && args[1]->IsString()) {
auto package_config_main = Utf8Value(isolate, args[1]).ToString();
@@ -3683,7 +3700,7 @@ void BindingData::LegacyMainResolve(const FunctionCallbackInfo<Value>& args) {
@@ -3724,7 +3741,7 @@ void BindingData::LegacyMainResolve(const FunctionCallbackInfo<Value>& args) {
BufferValue buff_file_path(isolate, local_file_path);
ToNamespacedPath(env, &buff_file_path);
@@ -104,7 +101,7 @@ index c69b4eb461cab79906833152d02f76f81149ad7e..96aac2d86695732bf6805f2ad2168a62
case BindingData::FilePathIsFileReturnType::kIsFile:
return args.GetReturnValue().Set(i);
case BindingData::FilePathIsFileReturnType::kIsNotFile:
@@ -3720,7 +3737,7 @@ void BindingData::LegacyMainResolve(const FunctionCallbackInfo<Value>& args) {
@@ -3761,7 +3778,7 @@ void BindingData::LegacyMainResolve(const FunctionCallbackInfo<Value>& args) {
BufferValue buff_file_path(isolate, local_file_path);
ToNamespacedPath(env, &buff_file_path);
@@ -114,7 +111,7 @@ index c69b4eb461cab79906833152d02f76f81149ad7e..96aac2d86695732bf6805f2ad2168a62
return args.GetReturnValue().Set(i);
case BindingData::FilePathIsFileReturnType::kIsNotFile:
diff --git a/src/node_file.h b/src/node_file.h
index 224fa6f7ade375cb673c8adcc95927fa04f9c248..343c6bec67e6cf70ffb91b87e7837dbaf6071cee 100644
index 95bc5802051642a0e8c21ac97fd402396d7d45aa..f84886d0c33dcad50370775c9437bf0a96fd9204 100644
--- a/src/node_file.h
+++ b/src/node_file.h
@@ -101,7 +101,8 @@ class BindingData : public SnapshotableObject {


@@ -39,7 +39,7 @@ index 66331d2d9999e93e59cbce9e153affb942b79946..0f7fd0747ea779f76a9e64ed37d195bd
if (secureOptions) {
validateInteger(secureOptions, 'secureOptions');
diff --git a/src/crypto/crypto_context.cc b/src/crypto/crypto_context.cc
index ca62d3001bf51193d78caac0cccd93c188a8410c..d6af2460c3745901415d4e785cf210da8a730a8d 100644
index fd2dfa9fcf444fe705a2d42cd0963531cea9a74c..ff7a5917b554fc0c67edf4e36f567e7223e70577 100644
--- a/src/crypto/crypto_context.cc
+++ b/src/crypto/crypto_context.cc
@@ -1377,10 +1377,8 @@ SecureContext::SecureContext(Environment* env, Local<Object> wrap)


@@ -42,7 +42,7 @@ index 9876c4bb6ecd2e5b8879f153811cd0a0a22997aa..2c4bf03452eb10fec52c38a361b6aad9
// Test Parallel Execution w/ KeyObject is threadsafe in openssl3
{
diff --git a/test/parallel/test-crypto-authenticated.js b/test/parallel/test-crypto-authenticated.js
index e8fedf2d5d5072e00afd493ac2ac44748212b02e..6fcbe244871d25b2151d39160149aaa50dc96012 100644
index 9778ea548e81d719f1ca01f9e6fb8cfb821d4103..d6178dba6faa94778203d615b0638bd5afb4f9da 100644
--- a/test/parallel/test-crypto-authenticated.js
+++ b/test/parallel/test-crypto-authenticated.js
@@ -627,21 +627,25 @@ for (const test of TEST_CASES) {
@@ -130,13 +130,20 @@ index e8fedf2d5d5072e00afd493ac2ac44748212b02e..6fcbe244871d25b2151d39160149aaa5
const rfcTestCases = TEST_CASES.filter(({ algo, tampered }) => {
return algo === 'chacha20-poly1305' && tampered === false;
});
@@ -771,4 +781,6 @@ for (const test of TEST_CASES) {
@@ -771,10 +781,12 @@ for (const test of TEST_CASES) {
assert.throws(() => {
decipher.final();
}, /Unsupported state or unable to authenticate data/);
+} else {
+ common.printSkipMessage('Skipping unsupported chacha20-poly1305 test');
}
// Refs: https://github.com/nodejs/node/issues/62342
-{
+if (ciphers.includes('aes-128-ccm')) {
const key = crypto.randomBytes(16);
const nonce = crypto.randomBytes(13);
diff --git a/test/parallel/test-crypto-cipheriv-decipheriv.js b/test/parallel/test-crypto-cipheriv-decipheriv.js
index 6742722f9e90914b4dc8c079426d10040d476f72..8801ddfe7023fd0f7d5657b86a9164d75765322e 100644
--- a/test/parallel/test-crypto-cipheriv-decipheriv.js
@@ -298,6 +305,58 @@ index d22281abbd5c3cab3aaa3ac494301fa6b4a8a968..5f0c6a4aed2e868a1a1049212edf2187
s.pipe(h).on('data', common.mustCall(function(c) {
assert.strictEqual(c, expect);
diff --git a/test/parallel/test-crypto-key-objects-raw.js b/test/parallel/test-crypto-key-objects-raw.js
index f301cc1942fd9a46ea91e18e580504d09ce53e48..a025c74d8832e55999f7513e9525d48533670c4b 100644
--- a/test/parallel/test-crypto-key-objects-raw.js
+++ b/test/parallel/test-crypto-key-objects-raw.js
@@ -34,10 +34,13 @@ const { hasOpenSSL } = require('../common/crypto');
// Key types that don't support raw-* formats
{
- for (const [type, pub, priv] of [
+ const unsupportedKeyTypes = [
['rsa', 'rsa_public_2048.pem', 'rsa_private_2048.pem'],
- ['dsa', 'dsa_public.pem', 'dsa_private.pem'],
- ]) {
+ ];
+ if (!process.features.openssl_is_boringssl) {
+ unsupportedKeyTypes.push(['dsa', 'dsa_public.pem', 'dsa_private.pem']);
+ }
+ for (const [type, pub, priv] of unsupportedKeyTypes) {
const pubKeyObj = crypto.createPublicKey(
fixtures.readKey(pub, 'ascii'));
const privKeyObj = crypto.createPrivateKey(
@@ -58,7 +61,7 @@ const { hasOpenSSL } = require('../common/crypto');
}
// DH keys also don't support raw formats
- {
+ if (!process.features.openssl_is_boringssl) {
const privKeyObj = crypto.createPrivateKey(
fixtures.readKey('dh_private.pem', 'ascii'));
assert.throws(() => privKeyObj.export({ format: 'raw-private' }),
@@ -224,7 +227,9 @@ if (hasOpenSSL(3, 5)) {
assert.throws(() => ecPriv.export({ format: 'raw-seed' }),
{ code: 'ERR_CRYPTO_INCOMPATIBLE_KEY_OPTIONS' });
- for (const type of ['ed25519', 'ed448', 'x25519', 'x448']) {
+ const seedKeyTypes = process.features.openssl_is_boringssl ?
+ ['ed25519', 'x25519'] : ['ed25519', 'ed448', 'x25519', 'x448'];
+ for (const type of seedKeyTypes) {
const priv = crypto.createPrivateKey(
fixtures.readKey(`${type}_private.pem`, 'ascii'));
assert.throws(() => priv.export({ format: 'raw-seed' }),
@@ -392,7 +397,9 @@ if (hasOpenSSL(3, 5)) {
// x25519, ed25519, x448, and ed448 cannot be used as 'ec' namedCurve values
{
- for (const type of ['ed25519', 'x25519', 'ed448', 'x448']) {
+ const curveTypes = process.features.openssl_is_boringssl ?
+ ['ed25519', 'x25519'] : ['ed25519', 'x25519', 'ed448', 'x448'];
+ for (const type of curveTypes) {
const priv = crypto.createPrivateKey(
fixtures.readKey(`${type}_private.pem`, 'ascii'));
const pub = crypto.createPublicKey(
diff --git a/test/parallel/test-crypto-key-objects-to-crypto-key.js b/test/parallel/test-crypto-key-objects-to-crypto-key.js
index 141e51d1ab74a4fc3b176b303807fb1cf2a58ce1..ba4fc881aa72ba7c39e8ae227a08be0ecf501c6f 100644
--- a/test/parallel/test-crypto-key-objects-to-crypto-key.js
@@ -334,10 +393,10 @@ index 141e51d1ab74a4fc3b176b303807fb1cf2a58ce1..ba4fc881aa72ba7c39e8ae227a08be0e
assert.throws(() => {
publicKey.toCryptoKey(algorithm === 'Ed25519' ? 'X25519' : 'Ed25519', true, []);
diff --git a/test/parallel/test-crypto-key-objects.js b/test/parallel/test-crypto-key-objects.js
index e8359ed6d0362c6e8da8be08b0fd42245fa7ae47..bd8211d98261a1acc928e849bf713578c85ff877 100644
index 6c1c3fd3afa448f4a84da6c33873bb2a7a1d6c31..257c34b13995b54c9bd4ec5ffb2e2ba7d0f915f1 100644
--- a/test/parallel/test-crypto-key-objects.js
+++ b/test/parallel/test-crypto-key-objects.js
@@ -302,11 +302,11 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
@@ -319,11 +319,11 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
}, hasOpenSSL3 ? {
message: 'error:1E08010C:DECODER routines::unsupported',
} : {
@@ -352,7 +411,7 @@ index e8359ed6d0362c6e8da8be08b0fd42245fa7ae47..bd8211d98261a1acc928e849bf713578
});
// This should not abort either: https://github.com/nodejs/node/issues/29904
@@ -329,12 +329,12 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
@@ -346,12 +346,12 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
message: /error:1E08010C:DECODER routines::unsupported/,
library: 'DECODER routines'
} : {
@@ -368,7 +427,7 @@ index e8359ed6d0362c6e8da8be08b0fd42245fa7ae47..bd8211d98261a1acc928e849bf713578
{ private: fixtures.readKey('ed25519_private.pem', 'ascii'),
public: fixtures.readKey('ed25519_public.pem', 'ascii'),
keyType: 'ed25519',
@@ -344,17 +344,6 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
@@ -361,17 +361,6 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
d: 'wVK6M3SMhQh3NK-7GRrSV-BVWQx1FO5pW8hhQeu_NdA',
kty: 'OKP'
} },
@@ -386,7 +445,7 @@ index e8359ed6d0362c6e8da8be08b0fd42245fa7ae47..bd8211d98261a1acc928e849bf713578
{ private: fixtures.readKey('x25519_private.pem', 'ascii'),
public: fixtures.readKey('x25519_public.pem', 'ascii'),
keyType: 'x25519',
@@ -364,18 +353,37 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
@@ -381,18 +370,37 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
d: 'mL_IWm55RrALUGRfJYzw40gEYWMvtRkesP9mj8o8Omc',
kty: 'OKP'
} },
@@ -429,7 +488,7 @@ index e8359ed6d0362c6e8da8be08b0fd42245fa7ae47..bd8211d98261a1acc928e849bf713578
const keyType = info.keyType;
{
@@ -417,7 +425,7 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
@@ -461,7 +469,7 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
}
});
@@ -438,7 +497,7 @@ index e8359ed6d0362c6e8da8be08b0fd42245fa7ae47..bd8211d98261a1acc928e849bf713578
{ private: fixtures.readKey('ec_p256_private.pem', 'ascii'),
public: fixtures.readKey('ec_p256_public.pem', 'ascii'),
keyType: 'ec',
@@ -429,17 +437,6 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
@@ -473,17 +481,6 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
x: 'X0mMYR_uleZSIPjNztIkAS3_ud5LhNpbiIFp6fNf2Gs',
y: 'UbJuPy2Xi0lW7UYTBxPK3yGgDu9EAKYIecjkHX5s2lI'
} },
@@ -456,7 +515,7 @@ index e8359ed6d0362c6e8da8be08b0fd42245fa7ae47..bd8211d98261a1acc928e849bf713578
{ private: fixtures.readKey('ec_p384_private.pem', 'ascii'),
public: fixtures.readKey('ec_p384_public.pem', 'ascii'),
keyType: 'ec',
@@ -465,7 +462,25 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
@@ -509,7 +506,25 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
y: 'Ad3flexBeAfXceNzRBH128kFbOWD6W41NjwKRqqIF26vmgW_8COldGKZjFkOSEASxPB' +
'cvA2iFJRUyQ3whC00j0Np'
} },
@@ -483,7 +542,7 @@ index e8359ed6d0362c6e8da8be08b0fd42245fa7ae47..bd8211d98261a1acc928e849bf713578
const { keyType, namedCurve } = info;
{
@@ -540,7 +555,7 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
@@ -623,7 +638,7 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
format: 'pem',
passphrase: Buffer.alloc(1024, 'a')
}), {
@@ -492,7 +551,7 @@ index e8359ed6d0362c6e8da8be08b0fd42245fa7ae47..bd8211d98261a1acc928e849bf713578
});
const publicKey = createPublicKey(publicDsa);
@@ -566,7 +581,7 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
@@ -649,7 +664,7 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
{
// Test RSA-PSS.
@@ -501,7 +560,7 @@ index e8359ed6d0362c6e8da8be08b0fd42245fa7ae47..bd8211d98261a1acc928e849bf713578
// This key pair does not restrict the message digest algorithm or salt
// length.
const publicPem = fixtures.readKey('rsa_pss_public_2048.pem');
@@ -625,6 +640,8 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
@@ -708,6 +723,8 @@ const privateDsa = fixtures.readKey('dsa_private_encrypted_1025.pem',
}, {
code: 'ERR_CRYPTO_INCOMPATIBLE_KEY_OPTIONS'
});
@@ -541,7 +600,7 @@ index 75cb4800ff1bd51fedd7bc4e2d7e6af6f4f48346..b4363c31592763235116d970a5f45d4c
{
// Default outputLengths.
diff --git a/test/parallel/test-crypto-pqc-key-objects-ml-dsa.js b/test/parallel/test-crypto-pqc-key-objects-ml-dsa.js
index 37eab463deae472a78102c9fc6e03d4b642854ce..99e8c47702c55a9518ff093a58d87c753bec3aa8 100644
index aef1012098fbb95d009093b27dbf204420f74269..99761be00162ed4e7b9c9b536e2d8861f185d687 100644
--- a/test/parallel/test-crypto-pqc-key-objects-ml-dsa.js
+++ b/test/parallel/test-crypto-pqc-key-objects-ml-dsa.js
@@ -4,6 +4,10 @@ const common = require('../common');
@@ -656,7 +715,7 @@ index eafdfe392bde8eb1fde1dc7e7e9ae51682c74b87..2907e0175379266c90acb9df829d1028
};
assert.throws(() => crypto.scrypt('pass', 'salt', 1, options, () => {}),
diff --git a/test/parallel/test-crypto-sign-verify.js b/test/parallel/test-crypto-sign-verify.js
index a66f0a94efd7c952c1d2320fbc7a39fe3a88a8a1..dc5846db0e3dcf8f7cb5f7efcdbc81c1d767ab88 100644
index 1900f244b8491ae422963a5145663ea20e55ce27..cf7f641710c26e2ae42af34b6379e15627428bb1 100644
--- a/test/parallel/test-crypto-sign-verify.js
+++ b/test/parallel/test-crypto-sign-verify.js
@@ -33,7 +33,7 @@ const keySize = 2048;
@@ -690,21 +749,22 @@ index a66f0a94efd7c952c1d2320fbc7a39fe3a88a8a1..dc5846db0e3dcf8f7cb5f7efcdbc81c1
});
}
@@ -423,11 +423,13 @@ assert.throws(
public: fixtures.readKey('ed25519_public.pem', 'ascii'),
@@ -424,12 +424,14 @@ assert.throws(
algo: null,
sigLen: 64 },
sigLen: 64,
raw: true },
+ /*
{ private: fixtures.readKey('ed448_private.pem', 'ascii'),
public: fixtures.readKey('ed448_public.pem', 'ascii'),
algo: null,
supportsContext: true,
sigLen: 114 },
sigLen: 114,
raw: true },
+ */
{ private: fixtures.readKey('rsa_private_2048.pem', 'ascii'),
public: fixtures.readKey('rsa_public_2048.pem', 'ascii'),
algo: 'sha1',
@@ -547,7 +549,7 @@ assert.throws(
@@ -573,7 +575,7 @@ assert.throws(
{
const data = Buffer.from('Hello world');
@@ -868,10 +928,10 @@ index d21a6bd3d98d6db26cc82896e62da2869cf22842..21553911f8e16a76187bfff120dfbeea
// Make sure memory isn't released before being returned
diff --git a/test/parallel/test-tls-client-auth.js b/test/parallel/test-tls-client-auth.js
index 04bf40b9a9e1ac6b92e98e3c4201c3e6e427d70c..495a7590be29370900659d1385afcbbb99a1fbf8 100644
index 67aed40914c9fe87b654e28be942a530ec8ecd92..aac870c3075bc8d52543912ba41b5a1e9424480f 100644
--- a/test/parallel/test-tls-client-auth.js
+++ b/test/parallel/test-tls-client-auth.js
@@ -110,7 +110,7 @@ if (tls.DEFAULT_MAX_VERSION === 'TLSv1.3') connect({
@@ -111,7 +111,7 @@ if (tls.DEFAULT_MAX_VERSION === 'TLSv1.3') connect({
// and sends a fatal Alert to the client that the client discovers there has
// been a fatal error.
pair.client.conn.once('error', common.mustCall((err) => {
@@ -947,10 +1007,10 @@ index 53fcc0b16b5bd6f50c334fb7cc5671e31c1546b9..da428f1320e9e7bd1683724806a7438e
client.end();
server.close();
diff --git a/test/parallel/test-tls-set-sigalgs.js b/test/parallel/test-tls-set-sigalgs.js
index 985ca13ba2ac7d58f87c263c7654c4f4087efddf..21c199bdb12739f82a075c4e10e08faf8c587cf4 100644
index 1bce814f3e86042901395bc72afc0ccdfb840eee..ef2719a88685240a813541187e4f1a1ef7eb242d 100644
--- a/test/parallel/test-tls-set-sigalgs.js
+++ b/test/parallel/test-tls-set-sigalgs.js
@@ -65,13 +65,14 @@ test('RSA-PSS+SHA256:RSA-PSS+SHA512:ECDSA+SHA256',
@@ -65,14 +65,15 @@ test('RSA-PSS+SHA256:RSA-PSS+SHA512:ECDSA+SHA256',
'RSA-PSS+SHA256:ECDSA+SHA256',
['RSA-PSS+SHA256', 'ECDSA+SHA256']);
@@ -958,8 +1018,9 @@ index 985ca13ba2ac7d58f87c263c7654c4f4087efddf..21c199bdb12739f82a075c4e10e08faf
+ 'ERR_SSL_NO_COMMON_SIGNATURE_ALGORITHMS' : 'ERR_SSL_NO_SHARED_SIGNATURE_ALGORITHMS';
+
// Do not have shared sigalgs.
const handshakeErr = hasOpenSSL(3, 2) ?
'ERR_SSL_SSL/TLS_ALERT_HANDSHAKE_FAILURE' : 'ERR_SSL_SSLV3_ALERT_HANDSHAKE_FAILURE';
const handshakeErr = hasOpenSSL(4, 0) ?
'ERR_SSL_TLS_ALERT_HANDSHAKE_FAILURE' : hasOpenSSL(3, 2) ?
'ERR_SSL_SSL/TLS_ALERT_HANDSHAKE_FAILURE' : 'ERR_SSL_SSLV3_ALERT_HANDSHAKE_FAILURE';
test('RSA-PSS+SHA384', 'ECDSA+SHA256',
- undefined, handshakeErr,
- 'ERR_SSL_NO_SHARED_SIGNATURE_ALGORITHMS');
@@ -990,6 +1051,38 @@ index ae203e1005de0ab4370bd611f4f2ae64bb7a9a6a..216ce5fd14001183e7deb2abadc93178
+} else {
+ common.printSkipMessage('Skipping RSA key import tests');
}
diff --git a/test/parallel/test-webcrypto-promise-prototype-pollution.mjs b/test/parallel/test-webcrypto-promise-prototype-pollution.mjs
index b4fbedba5e32423821879a856cc56716bacb77fe..a927089fbf1f04710b66ecdc0d870c722f501f6a 100644
--- a/test/parallel/test-webcrypto-promise-prototype-pollution.mjs
+++ b/test/parallel/test-webcrypto-promise-prototype-pollution.mjs
@@ -59,17 +59,19 @@ await subtle.deriveKey(
true,
['encrypt', 'decrypt']);
-const wrappingKey = await subtle.generateKey(
- { name: 'AES-KW', length: 256 }, true, ['wrapKey', 'unwrapKey']);
+if (!process.features.openssl_is_boringssl) {
+ const wrappingKey = await subtle.generateKey(
+ { name: 'AES-KW', length: 256 }, true, ['wrapKey', 'unwrapKey']);
-const keyToWrap = await subtle.generateKey(
- { name: 'AES-CBC', length: 256 }, true, ['encrypt', 'decrypt']);
+ const keyToWrap = await subtle.generateKey(
+ { name: 'AES-CBC', length: 256 }, true, ['encrypt', 'decrypt']);
-const wrapped = await subtle.wrapKey('raw', keyToWrap, wrappingKey, 'AES-KW');
+ const wrapped = await subtle.wrapKey('raw', keyToWrap, wrappingKey, 'AES-KW');
-await subtle.unwrapKey(
- 'raw', wrapped, wrappingKey, 'AES-KW',
- { name: 'AES-CBC', length: 256 }, true, ['encrypt', 'decrypt']);
+ await subtle.unwrapKey(
+ 'raw', wrapped, wrappingKey, 'AES-KW',
+ { name: 'AES-CBC', length: 256 }, true, ['encrypt', 'decrypt']);
+}
const { privateKey } = await subtle.generateKey(
{ name: 'ECDSA', namedCurve: 'P-256' }, true, ['sign', 'verify']);
diff --git a/test/parallel/test-webcrypto-wrap-unwrap.js b/test/parallel/test-webcrypto-wrap-unwrap.js
index bd788ec4ed88289d35798b8af8c9490a68e081a2..c6a6f33490595faabaefc9b58afdd813f0887258 100644
--- a/test/parallel/test-webcrypto-wrap-unwrap.js


@@ -6,10 +6,10 @@ Subject: fix: do not resolve electron entrypoints
This wastes fs cycles and can result in strange behavior if this path actually exists on disk
diff --git a/lib/internal/modules/esm/translators.js b/lib/internal/modules/esm/translators.js
index a6fbcb6fd3c2413df96273d93b7339cad3f25f7a..130fe48b233691d8ee4c5d56f80d331924619008 100644
index a048d887c4717b2c187e162d8b559285cf540a10..9a15b872a81e9cd7aac3015c58caeb5ae81e5ac8 100644
--- a/lib/internal/modules/esm/translators.js
+++ b/lib/internal/modules/esm/translators.js
@@ -392,6 +392,10 @@ function cjsPreparseModuleExports(filename, source, format) {
@@ -390,6 +390,10 @@ function cjsPreparseModuleExports(filename, source, format) {
return { module, exportNames: module[kModuleExportNames] };
}


@@ -8,10 +8,10 @@ an API override to replace the native `ReadFileSync` in the `modules`
binding.
diff --git a/src/env_properties.h b/src/env_properties.h
index 454750db0113d289e7f8c8cb160e91797790572c..09786f710a88e0243bfaab10d0eca5cb2db62245 100644
index 75b718e6171853e77737f96ddff16bf2aa178cec..86dd5c6e36312eb68fb7e3282ffc5289c0707f48 100644
--- a/src/env_properties.h
+++ b/src/env_properties.h
@@ -492,6 +492,7 @@
@@ -496,6 +496,7 @@
V(maybe_cache_generated_source_map, v8::Function) \
V(messaging_deserialize_create_object, v8::Function) \
V(message_port, v8::Object) \


@@ -6,10 +6,10 @@ Subject: fix: expose the built-in electron module via the ESM loader
This allows usage of `import { app } from 'electron'` and `import('electron')` natively in the browser + non-sandboxed renderer
diff --git a/lib/internal/modules/esm/get_format.js b/lib/internal/modules/esm/get_format.js
index 48ccb97a6244eab4bcfbee92feb9829ca32ad0c5..043d094540845228556b0f9837f48fbaddedfb47 100644
index 4f334c7d88c3369578880d5d343b756c3dda0a5a..c26059eb083ae721eddeeb85a3a0a1cb84217fe2 100644
--- a/lib/internal/modules/esm/get_format.js
+++ b/lib/internal/modules/esm/get_format.js
@@ -26,6 +26,7 @@ const protocolHandlers = {
@@ -76,6 +76,7 @@ const protocolHandlers = {
'data:': getDataProtocolModuleFormat,
'file:': getFileProtocolModuleFormat,
'node:'() { return 'builtin'; },
@@ -64,10 +64,10 @@ index c284163fba86ec820af1996571fbd3d092d41d34..5f1921d15bc1d3a68c35990f85e36a0e
}
}
diff --git a/lib/internal/modules/esm/loader.js b/lib/internal/modules/esm/loader.js
index 22c1e9f1ae652b033903f56f394352806ddff754..961da666a233541203b5416909fd1ff0326e63e1 100644
index 9d08c0819b1ccecb5402265979df857da06e389e..c09274bd7b3ff2a4f8a8f5c5c91cdeb9f900960e 100644
--- a/lib/internal/modules/esm/loader.js
+++ b/lib/internal/modules/esm/loader.js
@@ -437,7 +437,7 @@ class ModuleLoader {
@@ -415,7 +415,7 @@ class ModuleLoader {
assert(wrap instanceof ModuleWrap, `Translator used for require(${url}) should not be async`);
const cjsModule = wrap[imported_cjs_symbol];
@@ -77,10 +77,10 @@ index 22c1e9f1ae652b033903f56f394352806ddff754..961da666a233541203b5416909fd1ff0
if (cjsModule?.[kIsExecuting]) {
const parentFilename = urlToFilename(parentURL);
diff --git a/lib/internal/modules/esm/resolve.js b/lib/internal/modules/esm/resolve.js
index cc1230648881d8d14ba3902fca78291c90fb79fb..edf347102fedbb28bce221defa99c37b5834024b 100644
index cbdc120302443cd45d0d568069716b466600ebb3..008160b7d5e8efde13c7907e2b7212974a04d3f2 100644
--- a/lib/internal/modules/esm/resolve.js
+++ b/lib/internal/modules/esm/resolve.js
@@ -751,6 +751,9 @@ function packageImportsResolve(name, base, conditions) {
@@ -747,6 +747,9 @@ function packageImportsResolve(name, base, conditions) {
throw importNotDefined(name, packageJSONUrl, base);
}
@@ -90,7 +90,7 @@ index cc1230648881d8d14ba3902fca78291c90fb79fb..edf347102fedbb28bce221defa99c37b
/**
* Resolves a package specifier to a URL.
@@ -765,6 +768,11 @@ function packageResolve(specifier, base, conditions) {
@@ -761,6 +764,11 @@ function packageResolve(specifier, base, conditions) {
return new URL('node:' + specifier);
}
@@ -103,10 +103,10 @@ index cc1230648881d8d14ba3902fca78291c90fb79fb..edf347102fedbb28bce221defa99c37b
const packageConfig = packageJsonReader.read(packageJSONPath, { __proto__: null, specifier, base, isESM: true });
diff --git a/lib/internal/modules/esm/translators.js b/lib/internal/modules/esm/translators.js
index d6c96996a900da8e7d4f7f5104312e73e72c2d62..ec76d7ffa45f49721d395e8e33be79114daa0369 100644
index 12c3cef5e6801437037aaf83465d30b5bfd1a59a..12d90864129b20ac29eddc0545e6ae629d99049e 100644
--- a/lib/internal/modules/esm/translators.js
+++ b/lib/internal/modules/esm/translators.js
@@ -214,7 +214,9 @@ function createCJSModuleWrap(url, translateContext, parentURL, loadCJS = loadCJS
@@ -212,7 +212,9 @@ function createCJSModuleWrap(url, translateContext, parentURL, loadCJS = loadCJS
const { exportNames, module } = cjsPreparseModuleExports(filename, source, sourceFormat);
cjsCache.set(url, module);
@@ -117,7 +117,7 @@ index d6c96996a900da8e7d4f7f5104312e73e72c2d62..ec76d7ffa45f49721d395e8e33be7911
if (!exportNames.has('default')) {
ArrayPrototypePush(wrapperNames, 'default');
}
@@ -313,6 +315,10 @@ translators.set('require-commonjs', (url, translateContext, parentURL) => {
@@ -311,6 +313,10 @@ translators.set('require-commonjs', (url, translateContext, parentURL) => {
return createCJSModuleWrap(url, translateContext, parentURL);
});


@@ -1,34 +0,0 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Shelley Vohr <shelley.vohr@gmail.com>
Date: Tue, 10 Feb 2026 16:48:47 +0100
Subject: fix: generate_config_gypi needs to generate valid JSON
Node.js added new process.config.variables entries, which the GN generator
emitted with Python repr (single quotes). This caused JSON parsing to fail.
Fix this by switching to json.dumps.
This should be upstreamed.
diff --git a/tools/generate_config_gypi.py b/tools/generate_config_gypi.py
index abd11e1dbda8d400cb900343c6e8d2d6e84fe944..436767e633e1f492fa858645e25ce9b904a5fccb 100755
--- a/tools/generate_config_gypi.py
+++ b/tools/generate_config_gypi.py
@@ -58,7 +58,7 @@ def translate_config(out_dir, config, v8_config):
'llvm_version': 13,
'napi_build_version': config['napi_build_version'],
'node_builtin_shareable_builtins':
- eval(config['node_builtin_shareable_builtins']),
+ json.loads(config['node_builtin_shareable_builtins']),
'node_module_version': int(config['node_module_version']),
'node_use_openssl': config['node_use_openssl'],
'node_use_amaro': config['node_use_amaro'],
@@ -102,7 +102,8 @@ def main():
# Write output.
with open(args.target, 'w') as f:
- f.write(repr(translate_config(args.out_dir, config, v8_config)))
+ f.write(json.dumps(translate_config(args.out_dir, config, v8_config),
+ sort_keys=True))
# Write depfile. Force regenerating config.gypi when GN configs change.
if args.dep_file:
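
The repr-vs-json.dumps mismatch the patch above fixes can be sketched in a few lines. This is a minimal illustration, not code from the patch; the config dict contents are hypothetical:

```python
import json

config = {"node_use_openssl": "true", "llvm_version": 13}

# repr() emits single-quoted strings, which are not valid JSON.
try:
    json.loads(repr(config))
except json.JSONDecodeError:
    print("repr output is not valid JSON")

# json.dumps() emits double-quoted, standards-compliant JSON
# that round-trips cleanly through json.loads().
assert json.loads(json.dumps(config, sort_keys=True)) == config
```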


@@ -17,7 +17,7 @@ Upstreams:
- https://github.com/nodejs/node/pull/39136
diff --git a/deps/ncrypto/ncrypto.cc b/deps/ncrypto/ncrypto.cc
index 461819ce0fa732048e4365c40a86ef55d984c35f..f1c85e94cf526d0255f47c003664680d26413ec3 100644
index 3e3e8d4720a45650c0699f0106a3cdb2fd49e9ca..7de5d24b008f3da6265e84161ca59b9448b86f3b 100644
--- a/deps/ncrypto/ncrypto.cc
+++ b/deps/ncrypto/ncrypto.cc
@@ -11,6 +11,7 @@
@@ -28,7 +28,7 @@ index 461819ce0fa732048e4365c40a86ef55d984c35f..f1c85e94cf526d0255f47c003664680d
#if OPENSSL_VERSION_MAJOR >= 3
#include <openssl/core_names.h>
#include <openssl/params.h>
@@ -3090,9 +3091,11 @@ const Cipher Cipher::AES_256_GCM = Cipher::FromNid(NID_aes_256_gcm);
@@ -3103,9 +3104,11 @@ const Cipher Cipher::AES_256_GCM = Cipher::FromNid(NID_aes_256_gcm);
const Cipher Cipher::AES_128_KW = Cipher::FromNid(NID_id_aes128_wrap);
const Cipher Cipher::AES_192_KW = Cipher::FromNid(NID_id_aes192_wrap);
const Cipher Cipher::AES_256_KW = Cipher::FromNid(NID_id_aes256_wrap);
@@ -41,10 +41,10 @@ index 461819ce0fa732048e4365c40a86ef55d984c35f..f1c85e94cf526d0255f47c003664680d
bool Cipher::isGcmMode() const {
diff --git a/deps/ncrypto/ncrypto.h b/deps/ncrypto/ncrypto.h
index 175ec8ba0f2a908ffad2ce48434aeed573b09c90..3218590ddce1e92c2a9d776f20f9fb016612061d 100644
index 4f86702da88267ded46d33a943a80ae3c2e17fa6..854c29f27f95634b96780b4750d23986d6ba522f 100644
--- a/deps/ncrypto/ncrypto.h
+++ b/deps/ncrypto/ncrypto.h
@@ -306,9 +306,13 @@ class Cipher final {
@@ -309,9 +309,13 @@ class Cipher final {
#else
static constexpr size_t MAX_AUTH_TAG_LENGTH = 16;
#endif
@@ -119,7 +119,7 @@ index d005bf0ffb93445fa6611a1beb1b465764271ede..01770687bd191c61af02e76d7de24bba
X509View ca(sk_X509_value(peer_certs.get(), i));
if (!cert->view().isIssuedBy(ca)) continue;
diff --git a/src/crypto/crypto_context.cc b/src/crypto/crypto_context.cc
index 4e968477ebcc08fb0ccd6abd4d66240309cf76e8..2e3f31e1765024373c3fc2acd33fc3bfb352a906 100644
index 980b7fafb3144b1db0ff26dc157e0f08633c3c9e..cb586936a904e7b9a017732e993a35ef1115ff9a 100644
--- a/src/crypto/crypto_context.cc
+++ b/src/crypto/crypto_context.cc
@@ -143,7 +143,7 @@ int SSL_CTX_use_certificate_chain(SSL_CTX* ctx,
@@ -192,10 +192,10 @@ index 33cde71b105c7cf22b559583d2e46bfb50016f6d..659910992dff7c05bb7e367e1cba1425
// This is to cause hash() to fail when an incorrect
// outputLength option was passed for a non-XOF hash function.
diff --git a/src/crypto/crypto_keys.cc b/src/crypto/crypto_keys.cc
index e805a984322c8348ceba950fe6f45e002ade10b3..bb9b1f8e1b3c6dd8479ee463e303088e3240d6be 100644
index c05872d73f6fc21cf362928559f3b232a907238d..8efe004ce88ae4d798a24504d47073b331f6525f 100644
--- a/src/crypto/crypto_keys.cc
+++ b/src/crypto/crypto_keys.cc
@@ -1034,6 +1034,7 @@ void KeyObjectHandle::GetAsymmetricKeyType(
@@ -1225,6 +1225,7 @@ void KeyObjectHandle::GetAsymmetricKeyType(
}
bool KeyObjectHandle::CheckEcKeyData() const {
@@ -203,7 +203,7 @@ index e805a984322c8348ceba950fe6f45e002ade10b3..bb9b1f8e1b3c6dd8479ee463e303088e
MarkPopErrorOnReturn mark_pop_error_on_return;
const auto& key = data_.GetAsymmetricKey();
@@ -1043,6 +1044,9 @@ bool KeyObjectHandle::CheckEcKeyData() const {
@@ -1234,6 +1235,9 @@ bool KeyObjectHandle::CheckEcKeyData() const {
return data_.GetKeyType() == kKeyTypePrivate ? ctx.privateCheck()
: ctx.publicCheck();
@@ -214,7 +214,7 @@ index e805a984322c8348ceba950fe6f45e002ade10b3..bb9b1f8e1b3c6dd8479ee463e303088e
void KeyObjectHandle::CheckEcKeyData(const FunctionCallbackInfo<Value>& args) {
diff --git a/src/crypto/crypto_util.cc b/src/crypto/crypto_util.cc
index 205e248e0f20f019e189a6c69d3c011a616b3939..12b0d804c6f1d4998b85160b0aac8eb7a3b5576b 100644
index 3435e43b3baa0c188b984fa1905c6a8614c77de3..a2fa1132c47d8cc4b875396c6cbaa9db6fe20262 100644
--- a/src/crypto/crypto_util.cc
+++ b/src/crypto/crypto_util.cc
@@ -533,24 +533,15 @@ Maybe<void> Decorate(Environment* env,


@@ -28,7 +28,7 @@ index 5f1921d15bc1d3a68c35990f85e36a0e8a5b3ec4..99c6ce57c04768d125dd0a1c6bd62bca
const result = dataURLProcessor(url);
if (result === 'failure') {
diff --git a/lib/internal/modules/esm/resolve.js b/lib/internal/modules/esm/resolve.js
index edf347102fedbb28bce221defa99c37b5834024b..81799fc159cf20344aac64cd7129240deb9a4fe8 100644
index 008160b7d5e8efde13c7907e2b7212974a04d3f2..b01eafd3c476c065a16701317aca2a1def559376 100644
--- a/lib/internal/modules/esm/resolve.js
+++ b/lib/internal/modules/esm/resolve.js
@@ -25,7 +25,7 @@ const {
@@ -40,7 +40,7 @@ index edf347102fedbb28bce221defa99c37b5834024b..81799fc159cf20344aac64cd7129240d
const { getOptionValue } = require('internal/options');
// Do not eagerly grab .manifest, it may be in TDZ
const { sep, posix: { relative: relativePosixPath }, resolve } = require('path');
@@ -277,7 +277,7 @@ function finalizeResolution(resolved, base, preserveSymlinks) {
@@ -273,7 +273,7 @@ function finalizeResolution(resolved, base, preserveSymlinks) {
}
if (!preserveSymlinks) {
@@ -50,7 +50,7 @@ index edf347102fedbb28bce221defa99c37b5834024b..81799fc159cf20344aac64cd7129240d
});
const { search, hash } = resolved;
diff --git a/lib/internal/modules/esm/translators.js b/lib/internal/modules/esm/translators.js
index ec76d7ffa45f49721d395e8e33be79114daa0369..a6fbcb6fd3c2413df96273d93b7339cad3f25f7a 100644
index 12d90864129b20ac29eddc0545e6ae629d99049e..a048d887c4717b2c187e162d8b559285cf540a10 100644
--- a/lib/internal/modules/esm/translators.js
+++ b/lib/internal/modules/esm/translators.js
@@ -23,7 +23,7 @@ const {
@@ -62,7 +62,7 @@ index ec76d7ffa45f49721d395e8e33be79114daa0369..a6fbcb6fd3c2413df96273d93b7339ca
const { dirname, extname } = require('path');
const {
assertBufferSource,
@@ -350,7 +350,7 @@ translators.set('commonjs', function commonjsStrategy(url, translateContext, par
@@ -348,7 +348,7 @@ translators.set('commonjs', function commonjsStrategy(url, translateContext, par
try {
// We still need to read the FS to detect the exports.


@@ -10,7 +10,7 @@ change, it seems to introduce an incompatibility when compiling
using clang modules. Disabling them resolves the issue.
diff --git a/unofficial.gni b/unofficial.gni
index def9a302830e493e51cc2b3588816fcbd3a1bb51..900c5e4d8a48d0725420518c923c7024518158b8 100644
index 8844e2c3916541b62418c4b891b5a834b910bea4..15bd82cad8c6927362f4e852bed49c7d10b71c61 100644
--- a/unofficial.gni
+++ b/unofficial.gni
@@ -197,6 +197,10 @@ template("node_gn_build") {
@@ -24,7 +24,7 @@ index def9a302830e493e51cc2b3588816fcbd3a1bb51..900c5e4d8a48d0725420518c923c7024
}
if (is_posix) {
configs -= [ "//build/config/gcc:symbol_visibility_hidden" ]
@@ -369,6 +373,12 @@ template("node_gn_build") {
@@ -371,6 +375,12 @@ template("node_gn_build") {
include_dirs = [ "src", "tools" ]
configs += [ "//build/config/compiler:no_exit_time_destructors" ]


@@ -86,10 +86,10 @@ index fd28e0904d05e24e8eeb74fa36abd9727699a649..fea0426496978c0003fe1481afcf93fc
NODE_DEFINE_CONSTANT(target, ETIMEDOUT);
#endif
diff --git a/src/node_errors.cc b/src/node_errors.cc
index d148127b89b632b339a63eb50370dfa0daca6308..55a0c986c5b6989ee9ce277bb6a9778abb2ad2ee 100644
index 0388e2e31c9739b8b4bfbcbfb1f8c11b2a84c233..c6404e00d04e61b675a8c4a02139b36da25bd2a8 100644
--- a/src/node_errors.cc
+++ b/src/node_errors.cc
@@ -899,10 +899,6 @@ const char* errno_string(int errorno) {
@@ -897,10 +897,6 @@ const char* errno_string(int errorno) {
ERRNO_CASE(ENOBUFS);
#endif
@@ -100,7 +100,7 @@ index d148127b89b632b339a63eb50370dfa0daca6308..55a0c986c5b6989ee9ce277bb6a9778a
#ifdef ENODEV
ERRNO_CASE(ENODEV);
#endif
@@ -941,14 +937,6 @@ const char* errno_string(int errorno) {
@@ -939,14 +935,6 @@ const char* errno_string(int errorno) {
ERRNO_CASE(ENOSPC);
#endif
@@ -115,7 +115,7 @@ index d148127b89b632b339a63eb50370dfa0daca6308..55a0c986c5b6989ee9ce277bb6a9778a
#ifdef ENOSYS
ERRNO_CASE(ENOSYS);
#endif
@@ -1031,10 +1019,6 @@ const char* errno_string(int errorno) {
@@ -1029,10 +1017,6 @@ const char* errno_string(int errorno) {
ERRNO_CASE(ESTALE);
#endif


@@ -7,10 +7,10 @@ libc++ added [[nodiscard]] to std::filesystem::copy_options operator|=,
which causes build failures with -Werror.
diff --git a/src/node_file.cc b/src/node_file.cc
index 547455bb5011677719a8de1f98cb447561bce6aa..385db5fd6fe5db6bb7ff17e98309b6cd605a82d3 100644
index 56b6cd5c39d5e72efd24b7aba1f28dab91a6144e..adac1a04793cc9876c46a508e5c6e5241697311c 100644
--- a/src/node_file.cc
+++ b/src/node_file.cc
@@ -3460,11 +3460,11 @@ static void CpSyncCopyDir(const FunctionCallbackInfo<Value>& args) {
@@ -3501,11 +3501,11 @@ static void CpSyncCopyDir(const FunctionCallbackInfo<Value>& args) {
auto file_copy_opts = std::filesystem::copy_options::recursive;
if (force) {


@@ -6,7 +6,7 @@ Subject: Pass all globals through "require"
(cherry picked from commit 7d015419cb7a0ecfe6728431a4ed2056cd411d62)
diff --git a/lib/internal/modules/cjs/loader.js b/lib/internal/modules/cjs/loader.js
index 0a6788d1b848d860fa3fa3e857c7feab6f16311e..a6b01d7e143fa6ffeda6fa7723e279db7678ddd4 100644
index df656b86016a87477e696da30f685fd2ec66865f..a405f1662452bd1dc969019f1f0fcbf9dd6ea54d 100644
--- a/lib/internal/modules/cjs/loader.js
+++ b/lib/internal/modules/cjs/loader.js
@@ -209,6 +209,13 @@ const {
@@ -23,7 +23,7 @@ index 0a6788d1b848d860fa3fa3e857c7feab6f16311e..a6b01d7e143fa6ffeda6fa7723e279db
const {
isProxy,
} = require('internal/util/types');
@@ -1807,9 +1814,12 @@ Module.prototype._compile = function(content, filename, format) {
@@ -1825,9 +1832,12 @@ Module.prototype._compile = function(content, filename, format) {
if (this[kIsMainSymbol] && getOptionValue('--inspect-brk')) {
const { callAndPauseOnStart } = internalBinding('inspector');
result = callAndPauseOnStart(compiledWrapper, thisValue, exports,


@@ -7,7 +7,7 @@ We use this to allow node's 'fs' module to read from ASAR files as if they were
a real filesystem.
diff --git a/lib/internal/bootstrap/node.js b/lib/internal/bootstrap/node.js
index de18fc4934bcfef6485dc0bc853ca324ed17fc4e..52998c967109c797f3eab64f2f99990b2d69841a 100644
index 7e7e79c661cfdbd9b6f4e347fd3683b03e071473..96d4809368bc488fdc3506345dcbb1071c107e5c 100644
--- a/lib/internal/bootstrap/node.js
+++ b/lib/internal/bootstrap/node.js
@@ -129,6 +129,10 @@ process.domain = null;


@@ -18,7 +18,7 @@ This can be removed when Node.js upgrades to a version of V8 containing CLs
from the above issue.
diff --git a/src/api/environment.cc b/src/api/environment.cc
index 5f51ad205189bd75d0d9638b1104c12b537b4e9b..8974bac7dca43294cc5cc4570f8e2e78f42aefaa 100644
index befaa423abbe86f523a7b8902d74599e56c5b078..ec1496467f5071a810a3d7a76d80f3d12a8582dc 100644
--- a/src/api/environment.cc
+++ b/src/api/environment.cc
@@ -323,6 +323,10 @@ Isolate* NewIsolate(Isolate::CreateParams* params,
@@ -89,10 +89,10 @@ index fb2af584a4ae777022c9ef8c20ada1edcbbbefdc..fe6300a5d5d2d6602a84cbd33736c213
#endif // defined(NODE_WANT_INTERNALS) && NODE_WANT_INTERNALS
diff --git a/src/env.cc b/src/env.cc
index fdabe48dd7776c59298f7d972286d0d2ed062752..b5cf58cc953590493beb52abf249e33e486ffc46 100644
index bbb30dd8a50f7b8550caf1967de8547cc7d8af47..82aee7e38bbd859e1a76eedcc3a51278a1b3a793 100644
--- a/src/env.cc
+++ b/src/env.cc
@@ -611,7 +611,7 @@ IsolateData::~IsolateData() {}
@@ -610,7 +610,7 @@ IsolateData::~IsolateData() {}
// Deprecated API, embedders should use v8::Object::Wrap() directly instead.
void SetCppgcReference(Isolate* isolate,
Local<Object> object,
@@ -102,7 +102,7 @@ index fdabe48dd7776c59298f7d972286d0d2ed062752..b5cf58cc953590493beb52abf249e33e
isolate, object, wrappable);
}
diff --git a/src/node.h b/src/node.h
index b92a9d42da8419741c435643b7401efcb21a9e8b..bbe35c7a8f1bc0bcddf628af42b71efaef8a7759 100644
index 0225ff4f43e8b82c08e8ec5492df73223a82066c..8aac774805a002f5af9e9aca62abc56e8f986bab 100644
--- a/src/node.h
+++ b/src/node.h
@@ -78,6 +78,7 @@
@@ -113,7 +113,7 @@ index b92a9d42da8419741c435643b7401efcb21a9e8b..bbe35c7a8f1bc0bcddf628af42b71efa
#include "v8-platform.h" // NOLINT(build/include_order)
#include "node_version.h" // NODE_MODULE_VERSION
@@ -603,7 +604,8 @@ NODE_EXTERN v8::Isolate* NewIsolate(
@@ -605,7 +606,8 @@ NODE_EXTERN v8::Isolate* NewIsolate(
struct uv_loop_s* event_loop,
MultiIsolatePlatform* platform,
const EmbedderSnapshotData* snapshot_data = nullptr,
@@ -123,7 +123,7 @@ index b92a9d42da8419741c435643b7401efcb21a9e8b..bbe35c7a8f1bc0bcddf628af42b71efa
NODE_EXTERN v8::Isolate* NewIsolate(
std::shared_ptr<ArrayBufferAllocator> allocator,
struct uv_loop_s* event_loop,
@@ -1702,9 +1704,10 @@ void RegisterSignalHandler(int signal,
@@ -1704,9 +1706,10 @@ void RegisterSignalHandler(int signal,
// work with only Node.js versions with v8::Object::Wrap() should use that
// instead.
NODE_DEPRECATED("Use v8::Object::Wrap()",
@@ -135,7 +135,7 @@ index b92a9d42da8419741c435643b7401efcb21a9e8b..bbe35c7a8f1bc0bcddf628af42b71efa
+ v8::Local<v8::Object> object,
+ v8::Object::Wrappable* wrappable));
} // namespace node
namespace crypto {
diff --git a/src/node_main_instance.cc b/src/node_main_instance.cc
index 6f674df3ed0dc1b4e5cd3a249fb787a9fc98361d..90f9eb84b6835c36a91ce23d77722812ce173c0f 100644
@@ -151,7 +151,7 @@ index 6f674df3ed0dc1b4e5cd3a249fb787a9fc98361d..90f9eb84b6835c36a91ce23d77722812
isolate_ =
NewIsolate(isolate_params_.get(), event_loop, platform, snapshot_data);
diff --git a/src/node_worker.cc b/src/node_worker.cc
index fa7dc52b19119ed3d2dc407c029f56107476bc39..1acc61af0c995ddefbc00fe232b2454de77a84a3 100644
index 1d07449c5293f0839082a328d10bfd42cf522107..a2631a96371becb0f4ea4f47a52313f4f02477da 100644
--- a/src/node_worker.cc
+++ b/src/node_worker.cc
@@ -181,6 +181,9 @@ class WorkerThreadData {


@@ -19,10 +19,10 @@ index 2c95ac99be70b0750372e9c858753bf519498e3d..5ab30502fd232196739ca2b450e35cc9
Local<Module> module = obj->module_.Get(isolate);
if (module->GetStatus() < Module::kInstantiated) {
diff --git a/src/node_contextify.cc b/src/node_contextify.cc
index e66d4fcb0c064f96cdb819c783027d864fe88d12..619980b36db457ef7e476eacd446e3bf2a9a71d2 100644
index cd8b64d58413914e72c32df6a2f192143e85ac46..5a26603c43b0974d8f9221e1a36dd9208e6f3333 100644
--- a/src/node_contextify.cc
+++ b/src/node_contextify.cc
@@ -460,7 +460,7 @@ ContextifyContext* ContextifyContext::Get(const PropertyCallbackInfo<T>& args) {
@@ -452,7 +452,7 @@ ContextifyContext* ContextifyContext::Get(const PropertyCallbackInfo<T>& args) {
// args.GetIsolate()->GetCurrentContext() and take the pointer at
// ContextEmbedderIndex::kContextifyContext, as V8 is supposed to
// push the creation context before invoking these callbacks.
@@ -31,7 +31,7 @@ index e66d4fcb0c064f96cdb819c783027d864fe88d12..619980b36db457ef7e476eacd446e3bf
}
ContextifyContext* ContextifyContext::Get(Local<Object> object) {
@@ -593,10 +593,21 @@ Intercepted ContextifyContext::PropertySetterCallback(
@@ -585,10 +585,21 @@ Intercepted ContextifyContext::PropertySetterCallback(
return Intercepted::kNo;
}
@@ -53,7 +53,7 @@ index e66d4fcb0c064f96cdb819c783027d864fe88d12..619980b36db457ef7e476eacd446e3bf
bool is_contextual_store = ctx->global_proxy() != args.This();
// Indicator to not return before setting (undeclared) function declarations
@@ -613,7 +624,7 @@ Intercepted ContextifyContext::PropertySetterCallback(
@@ -605,7 +616,7 @@ Intercepted ContextifyContext::PropertySetterCallback(
!is_function) {
return Intercepted::kNo;
}
@@ -75,8 +75,30 @@ index b925434940baeeb6b06882242ca947736866d175..d067b47e7e30a95740fe0275c7044570
if (!receiver_val->IsObject()) {
THROW_ERR_INVALID_INVOCATION(isolate);
return;
diff --git a/src/node_sqlite.cc b/src/node_sqlite.cc
index 91b80b4fb44c26e95503556064e7429b8cbf4639..1a468abbf42161ffdebe2176bd2c7a8c9e119532 100644
--- a/src/node_sqlite.cc
+++ b/src/node_sqlite.cc
@@ -729,7 +729,7 @@ Intercepted DatabaseSyncLimits::LimitsGetter(
}
DatabaseSyncLimits* limits;
- ASSIGN_OR_RETURN_UNWRAP(&limits, info.This(), Intercepted::kNo);
+ ASSIGN_OR_RETURN_UNWRAP(&limits, info.HolderV2(), Intercepted::kNo);
Environment* env = limits->env();
Isolate* isolate = env->isolate();
@@ -761,7 +761,7 @@ Intercepted DatabaseSyncLimits::LimitsSetter(
}
DatabaseSyncLimits* limits;
- ASSIGN_OR_RETURN_UNWRAP(&limits, info.This(), Intercepted::kNo);
+ ASSIGN_OR_RETURN_UNWRAP(&limits, info.HolderV2(), Intercepted::kNo);
Environment* env = limits->env();
Isolate* isolate = env->isolate();
diff --git a/src/node_util.cc b/src/node_util.cc
index af42a3bd72c3f4aa6aff4a95231f3f3da5008176..e9f4c1cdb60c03dce210f49e18dda57a4934a8b5 100644
index fbfda9c1551e071132e35b90fc3676a9b493abee..065ed602b314f367c2e7dec94019521fd5d23bf4 100644
--- a/src/node_util.cc
+++ b/src/node_util.cc
@@ -366,7 +366,7 @@ static void DefineLazyPropertiesGetter(
@@ -89,10 +111,10 @@ index af42a3bd72c3f4aa6aff4a95231f3f3da5008176..e9f4c1cdb60c03dce210f49e18dda57a
THROW_ERR_INVALID_INVOCATION(isolate);
return;
diff --git a/src/node_webstorage.cc b/src/node_webstorage.cc
index bd83654012442195866e57173b6e5d4d25fecf0f..9f31a56b00600b2754d8c7115630a1132335bffc 100644
index 0a169a8dcf27eeb5b5b0c1b00ac8b79ed43d551b..a4a86f2f5ada6481a89e0682d2c7a5de056d51e8 100644
--- a/src/node_webstorage.cc
+++ b/src/node_webstorage.cc
@@ -535,7 +535,7 @@ template <typename T>
@@ -532,7 +532,7 @@ template <typename T>
static bool ShouldIntercept(Local<Name> property,
const PropertyCallbackInfo<T>& info) {
Environment* env = Environment::GetCurrent(info);
@@ -101,7 +123,7 @@ index bd83654012442195866e57173b6e5d4d25fecf0f..9f31a56b00600b2754d8c7115630a113
if (proto->IsObject()) {
bool has_prop;
@@ -559,7 +559,7 @@ static Intercepted StorageGetter(Local<Name> property,
@@ -556,7 +556,7 @@ static Intercepted StorageGetter(Local<Name> property,
}
Storage* storage;
@@ -110,7 +132,7 @@ index bd83654012442195866e57173b6e5d4d25fecf0f..9f31a56b00600b2754d8c7115630a113
Local<Value> result;
if (storage->Load(property).ToLocal(&result) && !result->IsNull()) {
@@ -573,7 +573,7 @@ static Intercepted StorageSetter(Local<Name> property,
@@ -570,7 +570,7 @@ static Intercepted StorageSetter(Local<Name> property,
Local<Value> value,
const PropertyCallbackInfo<void>& info) {
Storage* storage;
@@ -119,7 +141,7 @@ index bd83654012442195866e57173b6e5d4d25fecf0f..9f31a56b00600b2754d8c7115630a113
if (storage->Store(property, value).IsNothing()) {
info.GetReturnValue().SetFalse();
@@ -589,7 +589,7 @@ static Intercepted StorageQuery(Local<Name> property,
@@ -586,7 +586,7 @@ static Intercepted StorageQuery(Local<Name> property,
}
Storage* storage;
@@ -128,7 +150,7 @@ index bd83654012442195866e57173b6e5d4d25fecf0f..9f31a56b00600b2754d8c7115630a113
Local<Value> result;
if (!storage->Load(property).ToLocal(&result) || result->IsNull()) {
return Intercepted::kNo;
@@ -602,7 +602,7 @@ static Intercepted StorageQuery(Local<Name> property,
@@ -599,7 +599,7 @@ static Intercepted StorageQuery(Local<Name> property,
static Intercepted StorageDeleter(Local<Name> property,
const PropertyCallbackInfo<Boolean>& info) {
Storage* storage;
@@ -137,7 +159,7 @@ index bd83654012442195866e57173b6e5d4d25fecf0f..9f31a56b00600b2754d8c7115630a113
info.GetReturnValue().Set(storage->Remove(property).IsJust());
@@ -611,7 +611,7 @@ static Intercepted StorageDeleter(Local<Name> property,
@@ -608,7 +608,7 @@ static Intercepted StorageDeleter(Local<Name> property,
static void StorageEnumerator(const PropertyCallbackInfo<Array>& info) {
Storage* storage;
@@ -146,7 +168,7 @@ index bd83654012442195866e57173b6e5d4d25fecf0f..9f31a56b00600b2754d8c7115630a113
Local<Array> result;
if (!storage->Enumerate().ToLocal(&result)) {
return;
@@ -623,7 +623,7 @@ static Intercepted StorageDefiner(Local<Name> property,
@@ -620,7 +620,7 @@ static Intercepted StorageDefiner(Local<Name> property,
const PropertyDescriptor& desc,
const PropertyCallbackInfo<void>& info) {
Storage* storage;


@@ -7,7 +7,7 @@ This refactors several allocators to allocate within the V8 memory cage,
allowing them to be compatible with the V8_SANDBOXED_POINTERS feature.
diff --git a/src/crypto/crypto_util.cc b/src/crypto/crypto_util.cc
index 12b0d804c6f1d4998b85160b0aac8eb7a3b5576b..27bd93769233dc65a064710db4095d9cdc3a8b1a 100644
index a2fa1132c47d8cc4b875396c6cbaa9db6fe20262..efce181ae28383745fca2ff086cf2dac3107487a 100644
--- a/src/crypto/crypto_util.cc
+++ b/src/crypto/crypto_util.cc
@@ -346,24 +346,30 @@ std::unique_ptr<BackingStore> ByteSource::ReleaseToBackingStore(
@@ -57,7 +57,7 @@ index 12b0d804c6f1d4998b85160b0aac8eb7a3b5576b..27bd93769233dc65a064710db4095d9c
#else
std::unique_ptr<BackingStore> ptr = ArrayBuffer::NewBackingStore(
allocated_data_,
@@ -662,23 +668,16 @@ namespace {
@@ -664,23 +670,16 @@ namespace {
// using OPENSSL_malloc. However, if the secure heap is
// initialized, SecureBuffer will automatically use it.
void SecureBuffer(const FunctionCallbackInfo<Value>& args) {
@@ -87,12 +87,12 @@ index 12b0d804c6f1d4998b85160b0aac8eb7a3b5576b..27bd93769233dc65a064710db4095d9c
return THROW_ERR_OPERATION_FAILED(env, "Allocation failed");
}
diff --git a/src/crypto/crypto_x509.cc b/src/crypto/crypto_x509.cc
index b30297eac08ad9587642b723f91d7e3b954294d4..4c5427596d1c90d3a413cdd9ff4f1151e657073d 100644
index cb1ad6bcfa7ea421642d097d6a9a8a5e04d5cf7c..6b7e4211a8969351168fc982fe3466a2096bed3a 100644
--- a/src/crypto/crypto_x509.cc
+++ b/src/crypto/crypto_x509.cc
@@ -135,19 +135,17 @@ MaybeLocal<Value> ToBuffer(Environment* env, BIOPointer* bio) {
@@ -141,19 +141,17 @@ MaybeLocal<Value> ToBuffer(Environment* env, BIOPointer* bio) {
if (!mem) [[unlikely]]
return {};
BUF_MEM* mem = *bio;
#ifdef V8_ENABLE_SANDBOX
- // If the v8 sandbox is enabled, then all array buffers must be allocated
- // via the isolate. External buffers are not allowed. So, instead of wrapping
@@ -121,10 +121,10 @@ index b30297eac08ad9587642b723f91d7e3b954294d4..4c5427596d1c90d3a413cdd9ff4f1151
auto backing = ArrayBuffer::NewBackingStore(
mem->data,
diff --git a/src/node_buffer.cc b/src/node_buffer.cc
index e1bee00825d140232456d6dc2337420fde6bda17..04edc4ca3c0e7c2284d2822fe9f5de66ff64fda2 100644
index 362ac268483ea37c1d6fd65fc0cd9fcec7c8448d..93e0c8445056f45c3228bc9c64db3b91d0c84096 100644
--- a/src/node_buffer.cc
+++ b/src/node_buffer.cc
@@ -1443,7 +1443,7 @@ inline size_t CheckNumberToSize(Local<Value> number) {
@@ -1477,7 +1477,7 @@ inline size_t CheckNumberToSize(Local<Value> number) {
CHECK(value >= 0 && value < maxSize);
size_t size = static_cast<size_t>(value);
#ifdef V8_ENABLE_SANDBOX
@@ -133,7 +133,7 @@ index e1bee00825d140232456d6dc2337420fde6bda17..04edc4ca3c0e7c2284d2822fe9f5de66
#endif
return size;
}
@@ -1466,6 +1466,26 @@ void CreateUnsafeArrayBuffer(const FunctionCallbackInfo<Value>& args) {
@@ -1500,6 +1500,26 @@ void CreateUnsafeArrayBuffer(const FunctionCallbackInfo<Value>& args) {
env->isolate_data()->is_building_snapshot()) {
buf = ArrayBuffer::New(isolate, size);
} else {
@@ -160,7 +160,7 @@ index e1bee00825d140232456d6dc2337420fde6bda17..04edc4ca3c0e7c2284d2822fe9f5de66
std::unique_ptr<BackingStore> store = ArrayBuffer::NewBackingStore(
isolate,
size,
@@ -1476,6 +1496,7 @@ void CreateUnsafeArrayBuffer(const FunctionCallbackInfo<Value>& args) {
@@ -1510,6 +1530,7 @@ void CreateUnsafeArrayBuffer(const FunctionCallbackInfo<Value>& args) {
THROW_ERR_MEMORY_ALLOCATION_FAILED(env);
return;
}
@@ -344,17 +344,3 @@ index ef659f1c39f7ee958879bf395377bc99911fc346..225b1465b7c97d972a38968faf6d6850
auto ab = ArrayBuffer::New(isolate, std::move(bs));
v8::Local<Uint8Array> u8 = v8::Uint8Array::New(ab, 0, 1);
diff --git a/test/parallel/test-buffer-concat.js b/test/parallel/test-buffer-concat.js
index 9f0eadd2f10163c3c30657c84eb0ba55db17364d..7c1a6f71ca24dd2e54f9f5987aae2014b44bfba6 100644
--- a/test/parallel/test-buffer-concat.js
+++ b/test/parallel/test-buffer-concat.js
@@ -84,8 +84,7 @@ assert.throws(() => {
Buffer.concat([Buffer.from('hello')], -2);
}, {
code: 'ERR_OUT_OF_RANGE',
- message: 'The value of "length" is out of range. It must be >= 0 && <= 9007199254740991. ' +
- 'Received -2'
+ message: /The value of "length" is out of range\. It must be >= 0 && <= (?:34359738367|9007199254740991)\. Received -2/
});
// eslint-disable-next-line node-core/crypto-check
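The relaxed assertion above matches either length ceiling. A quick check in Python (illustrative only; the real test is the Node.js file above) that the pattern accepts both messages, and that the two constants are 2**35 - 1 (which appears to be the ArrayBuffer limit under V8_ENABLE_SANDBOX) and 2**53 - 1 (Number.MAX_SAFE_INTEGER):

```python
import re

# The pattern from the patched test, with either allowed maximum.
pattern = re.compile(
    r'The value of "length" is out of range\. '
    r'It must be >= 0 && <= (?:34359738367|9007199254740991)\. Received -2')

sandboxed = ('The value of "length" is out of range. '
             'It must be >= 0 && <= 34359738367. Received -2')
default = ('The value of "length" is out of range. '
           'It must be >= 0 && <= 9007199254740991. Received -2')

assert pattern.search(sandboxed) and pattern.search(default)
assert 34359738367 == 2**35 - 1       # smaller ceiling in the sandboxed build
assert 9007199254740991 == 2**53 - 1  # Number.MAX_SAFE_INTEGER
```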


@@ -12,23 +12,22 @@ See:
https://chromium-review.googlesource.com/c/v8/v8/+/6826001
diff --git a/test/fixtures/test-runner/output/describe_it.snapshot b/test/fixtures/test-runner/output/describe_it.snapshot
index cae467f6487ffef4fbe94da229e30c2537fe9e95..f1240a6a99dafc18ad51d413719df58b757893ab 100644
index 4df2c20a45f3b7c06172d5c5a2045c3dc066824e..9be9d988f0ffa9c4b84dce321935cfca3719a3fa 100644
--- a/test/fixtures/test-runner/output/describe_it.snapshot
+++ b/test/fixtures/test-runner/output/describe_it.snapshot
@@ -726,6 +726,8 @@ not ok 60 - timeouts
@@ -659,6 +659,7 @@ not ok 60 - timeouts
code: 'ERR_TEST_FAILURE'
stack: |-
*
+ *
+ *
Object.<anonymous> (<project-root>/test/fixtures/test-runner/output/describe_it.js:372:50)
+ <node-internal-frames>
...
1..2
not ok 61 - successful thenable
@@ -748,6 +750,7 @@ not ok 62 - rejected thenable
@@ -681,6 +682,7 @@ not ok 62 - rejected thenable
code: 'ERR_TEST_FAILURE'
stack: |-
*
+ *
Object.<anonymous> (<project-root>/test/fixtures/test-runner/output/describe_it.js:393:48)
+ <node-internal-frames>
...
# Subtest: async describe function
# Subtest: it inside describe 1

patches/pdfium/.patches

@@ -0,0 +1 @@
cherry-pick-bce2e6728279.patch


@@ -0,0 +1,36 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Tom Sepez <tsepez@google.com>
Date: Tue, 7 Apr 2026 15:50:30 -0700
Subject: Use safe arithmetic in CFX_PSRenderer::DrawDIBits()

Hardening suggestion from the AI bot.

Bug: 500036290
Change-Id: Ie521629d06ba944f610b941a8c9e9505fa29aea7
Reviewed-on: https://pdfium-review.googlesource.com/c/pdfium/+/145731
Reviewed-by: Lei Zhang <thestig@chromium.org>
Commit-Queue: Tom Sepez <tsepez@chromium.org>
diff --git a/core/fxge/win32/cfx_psrenderer.cpp b/core/fxge/win32/cfx_psrenderer.cpp
index b38f1a2b7c3271769e609763be2e183f2890ebb3..b8710e50ed01233b2aefbf1760e26e05964b315e 100644
--- a/core/fxge/win32/cfx_psrenderer.cpp
+++ b/core/fxge/win32/cfx_psrenderer.cpp
@@ -620,8 +620,16 @@ bool CFX_PSRenderer::DrawDIBits(RetainPtr<const CFX_DIBBase> bitmap,
encoder_iface_->pJpegEncodeFunc(bitmap, &output_buf, &output_size)) {
filter = "/DCTDecode filter ";
} else {
- int src_pitch = width * bytes_per_pixel;
- output_size = height * src_pitch;
+ FX_SAFE_UINT32 safe_pitch = bytes_per_pixel;
+ safe_pitch *= width;
+ FX_SAFE_UINT32 safe_output_size = safe_pitch;
+ safe_output_size *= height;
+ if (!safe_output_size.IsValid()) {
+ WriteString("\nQ\n");
+ return false;
+ }
+ uint32_t src_pitch = safe_pitch.ValueOrDie();
+ output_size = safe_output_size.ValueOrDie();
output_buf = FX_Alloc(uint8_t, output_size);
for (int row = 0; row < height; row++) {
const uint8_t* src_scan = bitmap->GetScanline(row).data();
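The FX_SAFE_UINT32 chain above bails out instead of producing a wrapped pitch or output size. A minimal Python model of the same checked-multiply behavior (helper names are hypothetical, not pdfium API):

```python
UINT32_MAX = 2**32 - 1

def checked_mul_u32(*factors):
    """Multiply like FX_SAFE_UINT32: return None as soon as the running
    product no longer fits in an unsigned 32-bit value."""
    product = 1
    for f in factors:
        product *= f
        if product > UINT32_MAX:
            return None
    return product

def compute_output_size(bytes_per_pixel, width, height):
    # Mirrors the patched code: pitch = bytes_per_pixel * width,
    # output_size = pitch * height; fail instead of wrapping.
    pitch = checked_mul_u32(bytes_per_pixel, width)
    if pitch is None:
        return None
    return checked_mul_u32(pitch, height)
```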


@@ -12,3 +12,4 @@ use_uttype_class_instead_of_deprecated_uttypeconformsto.patch
fix_clean_up_orphaned_staged_updates_before_downloading_new_update.patch
fix_add_explicit_json_property_mappings_for_shipit_request_model.patch
fix_resolve_target_bundle_path_once_at_start_of_install.patch
fix_trigger_shipit_mach_service_after_smjobsubmit_to_unblock.patch


@@ -0,0 +1,139 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Keeley Hammond <vertedinde@electronjs.org>
Date: Tue, 14 Apr 2026 10:00:00 -0700
Subject: fix: trigger ShipIt Mach service after SMJobSubmit to unblock
on-demand-only mode

When a macOS system update is pending (downloaded but not yet installed),
launchd puts the user domain (gui/<uid>) into "on-demand-only mode".
In this mode, launchd only starts jobs triggered by an on-demand event
such as a Mach port connection -- KeepAlive and RunAtLoad are suppressed.

ShipIt's launchd job registers a MachServices endpoint, but nothing ever
connects to it. The MachServices key was originally used when ShipIt was
a full XPC service (removed in Squirrel/Squirrel.Mac@d6ca1c2 in October
2013). The client-side connection was removed but the server-side
MachServices registration was left behind, creating a trigger that
launchd waits on but nothing ever fires.

Fix: after SMJobSubmit, open a lightweight XPC connection to the
registered Mach service name and send an empty message. This satisfies
launchd's on-demand trigger and causes it to start ShipIt immediately.
In normal operation (no pending update), the job starts via KeepAlive
anyway and the trigger is a harmless no-op. Unlike launchctl kickstart,
this preserves KeepAlive.SuccessfulExit respawn behavior because launchd
treats the activation as a legitimate on-demand event.

The trigger message stays in the Mach port's kernel queue (ShipIt has
no XPC listener), which creates standing demand that provides the
on-demand activity needed for KeepAlive retries in on-demand-only mode.
To prevent this standing demand from also respawning ShipIt after a
successful exit(0), ShipIt checks in for its Mach service port and
dequeues every pending message before each exit(EXIT_SUCCESS). Checking
in alone is not sufficient: launchd tracks demand independently of the
port's lifetime and will respawn the job if the message was never read.
On failure exits, the message is left in place so launchd treats the
KeepAlive respawn as demand-backed.
diff --git a/Squirrel/SQRLShipItLauncher.m b/Squirrel/SQRLShipItLauncher.m
index 6a9151d92f399184fff9854eb00ea506165bbbe2..a087f20043fa79a07391ed065031396d7ec6fce4 100644
--- a/Squirrel/SQRLShipItLauncher.m
+++ b/Squirrel/SQRLShipItLauncher.m
@@ -10,6 +10,7 @@
#import <ReactiveObjC/EXTScope.h>
#import "SQRLDirectoryManager.h"
#import <ReactiveObjC/ReactiveObjC.h>
+#import <xpc/xpc.h>
#import <Security/Security.h>
#import <ServiceManagement/ServiceManagement.h>
#import <launch.h>
@@ -57,7 +58,7 @@ + (RACSignal *)shipItJobDictionary {
NSMutableArray *arguments = [[NSMutableArray alloc] init];
[arguments addObject:[squirrelBundle URLForResource:@"ShipIt" withExtension:nil].path];
- // Pass in the service name so ShipIt knows how to broadcast itself.
+ // Pass in the job label so ShipIt can identify itself.
[arguments addObject:jobLabel];
// We need to pass the path to ShipIt rather than having ShipIt
@@ -154,6 +155,23 @@ + (RACSignal *)launchPrivileged:(BOOL)privileged {
return [RACSignal error:CFBridgingRelease(cfError)];
}
+ // Trigger an on-demand launch by sending a message to the job's
+ // Mach service. When loginwindow begins a restart (e.g. for a
+ // pending macOS update) it puts the per-user launchd domain into
+ // on-demand-only mode, which defers RunAtLoad/KeepAlive spawns
+ // but still honors real IPC demand. The system domain is not
+ // affected, so this is only needed for the unprivileged path.
+ if (!privileged) {
+ xpc_connection_t trigger = xpc_connection_create_mach_service(self.shipItJobLabel.UTF8String, NULL, 0);
+ xpc_connection_set_event_handler(trigger, ^(xpc_object_t __unused event) {});
+ xpc_connection_resume(trigger);
+ xpc_connection_send_message(trigger, xpc_dictionary_create(NULL, NULL, 0));
+ // send_message is async; keep the connection alive until the
+ // message is actually on the wire so ARC releasing `trigger`
+ // at end-of-scope can't drop it first.
+ xpc_connection_send_barrier(trigger, ^{ (void)trigger; });
+ }
+
return [RACSignal empty];
}]
flatten]
diff --git a/Squirrel/ShipIt-main.m b/Squirrel/ShipIt-main.m
index acf545199dbf1831fe8a73155c6e4d0db4047934..e26c0f11870ddbe801e572b1696af484231bf1dc 100644
--- a/Squirrel/ShipIt-main.m
+++ b/Squirrel/ShipIt-main.m
@@ -15,6 +15,8 @@
#include <spawn.h>
#include <sys/wait.h>
+#include <mach/mach.h>
+#include <servers/bootstrap.h>
#import "NSError+SQRLVerbosityExtensions.h"
#import "RACSignal+SQRLTransactionExtensions.h"
@@ -63,6 +65,28 @@ static BOOL clearInstallationAttempts(NSString *applicationIdentifier) {
return CFPreferencesSynchronize((__bridge CFStringRef)applicationIdentifier, kCFPreferencesCurrentUser, kCFPreferencesCurrentHost);
}
+// Drain the Mach service port registered via MachServices in the launchd
+// job dictionary before exit(0) so launchd sees no outstanding demand and
+// does not immediately respawn the job. bootstrap_check_in transfers the
+// receive right into this task, but that alone is not sufficient: launchd
+// tracks demand independently of the port's lifetime, so the queued
+// trigger message must be explicitly dequeued. On failure exits the
+// message is intentionally left queued so the KeepAlive respawn is
+// demand-backed while the launchd domain is in on-demand-only mode.
+static void drainMachServicePort(const char *serviceName) {
+ mach_port_t port = MACH_PORT_NULL;
+ if (bootstrap_check_in(bootstrap_port, serviceName, &port) != KERN_SUCCESS) return;
+
+ struct {
+ mach_msg_header_t header;
+ uint8_t body[4096];
+ } msg;
+ while (mach_msg(&msg.header, MACH_RCV_MSG | MACH_RCV_TIMEOUT,
+ 0, sizeof(msg), port, 0, MACH_PORT_NULL) == KERN_SUCCESS) {
+ mach_msg_destroy(&msg.header);
+ }
+}
+
// Waits for all instances of the target application (as described in the
// `request`) to exit, then sends completed.
static RACSignal *waitForTerminationIfNecessary(SQRLShipItRequest *request) {
@@ -206,12 +230,14 @@ static void installRequest(RACSignal *readRequestSignal, NSString *applicationId
if ([[error domain] isEqual:SQRLInstallerErrorDomain] && [error code] == SQRLInstallerErrorAppStillRunning) {
NSLog(@"Installation cancelled: %@", error);
clearInstallationAttempts(applicationIdentifier);
+ drainMachServicePort(applicationIdentifier.UTF8String);
exit(EXIT_SUCCESS);
} else {
NSLog(@"Installation error: %@", error);
exit(EXIT_FAILURE);
}
} completed:^{
+ drainMachServicePort(applicationIdentifier.UTF8String);
exit(EXIT_SUCCESS);
}];
}
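The interplay the commit message describes (a queued trigger message counts as standing demand, and draining it before a successful exit is what stops the respawn) can be sketched as a toy model. This only illustrates the semantics documented above; it is not real launchd behavior:

```python
class FakeLaunchdJob:
    """Toy model of on-demand-only mode as described in the patch:
    a message queued on the job's Mach service port is standing demand,
    and a respawn after exit happens iff demand remains."""

    def __init__(self):
        self.port_queue = []

    def trigger(self):
        # The post-SMJobSubmit XPC message: queued, never read by ShipIt.
        self.port_queue.append('trigger')

    def drain(self):
        # drainMachServicePort(): dequeue everything before exit(0).
        self.port_queue.clear()

    def job_exits(self, status):
        # In on-demand-only mode launchd honors only real demand, so the
        # exit status alone cannot cause a respawn here; a KeepAlive retry
        # after a failure exit rides on the still-queued trigger message.
        del status
        return 'respawn' if self.port_queue else 'stay-exited'
```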


@@ -8,9 +8,16 @@ from dbusmock import DBusTestCase
from lib.config import is_verbose_mode
def stop():
DBusTestCase.stop_dbus(DBusTestCase.system_bus_pid)
DBusTestCase.stop_dbus(DBusTestCase.session_bus_pid)
if hasattr(DBusTestCase, 'stop_dbus'):
if DBusTestCase.system_bus_pid is not None:
DBusTestCase.stop_dbus(DBusTestCase.system_bus_pid)
if DBusTestCase.session_bus_pid is not None:
DBusTestCase.stop_dbus(DBusTestCase.session_bus_pid)
else:
DBusTestCase.tearDownClass()
def start():
with sys.stdout if is_verbose_mode() \
@@ -21,6 +28,7 @@ def start():
DBusTestCase.start_session_bus()
DBusTestCase.spawn_server_template('notification_daemon', None, log)
if __name__ == '__main__':
start()
try:


@@ -0,0 +1,35 @@
#!/usr/bin/env python3
import os
import subprocess
import sys
# Resolve the on-disk locations of HEAD and packed-refs for this checkout so
# that BUILD.gn can register them as build graph inputs. In a plain clone both
# live under electron/.git/, but in a `git worktree` checkout .git is a file
# pointing at <common>/.git/worktrees/<name>; HEAD lives there while
# packed-refs is shared in the common dir. GN's read_file() does not follow
# that indirection, so we ask git to do it for us.
ELECTRON_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
def rev_parse(flag):
out = subprocess.check_output(['git', 'rev-parse', flag],
cwd=ELECTRON_DIR,
stderr=subprocess.PIPE,
universal_newlines=True).strip()
return out if os.path.isabs(out) else os.path.join(ELECTRON_DIR, out)
try:
git_dir = rev_parse('--git-dir')
common_dir = rev_parse('--git-common-dir')
except (subprocess.CalledProcessError, OSError) as e:
sys.stderr.write(f'get-git-ref-paths.py: not a git checkout '
f'(set override_electron_version): {e}\n')
sys.exit(1)
for p in (os.path.join(common_dir, 'packed-refs'),
os.path.join(git_dir, 'HEAD')):
print(os.path.normpath(p).replace('\\', '/'))


@@ -5,18 +5,36 @@ import { ElectronReleaseRepo } from './types';
const cachedTokens = Object.create(null);
const SUDOWOODO_OIDC_AUDIENCE = 'sudowoodo-broker';
async function getActionsIdToken (): Promise<string> {
const { ACTIONS_ID_TOKEN_REQUEST_URL, ACTIONS_ID_TOKEN_REQUEST_TOKEN } = process.env;
if (!ACTIONS_ID_TOKEN_REQUEST_URL || !ACTIONS_ID_TOKEN_REQUEST_TOKEN) {
throw new Error(
'ACTIONS_ID_TOKEN_REQUEST_URL/_TOKEN not set — the job needs `permissions: id-token: write` to mint an OIDC token for the sudowoodo exchange'
);
}
const { value } = await got(ACTIONS_ID_TOKEN_REQUEST_URL + '&audience=' + SUDOWOODO_OIDC_AUDIENCE, {
headers: {
authorization: 'Bearer ' + ACTIONS_ID_TOKEN_REQUEST_TOKEN
}
}).json<{ value: string }>();
return value;
}
async function ensureToken (repo: ElectronReleaseRepo) {
if (!cachedTokens[repo]) {
cachedTokens[repo] = await (async () => {
const { ELECTRON_GITHUB_TOKEN, SUDOWOODO_EXCHANGE_URL, SUDOWOODO_EXCHANGE_TOKEN } = process.env;
const { ELECTRON_GITHUB_TOKEN, SUDOWOODO_EXCHANGE_URL } = process.env;
if (ELECTRON_GITHUB_TOKEN) {
return ELECTRON_GITHUB_TOKEN;
}
if (SUDOWOODO_EXCHANGE_URL && SUDOWOODO_EXCHANGE_TOKEN) {
if (SUDOWOODO_EXCHANGE_URL) {
const idToken = await getActionsIdToken();
const resp = await got.post(SUDOWOODO_EXCHANGE_URL + '?repo=' + repo, {
headers: {
Authorization: SUDOWOODO_EXCHANGE_TOKEN
Authorization: 'Bearer ' + idToken
},
throwHttpErrors: false
});
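The getActionsIdToken flow above can be sketched in Python for reference. Assumptions: the GitHub Actions runtime variables shown in the diff and the 'sudowoodo-broker' audience; the `fetch` parameter is a hypothetical injection point for testing, not part of the original script:

```python
import json
import os
import urllib.request

AUDIENCE = 'sudowoodo-broker'  # must match what the exchange broker expects

def get_actions_id_token(fetch=None):
    """Mint a GitHub Actions OIDC ID token, mirroring getActionsIdToken
    above. The job needs `permissions: id-token: write`."""
    url = os.environ.get('ACTIONS_ID_TOKEN_REQUEST_URL')
    bearer = os.environ.get('ACTIONS_ID_TOKEN_REQUEST_TOKEN')
    if not url or not bearer:
        raise RuntimeError(
            'ACTIONS_ID_TOKEN_REQUEST_URL/_TOKEN not set; the job needs '
            '`permissions: id-token: write`')
    if fetch is None:
        def fetch(u, headers):
            req = urllib.request.Request(u, headers=headers)
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
    payload = fetch(url + '&audience=' + AUDIENCE,
                    {'Authorization': 'Bearer ' + bearer})
    return payload['value']
```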


@@ -32,6 +32,7 @@
#if BUILDFLAG(ENABLE_PDF_VIEWER)
#include "components/pdf/common/constants.h" // nogncheck
#include "components/pdf/common/pdf_util.h" // nogncheck
#include "shell/common/electron_constants.h"
#endif // BUILDFLAG(ENABLE_PDF_VIEWER)
@@ -217,4 +218,13 @@ void ElectronContentClient::AddContentDecryptionModules(
}
}
bool ElectronContentClient::IsFilePickerAllowedForCrossOriginSubframe(
const url::Origin& origin) {
#if BUILDFLAG(ENABLE_PDF_VIEWER)
return IsPdfExtensionOrigin(origin);
#else
return false;
#endif
}
} // namespace electron


@@ -9,6 +9,7 @@
#include <vector>
#include "content/public/common/content_client.h"
#include "url/origin.h"
namespace electron {
@@ -33,6 +34,8 @@ class ElectronContentClient : public content::ContentClient {
void AddContentDecryptionModules(
std::vector<content::CdmInfo>* cdms,
std::vector<media::CdmHostFilePath>* cdm_host_file_paths) override;
bool IsFilePickerAllowedForCrossOriginSubframe(
const url::Origin& origin) override;
};
} // namespace electron


@@ -321,8 +321,8 @@ void BaseWindow::OnWindowLeaveHtmlFullScreen() {
Emit("leave-html-full-screen");
}
void BaseWindow::OnWindowAlwaysOnTopChanged() {
Emit("always-on-top-changed", IsAlwaysOnTop());
void BaseWindow::OnWindowAlwaysOnTopChanged(const bool is_always_on_top) {
Emit("always-on-top-changed", is_always_on_top);
}
void BaseWindow::OnExecuteAppCommand(const std::string_view command_name) {


@@ -91,7 +91,7 @@ class BaseWindow : public gin_helper::TrackableObject<BaseWindow>,
void OnWindowLeaveFullScreen() override;
void OnWindowEnterHtmlFullScreen() override;
void OnWindowLeaveHtmlFullScreen() override;
void OnWindowAlwaysOnTopChanged() override;
void OnWindowAlwaysOnTopChanged(bool is_always_on_top) override;
void OnExecuteAppCommand(std::string_view command_name) override;
void OnTouchBarItemResult(const std::string& item_id,
const base::DictValue& details) override;


@@ -8,7 +8,6 @@
#include "extensions/browser/extension_registry.h"
#include "gin/data_object_builder.h"
#include "gin/object_template_builder.h"
#include "shell/browser/api/electron_api_extensions.h"
#include "shell/browser/electron_browser_context.h"
#include "shell/browser/extensions/electron_extension_system.h"
#include "shell/browser/javascript_environment.h"
@@ -17,17 +16,17 @@
#include "shell/common/gin_converters/gurl_converter.h"
#include "shell/common/gin_converters/value_converter.h"
#include "shell/common/gin_helper/dictionary.h"
#include "shell/common/gin_helper/handle.h"
#include "shell/common/gin_helper/promise.h"
#include "shell/common/node_util.h"
#include "v8/include/cppgc/allocation.h"
namespace electron::api {
gin::DeprecatedWrapperInfo Extensions::kWrapperInfo = {gin::kEmbedderNativeGin};
const gin::WrapperInfo Extensions::kWrapperInfo = {{gin::kEmbedderNativeGin},
gin::kElectronExtensions};
Extensions::Extensions(v8::Isolate* isolate,
ElectronBrowserContext* browser_context)
: browser_context_(browser_context) {
Extensions::Extensions(ElectronBrowserContext* browser_context)
: browser_context_{browser_context} {
extensions::ExtensionRegistry::Get(browser_context)->AddObserver(this);
}
@@ -36,11 +35,10 @@ Extensions::~Extensions() {
}
// static
gin_helper::Handle<Extensions> Extensions::Create(
v8::Isolate* isolate,
ElectronBrowserContext* browser_context) {
return gin_helper::CreateHandle(isolate,
new Extensions(isolate, browser_context));
Extensions* Extensions::Create(v8::Isolate* isolate,
ElectronBrowserContext* browser_context) {
return cppgc::MakeGarbageCollected<Extensions>(
isolate->GetCppHeap()->GetAllocationHandle(), browser_context);
}
v8::Local<v8::Promise> Extensions::LoadExtension(
@@ -152,8 +150,12 @@ gin::ObjectTemplateBuilder Extensions::GetObjectTemplateBuilder(
.SetMethod("getAllExtensions", &Extensions::GetAllExtensions);
}
const char* Extensions::GetTypeName() {
return "Extensions";
const gin::WrapperInfo* Extensions::wrapper_info() const {
return &kWrapperInfo;
}
const char* Extensions::GetHumanReadableName() const {
return "Electron / Extensions";
}
} // namespace electron::api


@@ -6,15 +6,9 @@
#define ELECTRON_SHELL_BROWSER_API_ELECTRON_API_EXTENSIONS_H_
#include "base/memory/raw_ptr.h"
#include "extensions/browser/extension_registry.h"
#include "extensions/browser/extension_registry_observer.h"
#include "gin/wrappable.h"
#include "shell/browser/event_emitter_mixin.h"
#include "shell/common/gin_helper/wrappable.h"
namespace gin_helper {
template <typename T>
class Handle;
} // namespace gin_helper
namespace electron {
@@ -22,19 +16,24 @@ class ElectronBrowserContext;
namespace api {
class Extensions final : public gin_helper::DeprecatedWrappable<Extensions>,
class Extensions final : public gin::Wrappable<Extensions>,
public gin_helper::EventEmitterMixin<Extensions>,
private extensions::ExtensionRegistryObserver {
public:
static gin_helper::Handle<Extensions> Create(
v8::Isolate* isolate,
ElectronBrowserContext* browser_context);
static Extensions* Create(v8::Isolate* isolate,
ElectronBrowserContext* browser_context);
// gin_helper::Wrappable
static gin::DeprecatedWrapperInfo kWrapperInfo;
// gin::Wrappable
static const gin::WrapperInfo kWrapperInfo;
const gin::WrapperInfo* wrapper_info() const override;
const char* GetHumanReadableName() const override;
gin::ObjectTemplateBuilder GetObjectTemplateBuilder(
v8::Isolate* isolate) override;
const char* GetTypeName() override;
const char* GetClassName() const { return "Extensions"; }
// Make public for cppgc::MakeGarbageCollected.
explicit Extensions(ElectronBrowserContext* browser_context);
~Extensions() override;
v8::Local<v8::Promise> LoadExtension(v8::Isolate* isolate,
const base::FilePath& extension_path,
@@ -57,17 +56,12 @@ class Extensions final : public gin_helper::DeprecatedWrappable<Extensions>,
Extensions(const Extensions&) = delete;
Extensions& operator=(const Extensions&) = delete;
protected:
explicit Extensions(v8::Isolate* isolate,
ElectronBrowserContext* browser_context);
~Extensions() override;
private:
content::BrowserContext* browser_context() const {
return browser_context_.get();
}
raw_ptr<content::BrowserContext> browser_context_;
const raw_ptr<content::BrowserContext> browser_context_;
};
} // namespace api


@@ -24,6 +24,7 @@
#include <windows.h>
#include "base/no_destructor.h"
#include "base/strings/utf_string_conversions.h"
#include "shell/browser/javascript_environment.h"
#include "shell/browser/notifications/win/windows_toast_activator.h"
#endif
@@ -76,6 +77,7 @@ Notification::Notification(gin::Arguments* args) {
if (args->GetNext(&opts)) {
opts.Get("id", &id_);
opts.Get("groupId", &group_id_);
opts.Get("groupTitle", &group_title_);
opts.Get("title", &title_);
opts.Get("subtitle", &subtitle_);
opts.Get("body", &body_);
@@ -108,7 +110,32 @@ gin_helper::Handle<Notification> Notification::New(
thrower.ThrowError("Cannot create Notification before app is ready");
return {};
}
return gin_helper::CreateHandle(thrower.isolate(), new Notification(args));
auto handle =
gin_helper::CreateHandle(thrower.isolate(), new Notification(args));
#if BUILDFLAG(IS_WIN)
constexpr size_t kMaxTagLength = 64;
auto* notif = handle.get();
if (!notif->id_.empty() &&
base::UTF8ToWide(notif->id_).length() > kMaxTagLength) {
thrower.ThrowError(
"Notification id exceeds Windows limit of 64 UTF-16 characters");
return {};
}
if (!notif->group_id_.empty() &&
base::UTF8ToWide(notif->group_id_).length() > kMaxTagLength) {
thrower.ThrowError(
"Notification groupId exceeds Windows limit of 64 UTF-16 characters");
return {};
}
if (!notif->group_title_.empty() && notif->group_id_.empty()) {
thrower.ThrowError("Notification groupTitle requires groupId to be set");
return {};
}
#endif
return handle;
}
// Setters
@@ -259,6 +286,7 @@ void Notification::Show() {
options.urgency = urgency_;
options.toast_xml = toast_xml_;
options.group_id = group_id_;
options.group_title = group_title_;
notification_->Show(options);
}
}
@@ -349,6 +377,7 @@ void Notification::FillObjectTemplate(v8::Isolate* isolate,
.SetMethod("close", &Notification::Close)
.SetProperty("id", &Notification::id)
.SetProperty("groupId", &Notification::group_id)
.SetProperty("groupTitle", &Notification::group_title)
.SetProperty("title", &Notification::title, &Notification::SetTitle)
.SetProperty("subtitle", &Notification::subtitle,
&Notification::SetSubtitle)
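The new Windows checks measure length after base::UTF8ToWide, i.e. in UTF-16 code units rather than characters, since characters outside the BMP take two code units. A small Python illustration of that distinction (validate_tag is a hypothetical helper, not Electron API):

```python
def utf16_code_units(s: str) -> int:
    """Length in UTF-16 code units, which is what the patch's
    base::UTF8ToWide(...).length() check is measuring."""
    return len(s.encode('utf-16-le')) // 2

MAX_TAG = 64  # the Windows toast tag/group limit enforced above

def validate_tag(s: str) -> bool:
    return utf16_code_units(s) <= MAX_TAG

# A non-BMP character such as an emoji counts as 2 code units,
# so fewer than 64 visible characters can still fail the check.
```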


@@ -85,6 +85,7 @@ class Notification final : public gin_helper::DeprecatedWrappable<Notification>,
// Prop Getters
const std::string& id() const { return id_; }
const std::string& group_id() const { return group_id_; }
const std::u16string& group_title() const { return group_title_; }
const std::u16string& title() const { return title_; }
const std::u16string& subtitle() const { return subtitle_; }
const std::u16string& body() const { return body_; }
@@ -117,6 +118,7 @@ class Notification final : public gin_helper::DeprecatedWrappable<Notification>,
private:
std::string id_;
std::string group_id_;
std::u16string group_title_;
std::u16string title_;
std::u16string subtitle_;
std::u16string body_;


@@ -7,9 +7,14 @@
#include <windows.h>
#include <wtsapi32.h>
#include "base/debug/alias.h"
#include "base/debug/crash_logging.h"
#include "base/logging.h"
#include "base/strings/string_number_conversions.h"
#include "base/win/windows_handle_util.h"
#include "base/win/windows_types.h"
#include "base/win/wrapped_window_proc.h"
#include "components/crash/core/common/crash_key.h"
#include "content/public/browser/browser_task_traits.h"
#include "content/public/browser/browser_thread.h"
#include "ui/gfx/win/hwnd_util.h"
@@ -20,6 +25,18 @@ namespace {
const wchar_t kPowerMonitorWindowClass[] = L"Electron_PowerMonitorHostWindow";
std::string DescribeMemoryState(void* address) {
MEMORY_BASIC_INFORMATION mbi = {};
if (!VirtualQuery(address, &mbi, sizeof(mbi))) {
return "VirtualQuery failed err=" + base::NumberToString(::GetLastError());
}
// Refs
// https://learn.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-memory_basic_information
return "state=" + base::NumberToString(mbi.State) +
" protect=" + base::NumberToString(mbi.Protect) +
" type=" + base::NumberToString(mbi.Type);
}
} // namespace
namespace api {
@@ -49,12 +66,35 @@ void PowerMonitor::InitPlatformSpecificMonitors() {
static_cast<HANDLE>(window_), DEVICE_NOTIFY_WINDOW_HANDLE);
PLOG_IF(ERROR, !power_notify_handle_)
<< "RegisterSuspendResumeNotification failed";
// On ARM64 Windows, UnregisterSuspendResumeNotification may
// crash in powrprof!PowerUnregisterSuspendResumeNotification by dereferencing
// the HPOWERNOTIFY handle. VirtualQuery on the handle address reveals
// whether it was ever a valid pointer.
// Use static crash keys (not SCOPED_) so they persist until the crash.
if (power_notify_handle_) {
static crash_reporter::CrashKeyString<16> reg_handle_key("pm-reg-handle");
static crash_reporter::CrashKeyString<64> reg_memstate_key(
"pm-reg-memstate");
reg_handle_key.Set(
base::NumberToString(base::win::HandleToUint32(power_notify_handle_)));
reg_memstate_key.Set(DescribeMemoryState(power_notify_handle_));
}
}
void PowerMonitor::DestroyPlatformSpecificMonitors() {
if (window_) {
WTSUnRegisterSessionNotification(window_);
if (power_notify_handle_) {
// Capture handle value and memory state at unregistration time.
// debug::Alias forces the raw value onto the stack.
auto handle_value = base::win::HandleToUint32(power_notify_handle_);
base::debug::Alias(&handle_value);
static crash_reporter::CrashKeyString<64> unreg_memstate_key(
"pm-unreg-memstate");
unreg_memstate_key.Set(DescribeMemoryState(power_notify_handle_));
UnregisterSuspendResumeNotification(power_notify_handle_);
power_notify_handle_ = nullptr;
}


@@ -1343,15 +1343,12 @@ v8::Local<v8::Value> Session::Cookies(v8::Isolate* isolate) {
return cookies_.Get(isolate);
}
v8::Local<v8::Value> Session::Extensions(v8::Isolate* isolate) {
api::Extensions* Session::Extensions(v8::Isolate* isolate) {
#if BUILDFLAG(ENABLE_ELECTRON_EXTENSIONS)
if (extensions_.IsEmptyThreadSafe()) {
v8::Local<v8::Value> handle;
handle = Extensions::Create(isolate, browser_context()).ToV8();
extensions_.Reset(isolate, handle);
}
if (!extensions_)
extensions_ = Extensions::Create(isolate, browser_context());
#endif
return extensions_.Get(isolate);
return extensions_.Get();
}
api::Protocol* Session::Protocol() {


@@ -57,6 +57,7 @@ struct PreloadScript;
namespace api {
class Extensions;
class NetLog;
class Protocol;
class ServiceWorkerContext;
@@ -166,7 +167,7 @@ class Session final : public gin::Wrappable<Session>,
v8::Local<v8::Promise> ClearSharedDictionaryCacheForIsolationKey(
const gin_helper::Dictionary& options);
v8::Local<v8::Value> Cookies(v8::Isolate* isolate);
v8::Local<v8::Value> Extensions(v8::Isolate* isolate);
api::Extensions* Extensions(v8::Isolate* isolate);
api::Protocol* Protocol();
api::ServiceWorkerContext* ServiceWorkerContext();
WebRequest* WebRequest(v8::Isolate* isolate);
@@ -213,7 +214,7 @@ class Session final : public gin::Wrappable<Session>,
// Cached gin_helper::Wrappable objects.
v8::TracedReference<v8::Value> cookies_;
v8::TracedReference<v8::Value> extensions_;
cppgc::Member<api::Extensions> extensions_;
cppgc::Member<api::Protocol> protocol_;
cppgc::Member<api::NetLog> net_log_;
cppgc::Member<api::ServiceWorkerContext> service_worker_context_;

Some files were not shown because too many files have changed in this diff.