Compare commits

...

1124 Commits

Author SHA1 Message Date
Lincoln Stein
efc7a262b7 chore(release): bump version to 6.11.0 2026-01-31 17:27:56 -05:00
Weblate (bot)
a873ce0175 ui: translations update from weblate (#8816)
* translationBot(ui): update translation (Italian)

Currently translated at 95.0% (2124 of 2235 strings)

translationBot(ui): update translation (Italian)

Currently translated at 94.5% (2114 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation files

Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2195 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2197 of 2235 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/

* translationBot(ui): update translation (Russian)

Currently translated at 60.0% (1341 of 2235 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/

---------

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Co-authored-by: DustyShoe <warukeichi@gmail.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-31 22:18:09 +00:00
Alexander Eichhorn
9ee7baaba5 fix(ui): convert reference image configs when switching main model base (#8811)
When switching between FLUX.2 (model-less reference images) and other
models that require IP adapter/Redux models, the reference image configs
were not being converted, leaving stale config types that hid or showed
the wrong UI controls.

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-31 22:04:23 +00:00
Weblate (bot)
fb5c43a905 ui: translations update from weblate (#8814)
* translationBot(ui): update translation (Italian)

Currently translated at 95.0% (2124 of 2235 strings)

translationBot(ui): update translation (Italian)

Currently translated at 94.5% (2114 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation files

Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2195 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2197 of 2235 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/

* translationBot(ui): update translation (Russian)

Currently translated at 60.0% (1341 of 2235 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/

---------

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Co-authored-by: DustyShoe <warukeichi@gmail.com>
2026-01-31 17:03:47 -05:00
Weblate (bot)
0f69f4bb9a ui: translations update from weblate (#8813)
* translationBot(ui): update translation (Italian)

Currently translated at 95.0% (2124 of 2235 strings)

translationBot(ui): update translation (Italian)

Currently translated at 94.5% (2114 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation files

Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2195 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2197 of 2235 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/

* translationBot(ui): update translation (Russian)

Currently translated at 60.0% (1341 of 2235 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/

---------

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Co-authored-by: DustyShoe <warukeichi@gmail.com>
2026-01-31 16:41:12 -05:00
Weblate (bot)
8a355e66fa ui: translations update from weblate (#8812)
* translationBot(ui): update translation (Italian)

Currently translated at 95.0% (2124 of 2235 strings)

translationBot(ui): update translation (Italian)

Currently translated at 94.5% (2114 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation files

Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2195 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2197 of 2235 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/

* translationBot(ui): update translation (Russian)

Currently translated at 60.0% (1341 of 2235 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/

---------

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Co-authored-by: DustyShoe <warukeichi@gmail.com>
2026-01-31 08:52:27 -05:00
blessedcoolant
b811602b38 fix(ui): Flux 2 Model Manager default settings not showing Guidance (#8810) 2026-01-31 13:41:05 +00:00
DustyShoe
0716b2fa75 Fix blur filter clipping by expanding padded bounds (#8773)
Co-authored-by: Alexander Eichhorn <alex@eichhorn.dev>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-30 20:56:51 +00:00
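
The fix above is described only at the level of its title. As a generic, hedged illustration of the "expand padded bounds" idea (not InvokeAI's actual filter code), a blur can be made clip-free by padding the working region by the blur's effective radius before filtering:

```python
from PIL import Image, ImageFilter

def blur_region(image: Image.Image, box: tuple[int, int, int, int], radius: float) -> Image.Image:
    """Blur `box` without edge clipping by padding the crop by the blur radius."""
    pad = int(3 * radius)  # roughly the Gaussian's effective support
    l, t, r, b = box
    padded = (max(0, l - pad), max(0, t - pad),
              min(image.width, r + pad), min(image.height, b + pad))
    blurred = image.crop(padded).filter(ImageFilter.GaussianBlur(radius))
    out = image.copy()
    # Paste back only the originally requested region.
    out.paste(blurred.crop((l - padded[0], t - padded[1],
                            r - padded[0], b - padded[1])), (l, t))
    return out
```
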
Alexander Eichhorn
4d71609115 fix(ui): remove scheduler selection for FLUX.2 Klein (#8808)
The scheduler dropdown is no longer shown for FLUX.2 Klein models.
The backend default (Euler) is used instead.

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-30 02:16:12 +00:00
blessedcoolant
0ecb903ae2 fix: Klein 2 Inpainting breaking when there is a reference image (#8803)
Co-authored-by: Alexander Eichhorn <alex@eichhorn.dev>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-30 02:12:41 +00:00
Alexander Eichhorn
736f4ffeb1 fix(ui): improve DyPE field ordering and add 'On' preset option (#8793)
* fix(ui): improve DyPE field ordering and add 'On' preset option

- Add ui_order to DyPE fields (100, 101, 102) to group them at bottom of node
- Change DyPEPreset from Enum to Literal type for proper frontend dropdown support
- Add ui_choice_labels for human-readable dropdown options
- Add new 'On' preset to enable DyPE regardless of resolution
- Fix frontend input field sorting to respect ui_order (unordered first, then ordered)
- Bump flux_denoise node version to 4.4.0

* Chore Ruff check fix

* fix(flux): remove .value from dype_preset logging

DyPEPreset is now a Literal type (string) instead of an Enum,
so .value is no longer needed.

* fix(tests): update DyPE tests for Literal type change

Update test imports and assertions to use string constants
instead of Enum attributes since DyPEPreset is now a Literal type.

* feat(flux): add DyPE scale and exponent controls to Linear UI

- Add dype_scale (λs) and dype_exponent (λt) sliders to generation settings
- Add Zod schemas and parameter types for DyPE scale/exponent
- Pass custom values from Linear UI to flux_denoise node
- Fix bug where DyPE was enabled even when preset was "off"
- Add enhanced logging showing all DyPE parameters when enabled

* fix(flux): apply DyPE scale/exponent and add metadata recall

- Fix DyPE scale and exponent parameters not being applied in frequency
  computation (compute_vision_yarn_freqs, compute_yarn_freqs now call
  get_timestep_mscale)
- Add metadata handlers for dype_scale and dype_exponent to enable
  recall from generated images
- Add i18n translations referencing existing parameter labels

* fix(flux): apply DyPE scale/exponent and add metadata recall

- Fix DyPE scale and exponent parameters not being applied in frequency
  computation (compute_vision_yarn_freqs, compute_yarn_freqs now call
  get_timestep_mscale)
- Add metadata handlers for dype_scale and dype_exponent to enable
  recall from generated images
- Add i18n translations referencing existing parameter labels

* feat(ui): show DyPE scale/exponent only when preset is "on"

- Hide scale/exponent controls in UI when preset is not "on"
- Only parse/recall scale/exponent from metadata when preset is "on"
- Prevents confusion where custom values override preset behavior

* fix(dype): only allow custom scale/exponent with 'on' preset

Presets (auto, 4k) now use their predefined values and ignore
any custom_scale/custom_exponent parameters. Only the 'on' preset
allows manual override of these values.

This matches the frontend UI behavior where the scale/exponent
fields are only shown when 'On' is selected.

* refactor(dype): rename 'on' preset to 'manual'

Rename the 'on' DyPE preset to 'manual' to better reflect its purpose:
allowing users to manually configure scale and exponent values.

Updated in:
- Backend presets (DYPE_PRESET_ON -> DYPE_PRESET_MANUAL)
- Frontend UI labels and options
- Redux slice type definitions
- Zod schema validation
- Tests

* refactor(dype): rename 'on' preset to 'manual'

Rename the 'on' DyPE preset to 'manual' to better reflect its purpose:
allowing users to manually configure scale and exponent values.

Updated in:
- Backend presets (DYPE_PRESET_ON -> DYPE_PRESET_MANUAL)
- Frontend UI labels and options
- Redux slice type definitions
- Zod schema validation
- Tests

* fix(dype): update remaining 'on' references to 'manual'

- Update docstrings, comments, and error messages to use 'manual' preset name
- Simplify FLUX graph builder to always send dype_scale/dype_exponent
- Fix UI condition to show DyPE controls for 'manual' preset

---------

Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-30 01:28:28 +00:00
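
As a side note on the Enum-to-Literal change above, a minimal sketch of what that looks like in Python, using the preset names as they end up after the rename in this PR (illustrative, not the exact InvokeAI definitions):

```python
from typing import Literal, get_args

# Plain strings, so frontend dropdowns and logging can use the value directly
# (no `.value` access as with the old Enum).
DyPEPreset = Literal["off", "auto", "4k", "manual"]

def describe_preset(preset: DyPEPreset) -> str:
    assert preset in get_args(DyPEPreset)
    return f"DyPE preset: {preset}"
```
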
Weblate (bot)
2102b43edc ui: translations update from weblate (#8807)
* translationBot(ui): update translation (Italian)

Currently translated at 95.0% (2124 of 2235 strings)

translationBot(ui): update translation (Italian)

Currently translated at 94.5% (2114 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation files

Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2195 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2197 of 2235 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/

---------

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-30 01:22:50 +00:00
Lincoln Stein
5801e59e2b Documentation: InvokeAI PR review and merge policy (#8795)
* docs: Add a PR review and merge policy

* doc(release): add policy on release candidates

* docs(CD/CI): add best practice for external components
2026-01-30 01:03:43 +00:00
Lincoln Stein
5fc950b745 Release Workflow: Fix workflow edge case (#8792)
* release(docker): fix workflow edge case that prevented CUDA build from completing

* bugfix(release): fix yaml syntax error

* bugfix(CI/CD): fix similar problem in typegen check
2026-01-30 01:02:24 +00:00
Weblate (bot)
63dec985cd ui: translations update from weblate (#8806)
* translationBot(ui): update translation (Italian)

Currently translated at 95.0% (2124 of 2235 strings)

translationBot(ui): update translation (Italian)

Currently translated at 94.5% (2114 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation files

Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2195 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2197 of 2235 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/

---------

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
2026-01-29 20:52:21 +00:00
Weblate (bot)
03cdd6df2e ui: translations update from weblate (#8804)
* translationBot(ui): update translation (Italian)

Currently translated at 95.0% (2124 of 2235 strings)

translationBot(ui): update translation (Italian)

Currently translated at 94.5% (2114 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation files

Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2195 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2197 of 2235 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/

---------

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-29 15:42:10 -05:00
Lincoln Stein
99f4070ce7 translationBot(ui): update translation (Italian) (#8805)
Currently translated at 98.2% (2197 of 2235 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
2026-01-29 15:36:44 -05:00
Alexander Eichhorn
cf07f8be14 Add new model type integration guide (#8779)
* Add new model type integration guide

Comprehensive documentation covering all steps required to integrate
a new model type into InvokeAI, including:

- Backend: Model manager, configs, loaders, invocations, sampling
- Frontend: Graph building, state management, parameter recall
- Metadata, starter models, and optional features (ControlNet, LoRA, IP-Adapter)

Uses FLUX.1, FLUX.2 Klein, SD3, SDXL, and Z-Image as reference implementations.

* docs: improve new model integration guide

- Move document to docs/contributing/ directory
- Fix broken TOC links by replacing '&' with 'and' in headings
- Add code example for text encoder config (section 2.4)
- Add text encoder loader example (new section 3.3)
- Expand text encoder invocation to show full conditioning flow (section 4.2)

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-29 13:45:29 +00:00
Alexander Eichhorn
1f0d92defc fix(ui): allow guidance slider to reach 1 for FLUX.2 Klein
FLUX.2 Klein models require guidance=1 (no CFG), but the slider minimum
was set to 2. Changed sliderMin from 2 to 1 to allow proper configuration.
2026-01-29 07:27:18 +05:30
blessedcoolant
68089ca688 fix(ui): use proper FLUX2 latent RGB factors for preview images (#8802)
## Summary

Replace placeholder zeros with actual 32-channel factors from ComfyUI
and add latent_rgb_bias support for improved FLUX2 denoising previews.

## Related Issues / Discussions

https://github.com/Comfy-Org/ComfyUI/blob/main/comfy/latent_formats.py

https://github.com/user-attachments/assets/dfbc3d81-b855-46b8-8217-50b140f13520

## QA Instructions

1. Generate an image with a FLUX2 model (e.g. FLUX.2 Kontext)
2. Observe the denoising preview during generation
3. Preview should now show more accurate colors instead of
washed-out/incorrect colors from the previous placeholder factors
2026-01-29 07:12:39 +05:30
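
For context on the latent-preview fix above, a rough sketch of how a latent-to-RGB projection of this kind typically works, assuming 32-channel FLUX.2 latents; the actual factor and bias values are model-specific and not reproduced here:

```python
import torch

def latents_to_preview_rgb(latents: torch.Tensor,
                           rgb_factors: torch.Tensor,  # assumed shape (32, 3)
                           rgb_bias: torch.Tensor) -> torch.Tensor:  # assumed shape (3,)
    # latents: (B, 32, H, W) -> preview: (B, H, W, 3)
    rgb = torch.einsum("bchw,cr->bhwr", latents, rgb_factors) + rgb_bias
    # Map roughly [-1, 1] values to displayable 8-bit pixels.
    return ((rgb.clamp(-1, 1) + 1.0) * 127.5).to(torch.uint8)
```
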
blessedcoolant
32e2132948 Merge branch 'main' into fix/flux2-latent-preview-factors 2026-01-29 07:07:50 +05:30
Alexander Eichhorn
bec3586930 fix(ui): use proper FLUX2 latent RGB factors for preview images
Replace placeholder zeros with actual 32-channel factors from ComfyUI
and add latent_rgb_bias support for improved FLUX2 denoising previews.
2026-01-29 02:22:17 +01:00
Weblate (bot)
8bf4d1ea59 ui: translations update from weblate (#8797)
* translationBot(ui): update translation (Italian)

Currently translated at 95.0% (2124 of 2235 strings)

translationBot(ui): update translation (Italian)

Currently translated at 94.5% (2114 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation files

Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.2% (2195 of 2235 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

---------

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-28 17:21:42 -05:00
Jonathan
fd7a3aebd2 Add input connectors to the FLUX model loader (#8785)
* Update flux_model_loader.py

Added input connection points to the model loader so that a model selection node can be used to pass Flux models in.

* typegen

* Fixed existing ruff error

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-28 16:49:16 -05:00
Alexander Eichhorn
72491e2153 Fix ref_images metadata format for FLUX Kontext recall (#8791)
Remove extra array wrapper when saving ref_images metadata for FLUX.2 Klein
and FLUX.1 Kontext reference images. The double-nested array [[...]] was
preventing recall from parsing the metadata correctly.
2026-01-27 08:44:44 -05:00
Lincoln Stein
3d0725072d Prep for 6.11.0.rc1 (#8771)
* chore(release): add flux.2-klein to What's New items & bump version

* doc(release): update the WhatsNew text

* chore(frontend): run lint:prettier and frontend-typegen
2026-01-27 05:40:09 +00:00
Alexander Eichhorn
0ae7392c81 fix(model_manager): detect Flux1/2 VAE by latent space dimensions instead of filename (#8790)
* fix(model_manager): detect Flux VAE by latent space dimensions instead of filename

VAE detection previously relied solely on filename pattern matching, which failed
for Flux VAE files with generic names like "ae.safetensors". Now probes the model's
decoder.conv_in weight shape to determine the latent space dimensions:
- 16 channels -> Flux VAE
- 4 channels -> SD/SDXL VAE (with filename fallback for SD1/SD2/SDXL distinction)

* fix(model_manager): add latent space probing for Flux2 VAE detection

Extend Flux2 VAE detection to also check for 32-dimensional latent space
(decoder.conv_in with 32 input channels) in addition to BatchNorm layers.
This provides more robust detection for Flux2 VAE files regardless of filename.

* Chore Ruff format

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-27 05:20:50 +00:00
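
A minimal sketch of the channel-probing heuristic described in the commit above, assuming a standalone VAE checkpoint with a diffusers-style `decoder.conv_in.weight` key (an assumption; the real detector also checks for BatchNorm layers for FLUX.2 and falls back to filename matching for SD variants):

```python
from safetensors.torch import load_file

def probe_vae_base(path: str) -> str:
    sd = load_file(path)
    conv_in = sd.get("decoder.conv_in.weight")
    if conv_in is None:
        raise ValueError("not a recognizable VAE checkpoint")
    latent_channels = conv_in.shape[1]  # weight shape: (out_ch, in_ch, kH, kW)
    if latent_channels == 32:
        return "flux2"  # FLUX.2 uses a 32-channel latent space
    if latent_channels == 16:
        return "flux"   # FLUX.1 uses 16 latent channels
    if latent_channels == 4:
        return "sd"     # SD1/SD2/SDXL share 4 channels; distinguish by filename
    raise ValueError(f"unexpected latent channel count: {latent_channels}")
```
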
Alexander Eichhorn
cff20b45f3 Feature: Add DyPE (Dynamic Position Extrapolation) support to FLUX models for improved high-resolution image generation (#8763)
* docs: add DyPE implementation plan for FLUX high-resolution generation

Add detailed plan for porting ComfyUI-DyPE (Dynamic Position Extrapolation)
to InvokeAI, enabling 4K+ image generation with FLUX models without
training. Estimated effort: 5-7 developer days.

* docs: update DyPE plan with design decisions

- Integrate DyPE directly into FluxDenoise (no separate node)
- Add 4K preset and "auto" mode for automatic activation
- Confirm FLUX Schnell support (same base resolution as Dev)

* docs: add activation threshold for DyPE auto mode

FLUX can handle resolutions up to ~1.5x natively without artifacts.
Set activation_threshold=1536 so DyPE only kicks in above that.

* feat(flux): implement DyPE for high-resolution generation

Add Dynamic Position Extrapolation (DyPE) support to FLUX models,
enabling artifact-free generation at 4K+ resolutions.

New files:
- invokeai/backend/flux/dype/base.py: DyPEConfig and scaling calculations
- invokeai/backend/flux/dype/rope.py: DyPE-enhanced RoPE functions
- invokeai/backend/flux/dype/embed.py: DyPEEmbedND position embedder
- invokeai/backend/flux/dype/presets.py: Presets (off, auto, 4k)
- invokeai/backend/flux/extensions/dype_extension.py: Pipeline integration

Modified files:
- invokeai/backend/flux/denoise.py: Add dype_extension parameter
- invokeai/app/invocations/flux_denoise.py: Add UI parameters

UI parameters:
- dype_preset: off | auto | 4k
- dype_scale: Custom magnitude override (0-8)
- dype_exponent: Custom decay speed override (0-1000)

Auto mode activates DyPE for resolutions > 1536px.

Based on: https://github.com/wildminder/ComfyUI-DyPE

* feat(flux): add DyPE preset selector to Linear UI

Add Linear UI integration for FLUX DyPE (Dynamic Position Extrapolation):

- Add ParamFluxDypePreset component with Off/Auto/4K options
- Integrate preset selector in GenerationSettingsAccordion for FLUX models
- Add state management (paramsSlice, types) for fluxDypePreset
- Add dype_preset to FLUX denoise graph builder and metadata
- Add translations for DyPE preset label and popover
- Add zFluxDypePresetField schema definition

Fix DyPE frequency computation:
- Remove incorrect mscale multiplication on frequencies
- Use only NTK-aware theta scaling for position extrapolation

* feat(flux): add DyPE preset to metadata recall

- Add FluxDypePreset handler to ImageMetadataHandlers
- Parse dype_preset from metadata and dispatch setFluxDypePreset on recall
- Add translation key metadata.dypePreset

* chore: remove dype-implementation-plan.md

Remove internal planning document from the branch.

* chore(flux): bump flux_denoise version to 4.3.0

Version bump for dype_preset field addition.

* chore: ruff check fix

* chore: ruff format

* Fix truncated DyPE label in advanced options UI

Shorten the label from "DyPE (High-Res)" to "DyPE" to prevent text truncation in the sidebar. The high-resolution context is preserved in the informational popover tooltip.

* Add DyPE preset to recall parameters in image viewer

The dype_preset metadata was being saved but not displayed in the Recall Parameters tab. Add FluxDypePreset handler to ImageMetadataActions so users can see and recall this parameter.

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
2026-01-26 23:54:44 -05:00
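
A hedged sketch of the preset logic implied by the DyPE commits above (auto mode only activates above the 1536 px threshold, and only the manual preset honors user overrides); the concrete scale/exponent numbers below are placeholders, not the shipped preset values:

```python
from dataclasses import dataclass

DYPE_AUTO_THRESHOLD = 1536  # FLUX handles ~1.5x its base resolution natively

@dataclass
class DyPEConfig:
    scale: float
    exponent: float

def resolve_dype_config(preset: str, width: int, height: int,
                        custom_scale: float | None = None,
                        custom_exponent: float | None = None) -> DyPEConfig | None:
    if preset == "off":
        return None
    if preset == "auto" and max(width, height) <= DYPE_AUTO_THRESHOLD:
        return None  # native resolution: no extrapolation needed
    if preset == "manual":
        # Only the manual preset allows overriding scale/exponent.
        return DyPEConfig(scale=custom_scale or 1.0, exponent=custom_exponent or 1.0)
    # "auto" above the threshold and "4k" use predefined values (placeholders here).
    return DyPEConfig(scale=2.0, exponent=4.0)
```
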
Alexander Eichhorn
b92c6ae633 feat(flux2): add FLUX.2 klein model support (#8768)
* WIP: feat(flux2): add FLUX 2 Kontext model support

- Add new invocation nodes for FLUX 2:
  - flux2_denoise: Denoising invocation for FLUX 2
  - flux2_klein_model_loader: Model loader for Klein architecture
  - flux2_klein_text_encoder: Text encoder for Qwen3-based encoding
  - flux2_vae_decode: VAE decoder for FLUX 2

- Add backend support:
  - New flux2 module with denoise and sampling utilities
  - Extended model manager configs for FLUX 2 models
  - Updated model loaders for Klein architecture

- Update frontend:
  - Extended graph builder for FLUX 2 support
  - Added FLUX 2 model types and configurations
  - Updated readiness checks and UI components

* fix(flux2): correct VAE decode with proper BN denormalization

FLUX.2 VAE uses Batch Normalization in the patchified latent space
(128 channels). The decode must:
1. Patchify latents from (B, 32, H, W) to (B, 128, H/2, W/2)
2. Apply BN denormalization using running_mean/running_var
3. Unpatchify back to (B, 32, H, W) for VAE decode

Also fixed image normalization from [-1, 1] to [0, 255].

This fixes washed-out colors in generated FLUX.2 Klein images.
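
A self-contained illustration of the decode path just described, assuming torch-style BatchNorm running statistics and a pixel-shuffle-style patchify; the real implementation may order patch elements differently, and `vae.decode` is assumed to return images in [-1, 1]:

```python
import torch
import torch.nn.functional as F

def flux2_decode(latents: torch.Tensor, bn_mean: torch.Tensor,
                 bn_var: torch.Tensor, vae, eps: float = 1e-5) -> torch.Tensor:
    # 1. Patchify (B, 32, H, W) -> (B, 128, H/2, W/2)
    patched = F.pixel_unshuffle(latents, downscale_factor=2)
    # 2. BN denormalization in the 128-channel patchified space
    mean = bn_mean.view(1, -1, 1, 1)
    std = torch.sqrt(bn_var.view(1, -1, 1, 1) + eps)
    patched = patched * std + mean
    # 3. Unpatchify back to (B, 32, H, W) and decode
    latents = F.pixel_shuffle(patched, upscale_factor=2)
    image = vae.decode(latents)
    # 4. Map decoder output from [-1, 1] to [0, 255]
    return ((image.clamp(-1, 1) + 1.0) * 127.5).to(torch.uint8)
```
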

* feat(flux2): add FLUX.2 Klein model support with ComfyUI checkpoint compatibility

- Add FLUX.2 transformer loader with BFL-to-diffusers weight conversion
- Fix AdaLayerNorm scale-shift swap for final_layer.adaLN_modulation weights
- Add VAE batch normalization handling for FLUX.2 latent normalization
- Add Qwen3 text encoder loader with ComfyUI FP8 quantization support
- Add frontend components for FLUX.2 Klein model selection
- Update configs and schema for FLUX.2 model types

* Chore Ruff

* Fix Flux1 vae probing

* Fix Windows Paths schema.ts

* Add 4B and 9B Klein to Starter Models.

* feat(flux2): add non-commercial license indicator for FLUX.2 Klein 9B

- Add isFlux2Klein9BMainModelConfig and isNonCommercialMainModelConfig functions
- Update MainModelPicker and InitialStateMainModelPicker to show license icon
- Update license tooltip text to include FLUX.2 Klein 9B

* feat(flux2): add Klein/Qwen3 variant support and encoder filtering

Backend:
- Add klein_4b/klein_9b variants for FLUX.2 Klein models
- Add qwen3_4b/qwen3_8b variants for Qwen3 encoder models
- Validate encoder variant matches Klein model (4B↔4B, 9B↔8B)
- Auto-detect Qwen3 variant from hidden_size during probing

Frontend:
- Show variant field for all model types in ModelView
- Filter Qwen3 encoder dropdown to only show compatible variants
- Update variant type definitions (zFlux2VariantType, zQwen3VariantType)
- Remove unused exports (isFluxDevMainModelConfig, isFlux2Klein9BMainModelConfig)

* Chore Ruff

* feat(flux2): add Klein 9B Base (undistilled) variant support

Distinguish between FLUX.2 Klein 9B (distilled) and Klein 9B Base (undistilled)
models by checking guidance_embeds in diffusers config or guidance_in keys in
safetensors. Klein 9B Base requires more steps but offers higher quality.

* feat(flux2): improve diffusers compatibility and distilled model support

Backend changes:
- Update text encoder layers from [9,18,27] to (10,20,30) matching diffusers
- Use apply_chat_template with system message instead of manual formatting
- Change position IDs from ones to zeros to match diffusers implementation
- Add get_schedule_flux2() with empirical mu computation for proper schedule shifting
- Add txt_embed_scale parameter for Qwen3 embedding magnitude control
- Add shift_schedule toggle for base (28+ steps) vs distilled (4 steps) models
- Zero out guidance_embedder weights for Klein models without guidance_embeds

UI changes:
- Clear Klein VAE and Qwen3 encoder when switching away from flux2 base
- Clear Qwen3 encoder when switching between different Klein model variants
- Add toast notification informing user to select compatible encoder

* feat(flux2): fix distilled model scheduling with proper dynamic shifting

- Configure scheduler with FLUX.2 Klein parameters from scheduler_config.json
  (use_dynamic_shifting=True, shift=3.0, time_shift_type="exponential")
- Pass mu parameter to scheduler.set_timesteps() for resolution-aware shifting
- Remove manual shift_schedule parameter (scheduler handles this automatically)
- Simplify get_schedule_flux2() to return linear sigmas only
- Remove txt_embed_scale parameter (no longer needed)

This matches the diffusers Flux2KleinPipeline behavior where the
FlowMatchEulerDiscreteScheduler applies dynamic timestep shifting
based on image resolution via the mu parameter.

Fixes 4-step distilled Klein 9B model quality issues.

* fix(ui): fix FLUX.1 graph building with posCondCollect node lookup

The posCondCollect node was created with getPrefixedId() which generates
a random suffix (e.g., 'pos_cond_collect:abc123'), but g.getNode() was
called with the plain string 'pos_cond_collect', causing a node lookup
failure.

Fix by declaring posCondCollect as a module-scoped variable and
referencing it directly instead of using g.getNode().

* Remove Flux2 Klein Base from Starter Models

* Remove Logging

* Add Default Values for Flux2 Klein and add variant as additional info to from_base

* Add migrations for the z-image qwen3 encoder without a variant value

* Add img2img, inpainting and outpainting support for FLUX.2 Klein

- Add flux2_vae_encode invocation for encoding images to FLUX.2 latents
- Integrate inpaint_extension into FLUX.2 denoise loop for proper mask handling
- Apply BN normalization to init_latents and noise for consistency in inpainting
- Use manual Euler stepping for img2img/inpaint to preserve exact timestep schedule
- Add flux2_img2img, flux2_inpaint, flux2_outpaint generation modes
- Expand starter models with FP8 variants, standalone transformers, and separate VAE/encoders
- Fix outpainting to always use full denoising (0-1) since strength doesn't apply
- Improve error messages in model loader with clear guidance for standalone models

* Add GGUF quantized model support and Diffusers VAE loader for FLUX.2 Klein

- Add Main_GGUF_Flux2_Config for GGUF-quantized FLUX.2 transformer models
- Add VAE_Diffusers_Flux2_Config for FLUX.2 VAE in diffusers format
- Add Flux2GGUFCheckpointModel loader with BFL-to-diffusers conversion
- Add Flux2VAEDiffusersLoader for AutoencoderKLFlux2
- Add FLUX.2 Klein 4B/9B hardware requirements to documentation
- Update starter model descriptions to clarify dependencies install together
- Update frontend schema for new model configs

* Fix FLUX.2 model detection and add FP8 weight dequantization support

- Improve FLUX.2 variant detection for GGUF/checkpoint models (BFL format keys)
- Fix guidance_embeds logic: distilled=False, undistilled=True
- Add FP8 weight dequantization for ComfyUI-style quantized models
- Prevent FLUX.2 models from being misidentified as FLUX.1
- Preserve user-editable fields (name, description, etc.) on model reidentify
- Improve Qwen3Encoder detection by variant in starter models
- Add defensive checks for tensor operations

* Chore ruff format

* Chore Typegen

* Fix FLUX.2 Klein 9B model loading by detecting hidden_size from weights

Previously num_attention_heads was hardcoded to 24, which is correct for
Klein 4B but causes size mismatches when loading Klein 9B checkpoints.

Now dynamically calculates num_attention_heads from the hidden_size
dimension of context_embedder weights:
- Klein 4B: hidden_size=3072 → num_attention_heads=24
- Klein 9B: hidden_size=4096 → num_attention_heads=32

Fixes both Checkpoint and GGUF loaders for FLUX.2 models.
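
A sketch of that head-count inference, assuming a fixed per-head dimension of 128 and a `context_embedder.weight` key whose output dimension is the hidden size (both assumptions; the actual loader code differs):

```python
import torch

HEAD_DIM = 128  # assumption: fixed per-head width for FLUX.2 Klein

def infer_num_attention_heads(state_dict: dict[str, torch.Tensor]) -> int:
    # 3072 -> 24 heads (Klein 4B), 4096 -> 32 heads (Klein 9B)
    hidden_size = state_dict["context_embedder.weight"].shape[0]
    return hidden_size // HEAD_DIM
```
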

* Only clear Qwen3 encoder when FLUX.2 Klein variant changes

Previously the encoder was cleared whenever switching between any Klein
models, even if they had the same variant. Now compares the variant of
the old and new model and only clears the encoder when switching between
different variants (e.g., klein_4b to klein_9b).

This allows users to switch between different Klein 9B models without
having to re-select the Qwen3 encoder each time.

* Add metadata recall support for FLUX.2 Klein parameters

The scheduler, VAE model, and Qwen3 encoder model were not being
recalled correctly for FLUX.2 Klein images. This adds dedicated
metadata handlers for the Klein-specific parameters.

* Fix FLUX.2 Klein denoising scaling and Z-Image VAE compatibility

- Apply exponential denoising scaling (exponent 0.2) to FLUX.2 Klein,
  matching FLUX.1 behavior for more intuitive inpainting strength
- Add isFlux1VAEModelConfig type guard to filter FLUX 1.0 VAEs only
- Restrict Z-Image VAE selection to FLUX 1.0 VAEs, excluding FLUX.2
  Klein 32-channel VAEs which are incompatible

* chore pnpm fix

* Add FLUX.2 Klein to starter bundles and documentation

- Add FLUX.2 Klein hardware requirements to quick start guide
- Create flux2_klein_bundle with GGUF Q4 model, VAE, and Qwen3 encoder
- Add "What's New" entry announcing FLUX.2 Klein support

* Add FLUX.2 Klein built-in reference image editing support

FLUX.2 Klein has native multi-reference image editing without requiring
a separate model (unlike FLUX.1 which needs a Kontext model).

Backend changes:
- Add Flux2RefImageExtension for encoding reference images with FLUX.2 VAE
- Apply BN normalization to reference image latents for correct scaling
- Use T-coordinate offset scale=10 like diffusers (T=10, 20, 30...)
- Concatenate reference latents with generated image during denoising
- Extract only generated portion in step callback for correct preview

Frontend changes:
- Add flux2_reference_image config type without model field
- Hide model selector for FLUX.2 reference images (built-in support)
- Add type guards to handle configs without model property
- Update validators to skip model validation for FLUX.2
- Add 'flux2' to SUPPORTS_REF_IMAGES_BASE_MODELS

* Chore windows path fix

* Add reference image resizing for FLUX.2 Klein

Resize large reference images to match BFL FLUX.2 sampling.py limits:
- Single reference: max 2024² pixels (~4.1M)
- Multiple references: max 1024² pixels (~1M)

Uses same scaling approach as BFL's cap_pixels() function.
2026-01-26 23:21:37 -05:00
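
The reference-image resizing described at the end of the commit above follows a pixel-cap approach; a generic sketch of that idea (modeled on the described cap_pixels() behavior, not copied from BFL's code):

```python
import math
from PIL import Image

def cap_pixels(image: Image.Image, max_pixels: int) -> Image.Image:
    """Downscale so width * height <= max_pixels, preserving aspect ratio."""
    w, h = image.size
    if w * h <= max_pixels:
        return image
    scale = math.sqrt(max_pixels / (w * h))
    return image.resize((max(1, int(w * scale)), max(1, int(h * scale))), Image.LANCZOS)

# e.g. cap_pixels(ref, 2024 ** 2) for a single reference image,
#      cap_pixels(ref, 1024 ** 2) when multiple references are supplied.
```
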
DustyShoe
729bae19a5 Feat(UI): Add search bar to image info code tabs and vertical margins for improved UX in the Recall Parameters tab. (#8786)
* Adjusted Search bar position and added padding in image info viewer.

* Minor bug fix with spaces being highlighted.
2026-01-25 22:38:43 +01:00
Copilot
fcc81f17a5 Limit automated issue closure to bug issues only (#8776)
* Initial plan

* Add only-labels parameter to limit automated issue closure to bugs only

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2026-01-21 02:43:59 +05:30
Weblate (bot)
27ae70a428 translationBot(ui): update translation files (#8767)
Updated by "Cleanup translation files" hook in Weblate.


Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2026-01-20 15:50:21 -05:00
Lincoln Stein
82819cdadc Add user survey section to README (#8766)
* Add user survey section to README

Added a section for new and returning users to take a survey.

* docs: add user survey link to WhatsNew

* Fix formatting issues in WhatsNew.tsx

---------

Co-authored-by: Alexander Eichhorn <alex@eichhorn.dev>
2026-01-16 03:32:16 +01:00
Alexander Eichhorn
b2b8820519 fix(model_manager): prevent Z-Image LoRAs from being misclassified as main models (#8754)
* fix(model_manager): prevent Z-Image LoRAs from being misclassified as main models

Z-Image LoRAs containing keys like `diffusion_model.context_refiner.*` were being
incorrectly classified as main checkpoint models instead of LoRAs. This happened
because the `_has_z_image_keys()` function checked for Z-Image specific keys
(like `context_refiner`) without verifying if the file was actually a LoRA.

Since main models have higher priority than LoRAs in the classification sort order,
the incorrect main model classification would win.

The fix adds detection of LoRA-specific weight suffixes (`.lora_down.weight`,
`.lora_up.weight`, `.lora_A.weight`, `.lora_B.weight`, `.dora_scale`) and returns
False if any are found, ensuring LoRAs are correctly classified.

* refactor(mm): simplify _has_z_image_keys with early return

Return True directly when a Z-Image key is found instead of using an
intermediate variable.
2026-01-14 22:35:17 -05:00
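
A minimal sketch of the guard described above; the helper name matches the commit, but the exact key patterns checked for the Z-Image main model are illustrative:

```python
LORA_SUFFIXES = (
    ".lora_down.weight",
    ".lora_up.weight",
    ".lora_A.weight",
    ".lora_B.weight",
    ".dora_scale",
)

def _has_z_image_keys(state_dict_keys: set[str]) -> bool:
    # Any LoRA/DoRA weight suffix means this is a LoRA, not a main model.
    if any(key.endswith(LORA_SUFFIXES) for key in state_dict_keys):
        return False
    # Otherwise, look for keys unique to the Z-Image main model.
    return any("context_refiner" in key for key in state_dict_keys)
```
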
Alexander Eichhorn
bb6c544603 feat(z-image): add Seed Variance Enhancer node and Linear UI integration (#8753)
* feat(z-image): add Seed Variance Enhancer node and Linear UI integration

Add a new conditioning node for Z-Image models that injects seed-based
noise into text embeddings to increase visual variation between seeds.

Backend:
- New invocation: z_image_seed_variance_enhancer.py
- Parameters: strength (0-2), randomize_percent (1-100%), seed

Frontend:
- State management in paramsSlice with selectors and reducers
- UI components in SeedVariance/ folder with toggle and sliders
- Integration in GenerationSettingsAccordion (Advanced Options)
- Graph builder integration in buildZImageGraph.ts
- Metadata recall handlers for remix functionality
- Translations and tooltip descriptions

Based on: github.com/Pfannkuchensack/invokeai-z-image-seed-variance-enhancer

* chore: ruff and typegen fix

* chore: ruff and typegen fix

* Revise seedVarianceStrength explanation

Updated description for seedVarianceStrength.

* Update description for seedVarianceStrength

* fix(z-image): correct noise range comment from [-1, 1] to [-1, 1)

torch.rand() generates [0, 1), so the scaled range excludes 1.
2026-01-12 20:36:21 +01:00
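
A hypothetical sketch of the seed-variance idea described above: perturb a fraction of the text-embedding values with seed-derived noise. Parameter names mirror the node's fields, but the exact math in the shipped node may differ:

```python
import torch

def enhance_seed_variance(embeds: torch.Tensor, seed: int,
                          strength: float = 0.5,
                          randomize_percent: float = 50.0) -> torch.Tensor:
    gen = torch.Generator(device=embeds.device).manual_seed(seed)
    # Select a random subset of embedding elements to perturb.
    mask = torch.rand(embeds.shape, generator=gen, device=embeds.device) < (randomize_percent / 100.0)
    # Noise in [-1, 1): torch.rand is [0, 1), so the scaled range excludes 1.
    noise = torch.rand(embeds.shape, generator=gen, device=embeds.device) * 2.0 - 1.0
    return torch.where(mask, embeds + strength * noise, embeds)
```
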
blessedcoolant
8a18914637 chore(CI/CD): Remove codeowners from /docs directory (#8737)
## Summary

This PR removes codeowners from the `/docs` directory, allowing any team
member with repo write permissions to review and approve PRs involving
documentation.

## Related Issues / Discussions

Documentation review is a shared responsibility.

## QA Instructions

None needed.

## Merge Plan

Simple merge.

## Checklist

- [X] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2026-01-12 15:19:22 +05:30
blessedcoolant
d66df9a0d0 Merge branch 'main' into lstein/chore/codeowners 2026-01-12 15:18:19 +05:30
DustyShoe
5c00684701 Feat(UI): Canvas high level transform smoothing (#8756)
* WIP transform smoothing controls

* Fix transform smoothing control typings

* High level resize algo for transformation

* ESLint fix

* format with prettier

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-11 15:48:27 -05:00
DustyShoe
d93ce6ac42 Fix(UI): Canvas numeric brush size (#8761)
* Fix for brush/eraser size not updating on up/down arrow click

* Made further improvements on brush size selection behavior

---------

Co-authored-by: Alexander Eichhorn <alex@eichhorn.dev>
2026-01-11 15:23:06 -05:00
blessedcoolant
13bf5feb4d Fix(UI): Error message for extract region (#8759)
## Summary

This PR fixes misleading popup message "Canvas is empty" when attempting
to extract region with empty mask layer.
Replaced with correct message "Mask layer is empty". Also redirected few
other popups to use translation file.


## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2026-01-11 21:53:48 +05:30
DustyShoe
53ab178edd Merge branch 'invoke-ai:main' into Fix(UI)--Error-messsage-for-extract-region 2026-01-11 02:13:35 +02:00
DustyShoe
2d8317f1aa Corrected error message and redirected popup messages to use translation file 2026-01-11 02:08:47 +02:00
Lincoln Stein
04f815638c chore(invocation stats): remove old dangling debug statement 2026-01-10 11:32:37 -05:00
Lincoln Stein
d6ad6a2dcb fix(invocation stats): Report delta VRAM for each invocation and fix reporting of RAM cache size 2026-01-10 11:32:37 -05:00
Hosted Weblate
784503e484 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2026-01-08 16:48:16 -05:00
RyoKoba
da2809b000 translationBot(ui): update translation (Japanese)
Currently translated at 99.6% (2155 of 2163 strings)

Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2026-01-08 16:48:16 -05:00
Weblate (bot)
53c34eb95e ui: translations update from weblate (#8748)
* translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2099 of 2132 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2130 of 2163 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2130 of 2163 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Japanese)

Currently translated at 99.6% (2155 of 2163 strings)

Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2103 of 2136 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): added translation (English (United Kingdom))

* translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/

---------

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-08 15:56:33 -05:00
Weblate (bot)
18fc822d37 ui: translations update from weblate (#8747)
* translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2099 of 2132 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2130 of 2163 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2130 of 2163 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Japanese)

Currently translated at 99.6% (2155 of 2163 strings)

Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2103 of 2136 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): added translation (English (United Kingdom))

* translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/

---------

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-08 20:31:33 +00:00
Lincoln Stein
89dc50bd7c Chore: Fix weblate merge conflicts (#8744)
* translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2099 of 2132 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2130 of 2163 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2130 of 2163 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Japanese)

Currently translated at 99.6% (2155 of 2163 strings)

Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2103 of 2136 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* chore: add weblate.ini file to gitignore

* Fix duplicate entry in ja.json

Removed duplicate 'jump' entry in Japanese locale.

---------

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
2026-01-08 15:25:11 -05:00
Lincoln Stein
d34655fd58 Fix(model manager): Improve calculation of Z-Image VAE working memory needs (#8740)
* Fix Z-Image VAE encode/decode to request working memory

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>

* fix: remove check for non-flux vae

* fix: remove check for non-flux vae: latents_to_image

* Remove conditional estimation tests

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2026-01-08 17:48:09 +00:00
Lincoln Stein
c1a8300e96 chore(release): bump development version to 6.10.0.post1 (#8745)
* chore(release): bump development version 6.10.0post1

* chore: fix version syntax
2026-01-08 12:42:11 -05:00
Lincoln Stein
9c5b2f6498 (chore) Bump to version 6.10.0 (#8742)
* (chore) Prep for v6.10.0rc2

* (chore) bump to version v6.10.0
2026-01-05 23:47:57 -05:00
Alexander Eichhorn
dbb4a07a8f feat(z-image): add add_noise option to Z-Image Denoise (#8739)
* feat(z-image): add `add_noise` option to Z-Image Denoise

Add the same `add_noise` option that exists in FLUX Denoise to Z-Image Denoise.
When set to false, no noise is added to the input latents during image-to-image,
allowing for more controlled transformations.
2026-01-05 21:24:44 -05:00
Lincoln Stein
f66a1a38c8 Merge branch 'main' into lstein/chore/codeowners 2026-01-05 15:16:33 -05:00
Alexander Eichhorn
be2635161c Feature: z-image + metadata node (#8733)
## Summary

Add a new "Denoise - Z-Image + Metadata" node
(`ZImageDenoiseMetaInvocation`) that extends the Z-Image denoise node
with metadata output for image recall functionality.

This follows the same pattern as existing `denoise_latents_meta`
(SD1.5/SDXL) and `flux_denoise_meta` (FLUX) nodes.

**Captured metadata:**
- `width` / `height`
- `steps`
- `guidance` (guidance_scale)
- `denoising_start` / `denoising_end`
- `scheduler`
- `model` (transformer)
- `seed`
- `loras` (if applied)

## Related Issues / Discussions

Enables metadata recall for Z-Image generated images, similar to
existing support for SD1.5, SDXL, and FLUX models.

## QA Instructions

1. Create a workflow using the new "Denoise - Z-Image + Metadata" node
2. Connect the metadata output to a "Save Image" node
3. Generate an image
4. Check that metadata is saved with the image (visible in image info
panel)
5. Verify all generation parameters are captured correctly

## Merge Plan

Requires `feature/zimage-scheduler-support` #8705 branch to be merged
first (base branch).

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2026-01-05 01:56:22 +01:00
Alexander Eichhorn
384a1a689d Merge branch 'main' into z-image_metadata_node 2026-01-05 01:50:28 +01:00
Lincoln Stein
0021404639 chore: remove dangling debug statements (#8738) 2026-01-05 00:47:46 +00:00
Alexander Eichhorn
a05a626644 Fix typegen 2026-01-05 01:42:49 +01:00
Alexander Eichhorn
97b82d752e Add configurable model cache timeout for automatic memory management (#8693)
## Summary

Adds `model_cache_keep_alive_min` config option (minutes, default 5) to
automatically clear model cache after inactivity. Addresses memory
contention when running InvokeAI alongside other GPU applications like
Ollama.

**Implementation:**
- **Config**: New `model_cache_keep_alive_min` field in
`InvokeAIAppConfig` with 5-minute default
- **ModelCache**: Activity tracking on get/lock/unlock/put operations,
threading.Timer for scheduled clearing
- **Thread safety**: Double-check pattern handles race conditions,
daemon threads for clean shutdown
- **Integration**: ModelManagerService passes config to cache, calls
shutdown() on stop
- **Logging**: Smart timeout logging that only shows messages when
unlocked models are actually cleared
- **Tests**: Comprehensive unit tests with properly configured mock
logger

**Usage:**
```yaml
# invokeai.yaml
model_cache_keep_alive_min: 10  # Clear after 10 minutes idle
model_cache_keep_alive_min: 0   # Set to 0 for indefinite caching (old behavior)
```

**Key Behavior:**
- **Default timeout**: 5 minutes - models are automatically cleared
after 5 minutes of inactivity
- Clearing uses same logic as "Clear Model Cache" button (make_room with
1000GB)
- Only clears **unlocked** models (respects models actively in use
during generation)
- Timeout message only appears when models are actually cleared
- Debug logging available for timeout events when no action is taken
- Prevents misleading log entries during active generation
- Users can set to 0 to restore indefinite caching behavior
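
A minimal sketch of the keep-alive mechanism outlined above (activity tracking plus a re-armed `threading.Timer`); a real ModelCache would also skip locked models and reuse the existing cache-clearing logic, and all names here are illustrative:

```python
import threading
import time
from typing import Optional

class IdleClearingCache:
    def __init__(self, keep_alive_min: float = 5.0):
        self._keep_alive_s = keep_alive_min * 60
        self._last_activity = time.monotonic()
        self._timer: Optional[threading.Timer] = None
        self._lock = threading.Lock()

    def touch(self) -> None:
        """Record activity (get/lock/unlock/put) and re-arm the idle timer."""
        with self._lock:
            self._last_activity = time.monotonic()
            if self._keep_alive_s <= 0:
                return  # 0 disables the timeout (previous behavior)
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self._keep_alive_s, self._maybe_clear)
            self._timer.daemon = True  # don't block interpreter shutdown
            self._timer.start()

    def _maybe_clear(self) -> None:
        # Double-check idle time when the timer fires, in case activity occurred.
        with self._lock:
            if time.monotonic() - self._last_activity >= self._keep_alive_s:
                self._clear_unlocked_models()

    def _clear_unlocked_models(self) -> None:
        # Placeholder for the same logic as the "Clear Model Cache" button.
        pass
```
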

## Related Issues / Discussions

Addresses enhancement request for automatic model unloading from memory
after inactivity period.

## QA Instructions

1. **Test default behavior (5-minute timeout)**:
   - Start InvokeAI without explicit config
   - Run a generation
   - Wait 6 minutes with no activity
   - Check logs for "Clearing X unlocked model(s) from cache" message
   - Verify cache is empty

2. **Test custom timeout**:
   - Set `model_cache_keep_alive_min: 0.1` (6 seconds) in config
   - Load a model (run generation)
   - Wait 7+ seconds with no activity
   - Check logs for "Clearing X unlocked model(s) from cache" message
   - Verify cache is empty

3. **Test no timeout (old behavior)**:
   - Set `model_cache_keep_alive_min: 0` in config
   - Run generations and wait extended periods
   - Verify models remain cached indefinitely

4. **Test during active use**:
   - Run continuous generations with any timeout setting
- Verify no timeout messages appear during active use (models are
locked)
- After generation completes, wait for timeout and verify unlocked
models are cleared

## Merge Plan

N/A - Additive change with sensible defaults. The 5-minute default
enables automatic memory management while remaining practical for
typical workflows.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [ ] _Changes to a redux slice have a corresponding migration_
- [x] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_

<details>

<summary>Original prompt</summary>

> 
> ----
> 
> *This section details on the original issue you should resolve*
> 
> <issue_title>[enhancement]: option to unload from memory
</issue_title>
> <issue_description>### Is there an existing issue for this?
> 
> - [X] I have searched the existing issues
> 
> ### Contact Details
> 
> ### What should this feature add?
> 
> a command line option to unload a model from RAM after a defined period
of time
> 
> ### Alternatives
> 
> running as a container and using Sablier to shut down the container
after some time; this has the downside that if traffic isn't seen through
the web interface, the container will be shut down even if jobs are running.
> 
> ### Additional Content
> 
> _No response_</issue_description>
> 
> ## Comments on the Issue (you are @copilot in this section)
> 
> <comments>
> <comment_new><author>@lstein</author><body>
> I am reopening this issue. I'm running ollama and invoke on the same
server and I find their memory requirements are frequently clashing. It
would be helpful to offer users the option to have the model cache
automatically cleared after a fixed amount of inactivity. I would
suggest the following:
> 
> 1. Introduce a new config file option `model_cache_keep_alive` which
specifies, in minutes, how long to keep a model in cache between
generations. The default is 0, which means to keep the model in cache
indefinitely, as is currently the case.
> 2. If no model generations occur within the timeout period, the model
cache is cleared using the same backend code as the "Clear Model Cache"
button in the queue tab.
> 
> I'm going to assign this to GitHub copilot, partly to test how well it
can manage the Invoke code base. </body></comment_new>
> </comments>
> 


</details>



- Fixes invoke-ai/InvokeAI#6856

2026-01-05 01:41:40 +01:00
Alexander Eichhorn
f29820a7ba feat(ui): improve Z-Image model selector UX with auto-clearing conflicts
Instead of disabling mutually exclusive model selectors, automatically
clear conflicting models when a new selection is made. This applies to
VAE, Qwen3 Encoder, and Qwen3 Source selectors - selecting one now
clears the others. Also applies same logic during metadata recall.
2026-01-05 00:57:45 +01:00
Lincoln Stein
47a634d8fb fix(naming style) change name of model_cache_keep_alive to model_cache_keep_alive_min 2026-01-04 17:36:55 -05:00
Lincoln Stein
768f3dbde0 chore: remove codeowners from /docs directory 2026-01-04 17:08:45 -05:00
Alexander Eichhorn
1ca589ea10 Merge branch 'main' into z-image_metadata_node 2026-01-04 23:07:06 +01:00
Jonathan
3a21e7699f Merge branch 'main' into copilot/add-unload-model-option 2026-01-04 10:22:44 -05:00
Lincoln Stein
56fd7bc7c4 docs(z-image) add Z-Image requirements and starter bundle (#8734)
* docs(z-image) add minimum requirements for Z-Image and create Z-Image starter bundle

* fix(model manager) add flux VAE to Z-Image bundle

* docs(model manager) remove out-of-date model info link

* chore: fix frontend checks

* chore: lint:prettier

* docs(model manager): clarify minimum hardware for z-image turbo

* (fix) add flux VAE to ZIT starter dependencies & tweak UI docs
2026-01-04 10:17:26 -05:00
Lincoln Stein
2425005aad chore: typegen update 2026-01-04 09:28:43 -05:00
Lincoln Stein
2ccadd1834 Merge branch 'main' into z-image_metadata_node 2026-01-04 07:03:25 -05:00
Lincoln Stein
5cef8bd364 (fix) default timeout to 0 min, to disable timeout feature and restore previous default behavior 2026-01-04 07:01:01 -05:00
Lincoln Stein
8a6d593fe8 Merge branch 'main' into copilot/add-unload-model-option 2026-01-03 22:48:36 -05:00
Lincoln Stein
14309562b8 chore: typegen 2026-01-03 22:48:19 -05:00
Alexander Eichhorn
9f8f9965f9 fix(model-loaders): add local_files_only=True to prevent network requests (#8735) 2026-01-03 22:21:42 -05:00
Jonathan
44a21a348d Merge branch 'main' into copilot/add-unload-model-option 2026-01-03 22:00:11 -05:00
Alexander Eichhorn
81d83d5aab Merge branch 'main' into z-image_metadata_node 2026-01-03 23:06:42 +01:00
Alexander Eichhorn
d99707fdcb fix(ui): fix z-image scheduler recall by reordering metadata handlers
Move Scheduler handler after MainModel in ImageMetadataHandlers so that
base-dependent recall logic (z-image scheduler) works correctly. The
Scheduler handler checks `base === 'z-image'` before dispatching the
z-image scheduler action, but this check failed when Scheduler ran
before MainModel was recalled.
2026-01-03 22:33:18 +01:00
dunkeroni
252dd5b426 Add @dunkeroni as code owner for some paths (#8732)
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-03 20:56:21 +00:00
Alexander Eichhorn
f922f6c634 Update CODEOWNERS (#8731)
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-03 20:53:16 +00:00
Alexander Eichhorn
be0cbe046c feat(flux): add scheduler selection for Flux models (#8704)
* feat(flux): add scheduler selection for Flux models

Add support for alternative diffusers Flow Matching schedulers:
- Euler (default, 1st order)
- Heun (2nd order, better quality, 2x slower)
- LCM (optimized for few steps)

Backend:
- Add schedulers.py with scheduler type definitions and class mapping
- Modify denoise.py to accept optional scheduler parameter
- Add scheduler InputField to flux_denoise invocation (v4.2.0)

Frontend:
- Add fluxScheduler to Redux state and paramsSlice
- Create ParamFluxScheduler component for Linear UI
- Add scheduler to buildFLUXGraph for generation

* fix(flux): prevent progress percentage overflow with LCM scheduler

LCM scheduler may have more internal timesteps than user-facing steps,
causing user_step to exceed total_steps. This resulted in progress
percentage > 1.0, which caused a pydantic validation error.

Fix: Only call step_callback when user_step <= total_steps.

* Ruff format

* fix(flux): remove initial step-0 callback for consistent step count

Remove the initial step_callback at step=0 to match SD/SDXL behavior.
Previously Flux showed N+1 steps (step 0 + N denoising steps), while
SD/SDXL showed only N steps. Now all models display N steps consistently.

* feat(flux): add scheduler support with metadata recall

- Handle LCM scheduler by using num_inference_steps instead of custom sigmas
- Fix progress bar to show user-facing steps instead of internal scheduler steps
- Pass scheduler parameter to Flux denoise node in graph builder
- Add model-aware metadata recall for Flux scheduler

---------

Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-03 15:52:00 -05:00
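A minimal sketch of the progress-callback guard described in the PR above. The loop and callback signature are simplified assumptions, not the actual `flux_denoise` code:

```python
def report_progress(timesteps, total_steps: int, step_callback) -> None:
    # LCM can expose more internal timesteps than user-facing steps; only
    # report progress while user_step <= total_steps so the percentage sent
    # to the UI never exceeds 1.0 (which would fail pydantic validation).
    for user_step, _timestep in enumerate(timesteps, start=1):
        # ... one denoising step would run here ...
        if user_step <= total_steps:
            step_callback(user_step, total_steps)
```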
Jonathan
e39b880f6d Merge branch 'main' into copilot/add-unload-model-option 2026-01-03 15:41:59 -05:00
Jonathan
4f8ec07d2f Update CODEOWNERS (#8728)
Adding @JPPhoto to CODEOWNERS

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-03 20:40:27 +00:00
Alexander Eichhorn
689953e3cf Feature/zimage scheduler support (#8705)
* feat(flux): add scheduler selection for Flux models

Add support for alternative diffusers Flow Matching schedulers:
- Euler (default, 1st order)
- Heun (2nd order, better quality, 2x slower)
- LCM (optimized for few steps)

Backend:
- Add schedulers.py with scheduler type definitions and class mapping
- Modify denoise.py to accept optional scheduler parameter
- Add scheduler InputField to flux_denoise invocation (v4.2.0)

Frontend:
- Add fluxScheduler to Redux state and paramsSlice
- Create ParamFluxScheduler component for Linear UI
- Add scheduler to buildFLUXGraph for generation

* feat(z-image): add scheduler selection for Z-Image models

Add support for alternative diffusers Flow Matching schedulers for Z-Image:
- Euler (default) - 1st order, optimized for Z-Image-Turbo (8 steps)
- Heun (2nd order) - Better quality, 2x slower
- LCM - Optimized for few-step generation

Backend:
- Extend schedulers.py with Z-Image scheduler types and mapping
- Add scheduler InputField to z_image_denoise invocation (v1.3.0)
- Refactor denoising loop to support diffusers schedulers

Frontend:
- Add zImageScheduler to Redux state in paramsSlice
- Create ParamZImageScheduler component for Linear UI
- Add scheduler to buildZImageGraph for generation

* fix ruff check

* fix(schedulers): prevent progress percentage overflow with LCM scheduler

LCM scheduler may have more internal timesteps than user-facing steps,
causing user_step to exceed total_steps. This resulted in progress
percentage > 1.0, which caused a pydantic validation error.

Fix: Only call step_callback when user_step <= total_steps.

* Ruff format

* fix(schedulers): remove initial step-0 callback for consistent step count

Remove the initial step_callback at step=0 to match SD/SDXL behavior.
Previously Flux/Z-Image showed N+1 steps (step 0 + N denoising steps),
while SD/SDXL showed only N steps. Now all models display N steps
consistently in the server log.

* feat(z-image): add scheduler support with metadata recall

- Handle LCM scheduler by using num_inference_steps instead of custom sigmas
- Fix progress bar to show user-facing steps instead of internal scheduler steps
- Pass scheduler parameter to Z-Image denoise node in graph builder
- Add model-aware metadata recall for Flux and Z-Image schedulers

---------

Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-03 20:37:04 +00:00
Lincoln Stein
61c2589e39 (chore) update WhatsNew translation text (#8727) 2026-01-03 20:31:50 +00:00
Lincoln Stein
8cf4c6944a (style) ruff fix 2026-01-03 14:54:15 -05:00
Lincoln Stein
db228ddc4f (style) add @record_activity and @synchronized to locked methods 2026-01-03 14:52:31 -05:00
Lincoln Stein
858c94b575 Merge remote-tracking branch 'refs/remotes/origin/copilot/add-unload-model-option' into copilot/add-unload-model-option 2026-01-03 14:26:20 -05:00
Alexander Eichhorn
252794d717 ruff fix 2026-01-03 19:50:08 +01:00
Alexander Eichhorn
7847ccea13 fix typegen 2026-01-03 19:48:11 +01:00
Alexander Eichhorn
1bcf589d19 feat(z-image): add Z-Image Denoise + Metadata node
Add ZImageDenoiseMetaInvocation that extends ZImageDenoiseInvocation
with metadata output for image recall. Captures generation parameters
including steps, guidance, scheduler, seed, model, and LoRAs.
2026-01-03 18:28:17 +01:00
Alexander Eichhorn
132a48497b feat(z-image): add scheduler support with metadata recall
- Handle LCM scheduler by using num_inference_steps instead of custom sigmas
- Fix progress bar to show user-facing steps instead of internal scheduler steps
- Pass scheduler parameter to Z-Image denoise node in graph builder
- Add model-aware metadata recall for Flux and Z-Image schedulers
2026-01-03 17:11:05 +01:00
Jonathan
f49e1b8dae Merge branch 'main' into copilot/add-unload-model-option 2026-01-01 21:31:08 -05:00
Jonathan
e7233efb79 Merge branch 'main' into feature/zimage-scheduler-support 2026-01-01 21:30:44 -05:00
Alexander Eichhorn
3b2d2ef10a fix(gguf): ensure dequantized tensors are on correct device for MPS (#8713)
When using GGUF-quantized models on MPS (Apple Silicon), the
dequantized tensors could end up on a different device than the
other operands in math operations, causing "Expected all tensors
to be on the same device" errors.

This fix ensures that after dequantization, tensors are moved to
the same device as the other tensors in the operation.

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-02 00:45:50 +00:00
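A minimal sketch of the device fix described in the commit above. `get_dequantized_tensor()` is a hypothetical helper name used for illustration; the point is moving the dequantized tensor onto the other operand's device before the math op:

```python
import torch


def add_with_dequantized(ggml_tensor, other: torch.Tensor) -> torch.Tensor:
    dequantized = ggml_tensor.get_dequantized_tensor()  # hypothetical helper
    # On MPS the dequantized tensor can land on a different device than the
    # other operand; align devices to avoid "Expected all tensors to be on
    # the same device" errors.
    return dequantized.to(device=other.device) + other
```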
Alexander Eichhorn
66974841f1 fix(model-manager): support offline Qwen3 tokenizer loading for Z-Image (#8719)
Add local_files_only fallback for Qwen3 tokenizer loading in both
Checkpoint and GGUF loaders. This ensures Z-Image models can generate
images offline after the initial tokenizer download.

The tokenizer is now loaded with local_files_only=True first, falling
back to network download only if files aren't cached yet.

Fixes #8716

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-02 00:40:08 +00:00
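A minimal sketch of the offline-first tokenizer loading described above. `local_files_only` is a standard `transformers` argument; the repo id below is the `DEFAULT_TOKENIZER_SOURCE` mentioned in a later commit, and the error handling is simplified:

```python
from transformers import AutoTokenizer


def load_qwen3_tokenizer(repo_id: str = "Qwen/Qwen3-4B"):
    try:
        # Prefer whatever is already in the local HF cache so generation
        # keeps working without network access.
        return AutoTokenizer.from_pretrained(repo_id, local_files_only=True)
    except OSError:
        # Nothing cached yet: fall back to a one-time network download.
        return AutoTokenizer.from_pretrained(repo_id, local_files_only=False)
```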
Lincoln Stein
87608ade45 (chore) update config docstrings 2026-01-01 19:35:15 -05:00
Weblate (bot)
1e83aeeb79 ui: translations update from weblate (#8725)
* translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2099 of 2132 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2130 of 2163 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2130 of 2163 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Japanese)

Currently translated at 99.6% (2155 of 2163 strings)

Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2103 of 2136 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): added translation (English (United Kingdom))

---------

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-02 00:35:09 +00:00
Alex Yankov
1c76d295a2 fix(docs) Bump versions in mkdocs github actions (#8722) 2026-01-01 19:31:33 -05:00
Lincoln Stein
384250ff8c Merge branch 'main' into copilot/add-unload-model-option 2026-01-01 19:28:45 -05:00
Lincoln Stein
6c3ce8e7e9 Merge branch 'main' into feature/zimage-scheduler-support 2026-01-01 19:08:56 -05:00
Weblate (bot)
d658ef4322 ui: translations update from weblate (#8724)
* translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2099 of 2132 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2130 of 2163 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2130 of 2163 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Japanese)

Currently translated at 99.6% (2155 of 2163 strings)

Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2103 of 2136 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

---------

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
2025-12-30 13:21:36 -05:00
Alexander Eichhorn
8d880ef5a0 fix(schedulers): remove initial step-0 callback for consistent step count
Remove the initial step_callback at step=0 to match SD/SDXL behavior.
Previously Flux/Z-Image showed N+1 steps (step 0 + N denoising steps),
while SD/SDXL showed only N steps. Now all models display N steps
consistently in the server log.
2025-12-29 12:39:39 +01:00
Lincoln Stein
c6775cc999 (style) ruff and typegen updates 2025-12-28 22:40:36 -05:00
Lincoln Stein
d44b99ae0a Merge branch 'main' into copilot/add-unload-model-option 2025-12-28 22:39:45 -05:00
blessedcoolant
1675712094 Implement PBR Maps Node (#8700)
* feat: Implement PBR Maps Generation Node

* feat(ui): Add PBR Maps Generation to UI

* chore: fix typegen checks

* chore: possible fix for nvidia 5000 series cards

* fix: Use safetensor models for PBR maps instead of pickles.

* fix: incorrect naming of upconv_block for PBR network

* fix: incorrect naming of displacement map variable

* chore: add relevant docs to the PBR generate function

* fix: clear cuda cache after loading state_dict for PBR maps

* fix: load torch_device only once as multiple models are loaded

* chore(ui): update the filter icon for PBR to CubeBold

More relevant

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-29 02:11:46 +00:00
Kyle H
2924d052c5 Fix an issue with regional guidance and multiple quick-queued generations after moving bbox (#8613)
* Fix an issue with multiple quick-queued generations after moving bbox

After moving the canvas bbox we still handed out the previous regional-guidance mask because only two parts of the system knew anything had changed. The adapter’s
cache key doesn’t include the bbox, so the next few graph builds reused the stale mask from before the move; if the user queued several runs back‑to‑back, every
background enqueue except the last skipped rerasterizing altogether because another raster job was still in flight. The fix makes the canvas manager invalidate each
region adapter’s cached mask whenever the bbox (or a related setting) changes, and—if a reraster is already running—queues up and waits instead of bailing. Now the
first run after a bbox edit forces a new mask, and rapid-fire enqueues just wait their turn, so every queued generation gets the correct regional prompt.

* (fix) Update invokeai/frontend/web/src/features/controlLayers/konva/CanvasStateApiModule.ts

Fixes race condition identified during copilot review.

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update invokeai/frontend/web/src/features/controlLayers/konva/CanvasStateApiModule.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-29 02:01:21 +00:00
Lincoln Stein
f1624a6215 Merge branch 'main' into copilot/add-unload-model-option 2025-12-28 20:38:42 -05:00
Alexander Eichhorn
b7e28e4fa6 fix(ui): make Z-Image model selects mutually exclusive (#8717)
* fix(ui): make Z-Image model selects mutually exclusive

VAE and Qwen3 Encoder selects are disabled when Qwen3 Source is selected,
and vice versa. This prevents invalid model combinations.

* feat(ui): auto-select Z-Image component models on model change

When switching to a Z-Image model, automatically set valid defaults
if no configuration exists:
- Prefers Qwen3 Source (Diffusers model) if available
- Falls back to Qwen3 Encoder + FLUX VAE combination

This ensures the generate button is enabled immediately after selecting
a Z-Image model, without requiring manual configuration.

* fix(ui): save and restore Qwen3 Source model in metadata

Qwen3 Source (Diffusers Z-Image) model was not being saved to image
metadata or restored during Remix. This adds:
- Saving qwen3_source to metadata in buildZImageGraph
- ZImageQwen3SourceModel metadata handler for parsing and recall
- i18n translation for qwen3Source
2025-12-28 20:25:35 -05:00
Alexander Eichhorn
d7d051200f fix(z_image): use unrestricted image self-attention for regional prompting (#8718)
Changes image self-attention from restricted (region-isolated) to unrestricted
(all image tokens can attend to each other), similar to the FLUX approach.

This fixes the issue where ZImage-Turbo with multiple regional guidance layers
would generate two separate/disconnected images instead of compositing them
into a single unified image.

The regional text-image attention remains restricted so that each region still
responds to its corresponding prompt.

Fixes #8715
2025-12-28 11:32:50 -05:00
Alexander Eichhorn
0f830ddd00 Ruff format 2025-12-28 12:37:21 +01:00
Alexander Eichhorn
9617140b7f Merge branch 'feature/zimage-scheduler-support' of https://github.com/Pfannkuchensack/InvokeAI into feature/zimage-scheduler-support 2025-12-28 12:29:19 +01:00
Alexander Eichhorn
bc4783028f Merge branch 'main' into feature/zimage-scheduler-support 2025-12-28 12:29:14 +01:00
Alexander Eichhorn
16fedfb538 fix(schedulers): prevent progress percentage overflow with LCM scheduler
LCM scheduler may have more internal timesteps than user-facing steps,
causing user_step to exceed total_steps. This resulted in progress
percentage > 1.0, which caused a pydantic validation error.

Fix: Only call step_callback when user_step <= total_steps.
2025-12-28 12:22:28 +01:00
Lincoln Stein
d781a3b8a2 Merge branch 'main' into copilot/add-unload-model-option 2025-12-27 23:27:19 -05:00
blessedcoolant
7182ff26dc fix(ui): misaligned Color Compensation Option (#8714) 2025-12-27 23:11:48 -05:00
Lincoln Stein
95ee27d5c0 Merge branch 'main' into copilot/add-unload-model-option 2025-12-27 21:56:55 -05:00
Lincoln Stein
b4f05d3fe7 Merge branch 'main' into feature/zimage-scheduler-support 2025-12-27 21:50:05 -05:00
Josh Corbett
8deafabe6b feat(prompts): 💄 increase prompt font size (#8712)
* feat(prompts): 💄 increase prompt font size

* style(prompts): 🚨 satisfy linter
2025-12-27 21:18:23 -05:00
copilot-swe-agent[bot]
1bd1c76a2c Change default model_cache_keep_alive to 5 minutes
Changed the default value of model_cache_keep_alive from 0 (indefinite)
to 5 minutes as requested. This means models will now be automatically
cleared from cache after 5 minutes of inactivity by default, unless
users explicitly configure a different value.

Users can still set it to 0 in their config to get the old behavior
of keeping models indefinitely.

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2025-12-28 02:11:20 +00:00
Lincoln Stein
56fd1da888 Merge branch 'main' into copilot/add-unload-model-option 2025-12-27 21:08:17 -05:00
Alexander Eichhorn
0956ce0cd3 Merge branch 'main' into feature/zimage-scheduler-support 2025-12-28 00:44:10 +01:00
blessedcoolant
d42bf9c941 fix(model-manager): add Z-Image LoRA/DoRA detection support (#8709)
## Summary

Fix Z-Image LoRA/DoRA model detection failing during installation.

Z-Image LoRAs use different key patterns than SD/SDXL LoRAs. The base
`LoRA_LyCORIS_Config_Base` class only checked for key suffixes like
`lora_A.weight` and `lora_B.weight`, but Z-Image LoRAs (especially those
in DoRA format) use:
- `lora_down.weight` / `lora_up.weight` (standard LoRA format)
- `dora_scale` (DoRA weight decomposition)

This PR overrides `_validate_looks_like_lora` in
`LoRA_LyCORIS_ZImage_Config` to recognize Z-Image specific patterns:
- Keys starting with `diffusion_model.layers.` (Z-Image S3-DiT
architecture)
- Keys ending with `lora_down.weight`, `lora_up.weight`,
`lora_A.weight`, `lora_B.weight`, or `dora_scale`

## Related Issues / Discussions

Fixes installation of Z-Image LoRAs trained with DoRA (Weight-Decomposed
Low-Rank Adaptation).

## QA Instructions

1. Download a Z-Image LoRA in DoRA format (e.g., from CivitAI with keys
like `diffusion_model.layers.X.attention.to_k.lora_down.weight`)
2. Try to install the LoRA via Model Manager
3. Verify the model is recognized as a Z-Image LoRA and installs
successfully
4. Verify the LoRA can be applied when generating with Z-Image

## Merge Plan

Standard merge, no special considerations.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-12-27 23:10:06 +05:30
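A minimal sketch of the key-pattern check described above: a simplified stand-in for `_validate_looks_like_lora`, operating on a plain list of state-dict keys:

```python
ZIMAGE_LORA_SUFFIXES = (
    "lora_down.weight",
    "lora_up.weight",
    "lora_A.weight",
    "lora_B.weight",
    "dora_scale",
)


def looks_like_z_image_lora(state_dict_keys: list[str]) -> bool:
    # Z-Image LoRAs/DoRAs target the S3-DiT layers and use down/up or
    # dora_scale suffixes instead of the lora_A/lora_B keys of SD/SDXL LoRAs.
    return any(
        key.startswith("diffusion_model.layers.") and key.endswith(ZIMAGE_LORA_SUFFIXES)
        for key in state_dict_keys
    )
```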
Alexander Eichhorn
d403587c7f Merge branch 'fix/z-image-lora-dora-detection' of https://github.com/Pfannkuchensack/InvokeAI into fix/z-image-lora-dora-detection 2025-12-27 09:17:33 +01:00
Alexander Eichhorn
355c985cc3 fix(model-manager): add Z-Image LoRA/DoRA detection and loading support
Two fixes for Z-Image LoRA support:

1. Override _validate_looks_like_lora in LoRA_LyCORIS_ZImage_Config to
   recognize Z-Image specific LoRA formats that use different key patterns
   than SD/SDXL LoRAs. Z-Image LoRAs use lora_down.weight/lora_up.weight
   and dora_scale suffixes instead of lora_A.weight/lora_B.weight.

2. Fix _group_by_layer in z_image_lora_conversion_utils.py to correctly
   group LoRA keys by layer name. The previous logic used rsplit with
   maxsplit=2 which incorrectly grouped keys like:
   - "to_k.alpha" -> layer "diffusion_model.layers.17.attention"
   - "lora_down.weight" -> layer "diffusion_model.layers.17.attention.to_k"

   Now uses suffix matching to ensure all keys for a layer are grouped
   together (alpha, dora_scale, lora_down.weight, lora_up.weight).
2025-12-27 09:17:29 +01:00
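A sketch of the suffix-based grouping fix described in the commit above (simplified; the real helper lives in `z_image_lora_conversion_utils.py`):

```python
LORA_KEY_SUFFIXES = ("alpha", "dora_scale", "lora_down.weight", "lora_up.weight")


def group_by_layer(state_dict: dict) -> dict[str, dict]:
    # Strip a known suffix to recover the layer name, so that e.g. "to_k.alpha"
    # and "to_k.lora_down.weight" both land under "...attention.to_k".
    grouped: dict[str, dict] = {}
    for key, value in state_dict.items():
        for suffix in LORA_KEY_SUFFIXES:
            if key.endswith("." + suffix):
                layer_name = key[: -(len(suffix) + 1)]
                grouped.setdefault(layer_name, {})[suffix] = value
                break
    return grouped
```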
Alexander Eichhorn
41742146e2 fix(model-manager): add Z-Image LoRA/DoRA detection support
Override _validate_looks_like_lora in LoRA_LyCORIS_ZImage_Config to
recognize Z-Image specific LoRA formats that use different key patterns
than SD/SDXL LoRAs.

Z-Image LoRAs (including DoRA format) use keys like:
- diffusion_model.layers.X.attention.to_k.lora_down.weight
- diffusion_model.layers.X.attention.to_k.dora_scale

The base LyCORIS config only checked for lora_A.weight/lora_B.weight
suffixes, missing the lora_down.weight/lora_up.weight and dora_scale
patterns used by Z-Image LoRAs.
2025-12-27 07:06:12 +01:00
Jonathan
eb516e1998 Merge branch 'main' into feature/zimage-scheduler-support 2025-12-26 22:06:49 -05:00
Lincoln Stein
0b1befa9ab (chore) Prep for v6.10.0rc2 (#8701) 2025-12-26 18:26:04 -05:00
Alexander Eichhorn
bd678b1c95 fix ruff check 2025-12-26 21:22:46 +01:00
Alexander Eichhorn
56bef0b089 feat(z-image): add scheduler selection for Z-Image models
Add support for alternative diffusers Flow Matching schedulers for Z-Image:
- Euler (default) - 1st order, optimized for Z-Image-Turbo (8 steps)
- Heun (2nd order) - Better quality, 2x slower
- LCM - Optimized for few-step generation

Backend:
- Extend schedulers.py with Z-Image scheduler types and mapping
- Add scheduler InputField to z_image_denoise invocation (v1.3.0)
- Refactor denoising loop to support diffusers schedulers

Frontend:
- Add zImageScheduler to Redux state in paramsSlice
- Create ParamZImageScheduler component for Linear UI
- Add scheduler to buildZImageGraph for generation
2025-12-26 21:15:26 +01:00
Alexander Eichhorn
99fc1243cb feat(flux): add scheduler selection for Flux models
Add support for alternative diffusers Flow Matching schedulers:
- Euler (default, 1st order)
- Heun (2nd order, better quality, 2x slower)
- LCM (optimized for few steps)

Backend:
- Add schedulers.py with scheduler type definitions and class mapping
- Modify denoise.py to accept optional scheduler parameter
- Add scheduler InputField to flux_denoise invocation (v4.2.0)

Frontend:
- Add fluxScheduler to Redux state and paramsSlice
- Create ParamFluxScheduler component for Linear UI
- Add scheduler to buildFLUXGraph for generation
2025-12-26 20:53:59 +01:00
Lincoln Stein
a7205e4e36 Merge branch 'main' into copilot/add-unload-model-option 2025-12-25 21:33:59 -05:00
Alexander Eichhorn
65efc3db7d Feature: Add Z-Image-Turbo regional guidance (#8672)
* feat: Add Regional Guidance support for Z-Image model

Implements regional prompting for Z-Image (S3-DiT Transformer) allowing
different prompts to affect different image regions using attention masks.

Backend changes:
- Add ZImageRegionalPromptingExtension for mask preparation
- Add ZImageTextConditioning and ZImageRegionalTextConditioning data classes
- Patch transformer forward to inject 4D regional attention masks
- Use additive float mask (0.0 attend, -inf block) in bfloat16 for compatibility
- Alternate regional/full attention layers for global coherence

Frontend changes:
- Update buildZImageGraph to support regional conditioning collectors
- Update addRegions to create z_image_text_encoder nodes for regions
- Update addZImageLoRAs to handle optional negCond when guidance_scale=0
- Add Z-Image validation (no IP adapters, no autoNegative)

* @Pfannkuchensack
Fix windows path again

* ruff check fix

* ruff formating

* fix(ui): Z-Image CFG guidance_scale check uses > 1 instead of > 0

Changed the guidance_scale check from > 0 to > 1 for Z-Image models.
Since Z-Image uses guidance_scale=1.0 as "no CFG" (matching FLUX convention),
negative conditioning should only be created when guidance_scale > 1.

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-26 02:25:38 +00:00
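A minimal sketch of the additive attention mask described above (0.0 where attention is allowed, -inf where it is blocked). The boolean "allowed" matrix and its shape are simplified assumptions about how region masks would be combined:

```python
import torch


def build_additive_attn_mask(allowed: torch.Tensor, dtype=torch.bfloat16) -> torch.Tensor:
    """allowed: [num_queries, num_keys] bool tensor, True where attention is permitted."""
    mask = torch.zeros(allowed.shape, dtype=dtype)
    # Added to the attention logits before softmax: 0.0 leaves a pair
    # unchanged, -inf removes it from the softmax entirely.
    return mask.masked_fill(~allowed, float("-inf"))
```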
Lincoln Stein
de1aa557b8 chore: bump version to v6.10.0rc1 (#8695)
* chore: bump version to v6.10.0rc1

* docs: fix names of code owners in release doc
2025-12-26 02:08:14 +00:00
Lincoln Stein
b9493ddce7 Workaround for Windows being unable to remove tmp directories when installing GGUF files (#8699)
* (bugfix)(mm) work around Windows being unable to rmtree tmp directories after GGUF install

* (style) fix ruff error

* (fix) add workaround for Windows Permission Denied on GGUF file move() call

* (fix) perform torch copy() in GGUF reader to avoid deletion failures on Windows

* (style) fix ruff formatting issues
2025-12-26 02:02:39 +00:00
Lincoln Stein
ca14c5c9e1 Merge branch 'main' into copilot/add-unload-model-option 2025-12-25 00:08:28 -05:00
Josh Corbett
ddb85ca669 fix(prompts): 🐛 prompt attention behaviors, add tests (#8683)
* fix(prompts): 🐛 prompt attention adjust elevation edge cases, added tests

* refactor(prompts): ♻️ create attention edit helper for prompt boxes

* feat(prompts):  apply attention keybinds to negative prompt

* feat(prompts): 🚀 reconsider behaviors, simplify code

* fix(prompts): 🐛 keybind attention update not tracked by undo/redo

* feat(prompts):  overhaul prompt attention behavior

* fix(prompts): 🩹 remove unused type

* fix(prompts): 🩹 remove unused `Token` type

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-24 17:38:24 -05:00
Lincoln Stein
5b69403ba8 Merge branch 'main' into copilot/add-unload-model-option 2025-12-24 15:39:46 -05:00
Alexander Eichhorn
ac245cbf6c feat(backend): add support for xlabs Flux LoRA format (#8686)
Add support for loading Flux LoRA models in the xlabs format, which uses
keys like `double_blocks.X.processor.{qkv|proj}_lora{1|2}.{down|up}.weight`.

The xlabs format maps:
- lora1 -> img_attn (image attention stream)
- lora2 -> txt_attn (text attention stream)
- qkv -> query/key/value projection
- proj -> output projection

Changes:
- Add FluxLoRAFormat.XLabs enum value
- Add flux_xlabs_lora_conversion_utils.py with detection and conversion
- Update formats.py to detect xlabs format
- Update lora.py loader to handle xlabs format
- Update model probe to accept recognized Flux LoRA formats
- Add unit tests for xlabs format detection and conversion

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-24 20:18:11 +00:00
Alexander Eichhorn
5be1e03d73 Feature/user workflow tags (#8698)
* Feature: Add Tag System for user made Workflows

* feat(ui): display tags on workflow library tiles

Show workflow tags at the bottom of each tile in the workflow browser,
making it easier to identify workflow categories at a glance.

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-24 14:54:22 -05:00
Josh Corbett
87314142b5 feat(hotkeys modal): loading state + performance improvements (#8694)
* feat(hotkeys modal):  loading state + performance improvements

* feat(hotkeys modal): add tooltip to edit button and adjust layout spacing

* style(hotkeys modal): 🚨 satisfy the linter

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-24 14:39:14 -05:00
Alexander Eichhorn
4cb9b8d97d Feature: add prompt template node (#8680)
* feat(nodes): add Prompt Template node

Add a new node that applies Style Preset templates to prompts in workflows.
The node takes a style preset ID and positive/negative prompts as inputs,
then replaces {prompt} placeholders in the template with the provided prompts.

This makes Style Preset templates accessible in Workflow mode, enabling
users to apply consistent styling across their workflow-based generations.

* feat(nodes): add StylePresetField for database-driven preset selection

Adds a new StylePresetField type that enables dropdown selection of
style presets from the database in the workflow editor.

Changes:
- Add StylePresetField to backend (fields.py)
- Update Prompt Template node to use StylePresetField instead of string ID
- Add frontend field type definitions (zod schemas, type guards)
- Create StylePresetFieldInputComponent with Combobox
- Register field in InputFieldRenderer and nodesSlice
- Add translations for preset selection

* fix schema.ts on windows.

* chore(api): regenerate schema.ts after merge

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-24 14:33:16 -05:00
Lincoln Stein
83deb0233e Merge remote-tracking branch 'refs/remotes/origin/copilot/add-unload-model-option' into copilot/add-unload-model-option 2025-12-24 00:44:32 -05:00
Lincoln Stein
8ebb6dd3d9 (chore) regenerate typescript schema 2025-12-24 00:43:06 -05:00
copilot-swe-agent[bot]
b7afd9b5b3 Fix test failures caused by MagicMock TypeError
Configure mock logger to return a valid log level for getEffectiveLevel()
to prevent TypeError when comparing with logging.DEBUG constant.

The issue was that ModelCache._log_cache_state() checks
self._logger.getEffectiveLevel() > logging.DEBUG, and when the logger
is a MagicMock without configuration, getEffectiveLevel() returns another
MagicMock, causing a TypeError when compared with an int.

Fixes all 4 test failures in test_model_cache_timeout.py

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2025-12-24 05:42:45 +00:00
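A minimal sketch of the mock configuration described above, using standard `unittest.mock`; the surrounding test is omitted:

```python
import logging
from unittest.mock import MagicMock

mock_logger = MagicMock()
# Return a real int so comparisons such as getEffectiveLevel() > logging.DEBUG
# in ModelCache._log_cache_state() don't raise TypeError against a MagicMock.
mock_logger.getEffectiveLevel.return_value = logging.INFO

assert mock_logger.getEffectiveLevel() > logging.DEBUG
```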
copilot-swe-agent[bot]
4987b4da1c Fix timeout message appearing during active generation
Only log "Clearing model cache" message when there are actually unlocked
models to clear. This prevents the misleading message from appearing during
active generation when all models are locked.

Changes:
- Check for unlocked models before logging clear message
- Add count of unlocked models in log message
- Add debug log when all models are locked
- Improves user experience by avoiding confusing messages

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2025-12-24 05:31:11 +00:00
Lincoln Stein
a21b7792d8 (chore) regenerate config docstrings 2025-12-24 00:29:48 -05:00
Lincoln Stein
8819cc30be (chore) regenerate schema.ts 2025-12-24 00:28:55 -05:00
Lincoln Stein
9d1de81fe2 (style) correct ruff formatting error 2025-12-24 00:19:25 -05:00
Lincoln Stein
1e15b8c106 Merge branch 'main' into copilot/add-unload-model-option 2025-12-24 00:14:45 -05:00
Alexander Eichhorn
21138e5d52 fix support multi-subfolder downloads for Z-Image Qwen3 encoder (#8692)
* fix(model-install): support multi-subfolder downloads for Z-Image Qwen3 encoder

The Z-Image Qwen3 text encoder requires both text_encoder and tokenizer
subfolders from the HuggingFace repo, but the previous implementation
only downloaded the text_encoder subfolder, causing model identification
to fail.

Changes:
- Add subfolders property to HFModelSource supporting '+' separated paths
- Extend filter_files() and download_urls() to handle multiple subfolders
- Update _multifile_download() to preserve subfolder structure
- Make Qwen3Encoder probe check both nested and direct config.json paths
- Update Qwen3EncoderLoader to handle both directory structures
- Change starter model source to text_encoder+tokenizer

* ruff format

* fix schema description

* fix schema description

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-23 23:39:43 -05:00
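A minimal sketch of the '+'-separated subfolder handling described above. The "text_encoder+tokenizer" source string comes from the PR; the filtering helper is an illustrative stand-in for `filter_files()`, not the actual implementation:

```python
def parse_subfolders(subfolder_spec: str | None) -> list[str]:
    # "text_encoder+tokenizer" -> ["text_encoder", "tokenizer"]
    if not subfolder_spec:
        return []
    return [part.strip("/") for part in subfolder_spec.split("+") if part]


def select_repo_files(all_files: list[str], subfolder_spec: str) -> list[str]:
    subfolders = parse_subfolders(subfolder_spec)
    return [f for f in all_files if any(f.startswith(sf + "/") for sf in subfolders)]


files = ["text_encoder/config.json", "tokenizer/tokenizer.json", "vae/config.json"]
print(select_repo_files(files, "text_encoder+tokenizer"))
# ['text_encoder/config.json', 'tokenizer/tokenizer.json']
```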
copilot-swe-agent[bot]
8d76b4e4d4 Fix ruff whitespace errors and improve timeout logging
- Remove all trailing whitespace (W293 errors)
- Add debug logging when timeout fires but activity detected
- Add debug logging when timeout fires but cache is empty
- Only log "Clearing model cache" message when actually clearing
- Prevents misleading timeout messages during active generation

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2025-12-24 04:05:57 +00:00
Lincoln Stein
9662d1fdb6 Merge branch 'main' into copilot/add-unload-model-option 2025-12-23 22:48:11 -05:00
Alexander Eichhorn
39114b0ad0 Feature (UI): add model path update for external models (#8675)
* feat(ui): add model path update for external models

Add ability to update file paths for externally managed models (models with
absolute paths). Invoke-controlled models (with relative paths in the models
directory) are excluded from this feature to prevent breaking internal
model management.

- Add ModelUpdatePathButton component with modal dialog
- Only show button for external models (absolute path check)
- Add translations for path update UI elements

* Added support for Windows UNC paths in ModelView.tsx:38-41. The isExternalModel function now detects:
  - Unix absolute paths: /home/user/models/...
  - Windows drive paths: C:\Models\... or D:/Models/...
  - Windows UNC paths: \\ServerName\ShareName\... or //ServerName/ShareName/...

* fix(ui): validate path format in Update Path modal to prevent invalid paths

When updating an external model's path, the new path is now validated to ensure
it follows an absolute path format (Unix, Windows drive, or UNC). This prevents
users from accidentally entering invalid paths that would cause the Update Path
button to disappear, leaving them unable to correct the mistake.

* fix(ui): extract isExternalModel to separate file to fix circular dependency

Moves the isExternalModel utility function to its own file to break the
circular dependency between ModelView.tsx and ModelUpdatePathButton.tsx.

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-23 22:46:50 -05:00
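A minimal sketch of the absolute-path detection described above, written in Python for illustration only (the real check is the TypeScript `isExternalModel` helper in the frontend):

```python
import re


def is_external_model_path(path: str) -> bool:
    # Unix absolute:      /home/user/models/...
    # Windows drive:      C:\Models\...  or  D:/Models/...
    # Windows UNC share:  \\ServerName\ShareName\...  or  //ServerName/ShareName/...
    return bool(
        re.match(r"^/(?!/)", path)                     # Unix absolute path
        or re.match(r"^[A-Za-z]:[\\/]", path)          # Windows drive path
        or re.match(r"^(\\\\|//)[^\\/]+[\\/]", path)   # Windows UNC path
    )


print(is_external_model_path("C:/Models/foo.safetensors"))  # True
print(is_external_model_path("sdxl/main/foo"))              # False
```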
Josh Corbett
3fe5f62c48 feat(hotkeys): Overhaul hotkeys modal UI (#8682)
* feat(hotkeys):  overhaul hotkeys modal UI

* fix(model manager): 🩹 improved check for hotkey search clear button

* fix(model manager): 🩹 remove unused exports

* feat(starter-models): add Z-Image Turbo starter models

Add Z-Image Turbo and related models to the starter models list:
- Z-Image Turbo (full precision, ~13GB)
- Z-Image Turbo quantized (GGUF Q4_K, ~4GB)
- Z-Image Qwen3 Text Encoder (full precision, ~8GB)
- Z-Image Qwen3 Text Encoder quantized (GGUF Q6_K, ~3.3GB)
- Z-Image ControlNet Union (Canny, HED, Depth, Pose, MLSD, Inpainting)

The quantized Turbo model includes the quantized Qwen3 encoder as a
dependency for automatic installation.

* feat(starter-models): add Z-Image Q8 quant and ControlNet Tile

Add higher quality Q8_0 quantization option for Z-Image Turbo (~6.6GB)
to complement existing Q4_K variant, providing better quality for users
with more VRAM.

Add dedicated Z-Image ControlNet Tile model (~6.7GB) for upscaling and
detail enhancement workflows.

* feat(hotkeys):  overhaul hotkeys modal UI

* feat(hotkeys modal): 💄 shrink add hotkey button

* fix(hotkeys): normalization and detection issues

* style: 🚨 satisfy the linter

* fix(hotkeys modal): 🩹 remove unused exports

---------

Co-authored-by: Alexander Eichhorn <alex@eichhorn.dev>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-23 22:24:00 -05:00
Josh Corbett
73c6b31011 feat(model manager): 💄 refactor model manager bulk actions UI (#8684)
* feat(model manager): 💄 refactor model manager bulk actions UI

* feat(model manager): 💄 tweak model list item ui for checkbox selects

* style(model manager): 🚨 satisfy the linter

* feat(model manager): 💄 tweak search and actions dropdown placement

* refactor(model manager): 🔥 remove unused `ModelListHeader` component

* fix(model manager): 🐛 list items overlapping sticky headers

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-23 22:17:07 -05:00
copilot-swe-agent[bot]
b16717bbf8 Explicitly pass all ModelCache constructor parameters
- Add explicit storage_device parameter (cpu)
- Add explicit log_memory_usage parameter from config
- Improves code clarity and configuration transparency

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2025-12-24 00:30:51 +00:00
copilot-swe-agent[bot]
c3217d8a08 Address code review feedback
- Remove unused variable in test
- Add clarifying comment for daemon thread setting
- Add detailed comment explaining cache clearing with 1000 GB value
- Improve code documentation

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2025-12-24 00:27:39 +00:00
blessedcoolant
f82bcd40fc fix: CFG Scale min value reset to zero (#8691)
No longer needed because Z-Image works at 1.0
2025-12-23 19:27:30 -05:00
copilot-swe-agent[bot]
2500153ed8 Fix race condition in timeout mechanism
- Added clarifying comment that _record_activity is called with lock held
- Enhanced double-check in _on_timeout for thread safety
- Added lock protection to shutdown method
- Improved handling of edge cases where timer fires during activity

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2025-12-24 00:26:01 +00:00
copilot-swe-agent[bot]
75a14e2a4b Add unit tests for model cache timeout functionality
- Created test_model_cache_timeout.py with comprehensive tests
- Tests timeout clearing behavior
- Tests activity resetting timeout
- Tests no-timeout default behavior
- Tests shutdown canceling timers

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2025-12-24 00:24:31 +00:00
copilot-swe-agent[bot]
9bbd2b3f11 Add model_cache_keep_alive config option and timeout mechanism
- Added model_cache_keep_alive config field (minutes, default 0 = infinite)
- Implemented timeout tracking in ModelCache class
- Added _record_activity() to track model usage
- Added _on_timeout() to auto-clear cache when timeout expires
- Added shutdown() method to clean up timers
- Integrated timeout with get(), lock(), unlock(), and put() operations
- Updated ModelManagerService to pass keep_alive parameter
- Added cleanup in stop() method

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2025-12-24 00:22:59 +00:00
copilot-swe-agent[bot]
c26445253c Initial plan 2025-12-24 00:16:17 +00:00
blessedcoolant
5a0b227256 feat(starter-models): add Z-Image Turbo starter models (#8689)
## Summary

Add Z-Image Turbo and related models to the starter models list for easy
installation via the Model Manager:

- **Z-Image Turbo** - Full precision Diffusers format (~13GB)
- **Z-Image Turbo (quantized)** - GGUF Q4_K format (~4GB)
- **Z-Image Qwen3 Text Encoder** - Full precision (~8GB)
- **Z-Image Qwen3 Text Encoder (quantized)** - GGUF Q6_K format (~3.3GB)
- **Z-Image ControlNet Union** - Unified ControlNet supporting Canny,
HED, Depth, Pose, MLSD, and Inpainting modes

The quantized Turbo model includes the quantized Qwen3 encoder as a
dependency for automatic installation.

## Related Issues / Discussions

Builds on the Z-Image Turbo support added in main.

## QA Instructions

1. Open Model Manager → Starter Models
2. Search for "Z-Image"
3. Verify all 5 models appear with correct descriptions
4. Install the quantized version and confirm the Qwen3 encoder
dependency is also installed

## Merge Plan

Standard merge, no special considerations.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-12-23 08:31:34 +05:30
blessedcoolant
1b5d91d1cf Merge branch 'main' into feat/z-image-starter-models 2025-12-23 08:27:25 +05:30
Alexander Eichhorn
a748519e92 feat(starter-models): add Z-Image Q8 quant and ControlNet Tile
Add higher quality Q8_0 quantization option for Z-Image Turbo (~6.6GB)
to complement existing Q4_K variant, providing better quality for users
with more VRAM.

Add dedicated Z-Image ControlNet Tile model (~6.7GB) for upscaling and
detail enhancement workflows.
2025-12-23 03:27:09 +01:00
blessedcoolant
90e34002f0 fix(z-image): Fix padding token shape mismatch for GGUF models (#8690)
## Summary

Fix shape mismatch when loading GGUF-quantized Z-Image transformer
models.

GGUF Z-Image models store `x_pad_token` and `cap_pad_token` with shape
`[3840]`, but diffusers `ZImageTransformer2DModel` expects `[1, 3840]`
(with batch dimension). This caused a `RuntimeError` on Linux systems
when loading models like `z_image_turbo-Q4_K.gguf`.

The fix:
- Dequantizes GGMLTensors first (since they don't support `unsqueeze`)
- Reshapes the tensors to add the missing batch dimension

## Related Issues / Discussions

Reported by Linux user using:
-
https://huggingface.co/leejet/Z-Image-Turbo-GGUF/resolve/main/z_image_turbo-Q4_K.gguf
-
https://huggingface.co/worstplayer/Z-Image_Qwen_3_4b_text_encoder_GGUF/resolve/main/Qwen_3_4b-Q6_K.gguf

## QA Instructions

1. Install a GGUF-quantized Z-Image model (e.g.,
`z_image_turbo-Q4_K.gguf`)
2. Install a Qwen3 GGUF encoder
3. Run a Z-Image generation
4. Verify no `RuntimeError: size mismatch for x_pad_token` error occurs

## Merge Plan

None, straightforward fix.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-12-23 06:04:40 +05:30
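A minimal sketch of the shape fix described above. The dequantization helper name is hypothetical; the essential point is dequantizing before `unsqueeze` and adding the missing batch dimension:

```python
def fix_pad_token_shape(tensor):
    # GGML-quantized tensors don't support unsqueeze, so dequantize first.
    if hasattr(tensor, "get_dequantized_tensor"):  # hypothetical GGMLTensor API
        tensor = tensor.get_dequantized_tensor()
    # GGUF checkpoints store x_pad_token / cap_pad_token as [dim], but the
    # diffusers ZImageTransformer2DModel expects [1, dim].
    if tensor.ndim == 1:
        tensor = tensor.unsqueeze(0)
    return tensor
```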
blessedcoolant
7068cf956a Merge branch 'main' into pr/8690 2025-12-23 05:59:49 +05:30
blessedcoolant
aa764f8bf4 Feature: z-image Turbo Control Net (#8679)
## Summary

Add support for Z-Image ControlNet V2.0 alongside the existing V1
support.

**Key changes:**
- Auto-detect `control_in_dim` from adapter weights (16 for V1, 33 for
V2.0)
- Auto-detect `n_refiner_layers` from state dict
- Add zero-padding for V2.0's additional control channels (diffusers
approach)
- Use `accelerate.init_empty_weights()` for more efficient model
creation
- Add `ControlNet_Checkpoint_ZImage_Config` to frontend schema

## Related Issues / Discussions

Part of Z-Image feature implementation.

## QA Instructions

1. Load a Z-Image ControlNet V1 model (control_in_dim=16) and verify it
works
2. Load a Z-Image ControlNet V2.0 model (control_in_dim=33) and verify
it works
3. Test with different control types: Canny, Depth, Pose
4. Recommended `control_context_scale`: 0.65-0.80

## Merge Plan

Can be merged after review. No special considerations needed.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-12-23 05:58:58 +05:30
Alexander Eichhorn
73be5e5d35 Merge branch 'main' into feature/z-image-control 2025-12-22 22:56:30 +01:00
DustyShoe
259304bac5 Feature(UI): add extract masked area from raster layers (#8667)
* chore: localize extraction errors

* chore: rename extract masked area menu item

* chore: rename inpaint mask extract component

* fix: use mask bounds for extraction region

* Prettier format applied to InpaintMaskMenuItemsExtractMaskedArea.tsx

* Fix base64 image import bug in extracted area in InpaintMaskMenuItemsExtractMaskedArea.tsx and removed unused locales entries in en.json

* Fix formatting issue in InpaintMaskMenuItemsExtractMaskedArea.tsx

* Minor comment fix

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-22 15:57:27 -05:00
Alexander Eichhorn
2be701cfe3 Feature: Add Tag System for user made Workflows (#8673)
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-22 15:41:48 -05:00
blessedcoolant
874b547598 chore: format code for ruff checks 2025-12-23 01:04:22 +05:30
blessedcoolant
7b9ce35806 Merge branch 'main' into pr/8679 2025-12-23 01:03:43 +05:30
Alexander Eichhorn
84f3e44a5d Merge branch 'main' into feat/z-image-starter-models 2025-12-22 20:16:05 +01:00
Alexander Eichhorn
5264b7511c Merge branch 'main' into fix/z-image-gguf-padding-token-shape 2025-12-22 20:15:18 +01:00
Alexander Eichhorn
f8b1f42f6d fix(z-image): Fix padding token shape mismatch for GGUF models
GGUF Z-Image models store x_pad_token and cap_pad_token with shape [dim],
but diffusers ZImageTransformer2DModel expects [1, dim]. This caused a
RuntimeError when loading GGUF-quantized Z-Image models.

The fix dequantizes GGMLTensors first (since they don't support unsqueeze),
then reshapes to add the batch dimension.
2025-12-22 18:31:57 +01:00
Josh Corbett
e1acb636d8 fix(ui): 🐛 HotkeysModal and SettingsModal initial focus (#8687)
* fix(ui): 🐛 `HotkeysModal` and `SettingsModal` initial focus

Instead of using the `initialFocusRef` prop, the `Modal` component was focusing the last available Button. This is a workaround that uses `tabIndex` instead, which seems to work.

Closes #8685

* style: 🚨 satisfy linter

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-22 11:20:44 -05:00
Alexander Eichhorn
b08accd4be feat(starter-models): add Z-Image Turbo starter models
Add Z-Image Turbo and related models to the starter models list:
- Z-Image Turbo (full precision, ~13GB)
- Z-Image Turbo quantized (GGUF Q4_K, ~4GB)
- Z-Image Qwen3 Text Encoder (full precision, ~8GB)
- Z-Image Qwen3 Text Encoder quantized (GGUF Q6_K, ~3.3GB)
- Z-Image ControlNet Union (Canny, HED, Depth, Pose, MLSD, Inpainting)

The quantized Turbo model includes the quantized Qwen3 encoder as a
dependency for automatic installation.
2025-12-22 15:04:27 +01:00
Alexander Eichhorn
3668d5b83b feat(z-image): add Extension-based Z-Image ControlNet support
Implement Z-Image ControlNet as an Extension pattern (similar to FLUX ControlNet)
instead of merging control weights into the base transformer. This provides:
- Lower memory usage (no weight duplication)
- Flexibility to enable/disable control per step
- Cleaner architecture with separate control adapter

Key implementation details:
- ZImageControlNetExtension: computes control hints per denoising step
- z_image_forward_with_control: custom forward pass with hint injection
- patchify_control_context: utility for control image patchification
- ZImageControlAdapter: standalone adapter with control_layers and noise_refiner

Architecture matches original VideoX-Fun implementation:
- Hints computed ONCE using INITIAL unified state (before main layers)
- Hints injected at every other main transformer layer (15 control blocks)
- Control signal added after each designated layer's forward pass

V2.0 ControlNet support (control_in_dim=33):
- Channels 0-15: control image latents
- Channels 16-31: reference image (zeros for pure control)
- Channel 32: inpaint mask (1.0 = don't inpaint, use control signal)
2025-12-21 22:30:28 +01:00
Alexander Eichhorn
1c13ca8159 style: apply ruff formatting 2025-12-21 18:52:12 +01:00
Alexander Eichhorn
3ed0e55d9d fix: resolve linting errors in Z-Image ControlNet support
- Add missing ControlNet_Checkpoint_ZImage_Config import
- Remove unused imports (Any, Dict, ADALN_EMBED_DIM, is_torch_version)
- Add strict=True to zip() calls
- Replace mutable list defaults with immutable tuples
- Replace dict() calls with literal syntax
- Sort imports in z_image_denoise.py
2025-12-21 18:50:43 +01:00
Alexander Eichhorn
8db8aa8594 Add Z-Image ControlNet V2.0 support
VRAM usage is high.

- Auto-detect control_in_dim from adapter weights (16 for V1, 33 for V2.0)
- Auto-detect n_refiner_layers from state dict
- Add zero-padding for V2.0's additional channels
- Use accelerate.init_empty_weights() for efficient model creation
- Add ControlNet_Checkpoint_ZImage_Config to frontend schema
2025-12-21 18:43:02 +01:00
Alexander Eichhorn
456d578f20 WIP not working.
feat: Add Z-Image ControlNet support with spatial conditioning

Add comprehensive ControlNet support for Z-Image models including:

Backend:
- New ControlNet_Checkpoint_ZImage_Config for Z-Image control adapter models
- Z-Image control key detection (_has_z_image_control_keys) to identify control layers
- ZImageControlAdapter loader for standalone control models
- ZImageControlTransformer2DModel combining base transformer with control layers
- Memory-efficient model loading by building combined state dict
2025-12-21 18:43:02 +01:00
blessedcoolant
ab6b6721dc Feature: Add Z-Image-Turbo model support (#8671)
Add comprehensive support for Z-Image-Turbo (S3-DiT) models including:

Backend:
- New BaseModelType.ZImage in taxonomy
- Z-Image model config classes (ZImageTransformerConfig,
Qwen3TextEncoderConfig)
- Model loader for Z-Image transformer and Qwen3 text encoder
- Z-Image conditioning data structures
- Step callback support for Z-Image with FLUX latent RGB factors

Invocations:
- z_image_model_loader: Load Z-Image transformer and Qwen3 encoder
- z_image_text_encoder: Encode prompts using Qwen3 with chat template
- z_image_denoise: Flow matching denoising with time-shifted sigmas
- z_image_image_to_latents: Encode images to 16-channel latents
- z_image_latents_to_image: Decode latents using FLUX VAE

Frontend:
- Z-Image graph builder for text-to-image generation
- Model picker and validation updates for z-image base type
- CFG scale now allows 0 (required for Z-Image-Turbo)
- Clip skip disabled for Z-Image (uses Qwen3, not CLIP)
- Optimal dimension settings for Z-Image (1024x1024)

Technical details:
- Uses Qwen3 text encoder (not CLIP/T5)
- 16 latent channels with FLUX-compatible VAE
- Flow matching scheduler with dynamic time shift
- 8 inference steps recommended for Turbo variant
- bfloat16 inference dtype

## Summary

<!--A description of the changes in this PR. Include the kind of change
(fix, feature, docs, etc), the "why" and the "how". Screenshots or
videos are useful for frontend changes.-->

## Related Issues / Discussions

<!--WHEN APPLICABLE: List any related issues or discussions on github or
discord. If this PR closes an issue, please use the "Closes #1234"
format, so that the issue will be automatically closed when the PR
merges.-->

## QA Instructions

- Install a Z-Image-Turbo model (e.g., from HuggingFace)
- Select the model in the Model Picker
- Generate a text-to-image with:
- CFG Scale: 0
- Steps: 8
- Resolution: 1024x1024
- Verify the generated image is coherent (not noise)

## Merge Plan

Standard merge, no special considerations needed.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-12-21 22:11:37 +05:30
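The PR above mentions flow matching denoising with time-shifted sigmas. As a hedged illustration only, a commonly used flow-matching time shift looks like the following; whether Z-Image uses exactly this form is an assumption:

```python
def shift_sigmas(sigmas: list[float], shift: float) -> list[float]:
    # Pushes more of the step budget toward the high-noise end of the schedule;
    # shift = 1.0 leaves the sigmas unchanged.
    return [shift * s / (1.0 + (shift - 1.0) * s) for s in sigmas]


print(shift_sigmas([1.0, 0.75, 0.5, 0.25, 0.0], shift=3.0))
# [1.0, 0.9, 0.75, 0.5, 0.0]
```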
blessedcoolant
93a587da90 Merge branch 'main' into feat/z-image-turbo-support 2025-12-21 21:58:22 +05:30
blessedcoolant
87bebf9c28 chore: upgrade diffusers to 0.36.0 to support z image 2025-12-21 21:54:47 +05:30
Alexander Eichhorn
f417c269d1 fix(vae): Fix dtype mismatch in FP32 VAE decode mode
The previous mixed-precision optimization for FP32 mode only converted
some VAE decoder layers (post_quant_conv, conv_in, mid_block) to the
latents dtype while leaving others (up_blocks, conv_norm_out) in float32.
This caused "expected scalar type Half but found Float" errors after
recent diffusers updates.

Simplify FP32 mode to consistently use float32 for both VAE and latents,
removing the incomplete mixed-precision logic. This trades some VRAM
usage for stability and correctness.

Also removes now-unused attention processor imports.
2025-12-16 15:58:48 +01:00
Alexander Eichhorn
4ce0ef5260 stupid windows file path again. 2025-12-16 10:31:52 +01:00
Alexander Eichhorn
39cdcdc9e8 fix(z-image): remove unused WithMetadata and WithBoard mixins from denoise node
The Z-Image denoise node outputs latents, not images, so these mixins
were unnecessary. Metadata and board handling is correctly done in the
L2I (latents-to-image) node. This aligns with how FLUX denoise works.
2025-12-16 09:41:26 +01:00
Josh Corbett
926923bb2b feat(prompts): hotkey controlled prompt weighting (#8647)
* feat(prompts): add abstract syntax tree (AST) builder for prompts

* fix(prompts): add escaped parens to AST

* test(prompts): add AST tests

* fix(prompts): appease the linter

* perf(prompts): break up tokenize function into subroutines

* feat(prompts): add hotkey controlled prompt attention adjust

* fix(hotkeys): 🩹 add translations for hotkey dialog

* fix: 🏷️ remove unused exports

* fix(keybinds): 🐛 use `arrowup`/`arrowdown` over `up`/`down`

* refactor(prompts): ♻️ use better language for attention direction

* style: 🚨 appease the linter

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-15 21:53:58 -05:00
blessedcoolant
8785d9a3a9 chore: fix ruff checks 2025-12-14 19:51:22 +05:30
Alexander Eichhorn
1e72feb744 Remove unneeded logging 2025-12-14 06:44:29 +01:00
Alexander Eichhorn
3ee24cbdde Remove the ParamScheduler for z-images
Fixed the DEFAULT_TOKENIZER_SOURCE to Qwen/Qwen3-4B
2025-12-13 04:23:34 +01:00
Alexander Eichhorn
f9605e18a0 z-image-turbo-fp8-e5m2 works; the z-image-turbo_fp8_scaled_e4m3fn_KJ doesn't. 2025-12-10 17:15:54 +01:00
Alexander Eichhorn
8551ff8569 fix typegen 2025-12-10 15:04:39 +01:00
Alexander Eichhorn
fb1a99b650 feat(cache): add partial loading support for Z-Image RMSNorm and LayerNorm
- Add CustomDiffusersRMSNorm for diffusers.models.normalization.RMSNorm
- Add CustomLayerNorm for torch.nn.LayerNorm
- Register both in AUTOCAST_MODULE_TYPE_MAPPING

Enables partial loading (enable_partial_loading: true) for Z-Image models
by wrapping their normalization layers with device autocast support
2025-12-10 03:45:42 +01:00
Alexander Eichhorn
3b5d9c26d3 feat(z-image): add Qwen3 GGUF text encoder support and default parameters
- Add Qwen3EncoderGGUFLoader for llama.cpp GGUF quantized text encoders
- Convert llama.cpp key format (blk.X., token_embd) to PyTorch format
- Handle tied embeddings (lm_head.weight ↔ embed_tokens.weight)
- Dequantize embed_tokens for embedding lookups (GGMLTensor limitation)
- Add QK normalization key mappings (q_norm, k_norm) for Qwen3
- Set Z-Image defaults: steps=9, cfg_scale=0.0, width/height=1024
- Allow cfg_scale >= 0 (was >= 1) for Z-Image Turbo compatibility
- Add GGUF format detection for Qwen3 model probing
2025-12-10 03:07:07 +01:00
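A hedged sketch of the llama.cpp-to-PyTorch key conversion and embedding tying described above. The rename table is illustrative and incomplete (per-layer attention/MLP sub-keys are omitted); the real mapping in the codebase may differ.

```python
import re

_SIMPLE_RENAMES = {
    "token_embd.weight": "model.embed_tokens.weight",
    "output_norm.weight": "model.norm.weight",
    "output.weight": "lm_head.weight",
}

def convert_gguf_key(key: str) -> str:
    # "blk.<N>.<rest>" -> "model.layers.<N>.<rest>"; sub-key renames omitted here.
    if key in _SIMPLE_RENAMES:
        return _SIMPLE_RENAMES[key]
    m = re.match(r"blk\.(\d+)\.(.+)", key)
    return f"model.layers.{m.group(1)}.{m.group(2)}" if m else key

def tie_embeddings(state_dict: dict) -> None:
    # Tied embeddings: when the GGUF file has no output matrix, reuse the
    # token embedding weights for lm_head.
    if "lm_head.weight" not in state_dict and "model.embed_tokens.weight" in state_dict:
        state_dict["lm_head.weight"] = state_dict["model.embed_tokens.weight"]
```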
Alexander Eichhorn
0a986c2720 fix(ui): replace misused isCheckpointMainModelConfig with isFluxDevMainModelConfig
The FLUX Dev license warning in model pickers used isCheckpointMainModelConfig
incorrectly:
```
isCheckpointMainModelConfig(config) && config.variant === 'dev'
```

This caused a TypeScript error because CheckpointModelConfig type doesn't
include the 'variant' property (it's extracted as `{ type: 'main'; format:
'checkpoint' }` which doesn't narrow to include variant).

Changes:
- Add isFluxDevMainModelConfig type guard that properly checks
  base='flux' AND variant='dev', returning MainModelConfig
- Update MainModelPicker and InitialStateMainModelPicker to use new guard
- Remove isCheckpointMainModelConfig as it had no other usages

The function was removed because:
1. It was only used for detecting FLUX Dev models (incorrect use case)
2. No other code needs a generic "is checkpoint format" check
3. The pattern in this codebase is specific type guards per model variant
   (isFluxFillMainModelModelConfig, isRefinerMainModelModelConfig, etc.)
2025-12-09 08:18:17 +01:00
Alexander Eichhorn
3e862ced25 fix incorrect typegen 2025-12-09 07:46:12 +01:00
Alexander Eichhorn
ba2475c3f0 fix(z-image): improve device/dtype compatibility and error handling
Add robust device capability detection for bfloat16, replacing hardcoded
dtype with runtime checks that fallback to float16/float32 on unsupported
hardware. This prevents runtime failures on GPUs and CPUs without bfloat16.

Key changes:
- Add TorchDevice.choose_bfloat16_safe_dtype() helper for safe dtype selection
- Fix LoRA device mismatch in layer_patcher.py (add device= to .to() call)
- Replace all assert statements with descriptive exceptions (TypeError/ValueError)
- Add hidden_states bounds check and apply_chat_template fallback in text encoder
- Add GGUF QKV tensor validation (divisible by 3 check)
- Fix CPU noise generation to use float32 for compatibility
- Remove verbose debug logging from LoRA conversion utils
2025-12-09 07:37:06 +01:00
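A minimal sketch of the bfloat16 capability check this commit introduces, assuming CUDA's `is_bf16_supported()` probe; the real `TorchDevice.choose_bfloat16_safe_dtype()` may consider additional device types.

```python
import torch

def choose_bfloat16_safe_dtype(device: torch.device) -> torch.dtype:
    # Prefer bfloat16 where the hardware supports it, otherwise fall back.
    if device.type == "cuda":
        return torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
    # CPU/MPS: float32 is the conservative default.
    return torch.float32
```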
Alexander Eichhorn
841372944f feat(z-image): add metadata recall for VAE and Qwen3 encoder
Add support for saving and recalling Z-Image component models (VAE and
Qwen3 Encoder) in image metadata.

Backend:
- Add qwen3_encoder field to CoreMetadataInvocation (version 2.1.0)

Frontend:
- Add vae and qwen3_encoder to Z-Image graph metadata
- Add Qwen3EncoderModel metadata handler for recall
- Add ZImageVAEModel metadata handler (uses zImageVaeModelSelected
  instead of vaeSelected to set Z-Image-specific VAE state)
- Add qwen3Encoder translation key

This enables "Recall Parameters" / "Remix Image" to restore the VAE
and Qwen3 Encoder settings used for Z-Image generations.
2025-12-09 07:12:36 +01:00
Alexander Eichhorn
e9d52734d1 feat(z-image): add single-file checkpoint support for Z-Image models
Add support for loading Z-Image transformer and Qwen3 encoder models
from single-file safetensors format (in addition to existing diffusers
directory format).

Changes:
- Add Main_Checkpoint_ZImage_Config and Main_GGUF_ZImage_Config for
  single-file Z-Image transformer models
- Add Qwen3Encoder_Checkpoint_Config for single-file Qwen3 text encoder
- Add ZImageCheckpointModel and ZImageGGUFCheckpointModel loaders with
  automatic key conversion from original to diffusers format
- Add Qwen3EncoderCheckpointLoader using Qwen3ForCausalLM with fast
  loading via init_empty_weights and proper weight tying for lm_head
- Update z_image_denoise to accept Checkpoint format models
2025-12-09 06:32:51 +01:00
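A sketch, under stated assumptions, of the fast single-file load this commit describes: build the module on the meta device with `init_empty_weights`, then assign real tensors and tie `lm_head` to the embeddings. The config source and the `strict=False` tolerance for non-persistent buffers are assumptions, not the project's exact loader.

```python
import torch
from accelerate import init_empty_weights
from transformers import AutoConfig, Qwen3ForCausalLM

def load_qwen3_single_file(config_source: str, state_dict: dict[str, torch.Tensor]):
    config = AutoConfig.from_pretrained(config_source)  # e.g. a Qwen3 repo id (assumption)
    with init_empty_weights():
        model = Qwen3ForCausalLM(config)  # no allocation yet; weights live on "meta"
    # Proper weight tying: reuse the embeddings when the checkpoint omits lm_head.
    if "lm_head.weight" not in state_dict:
        state_dict["lm_head.weight"] = state_dict["model.embed_tokens.weight"]
    # assign=True swaps the meta tensors for the checkpoint tensors directly;
    # strict=False tolerates non-persistent buffers absent from the file.
    model.load_state_dict(state_dict, strict=False, assign=True)
    return model
```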
Alexander Eichhorn
2e0cd4d68c Patch from @lstein for the update of diffusers 2025-12-06 03:12:50 +01:00
Alexander Eichhorn
b28d58b8ce Merge branch 'feat/z-image-turbo-support' of https://github.com/Pfannkuchensack/InvokeAI into feat/z-image-turbo-support 2025-12-05 01:12:34 +01:00
Alexander Eichhorn
4a1710b795 fix for the typegen-checks 2025-12-05 01:12:19 +01:00
Alexander Eichhorn
9f6d04c690 Merge branch 'main' into feat/z-image-turbo-support 2025-12-05 00:45:02 +01:00
Alexander Eichhorn
66729ea9eb Fix windows path again again again... 2025-12-03 03:28:43 +01:00
Alexander Eichhorn
280202908a feat: Add GGUF quantized Z-Image support and improve VAE/encoder flexibility
Add comprehensive support for GGUF quantized Z-Image models and improve component flexibility:

Backend:
- New Main_GGUF_ZImage_Config for GGUF quantized Z-Image transformers
- Z-Image key detection (_has_z_image_keys) to identify S3-DiT models
- GGUF quantization detection and sidecar LoRA patching for quantized models
- Qwen3Encoder_Qwen3Encoder_Config for standalone Qwen3 encoder models

Model Loader:
- Split Z-Image model
2025-12-02 20:31:11 +01:00
Alexander Eichhorn
2b062b21cd fix: Improve Flux AI Toolkit LoRA detection to prevent Z-Image misidentification
Move Flux layer structure check before metadata check to prevent misidentifying Z-Image LoRAs (which use `diffusion_model.layers.X`) as Flux AI Toolkit format. Flux models use `double_blocks` and `single_blocks` patterns which are now checked first regardless of metadata presence.
2025-12-02 15:50:01 +01:00
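A minimal sketch of the detection ordering described above; the key patterns are taken from the commit message, while the function name and return values are illustrative.

```python
def identify_lora_family(state_dict: dict) -> str:
    keys = list(state_dict.keys())
    # Check Flux layer structure first, regardless of metadata, so Z-Image
    # LoRAs are not misread as Flux AI Toolkit format.
    if any("double_blocks" in k or "single_blocks" in k for k in keys):
        return "flux"
    if any(k.startswith("diffusion_model.layers.") for k in keys):
        return "z-image"
    return "unknown"
```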
Alexander Eichhorn
6f9f8e57ac Feature(UI): bulk remove models loras (#8659)
* feat: Add bulk delete functionality for models, LoRAs, and embeddings

Implements a comprehensive bulk deletion feature for the model manager that allows users to select and delete multiple models, LoRAs, and embeddings at once.

Key changes:

Frontend:
- Add multi-selection state management to modelManagerV2 slice
- Update ModelListItem to support Ctrl/Cmd+Click multi-selection with checkboxes
- Create ModelListHeader component showing selection count and bulk actions
- Create BulkDeleteModelsModal for confirming bulk deletions
- Integrate bulk delete UI into ModelList with proper error handling
- Add API mutation for bulk delete operations

Backend:
- Add POST /api/v2/models/i/bulk_delete endpoint
- Implement BulkDeleteModelsRequest and BulkDeleteModelsResponse schemas
- Handle partial failures with detailed error reporting
- Return lists of successfully deleted and failed models (a rough sketch of this endpoint follows this commit entry)

This feature significantly improves user experience when managing large model libraries, especially when restructuring model storage locations.

Fixes issue where users had to delete models individually after moving model files to new storage locations.

* fix: prevent model list header from scrolling with content

* fix: improve error handling in bulk model deletion

- Added proper error serialization using serialize-error for better error logging
- Explicitly defined BulkDeleteModelsResponse type instead of relying on generated schema reference

* refactor: improve code organization in ModelList components

- Reordered imports to follow conventional grouping (external, internal, then third-party utilities)
- Added type assertion for error serialization to satisfy TypeScript
- Extracted inline event handler into named callback function for better readability

* refactor: consolidate Button component props to single line

* feat(ui): enhance model manager bulk selection with select-all and actions menu

- Added select-all checkbox in navigation header with indeterminate state support
- Replaced single delete button with actions dropdown menu for future extensibility
- Made checkboxes always visible instead of conditionally showing on selection
- Moved model filtering logic to ModelListNavigation for select-all functionality
- Improved UX by showing selection state for filtered models only

* fix the wrong path separator from my Windows system

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-01 20:09:27 -05:00
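A rough sketch of the bulk-delete endpoint outlined in the backend notes above. The route path matches the commit message; the schema field names and the `delete_model` helper are illustrative stand-ins.

```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter(prefix="/api/v2/models")

class BulkDeleteModelsRequest(BaseModel):
    keys: list[str]

class BulkDeleteModelsResponse(BaseModel):
    deleted: list[str]      # successfully deleted model keys
    errors: dict[str, str]  # key -> error type for partial failures

def delete_model(key: str) -> None:
    # Stand-in for the model-manager delete call; raises on failure.
    raise FileNotFoundError(key)

@router.post("/i/bulk_delete")
def bulk_delete_models(req: BulkDeleteModelsRequest) -> BulkDeleteModelsResponse:
    deleted: list[str] = []
    errors: dict[str, str] = {}
    for key in req.keys:
        try:
            delete_model(key)
            deleted.append(key)
        except Exception as exc:  # report partial failures instead of aborting
            errors[key] = type(exc).__name__
    return BulkDeleteModelsResponse(deleted=deleted, errors=errors)
```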
Alexander Eichhorn
eaf4742799 Fix windows path again again 2025-12-01 22:28:39 +01:00
Alexander Eichhorn
f05ea28cbd feat: Add Z-Image LoRA support
Add comprehensive LoRA support for Z-Image models including:

Backend:
- New Z-Image LoRA config classes (LoRA_LyCORIS_ZImage_Config, LoRA_Diffusers_ZImage_Config)
- Z-Image LoRA conversion utilities with key mapping for transformer and Qwen3 encoder
- LoRA prefix constants (Z_IMAGE_LORA_TRANSFORMER_PREFIX, Z_IMAGE_LORA_QWEN3_PREFIX)
- LoRA detection logic to distinguish Z-Image from Flux models
- Layer patcher improvements for proper dtype conversion and parameter
2025-12-01 22:23:30 +01:00
Alexander Eichhorn
13ac16e2c0 fix windows path again. 2025-12-01 00:30:53 +01:00
Alexander Eichhorn
eb3f1c9a61 feat: Add Z-Image-Turbo model support
Add comprehensive support for Z-Image-Turbo (S3-DiT) models including:

Backend:
- New BaseModelType.ZImage in taxonomy
- Z-Image model config classes (ZImageTransformerConfig, Qwen3TextEncoderConfig)
- Model loader for Z-Image transformer and Qwen3 text encoder
- Z-Image conditioning data structures
- Step callback support for Z-Image with FLUX latent RGB factors

Invocations:
- z_image_model_loader: Load Z-Image transformer and Qwen3 encoder
- z_image_text_encoder: Encode prompts using Qwen3 with chat template
- z_image_denoise: Flow matching denoising with time-shifted sigmas
- z_image_image_to_latents: Encode images to 16-channel latents
- z_image_latents_to_image: Decode latents using FLUX VAE

Frontend:
- Z-Image graph builder for text-to-image generation
- Model picker and validation updates for z-image base type
- CFG scale now allows 0 (required for Z-Image-Turbo)
- Clip skip disabled for Z-Image (uses Qwen3, not CLIP)
- Optimal dimension settings for Z-Image (1024x1024)

Technical details:
- Uses Qwen3 text encoder (not CLIP/T5)
- 16 latent channels with FLUX-compatible VAE
- Flow matching scheduler with dynamic time shift
- 8 inference steps recommended for Turbo variant
- bfloat16 inference dtype
2025-12-01 00:22:32 +01:00
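A hedged sketch of a time-shifted flow-matching sigma schedule like the one the denoise node describes. The shift formula is the common FLUX-style transform; the default shift value and its dependence on resolution are assumptions here.

```python
import torch

def shifted_sigmas(num_steps: int, shift: float = 3.0) -> torch.Tensor:
    # Linear sigmas from 1.0 down toward 0, then apply the time shift
    # sigma' = shift * sigma / (1 + (shift - 1) * sigma).
    sigmas = torch.linspace(1.0, 1.0 / num_steps, num_steps)
    return shift * sigmas / (1 + (shift - 1) * sigmas)

# e.g. an ~8-step Turbo schedule: shifted_sigmas(8)
```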
Kent Keirsey
c6a9847bbd feat(ui): Color Picker V2 (#8585)
* pinned colorpicker

* hex options

* remove unused consts

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-11-16 09:49:55 -05:00
Alexander Eichhorn
a2e109b3c2 feat(ui): improve hotkey customization UX with interactive controls and validation (#8649)
* feat: remove the ModelFooter in the ModelView and add the Delete Model Button from the Footer into the View

* forget to run pnpm fix

* chore(ui): reorder the model view buttons

* Initial plan

* Add customizable hotkeys infrastructure with UI

Co-authored-by: dunkeroni <3298737+dunkeroni@users.noreply.github.com>

* Fix ESLint issues in HotkeyEditor component

Co-authored-by: dunkeroni <3298737+dunkeroni@users.noreply.github.com>

* Fix knip unused export warning

Co-authored-by: dunkeroni <3298737+dunkeroni@users.noreply.github.com>

* Add tests for hotkeys slice

Co-authored-by: dunkeroni <3298737+dunkeroni@users.noreply.github.com>

* Fix tests to actually call reducer and add documentation

Co-authored-by: dunkeroni <3298737+dunkeroni@users.noreply.github.com>

* docs: add comprehensive hotkeys system documentation

- Created new HOTKEYS.md technical documentation for developers explaining architecture, data flow, and implementation details
- Added user-facing hotkeys.md guide with features overview and usage instructions
- Removed old CUSTOMIZABLE_HOTKEYS.md in favor of new split documentation
- Expanded documentation with detailed sections on:
  - State management and persistence
  - Component architecture and responsibilities
  - Developer integration

* Behavior changed to hotkey press instead of text input, plus checking for already-used hotkeys

---------

Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: dunkeroni <3298737+dunkeroni@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-11-16 14:35:37 +00:00
dunkeroni
5642099a40 Feat: SDXL Color Compensation (#8637)
* feat(nodes/UI): add SDXL color compensation option

* adjust value

* Better warnings on wrong VAE base model

* Restrict XL compensation to XL models

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>

* fix: BaseModelType missing import

* (chore): appease the ruff

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-11-16 14:32:12 +00:00
gogurtenjoyer
382d85ee23 Fix memory issues when installing models on Windows (#8652)
* Wrap GGUF loader for context managed close()

Wrap gguf.GGUFReader and then use a context manager to load memory-mapped GGUF files, so that they will automatically close properly when no longer needed. Should prevent the 'file in use in another process' errors on Windows.

* Additional check for cached state_dict

Additional check for cached state_dict as path is now optional - should solve model manager 'missing' this and the resultant memory errors.

* Appease ruff

* Further ruff appeasement

* ruff

* loaders.py fix for linux

No longer attempting to delete internal object.

* loaders.py - one more _mmap ref removed

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-11-16 09:25:52 -05:00
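A minimal sketch of the context-managed GGUF load described in this commit, assuming the `gguf` package's `GGUFReader`; the exact cleanup the real wrapper performs (especially on Windows) differs, as the later fixups in this PR show.

```python
from contextlib import contextmanager

import gguf

@contextmanager
def loaded_gguf(path: str):
    # Keep the reader (and its memory-mapped file) alive only for the duration
    # of the with-block so the handle is released promptly on Windows.
    reader = gguf.GGUFReader(path)
    try:
        yield reader
    finally:
        del reader  # drop the last reference so the memory map can be freed
```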
Jonathan
abcc987f6f Rework graph.py (#8642)
* Rework graph, add documentation

* Minor fixes to README.md

* Updated schema

* Fixed test to match behavior - all nodes executed, parents before children

* Update invokeai/app/services/shared/graph.py

Cleaned up code

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>

* Change silent corrections to enforcing invariants

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-11-16 09:10:47 -05:00
Lincoln Stein
36e400dd5d (chore) Update requirements to python 3.11-12 (#8657)
* (chore) update requirements to python 3.11-12

* update uv.lock
2025-11-08 21:29:43 -05:00
Weblate (bot)
0113931956 ui: translations update from weblate (#8599)
* translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2099 of 2132 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2130 of 2163 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2130 of 2163 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Japanese)

Currently translated at 99.6% (2155 of 2163 strings)

Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2103 of 2136 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI

---------

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
2025-11-04 02:29:05 +00:00
DustyShoe
8d6e00533e Fix to enable loading fp16 repo variant ControlNets (#8643)
* Fix ControlNet repo variant detection for fp16 weights

* Remove ControlNet diffusers fp16 regression test

* Update invokeai/backend/model_manager/configs/controlnet.py

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>

* style: ruff format controlnet.py

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-11-03 21:23:35 -05:00
Lincoln Stein
10eebb6c0c remove jazzhaiku as well 2025-11-02 15:22:13 -05:00
Lincoln Stein
68bcf2ebe0 chore(codeowners): remove commercial dev codeowners 2025-11-02 15:22:13 -05:00
blessedcoolant
ad0b09c738 chore(ui): reorder the model view buttons 2025-10-28 00:16:20 +05:30
Alexander Eichhorn
737cf795e8 forget to run pnpm fix 2025-10-28 00:16:20 +05:30
Alexander Eichhorn
6192ff5abb feat: remove the ModelFooter in the ModelView and add the Delete Model Button from the Footer into the View 2025-10-28 00:16:20 +05:30
blessedcoolant
066ba5fb19 fix(mm): directory path leakage on scan folder error (#8641)
## Summary

This fixes a bug in which private directory paths on the host could be
leaked to the user interface. The error occurs during the `scan_folders`
operation when a subdirectory is not accessible. The UI shows a
permission denied error message, followed by the path of the offending
directory. This patch limits the error message to the error type only
and does not give further details.

## Related Issues / Discussions

This bug was reported in a private DM on the Discord server.

## QA Instructions

Before applying this PR, go to ***Model Manager -> Add Model -> Scan
Folder*** and enter the path of a directory that has subdirectories that
the backend should not have access to, for example `/etc`. Press the
***Scan Folder*** button. You will see a Permission Denied error message
that gives away the path of the first inaccessible subdirectory.

After applying this PR, you will see just the Permission Denied error
without details.

## Merge Plan

Merge when approved.

## Checklist

- [X] _The PR has a short but descriptive title, suitable for a
changelog_
- [X] _Tests added / updated (if applicable)_
- [X] _Changes to a redux slice have a corresponding migration_
- [X] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-10-28 00:02:59 +05:30
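A minimal sketch of the sanitized error handling this fix describes: report only the exception type, never the offending path. The function shape is illustrative, not the actual `scan_folders` route handler.

```python
from pathlib import Path

def scan_folder(path: Path) -> list[Path]:
    try:
        return [p for p in path.rglob("*") if p.suffix in {".safetensors", ".ckpt", ".gguf"}]
    except OSError as exc:
        # Deliberately omit str(exc) and exc.filename so private paths never
        # reach the UI; only the error type is surfaced.
        raise RuntimeError(f"Unable to scan folder: {type(exc).__name__}") from None
```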
Lincoln Stein
2fb4c92310 fix(mm): directory path leakage on scan folder error 2025-10-27 08:54:57 -04:00
psychedelicious
3fdceba5fc chore: bump version to v6.9.0 2025-10-17 12:13:01 +11:00
psychedelicious
ae4bcc08f2 chore(ui): point ui lib dep at gh repo 2025-10-17 07:22:39 +11:00
psychedelicious
e1d88f93ca fix(ui): generator nodes
Closes #8617
2025-10-16 10:37:14 +11:00
psychedelicious
4ad2574835 feat(ui): add button to reidentify model to mm 2025-10-16 10:33:02 +11:00
psychedelicious
0e3d4beb48 chore(ui): typegen 2025-10-16 10:33:02 +11:00
psychedelicious
dcfd4ea756 feat(mm): reidentify models
Add route and model record service method to reidentify a model. This
re-probes the model files and replaces the model's config with the new
one if it does not error.
2025-10-16 10:33:02 +11:00
psychedelicious
093f8d6720 fix(mm): ignore files in hidden directories when identifying models 2025-10-16 10:33:02 +11:00
psychedelicious
22fdfab764 chore: bump version to v6.9.0rc3 2025-10-16 08:08:44 +11:00
psychedelicious
7a0b157fb8 feat(mm): more exports in invocation api 2025-10-16 08:08:44 +11:00
psychedelicious
563da9ee8e feat(mm): write warning README file to models dir 2025-10-16 08:08:44 +11:00
psychedelicious
c8d9cdc22e docs(mm): add readme for updating or adding new model support 2025-10-16 08:08:44 +11:00
psychedelicious
e9c2411da9 chore: bump version to v6.9.0rc2 2025-10-16 08:08:44 +11:00
psychedelicious
90989291ed fix(ui): wait for nav api to be ready before loading main app component 2025-10-16 08:08:44 +11:00
psychedelicious
d04fc343f0 feat(ui): add flag for connected status 2025-10-16 08:08:44 +11:00
psychedelicious
437594915a feat(mm): add model taxonomy and other classes to public API exports 2025-10-16 08:08:44 +11:00
psychedelicious
875aba8979 tidy(mm): remove unused class 2025-10-16 08:08:44 +11:00
psychedelicious
61d13f20ea chore: bump version to v6.9.0rc1 2025-10-16 08:08:44 +11:00
psychedelicious
3b0dd5768b chore(ui): update whatsnew 2025-10-16 08:08:44 +11:00
psychedelicious
d9888912e8 chore(ui): fix schema 2025-10-15 10:46:16 +11:00
psychedelicious
7cfaca804a chore: ruff 2025-10-15 10:46:16 +11:00
psychedelicious
f37f817fd1 fix(ui): move logging setup to react code 2025-10-15 10:46:16 +11:00
psychedelicious
85e5b40627 docs(ui): add canvas overview and design doc 2025-10-15 10:46:16 +11:00
psychedelicious
e93cda8b2b docs(ui): more documentation 2025-10-15 10:46:16 +11:00
psychedelicious
3d0f29f85f tidy: app "config", settings modal, infill methods
We had an "infill methods" route that long ago reported to the frontend the
availability of the infill method, upscale method (model), NSFW checker, and
watermark feature.

None of these were used except for the patchmatch check. Removed them,
made the check exclusively for patchmatch, updated related code in redux
app startup listeners and settings modal.
2025-10-15 10:46:16 +11:00
psychedelicious
117fbd97af tidy(ui): clean up useSyncQueueStatus 2025-10-15 10:46:16 +11:00
psychedelicious
0bc11b9af9 docs(ui): add high-level readmes for various features 2025-10-15 10:46:16 +11:00
psychedelicious
79be70d918 tidy(ui): rename GalleryImageGrid and add comments 2025-10-15 10:46:16 +11:00
psychedelicious
16330a14cd tidy(ui): rename GalleryPanel file 2025-10-15 10:46:16 +11:00
psychedelicious
a6690072f1 docs(ui): add comments for image context menu 2025-10-15 10:46:16 +11:00
psychedelicious
26aebb0165 docs(ui): add comments for nes 2025-10-15 10:46:16 +11:00
psychedelicious
0742ebe11a tidy(ui): lift error boundary reset cb outside component 2025-10-15 10:46:16 +11:00
psychedelicious
c80c89dfdb tidy: removing unused code paths 7 2025-10-15 10:46:16 +11:00
psychedelicious
240dc673e4 tidy: removing unused code paths 6 2025-10-15 10:46:16 +11:00
psychedelicious
b2e93d7be7 tidy: removing unused code paths 5 2025-10-15 10:46:16 +11:00
psychedelicious
3126726abf tidy: removing unused code paths 4 2025-10-15 10:46:16 +11:00
psychedelicious
c09dc8e14c tidy: removing unused code paths 3 2025-10-15 10:46:16 +11:00
psychedelicious
906ec4519d tidy: removing unused code paths 2 2025-10-15 10:46:16 +11:00
psychedelicious
7cff5da2c0 tidy: removing unused code paths 1 2025-10-15 10:46:16 +11:00
psychedelicious
454d05bbde refactor: model manager v3 (#8607)
* feat(mm): add UnknownModelConfig

* refactor(ui): move model categorisation-ish logic to central location, simplify model manager models list

* refactor(ui): more cleanup of model categories

* refactor(ui): remove unused excludeSubmodels

I can't remember what this was for and don't see any reference to it.
Maybe it's just remnants from a previous implementation?

* feat(nodes): add unknown as model base

* chore(ui): typegen

* feat(ui): add unknown model base support in ui

* feat(ui): allow changing model type in MM, fix up base and variant selects

* feat(mm): omit model description instead of making it "base type filename model"

* feat(app): add setting to allow unknown models

* feat(ui): allow changing model format in MM

* feat(app): add the installed model config to install complete events

* chore(ui): typegen

* feat(ui): toast warning when installed model is unidentified

* docs: update config docstrings

* chore(ui): typegen

* tests(mm): fix test for MM, leave the UnknownModelConfig class in the list of configs

* tidy(ui): prefer types from zod schemas for model attrs

* chore(ui): lint

* fix(ui): wrong translation string

* feat(mm): normalized model storage

Store models in a flat directory structure. Each model is in a dir named
its unique key (a UUID). Inside that dir is either the model file or the
model dir.

* feat(mm): add migration to flat model storage

* fix(mm): normalized multi-file/diffusers model installation no worky

now worky

* refactor: port MM probes to new api

- Add concept of match certainty to new probe
- Port CLIP Embed models to new API
- Fiddle with stuff

* feat(mm): port TIs to new API

* tidy(mm): remove unused probes

* feat(mm): port spandrel to new API

* fix(mm): parsing for spandrel

* fix(mm): loader for clip embed

* fix(mm): tis use existing weight_files method

* feat(mm): port vae to new API

* fix(mm): vae class inheritance and config_path

* tidy(mm): patcher types and import paths

* feat(mm): better errors when invalid model config found in db

* feat(mm): port t5 to new API

* feat(mm): make config_path optional

* refactor(mm): simplify model classification process

Previously, we had a multi-phase strategy to identify models from their
files on disk:
1. Run each model config classes' `matches()` method on the files. It
checks if the model could possibly be an identified as the candidate
model type. This was intended to be a quick check. Break on the first
match.
2. If we have a match, run the config class's `parse()` method. It
derives some additional model config attrs from the model files. This was
intended to encapsulate heavier operations that may require loading the
model into memory.
3. Derive the common model config attrs, like name, description,
calculate the hash, etc. Some of these are also heavier operations.

This strategy has some issues:
- It is not clear how the pieces fit together. There is some
back-and-forth between different methods and the config base class. It
is hard to trace the flow of logic until you fully wrap your head around
the system and therefore difficult to add a model architecture to the
probe.
- The assumption that we could do quick, lightweight checks before
heavier checks is incorrect. We often _must_ load the model state dict
in the `matches()` method. So there is no practical perf benefit to
splitting up the responsibility of `matches()` and `parse()`.
- Sometimes we need to do the same checks in `matches()` and `parse()`.
In these cases, splitting the logic has a negative perf impact
because we are doing the same work twice.
- As we introduce the concept of an "unknown" model config (i.e. a model
that we cannot identify, but still record in the db; see #8582), we will
_always_ run _all_ the checks for every model. Therefore we need not try
to defer heavier checks or resource-intensive ops like hashing. We are
going to do them anyways.
- There are situations where a model may match multiple configs. One
known case are SD pipeline models with merged LoRAs. In the old probe
API, we relied on the implicit order of checks to know that if a model
matched for pipeline _and_ LoRA, we prefer the pipeline match. But, in
the new API, we do not have this implicit ordering of checks. To resolve
this in a resilient way, we need to get all matches up front, then use
tie-breaker logic to figure out which should win (or add "differential
diagnosis" logic to the matchers).
- Field overrides weren't handled well by this strategy. They were only
applied at the very end, if a model matched successfully. This means we
cannot tell the system "Hey, this model is type X with base Y. Trust me
bro.". We cannot override the match logic. As we move towards letting
users correct mis-identified models (see #8582), this is a requirement.

We can simplify the process significantly and better support "unknown"
models.

Firstly, model config classes now have a single `from_model_on_disk()`
method that attempts to construct an instance of the class from the
model files. This replaces the `matches()` and `parse()` methods.

If we fail to create the config instance, a special exception is raised
that indicates why we think the files cannot be identified as the given
model config class.

Next, the flow for model identification is a bit simpler:
- Derive all the common fields up-front (name, desc, hash, etc).
- Merge in overrides.
- Call `from_model_on_disk()` for every config class, passing in the
fields. Overrides are handled in this method.
- Record the results for each config class and choose the best one (a rough
sketch of this flow follows this PR entry).

The identification logic is a bit more verbose, with the special
exceptions and handling of overrides, but it is very clear what is
happening.

The one downside I can think of for this strategy is we do need to check
every model type, instead of stopping at the first match. It's a bit
less efficient. In practice, however, this isn't a hot code path, and
the improved clarity is worth far more than perf optimizations that the
end user will likely never notice.

* refactor(mm): remove unused methods in config.py

* refactor(mm): add model config parsing utils

* fix(mm): abstractmethod bork

* tidy(mm): clarify that model id utils are private

* fix(mm): fall back to UnknownModelConfig correctly

* feat(mm): port CLIPVisionDiffusersConfig to new api

* feat(mm): port SigLIPDiffusersConfig to new api

* feat(mm): make match helpers more succinct

* feat(mm): port flux redux to new api

* feat(mm): port ip adapter to new api

* tidy(mm): skip optimistic override handling for now

* refactor(mm): continue iterating on config

* feat(mm): port flux "control lora" and t2i adapter to new api

* tidy(ui): use Extract to get model config types

* fix(mm): t2i base determination

* feat(mm): port cnet to new api

* refactor(mm): add config validation utils, make it all consistent and clean

* feat(mm): wip port of main models to new api

* feat(mm): wip port of main models to new api

* feat(mm): wip port of main models to new api

* docs(mm): add todos

* tidy(mm): removed unused model merge class

* feat(mm): wip port main models to new api

* tidy(mm): clean up model heuristic utils

* tidy(mm): clean up ModelOnDisk caching

* tidy(mm): flux lora format util

* refactor(mm): make config classes narrow

Simpler logic to identify, less complexity to add new model, fewer
useless attrs that do not relate to the model arch, etc

* refactor(mm): diffusers loras

w

* feat(mm): consistent naming for all model config classes

* fix(mm): tag generation & scattered probe fixes

* tidy(mm): consistent class names

* refactor(mm): split configs into separate files

* docs(mm): add comments for identification utils

* chore(ui): typegen

* refactor(mm): remove legacy probe, new configs dir structure, update imports

* fix(mm): inverted condition

* docs(mm): update docsstrings in factory.py

* docs(mm): document flux variant attr

* feat(mm): add helper method for legacy configs

* feat(mm): satisfy type checker in flux denoise

* docs(mm): remove extraneous comment

* fix(mm): ensure unknown model configs get unknown attrs

* fix(mm): t5 identification

* fix(mm): sdxl ip adapter identification

* feat(mm): more flexible config matching utils

* fix(mm): clip vision identification

* feat(mm): add sanity checks before probing paths

* docs(mm): add reminder for self for field migrations

* feat(mm): clearer naming for main config class hierarchy

* feat(mm): fix clip vision starter model bases, add ref to actual models

* feat(mm): add model config schema migration logic

* fix(mm): duplicate import

* refactor(mm): split big migration into 3

Split the big migration that did all of these things into 3:

- Migration 22: Remove unique constraint on base/name/type in models
table
- Migration 23: Migrate configs to v6.8.0 schemas
- Migration 24: Normalize file storage

* fix(mm): pop base/type/format when creating unknown model config

* fix(db): migration 22 insert only real cols

* fix(db): migration 23 fall back to unknown model when config change fails

* feat(db): run migrations 23 and 24

* fix(mm): false negative on flux lora

* fix(mm): vae checkpoint probe checking for dir instead of file

* fix(mm): ModelOnDisk skips dirs when looking for weights

Previously a path w/ any of the known weights suffixes would be seen as
a weights file, even if it was a directory. We now check to ensure the
candidate path is actually a file before adding it to the list of
weights.

* feat(mm): add method to get main model defaults from a base

* feat(mm): do not log when multiple non-unknown model matches

* refactor(mm): continued iteration on model identification

* tests(mm): refactor model identification tests

Overhaul of model identification (probing) tests. Previously we didn't
test the correctness of probing except in a few narrow cases - now we
do.

See tests/model_identification/README.md for a detailed overview of the
new test setup. It includes instructions for adding a new test case. In
brief:

- Download the model you want to add as a test case
- Run a script against it to generate the test model files
- Fill in the expected model type/format/base/etc in the generated test
metadata JSON file

Included test cases:
- All starter models
- A handful of other models that I had installed
- Models present in the previous test cases as smoke tests, now also
tested for correctness

* fix(mm): omit type/format/base when creating unknown config instance

* feat(mm): use ValueError for model id sanity checks

* feat(mm): add flag for updating models to allow class changes

* tests(mm): fix remaining MM tests

* feat: allow users to edit models freely

* feat(ui): add warning for model settings edit

* tests(mm): flux state dict tests

* tidy: remove unused file

* fix(mm): lora state dict loading in model id

* feat(ui): use translation string for model edit warning

* docs(db): update version numbers in migration comments

* chore: bump version to v6.9.0a1

* docs: update model id readme

* tests(mm): attempt to fix windows model id tests

* fix(mm): issue with deleting single file models

* feat(mm): just delete the dir w/ rmtree when deleting model

* tests(mm): windows CI issue

* fix(ui): typegen schema sync

* fix(mm): fixes for migration 23

- Handle CLIP Embed and Main SD models missing variant field
- Handle errors when calling the discriminator function, previously only
handled ValidationError but it could be a ValueError or something else
- Better logging for config migration

* chore: bump version to v6.9.0a2

* chore: bump version to v6.9.0a3
2025-10-15 10:18:53 +11:00
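A self-contained sketch of the simplified identification flow described in the "simplify model classification process" notes above: derive the common fields once, attempt every config class, and fall back to an unknown config. All class and function names here are illustrative stand-ins, not the project's actual API.

```python
from dataclasses import dataclass, field

class NotAMatchError(Exception):
    """Raised when files cannot be identified as the candidate config type."""

@dataclass
class ModelConfig:
    name: str
    type: str = "unknown"
    extra: dict = field(default_factory=dict)

# Hypothetical matchers standing in for the config classes' from_model_on_disk():
# each returns a config or raises NotAMatchError after inspecting the files.
def match_lora(fields: dict) -> ModelConfig:
    raise NotAMatchError()

def match_main(fields: dict) -> ModelConfig:
    return ModelConfig(name=fields["name"], type="main")

def identify_model(fields: dict) -> ModelConfig:
    candidates = []
    for matcher in (match_main, match_lora):  # every candidate is tried
        try:
            candidates.append(matcher(fields))
        except NotAMatchError:
            continue
    # No match: record an "unknown" config rather than failing outright.
    # A real implementation would apply tie-breaker logic when several match.
    return candidates[0] if candidates else ModelConfig(name=fields["name"])

print(identify_model({"name": "my-model"}).type)  # -> "main"
```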
psychedelicious
a7e1f06698 chore: uv lock 2025-10-12 08:18:03 -04:00
psychedelicious
8dfaf7e697 chore: bump version to v6.8.1 2025-10-12 08:18:03 -04:00
psychedelicious
f59ffbe145 fix: schema generation bug in fastapi 0.119.0
Couldn't figure out a quick and easy fix. Needs some pydantic/FastAPI
finagling. For now, roll back to the last good version.
2025-10-12 08:18:03 -04:00
dunkeroni
bd4bb075a5 bump node version to 2.0.0 2025-10-09 17:55:13 +11:00
dunkeroni
e19b7d4afb update typegen 2025-10-09 17:55:13 +11:00
dunkeroni
f8d0b43a9b change Colorspace title to "Color Space" 2025-10-09 17:55:13 +11:00
dunkeroni
50c77d9bf0 error message for incorrect mask size 2025-10-09 17:55:13 +11:00
dunkeroni
358cc0349e (chore) cleanup and schema 2025-10-09 17:55:13 +11:00
copilot-swe-agent[bot]
417e6ebdbc Simplify mask application by pasting base on corrected instead of inverting mask
Co-authored-by: dunkeroni <3298737+dunkeroni@users.noreply.github.com>
2025-10-09 17:55:13 +11:00
copilot-swe-agent[bot]
7919d659b7 Use PIL Image.paste() for mask application instead of numpy array blending
Co-authored-by: dunkeroni <3298737+dunkeroni@users.noreply.github.com>
2025-10-09 17:55:13 +11:00
dunkeroni
ec665d2c7f remove extra conversion 2025-10-09 17:55:13 +11:00
dunkeroni
020d36b234 remove extra conversion 2025-10-09 17:55:13 +11:00
copilot-swe-agent[bot]
d67272c027 Switch from LAB to YCbCr colorspace for simpler conversions
Co-authored-by: dunkeroni <3298737+dunkeroni@users.noreply.github.com>
2025-10-09 17:55:13 +11:00
copilot-swe-agent[bot]
82548f9e41 Fix mask loading and blending: load as L, white=original, black=result
Co-authored-by: dunkeroni <3298737+dunkeroni@users.noreply.github.com>
2025-10-09 17:55:13 +11:00
copilot-swe-agent[bot]
07a2369105 Add safety check for CDF normalization in histogram matching
Co-authored-by: dunkeroni <3298737+dunkeroni@users.noreply.github.com>
2025-10-09 17:55:13 +11:00
copilot-swe-agent[bot]
b1f7e2dfdc Refactor ColorCorrectInvocation with histogram matching
Co-authored-by: dunkeroni <3298737+dunkeroni@users.noreply.github.com>
2025-10-09 17:55:13 +11:00
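A minimal sketch of single-channel histogram matching via CDF mapping, including the kind of degenerate-CDF safety check these commits mention; the real node applies this per YCbCr channel and blends the result with a mask.

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    # Single uint8 channel: build CDFs, then map each source level to the
    # reference level with the nearest CDF value.
    src_cdf = np.cumsum(np.bincount(source.ravel(), minlength=256)).astype(np.float64)
    ref_cdf = np.cumsum(np.bincount(reference.ravel(), minlength=256)).astype(np.float64)
    if src_cdf[-1] == 0 or ref_cdf[-1] == 0:
        return source.copy()  # safety check: empty or degenerate input
    src_cdf /= src_cdf[-1]
    ref_cdf /= ref_cdf[-1]
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[source]
```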
psychedelicious
b2b4a35bc4 chore(ui): update whatsnew 2025-10-09 07:22:56 +11:00
psychedelicious
c249a25f85 chore: bump version to v6.8.0 2025-10-09 07:22:56 +11:00
psychedelicious
25f8ab24aa tests: fix test for breaking pydantic v2.12 change
Fixes a test failure introduced by
https://github.com/pydantic/pydantic/pull/11957

TL;DR: "after" model validators should be instance methods, not class
methods. Batch model updated to use an instance method, which fixes the
failing test.
2025-10-08 17:24:47 +11:00
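A minimal illustration of the pydantic requirement this fix addresses: `mode="after"` model validators receive the validated instance, so they are written as instance methods. The field shown is illustrative, not the real `Batch` model.

```python
from pydantic import BaseModel, model_validator

class Batch(BaseModel):
    runs: int = 1  # illustrative field

    @model_validator(mode="after")
    def check_runs(self) -> "Batch":
        # Instance method (note `self`), as required after pydantic's change.
        if self.runs < 1:
            raise ValueError("runs must be >= 1")
        return self
```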
psychedelicious
c0469ef633 chore: bump version to v6.8.0rc2 2025-10-08 17:24:47 +11:00
hffeka
310e826d76 docs: add BiRefNet and Image Export to communityNodes.md 2025-10-06 10:08:29 +11:00
psychedelicious
a423ead99e fix(ui): correct the in-place install verbiage, add tooltip 2025-09-30 13:08:17 +10:00
psychedelicious
3707c3b034 fix(ui): do not bake opacity when rasterizing layer adjustments 2025-09-22 11:43:08 +10:00
Mary Hipp
5885db4ab5 ruff 2025-09-19 11:07:36 -04:00
Mary Hipp
36ed9b750d restore list_queue_items method 2025-09-19 11:07:36 -04:00
psychedelicious
3cec06f86e chore(ui): typegen 2025-09-19 22:13:12 +10:00
psychedelicious
28b5f7a1c5 feat(nodes): better deprecation handling for ui_type
- Move migration of model-specific ui_types into BaseInvocation. This
gives us access to the node and field names, so the warnings are more
useful to the end user.
- Ensure we serialize the fields' json_schema_extra with enum values.
This wasn't a problem until now, when it interferes with migrating
ui_type cleanly. It's a transparent change.
- Improve warnings when validating fields (which includes the ui_type
migration logic)
2025-09-19 22:13:12 +10:00
psychedelicious
22cbb23ae0 fix(ui): ref images for flux kontext & api models not parsed correctly 2025-09-19 21:40:17 +10:00
Riccardo Giovanetti
4d585e3eec translationBot(ui): update translation (Italian)
Currently translated at 98.4% (2130 of 2163 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (2127 of 2161 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-09-18 14:01:31 +10:00
psychedelicious
006b4356bb chore(ui): typegen 2025-09-18 12:39:27 +10:00
psychedelicious
da947866f2 fix(nodes): ensure SD2 models are pickable in loader/cnet nodes 2025-09-18 12:39:27 +10:00
psychedelicious
84a2cc6fc9 chore(ui): typegen 2025-09-18 12:39:27 +10:00
psychedelicious
b50534bb49 revert(nodes): do not deprecate ui_type for output fields! only deprecate the model ui types 2025-09-18 12:39:27 +10:00
psychedelicious
c305e79fee tests(ui): update tests to reflect new model parsing logic 2025-09-18 12:39:27 +10:00
psychedelicious
c32949d113 tidy(nodes): mark all UIType.*ModelField as deprecated 2025-09-18 12:39:27 +10:00
psychedelicious
87a98902da tidy(nodes): remove unused UIType.Video 2025-09-18 12:39:27 +10:00
psychedelicious
2857a446c9 docs(nodes): update docstrings for InputField 2025-09-18 12:39:27 +10:00
psychedelicious
035d9432bd feat(ui): support filtering on model format 2025-09-18 12:39:27 +10:00
psychedelicious
bdeb9fb1cf chore(ui): typegen 2025-09-18 12:39:27 +10:00
psychedelicious
dadff57061 feat(nodes): add ui_model_format filter for nodes 2025-09-18 12:39:27 +10:00
psychedelicious
480857ae4e fix(nodes): add base to SD1 model loader 2025-09-18 12:39:27 +10:00
psychedelicious
eaf0624004 feat(ui): remove explicit model type handling from workflow editor 2025-09-18 12:39:27 +10:00
psychedelicious
58bca1b9f4 feat(nodes): use new ui_model_[base|type|variant] on all core nodes 2025-09-18 12:39:27 +10:00
psychedelicious
54aa6908fa feat(ui): update invocation parsing to handle new ui_model_[base|type|variant] attrs 2025-09-18 12:39:27 +10:00
psychedelicious
e6d9daca96 chore(ui): typegen 2025-09-18 12:39:27 +10:00
psychedelicious
6e5a529cb7 feat(nodes): add ui_model_[base|type|variant] to InputField args for dynamic UI generation 2025-09-18 12:39:27 +10:00
Iq1pl
8c742a6e38 ruff format 2025-09-18 11:05:32 +10:00
Iq1pl
693373f1c1 Update ip_adapter.py
added support for NOOB-IPA-MARK1
2025-09-18 11:05:32 +10:00
Josh Corbett
4809080fd9 fix(ui): allow scrolling in ModelPane 2025-09-18 10:33:22 +10:00
psychedelicious
efcb1bea7f chore: bump version to v6.8.0rc1 2025-09-17 13:57:43 +10:00
psychedelicious
e0d7a401f3 feat(ui): make ref images croppable 2025-09-17 13:43:13 +10:00
psychedelicious
aac979e9a4 fix(ui): issue w/ setting initial aspect ratio in cropper 2025-09-17 13:43:13 +10:00
psychedelicious
3b0d7f076d tidy(ui): rename from "editor" to "cropper", minor cleanup 2025-09-17 13:43:13 +10:00
psychedelicious
e1acbcdbd5 fix(ui): store floats for box 2025-09-17 13:43:13 +10:00
psychedelicious
7d9b81550b feat(ui): revert to original image when crop discarded 2025-09-17 13:43:13 +10:00
psychedelicious
6a447dd1fe refactor(ui): remove "apply", "start" and "cancel" concepts from editor 2025-09-17 13:43:13 +10:00
psychedelicious
c2dc63ddbc fix(ui): video graphs 2025-09-17 13:43:13 +10:00
psychedelicious
1bc689d531 docs(ui): add comments to startingframeimage 2025-09-17 13:43:13 +10:00
psychedelicious
4829975827 feat(ui): make the editor components not care about the image 2025-09-17 13:43:13 +10:00
psychedelicious
49da4e00c3 feat(ui): add concept for editable image state 2025-09-17 13:43:13 +10:00
psychedelicious
89dfe5e729 docs(ui): add comments to editor 2025-09-17 13:43:13 +10:00
psychedelicious
6816d366df tidy(ui): editor misc 2025-09-17 13:43:13 +10:00
psychedelicious
9d3d2a36c9 tidy(ui): editor listeners 2025-09-17 13:43:13 +10:00
psychedelicious
ed231044c8 refactor(ui): simplify crop constraints 2025-09-17 13:43:13 +10:00
psychedelicious
b51a232794 feat(ui): extract config to own obj 2025-09-17 13:43:13 +10:00
psychedelicious
4412143a6e feat(ui): clean up editor 2025-09-17 13:43:13 +10:00
psychedelicious
de11cafdb3 refactor(ui): editor (wip) 2025-09-17 13:43:13 +10:00
psychedelicious
4d9114aa7d refactor(ui): editor (wip) 2025-09-17 13:43:13 +10:00
psychedelicious
67e2da1ebf refactor(ui): editor (wip) 2025-09-17 13:43:13 +10:00
psychedelicious
33ecc591c3 refactor(ui): editor init 2025-09-17 13:43:13 +10:00
psychedelicious
b57459a226 chore(ui): lint 2025-09-17 13:43:13 +10:00
psychedelicious
01282b1c90 feat(ui): do not clear crop when canceling 2025-09-17 13:43:13 +10:00
psychedelicious
3f302906dc feat(ui): crop doesn't hide outside cropped region 2025-09-17 13:43:13 +10:00
psychedelicious
81d56596fb tidy(ui): cleanup 2025-09-17 13:43:13 +10:00
psychedelicious
b536b0df0c feat(ui): misc iterate on editor 2025-09-17 13:43:13 +10:00
psychedelicious
692af1d93d feat(ui): type narrowing for editor output types 2025-09-17 13:43:13 +10:00
psychedelicious
bb7ef77b50 tidy(ui): lint/react conventions for editor component 2025-09-17 13:43:13 +10:00
psychedelicious
1862548573 feat(ui): image editor bg checkerboard pattern 2025-09-17 13:43:13 +10:00
psychedelicious
242c1b6350 feat(ui): tweak editor konva styles 2025-09-17 13:43:13 +10:00
psychedelicious
fc6e4bb04e tidy(ui): editor component cleanup 2025-09-17 13:43:13 +10:00
psychedelicious
20841abca6 tidy(ui): editor cleanup 2025-09-17 13:43:13 +10:00
psychedelicious
e8b69d99a4 chore(ui): lint 2025-09-17 13:43:13 +10:00
Mary Hipp
d6eaff8237 create editImageModal that takes an imageDTO, loads blob onto canvas, and allows cropping. cropped blob is uploaded as new asset 2025-09-17 13:43:13 +10:00
Mary Hipp
068b095956 show warning state with tooltip if starting frame image aspect ratio does not match the video output aspect ratio 2025-09-17 13:43:13 +10:00
psychedelicious
f795a47340 tidy(ui): remove unused translation string 2025-09-16 15:04:03 +10:00
psychedelicious
df47345eb0 feat(ui): add translation strings for prompt history 2025-09-16 15:04:03 +10:00
psychedelicious
def04095a4 feat(ui): tweak prompt history styling 2025-09-16 15:04:03 +10:00
psychedelicious
28be8f0911 refactor(ui): simplify prompt history shortcuts 2025-09-16 15:04:03 +10:00
Kent Keirsey
b50c44bac0 handle potential for invalid list item 2025-09-16 15:04:03 +10:00
Kent Keirsey
b4ce0e02fc lint 2025-09-16 15:04:03 +10:00
Kent Keirsey
d6442d9a34 Prompt history shortcuts 2025-09-16 15:04:03 +10:00
Josh Corbett
4528bcafaf feat(model manager): add ModelFooter component and reusable ModelDeleteButton 2025-09-16 12:29:57 +10:00
Josh Corbett
8b82b81ee2 fix(ModelImage): change MODEL_IMAGE_THUMBNAIL_SIZE to a local constant 2025-09-16 12:29:57 +10:00
Josh Corbett
757acdd49e feat(model manager): 💄 update model manager ui, initial commit 2025-09-16 12:29:57 +10:00
psychedelicious
94b7cc583a fix(ui): do not reset params state on studio init nav to generate tab 2025-09-16 12:25:25 +10:00
psychedelicious
b663a6bac4 chore: bump version to v6.7.0 2025-09-15 14:37:56 +10:00
psychedelicious
65d40153fb chore(ui): update whatsnew 2025-09-15 14:37:56 +10:00
Riccardo Giovanetti
c8b741a514 translationBot(ui): update translation (Italian)
Currently translated at 98.4% (2120 of 2153 strings)

translationBot(ui): update translation (Italian)

Currently translated at 97.3% (2097 of 2153 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-09-15 14:25:41 +10:00
psychedelicious
6d3aeffed9 fix(ui): dedupe prompt history 2025-09-15 14:22:44 +10:00
psychedelicious
203be96910 fix(ui): render popovers in portals to ensure they are on top of other ui elements 2025-09-15 14:19:54 +10:00
psychedelicious
b0aa48ddb8 feat(ui): simple prompt history 2025-09-12 10:19:48 -04:00
psychedelicious
867dbe51e5 fix(ui): extend lora weight schema to accept full range of weights
This could cause a failure to rehydrate LoRA state, or failure to recall
a LoRA.

Closes #8551
2025-09-12 11:50:10 +10:00
psychedelicious
ff8948b6f1 chore(ui): update whatsnew 2025-09-11 18:09:31 +10:00
psychedelicious
fa3a6425a6 tests(ui): update staging area test to reflect new behaviour 2025-09-11 18:09:31 +10:00
psychedelicious
c5992ece89 fix(ui): better logic in staging area when canceling the selected item 2025-09-11 18:09:31 +10:00
psychedelicious
12a6239929 chore: bump version to v6.7.0rc1 2025-09-11 18:09:31 +10:00
Riccardo Giovanetti
e9238c59f4 translationBot(ui): update translation (Italian)
Currently translated at 96.5% (2053 of 2127 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-09-11 17:42:41 +10:00
Linos
c1cbbe51d6 translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (2127 of 2127 strings)

Co-authored-by: Linos <linos.coding@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
2025-09-11 17:42:41 +10:00
Hosted Weblate
4219b4a288 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2025-09-11 17:42:41 +10:00
psychedelicious
48c8a9c09d chore(ui): lint 2025-09-11 17:25:57 +10:00
psychedelicious
a67efdf4ad perf(ui): optimize curves graph component
Do not use whole layer as trigger for histo recalc; use the canvas cache
of the layer - it more reliably indicates when the layer pixel data has
changed, and fixes an issue where we can miss the first histo calc due
to a race condition with async layer bbox calculation.
2025-09-11 17:25:57 +10:00
psychedelicious
d6ff9c2e49 tidy(ui): split curves graph into own component 2025-09-11 17:25:57 +10:00
psychedelicious
e768a3bc7b perf(ui): use narrow selectors in adjustments to reduce rerenders
dramatically improves the feel of the sliders
2025-09-11 17:25:57 +10:00
psychedelicious
7273700f61 fix(ui): sharpness range 2025-09-11 17:25:57 +10:00
psychedelicious
f909e81d91 feat(ui): better types & runtime guarantees for filter data stored in konva node attrs 2025-09-11 17:25:57 +10:00
psychedelicious
8c85f168f6 refactor(ui): make layer adjustments schemas/types composable 2025-09-11 17:25:57 +10:00
psychedelicious
263d86d46f fix(ui): points where x=255 sorted incorrectly 2025-09-11 17:25:57 +10:00
psychedelicious
0921805160 feat(ui): tweak adjustments panel styling 2025-09-11 17:25:57 +10:00
psychedelicious
517f4811e7 feat(ui): single action to reset adjustments 2025-09-11 17:25:57 +10:00
psychedelicious
0dc73c8803 tidy(ui): move some histogram drawing logic out of components and into callbacks 2025-09-11 17:25:57 +10:00
psychedelicious
26702b54c0 feat(ui): tweak layouts, use react conventions, disabled state 2025-09-11 17:25:57 +10:00
dunkeroni
2d65e4543f minor padding changes 2025-09-11 17:25:57 +10:00
dunkeroni
309113956b remove unknown type annotations 2025-09-11 17:25:57 +10:00
dunkeroni
0ac4099bc6 allow negative sharpness to soften 2025-09-11 17:25:57 +10:00
dunkeroni
899dc739fa defaultValue on adjusters 2025-09-11 17:25:57 +10:00
dunkeroni
4e2439fc8e remove extra edit comments 2025-09-11 17:25:57 +10:00
dunkeroni
00864c24e0 layout fixes 2025-09-11 17:25:57 +10:00
dunkeroni
b73aaa7d6f fix several points of curve editor jank 2025-09-11 17:25:57 +10:00
dunkeroni
85057ae704 splitup adjustment panel objects 2025-09-11 17:25:57 +10:00
dunkeroni
c3fb3a43a2 blue mode switch indicator 2025-09-11 17:25:57 +10:00
dunkeroni
51d0a15a1b use default factory on reset 2025-09-11 17:25:57 +10:00
dunkeroni
5991067fd9 simplify adjustments type to optional not null 2025-09-11 17:25:57 +10:00
dunkeroni
32c2d3f740 remove extra casts and types from filters.ts 2025-09-11 17:25:57 +10:00
dunkeroni
c661f86b34 fix: crop to bbox doubles adjustment filters 2025-09-11 17:25:57 +10:00
dunkeroni
cc72d8eab4 curves editor syntax and structure fixes 2025-09-11 17:25:57 +10:00
dunkeroni
e8550f9355 move constants in curves editor 2025-09-11 17:25:57 +10:00
dunkeroni
a1d0386ca4 move memoized slider to component 2025-09-11 17:25:57 +10:00
dunkeroni
495d089f85 clean up right click menu 2025-09-11 17:25:57 +10:00
dunkeroni
913b91e9dd remove redundant en.json colors 2025-09-11 17:25:57 +10:00
dunkeroni
3e907f4e14 remove extra title 2025-09-11 17:25:57 +10:00
dunkeroni
756df6ebe4 Finish button on adjustments 2025-09-11 17:25:57 +10:00
dunkeroni
2a6be99152 Fix tint not shifting green in negative direction 2025-09-11 17:25:57 +10:00
dunkeroni
3099e2bf9d fix disable toggle reverts to simple view 2025-09-11 17:25:57 +10:00
dunkeroni
6921f0412a log scale and panel width compatibility 2025-09-11 17:25:57 +10:00
dunkeroni
022d5a8863 curves editor 2025-09-11 17:25:57 +10:00
dunkeroni
af99beedc5 apply filters to operations 2025-09-11 17:25:57 +10:00
dunkeroni
f3d83dc6b7 visual adjustment filters 2025-09-11 17:25:57 +10:00
psychedelicious
ebc3f18a1a ai(ui): add CLAUDE.md to frontend 2025-09-11 13:26:39 +10:00
Mary Hipp
aeb512f8d9 ruff 2025-09-11 12:41:56 +10:00
Mary Hipp
a1810acb93 accidental commit 2025-09-11 12:41:56 +10:00
Mary Hipp
aa35a5083b remove completed_at from queue list so that created_at is only sort option, restore field values in UI 2025-09-11 12:41:56 +10:00
psychedelicious
4f17de0b32 fix(ui): ensure mask image is deleted when no more inputs to select object 2025-09-11 12:15:41 +10:00
psychedelicious
370c3cd59b feat(ui): update select object info tooltip 2025-09-11 12:15:41 +10:00
psychedelicious
67214e16c0 tidy(ui): organize select object components 2025-09-11 12:15:41 +10:00
psychedelicious
4880a1d946 feat(nodes): accept neg coords for bbox
This actually works fine for SAM.
2025-09-11 12:15:41 +10:00
psychedelicious
0f0988610f feat(ui): spruce up UI a bit 2025-09-11 12:15:41 +10:00
psychedelicious
6805d28b7a feat(ui): increase hit area for bbox anchors 2025-09-11 12:15:41 +10:00
psychedelicious
9b45a24136 fix(ui): respect selected point type 2025-09-11 12:15:41 +10:00
psychedelicious
4e9d66a64b tidy(ui): clean up CanvasSegmentAnythingModule 2025-09-11 12:15:41 +10:00
psychedelicious
8fec530b0f fix(ui): restore old tooltip for select object
need to add translation strings for new functionality
2025-09-11 12:15:41 +10:00
psychedelicious
50c66f8671 fix(ui): select obj box moving on mmb pan 2025-09-11 12:15:41 +10:00
psychedelicious
f0aa39ea81 fix(ui): prevent bbox from following cursor after middle mouse pan
Added button checks to bbox rect and transformer mousedown/touchstart handlers to only process left clicks. Also added stage dragging check in onBboxDragMove to clear bbox drag state when middle mouse panning is active.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-11 12:15:41 +10:00
psychedelicious
faac814a3d fix(ui): prevent middle mouse from creating points in segmentation module
When middle mouse button is used for canvas panning, the pointerup event was still creating points in the segmentation module. Added button check to onBboxDragEnd handler to only process left clicks.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-11 12:15:41 +10:00
psychedelicious
fb9545bb90 fix(ui): bbox no shrinkies 2025-09-11 12:15:41 +10:00
psychedelicious
8ad2ee83b6 fix(ui): prevent bbox scale accumulation in SAM module
Fixed an issue where bounding boxes could grow exponentially when created at small sizes. The problem occurred because Konva Transformer modifies scaleX/scaleY rather than width/height directly, and the scale values weren't consistently reset after being applied to dimensions.

Changes:
- Ensure scale values are always reset to 1 after applying to dimensions
- Add minimum size constraints to prevent zero/negative dimensions
- Fix scale handling in transformend, dragend, and initial bbox creation

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-11 12:15:41 +10:00
psychedelicious
f8ad62b5eb tidy(backend) cleanup sam pipelines 2025-09-11 12:15:41 +10:00
psychedelicious
03ae78bc7c tidy(nodes): clean up sam node 2025-09-11 12:15:41 +10:00
psychedelicious
ec1a058dbe fix(backend): issue w/ multiple bbox and sam1 2025-09-11 12:15:41 +10:00
psychedelicious
9e4d441e2e feat(ui): allow adding point inside bbox 2025-09-11 12:15:41 +10:00
psychedelicious
3770fd22f8 tidy(ui): ts issues 2025-09-11 12:15:41 +10:00
psychedelicious
a0232b0e63 feat(ui): combine points and bbox in visual mode for SAM
Revised the Select Object feature to support two input modes:
- Visual mode: Combined points and bounding box input for paired SAM inputs
- Prompt mode: Text-based object selection (unchanged)

Key changes:
- Replaced three input types (points, prompt, bbox) with two (visual, prompt)
- Visual mode supports both point and bbox inputs simultaneously
- Click to add include points, Shift+click for exclude points
- Click and drag to draw bounding box
- Fixed bbox visibility issues when adding points
- Fixed coordinate system issues for proper bbox positioning
- Added proper event handling and interaction controls

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-11 12:15:41 +10:00
psychedelicious
e1e964bf0e experiment(ui): support bboxes in select object 2025-09-11 12:15:41 +10:00
psychedelicious
1b1759cffc feat(ui): support prompt-based selection for object selection 2025-09-11 12:15:41 +10:00
psychedelicious
d828502bc8 refactor(backend): simplify segment anything APIs
There was a really confusing aspect of the SAM pipeline classes where
they accepted deeply nested lists of different dimensions (bbox, points,
and labels).

The lengths of the lists are related; each point must have a
corresponding label, and if bboxes are provided with points, they must
be same length.

I've refactored the backend API to take a single list of SAMInput
objects. This class has a bbox and/or a list of points, making it much
simpler to provide the right shape of inputs.

Internally, the pipeline classes rejigger these input classes into the
correct nesting.

The Nodes still have an awkward API where you can provide both bboxes
and points of different lengths, so I added a pydantic validator that
enforces correct lengths.
2025-09-11 12:15:41 +10:00
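A hedged sketch of the single-input shape this refactor describes: one object carrying an optional bbox and/or points, validated up front. Field names and the specific validation rule are illustrative.

```python
from pydantic import BaseModel, model_validator

class SAMPoint(BaseModel):
    x: int
    y: int
    label: int  # 1 = include, 0 = exclude

class SAMBoundingBox(BaseModel):
    x_min: int
    y_min: int
    x_max: int
    y_max: int

class SAMInput(BaseModel):
    bounding_box: SAMBoundingBox | None = None
    points: list[SAMPoint] | None = None

    @model_validator(mode="after")
    def check_not_empty(self) -> "SAMInput":
        # Each input must contribute something; the real validator also
        # enforces that points and labels stay paired.
        if self.bounding_box is None and not self.points:
            raise ValueError("a SAMInput needs a bounding box, points, or both")
        return self
```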
psychedelicious
7a073b6de7 feat(ui): hold shift to add inverse point type 2025-09-11 12:15:41 +10:00
psychedelicious
338ff8d588 chore: typegen 2025-09-11 12:15:41 +10:00
psychedelicious
a3625efd3a chore: ruff 2025-09-11 12:15:41 +10:00
Kent Keirsey
5efb37fe63 consolidate into one node. 2025-09-11 12:15:41 +10:00
Kent Keirsey
aef0b81d5b fix models 2025-09-11 12:15:41 +10:00
Kent Keirsey
544edff507 update uv.lock 2025-09-11 12:15:41 +10:00
Kent Keirsey
42b1adab22 init Sam2 2025-09-11 12:15:41 +10:00
Attila Cseh
a2b9d12e88 prettier errors fixed 2025-09-10 11:28:50 +10:00
Attila Cseh
7a94fb6c04 maths enabled on numeric input fields in workflow editor 2025-09-10 11:28:50 +10:00
psychedelicious
efcd159704 fix(app): path traversal via bulk downloads paths 2025-09-10 11:18:12 +10:00
psychedelicious
997e619a9d feat(ui): address feedback 2025-09-09 14:42:30 +10:00
Attila Cseh
4bc184ff16 LoRA number input min/max restored 2025-09-09 14:42:30 +10:00
psychedelicious
0b605a745b fix(ui): route metadata to gemini node 2025-09-09 14:31:07 +10:00
Attila Cseh
22b038ce3b unused translations removed 2025-09-08 20:41:36 +10:00
psychedelicious
0bb5d647b5 tidy(app): method naming snake case 2025-09-08 20:41:36 +10:00
psychedelicious
4a3599929b fix(ui): do not pass scroll seek props to DOM in queue list 2025-09-08 20:41:36 +10:00
psychedelicious
f959ce8323 feat(ui): reduce overscan for queue
makes it a bit less sluggish
2025-09-08 20:41:36 +10:00
Attila Cseh
74e1047870 build errors fixed 2025-09-08 20:41:36 +10:00
Attila Cseh
732881c51b createdAt column fixed 2025-09-08 20:41:36 +10:00
Attila Cseh
107be8e166 queueSlice cleaned up 2025-09-08 20:41:36 +10:00
Attila Cseh
3c2f654da8 queue api listQueueItems removed 2025-09-08 20:41:36 +10:00
Attila Cseh
474fd44e50 status column not sortable 2025-09-08 20:41:36 +10:00
Attila Cseh
0dc5f8fd65 getQueueItemIds cache invalidation added 2025-09-08 20:41:36 +10:00
Attila Cseh
d4215fb460 isOpen refactored 2025-09-08 20:41:36 +10:00
Attila Cseh
0cd05ee9fd ListContext reverted with queryArgs 2025-09-08 20:41:36 +10:00
Attila Cseh
9fcb3af1d8 ListContext removed 2025-09-08 20:41:36 +10:00
Attila Cseh
c9da7e2172 typegen fixed 2025-09-08 20:41:36 +10:00
Attila Cseh
9788735d6b code review fixes 2025-09-08 20:41:36 +10:00
Attila Cseh
d6139748e2 Update invokeai/frontend/web/src/features/queue/components/QueueList/QueueList.tsx
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2025-09-08 20:41:36 +10:00
Attila Cseh
602dfb1e5d Update invokeai/frontend/web/src/features/queue/components/QueueList/QueueList.tsx
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2025-09-08 20:41:36 +10:00
Attila Cseh
5bb3a78f56 Update invokeai/frontend/web/src/features/queue/components/QueueList/QueueItemComponent.tsx
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2025-09-08 20:41:36 +10:00
Attila Cseh
d58df1e17b schema re-generated 2025-09-08 20:41:36 +10:00
Attila Cseh
5d0e37eb2f lint errors fixed 2025-09-08 20:41:36 +10:00
Attila Cseh
486b333cef queue list virtualized 2025-09-08 20:41:36 +10:00
Attila Cseh
6fa437af03 get_queue_itemIds endpoint created 2025-09-08 20:41:36 +10:00
Attila Cseh
787ef6fa27 ColumnSortIcon refactored 2025-09-08 20:41:36 +10:00
Attila Cseh
7f0571c229 QueueListHeaderColumnProps.field turned into SortBy 2025-09-08 20:41:36 +10:00
Attila Cseh
f5a58c0ceb QueueListHeaderColumn created 2025-09-08 20:41:36 +10:00
psychedelicious
d16eef4e66 chore: bump version to v6.6.0 2025-09-08 14:01:02 +10:00
psychedelicious
681ff2b2b3 chore(ui): update whatsnew 2025-09-08 14:01:02 +10:00
psychedelicious
0d81b4ce98 tidy(ui): make names a bit clearer 2025-09-08 13:54:23 +10:00
psychedelicious
99f1667ced tidy(ui): remove unused dependency 2025-09-08 13:54:23 +10:00
psychedelicious
aa5597ab4d feat(ui): use resize observer directly in component 2025-09-08 13:54:23 +10:00
psychedelicious
9bbb8e8a5e feat(ui): simpler strategy to conditionally render slider brush width 2025-09-08 13:54:23 +10:00
psychedelicious
f284d282c1 feat(ui): color picker number input outline styling 2025-09-08 13:54:23 +10:00
Attila Cseh
4231488da6 number input height set 2025-09-08 13:54:23 +10:00
Attila Cseh
a014867e68 slider number input height set 2025-09-08 13:54:23 +10:00
Attila Cseh
22654fbc9c redundant translations removed 2025-09-08 13:54:23 +10:00
Attila Cseh
daa4fd751c ToolWidthPicker refactored 2025-09-08 13:54:23 +10:00
Attila Cseh
3fd265c333 slider for brush and eraser tool 2025-09-08 13:54:23 +10:00
psychedelicious
26a3a9130c Revert "build(ui): port clean translations script to js"
This reverts commit 8a00d855b4.
2025-09-08 11:20:55 +10:00
psychedelicious
3dfeaab4b2 Revert "build(ui): add package script to check and clean translations"
This reverts commit 9610f34dd4.
2025-09-08 11:20:55 +10:00
psychedelicious
a33707cc76 Revert "ci: add translation string check to frontend checks"
This reverts commit 98945a4560.
2025-09-08 11:20:55 +10:00
psychedelicious
21e13daf6e Revert "chore(ui): clean translations"
This reverts commit a0dceecab9.
2025-09-08 11:20:55 +10:00
psychedelicious
fa2614ee02 Revert "tidy(ui): remove python clean translations script"
This reverts commit 8a81c05caf.
2025-09-08 11:20:55 +10:00
Hosted Weblate
4be6ddb23d translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2025-09-05 12:28:33 +10:00
Riccardo Giovanetti
bba0e01926 translationBot(ui): update translation (Italian)
Currently translated at 98.6% (2093 of 2122 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-09-05 12:28:33 +10:00
psychedelicious
20d57d5ccf gh: update pr template 2025-09-05 11:27:02 +10:00
psychedelicious
d9121271a2 fix(ui): rehydration + redux migration design issue
Certain items in redux are ephemeral and omitted from persisted slices.
On rehydration, we need to inject these values back into the slice.

But there was an issue that could prevent slice migrations from running
during rehydration.

The migrations look for the `_version` key in state and migrate the
slice accordingly.

The logic that merged in the ephemeral values accidentally _also_ merged
in the `_version` key if it didn't already exist. This happened _before_
migrations were run.

This causes problems for slices that didn't have a `_version` key and
then have one added via migration.

For example, the params slice didn't have a `_version` key until the
previous commit, which added `_version` and changed some other parts of
state in a migration.

On first load of the updated code, we have a catch-22 kinda situation:
- The persisted params slice is the old version. It needs to have both
`_version` and some other data added to it.
- We deserialize the state and then merge in ephemeral values. This
inadvertently also merged in the `_version` key.
- We run the slice migration. It sees there is a `_version` key and
thinks it doesn't need to run. The extra data isn't added to the slice.
The slice is parsed against its zod schema and fails because the new
data is missing.
- Because the parse failed, we treat the user's persisted data as
invalid and overwrite it with initial state, potentially causing data
loss.

The fix is to be more selective when merging in the ephemeral state
before migration - this is now done by checking which keys are on the
persist denylist and only adding those keys.
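A minimal sketch of the fix, assuming redux-persist-style slices with a `_version` key and a per-slice denylist of ephemeral keys (names are illustrative):

```ts
type SliceState = Record<string, unknown> & { _version?: number };

// Only keys on the persist denylist are ephemeral and need to be re-injected
// after rehydration. Everything else - including `_version` - must come from
// the persisted data, so migrations can see the slice's real version.
const mergeEphemeralState = (
  persisted: SliceState,
  initialState: SliceState,
  persistDenylist: string[]
): SliceState => {
  const merged: SliceState = { ...persisted };
  for (const key of persistDenylist) {
    merged[key] = initialState[key];
  }
  return merged;
};
```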
2025-09-05 11:27:02 +10:00
psychedelicious
30b487c71c tidy(ui): remove unused x/y coords from params slice 2025-09-05 11:27:02 +10:00
psychedelicious
8a81c05caf tidy(ui): remove python clean translations script 2025-09-05 11:02:37 +10:00
psychedelicious
a0dceecab9 chore(ui): clean translations 2025-09-05 11:02:37 +10:00
psychedelicious
98945a4560 ci: add translation string check to frontend checks 2025-09-05 11:02:37 +10:00
psychedelicious
9610f34dd4 build(ui): add package script to check and clean translations 2025-09-05 11:02:37 +10:00
psychedelicious
8a00d855b4 build(ui): port clean translations script to js 2025-09-05 11:02:37 +10:00
psychedelicious
25430f04c5 chore: bump version to v6.6.0rc2 2025-09-04 16:43:41 +10:00
psychedelicious
b2b53c4481 fix(ui): set a react key on the current image viewer's components
This tells React that the component is a new instance each time we
change the image, which in turn prevents a flash of the
previously-selected image when switching images and when a progress
image resolves to the output image.
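The idea, sketched as a hypothetical component (the real viewer is more involved):

```tsx
import React from 'react';

// Keying the element by image name makes React mount a fresh <img> whenever the
// selected image changes, so the previously-selected image never flashes while
// the next one (or a progress image resolving to the output) loads.
export const CurrentImageViewer = ({ imageName, url }: { imageName: string; url: string }) => (
  <img key={imageName} src={url} alt={imageName} />
);
```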
2025-09-04 16:35:40 +10:00
psychedelicious
c6696d7913 fix(ui): ensure origin is set correctly for generate tab batches
This prevents an issue in the image viewer's logic for simulating the
progress image "resolving" to a completed image
2025-09-04 16:35:40 +10:00
psychedelicious
8bcb6648f1 fix(ui): stop dragging when user clicks mmb once
This has been an issue for a long time. I suspect it wasn't noticed
until now because it's finicky to trigger - you have to click and
release very quickly, without moving the mouse at all.
2025-09-04 16:16:04 +10:00
psychedelicious
0ee360ba6c fix(ui): show fallback when no image is selected 2025-09-04 16:13:01 +10:00
psychedelicious
09bbe3eef9 fix(ui): clear gallery selection when switching boards and there are no items in the new board 2025-09-04 16:13:01 +10:00
psychedelicious
d14b7a48f5 fix(ui): clear gallery selection when last image on selected board is deleted 2025-09-04 16:13:01 +10:00
Mary Hipp
1db55b0ffa cleanup 2025-09-03 10:11:32 -04:00
Mary Hipp
3104a1baa6 remove crossOrigin for thumbnail loading 2025-09-03 10:11:32 -04:00
psychedelicious
0e523ca2c1 fix(ui): browser image caching cors race condition
We must set crossOrigin whenever we load an image from a URL to prevent
race conditions where the browser caches an image with no CORS, then the
canvas attempts to load it with CORS, resulting in the browser rejecting
the request before it is made.
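A sketch of the loading pattern described above (the helper is hypothetical):

```ts
// Always request images with CORS enabled. If the browser first caches a
// no-CORS response for a URL and the canvas later requests the same URL with
// CORS, the request can be rejected based on the cached entry.
const loadImageWithCors = (url: string): Promise<HTMLImageElement> =>
  new Promise((resolve, reject) => {
    const img = new Image();
    img.crossOrigin = 'anonymous';
    img.onload = () => resolve(img);
    img.onerror = () => reject(new Error(`Failed to load image: ${url}`));
    img.src = url;
  });
```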
2025-09-03 10:11:32 -04:00
psychedelicious
75daef2aba fix(ui): fix situation where progress images are super tiny
Missed a spot
2025-09-03 22:56:55 +10:00
psychedelicious
b036b18986 chore: bump version to v6.6.0rc1 2025-09-03 18:02:37 +10:00
Hosted Weblate
93535fa3c2 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2025-09-03 17:57:27 +10:00
Riccardo Giovanetti
dcafb44f8a translationBot(ui): update translation (Italian)
Currently translated at 98.6% (2088 of 2117 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-09-03 17:57:27 +10:00
Mary Hipp
44b1d8d1fc remove video base models from image aspect/ratio logic 2025-09-03 10:22:14 +10:00
Attila Cseh
6f70a6bd10 prettier fix 2025-09-02 19:23:24 +10:00
Attila Cseh
0546aeed1d code review changes 2025-09-02 19:23:24 +10:00
Attila Cseh
8933f3f5dd LoRA weight default values turned into constant 2025-09-02 19:23:24 +10:00
Attila Cseh
29cdefe873 type conversion fixed 2025-09-02 19:23:24 +10:00
Attila Cseh
df299bb37f python source code reformatted 2025-09-02 19:23:24 +10:00
Attila Cseh
481fb42371 lint errors fixed 2025-09-02 19:23:24 +10:00
Attila Cseh
631a04b48c LoRA default weight 2025-09-02 19:23:24 +10:00
Attila Cseh
547e1941f4 code review changes 2025-09-02 19:16:26 +10:00
Attila Cseh
031d25ed63 switchable foreground/background colors 2025-09-02 19:16:26 +10:00
Riccardo Giovanetti
27f4af0eb4 translationBot(ui): update translation (Italian)
Currently translated at 98.6% (2087 of 2116 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-09-02 15:05:51 +10:00
psychedelicious
e0a0617093 chore(ui): bump dockview
This brings in a fix for Chrome that allowed you to drag tabs and split
the panels.

Closes #8449
2025-09-02 11:05:41 +10:00
psychedelicious
e6a763b887 fix(ui): move getItemsPerRow to frontend src dir
Not sure how but it was in repo root

Closes #8509
2025-09-02 11:02:56 +10:00
psychedelicious
3c9c49f7d9 feat(ui): add readiness checks for LoRAs
If incompatible LoRAs are added, prevent Invoking.

The logic to prevent adding incompatible LoRAs to graphs already
existed. This does not fix any generation bugs; just a visual
inconsistency where it looks like Invoke would use an incompatible LoRA.
2025-09-01 14:41:03 +10:00
Attila Cseh
26690d47b7 lint errors fixed 2025-09-01 14:34:35 +10:00
Attila Cseh
fcaff6ce09 remove LoRAs for recall use all 2025-09-01 14:34:35 +10:00
Damian
afd7296cb2 Add 'sd-2' to supported negative prompt base models
add back negative prompt support for sd2
2025-08-31 10:20:31 -04:00
psychedelicious
d6f42c76d5 fix(app): board count queries not getting categories as params 2025-08-29 11:07:52 +10:00
Mary Hipp
68f39fe907 cleanup 2025-08-28 16:38:48 -04:00
Mary Hipp
23a528545f match screen capture button to the others 2025-08-28 16:38:48 -04:00
Mary Hipp
c69d04a7f0 handle large videos 2025-08-28 15:29:47 -04:00
Mary Hipp
60f1e2d7ad do not show negative prompt for video 2025-08-28 12:59:23 -04:00
Mary Hipp
cb386bec28 do not show reference images on video tab 2025-08-28 12:59:23 -04:00
Mary Hipp
f29ceb3f12 add translations 2025-08-28 10:17:00 -04:00
Mary Hipp
4f51bc9421 add credit estimate for video generation 2025-08-28 10:17:00 -04:00
Mary Hipp
0c41abab79 add label for starting image field 2025-08-28 10:17:00 -04:00
Mary Hipp
cb457c3402 default resolution to 1080p 2025-08-28 10:17:00 -04:00
Mary Hipp
606ad73814 use first video model if none selected 2025-08-28 10:17:00 -04:00
psychedelicious
fe70bd538a fix(ui): hide unused queue actions menu item category 2025-08-28 10:17:00 -04:00
psychedelicious
b5c7316c0a chore(ui): lint 2025-08-28 10:17:00 -04:00
psychedelicious
460aec03ea fix(ui): more video translations 2025-08-28 10:17:00 -04:00
psychedelicious
6730d86a13 fix(ui): make ctx menu star label not refer to images 2025-08-28 10:17:00 -04:00
psychedelicious
c4bc03cb1f fix(ui): make ctx menu download tooltip not refer to images 2025-08-28 10:17:00 -04:00
psychedelicious
136ee28199 feat(ui): remove unimplemented context menu items for video 2025-08-28 10:17:00 -04:00
psychedelicious
2c6d266c0a fix(ui): metadata viewer translations 2025-08-28 10:17:00 -04:00
psychedelicious
f779920eaa chore(ui): lint 2025-08-28 10:17:00 -04:00
psychedelicious
01bef5d165 fix(ui): do not highlight starting frame image in red when it is not required 2025-08-28 10:17:00 -04:00
psychedelicious
72851d3e84 feat(ui): tweak video settings padding 2025-08-28 10:17:00 -04:00
psychedelicious
4ba85c62ca feat(ui): add border around starting frame image 2025-08-28 10:17:00 -04:00
psychedelicious
313aedb00a fix(ui): graph builder check for veo 2025-08-28 10:17:00 -04:00
psychedelicious
85bd324d74 tweak(ui): nav bar divider not so bright 2025-08-28 10:17:00 -04:00
psychedelicious
4a04411e74 fix(ui): tab hotkeys for video 2025-08-28 10:17:00 -04:00
psychedelicious
299a4db3bb chore(ui): lint 2025-08-28 10:17:00 -04:00
psychedelicious
390faa592c chore: ruff 2025-08-28 10:17:00 -04:00
Mary Hipp
2463aeb84a studio init action for video tab 2025-08-28 10:17:00 -04:00
Mary Hipp
ec8df163d1 launchpad cleanup 2025-08-28 10:17:00 -04:00
Mary Hipp
a198b7da78 fix view on large screens, restore auth for screen capture 2025-08-28 10:17:00 -04:00
Mary Hipp
fb11770852 rearrange image | video | asset for boards 2025-08-28 10:17:00 -04:00
Mary Hipp
6b6f3d56f7 add option for video upsell, rearrange navigation bar and gallery tabs 2025-08-28 10:17:00 -04:00
Mary Hipp
29d00eef9a hide video features if video is disabled 2025-08-28 10:17:00 -04:00
psychedelicious
6972cd708d feat(ui): delete confirmation for videos 2025-08-28 10:17:00 -04:00
psychedelicious
82893804ff feat(ui): metadata recall for videos 2025-08-28 10:17:00 -04:00
psychedelicious
47ffe365bc fix(ui): do not store whole model config in state 2025-08-28 10:17:00 -04:00
psychedelicious
f7b03b1e63 fix(ui): do not change canvas bbox on video model change 2025-08-28 10:17:00 -04:00
psychedelicious
356e38e82a feat(ui): use correct model config object in video graph builders 2025-08-28 10:17:00 -04:00
psychedelicious
5ea077bb8c feat(ui): add selector to get model config for current video model 2025-08-28 10:17:00 -04:00
psychedelicious
3c4b303555 feat(ui): simplify and consolidate video capture logic 2025-08-28 10:17:00 -04:00
psychedelicious
b8651cb1a2 fix(ui): rebase conflict 2025-08-28 10:17:00 -04:00
Mary Hipp
a6527c0ba1 lint again 2025-08-28 10:17:00 -04:00
Mary Hipp
6e40eca754 lint 2025-08-28 10:17:00 -04:00
Mary Hipp
53fab17c33 use context to track video ref so that toolbar can also save current frame 2025-08-28 10:17:00 -04:00
Mary Hipp
3876d88b3c add save frame functionality 2025-08-28 10:17:00 -04:00
Mary Hipp
82b4526691 add video_count and asset_count to boards UI 2025-08-28 10:17:00 -04:00
Mary Hipp
f56ba11394 add asset_count to BoardDTO and split it out from image count 2025-08-28 10:17:00 -04:00
Mary Hipp
32eb5190f2 add video_count to boardDTO 2025-08-28 10:17:00 -04:00
Mary Hipp
72e378789d video metadata support 2025-08-28 10:17:00 -04:00
Mary Hipp
f10ddb0cab split out video aspect/ratio into its own components 2025-08-28 10:17:00 -04:00
Mary Hipp
286127077d updates for new model type 2025-08-28 10:17:00 -04:00
Mary Hipp
36278bc044 add UI support for new model type Video 2025-08-28 10:17:00 -04:00
Mary Hipp
7a1c7ca43a add Video as new model type 2025-08-28 10:17:00 -04:00
psychedelicious
8303d567d5 docs(ui): add note about visual jank in gallery 2025-08-28 10:17:00 -04:00
psychedelicious
1fe19c1242 fix(ui): use correct placeholder for videos 2025-08-28 10:17:00 -04:00
psychedelicious
127a43865c fix(ui): locate in gallery, galleryview when selecting image/video 2025-08-28 10:17:00 -04:00
psychedelicious
24a48884cb chore(ui): lint 2025-08-28 10:17:00 -04:00
psychedelicious
47cee816fd chore(ui): dpdm 2025-08-28 10:17:00 -04:00
psychedelicious
90bacaddda feat(ui): video dnd 2025-08-28 10:17:00 -04:00
psychedelicious
c0cc9f421e fix(ui): generate tab graph builder 2025-08-28 10:17:00 -04:00
psychedelicious
dbb9032648 fix(ui): iterations works for video models 2025-08-28 10:17:00 -04:00
psychedelicious
b9e32e59a2 fix(ui): missing translation 2025-08-28 10:17:00 -04:00
psychedelicious
545a1d8737 fix(ui): fetching imageDTO for video 2025-08-28 10:17:00 -04:00
psychedelicious
c4718403a2 tidy(ui): remove unused VideoAtPosition component 2025-08-28 10:17:00 -04:00
psychedelicious
eb308b1ff7 feat(ui): simpler layout for video player 2025-08-28 10:17:00 -04:00
Mary Hipp
a277bea804 fix video styling 2025-08-28 10:17:00 -04:00
Mary Hipp
30619c0420 add runway back as a model and allow runway and veo3 to live together in peace and harmony 2025-08-28 10:17:00 -04:00
Mary Hipp
504d8e32be add runway to backend 2025-08-28 10:17:00 -04:00
Mary Hipp
f21229cd14 update redux selection to have a list of images and/or videos, update image viewer to show either image or video depending on what is selected 2025-08-28 10:17:00 -04:00
Mary Hipp
640ec676c3 lint 2025-08-28 10:17:00 -04:00
Mary Hipp
6370412e9c tsc 2025-08-28 10:17:00 -04:00
Mary Hipp
edec2c2775 lint the dang thing 2025-08-28 10:17:00 -04:00
psychedelicious
bd38be31d8 gallery 2025-08-28 10:17:00 -04:00
psychedelicious
b938ae0a7e Revert "feat(ui): consolidated gallery (wip)"
This reverts commit 12b70bca67.
2025-08-28 10:17:00 -04:00
Mary Hipp
6e5b1ed55f add videos to change board modal 2025-08-28 10:17:00 -04:00
Mary Hipp
5970bd38c2 add resolution as a generation setting 2025-08-28 10:17:00 -04:00
Mary Hipp
e046417cf5 replace runway with veo, build out veo3 model support 2025-08-28 10:17:00 -04:00
Mary Hipp
27a2cd19bd add Veo3 model support to backend 2025-08-28 10:17:00 -04:00
psychedelicious
0df631b802 feat(ui): consolidated gallery (wip) 2025-08-28 10:17:00 -04:00
psychedelicious
5bb7cd168d feat(ui): gallery optimistic updates for video 2025-08-28 10:17:00 -04:00
psychedelicious
b4ba84ad35 fix(ui): panel names on video tab 2025-08-28 10:17:00 -04:00
Mary Hipp
d1628f51c9 stubbing out change board functionality 2025-08-28 10:17:00 -04:00
Mary Hipp
17c1304ce2 hook up starring, unstarring, and deleting single videos (no multiselect yet), adapt context menus to work for both images and videos and start on video context menu 2025-08-28 10:17:00 -04:00
Mary Hipp
cc9a85f7d0 add readiness logic to video tab 2025-08-28 10:17:00 -04:00
psychedelicious
7e2999649a feat(ui): more video stuff 2025-08-28 10:17:00 -04:00
psychedelicious
1473142f73 feat(ui): fiddle w/ video stuff 2025-08-28 10:17:00 -04:00
psychedelicious
49343546e7 feat(ui): fiddle w/ video stuff 2025-08-28 10:17:00 -04:00
psychedelicious
39d5879405 chore: ruff 2025-08-28 10:17:00 -04:00
psychedelicious
4b4ec29a09 feat(nodes): update VideoField & VideoOutput 2025-08-28 10:17:00 -04:00
psychedelicious
dc6811076f feat(ui): add dnd target for video start frame 2025-08-28 10:17:00 -04:00
Mary Hipp
0568784ee9 add duration and aspect ratio to video settings 2025-08-28 10:17:00 -04:00
Mary Hipp
895eac6bcd integrating video into gallery - thinking maybe a new category of image would make more sense 2025-08-28 10:17:00 -04:00
Mary Hipp
fe0efa9bdf add noop video router 2025-08-28 10:17:00 -04:00
Mary Hipp
acabc8bd54 add video models 2025-08-28 10:17:00 -04:00
Mary Hipp
89f999af08 combine nodes that generate and save videos 2025-08-28 10:17:00 -04:00
Mary Hipp
9ae76bef51 build out adhoc video saving graph 2025-08-28 10:17:00 -04:00
Mary Hipp
0999b43616 push up updates for VideoField 2025-08-28 10:17:00 -04:00
Mary Hipp
e6e4f58163 update VideoField 2025-08-28 10:17:00 -04:00
Mary Hipp
b371930e02 split out RunwayVideoOutput from VideoOutput 2025-08-28 10:17:00 -04:00
Mary Hipp
9b50e2303b rough rough POC of video tab 2025-08-28 10:17:00 -04:00
Mary Hipp
49d1810991 video_output support 2025-08-28 10:17:00 -04:00
psychedelicious
b1b009f7b8 chore: bump version to v6.5.1 2025-08-28 22:57:14 +10:00
psychedelicious
3431e6385c chore: uv lock 2025-08-28 22:57:14 +10:00
psychedelicious
5db1027d32 Pin sentencepiece version in pyproject.toml
Pin sentencepiece version to 0.2.0 to avoid coredump issues.
2025-08-28 22:57:14 +10:00
Hosted Weblate
579f182fe9 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2025-08-28 22:51:40 +10:00
Riccardo Giovanetti
55bf41f63f translationBot(ui): update translation (Italian)
Currently translated at 98.6% (2053 of 2082 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-08-28 22:51:40 +10:00
psychedelicious
fc32fd2d2e fix(ui): progress image renders at physical size 2025-08-28 22:47:52 +10:00
psychedelicious
a2b6536078 fix(ui): konva caching opt-out doesn't do what i thought it would 2025-08-28 22:45:03 +10:00
Mary Hipp Rogers
144c54a6c8 Revert "video_output support"
This reverts commit 453ef1a220.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
ca40daeb97 Revert "rough rough POC of video tab"
This reverts commit e89266bfe3.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
e600cdc826 Revert "split out RunwayVideoOutput from VideoOutput"
This reverts commit 97719b0aab.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
b7c52f33dc Revert "update VideoField"
This reverts commit bd251f8cce.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
e78157fcf0 Revert "push up updates for VideoField"
This reverts commit 94ba840948.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
7d7b98249f Revert "build out adhoc video saving graph"
This reverts commit 07565d4015.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
f5bf84f304 Revert "combine nodes that generate and save videos"
This reverts commit eff9c7b92f.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
c30d5bece2 Revert "add video models"
This reverts commit 295b5a20a8.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
27845b2f1b Revert "add noop video router"
This reverts commit e9c4e12454.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
bad6eea077 Revert "integrating video into gallery - thinking maybe a new category of image would make more sense"
This reverts commit 5c93e53195.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
9c26ac5ce3 Revert "add duration and aspect ratio to video settings"
This reverts commit 4d8bcad15b.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
b7306bb5c9 Revert "feat(ui): add dnd target for video start frame"
This reverts commit 530d20c1be.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
0c115177b2 Revert "feat(nodes): update VideoField & VideoOutput"
This reverts commit 67de3f2d9b.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
5aae41b5bb Revert "chore: ruff"
This reverts commit 9380d8901c.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
7ad09a2f79 Revert "feat(ui): fiddle w/ video stuff"
This reverts commit f98bbc32dd.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
5a6d3639b7 Revert "feat(ui): fiddle w/ video stuff"
This reverts commit 79e8482b27.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
84617d3df2 Revert "feat(ui): more video stuff"
This reverts commit 963c2ec60c.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
e05f30749e Revert "add readiness logic to video tab"
This reverts commit 288ac0a293.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
88a2e27338 Revert "hook up starring, unstarring, and deleting single videos (no multiselect yet), adapt context menus to work for both images and videos and start on video context menu"
This reverts commit a918198d4f.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
15a6fd76c8 Revert "stubbing out change board functionality"
This reverts commit 67042e6dec.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
6adb46a86c Revert "fix(ui): panel names on video tab"
This reverts commit 64dfa125d2.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
e8a74eb79d Revert "feat(ui): gallery optimistic updates for video"
This reverts commit 0ec6d33086.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
dcd716c384 Revert "feat(ui): consolidated gallery (wip)"
This reverts commit 6ef1c2a5e1.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
56697635dd Revert "add Veo3 model support to backend"
This reverts commit 49d569ec59.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
5b5657e292 Revert "replace runway with veo, build out veo3 model support"
This reverts commit d95a698ebd.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
ad3dfbe1ed Revert "add resolution as a generation setting"
This reverts commit b71829a827.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
59ddc4f7b0 Revert "add videos to change board modal"
This reverts commit 45b4432833.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
4653b79f12 Revert "Revert "feat(ui): consolidated gallery (wip)""
This reverts commit 637d19c22b.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
778d6f167f Revert "gallery"
This reverts commit aa4e3adadb.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
05c71f50f1 Revert "lint the dang thing"
This reverts commit 1b0d599dc2.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
406e0be39c Revert "tsc"
This reverts commit 7828102b67.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
0d71234a12 Revert "lint"
This reverts commit b377b80446.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
e38019bb70 Revert "update redux selection to have a list of images and/or videos, update image viewer to show either image or video depending on what is selected"
This reverts commit 8df3067599.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
a879880b42 Revert "add runway to backend"
This reverts commit f631b5178f.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
71c8accbfe Revert "add runway back as a model and allow runway and veo3 to live together in peace and harmony"
This reverts commit b2026d9c00.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
154fb99daf Revert "fix video styling"
This reverts commit 3d9889e272.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
0df476ce13 Revert "feat(ui): simpler layout for video player"
This reverts commit 3a1cedbced.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
e7ad830fa9 Revert "tidy(ui): remove unused VideoAtPosition component"
This reverts commit e55d39a20b.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
e81e0a8286 Revert "fix(ui): fetching imageDTO for video"
This reverts commit fbf8aa17c8.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
d0f7e72cbb Revert "fix(ui): missing translation"
This reverts commit 89efe9c2b1.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
fdead4fb8c Revert "fix(ui): iterations works for video models"
This reverts commit 24f22d539f.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
31c9945b32 Revert "fix(ui): generate tab graph builder"
This reverts commit 84dc4e4ea9.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
22de8a4b12 Revert "feat(ui): video dnd"
This reverts commit f5fdba795a.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
89cb3c3230 Revert "chore(ui): dpdm"
This reverts commit 6a7fe6668b.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
7bb99ece4e Revert "chore(ui): lint"
This reverts commit 55139bb169.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
28f040123f Revert "fix(ui): locate in gallery, galleryview when selecting image/video"
This reverts commit 26fe937d97.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
1be3a4db64 Revert "fix(ui): use correct placeholder for videos"
This reverts commit 7e031e9c01.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
cb44c995d2 Revert "docs(ui): add note about visual jank in gallery"
This reverts commit 2d9c82da85.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
9b9b35c315 Revert "add Video as new model type"
This reverts commit fb0a924918.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
f6edab6032 Revert "add UI support for new model type Video"
This reverts commit c6f2d127ef.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
f79665b023 Revert "updates for new model type"
This reverts commit 23cde86bc4.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
6b1bc7a87d Revert "split out video aspect/ratio into its own components"
This reverts commit 6c375b228e.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
c6f2994c84 Revert "video metadata support"
This reverts commit b16d1a943d.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
0cff67ff23 Revert "add video_count to boardDTO"
This reverts commit 1cc6893d0d.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
e957c11c9a Revert "add asset_count to BoardDTO and split it out from image count"
This reverts commit d4378d9f2a.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
4baa685c7a Revert "add video_count and asset_count to boards UI"
This reverts commit e36490c2ec.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
1bd5907a12 Revert "add save frame functionality"
This reverts commit 6a20271dba.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
2fd56e6029 Revert "use context to track video ref so that toolbar can also save current frame"
This reverts commit 1bf25fadb3.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
b0548edc8c Revert "lint"
This reverts commit 378f33bc92.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
41d781176f Revert "lint again"
This reverts commit 41e1697e79.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
8709de0b33 Revert "fix(ui): rebase conflict"
This reverts commit bc6dd12083.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
af43fe2fd4 Revert "feat(ui): simplify and consolidate video capture logic"
This reverts commit c5a76806c1.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
ebbb11c3b1 Revert "feat(ui): add selector to get model config for current video model"
This reverts commit 5cabc37a87.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
0fc8c08da3 Revert "feat(ui): use correct model config object in video graph builders"
This reverts commit 9fcba3b876.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
bfadcffe3c Revert "fix(ui): do not change canvas bbox on video model change"
This reverts commit 8eb3f40e1b.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
49c2332c13 Revert "fix(ui): do not store whole model config in state"
This reverts commit b2ed3c99d4.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
dacef158c4 Revert "feat(ui): metadata recall for videos"
This reverts commit 4c32b2a123.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
0c34d8201e Revert "feat(ui): delete confirmation for videos"
This reverts commit 505c75a5ab.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
77132075ff Revert "hide video features if video is disabled"
This reverts commit 0de5097207.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
f008d3b0b2 Revert "add option for video upsell, rearrange navigation bar and gallery tabs"
This reverts commit 4845d31857.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
4e66ccefe8 Revert "rearrange image | video | asset for boards"
This reverts commit 8a60def51f.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
5d0ed45326 Revert "fix view on large screens, restore auth for screen capture"
This reverts commit 1f526a1c27.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
379d633ac6 Revert "launchpad cleanup"
This reverts commit ab41f71a36.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
93bba1b692 Revert "studio init action for video tab"
This reverts commit 431fd83a43.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
667e175ab7 Revert "chore: ruff"
This reverts commit 3ae99df091.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
de146aa4aa Revert "chore(ui): lint"
This reverts commit 36c16d2781.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
ed9c2c8208 Revert "fix(ui): tab hotkeys for video"
This reverts commit 20813b5615.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
9d984878f3 Revert "tweak(ui): nav bar divider not so bright"
This reverts commit 269d4fe670.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
585eb8c69d Revert "fix(ui): graph builder check for veo"
This reverts commit 239fb86a46.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
c105bae127 Revert "feat(ui): add border around starting frame image"
This reverts commit 8642e8881d.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
c39f26266f Revert "feat(ui): tweak video settings padding"
This reverts commit 842d729ec8.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
47dffd123a Revert "fix(ui): do not highlight starting frame image in red when it is not required"
This reverts commit 0b05b24e9a.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
b946ec3172 Revert "chore(ui): lint"
This reverts commit 8c2e6a3988.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
024c02329d Revert "fix(ui): metadata viewer translations"
This reverts commit 2a6cfde488.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
4b43b59472 Revert "feat(ui): remove unimplemented context menu items for video"
This reverts commit a6b0581939.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
d11f115e1a Revert "fix(ui): make ctx menu download tooltip not refer to images"
This reverts commit e4f24c4dc4.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
92253ce854 Revert "fix(ui): make ctx menu star label not refer to images"
This reverts commit ec793cb636.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
0ebbfa90c9 Revert "fix(ui): more video translations"
This reverts commit 0d827d8306.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
fdfee11e37 Revert "chore(ui): lint"
This reverts commit 3971382a6d.
2025-08-28 08:32:47 -04:00
Mary Hipp Rogers
6091bf4f60 Revert "fix(ui): hide unused queue actions menu item category"
This reverts commit 07271ca468.
2025-08-28 08:32:47 -04:00
psychedelicious
07271ca468 fix(ui): hide unused queue actions menu item category 2025-08-28 08:23:58 -04:00
psychedelicious
3971382a6d chore(ui): lint 2025-08-28 08:23:58 -04:00
psychedelicious
0d827d8306 fix(ui): more video translations 2025-08-28 08:23:58 -04:00
psychedelicious
ec793cb636 fix(ui): make ctx menu star label not refer to images 2025-08-28 08:23:58 -04:00
psychedelicious
e4f24c4dc4 fix(ui): make ctx menu download tooltip not refer to images 2025-08-28 08:23:58 -04:00
psychedelicious
a6b0581939 feat(ui): remove unimplemented context menu items for video 2025-08-28 08:23:58 -04:00
psychedelicious
2a6cfde488 fix(ui): metadata viewer translations 2025-08-28 08:23:58 -04:00
psychedelicious
8c2e6a3988 chore(ui): lint 2025-08-28 08:23:58 -04:00
psychedelicious
0b05b24e9a fix(ui): do not highlight starting frame image in red when it is not required 2025-08-28 08:23:58 -04:00
psychedelicious
842d729ec8 feat(ui): tweak video settings padding 2025-08-28 08:23:58 -04:00
psychedelicious
8642e8881d feat(ui): add border around starting frame image 2025-08-28 08:23:58 -04:00
psychedelicious
239fb86a46 fix(ui): graph builder check for veo 2025-08-28 08:23:58 -04:00
psychedelicious
269d4fe670 tweak(ui): nav bar divider not so bright 2025-08-28 08:23:58 -04:00
psychedelicious
20813b5615 fix(ui): tab hotkeys for video 2025-08-28 08:23:58 -04:00
psychedelicious
36c16d2781 chore(ui): lint 2025-08-28 08:23:58 -04:00
psychedelicious
3ae99df091 chore: ruff 2025-08-28 08:23:58 -04:00
Mary Hipp
431fd83a43 studio init action for video tab 2025-08-28 08:23:58 -04:00
Mary Hipp
ab41f71a36 launchpad cleanup 2025-08-28 08:23:58 -04:00
Mary Hipp
1f526a1c27 fix view on large screens, restore auth for screen capture 2025-08-28 08:23:58 -04:00
Mary Hipp
8a60def51f rearrange image | video | asset for boards 2025-08-28 08:23:58 -04:00
Mary Hipp
4845d31857 add option for video upsell, rearrange navigation bar and gallery tabs 2025-08-28 08:23:58 -04:00
Mary Hipp
0de5097207 hide video features if video is disabled 2025-08-28 08:23:58 -04:00
psychedelicious
505c75a5ab feat(ui): delete confirmation for videos 2025-08-28 08:23:58 -04:00
psychedelicious
4c32b2a123 feat(ui): metadata recall for videos 2025-08-28 08:23:58 -04:00
psychedelicious
b2ed3c99d4 fix(ui): do not store whole model config in state 2025-08-28 08:23:58 -04:00
psychedelicious
8eb3f40e1b fix(ui): do not change canvas bbox on video model change 2025-08-28 08:23:58 -04:00
psychedelicious
9fcba3b876 feat(ui): use correct model config object in video graph builders 2025-08-28 08:23:58 -04:00
psychedelicious
5cabc37a87 feat(ui): add selector to get model config for current video model 2025-08-28 08:23:58 -04:00
psychedelicious
c5a76806c1 feat(ui): simplify and consolidate video capture logic 2025-08-28 08:23:58 -04:00
psychedelicious
bc6dd12083 fix(ui): rebase conflict 2025-08-28 08:23:58 -04:00
Mary Hipp
41e1697e79 lint again 2025-08-28 08:23:58 -04:00
Mary Hipp
378f33bc92 lint 2025-08-28 08:23:58 -04:00
Mary Hipp
1bf25fadb3 use context to track video ref so that toolbar can also save current frame 2025-08-28 08:23:58 -04:00
Mary Hipp
6a20271dba add save frame functionality 2025-08-28 08:23:58 -04:00
Mary Hipp
e36490c2ec add video_count and asset_count to boards UI 2025-08-28 08:23:58 -04:00
Mary Hipp
d4378d9f2a add asset_count to BoardDTO and split it out from image count 2025-08-28 08:23:58 -04:00
Mary Hipp
1cc6893d0d add video_count to boardDTO 2025-08-28 08:23:58 -04:00
Mary Hipp
b16d1a943d video metadata support 2025-08-28 08:23:58 -04:00
Mary Hipp
6c375b228e split out video aspect/ratio into its own components 2025-08-28 08:23:58 -04:00
Mary Hipp
23cde86bc4 updates for new model type 2025-08-28 08:23:58 -04:00
Mary Hipp
c6f2d127ef add UI support for new model type Video 2025-08-28 08:23:58 -04:00
Mary Hipp
fb0a924918 add Video as new model type 2025-08-28 08:23:58 -04:00
psychedelicious
2d9c82da85 docs(ui): add note about visual jank in gallery 2025-08-28 08:23:58 -04:00
psychedelicious
7e031e9c01 fix(ui): use correct placeholder for videos 2025-08-28 08:23:58 -04:00
psychedelicious
26fe937d97 fix(ui): locate in gallery, galleryview when selecting image/video 2025-08-28 08:23:58 -04:00
psychedelicious
55139bb169 chore(ui): lint 2025-08-28 08:23:58 -04:00
psychedelicious
6a7fe6668b chore(ui): dpdm 2025-08-28 08:23:58 -04:00
psychedelicious
f5fdba795a feat(ui): video dnd 2025-08-28 08:23:58 -04:00
psychedelicious
84dc4e4ea9 fix(ui): generate tab graph builder 2025-08-28 08:23:58 -04:00
psychedelicious
24f22d539f fix(ui): iterations works for video models 2025-08-28 08:23:58 -04:00
psychedelicious
89efe9c2b1 fix(ui): missing translation 2025-08-28 08:23:58 -04:00
psychedelicious
fbf8aa17c8 fix(ui): fetching imageDTO for video 2025-08-28 08:23:58 -04:00
psychedelicious
e55d39a20b tidy(ui): remove unused VideoAtPosition component 2025-08-28 08:23:58 -04:00
psychedelicious
3a1cedbced feat(ui): simpler layout for video player 2025-08-28 08:23:58 -04:00
Mary Hipp
3d9889e272 fix video styling 2025-08-28 08:23:58 -04:00
Mary Hipp
b2026d9c00 add runway back as a model and allow runway and veo3 to live together in peace and harmony 2025-08-28 08:23:58 -04:00
Mary Hipp
f631b5178f add runway to backend 2025-08-28 08:23:58 -04:00
Mary Hipp
8df3067599 update redux selection to have a list of images and/or videos, update image viewer to show either image or video depending on what is selected 2025-08-28 08:23:58 -04:00
Mary Hipp
b377b80446 lint 2025-08-28 08:23:58 -04:00
Mary Hipp
7828102b67 tsc 2025-08-28 08:23:58 -04:00
Mary Hipp
1b0d599dc2 lint the dang thing 2025-08-28 08:23:58 -04:00
psychedelicious
aa4e3adadb gallery 2025-08-28 08:23:58 -04:00
psychedelicious
637d19c22b Revert "feat(ui): consolidated gallery (wip)"
This reverts commit 12b70bca67.
2025-08-28 08:23:58 -04:00
Mary Hipp
45b4432833 add videos to change board modal 2025-08-28 08:23:58 -04:00
Mary Hipp
b71829a827 add resolution as a generation setting 2025-08-28 08:23:58 -04:00
Mary Hipp
d95a698ebd replace runway with veo, build out veo3 model support 2025-08-28 08:23:58 -04:00
Mary Hipp
49d569ec59 add Veo3 model support to backend 2025-08-28 08:23:58 -04:00
psychedelicious
6ef1c2a5e1 feat(ui): consolidated gallery (wip) 2025-08-28 08:23:58 -04:00
psychedelicious
0ec6d33086 feat(ui): gallery optimistic updates for video 2025-08-28 08:23:58 -04:00
psychedelicious
64dfa125d2 fix(ui): panel names on video tab 2025-08-28 08:23:58 -04:00
Mary Hipp
67042e6dec stubbing out change board functionality 2025-08-28 08:23:58 -04:00
Mary Hipp
a918198d4f hook up starring, unstarring, and deleting single videos (no multiselect yet), adapt context menus to work for both images and videos and start on video context menu 2025-08-28 08:23:58 -04:00
Mary Hipp
288ac0a293 add readiness logic to video tab 2025-08-28 08:23:58 -04:00
psychedelicious
963c2ec60c feat(ui): more video stuff 2025-08-28 08:23:58 -04:00
psychedelicious
79e8482b27 feat(ui): fiddle w/ video stuff 2025-08-28 08:23:58 -04:00
psychedelicious
f98bbc32dd feat(ui): fiddle w/ video stuff 2025-08-28 08:23:58 -04:00
psychedelicious
9380d8901c chore: ruff 2025-08-28 08:23:58 -04:00
psychedelicious
67de3f2d9b feat(nodes): update VideoField & VideoOutput 2025-08-28 08:23:58 -04:00
psychedelicious
530d20c1be feat(ui): add dnd target for video start frame 2025-08-28 08:23:58 -04:00
Mary Hipp
4d8bcad15b add duration and aspect ratio to video settings 2025-08-28 08:23:58 -04:00
Mary Hipp
5c93e53195 integrating video into gallery - thinking maybe a new category of image would make more sense 2025-08-28 08:23:58 -04:00
Mary Hipp
e9c4e12454 add noop video router 2025-08-28 08:23:58 -04:00
Mary Hipp
295b5a20a8 add video models 2025-08-28 08:23:58 -04:00
Mary Hipp
eff9c7b92f combine nodes that generate and save videos 2025-08-28 08:23:58 -04:00
Mary Hipp
07565d4015 build out adhoc video saving graph 2025-08-28 08:23:58 -04:00
Mary Hipp
94ba840948 push up updates for VideoField 2025-08-28 08:23:58 -04:00
Mary Hipp
bd251f8cce update VideoField 2025-08-28 08:23:58 -04:00
Mary Hipp
97719b0aab split out RunwayVideoOutput from VideoOutput 2025-08-28 08:23:58 -04:00
Mary Hipp
e89266bfe3 rough rough POC of video tab 2025-08-28 08:23:58 -04:00
Mary Hipp
453ef1a220 video_output support 2025-08-28 08:23:58 -04:00
psychedelicious
faf8f0f291 chore: bump version to v6.5.0 2025-08-28 13:32:37 +10:00
psychedelicious
5d36499982 chore: update whatsnew 2025-08-28 13:32:37 +10:00
Linos
151d67a0cc translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (2082 of 2082 strings)

Co-authored-by: Linos <linos.coding@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
2025-08-28 13:02:16 +10:00
Riccardo Giovanetti
72431ff197 translationBot(ui): update translation (Italian)
Currently translated at 98.6% (2053 of 2082 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-08-28 13:02:16 +10:00
psychedelicious
0de1feed76 chore(ui): lint 2025-08-28 12:59:35 +10:00
psychedelicious
7ffb626dbe feat(ui): add image load errors to logging 2025-08-28 12:59:35 +10:00
psychedelicious
79753289b1 feat(ui): log image failed to load errors at error level 2025-08-28 12:59:35 +10:00
psychedelicious
bac4c05fd9 feat(ui): log "destroying module" at debug level 2025-08-28 12:59:35 +10:00
psychedelicious
8a3b5d2c6f fix(ui): do not cache canvas entities when they have no w/h 2025-08-28 12:59:35 +10:00
psychedelicious
309578c19a fix(ui): progress image gets stuck on viewer when generating on canvas 2025-08-28 12:55:36 +10:00
Mary Hipp
fd58e1d0f2 update copy for API models without w/h controls 2025-08-27 09:24:22 -04:00
psychedelicious
04ffb979ce fix(ui): deny the pull of the square 2025-08-27 08:56:15 -04:00
psychedelicious
35c00d5a83 chore(ui): lint 2025-08-27 08:56:15 -04:00
psychedelicious
c2b49d58f5 fix(ui): gemini 2.5 unsupported gen mode error message 2025-08-27 08:56:15 -04:00
psychedelicious
6ff6b40a35 feat(ui): support unknown output image dimensions on canvas
Gemini 2.5 Flash makes no guarantees about output image sizes. Our
existing logic always rendered staged images on Canvas at the bbox dims
- not the image's physical dimensions. When Gemini returns an image that
doesn't match the bbox, it would get squished.

To rectify this, the canvas staging area renderer is updated to render
its images using their physical dimensions, as opposed to their
configured dimensions (i.e. bbox).

A flag on CanvasObjectImage enables this rendering behaviour.

Then, when saving the image as a layer from staging area, we use the
physical dimensions.

When the bbox and physical dimensions do not match, the bbox is not
touched, so it won't exactly encompass the staged image. No point in
resizing the bbox if the dimensions don't match - the next image could
be a different size, and the sizes might not be valid (it's an external
resource, after all).
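A rough sketch of the rendering choice described above, with illustrative names (not the actual staging-area renderer):

```ts
import Konva from 'konva';

type StagedImage = { element: HTMLImageElement; bboxWidth: number; bboxHeight: number };

// When the flag is enabled, render the staged image at its physical size instead
// of stretching it to the bbox, so outputs of unknown dimensions are not squished.
const renderStagedImage = (layer: Konva.Layer, staged: StagedImage, usePhysicalDimensions: boolean) => {
  const width = usePhysicalDimensions ? staged.element.naturalWidth : staged.bboxWidth;
  const height = usePhysicalDimensions ? staged.element.naturalHeight : staged.bboxHeight;
  layer.add(new Konva.Image({ image: staged.element, width, height }));
};
```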
2025-08-27 08:56:15 -04:00
psychedelicious
1f1beda567 fix(ui): remove gemini aspect ratio checking in graph builder 2025-08-27 08:56:15 -04:00
psychedelicious
91d62eb242 fix(ui): update ref image type when switching to gemini 2025-08-27 08:56:15 -04:00
psychedelicious
013e02d08b feat(ui): show w/h, scaled bbox settings only when relevant 2025-08-27 08:56:15 -04:00
psychedelicious
115053972c feat(ui): handle api model determination in a clearer way w/ list of base models; use it in dimensions component 2025-08-27 08:56:15 -04:00
psychedelicious
bcab754ac2 docs(ui): add note about reactflow types 2025-08-27 08:56:15 -04:00
psychedelicious
f1a542aca2 docs(ui): add note about extraneous coordinates in paramsSlice 2025-08-27 08:56:15 -04:00
psychedelicious
0701cc63a1 feat(ui): hide width/height sliders for api models
These models only support aspect ratio inputs; not pixel dimensions
2025-08-27 08:56:15 -04:00
psychedelicious
9337710b45 chore(ui): lint 2025-08-27 08:56:15 -04:00
psychedelicious
592ef5a9ee feat(ui): improved support model handling when switching models
- Disable LoRAs instead of deleting them when base model changes
- Update toast message to indicate that we may have _updated_ a model
(prev just said cleared or disabled)
- Do not change ref image models if the new base model doesn't support
them. For example, changing from SDXL to Imagen does not update the ref
image model or alert the user, because Imagen does not support ref
images. Switching from Imagen to FLUX does update the ref image model
and alert the user. Just a bit less noisy.
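A sketch of the "disable instead of delete" behaviour for LoRAs (illustrative types, not the actual slice):

```ts
type LoRAConfig = { modelKey: string; base: string; isEnabled: boolean };

// On base model change, keep incompatible LoRAs but disable them instead of
// deleting them, so the selection survives switching back to a compatible base.
const onBaseModelChanged = (loras: LoRAConfig[], newBase: string): LoRAConfig[] =>
  loras.map((lora) => (lora.base === newBase ? lora : { ...lora, isEnabled: false }));
```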
2025-08-27 08:56:15 -04:00
psychedelicious
5fe39a3ae9 fix(ui): add gemini 2.5 to ref image supporting models 2025-08-27 08:56:15 -04:00
psychedelicious
1888c586ca feat(ui): do not prevent invoking when ref images are added but model does not support ref images 2025-08-27 08:56:15 -04:00
psychedelicious
88922a467e feat(ui): hide ref images UI when selected models does not support ref images 2025-08-27 08:56:15 -04:00
psychedelicious
84115e598c fix(ui): lock height slider when using api model 2025-08-27 08:56:15 -04:00
Mary Hipp
370fc67777 UI support for gemini 2.5 API model 2025-08-27 08:56:15 -04:00
Mary Hipp
fa810e1d02 add gemini 2.5 to base model 2025-08-27 08:56:15 -04:00
Attila Cseh
ec5043aa83 useNodeFieldElementExists turned private 2025-08-26 11:39:16 +10:00
Attila Cseh
9a2a0cef74 node field dnd logic updated to prevent duplicates 2025-08-26 11:39:16 +10:00
Attila Cseh
c205c1d19e current board removed from options 2025-08-26 11:33:39 +10:00
Attila Cseh
ae1a815453 change board - sorting order of boards alphabetical 2025-08-26 11:33:39 +10:00
psychedelicious
687bc281e5 chore: prep for v6.5.0rc1 (#8479)
## Summary

Bump version

## Related Issues / Discussions

n/a

## QA Instructions

n/a

## Merge Plan

This is already released.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-08-26 11:25:01 +10:00
psychedelicious
567316d753 chore: bump version to v6.5.0rc1 2025-08-25 18:10:18 +10:00
psychedelicious
53ac7c9d2c feat(ui): bbox aspect ratio lock is always inverted by shift 2025-08-25 17:59:20 +10:00
Riccardo Giovanetti
90be2a0cdf translationBot(ui): update translation (Italian)
Currently translated at 98.6% (2050 of 2079 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-08-25 17:57:54 +10:00
Attila Cseh
c7fb8f69ae code review fixes 2025-08-25 17:53:59 +10:00
Attila Cseh
7fecb8e88b formatting fixed 2025-08-25 17:53:59 +10:00
Attila Cseh
ee6a2a6603 respect direction of selection in Gallery 2025-08-25 17:53:59 +10:00
Attila Cseh
2496ac19c4 remove input field from form 2025-08-25 16:33:09 +10:00
psychedelicious
e34ed199c9 feat(ui): respect aspect ratio when resizing bbox on canvas 2025-08-25 15:30:01 +10:00
psychedelicious
569533ef80 fix(ui): toggle bbox visibility translation 2025-08-25 14:51:34 +10:00
psychedelicious
dfac73f9f0 fix(ui): disable color picker while middle-mouse panning canvas 2025-08-25 14:47:42 +10:00
psychedelicious
f4219d5db3 chore: uv lock 2025-08-23 14:17:56 +10:00
psychedelicious
04d1958e93 feat(app): vendor in invisible-watermark
Fixes errors like `AttributeError: module 'cv2.ximgproc' has no
attribute 'thinning'` which occur because there is a conflict between
our own `opencv-contrib-python` dependency and the `invisible-watermark`
library's `opencv-python`.
2025-08-23 14:17:56 +10:00
psychedelicious
47d7d93e78 fix(ui): float input precision
Determine the "base" step for floats. If no `multipleOf` is provided,
the "base" step is `undefined`, meaning the float can have any number of
decimal places.

The UI library applies its own step constraints though and rounds to 3
decimal places. Probably need to update the logic in the UI library to
have truly arbitrary precision for float fields.
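A small sketch of the described step handling (types and helpers are hypothetical):

```ts
type FloatFieldConfig = { multipleOf?: number };

// The "base" step: use `multipleOf` if the schema provides it, otherwise
// undefined, meaning the value should not be rounded at all.
const getFloatBaseStep = (config: FloatFieldConfig): number | undefined => config.multipleOf;

// Only constrain the value when a base step exists.
const constrainFloat = (value: number, config: FloatFieldConfig): number => {
  const step = getFloatBaseStep(config);
  return step === undefined ? value : Math.round(value / step) * step;
};
```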
2025-08-22 13:35:59 +10:00
psychedelicious
0e17950949 fix(ui): race condition when setting hf token and downloading model
I ran into a race condition where I set an HF token and it was valid,
but somehow this error toast still appeared. The conditional fell
through to an assertion that we never expected to get to, which crashed
the UI.

Handled the unexpected case gracefully now.
2025-08-22 13:30:38 +10:00
psychedelicious
b0cfdc94b5 feat(ui): do not sample alpha in Canvas color picker
Closes #7897
2025-08-21 21:38:03 +10:00
psychedelicious
bb153b55d3 docs: update quick start 2025-08-21 21:26:09 +10:00
psychedelicious
93ef637d59 docs: update latest release links 2025-08-21 21:26:09 +10:00
Attila Cseh
c5689ca1a7 code review changes 2025-08-21 19:42:38 +10:00
Attila Cseh
008e421ad4 shuffle button on workflows 2025-08-21 19:42:38 +10:00
psychedelicious
28a77ab06c Revert "experiment: add non-lfs-tracked file to lfs-tracked dir"
This reverts commit 4f4b7ddfb0.
2025-08-21 15:49:20 +10:00
psychedelicious
be48d3c12d ci: give workflow perms to label/comment on pr 2025-08-21 15:49:20 +10:00
psychedelicious
518b21a49a experiment: add non-lfs-tracked file to lfs-tracked dir 2025-08-21 15:49:20 +10:00
psychedelicious
68825ca9eb ci: add workflow to catch incorrect usage of git-lfs 2025-08-21 15:49:20 +10:00
psychedelicious
73c5f0b479 chore: bump version to v6.4.0 2025-08-19 12:19:02 +10:00
psychedelicious
7b4e04cd7c git: move test LoRA to LFS 2025-08-19 11:56:59 +10:00
Linos
ae4368fabe translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (2073 of 2073 strings)

Co-authored-by: Linos <linos.coding@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
2025-08-19 10:28:35 +10:00
psychedelicious
df8e39a9e1 chore: bump version to v6.4.0rc2 2025-08-19 00:01:48 +10:00
psychedelicious
45b43de571 fix(ui): prevent node drag when editing title
Closes #8435
2025-08-18 23:20:28 +10:00
psychedelicious
6d18a72a05 fix(ui): fit to bbox when bbox is not aligned to 64px grid 2025-08-18 23:17:45 +10:00
Kent Keirsey
af58a75e97 Support PEFT Loras with Base_Model.model prefix (#8433)
* Support PEFT Loras with Base_Model.model prefix

* update tests

* ruff

* fix python complaints

* update kes

* format keys

* remove unneeded test
2025-08-18 09:14:46 -04:00
psychedelicious
fd4c3bd27a refactor: estimate working vae memory during encode/decode
- Move the estimation logic to utility functions
- Estimate memory _within_ the encode and decode methods, ensuring we
_always_ estimate working memory when running a VAE
2025-08-18 21:43:14 +10:00
psychedelicious
1f8a60ded2 fix(ui): export NumericalParameterConfig type 2025-08-18 21:38:17 +10:00
psychedelicious
b1b677997d chore: bump version to v6.4.0rc1 2025-08-18 21:34:09 +10:00
psychedelicious
f17b43d736 chore(ui): update whatsnew 2025-08-18 21:34:09 +10:00
psychedelicious
c009a50489 feat(ui): reduce storage persist debounce to 300ms
matches pre-server-backed-state-persistence value
2025-08-18 21:34:09 +10:00
psychedelicious
97a16c455c fix(ui): update board totals when generation completes 2025-08-18 21:34:09 +10:00
psychedelicious
a8a07598c8 chore: ruff 2025-08-18 21:14:00 +10:00
psychedelicious
23206e22e8 tests: skip excessively flaky MPS-specific tests in CI 2025-08-18 21:14:00 +10:00
psychedelicious
f4aba52b90 feat(ui): use flushSync for locateInGallery to ensure panel api calls finish before selecting image 2025-08-18 19:55:06 +10:00
psychedelicious
d17c273939 feat(ui): add locate in gallery button to current image buttons toolbar 2025-08-18 19:55:06 +10:00
psychedelicious
aeb5e7d50a feat(ui): hide locate in gallery from context when unable to actually locate
e.g. when on a tab that doesn't have a gallery, or the image is
intermediate
2025-08-18 19:55:06 +10:00
psychedelicious
580ad30832 feat(ui): use bold icon for locate in gallery 2025-08-18 19:55:06 +10:00
psychedelicious
6390f7d734 fix(ui): more reliable scrollIntoView/"Locate in Gallery"
Three changes needed to make scrollIntoView and "Locate in Gallery" work
reliably.

1. Use setTimeout to work around race condition with scrollIntoView in
gallery.

It was possible to call scrollIntoView before react-virtuoso was ready.
I think react-virtuoso was initialized but hadn't rendered/measured its
items yet, so when we scroll to e.g. index 742, the items have a zero
height, so it doesn't actually scroll down. Then the items render.

Setting a timeout here defers the scroll until after the next event loop
cycle, by which time we expect react-virtuoso to be ready.

2. Ensure the scrollIntoView effect in gallery triggers any time the
selection is touched by making its dependency the array of selected
images, not just the last selected image name.

The "locate in gallery" functionality works by selecting an image.
There's a reactive effect in the gallery that runs when the last
selected image changes and scrolls it into view.

But if you already have an image selected, selecting it again will not
change the image name bc it is a string primitive. The useEffect ignores
the selection.

So, if you clicked "locate in gallery" on an image that was already
selected, it wouldn't be scrolled into view - even if you had already
scrolled away from it.

To work around this, the effect now uses the whole selection array as
its dependency. Whenever the selection changes, we get a new array,
which triggers the effect.

3. Gallery slice had some checks to avoid creating a new array of
selected image names in state when the selected images didn't change.

For example, if image "abc" was selected, and we selected "abc" again,
instead of creating a new array with the same "abc" image, we bailed
early. IIRC this optimization addressed a rerender issue long ago.

This optimization needs to be removed in order for fix #2 above to work.
We now _want_ a new array whenever selection is set - even if it didn't
actually change.
2025-08-18 19:55:06 +10:00
psychedelicious
5ddbfefb6a feat(ui): add trace logging to scrollIntoView 2025-08-18 19:55:06 +10:00
psychedelicious
bbf5ed7956 fix(ui): use is_intermediate to determine if image is gallery image 2025-08-18 19:55:06 +10:00
Attila Cseh
19cd6eed08 locate in gallery image context menu 2025-08-18 19:55:06 +10:00
Attila Cseh
9c1eb263a8 new entity added above the currently selected one 2025-08-18 18:46:40 +10:00
Attila Cseh
75755189a7 prettier fixes 2025-08-18 18:46:40 +10:00
Attila Cseh
a9ab72d27d new layers created on the top of the existing layers 2025-08-18 18:46:40 +10:00
Attila Cseh
678eb34995 duplicate layer appear above original one 2025-08-18 18:46:40 +10:00
Attila Cseh
ef7050f560 merged layers order retained 2025-08-18 18:46:40 +10:00
Attila Cseh
9787d9de74 prettier fix 2025-08-18 18:30:08 +10:00
Attila Cseh
bb4a50bab2 confirmation before downloading starter bundle 2025-08-18 18:30:08 +10:00
Attila Cseh
f3554b4e1b prettier fixed 2025-08-14 21:10:21 +10:00
Attila Cseh
9dcb025241 build error fixed 2025-08-14 21:10:21 +10:00
Attila Cseh
ecf646066a CLIP skip value clamped 2025-08-14 21:10:21 +10:00
Attila Cseh
3fd10b68cd recall CLIP skip 2025-08-14 21:10:21 +10:00
Attila Cseh
6e32c7993c CLIP Skip zod schema created 2025-08-14 21:10:21 +10:00
Riccardo Giovanetti
8329533848 translationBot(ui): update translation (Italian)
Currently translated at 98.5% (2041 of 2071 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.6% (2039 of 2067 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-08-14 12:14:27 +10:00
psychedelicious
fc7157b029 fix(ui): do not add pos style prompt to metadata 2025-08-14 10:56:24 +10:00
psychedelicious
a1897f7490 chore(ui): lint 2025-08-14 10:56:24 +10:00
psychedelicious
a89b3efd14 feat(ui): remove SDXL style prompt from linear UI
This feature added a lot of unexpected complexity in graph building /
metadata recall and is unintuitive user experience. 99% of the time, the
style prompt should be exactly the main prompt.

You can still use style prompts in workflows, but in an effort to reduce
complexity in the linear UI, we are removing this rarely-used feature.
2025-08-14 10:56:24 +10:00
jiangmencity
5259693ed1 chore: fix some comments
Signed-off-by: jiangmencity <jiangmen@52it.net>
2025-08-14 09:32:54 +10:00
Tikal
d77c24206d Update NODES.md 2025-08-14 09:18:47 +10:00
psychedelicious
c5069557f3 fix(mm): fail when model exists at path instead of finding unused new path
When installing a model, the previous, graceful logic would increment a
suffix on the destination path until it found a free path for the model.

But because model file installation and record creation are not in a
transaction, we could end up moving the file successfully and fail to
create the record:
- User attempts to install an already-installed model
- Attempt to move the downloaded model from download tempdir to
destination path
- The path already exists
- Add `_1` or similar to the path until we find a path that is free
- Move the model
- Create the model record
- FK constraint violation bc we already have a model w/ that name, but
the model file has already been moved into the invokeai dir.

Closes #8416
2025-08-13 10:40:06 +10:00
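A rough sketch of the fail-fast behaviour this commit describes - refuse to move the file if the destination already exists, rather than inventing a `_1` suffix (paths, names, and the exception type are illustrative, not the actual installer code):

```
# Illustrative sketch only - not the actual InvokeAI installer code.
import shutil
from pathlib import Path

class ModelAlreadyInstalledError(RuntimeError):
    pass

def move_model_into_place(src: Path, dest: Path) -> Path:
    # Fail fast instead of silently picking "<name>_1" so the file move and the
    # DB record stay consistent: if the record would collide, the file is never moved.
    if dest.exists():
        raise ModelAlreadyInstalledError(f"A model already exists at {dest}")
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(src, dest)
    return dest
```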
psychedelicious
9b220f61bd translations(ui): add translation for gallery settings 2025-08-12 23:34:24 +10:00
psychedelicious
7fc3af12cc translations(ui): add translation for select your model in launchpad 2025-08-12 23:34:24 +10:00
psychedelicious
e2721b46b6 translations(ui): add translations for add/remove negative prompt 2025-08-12 23:34:24 +10:00
psychedelicious
17118a04bd feat(ui): dynamic dockview tab title translations
Requires a ui slice migration and reset of users' layout settings to
get the right titles into dockview params state, which is persisted.
2025-08-12 23:34:24 +10:00
psychedelicious
24788e3c83 fix(ui): input field error styling specificity 2025-08-12 23:30:34 +10:00
psychedelicious
056387c981 feat(ui): allow recall of prompt and seed on upscaling tab 2025-08-12 16:21:51 +10:00
psychedelicious
8a43d90273 fix(ui): positive prompt in upscale metadata 2025-08-12 16:21:51 +10:00
psychedelicious
4f9b9760db feat(ui): debounce persistence instead of throttle 2025-08-12 16:16:11 +10:00
psychedelicious
fdaddafa56 fix(mm): only add suffix to model paths when path is file 2025-08-12 15:31:43 +10:00
psychedelicious
23d59abbd7 chore: ruff 2025-08-12 10:51:05 +10:00
psychedelicious
cf7fa5bce8 perf(backend): clear torch cache after encoding each image in kontext extension
Slightly reduces VRAM allocations.
2025-08-12 10:51:05 +10:00
psychedelicious
39e41998bb feat(ui): use latent-space kontext ref image concat in flux graph
Prevents a large spike in VRAM when preparing to denoise w/ multiple ref
images.

There doesn't appear to be any difference in image quality / ref
adherence when concatenating in latent space vs image space, though
images _are_ different.
2025-08-12 10:51:05 +10:00
psychedelicious
c6eff71b74 fix(backend): bug in kontext canvas dimension tracking when concatenating in latent space
We weren't tracking the canvas dimensions properly, which could result in
FLUX not "seeing" ref images after the first one very well
2025-08-12 10:51:05 +10:00
psychedelicious
6ea4c47757 chore: ruff 2025-08-12 10:51:05 +10:00
psychedelicious
91f91aa835 feat(mm): prepare kontext latents before loading transformer
If the transformer fills up VRAM, then when we VAE encode kontext
latents, we'll need to first offload the transformer (partially, if
partial loading is enabled).

No need to do this - we can encode kontext latents before loading the
transformer to reduce model thrashing.
2025-08-12 10:51:05 +10:00
psychedelicious
ea7868d076 Revert "experiment(mm): investigate vae working memory calculations"
This reverts commit bc9ed57d5cd134dc7c9117395e91d22a3c4aa6de.
2025-08-12 10:51:05 +10:00
psychedelicious
7d86f00d82 feat(mm): implement working memory estimation for VAE encode for all models
Tell the model manager that we need some extra working memory for VAE
encoding operations to prevent OOMs.

See previous commit for investigation and determination of the magic
numbers used.

This safety measure is especially relevant now that we have FLUX Kontext
and may be encoding rather large ref images. Without the working memory
estimation we can OOM as we prepare for denoising.

See #8405 for an example of this issue on a very low VRAM system. It's
possible we can have the same issue on any GPU, though - just a matter
of hitting the right combination of models loaded.
2025-08-12 10:51:05 +10:00
psychedelicious
7785061e7d experiment(mm): investigate vae working memory calculations
This commit includes a task delegated to Claude to investigate our VAE
working memory calculations and investigation results.

See VAE_INVESTIGATION.md for motivation and detail. Everything else is
its output.

Result data includes empirical measurements for all supported model
architectures at a variety of resolutions and fp16/fp32 precision.
Testing conducted on a 4090.

The summarized conclusion is that our working memory estimations for
decoding are spot-on, but encoding also needs some extra working memory.
Empirical measurements suggest encoding needs roughly 45% of the amount needed for decoding.

A followup commit will implement working memory estimations for VAE
encoding with the goal of preventing unexpected OOMs during encode.
2025-08-12 10:51:05 +10:00
psychedelicious
3370052e54 fix(ui): restore deduping logic in node field element selectors
This is required for some publishing functionality
2025-08-11 22:50:05 +10:00
Attila Cseh
325dacd29c same field cannot be added to form multiple times in workflow editor 2025-08-11 22:50:05 +10:00
psychedelicious
f4981a6ba9 tidy(ui): minor cleanup 2025-08-11 22:37:46 +10:00
Attila Cseh
8c159942eb add to form icon included 2025-08-11 22:37:46 +10:00
Attila Cseh
deb4dc64af error nodes outlined in red 2025-08-11 22:37:46 +10:00
psychedelicious
1a11437b6f feat(ui): add hidden bbox hotkey to alert
If you accidentally hit the hotkey and hide the bbox it could be
difficult to figure out how to un-hide it without the hotkey called out
in the alert.
2025-08-11 22:30:45 +10:00
Attila Cseh
04572c94ad setting bbox visibility moved into render method 2025-08-11 22:30:45 +10:00
Attila Cseh
1e9e78089e Add toggle for bbox with hotkey 2025-08-11 22:30:45 +10:00
Heathen711
e65f93663d bugfix(container-builder) Use the mnt space instead of root space for docker images 2025-08-06 12:36:07 -04:00
psychedelicious
2a796fe25e chore: bump version to v6.3.0 2025-08-05 10:35:22 +10:00
psychedelicious
61ff9ee3a7 feat(ui): add button to ref image to recall size & optimize for model
This is useful for FLUX Kontext, where you typically want the generation
size to at least roughly match the first ref image size.
2025-08-05 10:28:44 +10:00
psychedelicious
111408c046 feat(mm): add flux krea to starter models 2025-08-05 10:25:14 +10:00
psychedelicious
d7619d465e feat(mm): change anime upscaling model to one that doesn't trigger picklescan 2025-08-05 10:25:14 +10:00
Kent Keirsey
8ad4f6e56d updates & fix 2025-08-05 10:10:52 +10:00
Cursor Agent
bf4899526f Add 'shift+s' hotkey for fitting bbox to canvas
Co-authored-by: kent <kent@invoke.ai>
2025-08-05 10:10:52 +10:00
psychedelicious
6435d265c6 fix(ui): overflow w/ long board names 2025-08-05 10:06:55 +10:00
Linos
3163ef454d translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (2065 of 2065 strings)

Co-authored-by: Linos <linos.coding@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
2025-08-05 10:04:20 +10:00
Riccardo Giovanetti
7ea636df70 translationBot(ui): update translation (Italian)
Currently translated at 98.6% (2037 of 2065 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.6% (2037 of 2065 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.5% (2036 of 2065 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.6% (2014 of 2042 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-08-05 10:04:20 +10:00
Hosted Weblate
1869824803 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2025-08-05 10:04:20 +10:00
psychedelicious
66fc8af8a6 fix(ui): reset session button actions
- Do not reset dimensions when resetting generation settings (they are
model-dependent, and we don't change model-dependent settings w/ that
button)
- Do not reset bbox when resetting canvas layers
- Show reset canvas layers button only on canvas tab
- Show reset generation settings button only on canvas or generate tab
2025-08-05 10:01:22 +10:00
psychedelicious
48cb6b12f0 fix(ui): add style ref launchpad using wrong dnd config
I don't think this actually caused problems bc the two DND targets were
very similar, but it was wrong.
2025-08-05 09:57:11 +10:00
psychedelicious
68e30a9864 feat(ui): prevent creating new canvases while staging
Disable these items while staging:
- New Canvas From Image context menu
- Edit image hook & launchpad button
- Generate from Text launchpad button (only while on canvas tab)
- Use a Layout Image launchpad button
2025-08-05 09:57:11 +10:00
psychedelicious
f65dc2c081 chore(ui): typegen 2025-08-05 09:54:00 +10:00
psychedelicious
0cd77443a7 feat(app): add setting to disable picklescan
When unsafe_disable_picklescan is enabled, instead of erroring on
detections or scan failures, a warning is logged.

A warning is also logged on app startup when this setting is enabled.

The setting is disabled by default and there is no change in behaviour
when disabled.
2025-08-05 09:54:00 +10:00
Mary Hipp
185ed86424 fix graph building 2025-08-04 12:32:27 -04:00
Mary Hipp
fed817ab83 add image concatenation to flux kontext graph if more than one reference image 2025-08-04 11:27:02 -04:00
Mary Hipp
e0b45db69a remove check in readiness for multiple ref images 2025-08-04 11:27:02 -04:00
psychedelicious
2beac1fb04 chore: bump version to v6.3.0rc2 2025-08-04 23:55:04 +10:00
psychedelicious
e522de33f8 refactor(nodes): roll back latent-space resizing of kontext images 2025-08-04 23:03:12 +10:00
psychedelicious
d591b50c25 feat(ui): use image-space concatenation in FLUX graphs 2025-08-04 23:03:12 +10:00
psychedelicious
b365aad6d8 chore(ui): typegen 2025-08-04 23:03:12 +10:00
psychedelicious
65ad392361 feat(nodes): add node to prep images for FLUX Kontext 2025-08-04 23:03:12 +10:00
psychedelicious
56d75e1c77 feat(backend): use VAE mean encoding for Kontext reference images
Use distribution mean without sampling noise for more stable and
consistent reference image encoding, matching ComfyUI implementation
2025-08-04 23:03:12 +10:00
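For reference, the difference between sampled and mean encoding with a diffusers-style `AutoencoderKL` looks roughly like this (a sketch under that assumption; FLUX-specific scaling/shift details are omitted):

```
# Sketch assuming a diffusers-style AutoencoderKL; FLUX-specific scaling/shift omitted.
import torch

def encode_reference_image(vae, image: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        dist = vae.encode(image).latent_dist
        # dist.sample() adds per-run noise from the latent distribution, which makes
        # reference conditioning slightly different on every invocation.
        # dist.mode() returns the distribution mean - stable, repeatable latents.
        return dist.mode()
```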
psychedelicious
df77a12efe refactor(backend): use torchvision transforms for Kontext image preprocessing
Replace numpy-based normalization with torchvision transforms for
consistency with other image processing in the codebase
2025-08-04 23:03:12 +10:00
psychedelicious
faf662d12e refactor(backend): use BICUBIC resampling for Kontext images
Switch from LANCZOS to BICUBIC for smoother image resizing to reduce
artifacts in reference image processing
2025-08-04 23:03:12 +10:00
psychedelicious
44a7dfd486 fix(backend): use consistent idx_offset=1 for all Kontext images
Changes from per-image index offsets to a consistent value of 1 for
all reference images, matching the ComfyUI implementation
2025-08-04 23:03:12 +10:00
psychedelicious
bb15e5cf06 feat(backend): add spatial tiling for multiple Kontext reference images
Implements intelligent spatial tiling that arranges multiple reference
images in a virtual canvas, choosing between horizontal and vertical
placement to maintain a square-like aspect ratio
2025-08-04 23:03:12 +10:00
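A simplified sketch of the placement heuristic described above: grow the virtual canvas either to the right or downward, whichever keeps the combined canvas closer to square (dimension bookkeeping only; the actual compositing and any InvokeAI-specific details are omitted and assumed):

```
# Simplified sketch of a square-ish tiling heuristic - illustrative, not the exact implementation.

def place_next_image(canvas_w: int, canvas_h: int, img_w: int, img_h: int) -> tuple[int, int, str]:
    """Return the new canvas size and the chosen placement direction."""
    horizontal = (canvas_w + img_w, max(canvas_h, img_h))  # place to the right
    vertical = (max(canvas_w, img_w), canvas_h + img_h)    # place below

    def aspect_penalty(size: tuple[int, int]) -> float:
        w, h = size
        return max(w, h) / min(w, h)  # 1.0 means perfectly square

    if aspect_penalty(horizontal) <= aspect_penalty(vertical):
        return (*horizontal, "horizontal")
    return (*vertical, "vertical")

# Example: starting from a 1024x768 canvas, a second 1024x768 image is stacked below
# (1024x1536, ratio 1.5) rather than beside (2048x768, ratio 2.67).
```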
psychedelicious
1a1c846be3 feat(backend): include reference images in negative CFG pass for Kontext
Maintains consistency between positive and negative passes to prevent
CFG artifacts when using Kontext reference images
2025-08-04 23:03:12 +10:00
psychedelicious
93c896a370 fix(backend): use img_cond_seq to check for Kontext slicing
Was incorrectly checking img_input_ids instead of img_cond_seq
2025-08-04 23:03:12 +10:00
psychedelicious
053d7c8c8e feat(ui): support disabling roarr output styling via localstorage 2025-07-31 23:02:45 +10:00
psychedelicious
5296263954 feat(ui): add missing translations 2025-07-31 22:51:33 +10:00
psychedelicious
a36b70c01c fix(ui): add image name data attr to gallery placeholder image elements
This fixes an issue where gallery's auto-scroll-into-view for selected
images didn't work, and users instead saw an "Unable to find image..."
debug log message in JS console.
2025-07-31 22:48:42 +10:00
psychedelicious
854a2a5a7a chore: bump version to v6.3.0rc1 2025-07-31 14:17:18 +10:00
psychedelicious
f9c64b0609 chore(ui): update whats new 2025-07-31 14:17:18 +10:00
psychedelicious
5889fa536a feat(ui): add migration path for client state from IndexedDB to server-backed storage 2025-07-31 14:09:45 +10:00
Linos
0e71ba892f translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (2044 of 2044 strings)

Co-authored-by: Linos <linos.coding@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
2025-07-31 13:59:21 +10:00
Riccardo Giovanetti
d766a21223 translationBot(ui): update translation (Italian)
Currently translated at 98.6% (2016 of 2044 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-07-31 13:59:21 +10:00
psychedelicious
5c8c54eab8 chore: ruff 2025-07-31 06:38:48 +10:00
psychedelicious
f296f4525c tidy(ui): disable logging middleware 2025-07-31 06:38:48 +10:00
psychedelicious
7c9ba4cb52 refactor(ui): add persistence gate logic to prevent race conditions with slow rehydration 2025-07-31 06:38:48 +10:00
psychedelicious
6784fd5b43 refactor(ui): use new routes for _all_ client state persistence (no override/custom drivers) 2025-07-31 06:38:48 +10:00
psychedelicious
11d68cc646 chore(ui): typegen 2025-07-31 06:38:48 +10:00
psychedelicious
ea8c877025 refactor(app): move client state persistence to own route, add queue_id 2025-07-31 06:38:48 +10:00
psychedelicious
7a3c2332dd feat(ui): add visual indicator when input field is added to form 2025-07-31 06:33:22 +10:00
psychedelicious
3835fd2f72 feat(ui): zhoosh image comparison ui 2025-07-30 07:20:47 -04:00
psychedelicious
6f8746040c docs(ui): update comments in readiness re: flux kontext via bfl api 2025-07-30 12:26:48 +10:00
psychedelicious
35e3940a09 feat(ui): update warning when using multiple ref images on BFL API kontext
It only supports 1 image.
2025-07-30 12:26:48 +10:00
psychedelicious
415616d83f feat(ui): support multiple kontext ref images in studio 2025-07-30 12:26:48 +10:00
psychedelicious
afb67efef9 chore(ui): typegen 2025-07-30 12:26:48 +10:00
psychedelicious
1ed1fefa60 feat(nodes): support multiple kontext ref images
Images are concatenated in latent space.
2025-07-30 12:26:48 +10:00
Ar7ific1al
fa94a05c77 Update CanvasStateApiModule.ts
Add temporary grid snap with ctrl, optional small step with ctrl+shift, while grid snap is off
2025-07-30 12:16:42 +10:00
psychedelicious
7a23d8266f feat(ui): simpler storage driver impl 2025-07-30 05:53:20 +10:00
psychedelicious
a44de079dd perf(ui): instantiate logger for storage error handler once 2025-07-30 05:53:20 +10:00
psychedelicious
c3c1a3edd8 chore(ui): typegen 2025-07-30 05:53:20 +10:00
psychedelicious
ea26b5b147 feat(app): client state persistence endpoints accept stringified data 2025-07-30 05:53:20 +10:00
Eugene Brodsky
4226b741b1 fix(docker) rocm 6.3 based image (#8152)
1. Fix the run script to properly read the GPU_DRIVER
2. Cloned and adjusted the ROCM dockerbuild for docker
3. Adjust the docker-compose.yml to use the cloned dockerbuild
2025-07-29 10:16:42 -04:00
Eugene Brodsky
1424b7c254 Merge branch 'main' into bugfix/heathen711/rocm-docker 2025-07-29 10:12:13 -04:00
psychedelicious
933fb2294c fix(ui): zod rejects any board id besides "none"
Turns out the string autocomplete TS hack does not translate to zod.
Widen the zod schema to any string, but use the hack for the TS type.
2025-07-29 08:45:16 -04:00
psychedelicious
5a181ee0fd build(ui): export loading component 2025-07-29 08:43:03 -04:00
psychedelicious
3b0d59e459 tests(app): update mm tests to test updated behaviour 2025-07-29 16:08:15 +10:00
psychedelicious
fec296e41d fix(app): move (not copy) models from install tmpdir to destination
It's not clear why we were copying downloaded models to the destination
dir instead of moving them. I cannot find a reason for it, and I am able
to install single-file and diffusers models just fine with the change.

This fixes an issue where model installation requires 2x the model's
size (bc we were copying the model over).
2025-07-29 16:08:15 +10:00
Heathen711
ae4e38c6d0 Merge branch 'main' into bugfix/heathen711/rocm-docker 2025-07-28 21:24:34 -07:00
psychedelicious
a9f3f1a4b2 fix(app): handle model files with periods in their name
Previously, we used pathlib's `with_suffix()` method to add a
suffix (e.g. ".safetensors") to a model when installing it.

The intention is to add a suffix to the model's name - but that method
actually replaces everything after the first period.

This can cause different models to be installed under the same name!

For example, the FLUX models all end up with the same name:
- "FLUX.1 schnell.safetensors" -> "FLUX.safetensors"
- "FLUX.1 dev.safetensors" -> "FLUX.safetensors"

The fix is easy - append the suffix using string formatting instead of
using pathlib.

This issue has existed for a long time, but was exacerbated in
075345bffd in which I updated the names of
our starter models, adding ".1" to the FLUX model names. Whoops!
2025-07-29 14:15:59 +10:00
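The behaviour is easy to reproduce with pathlib directly (a minimal illustration of the bug and the fix described above):

```
from pathlib import Path

# with_suffix() treats everything after the last period in the name as the suffix,
# so model names containing periods get truncated - and collide:
Path("FLUX.1 schnell").with_suffix(".safetensors")  # -> Path('FLUX.safetensors')
Path("FLUX.1 dev").with_suffix(".safetensors")      # -> Path('FLUX.safetensors')  (same name!)

# Appending the suffix with plain string formatting keeps the full name intact:
Path("FLUX.1 schnell" + ".safetensors")  # -> Path('FLUX.1 schnell.safetensors')
Path("FLUX.1 dev" + ".safetensors")      # -> Path('FLUX.1 dev.safetensors')
```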
psychedelicious
8a73df4fe1 fix(ui): progress image does not hide on viewer with autoswitch disabled 2025-07-29 12:53:45 +10:00
psychedelicious
ea2e1ea8f0 fix(ui): queue count badge renders when left panel collapsed 2025-07-29 12:51:23 +10:00
psychedelicious
e8aa91931d fix(ui): connect metadata to output node for ext api nodes 2025-07-29 06:46:17 +10:00
psychedelicious
8d22a314a6 docs(ui): add some comments for race condition handling 2025-07-29 06:34:08 +10:00
psychedelicious
57ce2b8aa7 chore(ui): lint 2025-07-29 06:34:08 +10:00
psychedelicious
6b810cb3fb fix(ui): race condition w/ queue counts 2025-07-29 06:34:08 +10:00
psychedelicious
4f3a5dcc43 tidy(ui): remove unused progress related logic and components 2025-07-29 06:34:08 +10:00
psychedelicious
c3ae14cf73 fix(ui): ignore events for already-completed queue items 2025-07-29 06:34:08 +10:00
psychedelicious
b9c44b92d5 fix(ui): clear progress images from viewer at the right time 2025-07-29 06:34:08 +10:00
psychedelicious
5a68b4ddbc build(ui): skip logging ctx plugin when running tests 2025-07-29 06:31:30 +10:00
psychedelicious
18a722839b chore(ui): update knip config 2025-07-29 06:31:30 +10:00
psychedelicious
7370cb9be6 build(ui): add vite plugin to add relative file path to logger context 2025-07-29 06:31:30 +10:00
Kent Keirsey
cc4df52f82 feat: server-side client state persistence (#8314)
## Summary

Move client state persistence from browser to server.

- Add new client state persistence service to handle reading and writing
client state to db & associated router. The API mirrors that of
LocalStorage/IndexedDB where the set/get methods both operate on _keys_.
For example, when we persist the canvas state, we send only the new
canvas state to the backend - not the whole app state.
- The data is very flexibly-typed as a pydantic `JsonValue`. The client
is expected to handle all data parsing/validation (it must do this
anyways, and does this today).
- Change persistence from debounced to throttled at 2 seconds. Maybe
less is OK? Trying to not hammer the server.
- Add new persistence storage driver in client and use it in
redux-remember. It does its best to avoid extraneous persist requests,
caching the last data it persisted and noop-ing if there are no changes.
- Storage driver tracks pending persist actions using ref counts (bc
each slice is persisted independently). If the user navigates away
from the page during a persist request, it will give them the "you may
lose something if you navigate away" alert.
- This "lose something" alert message is not customizable (browser
security reasons).
- The alert is triggered only when the user closes the tab while a
persist network request is mid-flight. It's possible that the user makes
a change and closes the page before we start persisting. In this case,
they will lose the last 2 seconds of data.
- I tried triggering the alert when a persist was waiting to
start, and it felt off.
- Maybe the alert isn't even necessary. Again you'd lose 2s of data at
most, probably a non issue. IMO after trying it, a subtle indicator
somewhere on the page is probably less confusing/intrusive.
- Fix an issue where the `redux-remember` enhancer was added _last_ in
the enhancer chain, which prevented us from detecting when a persist had
succeeded. This required a small change to the `unserialize` utility
(used during rehydration) to ensure slices enhanced with `redux-undo`
are set up correctly as they are rehydrated.
- Restructure the redux store code to avoid circular dependencies. I
couldn't figure out how to do this without just smooshing it all into
the main `store.ts` file. Oh well.

Implications:
- Because client state is now on the server, different browsers will
have the same studio state. For example, if I start working on something
in Firefox, if I switch to Chrome, I have the same client state.
- Incognito windows won't do anything bc client state is server-side.
- It takes a bit longer for persistence to happen thanks to the
debounce, but there's now an indicator that tells you your stuff isn't
saved yet.
- Resetting the browser won't fix an issue with your studio state. You
must use `Reset Web UI` to fix it (or otherwise hit the appropriate
endpoint). It may be possible to end up in a Catch-22 where you can't
click the button and get stuck w/ a borked studio - I think to think
through this a bit more, might not be an issue.
- It probably takes a bit longer to start up, since we need to retrieve
client state over network instead of directly with browser APIs.

Other notes:
- We could explore adding an "incognito" mode, enabled via
`invokeai.yaml` setting or maybe in the UI. This would temporarily
disable persistence. Actually, I don't think this really makes sense, bc
all the images would be saved to disk.
- The studio state is stored in a single row in the DB. Currently, a
static row ID is used to force the studio state to be a singleton. It is
_possible_ to support multiple saved states. Might be a solve for app
workspaces.

## Related Issues / Discussions

n/a

## QA Instructions

Try it out. It's pretty straightforward. Error states are the main
things to test - for example, network blips. The new server-side
persistence driver is the only real functional change - everything else
is just kinda shuffling things around to support it.

## Merge Plan

n/a

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-07-25 12:08:47 -04:00
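The PR above describes the server side as a small key-based service whose get/set operations mirror LocalStorage/IndexedDB and store a flexibly-typed pydantic `JsonValue`. A very rough sketch of what such a route could look like (an in-memory dict stands in for the single-row DB; names and signatures are assumptions, not the actual InvokeAI router):

```
# Rough illustrative sketch - not the actual InvokeAI client-state router.
from fastapi import APIRouter, Body
from pydantic import JsonValue

router = APIRouter(prefix="/api/v1/client_state")
_store: dict[str, JsonValue] = {}  # stand-in for the DB-backed singleton row

@router.get("/{key}")
def get_client_state(key: str) -> JsonValue | None:
    # The client parses/validates whatever comes back; the server stores it opaquely.
    return _store.get(key)

@router.post("/{key}")
def set_client_state(key: str, value: JsonValue = Body(...)) -> JsonValue:
    _store[key] = value
    return value
```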
Kent Keirsey
1cb4ef05a4 add newline 2025-07-25 11:08:54 -04:00
Kent Keirsey
7da141101c Merge branch 'main' into psyche/feat/app/client-state-persistence 2025-07-25 11:07:17 -04:00
psychedelicious
2571e199c5 tidy(ui): remove unused props 2025-07-25 11:06:18 -04:00
psychedelicious
79e93f905e fix(ui): add separate wrapper components for notes and current image nodes that do not need invocation node context 2025-07-25 11:06:18 -04:00
psychedelicious
f562e4f835 fix(ui): ensure all node context provider wraps all calls to useInvocationNodeContext 2025-07-25 11:06:18 -04:00
psychedelicious
47e220aaf3 perf(ui): imperatively get nodes and edges in autolayout hook 2025-07-25 11:06:18 -04:00
psychedelicious
9365154bfe chore: bump version to v6.2.0 2025-07-25 11:06:18 -04:00
psychedelicious
afc6911c96 chore: bump version to v6.3.0a1 2025-07-25 19:07:08 +10:00
psychedelicious
afa1ee7ffd tidy(ui): enable devmode redux checks 2025-07-25 19:04:21 +10:00
psychedelicious
5a102f6b53 chore(ui): lint 2025-07-25 19:04:21 +10:00
psychedelicious
af345a33f3 fix(ui): infinite loop when setting tile controlnet model 2025-07-25 19:04:21 +10:00
psychedelicious
038b110a82 fix(ui): do not store whole model configs in state 2025-07-25 19:04:21 +10:00
psychedelicious
f3cd49d46e refactor(ui): just manually validate async stuff 2025-07-25 19:04:21 +10:00
psychedelicious
ca7d7c9d93 refactor(ui): work around zod async validation issue 2025-07-25 19:04:21 +10:00
psychedelicious
1addeb4b59 fix(ui): check initial retrieval and set as last persisted 2025-07-25 19:04:21 +10:00
psychedelicious
6ea4884b0c chore(ui): bump zod to latest
Checking if it fixes an issue w/ async validators
2025-07-25 19:04:21 +10:00
psychedelicious
aed9b1013e refactor(ui): use zod for all redux state 2025-07-25 19:04:21 +10:00
psychedelicious
6962536b4a refactor(ui): use zod for all redux state (wip)
needed for confidence w/ state rehydration logic
2025-07-25 19:04:21 +10:00
psychedelicious
7e59d040aa feat(ui): iterate on storage api 2025-07-25 19:04:20 +10:00
psychedelicious
e7c67da2c2 refactor(ui): restructure persistence driver creation to support custom drivers 2025-07-25 19:04:20 +10:00
psychedelicious
c44571bc36 revert(ui): temp changes to main.tsx for testing 2025-07-25 19:04:20 +10:00
psychedelicious
ca257650d4 revert(ui): temp disable eslint rule 2025-07-25 19:04:20 +10:00
psychedelicious
6a9962d2bb git: update gitignore 2025-07-25 19:04:20 +10:00
psychedelicious
9492569a2c wip 2025-07-25 19:04:20 +10:00
psychedelicious
61e711620d chore: ruff 2025-07-25 19:04:20 +10:00
psychedelicious
3cf82505bb tests(app): service mocks 2025-07-25 19:04:20 +10:00
psychedelicious
53bcbc58f5 chore(ui): lint 2025-07-25 19:04:20 +10:00
psychedelicious
42f3990f7a refactor(ui): iterate on persistence 2025-07-25 19:04:20 +10:00
psychedelicious
456205da17 refactor(ui): iterate on persistence 2025-07-25 19:04:20 +10:00
psychedelicious
ca0684700e refactor(ui): alternate approach to slice configs 2025-07-25 19:04:19 +10:00
psychedelicious
6a702821ef chore(ui): typegen 2025-07-25 19:04:19 +10:00
psychedelicious
682d271f6f feat(api): make client state key query not body 2025-07-25 19:04:19 +10:00
psychedelicious
e872c253b1 refactor(ui): cleaner slice definitions 2025-07-25 19:04:19 +10:00
psychedelicious
28633c9983 feat: server-side client state persistence 2025-07-25 19:04:19 +10:00
psychedelicious
70ac58e64a tidy(ui): remove unused props 2025-07-25 18:51:21 +10:00
psychedelicious
e653837236 fix(ui): add separate wrapper components for notes and current image nodes that do not need invocation node context 2025-07-25 18:51:21 +10:00
psychedelicious
2bbfcc2f13 fix(ui): ensure all node context provider wraps all calls to useInvocationNodeContext 2025-07-25 18:51:21 +10:00
psychedelicious
d6e0e439c5 perf(ui): imperatively get nodes and edges in autolayout hook 2025-07-25 18:50:59 +10:00
psychedelicious
26aab60f81 chore: bump version to v6.2.0 2025-07-25 18:41:00 +10:00
Riccardo Giovanetti
7bea2fa11f translationBot(ui): update translation (Italian)
Currently translated at 98.6% (2016 of 2044 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.6% (2015 of 2043 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-07-25 17:15:01 +10:00
psychedelicious
169d58ea4c feat(ui): restore clear queue button
It is accessible in two places:
- The queue actions hamburger menu.
- On the queue tab.

If the clear queue app feature is disabled, it is not shown in either of
those places.
2025-07-23 23:38:53 +10:00
psychedelicious
b53d2250f7 feat(ui): reduce snap tolerance to make it easier to break the snap 2025-07-23 23:05:40 +10:00
psychedelicious
242eea8295 fix(ui): incorrect zoom direction w/ small scroll amounts 2025-07-23 23:05:40 +10:00
psychedelicious
4dabe09e0d tests(ui): remove test for no-longer-valid behaviour 2025-07-23 23:03:02 +10:00
psychedelicious
07fa0d3b77 fix(ui): do not attempt toggle when target panel isn't registered 2025-07-23 23:03:02 +10:00
psychedelicious
e97f82292f tests(ui): add tests for disposable handling 2025-07-23 23:03:02 +10:00
psychedelicious
005bab9035 fix(ui): tab disposables not being added correctly 2025-07-23 23:03:02 +10:00
psychedelicious
409173919c tests(ui): add tests for toggleViewer functionality 2025-07-23 23:03:02 +10:00
psychedelicious
7915180047 feat(ui): restore viewer toggle hotkey 2025-07-23 23:03:02 +10:00
Riccardo Giovanetti
4349b8387d translationBot(ui): update translation (Italian)
Currently translated at 97.9% (2000 of 2042 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-07-23 12:26:48 +10:00
Kent Keirsey
f95b686bdc reposition export button 2025-07-23 11:55:11 +10:00
Mary Hipp
72afb9c3fd fix iterations for all API models 2025-07-22 13:27:35 -04:00
Mary Hipp
f004fc31f1 update whats new 2025-07-22 12:24:10 -04:00
psychedelicious
2aa163b3a2 feat(ui): add default inpaint mask layer on canvas reset 2025-07-22 10:26:57 +10:00
psychedelicious
f40900c173 chore: bump version to v6.1.0 2025-07-22 08:24:31 +10:00
psychedelicious
2c1f2b2873 tidy(ui): move star hotkey into own hook & use reactive state for focus 2025-07-22 08:11:57 +10:00
Kent Keirsey
8418e34480 lint 2025-07-22 08:11:57 +10:00
Kent Keirsey
b548ac0ccf Add Star/Unstar Hotkey and fix hotkey translations 2025-07-22 08:11:57 +10:00
Linos
2af2b8b6c4 translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (2003 of 2003 strings)

Co-authored-by: Linos <linos.coding@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
2025-07-22 07:58:19 +10:00
Hosted Weblate
058dc06748 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2025-07-22 07:58:19 +10:00
Riccardo Giovanetti
8acb1c0088 translationBot(ui): update translation (Italian)
Currently translated at 98.7% (1978 of 2003 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.7% (1978 of 2003 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.6% (1968 of 1994 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-07-22 07:58:19 +10:00
Hosted Weblate
683732a37c translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2025-07-22 07:58:19 +10:00
Riku
b990eacca0 translationBot(ui): update translation (German)
Currently translated at 62.1% (1251 of 2012 strings)

Co-authored-by: Riku <riku.block@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2025-07-22 07:58:19 +10:00
RyoKoba
5f7e920deb translationBot(ui): update translation (Japanese)
Currently translated at 99.8% (2007 of 2011 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 99.8% (2007 of 2011 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 99.8% (2007 of 2011 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 99.8% (2007 of 2011 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 99.8% (2007 of 2011 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 92.0% (1851 of 2011 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 92.0% (1851 of 2011 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 92.0% (1851 of 2011 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 87.4% (1744 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 87.4% (1744 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 81.0% (1616 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 81.0% (1616 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 81.0% (1616 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 81.0% (1616 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 81.0% (1616 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 81.0% (1616 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 81.0% (1616 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 81.0% (1616 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 81.0% (1616 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 75.6% (1510 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 75.6% (1510 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 75.6% (1510 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 75.6% (1510 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 75.6% (1510 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 75.6% (1510 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 75.6% (1510 of 1995 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 75.6% (1510 of 1995 strings)

Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2025-07-22 07:58:19 +10:00
Riccardo Giovanetti
55dfdc0a9c translationBot(ui): update translation (Italian)
Currently translated at 97.9% (1953 of 1994 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.7% (1986 of 2011 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.7% (1970 of 1995 strings)

translationBot(ui): update translation (Italian)

Currently translated at 97.8% (1910 of 1952 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-07-22 07:58:19 +10:00
Linos
10d6d19e17 translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (2012 of 2012 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 100.0% (2012 of 2012 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 99.7% (2006 of 2012 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 99.7% (2006 of 2012 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 99.5% (2002 of 2012 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 99.5% (2002 of 2012 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 97.8% (1968 of 2012 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 97.8% (1968 of 2012 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 97.8% (1968 of 2012 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 97.8% (1968 of 2012 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 96.4% (1940 of 2012 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 96.4% (1940 of 2012 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 100.0% (1921 of 1921 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 100.0% (1917 of 1917 strings)

Co-authored-by: Linos <linos.coding@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
2025-07-22 07:58:19 +10:00
skunkworxdark
15542b954d Fix nodes ui: Make nodes dot background to be the same as the snap to grid size and position
Update to Flow.tsx

Changes the size and offset of the dots background to be the same size as the snap to grid, and also fix the background dot pattern alignment.

Currently, the snapGrid is 25x25, and the default background dot gap is 20x20, so these do not align. This is fixed by making the gap property of the background the same as the snapGrid.

Additionally, there is a bug in the React Flow background code that incorrectly sets the offset to be the centre of the dot pattern with the default offset of 0. To work around this issue, setting the background offset property to the snapGrid size will realign the dot pattern correctly.

I have logged a bug for the React Flow background issue in its repo.
https://github.com/xyflow/xyflow/issues/5405
2025-07-22 07:46:52 +10:00
skunkworxdark
6430d830c1 Update nodes auto layout spacing for snap to grid size
Update workflowSettingsSlice.ts

Change the default settings for auto layout nodeSpacing and layerSpacing to 30 instead of 32. This will make the x position of auto-laid-out nodes land on the snap-to-grid positions.

This is because the node width (320) + 30 = 350, which is divisible by the snap-to-grid size of 25.
2025-07-22 07:40:58 +10:00
Kent Keirsey
c3f6389291 fix ruff and remove unused API route 2025-07-22 07:33:48 +10:00
Kent Keirsey
070eef3eff remove whitespace 2025-07-22 07:33:48 +10:00
Kent Keirsey
b14d841d57 Extract util and fix model image logic 2025-07-22 07:33:48 +10:00
Kent Keirsey
dd35ab026a update logic and remove bad test 2025-07-22 07:33:48 +10:00
Cursor Agent
7fc06db8ad Add LoRA model metadata extraction from JSON and PNG files
Co-authored-by: kent <kent@invoke.ai>
2025-07-22 07:33:48 +10:00
psychedelicious
9d1f09c0f3 fix(ui): return wrapped history in redux-remember unserialize
We intermittently get an error like this:
```
TypeError: Cannot read properties of undefined (reading 'length')
```

This error is caused by a `redux-undo`-enhanced slice being rehydrated
without the extra stuff it adds to the slice to make it undoable (e.g.
an array of `past` states, the `present` state, array of `future`
states, and some other metadata).

`redux-undo` may need to check the length of the past/future arrays as
part of its internal functionality. These keys don't exist so we get the
error. I'm not sure _why_ they don't exist - my understanding of
`redux-undo` is that it should be checking and wrapping the state w/ the
history stuff automatically. Seems to be related to `redux-remember` -
may be a race condition.

The solution is to ensure we wrap rehydrated state for undoable slices
as we rehydrate them. I discovered the solution while troubleshooting
#8314 when the changes therein somehow triggered the issue to start
occurring every time instead of rarely.
2025-07-22 07:00:57 +10:00
Heathen711
1cdd4b5980 bugfix(docs) link syntax 2025-07-17 04:26:06 +00:00
Heathen711
89ceecc870 bugfix(docker) Ensure the correct extra install. 2025-07-17 04:19:22 +00:00
Heathen711
687cccdb99 cleanup(docker) 2025-07-17 04:00:42 +00:00
Heathen711
c84f8465b8 bugfix(pyproject) Convert from dependency groups to extras and update docks to use UV's built in torch support 2025-07-17 03:58:26 +00:00
Heathen711
4b5c481b7a Merge remote-tracking branch 'origin' into bugfix/heathen711/rocm-docker 2025-07-17 01:03:03 +00:00
Heathen711
2caa1b166d Merge remote-tracking branch 'origin' into bugfix/heathen711/rocm-docker 2025-07-13 00:55:39 +00:00
Heathen711
1b6ebede7b Revert "cleanup(github actions)"
This reverts commit 017d38eee2.
2025-07-10 21:10:56 +00:00
Heathen711
017d38eee2 cleanup(github actions) 2025-07-10 21:04:48 +00:00
Heathen711
78eb6b0338 cleanup(docker) 2025-07-10 21:03:57 +00:00
Heathen711
3e8e0f6ddf Merge remote-tracking branch 'origin' into bugfix/heathen711/rocm-docker 2025-07-10 20:14:27 +00:00
Heathen711
8213f62d3b bugfix(docker) render group controls the devices, but it needs to match the host's render group ID 2025-07-09 20:20:59 +00:00
Heathen711
233740a40e Merge remote-tracking branch 'origin' into bugfix/heathen711/rocm-docker 2025-07-09 03:27:42 +00:00
Heathen711
8c5fcfd0fd cleanup(docker) remove no cache argument 2025-07-05 15:25:26 +00:00
Heathen711
6d7b231196 Merge remote-tracking branch 'origin' into bugfix/heathen711/rocm-docker 2025-07-05 15:22:35 +00:00
Heathen711
31ca314b02 Missed files 2025-07-05 15:21:46 +00:00
Heathen711
0db304f1ee bugfix(uv) Lock torchvision and ensure the docker uses the same rocm version 2025-07-05 03:35:11 +00:00
Heathen711
a3cb3e03f4 bugfix(ci) Clean up more space for typegen check 2025-07-03 21:22:11 +00:00
Heathen711
641a6cfdb7 bugfix(docker) Remove the need for UV index as that is now baked into the uv.lock 2025-07-03 21:15:03 +00:00
Heathen711
f27471cea7 bugfix(docker): Use uv.lock for docker, and update to newer index urls. 2025-07-03 20:08:28 +00:00
Heathen711
47508b8d6c bugfix(docker) combined the dockerfiles and reduced image size 2025-07-03 06:01:51 +00:00
Heathen711
28e0242907 Fix tagging & remove force reinstall 2025-07-03 01:56:46 +00:00
Heathen711
96523ca01f fix(docker) Add cloned dockerbuild 2025-06-29 22:07:11 +00:00
Heathen711
c10a6fdab1 fix(docker) rocm 2.4.6 based image 2025-06-29 22:02:40 +00:00
1351 changed files with 117787 additions and 24026 deletions

1
.gitattributes vendored
View File

@@ -4,3 +4,4 @@
* text=auto
docker/** text eol=lf
tests/test_model_probe/stripped_models/** filter=lfs diff=lfs merge=lfs -text
tests/model_identification/stripped_models/** filter=lfs diff=lfs merge=lfs -text

39
.github/CODEOWNERS vendored
View File

@@ -1,31 +1,32 @@
# continuous integration
/.github/workflows/ @lstein @blessedcoolant @hipsterusername @ebr @jazzhaiku @psychedelicious
/.github/workflows/ @lstein @blessedcoolant
# documentation
/docs/ @lstein @blessedcoolant @hipsterusername @psychedelicious
/mkdocs.yml @lstein @blessedcoolant @hipsterusername @psychedelicious
# documentation - anyone with write privileges can review
/docs/
/mkdocs.yml
# nodes
/invokeai/app/ @blessedcoolant @psychedelicious @hipsterusername @jazzhaiku
/invokeai/app/ @blessedcoolant @lstein @dunkeroni @JPPhoto
# installation and configuration
/pyproject.toml @lstein @blessedcoolant @psychedelicious @hipsterusername
/docker/ @lstein @blessedcoolant @psychedelicious @hipsterusername @ebr
/scripts/ @ebr @lstein @psychedelicious @hipsterusername
/installer/ @lstein @ebr @psychedelicious @hipsterusername
/invokeai/assets @lstein @ebr @psychedelicious @hipsterusername
/invokeai/configs @lstein @psychedelicious @hipsterusername
/invokeai/version @lstein @blessedcoolant @psychedelicious @hipsterusername
/pyproject.toml @lstein @blessedcoolant
/docker/ @lstein @blessedcoolant
/scripts/ @lstein
/installer/ @lstein
/invokeai/assets @lstein
/invokeai/configs @lstein
/invokeai/version @lstein @blessedcoolant
# web ui
/invokeai/frontend @blessedcoolant @psychedelicious @lstein @maryhipp @hipsterusername
/invokeai/frontend @blessedcoolant @lstein @dunkeroni
# generation, model management, postprocessing
/invokeai/backend @lstein @blessedcoolant @hipsterusername @jazzhaiku @psychedelicious @maryhipp
/invokeai/backend @lstein @blessedcoolant @dunkeroni @JPPhoto
# front ends
/invokeai/frontend/CLI @lstein @psychedelicious @hipsterusername
/invokeai/frontend/install @lstein @ebr @psychedelicious @hipsterusername
/invokeai/frontend/merge @lstein @blessedcoolant @psychedelicious @hipsterusername
/invokeai/frontend/training @lstein @blessedcoolant @psychedelicious @hipsterusername
/invokeai/frontend/web @psychedelicious @blessedcoolant @maryhipp @hipsterusername
/invokeai/frontend/CLI @lstein
/invokeai/frontend/install @lstein
/invokeai/frontend/merge @lstein @blessedcoolant
/invokeai/frontend/training @lstein @blessedcoolant
/invokeai/frontend/web @blessedcoolant @lstein @dunkeroni @Pfannkuchensack

View File

@@ -18,5 +18,6 @@
- [ ] _The PR has a short but descriptive title, suitable for a changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _❗Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_

View File

@@ -45,13 +45,23 @@ jobs:
steps:
- name: Free up more disk space on the runner
# https://github.com/actions/runner-images/issues/2840#issuecomment-1284059930
# the /mnt dir has 70GBs of free space
# /dev/sda1 74G 28K 70G 1% /mnt
# According to some online posts the /mnt is not always there, so checking before setting docker to use it
run: |
echo "----- Free space before cleanup"
df -h
sudo rm -rf /usr/share/dotnet
sudo rm -rf "$AGENT_TOOLSDIRECTORY"
sudo swapoff /mnt/swapfile
sudo rm -rf /mnt/swapfile
if [ -f /mnt/swapfile ]; then
sudo swapoff /mnt/swapfile
sudo rm -rf /mnt/swapfile
fi
if [ -d /mnt ]; then
sudo chmod -R 777 /mnt
echo '{"data-root": "/mnt/docker-root"}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
fi
echo "----- Free space after cleanup"
df -h

View File

@@ -23,6 +23,7 @@ jobs:
close-issue-message: "Due to inactivity, this issue was automatically closed. If you are still experiencing the issue, please recreate the issue."
days-before-pr-stale: -1
days-before-pr-close: -1
only-labels: "bug"
exempt-issue-labels: "Active Issue"
repo-token: ${{ secrets.GITHUB_TOKEN }}
operations-per-run: 500

30
.github/workflows/lfs-checks.yml vendored Normal file
View File

@@ -0,0 +1,30 @@
# Checks that large files and LFS-tracked files are properly checked in with pointer format.
# Uses https://github.com/ppremk/lfs-warning to detect LFS issues.
name: 'lfs checks'
on:
push:
branches:
- 'main'
pull_request:
types:
- 'ready_for_review'
- 'opened'
- 'synchronize'
merge_group:
workflow_dispatch:
jobs:
lfs-check:
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
# Required to label and comment on the PRs
pull-requests: write
steps:
- name: checkout
uses: actions/checkout@v4
- name: check lfs files
uses: ppremk/lfs-warning@v3.3

View File

@@ -22,12 +22,12 @@ jobs:
steps:
- name: checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: setup python
uses: actions/setup-python@v5
uses: actions/setup-python@v6
with:
python-version: '3.10'
python-version: '3.12'
cache: pip
cache-dependency-path: pyproject.toml

View File

@@ -39,6 +39,20 @@ jobs:
- name: checkout
uses: actions/checkout@v4
- name: Free up more disk space on the runner
# https://github.com/actions/runner-images/issues/2840#issuecomment-1284059930
run: |
echo "----- Free space before cleanup"
df -h
sudo rm -rf /usr/share/dotnet
sudo rm -rf "$AGENT_TOOLSDIRECTORY"
if [ -f /mnt/swapfile ]; then
sudo swapoff /mnt/swapfile
sudo rm -rf /mnt/swapfile
fi
echo "----- Free space after cleanup"
df -h
- name: check for changed files
if: ${{ inputs.always_run != true }}
id: changed-files

3
.gitignore vendored
View File

@@ -192,3 +192,6 @@ installer/InvokeAI-Installer/
.aider*
.claude/
# Weblate configuration file
weblate.ini

View File

@@ -4,38 +4,33 @@
# Invoke - Professional Creative AI Tools for Visual Media
#### To learn more about Invoke, or implement our Business solutions, visit [invoke.com]
[![discord badge]][discord link] [![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link] [![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link] [![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
</div>
Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry leading web-based UI, and serves as the foundation for multiple commercial products.
Invoke is available in two editions:
| **Community Edition** | **Professional Edition** |
|----------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
| **For users looking for a locally installed, self-hosted and self-managed service** | **For users or teams looking for a cloud-hosted, fully managed service** |
| - Free to use under a commercially-friendly license | - Monthly subscription fee with three different plan levels |
| - Download and install on compatible hardware | - Offers additional benefits, including multi-user support, improved model training, and more |
| - Includes all core studio features: generate, refine, iterate on images, and build workflows | - Hosted in the cloud for easy, secure model access and scalability |
| Quick Start -> [Installation and Updates][installation docs] | More Information -> [www.invoke.com/pricing](https://www.invoke.com/pricing) |
- Free to use under a commercially-friendly license
- Download and install on compatible hardware
- Generate, refine, iterate on images, and build workflows
![Highlighted Features - Canvas and Workflows](https://github.com/invoke-ai/InvokeAI/assets/31807370/708f7a82-084f-4860-bfbe-e2588c53548d)
---
> ## 📣 Are you a new or returning InvokeAI user?
> Take our first annual [User's Survey](https://forms.gle/rCE5KuQ7Wfrd1UnS7)
---
# Documentation
| **Quick Links** |
|----------------------------------------------------------------------------------------------------------------------------|
| [Installation and Updates][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs] |
| **Quick Links** |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Installation and Updates][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs] |
# Installation
To get started with Invoke, [Download the Installer](https://www.invoke.com/downloads).
For detailed step by step instructions, or for instructions on manual/docker installations, visit our documentation on [Installation and Updates][installation docs]
To get started with Invoke, [Download the Launcher](https://github.com/invoke-ai/launcher/releases/latest).
## Troubleshooting, FAQ and Support
@@ -90,7 +85,6 @@ Original portions of the software are Copyright © 2024 by respective contributo
[features docs]: https://invoke-ai.github.io/InvokeAI/features/database/
[faq]: https://invoke-ai.github.io/InvokeAI/faq/
[contributors]: https://invoke-ai.github.io/InvokeAI/contributing/contributors/
[invoke.com]: https://www.invoke.com/about
[github issues]: https://github.com/invoke-ai/InvokeAI/issues
[docs home]: https://invoke-ai.github.io/InvokeAI
[installation docs]: https://invoke-ai.github.io/InvokeAI/installation/

View File

@@ -22,6 +22,10 @@
## GPU_DRIVER can be set to either `cuda` or `rocm` to enable GPU support in the container accordingly.
# GPU_DRIVER=cuda #| rocm
## If you are using ROCM, you will need to ensure that the render group within the container and the host system use the same group ID.
## To obtain the group ID of the render group on the host system, run `getent group render` and grab the number.
# RENDER_GROUP_ID=
## CONTAINER_UID can be set to the UID of the user on the host system that should own the files in the container.
## It is usually not necessary to change this. Use `id -u` on the host system to find the UID.
# CONTAINER_UID=1000

View File

@@ -43,7 +43,6 @@ ENV \
UV_MANAGED_PYTHON=1 \
UV_LINK_MODE=copy \
UV_PROJECT_ENVIRONMENT=/opt/venv \
UV_INDEX="https://download.pytorch.org/whl/cu124" \
INVOKEAI_ROOT=/invokeai \
INVOKEAI_HOST=0.0.0.0 \
INVOKEAI_PORT=9090 \
@@ -74,19 +73,17 @@ RUN --mount=type=cache,target=/root/.cache/uv \
--mount=type=bind,source=uv.lock,target=uv.lock \
# this is just to get the package manager to recognize that the project exists, without making changes to the docker layer
--mount=type=bind,source=invokeai/version,target=invokeai/version \
if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then UV_INDEX="https://download.pytorch.org/whl/cpu"; \
elif [ "$GPU_DRIVER" = "rocm" ]; then UV_INDEX="https://download.pytorch.org/whl/rocm6.2"; \
fi && \
uv sync --frozen
# build patchmatch
RUN cd /usr/lib/$(uname -p)-linux-gnu/pkgconfig/ && ln -sf opencv4.pc opencv.pc
RUN python -c "from patchmatch import patch_match"
ulimit -n 30000 && \
uv sync --extra $GPU_DRIVER --frozen
# Link amdgpu.ids for ROCm builds
# contributed by https://github.com/Rubonnek
RUN mkdir -p "/opt/amdgpu/share/libdrm" &&\
ln -s "/usr/share/libdrm/amdgpu.ids" "/opt/amdgpu/share/libdrm/amdgpu.ids"
ln -s "/usr/share/libdrm/amdgpu.ids" "/opt/amdgpu/share/libdrm/amdgpu.ids" && groupadd render
# build patchmatch
RUN cd /usr/lib/$(uname -p)-linux-gnu/pkgconfig/ && ln -sf opencv4.pc opencv.pc
RUN python -c "from patchmatch import patch_match"
RUN mkdir -p ${INVOKEAI_ROOT} && chown -R ${CONTAINER_UID}:${CONTAINER_GID} ${INVOKEAI_ROOT}
@@ -105,8 +102,6 @@ COPY invokeai ${INVOKEAI_SRC}/invokeai
RUN --mount=type=cache,target=/root/.cache/uv \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
--mount=type=bind,source=uv.lock,target=uv.lock \
if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then UV_INDEX="https://download.pytorch.org/whl/cpu"; \
elif [ "$GPU_DRIVER" = "rocm" ]; then UV_INDEX="https://download.pytorch.org/whl/rocm6.2"; \
fi && \
uv pip install -e .
ulimit -n 30000 && \
uv pip install -e .[$GPU_DRIVER]

docker/Dockerfile-rocm-full

@@ -0,0 +1,136 @@
# syntax=docker/dockerfile:1.4
#### Web UI ------------------------------------
FROM docker.io/node:22-slim AS web-builder
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack use pnpm@8.x
RUN corepack enable
WORKDIR /build
COPY invokeai/frontend/web/ ./
RUN --mount=type=cache,target=/pnpm/store \
pnpm install --frozen-lockfile
RUN npx vite build
## Backend ---------------------------------------
FROM library/ubuntu:24.04
ARG DEBIAN_FRONTEND=noninteractive
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt \
--mount=type=cache,target=/var/lib/apt \
apt update && apt install -y --no-install-recommends \
ca-certificates \
git \
gosu \
libglib2.0-0 \
libgl1 \
libglx-mesa0 \
build-essential \
libopencv-dev \
libstdc++-10-dev \
wget
ENV \
PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1 \
VIRTUAL_ENV=/opt/venv \
INVOKEAI_SRC=/opt/invokeai \
PYTHON_VERSION=3.12 \
UV_PYTHON=3.12 \
UV_COMPILE_BYTECODE=1 \
UV_MANAGED_PYTHON=1 \
UV_LINK_MODE=copy \
UV_PROJECT_ENVIRONMENT=/opt/venv \
INVOKEAI_ROOT=/invokeai \
INVOKEAI_HOST=0.0.0.0 \
INVOKEAI_PORT=9090 \
PATH="/opt/venv/bin:$PATH" \
CONTAINER_UID=${CONTAINER_UID:-1000} \
CONTAINER_GID=${CONTAINER_GID:-1000}
ARG GPU_DRIVER=cuda
# Install `uv` for package management
COPY --from=ghcr.io/astral-sh/uv:0.6.9 /uv /uvx /bin/
# Install python & allow non-root user to use it by traversing the /root dir without read permissions
RUN --mount=type=cache,target=/root/.cache/uv \
uv python install ${PYTHON_VERSION} && \
# chmod --recursive a+rX /root/.local/share/uv/python
chmod 711 /root
WORKDIR ${INVOKEAI_SRC}
# Install project's dependencies as a separate layer so they aren't rebuilt every commit.
# bind-mount instead of copy to defer adding sources to the image until next layer.
#
# NOTE: there are no pytorch builds for arm64 + cuda, only cpu
# x86_64/CUDA is the default
RUN --mount=type=cache,target=/root/.cache/uv \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
--mount=type=bind,source=uv.lock,target=uv.lock \
# this is just to get the package manager to recognize that the project exists, without making changes to the docker layer
--mount=type=bind,source=invokeai/version,target=invokeai/version \
ulimit -n 30000 && \
uv sync --extra $GPU_DRIVER --frozen
RUN --mount=type=cache,target=/var/cache/apt \
--mount=type=cache,target=/var/lib/apt \
if [ "$GPU_DRIVER" = "rocm" ]; then \
wget -O /tmp/amdgpu-install.deb \
https://repo.radeon.com/amdgpu-install/6.3.4/ubuntu/noble/amdgpu-install_6.3.60304-1_all.deb && \
apt install -y /tmp/amdgpu-install.deb && \
apt update && \
amdgpu-install --usecase=rocm -y && \
apt-get autoclean && \
apt clean && \
rm -rf /tmp/* /var/tmp/* && \
usermod -a -G render ubuntu && \
usermod -a -G video ubuntu && \
echo "\\n/opt/rocm/lib\\n/opt/rocm/lib64" >> /etc/ld.so.conf.d/rocm.conf && \
ldconfig && \
update-alternatives --auto rocm; \
fi
## Heathen711: Leaving this for review input, will remove before merge
# RUN --mount=type=cache,target=/var/cache/apt \
# --mount=type=cache,target=/var/lib/apt \
# if [ "$GPU_DRIVER" = "rocm" ]; then \
# groupadd render && \
# usermod -a -G render ubuntu && \
# usermod -a -G video ubuntu; \
# fi
## Link amdgpu.ids for ROCm builds
## contributed by https://github.com/Rubonnek
# RUN mkdir -p "/opt/amdgpu/share/libdrm" &&\
# ln -s "/usr/share/libdrm/amdgpu.ids" "/opt/amdgpu/share/libdrm/amdgpu.ids"
# build patchmatch
RUN cd /usr/lib/$(uname -p)-linux-gnu/pkgconfig/ && ln -sf opencv4.pc opencv.pc
RUN python -c "from patchmatch import patch_match"
RUN mkdir -p ${INVOKEAI_ROOT} && chown -R ${CONTAINER_UID}:${CONTAINER_GID} ${INVOKEAI_ROOT}
COPY docker/docker-entrypoint.sh ./
ENTRYPOINT ["/opt/invokeai/docker-entrypoint.sh"]
CMD ["invokeai-web"]
# --link requires BuildKit w/ dockerfile syntax 1.4, does not work with podman
COPY --link --from=web-builder /build/dist ${INVOKEAI_SRC}/invokeai/frontend/web/dist
# add sources last to minimize image changes on code changes
COPY invokeai ${INVOKEAI_SRC}/invokeai
# this should not increase image size because we've already installed dependencies
# in a previous layer
RUN --mount=type=cache,target=/root/.cache/uv \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
--mount=type=bind,source=uv.lock,target=uv.lock \
ulimit -n 30000 && \
uv pip install -e .[$GPU_DRIVER]
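For reference, a hypothetical manual build of this image from the repository root might look like the command below. The image tag is illustrative, and BuildKit is required because of the `# syntax=docker/dockerfile:1.4` header and the cache mounts; the project's compose/run scripts remain the supported entry point.

```sh
# Illustrative only -- tag and context path are placeholders.
docker build \
  -f docker/Dockerfile-rocm-full \
  --build-arg GPU_DRIVER=rocm \
  -t invokeai-rocm-full:local \
  .
```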


@@ -47,8 +47,9 @@ services:
invokeai-rocm:
<<: *invokeai
devices:
- /dev/kfd:/dev/kfd
- /dev/dri:/dev/dri
environment:
- AMD_VISIBLE_DEVICES=all
- RENDER_GROUP_ID=${RENDER_GROUP_ID}
runtime: amd
profiles:
- rocm
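A minimal sketch of bringing this service up, assuming the `rocm` profile shown above and that `RENDER_GROUP_ID` is exported on the host (the `getent` pipeline is just one way to read the group ID):

```sh
# Illustrative invocation; adjust the service name if your compose file differs.
export RENDER_GROUP_ID="$(getent group render | cut -d: -f3)"
docker compose --profile rocm up -d invokeai-rocm
docker compose --profile rocm logs -f
```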


@@ -21,6 +21,17 @@ _=$(id ${USER} 2>&1) || useradd -u ${USER_ID} ${USER}
# ensure the UID is correct
usermod -u ${USER_ID} ${USER} 1>/dev/null
## ROCM specific configuration
# render group within the container must match the host render group
# otherwise the container will not be able to access the host GPU.
if [[ -v "RENDER_GROUP_ID" ]] && [[ ! -z "${RENDER_GROUP_ID}" ]]; then
# ensure the render group exists
groupmod -g ${RENDER_GROUP_ID} render
usermod -a -G render ${USER}
usermod -a -G video ${USER}
fi
### Set the $PUBLIC_KEY env var to enable SSH access.
# We do not install openssh-server in the image by default to avoid bloat.
# but it is useful to have the full SSH server e.g. on Runpod.


@@ -13,7 +13,7 @@ run() {
# parse .env file for build args
build_args=$(awk '$1 ~ /=[^$]/ && $0 !~ /^#/ {print "--build-arg " $0 " "}' .env) &&
profile="$(awk -F '=' '/GPU_DRIVER/ {print $2}' .env)"
profile="$(awk -F '=' '/GPU_DRIVER=/ {print $2}' .env)"
# default to 'cuda' profile
[[ -z "$profile" ]] && profile="cuda"
@@ -30,7 +30,7 @@ run() {
printf "%s\n" "starting service $service_name"
docker compose --profile "$profile" up -d "$service_name"
docker compose logs -f
docker compose --profile "$profile" logs -f
}
run


@@ -16,7 +16,9 @@ The launcher uses GitHub as the source of truth for available releases.
## General Prep
Make a developer call-out for PRs to merge. Merge and test things out. Bump the version by editing `invokeai/version/invokeai_version.py`.
Make a developer call-out for PRs to merge. Merge and test things
out. Create a branch with a name like user/chore/vX.X.X-prep and bump the version by editing
`invokeai/version/invokeai_version.py` and commit locally.
## Release Workflow
@@ -26,14 +28,14 @@ It is triggered on **tag push**, when the tag matches `v*`.
### Triggering the Workflow
Ensure all commits that should be in the release are merged, and you have pulled them locally.
Double-check that you have checked out the commit that will represent the release (typically the latest commit on `main`).
Ensure all commits that should be in the release are merged into this branch, and that you have pulled them locally.
Run `make tag-release` to tag the current commit and kick off the workflow. You will be prompted to provide a message - use the version specifier.
If this version's tag already exists for some reason (maybe you had to make a last minute change), the script will overwrite it.
Push the commit to trigger the workflow.
> In case you cannot use the Make target, the release may also be dispatched [manually] via GH.
### Workflow Jobs and Process
@@ -89,7 +91,7 @@ The publish jobs will not run if any of the previous jobs fail.
They use [GitHub environments], which are configured as [trusted publishers] on PyPI.
Both jobs require a @hipsterusername or @psychedelicious to approve them from the workflow's **Summary** tab.
Both jobs require a @lstein or @blessedcoolant to approve them from the workflow's **Summary** tab.
- Click the **Review deployments** button
- Select the environment (either `testpypi` or `pypi` - typically you select both)
@@ -101,7 +103,7 @@ Both jobs require a @hipsterusername or @psychedelicious to approve them from th
Check the [python infrastructure status page] for incidents.
If there are no incidents, contact @hipsterusername or @lstein, who have owner access to GH and PyPI, to see if access has expired or something like that.
If there are no incidents, contact @lstein or @blessedcoolant, who have owner access to GH and PyPI, to see if access has expired or something like that.
#### `publish-testpypi` Job


@@ -0,0 +1,295 @@
# Hotkeys System
This document describes the technical implementation of the customizable hotkeys system in InvokeAI.
> **Note:** For user-facing documentation on how to use customizable hotkeys, see [Hotkeys Feature Documentation](../features/hotkeys.md).
## Overview
The hotkeys system allows users to customize keyboard shortcuts throughout the application. All hotkeys are:
- Centrally defined and managed
- Customizable by users
- Persisted across sessions
- Type-safe and validated
## Architecture
The customizable hotkeys feature is built on top of the existing hotkey system with the following components:
### 1. Hotkeys State Slice (`hotkeysSlice.ts`)
Location: `invokeai/frontend/web/src/features/system/store/hotkeysSlice.ts`
**Responsibilities:**
- Stores custom hotkey mappings in Redux state
- Persisted to IndexedDB using `redux-remember`
- Provides actions to change a hotkey, reset an individual hotkey, or reset all hotkeys
**State Shape:**
```typescript
{
_version: 1,
customHotkeys: {
'app.invoke': ['mod+enter'],
'canvas.undo': ['mod+z'],
// ...
}
}
```
**Actions:**
- `hotkeyChanged(id, hotkeys)` - Update a single hotkey
- `hotkeyReset(id)` - Reset a single hotkey to default
- `allHotkeysReset()` - Reset all hotkeys to defaults
### 2. useHotkeyData Hook (`useHotkeyData.ts`)
Location: `invokeai/frontend/web/src/features/system/components/HotkeysModal/useHotkeyData.ts`
**Responsibilities:**
- Defines all default hotkeys
- Merges default hotkeys with custom hotkeys from the store
- Returns the effective hotkeys that should be used throughout the app
- Provides platform-specific key translations (Ctrl/Cmd, Alt/Option)
**Key Functions:**
- `useHotkeyData()` - Returns all hotkeys organized by category
- `useRegisteredHotkeys()` - Hook to register a hotkey in a component
### 3. HotkeyEditor Component (`HotkeyEditor.tsx`)
Location: `invokeai/frontend/web/src/features/system/components/HotkeysModal/HotkeyEditor.tsx`
**Features:**
- Inline editor with input field
- Modifier buttons (Mod, Ctrl, Shift, Alt) for quick insertion
- Live preview of hotkey combinations
- Validation with visual feedback
- Help tooltip with syntax examples
- Save/cancel/reset buttons
**Smart Features:**
- Automatic `+` insertion between modifiers
- Cursor position preservation
- Validation prevents invalid combinations (e.g., modifier-only keys)
### 4. HotkeysModal Component (`HotkeysModal.tsx`)
Location: `invokeai/frontend/web/src/features/system/components/HotkeysModal/HotkeysModal.tsx`
**Features:**
- View Mode / Edit Mode toggle
- Search functionality
- Category-based organization
- Shows HotkeyEditor components when in edit mode
- "Reset All to Default" button in edit mode
## Data Flow
```
┌─────────────────────────────────────────────────────────────┐
│ 1. User opens Hotkeys Modal │
│ 2. User clicks "Edit Mode" button │
│ 3. User clicks edit icon next to a hotkey │
│ 4. User enters new hotkey(s) using editor │
│ 5. User clicks save or presses Enter │
│ 6. Custom hotkey stored via hotkeyChanged() action │
│ 7. Redux state persisted to IndexedDB (redux-remember) │
│ 8. useHotkeyData() hook picks up the change │
│ 9. All components using useRegisteredHotkeys() get update │
└─────────────────────────────────────────────────────────────┘
```
## Hotkey Format
Hotkeys use the format from the `react-hotkeys-hook` library:
- **Modifiers:** `mod`, `ctrl`, `shift`, `alt`, `meta`
- **Keys:** Letters, numbers, function keys, special keys
- **Separator:** `+` between keys in a combination
- **Multiple hotkeys:** Comma-separated (e.g., `mod+a, ctrl+b`)
**Examples:**
- `mod+enter` - Mod key + Enter
- `shift+x` - Shift + X
- `ctrl+shift+a` - Control + Shift + A
- `f1, f2` - F1 or F2 (alternatives)
## Developer Guide
### Using Hotkeys in Components
To use a hotkey in a component:
```tsx
import { useCallback } from 'react';

import { useRegisteredHotkeys } from 'features/system/components/HotkeysModal/useHotkeyData';
const MyComponent = () => {
const handleAction = useCallback(() => {
// Your action here
}, []);
// This automatically uses custom hotkeys if configured
useRegisteredHotkeys({
id: 'myAction',
category: 'app', // or 'canvas', 'viewer', 'gallery', 'workflows'
callback: handleAction,
options: { enabled: true, preventDefault: true },
dependencies: [handleAction]
});
// ...
};
```
**Options:**
- `enabled` - Whether the hotkey is active
- `preventDefault` - Prevent default browser behavior
- `enableOnFormTags` - Allow hotkey in form elements (default: false)
### Adding New Hotkeys
To add a new hotkey to the system:
#### 1. Add Translation Strings
In `invokeai/frontend/web/public/locales/en.json`:
```json
{
"hotkeys": {
"app": {
"myAction": {
"title": "My Action",
"desc": "Description of what this hotkey does"
}
}
}
}
```
#### 2. Register the Hotkey
In `invokeai/frontend/web/src/features/system/components/HotkeysModal/useHotkeyData.ts`:
```typescript
// Inside the appropriate category builder function
addHotkey('app', 'myAction', ['mod+k']); // Default binding
```
#### 3. Use the Hotkey
In your component:
```typescript
useRegisteredHotkeys({
id: 'myAction',
category: 'app',
callback: handleMyAction,
options: { enabled: true },
dependencies: [handleMyAction]
});
```
### Hotkey Categories
Current categories:
- **app** - Global application hotkeys
- **canvas** - Canvas/drawing operations
- **viewer** - Image viewer operations
- **gallery** - Gallery/image grid operations
- **workflows** - Node workflow editor
To add a new category, update `useHotkeyData.ts` and add translations.
## Testing
Tests are located in `invokeai/frontend/web/src/features/system/store/hotkeysSlice.test.ts`.
**Test Coverage:**
- Adding custom hotkeys
- Updating existing custom hotkeys
- Resetting individual hotkeys
- Resetting all hotkeys
- State persistence and migration
Run tests with:
```bash
cd invokeai/frontend/web
pnpm test:no-watch
```
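Assuming the `test:no-watch` script wraps `vitest run`, you can usually narrow the run to the hotkeys slice by passing a path filter; the exact filter syntax depends on the project's test script.

```bash
# Path filter is an assumption about the test script's behavior.
cd invokeai/frontend/web
pnpm test:no-watch src/features/system/store/hotkeysSlice.test.ts
```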
## Persistence
Custom hotkeys are persisted using the same mechanism as other app settings:
- Stored in Redux state under the `hotkeys` slice
- Persisted to IndexedDB via `redux-remember`
- Automatically loaded when the app starts
- Survives page refreshes and browser restarts
- Includes migration support for state schema changes
**State Location:**
- IndexedDB database: `invoke`
- Store key: `hotkeys`
## Dependencies
- **react-hotkeys-hook** (v4.5.0) - Core hotkey handling
- **@reduxjs/toolkit** - State management
- **redux-remember** - Persistence
- **zod** - State validation
## Best Practices
1. **Use `mod` instead of `ctrl`** - Automatically maps to Cmd on Mac, Ctrl elsewhere
2. **Provide descriptive translations** - Help users understand what each hotkey does
3. **Avoid conflicts** - Check existing hotkeys before adding new ones
4. **Use preventDefault** - Prevent browser default behavior when appropriate
5. **Check enabled state** - Only activate hotkeys when the action is available
6. **Use dependencies correctly** - Ensure callbacks are stable with useCallback
## Common Patterns
### Conditional Hotkeys
```typescript
useRegisteredHotkeys({
id: 'save',
category: 'app',
callback: handleSave,
options: {
enabled: hasUnsavedChanges && !isLoading, // Only when valid
preventDefault: true
},
dependencies: [hasUnsavedChanges, isLoading, handleSave]
});
```
### Multiple Hotkeys for Same Action
```typescript
// In useHotkeyData.ts
addHotkey('canvas', 'redo', ['mod+shift+z', 'mod+y']); // Two alternatives
```
### Focus-Scoped Hotkeys
```typescript
import { useFocusRegion } from 'common/hooks/focus';
import { useRegisteredHotkeys } from 'features/system/components/HotkeysModal/useHotkeyData';
const MyComponent = () => {
const focusRegionRef = useFocusRegion('myRegion');
// Hotkey only works when this region has focus
useRegisteredHotkeys({
id: 'myAction',
category: 'app',
callback: handleAction,
options: { enabled: true }
});
return <div ref={focusRegionRef}>...</div>;
};
```


@@ -265,7 +265,7 @@ If the key is unrecognized, this call raises an
#### exists(key) -> AnyModelConfig
Returns True if a model with the given key exists in the databsae.
Returns True if a model with the given key exists in the database.
#### search_by_path(path) -> AnyModelConfig
@@ -718,7 +718,7 @@ When downloading remote models is implemented, additional
configuration information, such as list of trigger terms, will be
retrieved from the HuggingFace and Civitai model repositories.
The probed values can be overriden by providing a dictionary in the
The probed values can be overridden by providing a dictionary in the
optional `config` argument passed to `import_model()`. You may provide
overriding values for any of the model's configuration
attributes. Here is an example of setting the
@@ -841,7 +841,7 @@ variable.
#### installer.start(invoker)
The `start` method is called by the API intialization routines when
The `start` method is called by the API initialization routines when
the API starts up. Its effect is to call `sync_to_config()` to
synchronize the model record store database with what's currently on
disk.

File diff suppressed because it is too large


@@ -0,0 +1,64 @@
# Pull Request Merge Policy
This document outlines the process for reviewing and merging pull requests (PRs) into the InvokeAI repository.
## Review Process
### 1. Assignment
One of the repository maintainers will assign collaborators to review a pull request. The assigned reviewer(s) will be responsible for conducting the code review.
### 2. Review and Iteration
The assignee is responsible for:
- Reviewing the PR thoroughly
- Providing constructive feedback
- Iterating with the PR author until the assignee is satisfied that the PR is fit to merge
- Ensuring the PR meets code quality standards, follows project conventions, and doesn't introduce bugs or regressions
### 3. Approval and Notification
Once the assignee is satisfied with the PR:
- The assignee approves the PR
- The assignee alerts one of the maintainers that the PR is ready for merge using the **#request-reviews Discord channel**
### 4. Final Merge
One of the maintainers is responsible for:
- Performing a final check of the PR
- Merging the PR into the appropriate branch
**Important:** Collaborators are strongly discouraged from merging PRs on their own, except in case of emergency (e.g., critical bug fix and no maintainer is available).
### 5. Release Policy
Once a feature release candidate is published, no feature PRs are to
be merged into main. Only bugfixes are allowed until the final
release.
## Best Practices
### Clean Commit History
To encourage a clean development log, PR authors are encouraged to use `git rebase -i` to suppress trivial commit messages (e.g., `ruff` and `prettier` formatting fixes) after the PR is accepted but before it is merged.
### Merge Strategy
The maintainer will perform either a **3-way merge** or **squash merge** when merging a PR into the `main` branch. This approach helps avoid rebase conflict hell and maintains a cleaner project history.
### Attribution
The PR author should reference any papers, source code or
documentation that they used while creating the code both in the PR
and as comments in the code itself. If there are any licensing
restrictions, these should be linked to and/or reproduced in the repo
root.
## Summary
This policy ensures that:
- All PRs receive proper review from assigned collaborators
- Maintainers have final oversight before code enters the main branch
- The commit history remains clean and meaningful
- Merge conflicts are minimized through appropriate merge strategies


@@ -16,7 +16,7 @@ We thank [all contributors](https://github.com/invoke-ai/InvokeAI/graphs/contrib
- @psychedelicious (Spencer Mabrito) - Web Team Leader
- @joshistoast (Josh Corbett) - Web Development
- @cheerio (Mary Rogers) - Lead Engineer & Web App Development
- @ebr (Eugene Brodsky) - Cloud/DevOps/Sofware engineer; your friendly neighbourhood cluster-autoscaler
- @ebr (Eugene Brodsky) - Cloud/DevOps/Software engineer; your friendly neighbourhood cluster-autoscaler
- @sunija - Standalone version
- @brandon (Brandon Rising) - Platform, Infrastructure, Backend Systems
- @ryanjdick (Ryan Dick) - Machine Learning & Training

docs/features/hotkeys.md

@@ -0,0 +1,80 @@
# Customizable Hotkeys
InvokeAI allows you to customize all keyboard shortcuts (hotkeys) to match your workflow preferences.
## Features
- **View All Hotkeys**: See all available keyboard shortcuts in one place
- **Customize Any Hotkey**: Change any shortcut to your preference
- **Multiple Bindings**: Assign multiple key combinations to the same action
- **Smart Validation**: Built-in validation prevents invalid combinations
- **Persistent Settings**: Your custom hotkeys are saved and restored across sessions
- **Easy Reset**: Reset individual hotkeys or all hotkeys back to defaults
## How to Use
### Opening the Hotkeys Modal
Press `Shift+?` or click the keyboard icon in the application to open the Hotkeys Modal.
### Viewing Hotkeys
In **View Mode** (default), you can:
- Browse all available hotkeys organized by category (App, Canvas, Gallery, Workflows, etc.)
- Search for specific hotkeys using the search bar
- See the current key combination for each action
### Customizing Hotkeys
1. Click the **Edit Mode** button at the bottom of the Hotkeys Modal
2. Find the hotkey you want to change
3. Click the **pencil icon** next to it
4. The editor will appear with:
- **Input field**: Enter your new hotkey combination
- **Modifier buttons**: Quick-insert Mod, Ctrl, Shift, Alt keys
- **Help icon** (?): Shows syntax examples and valid keys
- **Live preview**: See how your hotkey will look
5. Enter your new hotkey using the format:
- `mod+a` - Mod key + A (Mod = Ctrl on Windows/Linux, Cmd on Mac)
- `ctrl+shift+k` - Multiple modifiers
- `f1` - Function keys
- `mod+enter, ctrl+enter` - Multiple alternatives (separated by comma)
6. Click the **checkmark** or press Enter to save
7. Click the **X** or press Escape to cancel
### Resetting Hotkeys
**Reset a single hotkey:**
- Click the counter-clockwise arrow icon that appears next to customized hotkeys
**Reset all hotkeys:**
- In Edit Mode, click the **Reset All to Default** button at the bottom
### Hotkey Format Reference
**Valid Modifiers:**
- `mod` - Context-aware: Ctrl (Windows/Linux) or Cmd (Mac)
- `ctrl` - Control key
- `shift` - Shift key
- `alt` - Alt key (Option on Mac)
**Valid Keys:**
- Letters: `a-z`
- Numbers: `0-9`
- Function keys: `f1-f12`
- Special keys: `enter`, `space`, `tab`, `backspace`, `delete`, `escape`
- Arrow keys: `up`, `down`, `left`, `right`
- And more...
**Examples:**
- `mod+s` - Save action
- `ctrl+shift+p` - Command palette
- `f5, mod+r` - Two alternatives for refresh
- `mod+` - Invalid (no key after modifier)
- `shift+ctrl+` - Invalid (ends with modifier)
## For Developers
For technical implementation details, architecture, and how to add new hotkeys to the system, see the [Hotkeys Developer Documentation](../contributing/HOTKEYS.md).


@@ -69,34 +69,34 @@ The following commands vary depending on the version of Invoke being installed a
- If you have an Nvidia 20xx series GPU or older, use `invokeai[xformers]`.
- If you have an Nvidia 30xx series GPU or newer, or do not have an Nvidia GPU, use `invokeai`.
7. Determine the `PyPI` index URL to use for installation, if any. This is necessary to get the right version of torch installed.
7. Determine the torch backend to use for installation, if any. This is necessary to get the right version of torch installed. This is achieved by using [UV's built-in torch support](https://docs.astral.sh/uv/guides/integration/pytorch/#automatic-backend-selection).
=== "Invoke v5.12 and later"
- If you are on Windows or Linux with an Nvidia GPU, use `https://download.pytorch.org/whl/cu128`.
- If you are on Linux with no GPU, use `https://download.pytorch.org/whl/cpu`.
- If you are on Linux with an AMD GPU, use `https://download.pytorch.org/whl/rocm6.2.4`.
- **In all other cases, do not use an index.**
- If you are on Windows or Linux with an Nvidia GPU, use `--torch-backend=cu128`.
- If you are on Linux with no GPU, use `--torch-backend=cpu`.
- If you are on Linux with an AMD GPU, use `--torch-backend=rocm6.3`.
- **In all other cases, do not use a torch backend.**
=== "Invoke v5.10.0 to v5.11.0"
- If you are on Windows or Linux with an Nvidia GPU, use `https://download.pytorch.org/whl/cu126`.
- If you are on Linux with no GPU, use `https://download.pytorch.org/whl/cpu`.
- If you are on Linux with an AMD GPU, use `https://download.pytorch.org/whl/rocm6.2.4`.
- If you are on Windows or Linux with an Nvidia GPU, use `--torch-backend=cu126`.
- If you are on Linux with no GPU, use `--torch-backend=cpu`.
- If you are on Linux with an AMD GPU, use `--torch-backend=rocm6.2.4`.
- **In all other cases, do not use an index.**
=== "Invoke v5.0.0 to v5.9.1"
- If you are on Windows with an Nvidia GPU, use `https://download.pytorch.org/whl/cu124`.
- If you are on Linux with no GPU, use `https://download.pytorch.org/whl/cpu`.
- If you are on Linux with an AMD GPU, use `https://download.pytorch.org/whl/rocm6.1`.
- If you are on Windows with an Nvidia GPU, use `--torch-backend=cu124`.
- If you are on Linux with no GPU, use `--torch-backend=cpu`.
- If you are on Linux with an AMD GPU, use `--torch-backend=rocm6.1`.
- **In all other cases, do not use an index.**
=== "Invoke v4"
- If you are on Windows with an Nvidia GPU, use `https://download.pytorch.org/whl/cu124`.
- If you are on Linux with no GPU, use `https://download.pytorch.org/whl/cpu`.
- If you are on Linux with an AMD GPU, use `https://download.pytorch.org/whl/rocm5.2`.
- If you are on Windows with an Nvidia GPU, use `--torch-backend=cu124`.
- If you are on Linux with no GPU, use `--torch-backend=cpu`.
- If you are on Linux with an AMD GPU, use `--torch-backend=rocm5.2`.
- **In all other cases, do not use an index.**
8. Install the `invokeai` package. Substitute the package specifier and version.
@@ -105,10 +105,10 @@ The following commands vary depending on the version of Invoke being installed a
uv pip install <PACKAGE_SPECIFIER>==<VERSION> --python 3.12 --python-preference only-managed --force-reinstall
```
If you determined you needed to use a `PyPI` index URL in the previous step, you'll need to add `--index=<INDEX_URL>` like this:
If you determined you needed to use a torch backend in the previous step, you'll need to set the backend like this:
```sh
uv pip install <PACKAGE_SPECIFIER>==<VERSION> --python 3.12 --python-preference only-managed --index=<INDEX_URL> --force-reinstall
uv pip install <PACKAGE_SPECIFIER>==<VERSION> --python 3.12 --python-preference only-managed --torch-backend=<BACKEND> --force-reinstall
```
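As a concrete illustration (the package specifier, version, and backend are placeholders to adapt using the steps above), a Linux machine with an Nvidia GPU on a recent release might use:

```sh
# Hypothetical example -- substitute your own package specifier, version, and backend.
uv pip install invokeai==6.0.0 --python 3.12 --python-preference only-managed --torch-backend=cu128 --force-reinstall
```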
9. Deactivate and reactivate your venv so that the invokeai-specific commands become available in the environment:


@@ -70,7 +70,7 @@ Prior to installing PyPatchMatch, you need to take the following steps:
`from patchmatch import patch_match`: It should look like the following:
```py
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux
Python 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from patchmatch import patch_match
Compiling and loading c extensions from "/home/lstein/Projects/InvokeAI/.invokeai-env/src/pypatchmatch/patchmatch".


@@ -25,38 +25,65 @@ Hardware requirements vary significantly depending on model and image output siz
- Memory: At least 16GB RAM.
- Disk: 10GB for base installation plus 100GB for models.
=== "FLUX - 1024×1024"
=== "FLUX.1 - 1024×1024"
- GPU: Nvidia 20xx series or later, 10GB+ VRAM.
- Memory: At least 32GB RAM.
- Disk: 10GB for base installation plus 200GB for models.
=== "FLUX.2 Klein - 1024×1024"
- GPU: Nvidia 20xx series or later, 6GB+ VRAM for GGUF Q4 quantized models, 12GB+ for full precision.
- Memory: At least 16GB RAM.
- Disk: 10GB for base installation plus 20GB for models.
=== "Z-Image Turbo - 1024x1024"
- GPU: Nvidia 20xx series or later, 8GB+ VRAM for the Q4_K quantized model. 16GB+ needed for the Q8 or BF16 models.
- Memory: At least 16GB RAM.
- Disk: 10GB for base installation plus 35GB for models.
More detail on system requirements can be found [here](./requirements.md).
## Step 2: Download
## Step 2: Download and Set Up the Launcher
Download the most recent launcher for your operating system:
The Launcher manages your Invoke install. Follow these instructions to download and set up the Launcher.
- [Download for Windows](https://download.invoke.ai/Invoke%20Community%20Edition.exe)
- [Download for macOS](https://download.invoke.ai/Invoke%20Community%20Edition.dmg)
- [Download for Linux](https://download.invoke.ai/Invoke%20Community%20Edition.AppImage)
!!! info "Instructions for each OS"
## Step 3: Install or Update
=== "Windows"
Run the launcher you just downloaded, click **Install** and follow the instructions to get set up.
- [Download for Windows](https://github.com/invoke-ai/launcher/releases/latest/download/Invoke.Community.Edition.Setup.latest.exe)
- Run the `EXE` to install the Launcher and start it.
- A desktop shortcut will be created; use this to run the Launcher in the future.
- You can delete the `EXE` file you downloaded.
=== "macOS"
- [Download for macOS](https://github.com/invoke-ai/launcher/releases/latest/download/Invoke.Community.Edition-latest-arm64.dmg)
- Open the `DMG` and drag the app into `Applications`.
- Run the Launcher using its entry in `Applications`.
- You can delete the `DMG` file you downloaded.
=== "Linux"
- [Download for Linux](https://github.com/invoke-ai/launcher/releases/latest/download/Invoke.Community.Edition-latest.AppImage)
- You may need to edit the `AppImage` file properties and make it executable; a terminal equivalent is sketched below.
- Optionally move the file to a location that does not require admin privileges and add a desktop shortcut for it.
- Run the Launcher by double-clicking the `AppImage` or the shortcut you made.
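If you prefer the terminal, a rough equivalent is the snippet below, assuming the file kept its default download name:

```sh
# Assumes the default download filename; adjust if yours differs.
chmod +x Invoke.Community.Edition-latest.AppImage
./Invoke.Community.Edition-latest.AppImage
```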
## Step 3: Install Invoke
Run the Launcher you just set up if you haven't already. Click **Install** and follow the instructions to install (or update) Invoke.
If you have an existing Invoke installation, you can select it and let the launcher manage the install. You'll be able to update or launch the installation.
!!! warning "Problem running the launcher on macOS"
!!! tip "Updating"
macOS may not allow you to run the launcher. We are working to resolve this by signing the launcher executable. Until that is done, you can manually flag the launcher as safe:
The Launcher will check for updates for itself _and_ Invoke.
- Open the **Invoke Community Edition.dmg** file.
- Drag the launcher to **Applications**.
- Open a terminal.
- Run `xattr -d 'com.apple.quarantine' /Applications/Invoke\ Community\ Edition.app`.
You should now be able to run the launcher.
- When the Launcher detects an update is available for itself, you'll get a small popup window. Click through this and the Launcher will update itself.
- When the Launcher detects an update for Invoke, you'll see a small green alert in the Launcher. Click that and follow the instructions to update Invoke.
## Step 4: Launch


@@ -25,12 +25,29 @@ The requirements below are rough guidelines for best performance. GPUs with less
- Memory: At least 16GB RAM.
- Disk: 10GB for base installation plus 100GB for models.
=== "FLUX - 1024×1024"
=== "FLUX.1 - 1024×1024"
- GPU: Nvidia 20xx series or later, 10GB+ VRAM.
- Memory: At least 32GB RAM.
- Disk: 10GB for base installation plus 200GB for models.
=== "FLUX.2 Klein 4B - 1024×1024"
- GPU: Nvidia 30xx series or later, 12GB+ VRAM (e.g. RTX 3090, RTX 4070). FP8 version works with 8GB+ VRAM.
- Memory: At least 16GB RAM.
- Disk: 10GB for base installation plus 20GB for models (Diffusers format with encoder).
=== "FLUX.2 Klein 9B - 1024×1024"
- GPU: Nvidia 40xx series, 24GB+ VRAM (e.g. RTX 4090). FP8 version works with 12GB+ VRAM.
- Memory: At least 32GB RAM.
- Disk: 10GB for base installation plus 40GB for models (Diffusers format with encoder).
=== "Z-Image Turbo - 1024x1024"
- GPU: Nvidia 20xx series or later, 8GB+ VRAM for the Q4_K quantized model. 16GB+ needed for the Q8 or BF16 models.
- Memory: At least 16GB RAM.
- Disk: 10GB for base installation plus 35GB for models.
!!! info "`tmpfs` on Linux"
If your temporary directory is mounted as a `tmpfs`, ensure it has sufficient space.
@@ -41,7 +58,7 @@ The requirements below are rough guidelines for best performance. GPUs with less
You don't need to do this if you are installing with the [Invoke Launcher](./quick_start.md).
Invoke requires python 3.10 through 3.12. If you don't already have one of these versions installed, we suggest installing 3.12, as it will be supported for longer.
Invoke requires python 3.11 through 3.12. If you don't already have one of these versions installed, we suggest installing 3.12, as it will be supported for longer.
Check that your system has an up-to-date Python installed by running `python3 --version` in the terminal (Linux, macOS) or cmd/powershell (Windows).
@@ -56,7 +73,7 @@ Check that your system has an up-to-date Python installed by running `python3 --
=== "macOS"
- Install python with [an official installer].
- If model installs fail with a certificate error, you may need to run this command (changing the python version to match what you have installed): `/Applications/Python\ 3.10/Install\ Certificates.command`
- If model installs fail with a certificate error, you may need to run this command (changing the python version to match what you have installed): `/Applications/Python\ 3.11/Install\ Certificates.command`
- If you haven't already, you will need to install the XCode CLI Tools by running `xcode-select --install` in a terminal.
=== "Linux"


@@ -41,7 +41,7 @@ Nodes have a "Use Cache" option in their footer. This allows for performance imp
There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples).
### Noise
### Create Latent Noise
An initial noise tensor is necessary for the latent diffusion process. As a result, the Denoising node requires a noise node input.


@@ -4,21 +4,22 @@ These are nodes that have been developed by the community, for the community. If
If you'd like to submit a node for the community, please refer to the [node creation overview](contributingNodes.md).
To use a node, add the node to the `nodes` folder found in your InvokeAI install location.
To use a node, add the node to the `nodes` folder found in your InvokeAI install location.
The suggested method is to use `git clone` to clone the repository the node is found in. This allows for easy updates of the node in the future.
The suggested method is to use `git clone` to clone the repository the node is found in. This allows for easy updates of the node in the future.
If you'd prefer, you can also just download the whole node folder from the linked repository and add it to the `nodes` folder.
If you'd prefer, you can also just download the whole node folder from the linked repository and add it to the `nodes` folder.
To use a community workflow, download the `.json` node graph file and load it into Invoke AI via the **Load Workflow** button in the Workflow Editor.
To use a community workflow, download the `.json` node graph file and load it into Invoke AI via the **Load Workflow** button in the Workflow Editor.
- Community Nodes
+ [Anamorphic Tools](#anamorphic-tools)
+ [Adapters-Linked](#adapters-linked-nodes)
+ [Autostereogram](#autostereogram-nodes)
+ [Average Images](#average-images)
+ [BiRefNet Background Removal](#birefnet-background-removal)
+ [Clean Image Artifacts After Cut](#clean-image-artifacts-after-cut)
+ [Close Color Mask](#close-color-mask)
+ [Close Color Mask](#close-color-mask)
+ [Clothing Mask](#clothing-mask)
+ [Contrast Limited Adaptive Histogram Equalization](#contrast-limited-adaptive-histogram-equalization)
+ [Curves](#curves)
@@ -34,6 +35,7 @@ To use a community workflow, download the `.json` node graph file and load it in
+ [Hand Refiner with MeshGraphormer](#hand-refiner-with-meshgraphormer)
+ [Image and Mask Composition Pack](#image-and-mask-composition-pack)
+ [Image Dominant Color](#image-dominant-color)
+ [Image Export](#image-export)
+ [Image to Character Art Image Nodes](#image-to-character-art-image-nodes)
+ [Image Picker](#image-picker)
+ [Image Resize Plus](#image-resize-plus)
@@ -51,7 +53,7 @@ To use a community workflow, download the `.json` node graph file and load it in
+ [Prompt Tools](#prompt-tools)
+ [Remote Image](#remote-image)
+ [BriaAI Background Remove](#briaai-remove-background)
+ [Remove Background](#remove-background)
+ [Remove Background](#remove-background)
+ [Retroize](#retroize)
+ [Stereogram](#stereogram-nodes)
+ [Size Stepper Nodes](#size-stepper-nodes)
@@ -81,7 +83,7 @@ To use a community workflow, download the `.json` node graph file and load it in
- `IP-Adapter-Linked` - Collects IP-Adapter info to pass to other nodes.
- `T2I-Adapter-Linked` - Collects T2I-Adapter info to pass to other nodes.
Note: These are inherited from the core nodes so any update to the core nodes should be reflected in these.
Note: These are inherited from the core nodes so any update to the core nodes should be reflected in these.
**Node Link:** https://github.com/skunkworxdark/adapters-linked-nodes
@@ -103,6 +105,20 @@ Note: These are inherited from the core nodes so any update to the core nodes sh
**Node Link:** https://github.com/JPPhoto/average-images-node
--------------------------------
### BiRefNet Background Removal
**Description:** Remove image backgrounds using BiRefNet (Bilateral Reference Network), a high-quality segmentation model. Supports multiple model variants including standard, high-resolution, matting, portrait, and specialized models for different use cases.
**Node Link:** https://github.com/veeliks/invoke_birefnet
**Output Examples**
<section>
<img src="https://raw.githubusercontent.com/veeliks/invoke_birefnet/main/.readme/example_before_removal.png" width="49%" alt="Before background removal">
<img src="https://raw.githubusercontent.com/veeliks/invoke_birefnet/main/.readme/example_after_removal.png" width="49%" alt="After background removal">
</section>
--------------------------------
### Clean Image Artifacts After Cut
@@ -216,7 +232,7 @@ This includes 3 Nodes:
**Node Link:** https://github.com/mickr777/GPT2RandomPromptMaker
**Output Examples**
**Output Examples**
Generated Prompt: An enchanted weapon will be usable by any character regardless of their alignment.
@@ -231,7 +247,7 @@ Generated Prompt: An enchanted weapon will be usable by any character regardless
**Example Node Graph:** https://github.com/mildmisery/invokeai-GridToGifNode/blob/main/Grid%20to%20Gif%20Example%20Workflow.json
**Output Examples**
**Output Examples**
<img src="https://raw.githubusercontent.com/mildmisery/invokeai-GridToGifNode/main/input.png" width="300" />
<img src="https://raw.githubusercontent.com/mildmisery/invokeai-GridToGifNode/main/output.gif" width="300" />
@@ -293,7 +309,7 @@ This includes 15 Nodes:
- *Text Mask (simple 2D)* - create and position a white on black (or black on white) line of text using any font locally available to Invoke.
**Node Link:** https://github.com/dwringer/composition-nodes
</br><img src="https://raw.githubusercontent.com/dwringer/composition-nodes/main/composition_pack_overview.jpg" width="500" />
--------------------------------
@@ -306,6 +322,23 @@ Node Link: https://github.com/VeyDlin/image-dominant-color-node
View:
</br><img src="https://raw.githubusercontent.com/VeyDlin/image-dominant-color-node/master/.readme/node.png" width="500" />
--------------------------------
### Image Export
**Description:** Export images in multiple formats (AVIF, JPEG, PNG, TIFF, WebP) with format-specific compression and quality options.
**Node Link:** https://github.com/veeliks/invoke_image_export
**Nodes:**
<section>
<img src="https://raw.githubusercontent.com/veeliks/invoke_image_export/main/.readme/node_avif.png" width="19%" alt="Save Image as AVIF">
<img src="https://raw.githubusercontent.com/veeliks/invoke_image_export/main/.readme/node_jpeg.png" width="19%" alt="Save Image as JPEG">
<img src="https://raw.githubusercontent.com/veeliks/invoke_image_export/main/.readme/node_png.png" width="19%" alt="Save Image as PNG">
<img src="https://raw.githubusercontent.com/veeliks/invoke_image_export/main/.readme/node_tiff.png" width="19%" alt="Save Image as TIFF">
<img src="https://raw.githubusercontent.com/veeliks/invoke_image_export/main/.readme/node_webp.png" width="19%" alt="Save Image as WebP">
</section>
--------------------------------
### Image to Character Art Image Nodes
@@ -352,7 +385,7 @@ View:
**Node Link:** https://github.com/helix4u/load_video_frame
**Output Example:**
**Output Example:**
<img src="https://raw.githubusercontent.com/helix4u/load_video_frame/refs/heads/main/_git_assets/dance1736978273.gif" width="500" />
--------------------------------
@@ -364,7 +397,7 @@ View:
**Example Node Graph:** https://gitlab.com/srcrr/shift3d/-/raw/main/example-workflow.json?ref_type=heads&inline=false
**Output Examples**
**Output Examples**
<img src="https://gitlab.com/srcrr/shift3d/-/raw/main/example-1.png" width="300" />
<img src="https://gitlab.com/srcrr/shift3d/-/raw/main/example-2.png" width="300" />
@@ -386,13 +419,13 @@ View:
- Option to only transfer luminance channel.
- Option to save output as grayscale
A good use case for this node is to normalize the colors of an image that has been through the tiled scaling workflow of my XYGrid Nodes.
A good use case for this node is to normalize the colors of an image that has been through the tiled scaling workflow of my XYGrid Nodes.
See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/main/README.md
**Node Link:** https://github.com/skunkworxdark/match_histogram
**Output Examples**
**Output Examples**
<img src="https://github.com/skunkworxdark/match_histogram/assets/21961335/ed12f329-a0ef-444a-9bae-129ed60d6097" />
@@ -410,12 +443,12 @@ See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/mai
- `Metadata To Bool` - Extracts Bool types from metadata
- `Metadata To Model` - Extracts model types from metadata
- `Metadata To SDXL Model` - Extracts SDXL model types from metadata
- `Metadata To LoRAs` - Extracts Loras from metadata.
- `Metadata To LoRAs` - Extracts Loras from metadata.
- `Metadata To SDXL LoRAs` - Extracts SDXL Loras from metadata
- `Metadata To ControlNets` - Extracts ControNets from metadata
- `Metadata To IP-Adapters` - Extracts IP-Adapters from metadata
- `Metadata To T2I-Adapters` - Extracts T2I-Adapters from metadata
- `Denoise Latents + Metadata` - This is an inherited version of the existing `Denoise Latents` node but with a metadata input and output.
- `Denoise Latents + Metadata` - This is an inherited version of the existing `Denoise Latents` node but with a metadata input and output.
**Node Link:** https://github.com/skunkworxdark/metadata-linked-nodes
@@ -445,7 +478,7 @@ View:
**Example Node Graph:** https://github.com/Jonseed/Ollama-Node/blob/main/Ollama-Node-Flux-example.json
**View:**
**View:**
![ollama node](https://raw.githubusercontent.com/Jonseed/Ollama-Node/a3e7cdc55e394cb89c1ea7ed54e106c212c85e8c/ollama-node-screenshot.png)
@@ -454,7 +487,7 @@ View:
<img src="https://raw.githubusercontent.com/AIrjen/OneButtonPrompt_X_InvokeAI/refs/heads/main/images/background.png" width="800" />
**Description:** an extensive suite of auto prompt generation and prompt helper nodes based on extensive logic. Get creative with the best prompt generator in the world.
**Description:** an extensive suite of auto prompt generation and prompt helper nodes based on extensive logic. Get creative with the best prompt generator in the world.
The main node generates interesting prompts based on a set of parameters. There are also some additional nodes such as Auto Negative Prompt, One Button Artify, Create Prompt Variant and other cool prompt toys to play around with.
@@ -491,14 +524,14 @@ a Text-Generation-Webui instance (might work remotely too, but I never tried it)
This node works best with SDXL models, especially as the style can be described independently of the LLM's output.
--------------------------------
### Prompt Tools
### Prompt Tools
**Description:** A set of InvokeAI nodes that add general prompt (string) manipulation tools. Designed to accompany the `Prompts From File` node and other prompt generation nodes.
1. `Prompt To File` - saves a prompt or collection of prompts to a file. one per line. There is an append/overwrite option.
2. `PTFields Collect` - Converts image generation fields into a Json format string that can be passed to Prompt to file.
2. `PTFields Collect` - Converts image generation fields into a Json format string that can be passed to Prompt to file.
3. `PTFields Expand` - Takes Json string and converts it to individual generation parameters. This can be fed from the Prompts to file node.
4. `Prompt Strength` - Formats prompt with strength like the weighted format of compel
4. `Prompt Strength` - Formats prompt with strength like the weighted format of compel
5. `Prompt Strength Combine` - Combines weighted prompts for .and()/.blend()
6. `CSV To Index String` - Gets a string from a CSV by index. Includes a Random index option
@@ -513,7 +546,7 @@ See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/mai
**Node Link:** https://github.com/skunkworxdark/Prompt-tools-nodes
**Workflow Examples**
**Workflow Examples**
<img src="https://raw.githubusercontent.com/skunkworxdark/prompt-tools/refs/heads/main/images/CSVToIndexStringNode.png"/>
@@ -648,7 +681,7 @@ Highlights/Midtones/Shadows (with LUT blur enabled):
- Generate grids of images from multiple input images
- Create XY grid images with labels from parameters
- Split images into overlapping tiles for processing (for super-resolution workflows)
- Recombine image tiles into a single output image blending the seams
- Recombine image tiles into a single output image blending the seams
The nodes include:
1. `Images To Grids` - Combine multiple images into a grid of images
@@ -661,7 +694,7 @@ See full docs here: https://github.com/skunkworxdark/XYGrid_nodes/edit/main/READ
**Node Link:** https://github.com/skunkworxdark/XYGrid_nodes
**Output Examples**
**Output Examples**
<img src="https://raw.githubusercontent.com/skunkworxdark/XYGrid_nodes/refs/heads/main/images/collage.png" />
@@ -675,7 +708,7 @@ See full docs here: https://github.com/skunkworxdark/XYGrid_nodes/edit/main/READ
**Example Workflow:** https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Prompt_from_File.json
**Output Examples**
**Output Examples**
</br><img src="https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png" width="500" />
@@ -686,5 +719,5 @@ The nodes linked have been developed and contributed by members of the Invoke AI
## Help
If you run into any issues with a node, please post in the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).
If you run into any issues with a node, please post in the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).


@@ -10,6 +10,7 @@ from invokeai.app.services.board_images.board_images_default import BoardImagesS
from invokeai.app.services.board_records.board_records_sqlite import SqliteBoardRecordStorage
from invokeai.app.services.boards.boards_default import BoardService
from invokeai.app.services.bulk_download.bulk_download_default import BulkDownloadService
from invokeai.app.services.client_state_persistence.client_state_persistence_sqlite import ClientStatePersistenceSqlite
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.download.download_default import DownloadQueueService
from invokeai.app.services.events.events_fastapievents import FastAPIEventService
@@ -48,6 +49,7 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
FLUXConditioningInfo,
SD3ConditioningInfo,
SDXLConditioningInfo,
ZImageConditioningInfo,
)
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
@@ -128,6 +130,7 @@ class ApiDependencies:
FLUXConditioningInfo,
SD3ConditioningInfo,
CogView4ConditioningInfo,
ZImageConditioningInfo,
],
ephemeral=True,
),
@@ -151,6 +154,7 @@ class ApiDependencies:
style_preset_records = SqliteStylePresetRecordsStorage(db=db)
style_preset_image_files = StylePresetImageFileStorageDisk(style_presets_folder / "images")
workflow_thumbnails = WorkflowThumbnailFileStorageDisk(workflow_thumbnails_folder)
client_state_persistence = ClientStatePersistenceSqlite(db=db)
services = InvocationServices(
board_image_records=board_image_records,
@@ -181,6 +185,7 @@ class ApiDependencies:
style_preset_records=style_preset_records,
style_preset_image_files=style_preset_image_files,
workflow_thumbnails=workflow_thumbnails,
client_state_persistence=client_state_persistence,
)
ApiDependencies.invoker = Invoker(services)


@@ -1,8 +1,5 @@
import typing
from enum import Enum
from importlib.metadata import distributions
from pathlib import Path
from typing import Optional
import torch
from fastapi import Body
@@ -10,7 +7,6 @@ from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.app.services.config.config_default import InvokeAIAppConfig, get_config
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch
@@ -27,11 +23,6 @@ class LogLevel(int, Enum):
Critical = logging.CRITICAL
class Upscaler(BaseModel):
upscaling_method: str = Field(description="Name of upscaling method")
upscaling_models: list[str] = Field(description="List of upscaling models for this method")
app_router = APIRouter(prefix="/v1/app", tags=["app"])
@@ -40,17 +31,6 @@ class AppVersion(BaseModel):
version: str = Field(description="App version")
highlights: Optional[list[str]] = Field(default=None, description="Highlights of release")
class AppConfig(BaseModel):
"""App Config Response"""
infill_methods: list[str] = Field(description="List of available infill methods")
upscaling_methods: list[Upscaler] = Field(description="List of upscaling methods")
nsfw_methods: list[str] = Field(description="List of NSFW checking methods")
watermarking_methods: list[str] = Field(description="List of invisible watermark methods")
@app_router.get("/version", operation_id="app_version", status_code=200, response_model=AppVersion)
async def get_version() -> AppVersion:
@@ -72,27 +52,9 @@ async def get_app_deps() -> dict[str, str]:
return sorted_deps
@app_router.get("/config", operation_id="get_config", status_code=200, response_model=AppConfig)
async def get_config_() -> AppConfig:
infill_methods = ["lama", "tile", "cv2", "color"] # TODO: add mosaic back
if PatchMatch.patchmatch_available():
infill_methods.append("patchmatch")
upscaling_models = []
for model in typing.get_args(ESRGAN_MODELS):
upscaling_models.append(str(Path(model).stem))
upscaler = Upscaler(upscaling_method="esrgan", upscaling_models=upscaling_models)
nsfw_methods = ["nsfw_checker"]
watermarking_methods = ["invisible_watermark"]
return AppConfig(
infill_methods=infill_methods,
upscaling_methods=[upscaler],
nsfw_methods=nsfw_methods,
watermarking_methods=watermarking_methods,
)
@app_router.get("/patchmatch_status", operation_id="get_patchmatch_status", status_code=200, response_model=bool)
async def get_patchmatch_status() -> bool:
return PatchMatch.patchmatch_available()
class InvokeAIAppConfigWithSetFields(BaseModel):


@@ -33,7 +33,6 @@ class DeleteBoardResult(BaseModel):
)
async def create_board(
board_name: str = Query(description="The name of the board to create", max_length=300),
is_private: bool = Query(default=False, description="Whether the board is private"),
) -> BoardDTO:
"""Creates a board"""
try:


@@ -0,0 +1,58 @@
from fastapi import Body, HTTPException, Path, Query
from fastapi.routing import APIRouter
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.backend.util.logging import logging
client_state_router = APIRouter(prefix="/v1/client_state", tags=["client_state"])
@client_state_router.get(
"/{queue_id}/get_by_key",
operation_id="get_client_state_by_key",
response_model=str | None,
)
async def get_client_state_by_key(
queue_id: str = Path(description="The queue id to perform this operation on"),
key: str = Query(..., description="Key to get"),
) -> str | None:
"""Gets the client state"""
try:
return ApiDependencies.invoker.services.client_state_persistence.get_by_key(queue_id, key)
except Exception as e:
logging.error(f"Error getting client state: {e}")
raise HTTPException(status_code=500, detail="Error setting client state")
@client_state_router.post(
"/{queue_id}/set_by_key",
operation_id="set_client_state",
response_model=str,
)
async def set_client_state(
queue_id: str = Path(description="The queue id to perform this operation on"),
key: str = Query(..., description="Key to set"),
value: str = Body(..., description="Stringified value to set"),
) -> str:
"""Sets the client state"""
try:
return ApiDependencies.invoker.services.client_state_persistence.set_by_key(queue_id, key, value)
except Exception as e:
logging.error(f"Error setting client state: {e}")
raise HTTPException(status_code=500, detail="Error setting client state")
@client_state_router.post(
"/{queue_id}/delete",
operation_id="delete_client_state",
responses={204: {"description": "Client state deleted"}},
)
async def delete_client_state(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> None:
"""Deletes the client state"""
try:
ApiDependencies.invoker.services.client_state_persistence.delete(queue_id)
except Exception as e:
logging.error(f"Error deleting client state: {e}")
raise HTTPException(status_code=500, detail="Error deleting client state")


@@ -28,14 +28,17 @@ from invokeai.app.services.model_records import (
UnknownModelException,
)
from invokeai.app.util.suppress_output import SuppressOutput
from invokeai.backend.model_manager import BaseModelType, ModelFormat, ModelType
from invokeai.backend.model_manager.config import (
AnyModelConfig,
MainCheckpointConfig,
from invokeai.backend.model_manager.configs.factory import AnyModelConfig, ModelConfigFactory
from invokeai.backend.model_manager.configs.main import (
Main_Checkpoint_SD1_Config,
Main_Checkpoint_SD2_Config,
Main_Checkpoint_SDXL_Config,
Main_Checkpoint_SDXLRefiner_Config,
)
from invokeai.backend.model_manager.load.model_cache.cache_stats import CacheStats
from invokeai.backend.model_manager.metadata.fetch.huggingface import HuggingFaceMetadataFetch
from invokeai.backend.model_manager.metadata.metadata_base import ModelMetadataWithFiles, UnknownMetadataException
from invokeai.backend.model_manager.model_on_disk import ModelOnDisk
from invokeai.backend.model_manager.search import ModelSearch
from invokeai.backend.model_manager.starter_models import (
STARTER_BUNDLES,
@@ -44,6 +47,7 @@ from invokeai.backend.model_manager.starter_models import (
StarterModelBundle,
StarterModelWithoutDependencies,
)
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelFormat, ModelType
model_manager_router = APIRouter(prefix="/v2/models", tags=["model_manager"])
@@ -188,6 +192,49 @@ async def get_model_record(
raise HTTPException(status_code=404, detail=str(e))
@model_manager_router.post(
"/i/{key}/reidentify",
operation_id="reidentify_model",
responses={
200: {
"description": "The model configuration was retrieved successfully",
"content": {"application/json": {"example": example_model_config}},
},
400: {"description": "Bad request"},
404: {"description": "The model could not be found"},
},
)
async def reidentify_model(
key: Annotated[str, Path(description="Key of the model to reidentify.")],
) -> AnyModelConfig:
"""Attempt to reidentify a model by re-probing its weights file."""
try:
config = ApiDependencies.invoker.services.model_manager.store.get_model(key)
models_path = ApiDependencies.invoker.services.configuration.models_path
if pathlib.Path(config.path).is_relative_to(models_path):
model_path = pathlib.Path(config.path)
else:
model_path = models_path / config.path
mod = ModelOnDisk(model_path)
result = ModelConfigFactory.from_model_on_disk(mod)
if result.config is None:
raise InvalidModelException("Unable to identify model format")
# Retain user-editable fields from the original config
result.config.key = config.key
result.config.name = config.name
result.config.description = config.description
result.config.cover_image = config.cover_image
result.config.trigger_phrases = config.trigger_phrases
result.config.source = config.source
result.config.source_type = config.source_type
new_config = ApiDependencies.invoker.services.model_manager.store.replace_model(config.key, result.config)
return new_config
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
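A quick usage sketch for the new reidentify route (the model key and host/port are hypothetical; the path is POST /api/v2/models/i/{key}/reidentify as defined above):
import requests

key = "some-model-key"  # hypothetical
resp = requests.post(f"http://127.0.0.1:9090/api/v2/models/i/{key}/reidentify")
resp.raise_for_status()
config = resp.json()  # freshly probed config, with name/description/etc. carried over from the old record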
class FoundModel(BaseModel):
path: str = Field(description="Path to the model")
is_installed: bool = Field(description="Whether or not the model is already installed")
@@ -235,9 +282,10 @@ async def scan_for_models(
found_model = FoundModel(path=path, is_installed=is_installed)
scan_results.append(found_model)
except Exception as e:
error_type = type(e).__name__
raise HTTPException(
status_code=500,
detail=f"An error occurred while searching the directory: {e}",
detail=f"An error occurred while searching the directory: {error_type}",
)
return scan_results
@@ -297,10 +345,8 @@ async def update_model_record(
"""Update a model's config."""
logger = ApiDependencies.invoker.services.logger
record_store = ApiDependencies.invoker.services.model_manager.store
installer = ApiDependencies.invoker.services.model_manager.install
try:
record_store.update_model(key, changes=changes)
config = installer.sync_model_path(key)
config = record_store.update_model(key, changes=changes, allow_class_change=True)
config = add_cover_image_to_model_config(config, ApiDependencies)
logger.info(f"Updated model: {key}")
except UnknownModelException as e:
@@ -410,6 +456,59 @@ async def delete_model(
raise HTTPException(status_code=404, detail=str(e))
class BulkDeleteModelsRequest(BaseModel):
"""Request body for bulk model deletion."""
keys: List[str] = Field(description="List of model keys to delete")
class BulkDeleteModelsResponse(BaseModel):
"""Response body for bulk model deletion."""
deleted: List[str] = Field(description="List of successfully deleted model keys")
failed: List[dict] = Field(description="List of failed deletions with error messages")
@model_manager_router.post(
"/i/bulk_delete",
operation_id="bulk_delete_models",
responses={
200: {"description": "Models deleted (possibly with some failures)"},
},
status_code=200,
)
async def bulk_delete_models(
request: BulkDeleteModelsRequest = Body(description="List of model keys to delete"),
) -> BulkDeleteModelsResponse:
"""
Delete multiple model records from the database.
The configuration records will be removed. The corresponding weights files will be
deleted as well if they reside within the InvokeAI "models" directory.
Returns a list of successfully deleted keys and failed deletions with error messages.
"""
logger = ApiDependencies.invoker.services.logger
installer = ApiDependencies.invoker.services.model_manager.install
deleted = []
failed = []
for key in request.keys:
try:
installer.delete(key)
deleted.append(key)
logger.info(f"Deleted model: {key}")
except UnknownModelException as e:
logger.error(f"Failed to delete model {key}: {str(e)}")
failed.append({"key": key, "error": str(e)})
except Exception as e:
logger.error(f"Failed to delete model {key}: {str(e)}")
failed.append({"key": key, "error": str(e)})
logger.info(f"Bulk delete completed: {len(deleted)} deleted, {len(failed)} failed")
return BulkDeleteModelsResponse(deleted=deleted, failed=failed)
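A hedged sketch of calling the bulk delete endpoint (keys and host/port are hypothetical; the endpoint always returns 200 and reports per-key successes and failures):
import requests

resp = requests.post(
    "http://127.0.0.1:9090/api/v2/models/i/bulk_delete",
    json={"keys": ["key-1", "key-2"]},
)
result = resp.json()
print("deleted:", result["deleted"])
print("failed:", result["failed"])  # list of {"key": ..., "error": ...}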
@model_manager_router.delete(
"/i/{key}/image",
operation_id="delete_model_image",
@@ -743,9 +842,18 @@ async def convert_model(
logger.error(str(e))
raise HTTPException(status_code=424, detail=str(e))
if not isinstance(model_config, MainCheckpointConfig):
logger.error(f"The model with key {key} is not a main checkpoint model.")
raise HTTPException(400, f"The model with key {key} is not a main checkpoint model.")
if not isinstance(
model_config,
(
Main_Checkpoint_SD1_Config,
Main_Checkpoint_SD2_Config,
Main_Checkpoint_SDXL_Config,
Main_Checkpoint_SDXLRefiner_Config,
),
):
msg = f"The model with key {key} is not a main SD 1/2/XL checkpoint model."
logger.error(msg)
raise HTTPException(400, msg)
with TemporaryDirectory(dir=ApiDependencies.invoker.services.configuration.models_path) as tmpdir:
convert_path = pathlib.Path(tmpdir) / pathlib.Path(model_config.path).stem
@@ -806,15 +914,48 @@ class StarterModelResponse(BaseModel):
def get_is_installed(
starter_model: StarterModel | StarterModelWithoutDependencies, installed_models: list[AnyModelConfig]
) -> bool:
from invokeai.backend.model_manager.taxonomy import ModelType
for model in installed_models:
# Check if source matches exactly
if model.source == starter_model.source:
return True
# Check if name (or previous names), base and type match
if (
(model.name == starter_model.name or model.name in starter_model.previous_names)
and model.base == starter_model.base
and model.type == starter_model.type
):
return True
# Special handling for Qwen3Encoder models - check by type and variant
# This allows renamed models to still be detected as installed
if starter_model.type == ModelType.Qwen3Encoder:
from invokeai.backend.model_manager.taxonomy import Qwen3VariantType
# Determine expected variant from source pattern
expected_variant: Qwen3VariantType | None = None
if "klein-9B" in starter_model.source or "qwen3_8b" in starter_model.source.lower():
expected_variant = Qwen3VariantType.Qwen3_8B
elif (
"klein-4B" in starter_model.source
or "qwen3_4b" in starter_model.source.lower()
or "Z-Image" in starter_model.source
):
expected_variant = Qwen3VariantType.Qwen3_4B
if expected_variant is not None:
for model in installed_models:
if model.type == ModelType.Qwen3Encoder and hasattr(model, "variant"):
model_variant = model.variant
# Handle both enum and string values
if isinstance(model_variant, Qwen3VariantType):
if model_variant == expected_variant:
return True
elif isinstance(model_variant, str):
if model_variant == expected_variant.value:
return True
return False

View File

@@ -2,12 +2,11 @@ from typing import Optional
from fastapi import Body, HTTPException, Path, Query
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
from pydantic import BaseModel
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.session_processor.session_processor_common import SessionProcessorStatus
from invokeai.app.services.session_queue.session_queue_common import (
QUEUE_ITEM_STATUS,
Batch,
BatchStatus,
CancelAllExceptCurrentResult,
@@ -17,7 +16,7 @@ from invokeai.app.services.session_queue.session_queue_common import (
DeleteAllExceptCurrentResult,
DeleteByDestinationResult,
EnqueueBatchResult,
FieldIdentifier,
ItemIdsResult,
PruneResult,
RetryItemsResult,
SessionQueueCountsByDestination,
@@ -25,7 +24,7 @@ from invokeai.app.services.session_queue.session_queue_common import (
SessionQueueItemNotFoundError,
SessionQueueStatus,
)
from invokeai.app.services.shared.pagination import CursorPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
session_queue_router = APIRouter(prefix="/v1/queue", tags=["queue"])
@@ -37,12 +36,6 @@ class SessionQueueAndProcessorStatus(BaseModel):
processor: SessionProcessorStatus
class ValidationRunData(BaseModel):
workflow_id: str = Field(description="The id of the workflow being published.")
input_fields: list[FieldIdentifier] = Body(description="The input fields for the published workflow")
output_fields: list[FieldIdentifier] = Body(description="The output fields for the published workflow")
@session_queue_router.post(
"/{queue_id}/enqueue_batch",
operation_id="enqueue_batch",
@@ -54,10 +47,6 @@ async def enqueue_batch(
queue_id: str = Path(description="The queue id to perform this operation on"),
batch: Batch = Body(description="Batch to process"),
prepend: bool = Body(default=False, description="Whether or not to prepend this batch in the queue"),
validation_run_data: Optional[ValidationRunData] = Body(
default=None,
description="The validation run data to use for this batch. This is only used if this is a validation run.",
),
) -> EnqueueBatchResult:
"""Processes a batch and enqueues the output graphs for execution."""
try:
@@ -68,36 +57,6 @@ async def enqueue_batch(
raise HTTPException(status_code=500, detail=f"Unexpected error while enqueuing batch: {e}")
@session_queue_router.get(
"/{queue_id}/list",
operation_id="list_queue_items",
responses={
200: {"model": CursorPaginatedResults[SessionQueueItem]},
},
)
async def list_queue_items(
queue_id: str = Path(description="The queue id to perform this operation on"),
limit: int = Query(default=50, description="The number of items to fetch"),
status: Optional[QUEUE_ITEM_STATUS] = Query(default=None, description="The status of items to fetch"),
cursor: Optional[int] = Query(default=None, description="The pagination cursor"),
priority: int = Query(default=0, description="The pagination cursor priority"),
destination: Optional[str] = Query(default=None, description="The destination of queue items to fetch"),
) -> CursorPaginatedResults[SessionQueueItem]:
"""Gets cursor-paginated queue items"""
try:
return ApiDependencies.invoker.services.session_queue.list_queue_items(
queue_id=queue_id,
limit=limit,
status=status,
cursor=cursor,
priority=priority,
destination=destination,
)
except Exception as e:
raise HTTPException(status_code=500, detail=f"Unexpected error while listing all items: {e}")
@session_queue_router.get(
"/{queue_id}/list_all",
operation_id="list_all_queue_items",
@@ -119,6 +78,56 @@ async def list_all_queue_items(
raise HTTPException(status_code=500, detail=f"Unexpected error while listing all queue items: {e}")
@session_queue_router.get(
"/{queue_id}/item_ids",
operation_id="get_queue_item_ids",
responses={
200: {"model": ItemIdsResult},
},
)
async def get_queue_item_ids(
queue_id: str = Path(description="The queue id to perform this operation on"),
order_dir: SQLiteDirection = Query(default=SQLiteDirection.Descending, description="The order of sort"),
) -> ItemIdsResult:
"""Gets all queue item ids that match the given parameters"""
try:
return ApiDependencies.invoker.services.session_queue.get_queue_item_ids(queue_id=queue_id, order_dir=order_dir)
except Exception as e:
raise HTTPException(status_code=500, detail=f"Unexpected error while listing all queue item ids: {e}")
@session_queue_router.post(
"/{queue_id}/items_by_ids",
operation_id="get_queue_items_by_item_ids",
responses={200: {"model": list[SessionQueueItem]}},
)
async def get_queue_items_by_item_ids(
queue_id: str = Path(description="The queue id to perform this operation on"),
item_ids: list[int] = Body(
embed=True, description="Object containing list of queue item ids to fetch queue items for"
),
) -> list[SessionQueueItem]:
"""Gets queue items for the specified queue item ids. Maintains order of item ids."""
try:
session_queue_service = ApiDependencies.invoker.services.session_queue
# Fetch queue items preserving the order of requested item ids
queue_items: list[SessionQueueItem] = []
for item_id in item_ids:
try:
queue_item = session_queue_service.get_queue_item(item_id=item_id)
if queue_item.queue_id != queue_id: # Auth protection for items from other queues
continue
queue_items.append(queue_item)
except Exception:
# Skip missing queue items - they may have been deleted between item id fetch and queue item fetch
continue
return queue_items
except Exception:
raise HTTPException(status_code=500, detail="Failed to get queue items")
@session_queue_router.put(
"/{queue_id}/processor/resume",
operation_id="resume",
@@ -354,7 +363,10 @@ async def get_queue_item(
) -> SessionQueueItem:
"""Gets a queue item"""
try:
return ApiDependencies.invoker.services.session_queue.get_queue_item(item_id)
queue_item = ApiDependencies.invoker.services.session_queue.get_queue_item(item_id=item_id)
if queue_item.queue_id != queue_id:
raise HTTPException(status_code=404, detail=f"Queue item with id {item_id} not found in queue {queue_id}")
return queue_item
except SessionQueueItemNotFoundError:
raise HTTPException(status_code=404, detail=f"Queue item with id {item_id} not found in queue {queue_id}")
except Exception as e:

View File

@@ -106,7 +106,6 @@ async def list_workflows(
tags: Optional[list[str]] = Query(default=None, description="The tags of workflow to get"),
query: Optional[str] = Query(default=None, description="The text to query by (matches name and description)"),
has_been_opened: Optional[bool] = Query(default=None, description="Whether to include/exclude recent workflows"),
is_published: Optional[bool] = Query(default=None, description="Whether to include/exclude published workflows"),
) -> PaginatedResults[WorkflowRecordListItemWithThumbnailDTO]:
"""Gets a page of workflows"""
workflows_with_thumbnails: list[WorkflowRecordListItemWithThumbnailDTO] = []
@@ -119,7 +118,6 @@ async def list_workflows(
categories=categories,
tags=tags,
has_been_opened=has_been_opened,
is_published=is_published,
)
for workflow in workflows.items:
workflows_with_thumbnails.append(
@@ -225,6 +223,15 @@ async def get_workflow_thumbnail(
raise HTTPException(status_code=404)
@workflows_router.get("/tags", operation_id="get_all_tags")
async def get_all_tags(
categories: Optional[list[WorkflowCategory]] = Query(default=None, description="The categories to include"),
) -> list[str]:
"""Gets all unique tags from workflows"""
return ApiDependencies.invoker.services.workflow_records.get_all_tags(categories=categories)
@workflows_router.get("/counts_by_tag", operation_id="get_counts_by_tag")
async def get_counts_by_tag(
tags: list[str] = Query(description="The tags to get counts for"),

View File

@@ -19,6 +19,7 @@ from invokeai.app.api.routers import (
app_info,
board_images,
boards,
client_state,
download_queue,
images,
model_manager,
@@ -131,6 +132,7 @@ app.include_router(app_info.app_router, prefix="/api")
app.include_router(session_queue.session_queue_router, prefix="/api")
app.include_router(workflows.workflows_router, prefix="/api")
app.include_router(style_presets.style_presets_router, prefix="/api")
app.include_router(client_state.client_state_router, prefix="/api")
app.openapi = get_openapi_func(app)
@@ -155,6 +157,12 @@ def overridden_redoc() -> HTMLResponse:
web_root_path = Path(list(web_dir.__path__)[0])
if app_config.unsafe_disable_picklescan:
logger.warning(
"The unsafe_disable_picklescan option is enabled. This disables malware scanning while installing and"
"loading models, which may allow malicious code to be executed. Use at your own risk."
)
try:
app.mount("/", NoCacheStaticFiles(directory=Path(web_root_path, "dist"), html=True), name="ui")
except RuntimeError:

View File

@@ -36,6 +36,9 @@ from pydantic_core import PydanticUndefined
from invokeai.app.invocations.fields import (
FieldKind,
Input,
InputFieldJSONSchemaExtra,
UIType,
migrate_model_ui_type,
)
from invokeai.app.services.config.config_default import get_config
from invokeai.app.services.shared.invocation_context import InvocationContext
@@ -256,7 +259,9 @@ class BaseInvocation(ABC, BaseModel):
is_intermediate: bool = Field(
default=False,
description="Whether or not this is an intermediate invocation.",
json_schema_extra={"ui_type": "IsIntermediate", "field_kind": FieldKind.NodeAttribute},
json_schema_extra=InputFieldJSONSchemaExtra(
input=Input.Direct, field_kind=FieldKind.NodeAttribute, ui_type=UIType._IsIntermediate
).model_dump(exclude_none=True),
)
use_cache: bool = Field(
default=True,
@@ -445,6 +450,15 @@ with warnings.catch_warnings():
RESERVED_PYDANTIC_FIELD_NAMES = {m[0] for m in inspect.getmembers(_Model())}
def is_enum_member(value: Any, enum_class: type[Enum]) -> bool:
"""Checks if a value is a member of an enum class."""
try:
enum_class(value)
return True
except ValueError:
return False
def validate_fields(model_fields: dict[str, FieldInfo], model_type: str) -> None:
"""
Validates the fields of an invocation or invocation output:
@@ -456,51 +470,99 @@ def validate_fields(model_fields: dict[str, FieldInfo], model_type: str) -> None
"""
for name, field in model_fields.items():
if name in RESERVED_PYDANTIC_FIELD_NAMES:
raise InvalidFieldError(f'Invalid field name "{name}" on "{model_type}" (reserved by pydantic)')
raise InvalidFieldError(f"{model_type}.{name}: Invalid field name (reserved by pydantic)")
if not field.annotation:
raise InvalidFieldError(f'Invalid field type "{name}" on "{model_type}" (missing annotation)')
raise InvalidFieldError(f"{model_type}.{name}: Invalid field type (missing annotation)")
if not isinstance(field.json_schema_extra, dict):
raise InvalidFieldError(
f'Invalid field definition for "{name}" on "{model_type}" (missing json_schema_extra dict)'
)
raise InvalidFieldError(f"{model_type}.{name}: Invalid field definition (missing json_schema_extra dict)")
field_kind = field.json_schema_extra.get("field_kind", None)
# must have a field_kind
if not isinstance(field_kind, FieldKind):
if not is_enum_member(field_kind, FieldKind):
raise InvalidFieldError(
f'Invalid field definition for "{name}" on "{model_type}" (maybe it\'s not an InputField or OutputField?)'
f"{model_type}.{name}: Invalid field definition for (maybe it's not an InputField or OutputField?)"
)
if field_kind is FieldKind.Input and (
if field_kind == FieldKind.Input.value and (
name in RESERVED_NODE_ATTRIBUTE_FIELD_NAMES or name in RESERVED_INPUT_FIELD_NAMES
):
raise InvalidFieldError(f'Invalid field name "{name}" on "{model_type}" (reserved input field name)')
raise InvalidFieldError(f"{model_type}.{name}: Invalid field name (reserved input field name)")
if field_kind is FieldKind.Output and name in RESERVED_OUTPUT_FIELD_NAMES:
raise InvalidFieldError(f'Invalid field name "{name}" on "{model_type}" (reserved output field name)')
if field_kind == FieldKind.Output.value and name in RESERVED_OUTPUT_FIELD_NAMES:
raise InvalidFieldError(f"{model_type}.{name}: Invalid field name (reserved output field name)")
if (field_kind is FieldKind.Internal) and name not in RESERVED_INPUT_FIELD_NAMES:
raise InvalidFieldError(
f'Invalid field name "{name}" on "{model_type}" (internal field without reserved name)'
)
if field_kind == FieldKind.Internal.value and name not in RESERVED_INPUT_FIELD_NAMES:
raise InvalidFieldError(f"{model_type}.{name}: Invalid field name (internal field without reserved name)")
# node attribute fields *must* be in the reserved list
if (
field_kind is FieldKind.NodeAttribute
field_kind == FieldKind.NodeAttribute.value
and name not in RESERVED_NODE_ATTRIBUTE_FIELD_NAMES
and name not in RESERVED_OUTPUT_FIELD_NAMES
):
raise InvalidFieldError(
f'Invalid field name "{name}" on "{model_type}" (node attribute field without reserved name)'
f"{model_type}.{name}: Invalid field name (node attribute field without reserved name)"
)
ui_type = field.json_schema_extra.get("ui_type", None)
if isinstance(ui_type, str) and ui_type.startswith("DEPRECATED_"):
logger.warning(f'"UIType.{ui_type.split("_")[-1]}" is deprecated, ignoring')
field.json_schema_extra.pop("ui_type")
ui_model_base = field.json_schema_extra.get("ui_model_base", None)
ui_model_type = field.json_schema_extra.get("ui_model_type", None)
ui_model_variant = field.json_schema_extra.get("ui_model_variant", None)
ui_model_format = field.json_schema_extra.get("ui_model_format", None)
if ui_type is not None:
# There are 3 cases where we may need to take action:
#
# 1. The ui_type is a migratable, deprecated value. For example, ui_type=UIType.MainModel value is
# deprecated and should be migrated to:
# - ui_model_base=[BaseModelType.StableDiffusion1, BaseModelType.StableDiffusion2]
# - ui_model_type=[ModelType.Main]
#
# 2. ui_type was set in conjunction with any of the new ui_model_[base|type|variant|format] fields, which
# is not allowed (they are mutually exclusive). In this case, we ignore ui_type and log a warning.
#
# 3. ui_type is a deprecated value that is not migratable. For example, ui_type=UIType.Image is deprecated;
# Image fields are now automatically detected based on the field's type annotation. In this case, we
# ignore ui_type and log a warning.
#
# The cases must be checked in this order to ensure proper handling.
# Easier to work with as an enum
ui_type = UIType(ui_type)
# The enum member values are not always the same as their names - we want to log the name so the user can
# easily review their code and see where the deprecated enum member is used.
human_readable_name = f"UIType.{ui_type.name}"
# Case 1: migratable deprecated value
did_migrate = migrate_model_ui_type(ui_type, field.json_schema_extra)
if did_migrate:
logger.warning(
f'{model_type}.{name}: Migrated deprecated "ui_type" "{human_readable_name}" to new ui_model_[base|type|variant|format] fields'
)
field.json_schema_extra.pop("ui_type")
# Case 2: mutually exclusive with new fields
elif (
ui_model_base is not None
or ui_model_type is not None
or ui_model_variant is not None
or ui_model_format is not None
):
logger.warning(
f'{model_type}.{name}: "ui_type" is mutually exclusive with "ui_model_[base|type|format|variant]", ignoring "ui_type"'
)
field.json_schema_extra.pop("ui_type")
# Case 3: deprecated value that is not migratable
elif ui_type.startswith("DEPRECATED_"):
logger.warning(f'{model_type}.{name}: Deprecated "ui_type" "{human_readable_name}", ignoring')
field.json_schema_extra.pop("ui_type")
return None

View File

@@ -22,7 +22,7 @@ from invokeai.app.invocations.model import TransformerField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.sampling_utils import clip_timestep_schedule_fractional
from invokeai.backend.model_manager.config import BaseModelType
from invokeai.backend.model_manager.taxonomy import BaseModelType
from invokeai.backend.rectified_flow.rectified_flow_inpaint_extension import RectifiedFlowInpaintExtension
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import CogView4ConditioningInfo

View File

@@ -17,6 +17,7 @@ from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.vae_working_memory import estimate_vae_working_memory_cogview4
# TODO(ryand): This is effectively a copy of SD3ImageToLatentsInvocation and a subset of ImageToLatentsInvocation. We
# should refactor to avoid this duplication.
@@ -38,7 +39,11 @@ class CogView4ImageToLatentsInvocation(BaseInvocation, WithMetadata, WithBoard):
@staticmethod
def vae_encode(vae_info: LoadedModel, image_tensor: torch.Tensor) -> torch.Tensor:
with vae_info as vae:
assert isinstance(vae_info.model, AutoencoderKL)
estimated_working_memory = estimate_vae_working_memory_cogview4(
operation="encode", image_tensor=image_tensor, vae=vae_info.model
)
with vae_info.model_on_device(working_mem_bytes=estimated_working_memory) as (_, vae):
assert isinstance(vae, AutoencoderKL)
vae.disable_tiling()
@@ -62,6 +67,8 @@ class CogView4ImageToLatentsInvocation(BaseInvocation, WithMetadata, WithBoard):
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
vae_info = context.models.load(self.vae.vae)
assert isinstance(vae_info.model, AutoencoderKL)
latents = self.vae_encode(vae_info=vae_info, image_tensor=image_tensor)
latents = latents.to("cpu")

View File

@@ -6,7 +6,6 @@ from einops import rearrange
from PIL import Image
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
@@ -20,6 +19,7 @@ from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion.extensions.seamless import SeamlessExt
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.vae_working_memory import estimate_vae_working_memory_cogview4
# TODO(ryand): This is effectively a copy of SD3LatentsToImageInvocation and a subset of LatentsToImageInvocation. We
# should refactor to avoid this duplication.
@@ -39,22 +39,15 @@ class CogView4LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
latents: LatentsField = InputField(description=FieldDescriptions.latents, input=Input.Connection)
vae: VAEField = InputField(description=FieldDescriptions.vae, input=Input.Connection)
def _estimate_working_memory(self, latents: torch.Tensor, vae: AutoencoderKL) -> int:
"""Estimate the working memory required by the invocation in bytes."""
out_h = LATENT_SCALE_FACTOR * latents.shape[-2]
out_w = LATENT_SCALE_FACTOR * latents.shape[-1]
element_size = next(vae.parameters()).element_size()
scaling_constant = 2200 # Determined experimentally.
working_memory = out_h * out_w * element_size * scaling_constant
return int(working_memory)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.tensors.load(self.latents.latents_name)
vae_info = context.models.load(self.vae.vae)
assert isinstance(vae_info.model, (AutoencoderKL))
estimated_working_memory = self._estimate_working_memory(latents, vae_info.model)
estimated_working_memory = estimate_vae_working_memory_cogview4(
operation="decode", image_tensor=latents, vae=vae_info.model
)
with (
SeamlessExt.static_patch_model(vae_info.model, self.vae.seamless_axes),
vae_info.model_on_device(working_mem_bytes=estimated_working_memory) as (_, vae),

View File

@@ -5,7 +5,7 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField
from invokeai.app.invocations.model import (
GlmEncoderField,
ModelIdentifierField,
@@ -13,7 +13,7 @@ from invokeai.app.invocations.model import (
VAEField,
)
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import SubModelType
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType, SubModelType
@invocation_output("cogview4_model_loader_output")
@@ -38,8 +38,9 @@ class CogView4ModelLoaderInvocation(BaseInvocation):
model: ModelIdentifierField = InputField(
description=FieldDescriptions.cogview4_model,
ui_type=UIType.CogView4MainModel,
input=Input.Direct,
ui_model_base=BaseModelType.CogView4,
ui_model_type=ModelType.Main,
)
def invoke(self, context: InvocationContext) -> CogView4ModelLoaderOutput:

View File

@@ -16,7 +16,6 @@ from invokeai.app.invocations.fields import (
ImageField,
InputField,
OutputField,
UIType,
)
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageOutput
@@ -28,6 +27,7 @@ from invokeai.app.util.controlnet_utils import (
heuristic_resize_fast,
)
from invokeai.backend.image_util.util import np_to_pil, pil_to_np
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType
class ControlField(BaseModel):
@@ -63,13 +63,17 @@ class ControlOutput(BaseInvocationOutput):
control: ControlField = OutputField(description=FieldDescriptions.control)
@invocation("controlnet", title="ControlNet - SD1.5, SDXL", tags=["controlnet"], category="controlnet", version="1.1.3")
@invocation(
"controlnet", title="ControlNet - SD1.5, SD2, SDXL", tags=["controlnet"], category="controlnet", version="1.1.3"
)
class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
image: ImageField = InputField(description="The control image")
control_model: ModelIdentifierField = InputField(
description=FieldDescriptions.controlnet_model, ui_type=UIType.ControlNetModel
description=FieldDescriptions.controlnet_model,
ui_model_base=[BaseModelType.StableDiffusion1, BaseModelType.StableDiffusion2, BaseModelType.StableDiffusionXL],
ui_model_type=ModelType.ControlNet,
)
control_weight: Union[float, List[float]] = InputField(
default=1.0, ge=-1, le=2, description="The weight given to the ControlNet"

View File

@@ -20,9 +20,7 @@ from invokeai.app.invocations.fields import (
from invokeai.app.invocations.image_to_latents import ImageToLatentsInvocation
from invokeai.app.invocations.model import UNetField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import LoadedModel
from invokeai.backend.model_manager.config import MainConfigBase
from invokeai.backend.model_manager.taxonomy import ModelVariantType
from invokeai.backend.model_manager.taxonomy import FluxVariantType, ModelType, ModelVariantType
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
@@ -182,10 +180,11 @@ class CreateGradientMaskInvocation(BaseInvocation):
if self.unet is not None and self.vae is not None and self.image is not None:
# all three fields must be present at the same time
main_model_config = context.models.get_config(self.unet.unet.key)
assert isinstance(main_model_config, MainConfigBase)
if main_model_config.variant is ModelVariantType.Inpaint:
assert main_model_config.type is ModelType.Main
variant = getattr(main_model_config, "variant", None)
if variant is ModelVariantType.Inpaint or variant is FluxVariantType.DevFill:
mask = dilated_mask_tensor
vae_info: LoadedModel = context.models.load(self.vae.vae)
vae_info = context.models.load(self.vae.vae)
image = context.images.get_pil(self.image.image_name)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:

View File

@@ -39,7 +39,7 @@ from invokeai.app.invocations.t2i_adapter import T2IAdapterField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import prepare_control_image
from invokeai.backend.ip_adapter.ip_adapter import IPAdapter
from invokeai.backend.model_manager.config import AnyModelConfig
from invokeai.backend.model_manager.configs.factory import AnyModelConfig
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelVariantType
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.patches.layer_patcher import LayerPatcher

View File

@@ -1,11 +1,19 @@
from enum import Enum
from typing import Any, Callable, Optional, Tuple
from pydantic import BaseModel, ConfigDict, Field, RootModel, TypeAdapter, model_validator
from pydantic import BaseModel, ConfigDict, Field, RootModel, TypeAdapter
from pydantic.fields import _Unset
from pydantic_core import PydanticUndefined
from invokeai.app.util.metaenum import MetaEnum
from invokeai.backend.image_util.segment_anything.shared import BoundingBox
from invokeai.backend.model_manager.taxonomy import (
BaseModelType,
ClipVariantType,
ModelFormat,
ModelType,
ModelVariantType,
)
from invokeai.backend.util.logging import InvokeAILogger
logger = InvokeAILogger.get_logger()
@@ -38,35 +46,6 @@ class UIType(str, Enum, metaclass=MetaEnum):
used, and the type will be ignored. They are included here for backwards compatibility.
"""
# region Model Field Types
MainModel = "MainModelField"
CogView4MainModel = "CogView4MainModelField"
FluxMainModel = "FluxMainModelField"
SD3MainModel = "SD3MainModelField"
SDXLMainModel = "SDXLMainModelField"
SDXLRefinerModel = "SDXLRefinerModelField"
ONNXModel = "ONNXModelField"
VAEModel = "VAEModelField"
FluxVAEModel = "FluxVAEModelField"
LoRAModel = "LoRAModelField"
ControlNetModel = "ControlNetModelField"
IPAdapterModel = "IPAdapterModelField"
T2IAdapterModel = "T2IAdapterModelField"
T5EncoderModel = "T5EncoderModelField"
CLIPEmbedModel = "CLIPEmbedModelField"
CLIPLEmbedModel = "CLIPLEmbedModelField"
CLIPGEmbedModel = "CLIPGEmbedModelField"
SpandrelImageToImageModel = "SpandrelImageToImageModelField"
ControlLoRAModel = "ControlLoRAModelField"
SigLipModel = "SigLipModelField"
FluxReduxModel = "FluxReduxModelField"
LlavaOnevisionModel = "LLaVAModelField"
Imagen3Model = "Imagen3ModelField"
Imagen4Model = "Imagen4ModelField"
ChatGPT4oModel = "ChatGPT4oModelField"
FluxKontextModel = "FluxKontextModelField"
# endregion
# region Misc Field Types
Scheduler = "SchedulerField"
Any = "AnyField"
@@ -75,6 +54,7 @@ class UIType(str, Enum, metaclass=MetaEnum):
# region Internal Field Types
_Collection = "CollectionField"
_CollectionItem = "CollectionItemField"
_IsIntermediate = "IsIntermediate"
# endregion
# region DEPRECATED
@@ -112,13 +92,44 @@ class UIType(str, Enum, metaclass=MetaEnum):
CollectionItem = "DEPRECATED_CollectionItem"
Enum = "DEPRECATED_Enum"
WorkflowField = "DEPRECATED_WorkflowField"
IsIntermediate = "DEPRECATED_IsIntermediate"
BoardField = "DEPRECATED_BoardField"
MetadataItem = "DEPRECATED_MetadataItem"
MetadataItemCollection = "DEPRECATED_MetadataItemCollection"
MetadataItemPolymorphic = "DEPRECATED_MetadataItemPolymorphic"
MetadataDict = "DEPRECATED_MetadataDict"
# Deprecated Model Field Types - use ui_model_[base|type|variant|format] instead
MainModel = "DEPRECATED_MainModelField"
CogView4MainModel = "DEPRECATED_CogView4MainModelField"
FluxMainModel = "DEPRECATED_FluxMainModelField"
SD3MainModel = "DEPRECATED_SD3MainModelField"
SDXLMainModel = "DEPRECATED_SDXLMainModelField"
SDXLRefinerModel = "DEPRECATED_SDXLRefinerModelField"
ONNXModel = "DEPRECATED_ONNXModelField"
VAEModel = "DEPRECATED_VAEModelField"
FluxVAEModel = "DEPRECATED_FluxVAEModelField"
LoRAModel = "DEPRECATED_LoRAModelField"
ControlNetModel = "DEPRECATED_ControlNetModelField"
IPAdapterModel = "DEPRECATED_IPAdapterModelField"
T2IAdapterModel = "DEPRECATED_T2IAdapterModelField"
T5EncoderModel = "DEPRECATED_T5EncoderModelField"
CLIPEmbedModel = "DEPRECATED_CLIPEmbedModelField"
CLIPLEmbedModel = "DEPRECATED_CLIPLEmbedModelField"
CLIPGEmbedModel = "DEPRECATED_CLIPGEmbedModelField"
SpandrelImageToImageModel = "DEPRECATED_SpandrelImageToImageModelField"
ControlLoRAModel = "DEPRECATED_ControlLoRAModelField"
SigLipModel = "DEPRECATED_SigLipModelField"
FluxReduxModel = "DEPRECATED_FluxReduxModelField"
LlavaOnevisionModel = "DEPRECATED_LLaVAModelField"
Imagen3Model = "DEPRECATED_Imagen3ModelField"
Imagen4Model = "DEPRECATED_Imagen4ModelField"
ChatGPT4oModel = "DEPRECATED_ChatGPT4oModelField"
Gemini2_5Model = "DEPRECATED_Gemini2_5ModelField"
FluxKontextModel = "DEPRECATED_FluxKontextModelField"
Veo3Model = "DEPRECATED_Veo3ModelField"
RunwayModel = "DEPRECATED_RunwayModelField"
# endregion
class UIComponent(str, Enum, metaclass=MetaEnum):
"""
@@ -143,6 +154,7 @@ class FieldDescriptions:
clip = "CLIP (tokenizer, text encoder, LoRAs) and skipped layer count"
t5_encoder = "T5 tokenizer and text encoder"
glm_encoder = "GLM (THUDM) tokenizer and text encoder"
qwen3_encoder = "Qwen3 tokenizer and text encoder"
clip_embed_model = "CLIP Embed loader"
clip_g_model = "CLIP-G Embed loader"
unet = "UNet (scheduler, LoRAs)"
@@ -158,6 +170,7 @@ class FieldDescriptions:
flux_model = "Flux model (Transformer) to load"
sd3_model = "SD3 model (MMDiTX) to load"
cogview4_model = "CogView4 model (Transformer) to load"
z_image_model = "Z-Image model (Transformer) to load"
sdxl_main_model = "SDXL Main model (UNet, VAE, CLIP1, CLIP2) to load"
sdxl_refiner_model = "SDXL Refiner Main Modde (UNet, VAE, CLIP2) to load"
onnx_main_model = "ONNX Main model (UNet, VAE, CLIP) to load"
@@ -230,6 +243,12 @@ class BoardField(BaseModel):
board_id: str = Field(description="The id of the board")
class StylePresetField(BaseModel):
"""A style preset primitive field"""
style_preset_id: str = Field(description="The id of the style preset")
class DenoiseMaskField(BaseModel):
"""An inpaint mask field"""
@@ -310,6 +329,17 @@ class CogView4ConditioningField(BaseModel):
conditioning_name: str = Field(description="The name of conditioning tensor")
class ZImageConditioningField(BaseModel):
"""A Z-Image conditioning tensor primitive value"""
conditioning_name: str = Field(description="The name of conditioning tensor")
mask: Optional[TensorField] = Field(
default=None,
description="The mask associated with this conditioning tensor for regional prompting. "
"Excluded regions should be set to False, included regions should be set to True.",
)
class ConditioningField(BaseModel):
"""A conditioning tensor primitive value"""
@@ -321,14 +351,9 @@ class ConditioningField(BaseModel):
)
class BoundingBoxField(BaseModel):
class BoundingBoxField(BoundingBox):
"""A bounding box primitive value."""
x_min: int = Field(ge=0, description="The minimum x-coordinate of the bounding box (inclusive).")
x_max: int = Field(ge=0, description="The maximum x-coordinate of the bounding box (exclusive).")
y_min: int = Field(ge=0, description="The minimum y-coordinate of the bounding box (inclusive).")
y_max: int = Field(ge=0, description="The maximum y-coordinate of the bounding box (exclusive).")
score: Optional[float] = Field(
default=None,
ge=0.0,
@@ -337,21 +362,6 @@ class BoundingBoxField(BaseModel):
"when the bounding box was produced by a detector and has an associated confidence score.",
)
@model_validator(mode="after")
def check_coords(self):
if self.x_min > self.x_max:
raise ValueError(f"x_min ({self.x_min}) is greater than x_max ({self.x_max}).")
if self.y_min > self.y_max:
raise ValueError(f"y_min ({self.y_min}) is greater than y_max ({self.y_max}).")
return self
def tuple(self) -> Tuple[int, int, int, int]:
"""
Returns the bounding box as a tuple suitable for use with PIL's `Image.crop()` method.
This method returns a tuple of the form (left, upper, right, lower) == (x_min, y_min, x_max, y_max).
"""
return (self.x_min, self.y_min, self.x_max, self.y_max)
class MetadataField(RootModel[dict[str, Any]]):
"""
@@ -418,10 +428,15 @@ class InputFieldJSONSchemaExtra(BaseModel):
ui_component: Optional[UIComponent] = None
ui_order: Optional[int] = None
ui_choice_labels: Optional[dict[str, str]] = None
ui_model_base: Optional[list[BaseModelType]] = None
ui_model_type: Optional[list[ModelType]] = None
ui_model_variant: Optional[list[ClipVariantType | ModelVariantType]] = None
ui_model_format: Optional[list[ModelFormat]] = None
model_config = ConfigDict(
validate_assignment=True,
json_schema_serialization_defaults_required=True,
use_enum_values=True,
)
@@ -474,16 +489,100 @@ class OutputFieldJSONSchemaExtra(BaseModel):
"""
field_kind: FieldKind
ui_hidden: bool
ui_type: Optional[UIType]
ui_order: Optional[int]
ui_hidden: bool = False
ui_order: Optional[int] = None
ui_type: Optional[UIType] = None
model_config = ConfigDict(
validate_assignment=True,
json_schema_serialization_defaults_required=True,
use_enum_values=True,
)
def migrate_model_ui_type(ui_type: UIType | str, json_schema_extra: dict[str, Any]) -> bool:
"""Migrate deprecated model-specifier ui_type values to new-style ui_model_[base|type|variant|format] in json_schema_extra."""
if not isinstance(ui_type, UIType):
ui_type = UIType(ui_type)
ui_model_type: list[ModelType] | None = None
ui_model_base: list[BaseModelType] | None = None
ui_model_format: list[ModelFormat] | None = None
ui_model_variant: list[ClipVariantType | ModelVariantType] | None = None
match ui_type:
case UIType.MainModel:
ui_model_base = [BaseModelType.StableDiffusion1, BaseModelType.StableDiffusion2]
ui_model_type = [ModelType.Main]
case UIType.CogView4MainModel:
ui_model_base = [BaseModelType.CogView4]
ui_model_type = [ModelType.Main]
case UIType.FluxMainModel:
ui_model_base = [BaseModelType.Flux]
ui_model_type = [ModelType.Main]
case UIType.SD3MainModel:
ui_model_base = [BaseModelType.StableDiffusion3]
ui_model_type = [ModelType.Main]
case UIType.SDXLMainModel:
ui_model_base = [BaseModelType.StableDiffusionXL]
ui_model_type = [ModelType.Main]
case UIType.SDXLRefinerModel:
ui_model_base = [BaseModelType.StableDiffusionXLRefiner]
ui_model_type = [ModelType.Main]
case UIType.VAEModel:
ui_model_type = [ModelType.VAE]
case UIType.FluxVAEModel:
ui_model_base = [BaseModelType.Flux, BaseModelType.Flux2]
ui_model_type = [ModelType.VAE]
case UIType.LoRAModel:
ui_model_type = [ModelType.LoRA]
case UIType.ControlNetModel:
ui_model_type = [ModelType.ControlNet]
case UIType.IPAdapterModel:
ui_model_type = [ModelType.IPAdapter]
case UIType.T2IAdapterModel:
ui_model_type = [ModelType.T2IAdapter]
case UIType.T5EncoderModel:
ui_model_type = [ModelType.T5Encoder]
case UIType.CLIPEmbedModel:
ui_model_type = [ModelType.CLIPEmbed]
case UIType.CLIPLEmbedModel:
ui_model_type = [ModelType.CLIPEmbed]
ui_model_variant = [ClipVariantType.L]
case UIType.CLIPGEmbedModel:
ui_model_type = [ModelType.CLIPEmbed]
ui_model_variant = [ClipVariantType.G]
case UIType.SpandrelImageToImageModel:
ui_model_type = [ModelType.SpandrelImageToImage]
case UIType.ControlLoRAModel:
ui_model_type = [ModelType.ControlLoRa]
case UIType.SigLipModel:
ui_model_type = [ModelType.SigLIP]
case UIType.FluxReduxModel:
ui_model_type = [ModelType.FluxRedux]
case UIType.LlavaOnevisionModel:
ui_model_type = [ModelType.LlavaOnevision]
case _:
pass
did_migrate = False
if ui_model_type is not None:
json_schema_extra["ui_model_type"] = [m.value for m in ui_model_type]
did_migrate = True
if ui_model_base is not None:
json_schema_extra["ui_model_base"] = [m.value for m in ui_model_base]
did_migrate = True
if ui_model_format is not None:
json_schema_extra["ui_model_format"] = [m.value for m in ui_model_format]
did_migrate = True
if ui_model_variant is not None:
json_schema_extra["ui_model_variant"] = [m.value for m in ui_model_variant]
did_migrate = True
return did_migrate
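A small sketch of the migration behavior, using the mapping defined above (self-contained; the imports mirror how these names are imported elsewhere in this diff):
from invokeai.app.invocations.fields import UIType, migrate_model_ui_type
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType

extra = {}
assert migrate_model_ui_type(UIType.SDXLMainModel, extra) is True
assert extra == {
    "ui_model_type": [ModelType.Main.value],
    "ui_model_base": [BaseModelType.StableDiffusionXL.value],
}
# Deprecated values with no model mapping (e.g. UIType.BoardField) are not migrated:
assert migrate_model_ui_type(UIType.BoardField, {}) is False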
def InputField(
# copied from pydantic's Field
# TODO: Can we support default_factory?
@@ -510,35 +609,63 @@ def InputField(
ui_hidden: Optional[bool] = None,
ui_order: Optional[int] = None,
ui_choice_labels: Optional[dict[str, str]] = None,
ui_model_base: Optional[BaseModelType | list[BaseModelType]] = None,
ui_model_type: Optional[ModelType | list[ModelType]] = None,
ui_model_variant: Optional[ClipVariantType | ModelVariantType | list[ClipVariantType | ModelVariantType]] = None,
ui_model_format: Optional[ModelFormat | list[ModelFormat]] = None,
) -> Any:
"""
Creates an input field for an invocation.
This is a wrapper for Pydantic's [Field](https://docs.pydantic.dev/latest/api/fields/#pydantic.fields.Field) \
This is a wrapper for Pydantic's [Field](https://docs.pydantic.dev/latest/api/fields/#pydantic.fields.Field)
that adds a few extra parameters to support graph execution and the node editor UI.
:param Input input: [Input.Any] The kind of input this field requires. \
`Input.Direct` means a value must be provided on instantiation. \
`Input.Connection` means the value must be provided by a connection. \
`Input.Any` means either will do.
If the field is a `ModelIdentifierField`, use the `ui_model_[base|type|variant|format]` args to filter the model list
in the Workflow Editor. Otherwise, use `ui_type` to provide extra type hints for the UI.
:param UIType ui_type: [None] Optionally provides an extra type hint for the UI. \
In some situations, the field's type is not enough to infer the correct UI type. \
For example, model selection fields should render a dropdown UI component to select a model. \
Internally, there is no difference between SD-1, SD-2 and SDXL model fields, they all use \
`MainModelField`. So to ensure the base-model-specific UI is rendered, you can use \
`UIType.SDXLMainModelField` to indicate that the field is an SDXL main model field.
Don't use both `ui_type` and `ui_model_[base|type|variant|format]` - if both are provided, a warning will be
logged and `ui_type` will be ignored.
:param UIComponent ui_component: [None] Optionally specifies a specific component to use in the UI. \
The UI will always render a suitable component, but sometimes you want something different than the default. \
For example, a `string` field will default to a single-line input, but you may want a multi-line textarea instead. \
For this case, you could provide `UIComponent.Textarea`.
Args:
input: The kind of input this field requires.
- `Input.Direct` means a value must be provided on instantiation.
- `Input.Connection` means the value must be provided by a connection.
- `Input.Any` means either will do.
:param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI.
ui_type: Optionally provides an extra type hint for the UI. In some situations, the field's type is not enough
to infer the correct UI type. For example, Scheduler fields are enums, but we want to render a special scheduler
dropdown in the UI. Use `UIType.Scheduler` to indicate this.
:param int ui_order: [None] Specifies the order in which this field should be rendered in the UI.
ui_component: Optionally specifies a specific component to use in the UI. The UI will always render a suitable
component, but sometimes you want something different than the default. For example, a `string` field will
default to a single-line input, but you may want a multi-line textarea instead. In this case, you could use
`UIComponent.Textarea`.
:param dict[str, str] ui_choice_labels: [None] Specifies the labels to use for the choices in an enum field.
ui_hidden: Specifies whether or not this field should be hidden in the UI.
ui_order: Specifies the order in which this field should be rendered in the UI. If omitted, the field will be
rendered after all fields with an explicit order, in the order they are defined in the Invocation class.
ui_model_base: Specifies the base model architectures to filter the model list by in the Workflow Editor. For
example, `ui_model_base=BaseModelType.StableDiffusionXL` will show only SDXL architecture models. This arg is
only valid if this Input field is annotated as a `ModelIdentifierField`.
ui_model_type: Specifies the model type(s) to filter the model list by in the Workflow Editor. For example,
`ui_model_type=ModelType.VAE` will show only VAE models. This arg is only valid if this Input field is
annotated as a `ModelIdentifierField`.
ui_model_variant: Specifies the model variant(s) to filter the model list by in the Workflow Editor. For example,
`ui_model_variant=ModelVariantType.Inpainting` will show only inpainting models. This arg is only valid if this
Input field is annotated as a `ModelIdentifierField`.
ui_model_format: Specifies the model format(s) to filter the model list by in the Workflow Editor. For example,
`ui_model_format=ModelFormat.Diffusers` will show only models in the diffusers format. This arg is only valid
if this Input field is annotated as a `ModelIdentifierField`.
ui_choice_labels: Specifies the labels to use for the choices in an enum field. If omitted, the enum values
will be used. This arg is only valid if the field is annotated as a `Literal`. For example,
`Literal["choice1", "choice2", "choice3"]` with `ui_choice_labels={"choice1": "Choice 1", "choice2": "Choice 2",
"choice3": "Choice 3"}` will render a dropdown with the labels "Choice 1", "Choice 2" and "Choice 3".
"""
json_schema_extra_ = InputFieldJSONSchemaExtra(
@@ -546,8 +673,6 @@ def InputField(
field_kind=FieldKind.Input,
)
if ui_type is not None:
json_schema_extra_.ui_type = ui_type
if ui_component is not None:
json_schema_extra_.ui_component = ui_component
if ui_hidden is not None:
@@ -556,6 +681,28 @@ def InputField(
json_schema_extra_.ui_order = ui_order
if ui_choice_labels is not None:
json_schema_extra_.ui_choice_labels = ui_choice_labels
if ui_model_base is not None:
if isinstance(ui_model_base, list):
json_schema_extra_.ui_model_base = ui_model_base
else:
json_schema_extra_.ui_model_base = [ui_model_base]
if ui_model_type is not None:
if isinstance(ui_model_type, list):
json_schema_extra_.ui_model_type = ui_model_type
else:
json_schema_extra_.ui_model_type = [ui_model_type]
if ui_model_variant is not None:
if isinstance(ui_model_variant, list):
json_schema_extra_.ui_model_variant = ui_model_variant
else:
json_schema_extra_.ui_model_variant = [ui_model_variant]
if ui_model_format is not None:
if isinstance(ui_model_format, list):
json_schema_extra_.ui_model_format = ui_model_format
else:
json_schema_extra_.ui_model_format = [ui_model_format]
if ui_type is not None:
json_schema_extra_.ui_type = ui_type
"""
There is a conflict between the typing of invocation definitions and the typing of an invocation's
@@ -657,20 +804,20 @@ def OutputField(
"""
Creates an output field for an invocation output.
This is a wrapper for Pydantic's [Field](https://docs.pydantic.dev/1.10/usage/schema/#field-customization) \
This is a wrapper for Pydantic's [Field](https://docs.pydantic.dev/1.10/usage/schema/#field-customization)
that adds a few extra parameters to support graph execution and the node editor UI.
:param UIType ui_type: [None] Optionally provides an extra type hint for the UI. \
In some situations, the field's type is not enough to infer the correct UI type. \
For example, model selection fields should render a dropdown UI component to select a model. \
Internally, there is no difference between SD-1, SD-2 and SDXL model fields, they all use \
`MainModelField`. So to ensure the base-model-specific UI is rendered, you can use \
`UIType.SDXLMainModelField` to indicate that the field is an SDXL main model field.
Args:
ui_type: Optionally provides an extra type hint for the UI. In some situations, the field's type is not enough
to infer the correct UI type. For example, Scheduler fields are enums, but we want to render a special scheduler
dropdown in the UI. Use `UIType.Scheduler` to indicate this.
:param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI. \
ui_hidden: Specifies whether or not this field should be hidden in the UI.
:param int ui_order: [None] Specifies the order in which this field should be rendered in the UI. \
ui_order: Specifies the order in which this field should be rendered in the UI. If omitted, the field will be
rendered after all fields with an explicit order, in the order they are defined in the Invocation class.
"""
return Field(
default=default,
title=title,
@@ -688,9 +835,9 @@ def OutputField(
min_length=min_length,
max_length=max_length,
json_schema_extra=OutputFieldJSONSchemaExtra(
ui_type=ui_type,
ui_hidden=ui_hidden,
ui_order=ui_order,
ui_type=ui_type,
field_kind=FieldKind.Output,
).model_dump(exclude_none=True),
)
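For reference, a hedged sketch of a node declaring a model-picker input with the new ui_model_* filters (the node id, class name and field name are hypothetical; the real loaders updated in this diff follow the same pattern):
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import InputField
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import StringOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelFormat, ModelType


@invocation("example_vae_picker", title="Example VAE Picker", tags=["example"], category="model", version="1.0.0")
class ExampleVAEPickerInvocation(BaseInvocation):
    """Hypothetical node: lets the user pick an SDXL VAE in diffusers format."""

    vae_model: ModelIdentifierField = InputField(
        description="The VAE model to select",
        ui_model_base=BaseModelType.StableDiffusionXL,  # single values are wrapped into lists
        ui_model_type=ModelType.VAE,
        ui_model_format=ModelFormat.Diffusers,
    )

    def invoke(self, context: InvocationContext) -> StringOutput:
        return StringOutput(value=self.vae_model.name)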

View File

@@ -0,0 +1,499 @@
"""Flux2 Klein Denoise Invocation.
Run denoising process with a FLUX.2 Klein transformer model.
Uses Qwen3 conditioning instead of CLIP+T5.
"""
from contextlib import ExitStack
from typing import Callable, Iterator, Optional, Tuple
import torch
import torchvision.transforms as tv_transforms
from torchvision.transforms.functional import resize as tv_resize
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import (
DenoiseMaskField,
FieldDescriptions,
FluxConditioningField,
FluxKontextConditioningField,
Input,
InputField,
LatentsField,
)
from invokeai.app.invocations.model import TransformerField, VAEField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.sampling_utils import clip_timestep_schedule_fractional
from invokeai.backend.flux.schedulers import FLUX_SCHEDULER_LABELS, FLUX_SCHEDULER_MAP, FLUX_SCHEDULER_NAME_VALUES
from invokeai.backend.flux2.denoise import denoise
from invokeai.backend.flux2.ref_image_extension import Flux2RefImageExtension
from invokeai.backend.flux2.sampling_utils import (
compute_empirical_mu,
generate_img_ids_flux2,
get_noise_flux2,
get_schedule_flux2,
pack_flux2,
unpack_flux2,
)
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelFormat, ModelType
from invokeai.backend.patches.layer_patcher import LayerPatcher
from invokeai.backend.patches.lora_conversions.flux_lora_constants import FLUX_LORA_TRANSFORMER_PREFIX
from invokeai.backend.patches.model_patch_raw import ModelPatchRaw
from invokeai.backend.rectified_flow.rectified_flow_inpaint_extension import RectifiedFlowInpaintExtension
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import FLUXConditioningInfo
from invokeai.backend.util.devices import TorchDevice
@invocation(
"flux2_denoise",
title="FLUX2 Denoise",
tags=["image", "flux", "flux2", "klein", "denoise"],
category="image",
version="1.3.0",
classification=Classification.Prototype,
)
class Flux2DenoiseInvocation(BaseInvocation):
"""Run denoising process with a FLUX.2 Klein transformer model.
This node is designed for FLUX.2 Klein models which use Qwen3 as the text encoder.
It does not support ControlNet, IP-Adapters, or regional prompting.
"""
latents: Optional[LatentsField] = InputField(
default=None,
description=FieldDescriptions.latents,
input=Input.Connection,
)
denoise_mask: Optional[DenoiseMaskField] = InputField(
default=None,
description=FieldDescriptions.denoise_mask,
input=Input.Connection,
)
denoising_start: float = InputField(
default=0.0,
ge=0,
le=1,
description=FieldDescriptions.denoising_start,
)
denoising_end: float = InputField(
default=1.0,
ge=0,
le=1,
description=FieldDescriptions.denoising_end,
)
add_noise: bool = InputField(default=True, description="Add noise based on denoising start.")
transformer: TransformerField = InputField(
description=FieldDescriptions.flux_model,
input=Input.Connection,
title="Transformer",
)
positive_text_conditioning: FluxConditioningField = InputField(
description=FieldDescriptions.positive_cond,
input=Input.Connection,
)
negative_text_conditioning: Optional[FluxConditioningField] = InputField(
default=None,
description="Negative conditioning tensor. Can be None if cfg_scale is 1.0.",
input=Input.Connection,
)
cfg_scale: float = InputField(
default=1.0,
description=FieldDescriptions.cfg_scale,
title="CFG Scale",
)
width: int = InputField(default=1024, multiple_of=16, description="Width of the generated image.")
height: int = InputField(default=1024, multiple_of=16, description="Height of the generated image.")
num_steps: int = InputField(
default=4,
description="Number of diffusion steps. Use 4 for distilled models, 28+ for base models.",
)
scheduler: FLUX_SCHEDULER_NAME_VALUES = InputField(
default="euler",
description="Scheduler (sampler) for the denoising process. 'euler' is fast and standard. "
"'heun' is 2nd-order (better quality, 2x slower). 'lcm' is optimized for few steps.",
ui_choice_labels=FLUX_SCHEDULER_LABELS,
)
seed: int = InputField(default=0, description="Randomness seed for reproducibility.")
vae: VAEField = InputField(
description="FLUX.2 VAE model (required for BN statistics).",
input=Input.Connection,
)
kontext_conditioning: FluxKontextConditioningField | list[FluxKontextConditioningField] | None = InputField(
default=None,
description="FLUX Kontext conditioning (reference images for multi-reference image editing).",
input=Input.Connection,
title="Reference Images",
)
def _get_bn_stats(self, context: InvocationContext) -> Optional[Tuple[torch.Tensor, torch.Tensor]]:
"""Extract BN statistics from the FLUX.2 VAE.
The FLUX.2 VAE uses batch normalization on the patchified 128-channel representation.
IMPORTANT: BFL FLUX.2 VAE uses affine=False, so there are NO learnable weight/bias.
BN formula (affine=False): y = (x - mean) / std
Inverse: x = y * std + mean
Returns:
Tuple of (bn_mean, bn_std) tensors of shape (128,), or None if BN layer not found.
"""
with context.models.load(self.vae.vae).model_on_device() as (_, vae):
# Ensure VAE is in eval mode to prevent BN stats from being updated
vae.eval()
# Try to find the BN layer - it may be at different locations depending on model format
bn_layer = None
if hasattr(vae, "bn"):
bn_layer = vae.bn
elif hasattr(vae, "batch_norm"):
bn_layer = vae.batch_norm
elif hasattr(vae, "encoder") and hasattr(vae.encoder, "bn"):
bn_layer = vae.encoder.bn
if bn_layer is None:
return None
# Verify running statistics are initialized
if bn_layer.running_mean is None or bn_layer.running_var is None:
return None
# Get BN running statistics from VAE
bn_mean = bn_layer.running_mean.clone() # Shape: (128,)
bn_var = bn_layer.running_var.clone() # Shape: (128,)
bn_eps = bn_layer.eps if hasattr(bn_layer, "eps") else 1e-4 # BFL uses 1e-4
bn_std = torch.sqrt(bn_var + bn_eps)
return bn_mean, bn_std
def _bn_normalize(
self,
x: torch.Tensor,
bn_mean: torch.Tensor,
bn_std: torch.Tensor,
) -> torch.Tensor:
"""Apply BN normalization to packed latents.
BN formula (affine=False): y = (x - mean) / std
Args:
x: Packed latents of shape (B, seq, 128).
bn_mean: BN running mean of shape (128,).
bn_std: BN running std of shape (128,).
Returns:
Normalized latents of same shape.
"""
# x: (B, seq, 128), params: (128,) -> broadcast over batch and sequence dims
bn_mean = bn_mean.to(x.device, x.dtype)
bn_std = bn_std.to(x.device, x.dtype)
return (x - bn_mean) / bn_std
def _bn_denormalize(
self,
x: torch.Tensor,
bn_mean: torch.Tensor,
bn_std: torch.Tensor,
) -> torch.Tensor:
"""Apply BN denormalization to packed latents (inverse of normalization).
Inverse BN (affine=False): x = y * std + mean
Args:
x: Packed latents of shape (B, seq, 128).
bn_mean: BN running mean of shape (128,).
bn_std: BN running std of shape (128,).
Returns:
Denormalized latents of same shape.
"""
# x: (B, seq, 128), params: (128,) -> broadcast over batch and sequence dims
bn_mean = bn_mean.to(x.device, x.dtype)
bn_std = bn_std.to(x.device, x.dtype)
return x * bn_std + bn_mean
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = self._run_diffusion(context)
latents = latents.detach().to("cpu")
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)
def _run_diffusion(self, context: InvocationContext) -> torch.Tensor:
inference_dtype = torch.bfloat16
device = TorchDevice.choose_torch_device()
# Get BN statistics from VAE for latent denormalization (optional)
# BFL FLUX.2 VAE uses affine=False, so only mean/std are needed
# Some VAE formats (e.g. diffusers) may not expose BN stats directly
bn_stats = self._get_bn_stats(context)
bn_mean, bn_std = bn_stats if bn_stats is not None else (None, None)
# Load the input latents, if provided
init_latents = context.tensors.load(self.latents.latents_name) if self.latents else None
if init_latents is not None:
init_latents = init_latents.to(device=device, dtype=inference_dtype)
# Prepare input noise (FLUX.2 uses 32 channels)
noise = get_noise_flux2(
num_samples=1,
height=self.height,
width=self.width,
device=device,
dtype=inference_dtype,
seed=self.seed,
)
b, _c, latent_h, latent_w = noise.shape
packed_h = latent_h // 2
packed_w = latent_w // 2
# Load the conditioning data
pos_cond_data = context.conditioning.load(self.positive_text_conditioning.conditioning_name)
assert len(pos_cond_data.conditionings) == 1
pos_flux_conditioning = pos_cond_data.conditionings[0]
assert isinstance(pos_flux_conditioning, FLUXConditioningInfo)
pos_flux_conditioning = pos_flux_conditioning.to(dtype=inference_dtype, device=device)
# Qwen3 stacked embeddings (stored in t5_embeds field for compatibility)
txt = pos_flux_conditioning.t5_embeds
# Generate text position IDs (4D format for FLUX.2: T, H, W, L)
# FLUX.2 uses 4D position coordinates for its rotary position embeddings
# IMPORTANT: Position IDs must be int64 (long) dtype
# Diffusers uses: T=0, H=0, W=0, L=0..seq_len-1
seq_len = txt.shape[1]
txt_ids = torch.zeros(1, seq_len, 4, device=device, dtype=torch.long)
txt_ids[..., 3] = torch.arange(seq_len, device=device, dtype=torch.long) # L coordinate varies
# Load negative conditioning if provided
neg_txt = None
neg_txt_ids = None
if self.negative_text_conditioning is not None:
neg_cond_data = context.conditioning.load(self.negative_text_conditioning.conditioning_name)
assert len(neg_cond_data.conditionings) == 1
neg_flux_conditioning = neg_cond_data.conditionings[0]
assert isinstance(neg_flux_conditioning, FLUXConditioningInfo)
neg_flux_conditioning = neg_flux_conditioning.to(dtype=inference_dtype, device=device)
neg_txt = neg_flux_conditioning.t5_embeds
# For text tokens: T=0, H=0, W=0, L=0..seq_len-1 (only L varies per token)
neg_seq_len = neg_txt.shape[1]
neg_txt_ids = torch.zeros(1, neg_seq_len, 4, device=device, dtype=torch.long)
neg_txt_ids[..., 3] = torch.arange(neg_seq_len, device=device, dtype=torch.long)
# Validate transformer config
transformer_config = context.models.get_config(self.transformer.transformer)
assert transformer_config.base == BaseModelType.Flux2 and transformer_config.type == ModelType.Main
# Calculate the timestep schedule using FLUX.2 specific schedule
# This matches diffusers' Flux2Pipeline implementation
# Note: Schedule shifting is handled by the scheduler via mu parameter
image_seq_len = packed_h * packed_w
timesteps = get_schedule_flux2(
num_steps=self.num_steps,
image_seq_len=image_seq_len,
)
# Compute mu for dynamic schedule shifting (used by FlowMatchEulerDiscreteScheduler)
mu = compute_empirical_mu(image_seq_len=image_seq_len, num_steps=self.num_steps)
# Clip the timesteps schedule based on denoising_start and denoising_end
timesteps = clip_timestep_schedule_fractional(timesteps, self.denoising_start, self.denoising_end)
# Prepare input latent image
if init_latents is not None:
if self.add_noise:
t_0 = timesteps[0]
x = t_0 * noise + (1.0 - t_0) * init_latents
else:
x = init_latents
else:
if self.denoising_start > 1e-5:
raise ValueError("denoising_start should be 0 when initial latents are not provided.")
x = noise
# If len(timesteps) == 1, then short-circuit
if len(timesteps) <= 1:
return x
# Generate image position IDs (FLUX.2 uses 4D coordinates)
# Position IDs use int64 dtype like diffusers
img_ids = generate_img_ids_flux2(h=latent_h, w=latent_w, batch_size=b, device=device)
# Prepare inpaint mask
inpaint_mask = self._prep_inpaint_mask(context, x)
# Pack all latent tensors
init_latents_packed = pack_flux2(init_latents) if init_latents is not None else None
inpaint_mask_packed = pack_flux2(inpaint_mask) if inpaint_mask is not None else None
noise_packed = pack_flux2(noise)
x = pack_flux2(x)
# Apply BN normalization BEFORE denoising (as per diffusers Flux2KleinPipeline)
# BN normalization: y = (x - mean) / std
# This transforms latents to normalized space for the transformer
# IMPORTANT: Also normalize init_latents and noise for inpainting to maintain consistency
if bn_mean is not None and bn_std is not None:
x = self._bn_normalize(x, bn_mean, bn_std)
if init_latents_packed is not None:
init_latents_packed = self._bn_normalize(init_latents_packed, bn_mean, bn_std)
noise_packed = self._bn_normalize(noise_packed, bn_mean, bn_std)
# Verify packed dimensions
assert packed_h * packed_w == x.shape[1]
# Prepare inpaint extension
inpaint_extension: Optional[RectifiedFlowInpaintExtension] = None
if inpaint_mask_packed is not None:
assert init_latents_packed is not None
inpaint_extension = RectifiedFlowInpaintExtension(
init_latents=init_latents_packed,
inpaint_mask=inpaint_mask_packed,
noise=noise_packed,
)
# Prepare CFG scale list
num_steps = len(timesteps) - 1
cfg_scale_list = [self.cfg_scale] * num_steps
# Check if we're doing inpainting (have a mask or a clipped schedule)
is_inpainting = self.denoise_mask is not None or self.denoising_start > 1e-5
# Create scheduler with FLUX.2 Klein configuration
# For inpainting/img2img, use manual Euler stepping to preserve the exact timestep schedule
# For txt2img, use the scheduler with dynamic shifting for optimal results
scheduler = None
if self.scheduler in FLUX_SCHEDULER_MAP and not is_inpainting:
# Only use scheduler for txt2img - use manual Euler for inpainting to preserve exact timesteps
scheduler_class = FLUX_SCHEDULER_MAP[self.scheduler]
scheduler = scheduler_class(
num_train_timesteps=1000,
shift=3.0,
use_dynamic_shifting=True,
base_shift=0.5,
max_shift=1.15,
base_image_seq_len=256,
max_image_seq_len=4096,
time_shift_type="exponential",
)
# Prepare reference image extension for FLUX.2 Klein built-in editing
ref_image_extension = None
if self.kontext_conditioning:
ref_image_extension = Flux2RefImageExtension(
context=context,
ref_image_conditioning=self.kontext_conditioning
if isinstance(self.kontext_conditioning, list)
else [self.kontext_conditioning],
vae_field=self.vae,
device=device,
dtype=inference_dtype,
bn_mean=bn_mean,
bn_std=bn_std,
)
with ExitStack() as exit_stack:
# Load the transformer model
(cached_weights, transformer) = exit_stack.enter_context(
context.models.load(self.transformer.transformer).model_on_device()
)
config = transformer_config
# Determine if the model is quantized
if config.format in [ModelFormat.Diffusers]:
model_is_quantized = False
elif config.format in [
ModelFormat.BnbQuantizedLlmInt8b,
ModelFormat.BnbQuantizednf4b,
ModelFormat.GGUFQuantized,
]:
model_is_quantized = True
else:
model_is_quantized = False
# Apply LoRA models to the transformer
exit_stack.enter_context(
LayerPatcher.apply_smart_model_patches(
model=transformer,
patches=self._lora_iterator(context),
prefix=FLUX_LORA_TRANSFORMER_PREFIX,
dtype=inference_dtype,
cached_weights=cached_weights,
force_sidecar_patching=model_is_quantized,
)
)
# Prepare reference image conditioning if provided
img_cond_seq = None
img_cond_seq_ids = None
if ref_image_extension is not None:
# Ensure batch sizes match
ref_image_extension.ensure_batch_size(x.shape[0])
img_cond_seq, img_cond_seq_ids = (
ref_image_extension.ref_image_latents,
ref_image_extension.ref_image_ids,
)
x = denoise(
model=transformer,
img=x,
img_ids=img_ids,
txt=txt,
txt_ids=txt_ids,
timesteps=timesteps,
step_callback=self._build_step_callback(context),
cfg_scale=cfg_scale_list,
neg_txt=neg_txt,
neg_txt_ids=neg_txt_ids,
scheduler=scheduler,
mu=mu,
inpaint_extension=inpaint_extension,
img_cond_seq=img_cond_seq,
img_cond_seq_ids=img_cond_seq_ids,
)
# Apply BN denormalization if BN stats are available
# The diffusers Flux2KleinPipeline applies: latents = latents * bn_std + bn_mean
# This transforms latents from normalized space to VAE's expected input space
if bn_mean is not None and bn_std is not None:
x = self._bn_denormalize(x, bn_mean, bn_std)
x = unpack_flux2(x.float(), self.height, self.width)
return x
def _prep_inpaint_mask(self, context: InvocationContext, latents: torch.Tensor) -> Optional[torch.Tensor]:
"""Prepare the inpaint mask."""
if self.denoise_mask is None:
return None
mask = context.tensors.load(self.denoise_mask.mask_name)
mask = 1.0 - mask
_, _, latent_height, latent_width = latents.shape
mask = tv_resize(
img=mask,
size=[latent_height, latent_width],
interpolation=tv_transforms.InterpolationMode.BILINEAR,
antialias=False,
)
mask = mask.to(device=latents.device, dtype=latents.dtype)
return mask.expand_as(latents)
def _lora_iterator(self, context: InvocationContext) -> Iterator[Tuple[ModelPatchRaw, float]]:
"""Iterate over LoRA models to apply."""
for lora in self.transformer.loras:
lora_info = context.models.load(lora.lora)
assert isinstance(lora_info.model, ModelPatchRaw)
yield (lora_info.model, lora.weight)
del lora_info
def _build_step_callback(self, context: InvocationContext) -> Callable[[PipelineIntermediateState], None]:
"""Build a callback for step progress updates."""
def step_callback(state: PipelineIntermediateState) -> None:
latents = state.latents.float()
state.latents = unpack_flux2(latents, self.height, self.width).squeeze()
context.util.flux2_step_callback(state)
return step_callback
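# --- Editor's illustrative sketch (not part of the diff) ---------------------
# The _bn_normalize/_bn_denormalize pair above is an affine=False batch norm in
# packed-latent space, so the two calls are exact inverses. A minimal round trip,
# assuming the shapes documented in the docstrings: packed latents of shape
# (B, seq, 128) and running statistics of shape (128,).
import torch

_bn_mean = torch.randn(128)
_bn_std = torch.rand(128) + 0.5          # std must be strictly positive
_x = torch.randn(1, 16, 128)             # packed latents: (B, seq, 128)

_y = (_x - _bn_mean) / _bn_std           # _bn_normalize
_x_back = _y * _bn_std + _bn_mean        # _bn_denormalize

assert torch.allclose(_x, _x_back, atol=1e-5)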

View File

@@ -0,0 +1,222 @@
"""Flux2 Klein Model Loader Invocation.
Loads a Flux2 Klein model with its Qwen3 text encoder and VAE.
Unlike standard FLUX which uses CLIP+T5, Klein uses only Qwen3.
"""
from typing import Literal, Optional
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField
from invokeai.app.invocations.model import (
ModelIdentifierField,
Qwen3EncoderField,
TransformerField,
VAEField,
)
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.taxonomy import (
BaseModelType,
Flux2VariantType,
ModelFormat,
ModelType,
Qwen3VariantType,
SubModelType,
)
@invocation_output("flux2_klein_model_loader_output")
class Flux2KleinModelLoaderOutput(BaseInvocationOutput):
"""Flux2 Klein model loader output."""
transformer: TransformerField = OutputField(description=FieldDescriptions.transformer, title="Transformer")
qwen3_encoder: Qwen3EncoderField = OutputField(description=FieldDescriptions.qwen3_encoder, title="Qwen3 Encoder")
vae: VAEField = OutputField(description=FieldDescriptions.vae, title="VAE")
max_seq_len: Literal[256, 512] = OutputField(
description="The max sequence length for the Qwen3 encoder.",
title="Max Seq Length",
)
@invocation(
"flux2_klein_model_loader",
title="Main Model - Flux2 Klein",
tags=["model", "flux", "klein", "qwen3"],
category="model",
version="1.0.0",
classification=Classification.Prototype,
)
class Flux2KleinModelLoaderInvocation(BaseInvocation):
"""Loads a Flux2 Klein model, outputting its submodels.
Flux2 Klein uses Qwen3 as the text encoder instead of CLIP+T5.
It uses a 32-channel VAE (AutoencoderKLFlux2) instead of the 16-channel FLUX.1 VAE.
When using a Diffusers format model, both VAE and Qwen3 encoder are extracted
automatically from the main model. You can override with standalone models:
- Transformer: Always from Flux2 Klein main model
- VAE: From main model (Diffusers) or standalone VAE
- Qwen3 Encoder: From main model (Diffusers) or standalone Qwen3 model
"""
model: ModelIdentifierField = InputField(
description=FieldDescriptions.flux_model,
input=Input.Direct,
ui_model_base=BaseModelType.Flux2,
ui_model_type=ModelType.Main,
title="Transformer",
)
vae_model: Optional[ModelIdentifierField] = InputField(
default=None,
description="Standalone VAE model. Flux2 Klein uses the same VAE as FLUX (16-channel). "
"If not provided, VAE will be loaded from the Qwen3 Source model.",
input=Input.Direct,
ui_model_base=[BaseModelType.Flux, BaseModelType.Flux2],
ui_model_type=ModelType.VAE,
title="VAE",
)
qwen3_encoder_model: Optional[ModelIdentifierField] = InputField(
default=None,
description="Standalone Qwen3 Encoder model. "
"If not provided, encoder will be loaded from the Qwen3 Source model.",
input=Input.Direct,
ui_model_type=ModelType.Qwen3Encoder,
title="Qwen3 Encoder",
)
qwen3_source_model: Optional[ModelIdentifierField] = InputField(
default=None,
description="Diffusers Flux2 Klein model to extract VAE and/or Qwen3 encoder from. "
"Use this if you don't have separate VAE/Qwen3 models. "
"Ignored if both VAE and Qwen3 Encoder are provided separately.",
input=Input.Direct,
ui_model_base=BaseModelType.Flux2,
ui_model_type=ModelType.Main,
ui_model_format=ModelFormat.Diffusers,
title="Qwen3 Source (Diffusers)",
)
max_seq_len: Literal[256, 512] = InputField(
default=512,
description="Max sequence length for the Qwen3 encoder.",
title="Max Seq Length",
)
def invoke(self, context: InvocationContext) -> Flux2KleinModelLoaderOutput:
# Transformer always comes from the main model
transformer = self.model.model_copy(update={"submodel_type": SubModelType.Transformer})
# Check if main model is Diffusers format (can extract VAE directly)
main_config = context.models.get_config(self.model)
main_is_diffusers = main_config.format == ModelFormat.Diffusers
# Determine VAE source
# IMPORTANT: FLUX.2 Klein uses a 32-channel VAE (AutoencoderKLFlux2), not the 16-channel FLUX.1 VAE.
# The VAE should come from the FLUX.2 Klein Diffusers model, not a separate FLUX VAE.
if self.vae_model is not None:
# Use standalone VAE (user explicitly selected one)
vae = self.vae_model.model_copy(update={"submodel_type": SubModelType.VAE})
elif main_is_diffusers:
# Extract VAE from main model (recommended for FLUX.2)
vae = self.model.model_copy(update={"submodel_type": SubModelType.VAE})
elif self.qwen3_source_model is not None:
# Extract from Qwen3 source Diffusers model
self._validate_diffusers_format(context, self.qwen3_source_model, "Qwen3 Source")
vae = self.qwen3_source_model.model_copy(update={"submodel_type": SubModelType.VAE})
else:
raise ValueError(
"No VAE source provided. Standalone safetensors/GGUF models require a separate VAE. "
"Options:\n"
" 1. Set 'VAE' to a standalone FLUX VAE model\n"
" 2. Set 'Qwen3 Source' to a Diffusers Flux2 Klein model to extract the VAE from"
)
# Determine Qwen3 Encoder source
if self.qwen3_encoder_model is not None:
# Use standalone Qwen3 Encoder - validate it matches the FLUX.2 Klein variant
self._validate_qwen3_encoder_variant(context, main_config)
qwen3_tokenizer = self.qwen3_encoder_model.model_copy(update={"submodel_type": SubModelType.Tokenizer})
qwen3_encoder = self.qwen3_encoder_model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
elif main_is_diffusers:
# Extract from main model (recommended for FLUX.2 Klein)
qwen3_tokenizer = self.model.model_copy(update={"submodel_type": SubModelType.Tokenizer})
qwen3_encoder = self.model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
elif self.qwen3_source_model is not None:
# Extract from separate Diffusers model
self._validate_diffusers_format(context, self.qwen3_source_model, "Qwen3 Source")
qwen3_tokenizer = self.qwen3_source_model.model_copy(update={"submodel_type": SubModelType.Tokenizer})
qwen3_encoder = self.qwen3_source_model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
else:
raise ValueError(
"No Qwen3 Encoder source provided. Standalone safetensors/GGUF models require a separate text encoder. "
"Options:\n"
" 1. Set 'Qwen3 Encoder' to a standalone Qwen3 text encoder model "
"(Klein 4B needs Qwen3 4B, Klein 9B needs Qwen3 8B)\n"
" 2. Set 'Qwen3 Source' to a Diffusers Flux2 Klein model to extract the encoder from"
)
return Flux2KleinModelLoaderOutput(
transformer=TransformerField(transformer=transformer, loras=[]),
qwen3_encoder=Qwen3EncoderField(tokenizer=qwen3_tokenizer, text_encoder=qwen3_encoder),
vae=VAEField(vae=vae),
max_seq_len=self.max_seq_len,
)
def _validate_diffusers_format(
self, context: InvocationContext, model: ModelIdentifierField, model_name: str
) -> None:
"""Validate that a model is in Diffusers format."""
config = context.models.get_config(model)
if config.format != ModelFormat.Diffusers:
raise ValueError(
f"The {model_name} model must be a Diffusers format model. "
f"The selected model '{config.name}' is in {config.format.value} format."
)
def _validate_qwen3_encoder_variant(self, context: InvocationContext, main_config) -> None:
"""Validate that the standalone Qwen3 encoder variant matches the FLUX.2 Klein variant.
- FLUX.2 Klein 4B requires Qwen3 4B encoder
- FLUX.2 Klein 9B requires Qwen3 8B encoder
"""
if self.qwen3_encoder_model is None:
return
# Get the Qwen3 encoder config
qwen3_config = context.models.get_config(self.qwen3_encoder_model)
# Check if the config has a variant field
if not hasattr(qwen3_config, "variant"):
# Can't validate, skip
return
qwen3_variant = qwen3_config.variant
# Get the FLUX.2 Klein variant from the main model config
if not hasattr(main_config, "variant"):
return
flux2_variant = main_config.variant
# Validate the variants match
# Klein4B requires Qwen3_4B, Klein9B/Klein9BBase requires Qwen3_8B
expected_qwen3_variant = None
if flux2_variant == Flux2VariantType.Klein4B:
expected_qwen3_variant = Qwen3VariantType.Qwen3_4B
elif flux2_variant in (Flux2VariantType.Klein9B, Flux2VariantType.Klein9BBase):
expected_qwen3_variant = Qwen3VariantType.Qwen3_8B
if expected_qwen3_variant is not None and qwen3_variant != expected_qwen3_variant:
raise ValueError(
f"Qwen3 encoder variant mismatch: FLUX.2 Klein {flux2_variant.value} requires "
f"{expected_qwen3_variant.value} encoder, but {qwen3_variant.value} was selected. "
f"Please select a matching Qwen3 encoder or use a Diffusers format model which includes the correct encoder."
)
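# --- Editor's illustrative sketch (not part of the diff) ---------------------
# The variant check in _validate_qwen3_encoder_variant boils down to a lookup from
# the FLUX.2 Klein variant to the required Qwen3 encoder variant, using the same
# taxonomy enums imported at the top of this file.
EXPECTED_QWEN3_VARIANT = {
    Flux2VariantType.Klein4B: Qwen3VariantType.Qwen3_4B,
    Flux2VariantType.Klein9B: Qwen3VariantType.Qwen3_8B,
    Flux2VariantType.Klein9BBase: Qwen3VariantType.Qwen3_8B,
}
# e.g. EXPECTED_QWEN3_VARIANT.get(flux2_variant) would yield the expected encoder variant.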

View File

@@ -0,0 +1,222 @@
"""Flux2 Klein Text Encoder Invocation.
Flux2 Klein uses Qwen3 as the text encoder instead of CLIP+T5.
The key difference is that it extracts hidden states from layers (9, 18, 27)
and stacks them together for richer text representations.
This implementation matches the diffusers Flux2KleinPipeline exactly.
"""
from contextlib import ExitStack
from typing import Iterator, Literal, Optional, Tuple
import torch
from transformers import PreTrainedModel, PreTrainedTokenizerBase
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
FluxConditioningField,
Input,
InputField,
TensorField,
UIComponent,
)
from invokeai.app.invocations.model import Qwen3EncoderField
from invokeai.app.invocations.primitives import FluxConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.patches.layer_patcher import LayerPatcher
from invokeai.backend.patches.lora_conversions.flux_lora_constants import FLUX_LORA_T5_PREFIX
from invokeai.backend.patches.model_patch_raw import ModelPatchRaw
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData, FLUXConditioningInfo
from invokeai.backend.util.devices import TorchDevice
# FLUX.2 Klein extracts hidden states from these specific layers
# Matching diffusers Flux2KleinPipeline: (9, 18, 27)
# hidden_states[0] is embedding layer, so layer N is at index N
KLEIN_EXTRACTION_LAYERS = (9, 18, 27)
# Default max sequence length for Klein models
KLEIN_MAX_SEQ_LEN = 512
@invocation(
"flux2_klein_text_encoder",
title="Prompt - Flux2 Klein",
tags=["prompt", "conditioning", "flux", "klein", "qwen3"],
category="conditioning",
version="1.1.0",
classification=Classification.Prototype,
)
class Flux2KleinTextEncoderInvocation(BaseInvocation):
"""Encodes and preps a prompt for Flux2 Klein image generation.
Flux2 Klein uses Qwen3 as the text encoder, extracting hidden states from
layers (9, 18, 27) and stacking them for richer text representations.
This matches the diffusers Flux2KleinPipeline implementation exactly.
"""
prompt: str = InputField(description="Text prompt to encode.", ui_component=UIComponent.Textarea)
qwen3_encoder: Qwen3EncoderField = InputField(
title="Qwen3 Encoder",
description=FieldDescriptions.qwen3_encoder,
input=Input.Connection,
)
max_seq_len: Literal[256, 512] = InputField(
default=512,
description="Max sequence length for the Qwen3 encoder.",
)
mask: Optional[TensorField] = InputField(
default=None,
description="A mask defining the region that this conditioning prompt applies to.",
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> FluxConditioningOutput:
qwen3_embeds, pooled_embeds = self._encode_prompt(context)
# Use FLUXConditioningInfo for compatibility with existing Flux denoiser
# t5_embeds -> qwen3 stacked embeddings
# clip_embeds -> pooled qwen3 embedding
conditioning_data = ConditioningFieldData(
conditionings=[FLUXConditioningInfo(clip_embeds=pooled_embeds, t5_embeds=qwen3_embeds)]
)
conditioning_name = context.conditioning.save(conditioning_data)
return FluxConditioningOutput(
conditioning=FluxConditioningField(conditioning_name=conditioning_name, mask=self.mask)
)
def _encode_prompt(self, context: InvocationContext) -> Tuple[torch.Tensor, torch.Tensor]:
"""Encode prompt using Qwen3 text encoder with Klein-style layer extraction.
This matches the diffusers Flux2KleinPipeline._get_qwen3_prompt_embeds() exactly.
Returns:
Tuple of (stacked_embeddings, pooled_embedding):
- stacked_embeddings: Hidden states from layers (9, 18, 27) stacked together.
Shape: (1, seq_len, hidden_size * 3)
- pooled_embedding: Pooled representation for global conditioning.
Shape: (1, hidden_size)
"""
prompt = self.prompt
device = TorchDevice.choose_torch_device()
text_encoder_info = context.models.load(self.qwen3_encoder.text_encoder)
tokenizer_info = context.models.load(self.qwen3_encoder.tokenizer)
with ExitStack() as exit_stack:
(cached_weights, text_encoder) = exit_stack.enter_context(text_encoder_info.model_on_device())
(_, tokenizer) = exit_stack.enter_context(tokenizer_info.model_on_device())
# Apply LoRA models to the text encoder
lora_dtype = TorchDevice.choose_bfloat16_safe_dtype(device)
exit_stack.enter_context(
LayerPatcher.apply_smart_model_patches(
model=text_encoder,
patches=self._lora_iterator(context),
prefix=FLUX_LORA_T5_PREFIX, # Reuse T5 prefix for Qwen3 LoRAs
dtype=lora_dtype,
cached_weights=cached_weights,
)
)
context.util.signal_progress("Running Qwen3 text encoder (Klein)")
if not isinstance(text_encoder, PreTrainedModel):
raise TypeError(
f"Expected PreTrainedModel for text encoder, got {type(text_encoder).__name__}. "
"The Qwen3 encoder model may be corrupted or incompatible."
)
if not isinstance(tokenizer, PreTrainedTokenizerBase):
raise TypeError(
f"Expected PreTrainedTokenizerBase for tokenizer, got {type(tokenizer).__name__}. "
"The Qwen3 tokenizer may be corrupted or incompatible."
)
# Format messages exactly like diffusers Flux2KleinPipeline:
# - Only user message, NO system message
# - add_generation_prompt=True (adds assistant prefix)
# - enable_thinking=False
messages = [{"role": "user", "content": prompt}]
# Step 1: Apply chat template to get formatted text (tokenize=False)
text: str = tokenizer.apply_chat_template( # type: ignore[assignment]
messages,
tokenize=False,
add_generation_prompt=True, # Adds assistant prefix like diffusers
enable_thinking=False, # Disable thinking mode
)
# Step 2: Tokenize the formatted text
inputs = tokenizer(
text,
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=self.max_seq_len,
)
input_ids = inputs["input_ids"]
attention_mask = inputs["attention_mask"]
# Move to device
input_ids = input_ids.to(device)
attention_mask = attention_mask.to(device)
# Forward pass through the model - matching diffusers exactly
outputs = text_encoder(
input_ids=input_ids,
attention_mask=attention_mask,
output_hidden_states=True,
use_cache=False,
)
# Validate hidden_states output
if not hasattr(outputs, "hidden_states") or outputs.hidden_states is None:
raise RuntimeError(
"Text encoder did not return hidden_states. "
"Ensure output_hidden_states=True is supported by this model."
)
num_hidden_layers = len(outputs.hidden_states)
# Extract and stack hidden states - EXACTLY like diffusers:
# out = torch.stack([output.hidden_states[k] for k in hidden_states_layers], dim=1)
# prompt_embeds = out.permute(0, 2, 1, 3).reshape(batch_size, seq_len, num_channels * hidden_dim)
hidden_states_list = []
for layer_idx in KLEIN_EXTRACTION_LAYERS:
if layer_idx >= num_hidden_layers:
layer_idx = num_hidden_layers - 1
hidden_states_list.append(outputs.hidden_states[layer_idx])
# Stack along dim=1, then permute and reshape - exactly like diffusers
out = torch.stack(hidden_states_list, dim=1)
out = out.to(dtype=text_encoder.dtype, device=device)
batch_size, num_channels, seq_len, hidden_dim = out.shape
prompt_embeds = out.permute(0, 2, 1, 3).reshape(batch_size, seq_len, num_channels * hidden_dim)
# Create pooled embedding for global conditioning
# Use mean pooling over the sequence (excluding padding)
# This serves a similar role to CLIP's pooled output in standard FLUX
last_hidden_state = outputs.hidden_states[-1] # Use last layer for pooling
# Expand mask to match hidden state dimensions
expanded_mask = attention_mask.unsqueeze(-1).expand_as(last_hidden_state).float()
sum_embeds = (last_hidden_state * expanded_mask).sum(dim=1)
num_tokens = expanded_mask.sum(dim=1).clamp(min=1)
pooled_embeds = sum_embeds / num_tokens
return prompt_embeds, pooled_embeds
def _lora_iterator(self, context: InvocationContext) -> Iterator[Tuple[ModelPatchRaw, float]]:
"""Iterate over LoRA models to apply to the Qwen3 text encoder."""
for lora in self.qwen3_encoder.loras:
lora_info = context.models.load(lora.lora)
if not isinstance(lora_info.model, ModelPatchRaw):
raise TypeError(
f"Expected ModelPatchRaw for LoRA '{lora.lora.key}', got {type(lora_info.model).__name__}. "
"The LoRA model may be corrupted or incompatible."
)
yield (lora_info.model, lora.weight)
del lora_info
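# --- Editor's illustrative sketch (not part of the diff) ---------------------
# Shape walk-through of the Klein-style embedding construction above, using dummy
# tensors. The hidden size 2560 is illustrative only; the three extracted layers
# are stacked on dim=1, then permuted/reshaped to (batch, seq_len, 3 * hidden_dim),
# and the pooled embedding is a padding-aware mean over the last hidden state.
import torch

_batch, _seq_len, _hidden = 1, 512, 2560
_hidden_states = [torch.randn(_batch, _seq_len, _hidden) for _ in range(3)]
_attention_mask = torch.ones(_batch, _seq_len)

_out = torch.stack(_hidden_states, dim=1)                         # (B, 3, seq, hidden)
_prompt_embeds = _out.permute(0, 2, 1, 3).reshape(_batch, _seq_len, 3 * _hidden)
assert _prompt_embeds.shape == (_batch, _seq_len, 3 * _hidden)

_last = _hidden_states[-1]                                         # stand-in for hidden_states[-1]
_mask = _attention_mask.unsqueeze(-1).expand_as(_last).float()
_pooled = (_last * _mask).sum(dim=1) / _mask.sum(dim=1).clamp(min=1)
assert _pooled.shape == (_batch, _hidden)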

View File

@@ -0,0 +1,106 @@
"""Flux2 Klein VAE Decode Invocation.
Decodes latents to images using the FLUX.2 32-channel VAE (AutoencoderKLFlux2).
"""
import torch
from einops import rearrange
from PIL import Image
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
InputField,
LatentsField,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.util.devices import TorchDevice
@invocation(
"flux2_vae_decode",
title="Latents to Image - FLUX2",
tags=["latents", "image", "vae", "l2i", "flux2", "klein"],
category="latents",
version="1.0.0",
classification=Classification.Prototype,
)
class Flux2VaeDecodeInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents using FLUX.2 Klein's 32-channel VAE."""
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
vae: VAEField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
def _vae_decode(self, vae_info: LoadedModel, latents: torch.Tensor) -> Image.Image:
"""Decode latents to image using FLUX.2 VAE.
Input latents should already be in the correct space after BN denormalization
was applied in the denoiser. The VAE expects (B, 32, H, W) format.
"""
with vae_info.model_on_device() as (_, vae):
vae_dtype = next(iter(vae.parameters())).dtype
device = TorchDevice.choose_torch_device()
latents = latents.to(device=device, dtype=vae_dtype)
# Decode using diffusers API
decoded = vae.decode(latents, return_dict=False)[0]
# Debug: Log decoded output statistics
print(
f"[FLUX.2 VAE] Decoded output: shape={decoded.shape}, "
f"min={decoded.min().item():.4f}, max={decoded.max().item():.4f}, "
f"mean={decoded.mean().item():.4f}"
)
# Check per-channel statistics to diagnose color issues
for c in range(min(3, decoded.shape[1])):
ch = decoded[0, c]
print(
f"[FLUX.2 VAE] Channel {c}: min={ch.min().item():.4f}, "
f"max={ch.max().item():.4f}, mean={ch.mean().item():.4f}"
)
# Convert from [-1, 1] to [0, 1] then to [0, 255] PIL image
img = (decoded / 2 + 0.5).clamp(0, 1)
img = rearrange(img[0], "c h w -> h w c")
img_np = (img * 255).byte().cpu().numpy()
# Explicitly create RGB image (not grayscale)
img_pil = Image.fromarray(img_np, mode="RGB")
return img_pil
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.tensors.load(self.latents.latents_name)
# Log latent statistics for debugging black image issues
context.logger.debug(
f"FLUX.2 VAE decode input: shape={latents.shape}, "
f"min={latents.min().item():.4f}, max={latents.max().item():.4f}, "
f"mean={latents.mean().item():.4f}"
)
# Warn if input latents are all zeros or very small (would cause black images)
if latents.abs().max() < 1e-6:
context.logger.warning(
"FLUX.2 VAE decode received near-zero latents! This will cause black images. "
"The latent cache may be corrupted - try clearing the cache."
)
vae_info = context.models.load(self.vae.vae)
context.util.signal_progress("Running VAE")
image = self._vae_decode(vae_info=vae_info, latents=latents)
TorchDevice.empty_cache()
image_dto = context.images.save(image=image)
return ImageOutput.build(image_dto)

View File

@@ -0,0 +1,88 @@
"""Flux2 Klein VAE Encode Invocation.
Encodes images to latents using the FLUX.2 32-channel VAE (AutoencoderKLFlux2).
"""
import einops
import torch
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
Input,
InputField,
)
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
from invokeai.backend.util.devices import TorchDevice
@invocation(
"flux2_vae_encode",
title="Image to Latents - FLUX2",
tags=["latents", "image", "vae", "i2l", "flux2", "klein"],
category="latents",
version="1.0.0",
classification=Classification.Prototype,
)
class Flux2VaeEncodeInvocation(BaseInvocation):
"""Encodes an image into latents using FLUX.2 Klein's 32-channel VAE."""
image: ImageField = InputField(
description="The image to encode.",
)
vae: VAEField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
def _vae_encode(self, vae_info: LoadedModel, image_tensor: torch.Tensor) -> torch.Tensor:
"""Encode image to latents using FLUX.2 VAE.
The VAE encodes to 32-channel latent space.
Output latents shape: (B, 32, H/8, W/8).
"""
with vae_info.model_on_device() as (_, vae):
vae_dtype = next(iter(vae.parameters())).dtype
device = TorchDevice.choose_torch_device()
image_tensor = image_tensor.to(device=device, dtype=vae_dtype)
# Encode using diffusers API
# The VAE.encode() returns a DiagonalGaussianDistribution-like object
latent_dist = vae.encode(image_tensor, return_dict=False)[0]
# Sample from the distribution (or use mode for deterministic output)
# Using mode() for deterministic encoding
if hasattr(latent_dist, "mode"):
latents = latent_dist.mode()
elif hasattr(latent_dist, "sample"):
# Fall back to sampling if mode is not available
generator = torch.Generator(device=device).manual_seed(0)
latents = latent_dist.sample(generator=generator)
else:
# Direct tensor output (some VAE implementations)
latents = latent_dist
return latents
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
image = context.images.get_pil(self.image.image_name)
vae_info = context.models.load(self.vae.vae)
# Convert image to tensor (HWC -> CHW, normalize to [-1, 1])
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
context.util.signal_progress("Running VAE Encode")
latents = self._vae_encode(vae_info=vae_info, image_tensor=image_tensor)
latents = latents.to("cpu")
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)

View File

@@ -4,9 +4,10 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, InputField, OutputField, UIType
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, InputField, OutputField
from invokeai.app.invocations.model import ControlLoRAField, ModelIdentifierField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType
@invocation_output("flux_control_lora_loader_output")
@@ -29,7 +30,10 @@ class FluxControlLoRALoaderInvocation(BaseInvocation):
"""LoRA model and Image to use with FLUX transformer generation."""
lora: ModelIdentifierField = InputField(
description=FieldDescriptions.control_lora_model, title="Control LoRA", ui_type=UIType.ControlLoRAModel
description=FieldDescriptions.control_lora_model,
title="Control LoRA",
ui_model_base=BaseModelType.Flux,
ui_model_type=ModelType.ControlLoRa,
)
image: ImageField = InputField(description="The image to encode.")
weight: float = InputField(description="The weight of the LoRA.", default=1.0)

View File

@@ -6,11 +6,12 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, InputField, OutputField, UIType
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, InputField, OutputField
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_RESIZE_VALUES
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType
class FluxControlNetField(BaseModel):
@@ -57,7 +58,9 @@ class FluxControlNetInvocation(BaseInvocation):
image: ImageField = InputField(description="The control image")
control_model: ModelIdentifierField = InputField(
description=FieldDescriptions.controlnet_model, ui_type=UIType.ControlNetModel
description=FieldDescriptions.controlnet_model,
ui_model_base=BaseModelType.Flux,
ui_model_type=ModelType.ControlNet,
)
control_weight: float | list[float] = InputField(
default=1.0, ge=-1, le=2, description="The weight given to the ControlNet"

View File

@@ -32,6 +32,13 @@ from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.controlnet.instantx_controlnet_flux import InstantXControlNetFlux
from invokeai.backend.flux.controlnet.xlabs_controlnet_flux import XLabsControlNetFlux
from invokeai.backend.flux.denoise import denoise
from invokeai.backend.flux.dype.presets import (
DYPE_PRESET_LABELS,
DYPE_PRESET_OFF,
DyPEPreset,
get_dype_config_from_preset,
)
from invokeai.backend.flux.extensions.dype_extension import DyPEExtension
from invokeai.backend.flux.extensions.instantx_controlnet_extension import InstantXControlNetExtension
from invokeai.backend.flux.extensions.kontext_extension import KontextExtension
from invokeai.backend.flux.extensions.regional_prompting_extension import RegionalPromptingExtension
@@ -47,8 +54,9 @@ from invokeai.backend.flux.sampling_utils import (
pack,
unpack,
)
from invokeai.backend.flux.schedulers import FLUX_SCHEDULER_LABELS, FLUX_SCHEDULER_MAP, FLUX_SCHEDULER_NAME_VALUES
from invokeai.backend.flux.text_conditioning import FluxReduxConditioning, FluxTextConditioning
from invokeai.backend.model_manager.taxonomy import ModelFormat, ModelVariantType
from invokeai.backend.model_manager.taxonomy import BaseModelType, FluxVariantType, ModelFormat, ModelType
from invokeai.backend.patches.layer_patcher import LayerPatcher
from invokeai.backend.patches.lora_conversions.flux_lora_constants import FLUX_LORA_TRANSFORMER_PREFIX
from invokeai.backend.patches.model_patch_raw import ModelPatchRaw
@@ -63,7 +71,7 @@ from invokeai.backend.util.devices import TorchDevice
title="FLUX Denoise",
tags=["image", "flux"],
category="image",
version="4.0.0",
version="4.5.0",
)
class FluxDenoiseInvocation(BaseInvocation):
"""Run denoising process with a FLUX transformer model."""
@@ -132,6 +140,12 @@ class FluxDenoiseInvocation(BaseInvocation):
num_steps: int = InputField(
default=4, description="Number of diffusion steps. Recommended values are schnell: 4, dev: 50."
)
scheduler: FLUX_SCHEDULER_NAME_VALUES = InputField(
default="euler",
description="Scheduler (sampler) for the denoising process. 'euler' is fast and standard. "
"'heun' is 2nd-order (better quality, 2x slower). 'lcm' is optimized for few steps.",
ui_choice_labels=FLUX_SCHEDULER_LABELS,
)
guidance: float = InputField(
default=4.0,
description="The guidance strength. Higher values adhere more strictly to the prompt, and will produce less diverse images. FLUX dev only, ignored for schnell.",
@@ -153,12 +167,34 @@ class FluxDenoiseInvocation(BaseInvocation):
description=FieldDescriptions.ip_adapter, title="IP-Adapter", default=None, input=Input.Connection
)
kontext_conditioning: Optional[FluxKontextConditioningField] = InputField(
kontext_conditioning: FluxKontextConditioningField | list[FluxKontextConditioningField] | None = InputField(
default=None,
description="FLUX Kontext conditioning (reference image).",
input=Input.Connection,
)
# DyPE (Dynamic Position Extrapolation) for high-resolution generation
dype_preset: DyPEPreset = InputField(
default=DYPE_PRESET_OFF,
description="DyPE preset for high-resolution generation. 'auto' enables automatically for resolutions > 1536px. '4k' uses optimized settings for 4K output.",
ui_order=100,
ui_choice_labels=DYPE_PRESET_LABELS,
)
dype_scale: Optional[float] = InputField(
default=None,
ge=0.0,
le=8.0,
description="DyPE magnitude (λs). Higher values = stronger extrapolation. Only used when dype_preset is not 'off'.",
ui_order=101,
)
dype_exponent: Optional[float] = InputField(
default=None,
ge=0.0,
le=1000.0,
description="DyPE decay speed (λt). Controls transition from low to high frequency detail. Only used when dype_preset is not 'off'.",
ui_order=102,
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = self._run_diffusion(context)
@@ -232,7 +268,14 @@ class FluxDenoiseInvocation(BaseInvocation):
)
transformer_config = context.models.get_config(self.transformer.transformer)
is_schnell = "schnell" in getattr(transformer_config, "config_path", "")
assert (
transformer_config.base in (BaseModelType.Flux, BaseModelType.Flux2)
and transformer_config.type is ModelType.Main
)
# Schnell is only for FLUX.1, FLUX.2 Klein behaves like Dev (with guidance)
is_schnell = (
transformer_config.base is BaseModelType.Flux and transformer_config.variant is FluxVariantType.Schnell
)
# Calculate the timestep schedule.
timesteps = get_schedule(
@@ -241,6 +284,12 @@ class FluxDenoiseInvocation(BaseInvocation):
shift=not is_schnell,
)
# Create scheduler if not using default euler
scheduler = None
if self.scheduler in FLUX_SCHEDULER_MAP:
scheduler_class = FLUX_SCHEDULER_MAP[self.scheduler]
scheduler = scheduler_class(num_train_timesteps=1000)
# Clip the timesteps schedule based on denoising_start and denoising_end.
timesteps = clip_timestep_schedule_fractional(timesteps, self.denoising_start, self.denoising_end)
@@ -277,7 +326,7 @@ class FluxDenoiseInvocation(BaseInvocation):
# Prepare the extra image conditioning tensor (img_cond) for either FLUX structural control or FLUX Fill.
img_cond: torch.Tensor | None = None
is_flux_fill = transformer_config.variant == ModelVariantType.Inpaint # type: ignore
is_flux_fill = transformer_config.variant is FluxVariantType.DevFill
if is_flux_fill:
img_cond = self._prep_flux_fill_img_cond(
context, device=TorchDevice.choose_torch_device(), dtype=inference_dtype
@@ -328,6 +377,21 @@ class FluxDenoiseInvocation(BaseInvocation):
cfg_scale_end_step=self.cfg_scale_end_step,
)
kontext_extension = None
if self.kontext_conditioning:
if not self.controlnet_vae:
raise ValueError("A VAE (e.g., controlnet_vae) must be provided to use Kontext conditioning.")
kontext_extension = KontextExtension(
context=context,
kontext_conditioning=self.kontext_conditioning
if isinstance(self.kontext_conditioning, list)
else [self.kontext_conditioning],
vae_field=self.controlnet_vae,
device=TorchDevice.choose_torch_device(),
dtype=inference_dtype,
)
with ExitStack() as exit_stack:
# Prepare ControlNet extensions.
# Note: We do this before loading the transformer model to minimize peak memory (see implementation).
@@ -385,19 +449,6 @@ class FluxDenoiseInvocation(BaseInvocation):
dtype=inference_dtype,
)
kontext_extension = None
if self.kontext_conditioning is not None:
if not self.controlnet_vae:
raise ValueError("A VAE (e.g., controlnet_vae) must be provided to use Kontext conditioning.")
kontext_extension = KontextExtension(
context=context,
kontext_conditioning=self.kontext_conditioning,
vae_field=self.controlnet_vae,
device=TorchDevice.choose_torch_device(),
dtype=inference_dtype,
)
# Prepare Kontext conditioning if provided
img_cond_seq = None
img_cond_seq_ids = None
@@ -406,6 +457,30 @@ class FluxDenoiseInvocation(BaseInvocation):
kontext_extension.ensure_batch_size(x.shape[0])
img_cond_seq, img_cond_seq_ids = kontext_extension.kontext_latents, kontext_extension.kontext_ids
# Prepare DyPE extension for high-resolution generation
dype_extension: DyPEExtension | None = None
dype_config = get_dype_config_from_preset(
preset=self.dype_preset,
width=self.width,
height=self.height,
custom_scale=self.dype_scale,
custom_exponent=self.dype_exponent,
)
if dype_config is not None:
dype_extension = DyPEExtension(
config=dype_config,
target_height=self.height,
target_width=self.width,
)
context.logger.info(
f"DyPE enabled: resolution={self.width}x{self.height}, preset={self.dype_preset}, "
f"method={dype_config.method}, scale={dype_config.dype_scale:.2f}, "
f"exponent={dype_config.dype_exponent:.2f}, start_sigma={dype_config.dype_start_sigma:.2f}, "
f"base_resolution={dype_config.base_resolution}"
)
else:
context.logger.debug(f"DyPE disabled: resolution={self.width}x{self.height}, preset={self.dype_preset}")
x = denoise(
model=transformer,
img=x,
@@ -423,6 +498,8 @@ class FluxDenoiseInvocation(BaseInvocation):
img_cond=img_cond,
img_cond_seq=img_cond_seq,
img_cond_seq_ids=img_cond_seq_ids,
dype_extension=dype_extension,
scheduler=scheduler,
)
x = unpack(x.float(), self.height, self.width)

View File

@@ -5,7 +5,7 @@ from pydantic import field_validator, model_validator
from typing_extensions import Self
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import InputField, UIType
from invokeai.app.invocations.fields import InputField
from invokeai.app.invocations.ip_adapter import (
CLIP_VISION_MODEL_MAP,
IPAdapterField,
@@ -16,10 +16,8 @@ from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import (
IPAdapterCheckpointConfig,
IPAdapterInvokeAIConfig,
)
from invokeai.backend.model_manager.configs.ip_adapter import IPAdapter_Checkpoint_FLUX_Config
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType
@invocation(
@@ -36,7 +34,10 @@ class FluxIPAdapterInvocation(BaseInvocation):
image: ImageField = InputField(description="The IP-Adapter image prompt(s).")
ip_adapter_model: ModelIdentifierField = InputField(
description="The IP-Adapter model.", title="IP-Adapter Model", ui_type=UIType.IPAdapterModel
description="The IP-Adapter model.",
title="IP-Adapter Model",
ui_model_base=BaseModelType.Flux,
ui_model_type=ModelType.IPAdapter,
)
# Currently, the only known ViT model used by FLUX IP-Adapters is ViT-L.
clip_vision_model: Literal["ViT-L"] = InputField(description="CLIP Vision model to use.", default="ViT-L")
@@ -64,7 +65,7 @@ class FluxIPAdapterInvocation(BaseInvocation):
def invoke(self, context: InvocationContext) -> IPAdapterOutput:
# Lookup the CLIP Vision encoder that is intended to be used with the IP-Adapter model.
ip_adapter_info = context.models.get_config(self.ip_adapter_model.key)
assert isinstance(ip_adapter_info, (IPAdapterInvokeAIConfig, IPAdapterCheckpointConfig))
assert isinstance(ip_adapter_info, IPAdapter_Checkpoint_FLUX_Config)
# Note: There is a IPAdapterInvokeAIConfig.image_encoder_model_id field, but it isn't trustworthy.
image_encoder_starter_model = CLIP_VISION_MODEL_MAP[self.clip_vision_model]

View File

@@ -6,10 +6,10 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField
from invokeai.app.invocations.model import CLIPField, LoRAField, ModelIdentifierField, T5EncoderField, TransformerField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.taxonomy import BaseModelType
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType
@invocation_output("flux_lora_loader_output")
@@ -36,7 +36,10 @@ class FluxLoRALoaderInvocation(BaseInvocation):
"""Apply a LoRA model to a FLUX transformer and/or text encoder."""
lora: ModelIdentifierField = InputField(
description=FieldDescriptions.lora_model, title="LoRA", ui_type=UIType.LoRAModel
description=FieldDescriptions.lora_model,
title="LoRA",
ui_model_base=BaseModelType.Flux,
ui_model_type=ModelType.LoRA,
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
transformer: TransformerField | None = InputField(
@@ -159,7 +162,7 @@ class FLUXLoRACollectionLoader(BaseInvocation):
if not context.models.exists(lora.lora.key):
raise Exception(f"Unknown lora: {lora.lora.key}!")
assert lora.lora.base is BaseModelType.Flux
assert lora.lora.base in (BaseModelType.Flux, BaseModelType.Flux2)
added_loras.append(lora.lora.key)

View File

@@ -6,18 +6,16 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField
from invokeai.app.invocations.model import CLIPField, ModelIdentifierField, T5EncoderField, TransformerField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.t5_model_identifier import (
preprocess_t5_encoder_model_identifier,
preprocess_t5_tokenizer_model_identifier,
)
from invokeai.backend.flux.util import max_seq_lengths
from invokeai.backend.model_manager.config import (
CheckpointConfigBase,
)
from invokeai.backend.model_manager.taxonomy import SubModelType
from invokeai.backend.flux.util import get_flux_max_seq_length
from invokeai.backend.model_manager.configs.base import Checkpoint_Config_Base
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType, SubModelType
@invocation_output("flux_model_loader_output")
@@ -39,30 +37,34 @@ class FluxModelLoaderOutput(BaseInvocationOutput):
title="Main Model - FLUX",
tags=["model", "flux"],
category="model",
version="1.0.6",
version="1.0.7",
)
class FluxModelLoaderInvocation(BaseInvocation):
"""Loads a flux base model, outputting its submodels."""
model: ModelIdentifierField = InputField(
description=FieldDescriptions.flux_model,
ui_type=UIType.FluxMainModel,
input=Input.Direct,
ui_model_base=BaseModelType.Flux,
ui_model_type=ModelType.Main,
)
t5_encoder_model: ModelIdentifierField = InputField(
description=FieldDescriptions.t5_encoder, ui_type=UIType.T5EncoderModel, input=Input.Direct, title="T5 Encoder"
description=FieldDescriptions.t5_encoder,
title="T5 Encoder",
ui_model_type=ModelType.T5Encoder,
)
clip_embed_model: ModelIdentifierField = InputField(
description=FieldDescriptions.clip_embed_model,
ui_type=UIType.CLIPEmbedModel,
input=Input.Direct,
title="CLIP Embed",
ui_model_type=ModelType.CLIPEmbed,
)
vae_model: ModelIdentifierField = InputField(
description=FieldDescriptions.vae_model, ui_type=UIType.FluxVAEModel, title="VAE"
description=FieldDescriptions.vae_model,
title="VAE",
ui_model_base=BaseModelType.Flux,
ui_model_type=ModelType.VAE,
)
def invoke(self, context: InvocationContext) -> FluxModelLoaderOutput:
@@ -80,12 +82,12 @@ class FluxModelLoaderInvocation(BaseInvocation):
t5_encoder = preprocess_t5_encoder_model_identifier(self.t5_encoder_model)
transformer_config = context.models.get_config(transformer)
assert isinstance(transformer_config, CheckpointConfigBase)
assert isinstance(transformer_config, Checkpoint_Config_Base)
return FluxModelLoaderOutput(
transformer=TransformerField(transformer=transformer, loras=[]),
clip=CLIPField(tokenizer=tokenizer, text_encoder=clip_encoder, loras=[], skipped_layers=0),
t5_encoder=T5EncoderField(tokenizer=tokenizer2, text_encoder=t5_encoder, loras=[]),
vae=VAEField(vae=vae),
max_seq_len=max_seq_lengths[transformer_config.config_path],
max_seq_len=get_flux_max_seq_length(transformer_config.variant),
)

View File

@@ -18,16 +18,15 @@ from invokeai.app.invocations.fields import (
InputField,
OutputField,
TensorField,
UIType,
)
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.redux.flux_redux_model import FluxReduxModel
from invokeai.backend.model_manager import BaseModelType, ModelType
from invokeai.backend.model_manager.config import AnyModelConfig
from invokeai.backend.model_manager.configs.factory import AnyModelConfig
from invokeai.backend.model_manager.starter_models import siglip
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType
from invokeai.backend.sig_lip.sig_lip_pipeline import SigLipPipeline
from invokeai.backend.util.devices import TorchDevice
@@ -64,7 +63,8 @@ class FluxReduxInvocation(BaseInvocation):
redux_model: ModelIdentifierField = InputField(
description="The FLUX Redux model to use.",
title="FLUX Redux Model",
ui_type=UIType.FluxReduxModel,
ui_model_base=BaseModelType.Flux,
ui_model_type=ModelType.FluxRedux,
)
downsampling_factor: int = InputField(
ge=1,

View File

@@ -17,7 +17,7 @@ from invokeai.app.invocations.model import CLIPField, T5EncoderField
from invokeai.app.invocations.primitives import FluxConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.modules.conditioner import HFEncoder
from invokeai.backend.model_manager import ModelFormat
from invokeai.backend.model_manager.taxonomy import ModelFormat
from invokeai.backend.patches.layer_patcher import LayerPatcher
from invokeai.backend.patches.lora_conversions.flux_lora_constants import FLUX_LORA_CLIP_PREFIX, FLUX_LORA_T5_PREFIX
from invokeai.backend.patches.model_patch_raw import ModelPatchRaw

View File

@@ -3,7 +3,6 @@ from einops import rearrange
from PIL import Image
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
@@ -18,6 +17,7 @@ from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.modules.autoencoder import AutoEncoder
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.vae_working_memory import estimate_vae_working_memory_flux
@invocation(
@@ -39,17 +39,11 @@ class FluxVaeDecodeInvocation(BaseInvocation, WithMetadata, WithBoard):
input=Input.Connection,
)
def _estimate_working_memory(self, latents: torch.Tensor, vae: AutoEncoder) -> int:
"""Estimate the working memory required by the invocation in bytes."""
out_h = LATENT_SCALE_FACTOR * latents.shape[-2]
out_w = LATENT_SCALE_FACTOR * latents.shape[-1]
element_size = next(vae.parameters()).element_size()
scaling_constant = 2200 # Determined experimentally.
working_memory = out_h * out_w * element_size * scaling_constant
return int(working_memory)
def _vae_decode(self, vae_info: LoadedModel, latents: torch.Tensor) -> Image.Image:
estimated_working_memory = self._estimate_working_memory(latents, vae_info.model)
assert isinstance(vae_info.model, AutoEncoder)
estimated_working_memory = estimate_vae_working_memory_flux(
operation="decode", image_tensor=latents, vae=vae_info.model
)
with vae_info.model_on_device(working_mem_bytes=estimated_working_memory) as (_, vae):
assert isinstance(vae, AutoEncoder)
vae_dtype = next(iter(vae.parameters())).dtype

View File

@@ -12,9 +12,10 @@ from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.modules.autoencoder import AutoEncoder
from invokeai.backend.model_manager import LoadedModel
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.vae_working_memory import estimate_vae_working_memory_flux
@invocation(
@@ -41,8 +42,12 @@ class FluxVaeEncodeInvocation(BaseInvocation):
# TODO(ryand): Write a util function for generating random tensors that is consistent across devices / dtypes.
# There's a starting point in get_noise(...), but it needs to be extracted and generalized. This function
# should be used for VAE encode sampling.
assert isinstance(vae_info.model, AutoEncoder)
estimated_working_memory = estimate_vae_working_memory_flux(
operation="encode", image_tensor=image_tensor, vae=vae_info.model
)
generator = torch.Generator(device=TorchDevice.choose_torch_device()).manual_seed(0)
with vae_info as vae:
with vae_info.model_on_device(working_mem_bytes=estimated_working_memory) as (_, vae):
assert isinstance(vae, AutoEncoder)
vae_dtype = next(iter(vae.parameters())).dtype
image_tensor = image_tensor.to(device=TorchDevice.choose_torch_device(), dtype=vae_dtype)

View File

@@ -46,7 +46,12 @@ class IdealSizeInvocation(BaseInvocation):
dimension = 512
elif unet_config.base == BaseModelType.StableDiffusion2:
dimension = 768
elif unet_config.base in (BaseModelType.StableDiffusionXL, BaseModelType.Flux, BaseModelType.StableDiffusion3):
elif unet_config.base in (
BaseModelType.StableDiffusionXL,
BaseModelType.Flux,
BaseModelType.Flux2,
BaseModelType.StableDiffusion3,
):
dimension = 1024
else:
raise ValueError(f"Unsupported model type: {unet_config.base}")

View File

@@ -649,102 +649,104 @@ class MaskCombineInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Color Correct",
tags=["image", "color"],
category="image",
version="1.2.2",
version="2.0.0",
)
class ColorCorrectInvocation(BaseInvocation, WithMetadata, WithBoard):
"""
Shifts the colors of a target image to match the reference image, optionally
using a mask to only color-correct certain regions of the target image.
Matches the color histogram of a base image to a reference image, optionally
using a mask to only color-correct certain regions of the base image.
"""
image: ImageField = InputField(description="The image to color-correct")
reference: ImageField = InputField(description="Reference image for color-correction")
mask: Optional[ImageField] = InputField(default=None, description="Mask to use when applying color-correction")
mask_blur_radius: float = InputField(default=8, description="Mask blur radius")
base_image: ImageField = InputField(description="The image to color-correct")
color_reference: ImageField = InputField(description="Reference image for color-correction")
mask: Optional[ImageField] = InputField(default=None, description="Optional mask to limit color correction area")
colorspace: Literal["RGB", "YCbCr", "YCbCr-Chroma", "YCbCr-Luma"] = InputField(
default="RGB", description="Colorspace in which to apply histogram matching", title="Color Space"
)
def _match_histogram_channel(self, source: numpy.ndarray, reference: numpy.ndarray) -> numpy.ndarray:
"""Match histogram of source channel to reference channel using cumulative distribution functions."""
# Compute histograms
source_hist, _ = numpy.histogram(source.flatten(), bins=256, range=(0, 256))
reference_hist, _ = numpy.histogram(reference.flatten(), bins=256, range=(0, 256))
# Compute cumulative distribution functions
source_cdf = source_hist.cumsum()
reference_cdf = reference_hist.cumsum()
# Normalize CDFs (avoid division by zero)
if source_cdf[-1] > 0:
source_cdf = source_cdf / source_cdf[-1]
if reference_cdf[-1] > 0:
reference_cdf = reference_cdf / reference_cdf[-1]
# Create lookup table using linear interpolation
lookup_table = numpy.interp(source_cdf, reference_cdf, numpy.arange(256))
# Apply lookup table to source image
return lookup_table[source].astype(numpy.uint8)
def invoke(self, context: InvocationContext) -> ImageOutput:
pil_init_mask = None
# Load images as RGBA
base_image = context.images.get_pil(self.base_image.image_name, "RGBA")
# Store original alpha channel
original_alpha = base_image.getchannel("A")
# Convert to working colorspace
if self.colorspace == "RGB":
base_array = numpy.asarray(base_image.convert("RGB"), dtype=numpy.uint8)
ref_rgb = context.images.get_pil(self.color_reference.image_name, "RGB")
ref_array = numpy.asarray(ref_rgb, dtype=numpy.uint8)
channels_to_match = [0, 1, 2] # R, G, B
else:
# Convert to YCbCr colorspace
base_ycbcr = base_image.convert("YCbCr")
ref_ycbcr = context.images.get_pil(self.color_reference.image_name, "YCbCr")
base_array = numpy.asarray(base_ycbcr, dtype=numpy.uint8)
ref_array = numpy.asarray(ref_ycbcr, dtype=numpy.uint8)
# Determine which channels to match based on mode
if self.colorspace == "YCbCr":
channels_to_match = [0, 1, 2] # Y, Cb, Cr
elif self.colorspace == "YCbCr-Chroma":
channels_to_match = [1, 2] # Cb, Cr only
else: # YCbCr-Luma
channels_to_match = [0] # Y only
# Apply histogram matching to selected channels
corrected_array = base_array.copy()
for channel_idx in channels_to_match:
corrected_array[:, :, channel_idx] = self._match_histogram_channel(
base_array[:, :, channel_idx], ref_array[:, :, channel_idx]
)
# Convert back to RGB if we were in YCbCr
if self.colorspace != "RGB":
corrected_image = Image.fromarray(corrected_array, mode="YCbCr").convert("RGB")
else:
corrected_image = Image.fromarray(corrected_array, mode="RGB")
# Apply mask if provided (white = original, black = result)
if self.mask is not None:
pil_init_mask = context.images.get_pil(self.mask.image_name).convert("L")
init_image = context.images.get_pil(self.reference.image_name)
result = context.images.get_pil(self.image.image_name).convert("RGBA")
# if init_image is None or init_mask is None:
# return result
# Get the original alpha channel of the mask if there is one.
# Otherwise it is some other black/white image format ('1', 'L' or 'RGB')
# pil_init_mask = (
# init_mask.getchannel("A")
# if init_mask.mode == "RGBA"
# else init_mask.convert("L")
# )
pil_init_image = init_image.convert("RGBA") # Add an alpha channel if one doesn't exist
# Build an image with only visible pixels from source to use as reference for color-matching.
init_rgb_pixels = numpy.asarray(init_image.convert("RGB"), dtype=numpy.uint8)
init_a_pixels = numpy.asarray(pil_init_image.getchannel("A"), dtype=numpy.uint8)
init_mask_pixels = numpy.asarray(pil_init_mask, dtype=numpy.uint8)
# Get numpy version of result
np_image = numpy.asarray(result.convert("RGB"), dtype=numpy.uint8)
# Mask and calculate mean and standard deviation
mask_pixels = init_a_pixels * init_mask_pixels > 0
np_init_rgb_pixels_masked = init_rgb_pixels[mask_pixels, :]
np_image_masked = np_image[mask_pixels, :]
if np_init_rgb_pixels_masked.size > 0:
init_means = np_init_rgb_pixels_masked.mean(axis=0)
init_std = np_init_rgb_pixels_masked.std(axis=0)
gen_means = np_image_masked.mean(axis=0)
gen_std = np_image_masked.std(axis=0)
# Color correct
np_matched_result = np_image.copy()
np_matched_result[:, :, :] = (
(
(
(np_matched_result[:, :, :].astype(numpy.float32) - gen_means[None, None, :])
/ gen_std[None, None, :]
)
* init_std[None, None, :]
+ init_means[None, None, :]
)
.clip(0, 255)
.astype(numpy.uint8)
)
matched_result = Image.fromarray(np_matched_result, mode="RGB")
# Load mask as grayscale
mask_image = context.images.get_pil(self.mask.image_name, "L")
# Start with corrected image, paste base image where mask is white
result = corrected_image.copy()
if mask_image.size != result.size:
raise ValueError("Mask size must match base image size.")
else:
result.paste(base_image.convert("RGB"), mask=mask_image)
else:
matched_result = Image.fromarray(np_image, mode="RGB")
result = corrected_image
# Blur the mask out (into init image) by specified amount
if self.mask_blur_radius > 0:
nm = numpy.asarray(pil_init_mask, dtype=numpy.uint8)
inverted_nm = 255 - nm
dilation_size = int(round(self.mask_blur_radius) + 20)
dilating_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (dilation_size, dilation_size))
inverted_dilated_nm = cv2.dilate(inverted_nm, dilating_kernel)
dilated_nm = 255 - inverted_dilated_nm
nmd = cv2.erode(
dilated_nm,
kernel=numpy.ones((3, 3), dtype=numpy.uint8),
iterations=int(self.mask_blur_radius / 2),
)
pmd = Image.fromarray(nmd, mode="L")
blurred_init_mask = pmd.filter(ImageFilter.BoxBlur(self.mask_blur_radius))
else:
blurred_init_mask = pil_init_mask
multiplied_blurred_init_mask = ImageChops.multiply(blurred_init_mask, result.split()[-1])
# Paste original on color-corrected generation (using blurred mask)
matched_result.paste(init_image, (0, 0), mask=multiplied_blurred_init_mask)
image_dto = context.images.save(image=matched_result)
# Convert to RGBA and restore original alpha
result = result.convert("RGBA")
result.putalpha(original_alpha)
# Save and return
image_dto = context.images.save(image=result)
return ImageOutput.build(image_dto)
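
For reference, a standalone version of the per-channel matching the rewritten node performs, mirroring _match_histogram_channel above minus the InvokeAI plumbing; the toy arrays at the end only demonstrate that the matched mean moves toward the reference:

import numpy as np


def match_histogram_channel(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap `source` intensities so their CDF matches the reference channel's CDF."""
    source_hist, _ = np.histogram(source.flatten(), bins=256, range=(0, 256))
    reference_hist, _ = np.histogram(reference.flatten(), bins=256, range=(0, 256))
    source_cdf = source_hist.cumsum().astype(np.float64)
    reference_cdf = reference_hist.cumsum().astype(np.float64)
    if source_cdf[-1] > 0:
        source_cdf /= source_cdf[-1]
    if reference_cdf[-1] > 0:
        reference_cdf /= reference_cdf[-1]
    lookup_table = np.interp(source_cdf, reference_cdf, np.arange(256))
    return lookup_table[source].astype(np.uint8)


rng = np.random.default_rng(0)
dark = rng.integers(0, 128, size=(64, 64), dtype=np.uint8)
bright = rng.integers(128, 256, size=(64, 64), dtype=np.uint8)
matched = match_histogram_channel(dark, bright)
print(dark.mean(), matched.mean())  # the matched mean shifts toward the brighter reference
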
@@ -1347,3 +1349,96 @@ class PasteImageIntoBoundingBoxInvocation(BaseInvocation, WithMetadata, WithBoar
image_dto = context.images.save(image=target_image)
return ImageOutput.build(image_dto)
@invocation(
"flux_kontext_image_prep",
title="FLUX Kontext Image Prep",
tags=["image", "concatenate", "flux", "kontext"],
category="image",
version="1.0.0",
)
class FluxKontextConcatenateImagesInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Prepares an image or images for use with FLUX Kontext. The first/single image is resized to the nearest
preferred Kontext resolution. All other images are concatenated horizontally, maintaining their aspect ratio."""
images: list[ImageField] = InputField(
description="The images to concatenate",
min_length=1,
max_length=10,
)
use_preferred_resolution: bool = InputField(
default=True, description="Use FLUX preferred resolutions for the first image"
)
def invoke(self, context: InvocationContext) -> ImageOutput:
from invokeai.backend.flux.util import PREFERED_KONTEXT_RESOLUTIONS
# Step 1: Load all images
pil_images = []
for image_field in self.images:
image = context.images.get_pil(image_field.image_name, mode="RGBA")
pil_images.append(image)
# Step 2: Determine target resolution for the first image
first_image = pil_images[0]
width, height = first_image.size
if self.use_preferred_resolution:
aspect_ratio = width / height
# Find the closest preferred resolution for the first image
_, target_width, target_height = min(
((abs(aspect_ratio - w / h), w, h) for w, h in PREFERED_KONTEXT_RESOLUTIONS), key=lambda x: x[0]
)
# Apply BFL's scaling formula
scaled_height = 2 * int(target_height / 16)
final_height = 8 * scaled_height # This will be consistent for all images
scaled_width = 2 * int(target_width / 16)
first_width = 8 * scaled_width
else:
# Use original dimensions of first image, ensuring divisibility by 16
final_height = 16 * (height // 16)
first_width = 16 * (width // 16)
# Ensure minimum dimensions
if final_height < 16:
final_height = 16
if first_width < 16:
first_width = 16
# Step 3: Process and resize all images with consistent height
processed_images = []
total_width = 0
for i, image in enumerate(pil_images):
if i == 0:
# First image uses the calculated dimensions
final_width = first_width
else:
# Subsequent images maintain aspect ratio with the same height
img_aspect_ratio = image.width / image.height
# Calculate width that maintains aspect ratio at the target height
calculated_width = int(final_height * img_aspect_ratio)
# Ensure width is divisible by 16 for proper VAE encoding
final_width = 16 * (calculated_width // 16)
# Ensure minimum width
if final_width < 16:
final_width = 16
# Resize image to calculated dimensions
resized_image = image.resize((final_width, final_height), Image.Resampling.LANCZOS)
processed_images.append(resized_image)
total_width += final_width
# Step 4: Concatenate images horizontally
concatenated_image = Image.new("RGB", (total_width, final_height))
x_offset = 0
for img in processed_images:
concatenated_image.paste(img, (x_offset, 0))
x_offset += img.width
# Save the concatenated image
image_dto = context.images.save(image=concatenated_image)
return ImageOutput.build(image_dto)

View File
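
A standalone sketch of the sizing logic in the new Kontext prep node: pick the preferred resolution whose aspect ratio is closest to the first image, then snap both dimensions down to multiples of 16 (2 * int(x / 16) followed by * 8 is equivalent to flooring to a multiple of 16). The candidate list below is a small illustrative stand-in; the node reads the real values from PREFERED_KONTEXT_RESOLUTIONS.

CANDIDATE_RESOLUTIONS = [(672, 1568), (880, 1184), (1024, 1024), (1184, 880), (1568, 672)]  # (w, h), illustrative


def pick_kontext_size(width: int, height: int) -> tuple[int, int]:
    aspect_ratio = width / height
    _, target_w, target_h = min(
        ((abs(aspect_ratio - w / h), w, h) for w, h in CANDIDATE_RESOLUTIONS), key=lambda x: x[0]
    )
    final_w = 8 * (2 * int(target_w / 16))  # same as 16 * (target_w // 16)
    final_h = 8 * (2 * int(target_h / 16))
    return final_w, final_h


# A 3000x2000 source (3:2) maps to the closest candidate ratio and is snapped to /16:
print(pick_kontext_size(3000, 2000))  # (1184, 880) with the list above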

@@ -1,5 +1,6 @@
from contextlib import nullcontext
from functools import singledispatchmethod
from typing import Literal
import einops
import torch
@@ -20,13 +21,22 @@ from invokeai.app.invocations.fields import (
Input,
InputField,
)
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.model import BaseModelType, VAEField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import LoadedModel
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
from invokeai.backend.stable_diffusion.vae_tiling import patch_vae_tiling_params
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.vae_working_memory import estimate_vae_working_memory_sd15_sdxl
"""
SDXL VAE color compensation values determined experimentally to reduce color drift.
If more reliable values are found in the future (e.g. individual color channels), they can be updated.
SD1.5, TAESD, TAESDXL VAEs distort in less predictable ways, so no compensation is offered at this time.
"""
COMPENSATION_OPTIONS = Literal["None", "SDXL"]
COLOR_COMPENSATION_MAP = {"None": [1, 0], "SDXL": [1.015, -0.002]}
@invocation(
@@ -34,7 +44,7 @@ from invokeai.backend.util.devices import TorchDevice
title="Image to Latents - SD1.5, SDXL",
tags=["latents", "image", "vae", "i2l"],
category="latents",
version="1.1.1",
version="1.2.0",
)
class ImageToLatentsInvocation(BaseInvocation):
"""Encodes an image into latents."""
@@ -51,13 +61,30 @@ class ImageToLatentsInvocation(BaseInvocation):
# offer a way to directly set None values.
tile_size: int = InputField(default=0, multiple_of=8, description=FieldDescriptions.vae_tile_size)
fp32: bool = InputField(default=False, description=FieldDescriptions.fp32)
color_compensation: COMPENSATION_OPTIONS = InputField(
default="None",
description="Apply VAE scaling compensation when encoding images (reduces color drift).",
)
@staticmethod
@classmethod
def vae_encode(
vae_info: LoadedModel, upcast: bool, tiled: bool, image_tensor: torch.Tensor, tile_size: int = 0
cls,
vae_info: LoadedModel,
upcast: bool,
tiled: bool,
image_tensor: torch.Tensor,
tile_size: int = 0,
) -> torch.Tensor:
with vae_info as vae:
assert isinstance(vae, (AutoencoderKL, AutoencoderTiny))
assert isinstance(vae_info.model, (AutoencoderKL, AutoencoderTiny)), "VAE must be of type SD-1.5 or SDXL"
estimated_working_memory = estimate_vae_working_memory_sd15_sdxl(
operation="encode",
image_tensor=image_tensor,
vae=vae_info.model,
tile_size=tile_size if tiled else None,
fp32=upcast,
)
with vae_info.model_on_device(working_mem_bytes=estimated_working_memory) as (_, vae):
assert isinstance(vae, (AutoencoderKL, AutoencoderTiny)), "VAE must be of type SD-1.5 or SDXL"
orig_dtype = vae.dtype
if upcast:
vae.to(dtype=torch.float32)
@@ -113,14 +140,24 @@ class ImageToLatentsInvocation(BaseInvocation):
image = context.images.get_pil(self.image.image_name)
vae_info = context.models.load(self.vae.vae)
assert isinstance(vae_info.model, (AutoencoderKL, AutoencoderTiny)), "VAE must be of type SD-1.5 or SDXL"
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if self.color_compensation != "None" and vae_info.config.base == BaseModelType.StableDiffusionXL:
scale, bias = COLOR_COMPENSATION_MAP[self.color_compensation]
image_tensor = image_tensor * scale + bias
if image_tensor.dim() == 3:
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
context.util.signal_progress("Running VAE encoder")
latents = self.vae_encode(
vae_info=vae_info, upcast=self.fp32, tiled=self.tiled, image_tensor=image_tensor, tile_size=self.tile_size
vae_info=vae_info,
upcast=self.fp32,
tiled=self.tiled or context.config.get().force_tiled_decode,
image_tensor=image_tensor,
tile_size=self.tile_size,
)
latents = latents.to("cpu")

View File
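
A standalone sketch of the new color_compensation option: before an SDXL VAE encode, the image tensor gets a small linear correction (scale and bias taken from COLOR_COMPENSATION_MAP above) intended to offset the color drift of an encode/decode round trip.

import torch

COLOR_COMPENSATION_MAP = {"None": [1, 0], "SDXL": [1.015, -0.002]}


def apply_color_compensation(image_tensor: torch.Tensor, mode: str = "SDXL") -> torch.Tensor:
    scale, bias = COLOR_COMPENSATION_MAP[mode]
    return image_tensor * scale + bias


# e.g. a channel value of 0.5 becomes 0.5 * 1.015 - 0.002 = 0.5055
print(apply_color_compensation(torch.tensor([0.5])))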

@@ -5,16 +5,16 @@ from pydantic import BaseModel, Field, field_validator, model_validator
from typing_extensions import Self
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField, TensorField, UIType
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField, TensorField
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import (
AnyModelConfig,
IPAdapterCheckpointConfig,
IPAdapterInvokeAIConfig,
from invokeai.backend.model_manager.configs.factory import AnyModelConfig
from invokeai.backend.model_manager.configs.ip_adapter import (
IPAdapter_Checkpoint_Config_Base,
IPAdapter_InvokeAI_Config_Base,
)
from invokeai.backend.model_manager.starter_models import (
StarterModel,
@@ -85,7 +85,8 @@ class IPAdapterInvocation(BaseInvocation):
description="The IP-Adapter model.",
title="IP-Adapter Model",
ui_order=-1,
ui_type=UIType.IPAdapterModel,
ui_model_base=[BaseModelType.StableDiffusion1, BaseModelType.StableDiffusionXL],
ui_model_type=ModelType.IPAdapter,
)
clip_vision_model: Literal["ViT-H", "ViT-G", "ViT-L"] = InputField(
description="CLIP Vision model to use. Overrides model settings. Mandatory for checkpoint models.",
@@ -122,9 +123,9 @@ class IPAdapterInvocation(BaseInvocation):
def invoke(self, context: InvocationContext) -> IPAdapterOutput:
# Lookup the CLIP Vision encoder that is intended to be used with the IP-Adapter model.
ip_adapter_info = context.models.get_config(self.ip_adapter_model.key)
assert isinstance(ip_adapter_info, (IPAdapterInvokeAIConfig, IPAdapterCheckpointConfig))
assert isinstance(ip_adapter_info, (IPAdapter_InvokeAI_Config_Base, IPAdapter_Checkpoint_Config_Base))
if isinstance(ip_adapter_info, IPAdapterInvokeAIConfig):
if isinstance(ip_adapter_info, IPAdapter_InvokeAI_Config_Base):
image_encoder_model_id = ip_adapter_info.image_encoder_model_id
image_encoder_model_name = image_encoder_model_id.split("/")[-1].strip()
else:

View File

@@ -2,12 +2,6 @@ from contextlib import nullcontext
import torch
from diffusers.image_processor import VaeImageProcessor
from diffusers.models.attention_processor import (
AttnProcessor2_0,
LoRAAttnProcessor2_0,
LoRAXFormersAttnProcessor,
XFormersAttnProcessor,
)
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
@@ -27,6 +21,7 @@ from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion.extensions.seamless import SeamlessExt
from invokeai.backend.stable_diffusion.vae_tiling import patch_vae_tiling_params
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.vae_working_memory import estimate_vae_working_memory_sd15_sdxl
@invocation(
@@ -53,39 +48,6 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
tile_size: int = InputField(default=0, multiple_of=8, description=FieldDescriptions.vae_tile_size)
fp32: bool = InputField(default=False, description=FieldDescriptions.fp32)
def _estimate_working_memory(
self, latents: torch.Tensor, use_tiling: bool, vae: AutoencoderKL | AutoencoderTiny
) -> int:
"""Estimate the working memory required by the invocation in bytes."""
# It was found experimentally that the peak working memory scales linearly with the number of pixels and the
# element size (precision). This estimate is accurate for both SD1 and SDXL.
element_size = 4 if self.fp32 else 2
scaling_constant = 2200 # Determined experimentally.
if use_tiling:
tile_size = self.tile_size
if tile_size == 0:
tile_size = vae.tile_sample_min_size
assert isinstance(tile_size, int)
out_h = tile_size
out_w = tile_size
working_memory = out_h * out_w * element_size * scaling_constant
# We add 25% to the working memory estimate when tiling is enabled to account for factors like tile overlap
# and number of tiles. We could make this more precise in the future, but this should be good enough for
# most use cases.
working_memory = working_memory * 1.25
else:
out_h = LATENT_SCALE_FACTOR * latents.shape[-2]
out_w = LATENT_SCALE_FACTOR * latents.shape[-1]
working_memory = out_h * out_w * element_size * scaling_constant
if self.fp32:
# If we are running in FP32, then we should account for the likely increase in model size (~250MB).
working_memory += 250 * 2**20
return int(working_memory)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.tensors.load(self.latents.latents_name)
@@ -94,8 +56,13 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
vae_info = context.models.load(self.vae.vae)
assert isinstance(vae_info.model, (AutoencoderKL, AutoencoderTiny))
estimated_working_memory = self._estimate_working_memory(latents, use_tiling, vae_info.model)
estimated_working_memory = estimate_vae_working_memory_sd15_sdxl(
operation="decode",
image_tensor=latents,
vae=vae_info.model,
tile_size=self.tile_size if use_tiling else None,
fp32=self.fp32,
)
with (
SeamlessExt.static_patch_model(vae_info.model, self.vae.seamless_axes),
vae_info.model_on_device(working_mem_bytes=estimated_working_memory) as (_, vae),
@@ -104,26 +71,9 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
assert isinstance(vae, (AutoencoderKL, AutoencoderTiny))
latents = latents.to(TorchDevice.choose_torch_device())
if self.fp32:
# FP32 mode: convert everything to float32 for maximum precision
vae.to(dtype=torch.float32)
use_torch_2_0_or_xformers = hasattr(vae.decoder, "mid_block") and isinstance(
vae.decoder.mid_block.attentions[0].processor,
(
AttnProcessor2_0,
XFormersAttnProcessor,
LoRAXFormersAttnProcessor,
LoRAAttnProcessor2_0,
),
)
# if xformers or torch_2_0 is used attention block does not need
# to be in float32 which can save lots of memory
if use_torch_2_0_or_xformers:
vae.post_quant_conv.to(latents.dtype)
vae.decoder.conv_in.to(latents.dtype)
vae.decoder.mid_block.to(latents.dtype)
else:
latents = latents.float()
latents = latents.float()
else:
vae.to(dtype=torch.float16)
latents = latents.half()

View File
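
As with the FLUX node, the SD1.5/SDXL decode drops its node-local estimate in favor of estimate_vae_working_memory_sd15_sdxl. A standalone reconstruction of what that helper presumably computes, based on the removed _estimate_working_memory above; the constants (2200, the 25% tiling margin, the ~250 MB fp32 allowance) come from that removed code, while the function name and defaults are illustrative:

def estimate_sd_vae_decode_memory(
    latent_h: int,
    latent_w: int,
    fp32: bool = False,
    tile_size: int | None = None,
    latent_scale_factor: int = 8,
) -> int:
    element_size = 4 if fp32 else 2
    scaling_constant = 2200  # determined experimentally, per the removed comment
    if tile_size is not None:
        # Tiled decode: peak memory is governed by one tile, plus 25% for overlap/tile count.
        working_memory = tile_size * tile_size * element_size * scaling_constant * 1.25
    else:
        out_h = latent_scale_factor * latent_h
        out_w = latent_scale_factor * latent_w
        working_memory = out_h * out_w * element_size * scaling_constant
    if fp32:
        working_memory += 250 * 2**20  # allowance for the upcast model weights
    return int(working_memory)


# 1024x1024 SDXL output in fp16: roughly 4.6 GB; the same decode tiled at 512 px: roughly 1.4 GB.
print(estimate_sd_vae_decode_memory(128, 128), estimate_sd_vae_decode_memory(128, 128, tile_size=512))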

@@ -6,11 +6,12 @@ from pydantic import field_validator
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration, LlavaOnevisionProcessor
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, InputField, UIComponent, UIType
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, InputField, UIComponent
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import StringOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.llava_onevision_pipeline import LlavaOnevisionPipeline
from invokeai.backend.model_manager.taxonomy import ModelType
from invokeai.backend.util.devices import TorchDevice
@@ -34,7 +35,7 @@ class LlavaOnevisionVllmInvocation(BaseInvocation):
vllm_model: ModelIdentifierField = InputField(
title="LLaVA Model Type",
description=FieldDescriptions.vllm_model,
ui_type=UIType.LlavaOnevisionModel,
ui_model_type=ModelType.LlavaOnevision,
)
@field_validator("images", mode="before")

View File

@@ -150,6 +150,10 @@ GENERATION_MODES = Literal[
"flux_img2img",
"flux_inpaint",
"flux_outpaint",
"flux2_txt2img",
"flux2_img2img",
"flux2_inpaint",
"flux2_outpaint",
"sd3_txt2img",
"sd3_img2img",
"sd3_inpaint",
@@ -158,6 +162,10 @@ GENERATION_MODES = Literal[
"cogview4_img2img",
"cogview4_inpaint",
"cogview4_outpaint",
"z_image_txt2img",
"z_image_img2img",
"z_image_inpaint",
"z_image_outpaint",
]
@@ -166,7 +174,7 @@ GENERATION_MODES = Literal[
title="Core Metadata",
tags=["metadata"],
category="metadata",
version="2.0.0",
version="2.1.0",
classification=Classification.Internal,
)
class CoreMetadataInvocation(BaseInvocation):
@@ -217,6 +225,10 @@ class CoreMetadataInvocation(BaseInvocation):
default=None,
description="The VAE used for decoding, if the main model's default was not used",
)
qwen3_encoder: Optional[ModelIdentifierField] = InputField(
default=None,
description="The Qwen3 text encoder model used for Z-Image inference",
)
# High resolution fix metadata.
hrf_enabled: Optional[bool] = InputField(

View File

@@ -52,8 +52,9 @@ from invokeai.app.invocations.primitives import (
)
from invokeai.app.invocations.scheduler import SchedulerOutput
from invokeai.app.invocations.t2i_adapter import T2IAdapterField, T2IAdapterInvocation
from invokeai.app.invocations.z_image_denoise import ZImageDenoiseInvocation
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.taxonomy import ModelType, SubModelType
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType, SubModelType
from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
from invokeai.version import __version__
@@ -473,7 +474,6 @@ class MetadataToModelOutput(BaseInvocationOutput):
model: ModelIdentifierField = OutputField(
description=FieldDescriptions.main_model,
title="Model",
ui_type=UIType.MainModel,
)
name: str = OutputField(description="Model Name", title="Name")
unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet")
@@ -488,7 +488,6 @@ class MetadataToSDXLModelOutput(BaseInvocationOutput):
model: ModelIdentifierField = OutputField(
description=FieldDescriptions.main_model,
title="Model",
ui_type=UIType.SDXLMainModel,
)
name: str = OutputField(description="Model Name", title="Name")
unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet")
@@ -519,8 +518,7 @@ class MetadataToModelInvocation(BaseInvocation, WithMetadata):
input=Input.Direct,
)
default_value: ModelIdentifierField = InputField(
description="The default model to use if not found in the metadata",
ui_type=UIType.MainModel,
description="The default model to use if not found in the metadata", ui_model_type=ModelType.Main
)
_validate_custom_label = model_validator(mode="after")(validate_custom_label)
@@ -575,7 +573,8 @@ class MetadataToSDXLModelInvocation(BaseInvocation, WithMetadata):
)
default_value: ModelIdentifierField = InputField(
description="The default SDXL Model to use if not found in the metadata",
ui_type=UIType.SDXLMainModel,
ui_model_type=ModelType.Main,
ui_model_base=BaseModelType.StableDiffusionXL,
)
_validate_custom_label = model_validator(mode="after")(validate_custom_label)
@@ -731,6 +730,52 @@ class FluxDenoiseLatentsMetaInvocation(FluxDenoiseInvocation, WithMetadata):
return LatentsMetaOutput(**params, metadata=MetadataField.model_validate(md))
@invocation(
"z_image_denoise_meta",
title=f"{ZImageDenoiseInvocation.UIConfig.title} + Metadata",
tags=["z-image", "latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"],
category="latents",
version="1.0.0",
)
class ZImageDenoiseMetaInvocation(ZImageDenoiseInvocation, WithMetadata):
"""Run denoising process with a Z-Image transformer model + metadata."""
def invoke(self, context: InvocationContext) -> LatentsMetaOutput:
def _loras_to_json(obj: Union[Any, list[Any]]):
if not isinstance(obj, list):
obj = [obj]
output: list[dict[str, Any]] = []
for item in obj:
output.append(
LoRAMetadataField(
model=item.lora,
weight=item.weight,
).model_dump(exclude_none=True, exclude={"id", "type", "is_intermediate", "use_cache"})
)
return output
obj = super().invoke(context)
md: Dict[str, Any] = {} if self.metadata is None else self.metadata.root
md.update({"width": obj.width})
md.update({"height": obj.height})
md.update({"steps": self.steps})
md.update({"guidance": self.guidance_scale})
md.update({"denoising_start": self.denoising_start})
md.update({"denoising_end": self.denoising_end})
md.update({"scheduler": self.scheduler})
md.update({"model": self.transformer.transformer})
md.update({"seed": self.seed})
if len(self.transformer.loras) > 0:
md.update({"loras": _loras_to_json(self.transformer.loras)})
params = obj.__dict__.copy()
del params["type"]
return LatentsMetaOutput(**params, metadata=MetadataField.model_validate(md))
@invocation(
"metadata_to_vae",
title="Metadata To VAE",

View File

@@ -9,12 +9,10 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, Input, InputField, OutputField, UIType
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, Input, InputField, OutputField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.shared.models import FreeUConfig
from invokeai.backend.model_manager.config import (
AnyModelConfig,
)
from invokeai.backend.model_manager.configs.factory import AnyModelConfig
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType, SubModelType
@@ -24,8 +22,9 @@ class ModelIdentifierField(BaseModel):
name: str = Field(description="The model's name")
base: BaseModelType = Field(description="The model's base model type")
type: ModelType = Field(description="The model's type")
submodel_type: Optional[SubModelType] = Field(
description="The submodel to load, if this is a main model", default=None
submodel_type: SubModelType | None = Field(
description="The submodel to load, if this is a main model",
default=None,
)
@classmethod
@@ -73,6 +72,14 @@ class GlmEncoderField(BaseModel):
text_encoder: ModelIdentifierField = Field(description="Info to load text_encoder submodel")
class Qwen3EncoderField(BaseModel):
"""Field for Qwen3 text encoder used by Z-Image models."""
tokenizer: ModelIdentifierField = Field(description="Info to load tokenizer submodel")
text_encoder: ModelIdentifierField = Field(description="Info to load text_encoder submodel")
loras: List[LoRAField] = Field(default_factory=list, description="LoRAs to apply on model loading")
class VAEField(BaseModel):
vae: ModelIdentifierField = Field(description="Info to load vae submodel")
seamless_axes: List[str] = Field(default_factory=list, description='Axes("x" and "y") to which apply seamless')
@@ -145,7 +152,7 @@ class ModelIdentifierInvocation(BaseInvocation):
@invocation(
"main_model_loader",
title="Main Model - SD1.5",
title="Main Model - SD1.5, SD2",
tags=["model"],
category="model",
version="1.0.4",
@@ -153,7 +160,11 @@ class ModelIdentifierInvocation(BaseInvocation):
class MainModelLoaderInvocation(BaseInvocation):
"""Loads a main model, outputting its submodels."""
model: ModelIdentifierField = InputField(description=FieldDescriptions.main_model, ui_type=UIType.MainModel)
model: ModelIdentifierField = InputField(
description=FieldDescriptions.main_model,
ui_model_base=[BaseModelType.StableDiffusion1, BaseModelType.StableDiffusion2],
ui_model_type=ModelType.Main,
)
# TODO: precision?
def invoke(self, context: InvocationContext) -> ModelLoaderOutput:
@@ -187,7 +198,10 @@ class LoRALoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
lora: ModelIdentifierField = InputField(
description=FieldDescriptions.lora_model, title="LoRA", ui_type=UIType.LoRAModel
description=FieldDescriptions.lora_model,
title="LoRA",
ui_model_base=BaseModelType.StableDiffusion1,
ui_model_type=ModelType.LoRA,
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
unet: Optional[UNetField] = InputField(
@@ -250,7 +264,9 @@ class LoRASelectorInvocation(BaseInvocation):
"""Selects a LoRA model and weight."""
lora: ModelIdentifierField = InputField(
description=FieldDescriptions.lora_model, title="LoRA", ui_type=UIType.LoRAModel
description=FieldDescriptions.lora_model,
title="LoRA",
ui_model_type=ModelType.LoRA,
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
@@ -332,7 +348,10 @@ class SDXLLoRALoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
lora: ModelIdentifierField = InputField(
description=FieldDescriptions.lora_model, title="LoRA", ui_type=UIType.LoRAModel
description=FieldDescriptions.lora_model,
title="LoRA",
ui_model_base=BaseModelType.StableDiffusionXL,
ui_model_type=ModelType.LoRA,
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
unet: Optional[UNetField] = InputField(
@@ -473,13 +492,27 @@ class SDXLLoRACollectionLoader(BaseInvocation):
@invocation(
"vae_loader", title="VAE Model - SD1.5, SDXL, SD3, FLUX", tags=["vae", "model"], category="model", version="1.0.4"
"vae_loader",
title="VAE Model - SD1.5, SD2, SDXL, SD3, FLUX",
tags=["vae", "model"],
category="model",
version="1.0.4",
)
class VAELoaderInvocation(BaseInvocation):
"""Loads a VAE model, outputting a VaeLoaderOutput"""
vae_model: ModelIdentifierField = InputField(
description=FieldDescriptions.vae_model, title="VAE", ui_type=UIType.VAEModel
description=FieldDescriptions.vae_model,
title="VAE",
ui_model_base=[
BaseModelType.StableDiffusion1,
BaseModelType.StableDiffusion2,
BaseModelType.StableDiffusionXL,
BaseModelType.StableDiffusion3,
BaseModelType.Flux,
BaseModelType.Flux2,
],
ui_model_type=ModelType.VAE,
)
def invoke(self, context: InvocationContext) -> VAEOutput:

View File
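
A sketch (not from the diff) of how the new Qwen3EncoderField might be populated by a Z-Image loader: the tokenizer and text_encoder entries are the same model identifier narrowed to a submodel, mirroring how GlmEncoderField is used for CogView4. The specific submodel types chosen here are an assumption for illustration.

from invokeai.app.invocations.model import ModelIdentifierField, Qwen3EncoderField
from invokeai.backend.model_manager.taxonomy import SubModelType


def build_qwen3_encoder_field(qwen3_model: ModelIdentifierField) -> Qwen3EncoderField:
    return Qwen3EncoderField(
        tokenizer=qwen3_model.model_copy(update={"submodel_type": SubModelType.Tokenizer}),
        text_encoder=qwen3_model.model_copy(update={"submodel_type": SubModelType.TextEncoder}),
        # loras defaults to an empty list; LoRAs are applied at load time when present.
    )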

@@ -0,0 +1,59 @@
import pathlib
from typing import Literal
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import ImageField, InputField, OutputField, WithBoard, WithMetadata
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.image_util.pbr_maps.architecture.pbr_rrdb_net import PBR_RRDB_Net
from invokeai.backend.image_util.pbr_maps.pbr_maps import NORMAL_MAP_MODEL, OTHER_MAP_MODEL, PBRMapsGenerator
from invokeai.backend.util.devices import TorchDevice
@invocation_output("pbr_maps-output")
class PBRMapsOutput(BaseInvocationOutput):
normal_map: ImageField = OutputField(default=None, description="The generated normal map")
roughness_map: ImageField = OutputField(default=None, description="The generated roughness map")
displacement_map: ImageField = OutputField(default=None, description="The generated displacement map")
@invocation("pbr_maps", title="PBR Maps", tags=["image", "material"], category="image", version="1.0.0")
class PBRMapsInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generate Normal, Displacement and Roughness Map from a given image"""
image: ImageField = InputField(description="Input image")
tile_size: int = InputField(default=512, description="Tile size")
border_mode: Literal["none", "seamless", "mirror", "replicate"] = InputField(
default="none", description="Border mode to apply to eliminate any artifacts or seams"
)
def invoke(self, context: InvocationContext) -> PBRMapsOutput:
image_pil = context.images.get_pil(self.image.image_name, mode="RGB")
def loader(model_path: pathlib.Path):
return PBRMapsGenerator.load_model(model_path, TorchDevice.choose_torch_device())
torch_device = TorchDevice.choose_torch_device()
with (
context.models.load_remote_model(NORMAL_MAP_MODEL, loader) as normal_map_model,
context.models.load_remote_model(OTHER_MAP_MODEL, loader) as other_map_model,
):
assert isinstance(normal_map_model, PBR_RRDB_Net)
assert isinstance(other_map_model, PBR_RRDB_Net)
pbr_pipeline = PBRMapsGenerator(normal_map_model, other_map_model, torch_device)
normal_map, roughness_map, displacement_map = pbr_pipeline.generate_maps(
image_pil, self.tile_size, self.border_mode
)
normal_map = context.images.save(normal_map)
normal_map_field = ImageField(image_name=normal_map.image_name)
roughness_map = context.images.save(roughness_map)
roughness_map_field = ImageField(image_name=roughness_map.image_name)
displacement_map = context.images.save(displacement_map)
displacement_map_field = ImageField(image_name=displacement_map.image_name)
return PBRMapsOutput(
normal_map=normal_map_field, roughness_map=roughness_map_field, displacement_map=displacement_map_field
)

View File

@@ -27,6 +27,7 @@ from invokeai.app.invocations.fields import (
SD3ConditioningField,
TensorField,
UIComponent,
ZImageConditioningField,
)
from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.shared.invocation_context import InvocationContext
@@ -461,6 +462,17 @@ class CogView4ConditioningOutput(BaseInvocationOutput):
return cls(conditioning=CogView4ConditioningField(conditioning_name=conditioning_name))
@invocation_output("z_image_conditioning_output")
class ZImageConditioningOutput(BaseInvocationOutput):
"""Base class for nodes that output a Z-Image text conditioning tensor."""
conditioning: ZImageConditioningField = OutputField(description=FieldDescriptions.cond)
@classmethod
def build(cls, conditioning_name: str) -> "ZImageConditioningOutput":
return cls(conditioning=ZImageConditioningField(conditioning_name=conditioning_name))
@invocation_output("conditioning_output")
class ConditioningOutput(BaseInvocationOutput):
"""Base class for nodes that output a single conditioning tensor"""

View File

@@ -0,0 +1,57 @@
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import InputField, OutputField, StylePresetField, UIComponent
from invokeai.app.services.shared.invocation_context import InvocationContext
@invocation_output("prompt_template_output")
class PromptTemplateOutput(BaseInvocationOutput):
"""Output for the Prompt Template node"""
positive_prompt: str = OutputField(description="The positive prompt with the template applied")
negative_prompt: str = OutputField(description="The negative prompt with the template applied")
@invocation(
"prompt_template",
title="Prompt Template",
tags=["prompt", "template", "style", "preset"],
category="prompt",
version="1.0.0",
)
class PromptTemplateInvocation(BaseInvocation):
"""Applies a Style Preset template to positive and negative prompts.
Select a Style Preset and provide positive/negative prompts. The node replaces
{prompt} placeholders in the template with your input prompts.
"""
style_preset: StylePresetField = InputField(
description="The Style Preset to use as a template",
)
positive_prompt: str = InputField(
default="",
description="The positive prompt to insert into the template's {prompt} placeholder",
ui_component=UIComponent.Textarea,
)
negative_prompt: str = InputField(
default="",
description="The negative prompt to insert into the template's {prompt} placeholder",
ui_component=UIComponent.Textarea,
)
def invoke(self, context: InvocationContext) -> PromptTemplateOutput:
# Fetch the style preset from the database
style_preset = context._services.style_preset_records.get(self.style_preset.style_preset_id)
# Get the template prompts
positive_template = style_preset.preset_data.positive_prompt
negative_template = style_preset.preset_data.negative_prompt
# Replace {prompt} placeholder with the input prompts
rendered_positive = positive_template.replace("{prompt}", self.positive_prompt)
rendered_negative = negative_template.replace("{prompt}", self.negative_prompt)
return PromptTemplateOutput(
positive_prompt=rendered_positive,
negative_prompt=rendered_negative,
)

View File
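
A minimal standalone illustration of the substitution the Prompt Template node performs: every {prompt} placeholder in the preset's positive and negative templates is replaced with the corresponding input prompt via a plain string replace. The preset text below is made up.

positive_template = "cinematic photo of {prompt}, 35mm, film grain"
negative_template = "blurry, low quality, {prompt}"

positive_prompt = "a red fox in the snow"
negative_prompt = "cartoon"

print(positive_template.replace("{prompt}", positive_prompt))
# cinematic photo of a red fox in the snow, 35mm, film grain
print(negative_template.replace("{prompt}", negative_prompt))
# blurry, low quality, cartoon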

@@ -23,7 +23,7 @@ from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.invocations.sd3_text_encoder import SD3_T5_MAX_SEQ_LEN
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.sampling_utils import clip_timestep_schedule_fractional
from invokeai.backend.model_manager import BaseModelType
from invokeai.backend.model_manager.taxonomy import BaseModelType
from invokeai.backend.rectified_flow.rectified_flow_inpaint_extension import RectifiedFlowInpaintExtension
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import SD3ConditioningInfo

View File

@@ -17,6 +17,7 @@ from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.vae_working_memory import estimate_vae_working_memory_sd3
@invocation(
@@ -34,7 +35,11 @@ class SD3ImageToLatentsInvocation(BaseInvocation, WithMetadata, WithBoard):
@staticmethod
def vae_encode(vae_info: LoadedModel, image_tensor: torch.Tensor) -> torch.Tensor:
with vae_info as vae:
assert isinstance(vae_info.model, AutoencoderKL)
estimated_working_memory = estimate_vae_working_memory_sd3(
operation="encode", image_tensor=image_tensor, vae=vae_info.model
)
with vae_info.model_on_device(working_mem_bytes=estimated_working_memory) as (_, vae):
assert isinstance(vae, AutoencoderKL)
vae.disable_tiling()
@@ -58,6 +63,8 @@ class SD3ImageToLatentsInvocation(BaseInvocation, WithMetadata, WithBoard):
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
vae_info = context.models.load(self.vae.vae)
assert isinstance(vae_info.model, AutoencoderKL)
latents = self.vae_encode(vae_info=vae_info, image_tensor=image_tensor)
latents = latents.to("cpu")

View File

@@ -6,7 +6,6 @@ from einops import rearrange
from PIL import Image
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
@@ -20,6 +19,7 @@ from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion.extensions.seamless import SeamlessExt
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.vae_working_memory import estimate_vae_working_memory_sd3
@invocation(
@@ -41,22 +41,15 @@ class SD3LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
input=Input.Connection,
)
def _estimate_working_memory(self, latents: torch.Tensor, vae: AutoencoderKL) -> int:
"""Estimate the working memory required by the invocation in bytes."""
out_h = LATENT_SCALE_FACTOR * latents.shape[-2]
out_w = LATENT_SCALE_FACTOR * latents.shape[-1]
element_size = next(vae.parameters()).element_size()
scaling_constant = 2200 # Determined experimentally.
working_memory = out_h * out_w * element_size * scaling_constant
return int(working_memory)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.tensors.load(self.latents.latents_name)
vae_info = context.models.load(self.vae.vae)
assert isinstance(vae_info.model, (AutoencoderKL))
estimated_working_memory = self._estimate_working_memory(latents, vae_info.model)
estimated_working_memory = estimate_vae_working_memory_sd3(
operation="decode", image_tensor=latents, vae=vae_info.model
)
with (
SeamlessExt.static_patch_model(vae_info.model, self.vae.seamless_axes),
vae_info.model_on_device(working_mem_bytes=estimated_working_memory) as (_, vae),

View File

@@ -6,14 +6,14 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField
from invokeai.app.invocations.model import CLIPField, ModelIdentifierField, T5EncoderField, TransformerField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.t5_model_identifier import (
preprocess_t5_encoder_model_identifier,
preprocess_t5_tokenizer_model_identifier,
)
from invokeai.backend.model_manager.taxonomy import SubModelType
from invokeai.backend.model_manager.taxonomy import BaseModelType, ClipVariantType, ModelType, SubModelType
@invocation_output("sd3_model_loader_output")
@@ -39,36 +39,43 @@ class Sd3ModelLoaderInvocation(BaseInvocation):
model: ModelIdentifierField = InputField(
description=FieldDescriptions.sd3_model,
ui_type=UIType.SD3MainModel,
input=Input.Direct,
ui_model_base=BaseModelType.StableDiffusion3,
ui_model_type=ModelType.Main,
)
t5_encoder_model: Optional[ModelIdentifierField] = InputField(
description=FieldDescriptions.t5_encoder,
ui_type=UIType.T5EncoderModel,
input=Input.Direct,
title="T5 Encoder",
default=None,
ui_model_type=ModelType.T5Encoder,
)
clip_l_model: Optional[ModelIdentifierField] = InputField(
description=FieldDescriptions.clip_embed_model,
ui_type=UIType.CLIPLEmbedModel,
input=Input.Direct,
title="CLIP L Encoder",
default=None,
ui_model_type=ModelType.CLIPEmbed,
ui_model_variant=ClipVariantType.L,
)
clip_g_model: Optional[ModelIdentifierField] = InputField(
description=FieldDescriptions.clip_g_model,
ui_type=UIType.CLIPGEmbedModel,
input=Input.Direct,
title="CLIP G Encoder",
default=None,
ui_model_type=ModelType.CLIPEmbed,
ui_model_variant=ClipVariantType.G,
)
vae_model: Optional[ModelIdentifierField] = InputField(
description=FieldDescriptions.vae_model, ui_type=UIType.VAEModel, title="VAE", default=None
description=FieldDescriptions.vae_model,
title="VAE",
default=None,
ui_model_base=BaseModelType.StableDiffusion3,
ui_model_type=ModelType.VAE,
)
def invoke(self, context: InvocationContext) -> Sd3ModelLoaderOutput:

View File

@@ -1,8 +1,8 @@
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField, UIType
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField
from invokeai.app.invocations.model import CLIPField, ModelIdentifierField, UNetField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.taxonomy import SubModelType
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType, SubModelType
@invocation_output("sdxl_model_loader_output")
@@ -29,7 +29,9 @@ class SDXLModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl base model, outputting its submodels."""
model: ModelIdentifierField = InputField(
description=FieldDescriptions.sdxl_main_model, ui_type=UIType.SDXLMainModel
description=FieldDescriptions.sdxl_main_model,
ui_model_base=BaseModelType.StableDiffusionXL,
ui_model_type=ModelType.Main,
)
# TODO: precision?
@@ -67,7 +69,9 @@ class SDXLRefinerModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl refiner model, outputting its submodels."""
model: ModelIdentifierField = InputField(
description=FieldDescriptions.sdxl_refiner_model, ui_type=UIType.SDXLRefinerModel
description=FieldDescriptions.sdxl_refiner_model,
ui_model_base=BaseModelType.StableDiffusionXLRefiner,
ui_model_type=ModelType.Main,
)
# TODO: precision?

View File

@@ -1,72 +1,75 @@
from enum import Enum
from itertools import zip_longest
from pathlib import Path
from typing import Literal
import numpy as np
import torch
from PIL import Image
from pydantic import BaseModel, Field
from transformers import AutoProcessor
from pydantic import BaseModel, Field, model_validator
from transformers.models.sam import SamModel
from transformers.models.sam.processing_sam import SamProcessor
from transformers.models.sam2 import Sam2Model
from transformers.models.sam2.processing_sam2 import Sam2Processor
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import BoundingBoxField, ImageField, InputField, TensorField
from invokeai.app.invocations.primitives import MaskOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.image_util.segment_anything.mask_refinement import mask_to_polygon, polygon_to_mask
from invokeai.backend.image_util.segment_anything.segment_anything_2_pipeline import SegmentAnything2Pipeline
from invokeai.backend.image_util.segment_anything.segment_anything_pipeline import SegmentAnythingPipeline
from invokeai.backend.image_util.segment_anything.shared import SAMInput, SAMPoint
SegmentAnythingModelKey = Literal["segment-anything-base", "segment-anything-large", "segment-anything-huge"]
SegmentAnythingModelKey = Literal[
"segment-anything-base",
"segment-anything-large",
"segment-anything-huge",
"segment-anything-2-tiny",
"segment-anything-2-small",
"segment-anything-2-base",
"segment-anything-2-large",
]
SEGMENT_ANYTHING_MODEL_IDS: dict[SegmentAnythingModelKey, str] = {
"segment-anything-base": "facebook/sam-vit-base",
"segment-anything-large": "facebook/sam-vit-large",
"segment-anything-huge": "facebook/sam-vit-huge",
"segment-anything-2-tiny": "facebook/sam2.1-hiera-tiny",
"segment-anything-2-small": "facebook/sam2.1-hiera-small",
"segment-anything-2-base": "facebook/sam2.1-hiera-base-plus",
"segment-anything-2-large": "facebook/sam2.1-hiera-large",
}
class SAMPointLabel(Enum):
negative = -1
neutral = 0
positive = 1
class SAMPoint(BaseModel):
x: int = Field(..., description="The x-coordinate of the point")
y: int = Field(..., description="The y-coordinate of the point")
label: SAMPointLabel = Field(..., description="The label of the point")
class SAMPointsField(BaseModel):
points: list[SAMPoint] = Field(..., description="The points of the object")
points: list[SAMPoint] = Field(..., description="The points of the object", min_length=1)
def to_list(self) -> list[list[int]]:
def to_list(self) -> list[list[float]]:
return [[point.x, point.y, point.label.value] for point in self.points]
@invocation(
"segment_anything",
title="Segment Anything",
tags=["prompt", "segmentation"],
tags=["prompt", "segmentation", "sam", "sam2"],
category="segmentation",
version="1.2.0",
version="1.3.0",
)
class SegmentAnythingInvocation(BaseInvocation):
"""Runs a Segment Anything Model."""
"""Runs a Segment Anything Model (SAM or SAM2)."""
# Reference:
# - https://arxiv.org/pdf/2304.02643
# - https://huggingface.co/docs/transformers/v4.43.3/en/model_doc/grounding-dino#grounded-sam
# - https://github.com/NielsRogge/Transformers-Tutorials/blob/a39f33ac1557b02ebfb191ea7753e332b5ca933f/Grounding%20DINO/GroundingDINO_with_Segment_Anything.ipynb
model: SegmentAnythingModelKey = InputField(description="The Segment Anything model to use.")
model: SegmentAnythingModelKey = InputField(description="The Segment Anything model to use (SAM or SAM2).")
image: ImageField = InputField(description="The image to segment.")
bounding_boxes: list[BoundingBoxField] | None = InputField(
default=None, description="The bounding boxes to prompt the SAM model with."
default=None, description="The bounding boxes to prompt the model with."
)
point_lists: list[SAMPointsField] | None = InputField(
default=None,
description="The list of point lists to prompt the SAM model with. Each list of points represents a single object.",
description="The list of point lists to prompt the model with. Each list of points represents a single object.",
)
apply_polygon_refinement: bool = InputField(
description="Whether to apply polygon refinement to the masks. This will smooth the edges of the masks slightly and ensure that each mask consists of a single closed polygon (before merging).",
@@ -77,14 +80,18 @@ class SegmentAnythingInvocation(BaseInvocation):
default="all",
)
@model_validator(mode="after")
def validate_points_and_boxes_len(self):
if self.point_lists is not None and self.bounding_boxes is not None:
if len(self.point_lists) != len(self.bounding_boxes):
raise ValueError("If both point_lists and bounding_boxes are provided, they must have the same length.")
return self
@torch.no_grad()
def invoke(self, context: InvocationContext) -> MaskOutput:
# The models expect a 3-channel RGB image.
image_pil = context.images.get_pil(self.image.image_name, mode="RGB")
if self.point_lists is not None and self.bounding_boxes is not None:
raise ValueError("Only one of point_lists or bounding_box can be provided.")
if (not self.bounding_boxes or len(self.bounding_boxes) == 0) and (
not self.point_lists or len(self.point_lists) == 0
):
@@ -111,26 +118,38 @@ class SegmentAnythingInvocation(BaseInvocation):
# model, and figure out how to make it work in the pipeline.
# torch_dtype=TorchDevice.choose_torch_dtype(),
)
sam_processor = AutoProcessor.from_pretrained(model_path, local_files_only=True)
assert isinstance(sam_processor, SamProcessor)
sam_processor = SamProcessor.from_pretrained(model_path, local_files_only=True)
return SegmentAnythingPipeline(sam_model=sam_model, sam_processor=sam_processor)
def _segment(self, context: InvocationContext, image: Image.Image) -> list[torch.Tensor]:
"""Use Segment Anything (SAM) to generate masks given an image + a set of bounding boxes."""
# Convert the bounding boxes to the SAM input format.
sam_bounding_boxes = (
[[bb.x_min, bb.y_min, bb.x_max, bb.y_max] for bb in self.bounding_boxes] if self.bounding_boxes else None
)
sam_points = [p.to_list() for p in self.point_lists] if self.point_lists else None
@staticmethod
def _load_sam_2_model(model_path: Path):
sam2_model = Sam2Model.from_pretrained(model_path, local_files_only=True)
sam2_processor = Sam2Processor.from_pretrained(model_path, local_files_only=True)
return SegmentAnything2Pipeline(sam2_model=sam2_model, sam2_processor=sam2_processor)
with (
context.models.load_remote_model(
source=SEGMENT_ANYTHING_MODEL_IDS[self.model], loader=SegmentAnythingInvocation._load_sam_model
) as sam_pipeline,
):
assert isinstance(sam_pipeline, SegmentAnythingPipeline)
masks = sam_pipeline.segment(image=image, bounding_boxes=sam_bounding_boxes, point_lists=sam_points)
def _segment(self, context: InvocationContext, image: Image.Image) -> list[torch.Tensor]:
"""Use Segment Anything (SAM or SAM2) to generate masks given an image + a set of bounding boxes."""
source = SEGMENT_ANYTHING_MODEL_IDS[self.model]
inputs: list[SAMInput] = []
for bbox_field, point_field in zip_longest(self.bounding_boxes or [], self.point_lists or [], fillvalue=None):
inputs.append(
SAMInput(
bounding_box=bbox_field,
points=point_field.points if point_field else None,
)
)
if "sam2" in source:
loader = SegmentAnythingInvocation._load_sam_2_model
with context.models.load_remote_model(source=source, loader=loader) as pipeline:
assert isinstance(pipeline, SegmentAnything2Pipeline)
masks = pipeline.segment(image=image, inputs=inputs)
else:
loader = SegmentAnythingInvocation._load_sam_model
with context.models.load_remote_model(source=source, loader=loader) as pipeline:
assert isinstance(pipeline, SegmentAnythingPipeline)
masks = pipeline.segment(image=image, inputs=inputs)
masks = self._process_masks(masks)
if self.apply_polygon_refinement:

View File
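
A standalone sketch of how the rewritten _segment pairs prompts per object: boxes and point lists are zipped positionally (they must be the same length when both are given, per the new validator), and either side may be missing when only one kind of prompt is supplied. The dataclass here is an illustrative stand-in for the real SAMInput in invokeai.backend.image_util.segment_anything.shared.

from dataclasses import dataclass
from itertools import zip_longest


@dataclass
class SAMInput:  # illustrative stand-in
    bounding_box: tuple[int, int, int, int] | None
    points: list[tuple[int, int, int]] | None  # (x, y, label)


bounding_boxes = [(10, 10, 120, 140), (200, 50, 320, 180)]
point_lists: list[list[tuple[int, int, int]]] = []  # no point prompts this time

inputs = [
    SAMInput(bounding_box=bb, points=pts or None)
    for bb, pts in zip_longest(bounding_boxes or [], point_lists or [], fillvalue=None)
]
print(inputs)  # two inputs, each with a box and no points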

@@ -11,7 +11,6 @@ from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
InputField,
UIType,
WithBoard,
WithMetadata,
)
@@ -19,6 +18,7 @@ from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.session_processor.session_processor_common import CanceledException
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.taxonomy import ModelType
from invokeai.backend.spandrel_image_to_image_model import SpandrelImageToImageModel
from invokeai.backend.tiles.tiles import calc_tiles_min_overlap
from invokeai.backend.tiles.utils import TBLR, Tile
@@ -33,7 +33,7 @@ class SpandrelImageToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
image_to_image_model: ModelIdentifierField = InputField(
title="Image-to-Image Model",
description=FieldDescriptions.spandrel_image_to_image_model,
ui_type=UIType.SpandrelImageToImageModel,
ui_model_type=ModelType.SpandrelImageToImage,
)
tile_size: int = InputField(
default=512, description="The tile size for tiled image-to-image. Set to 0 to disable tiling."

View File

@@ -8,11 +8,12 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, InputField, OutputField, UIType
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, InputField, OutputField
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_RESIZE_VALUES
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType
class T2IAdapterField(BaseModel):
@@ -60,7 +61,8 @@ class T2IAdapterInvocation(BaseInvocation):
description="The T2I-Adapter model.",
title="T2I-Adapter Model",
ui_order=-1,
ui_type=UIType.T2IAdapterModel,
ui_model_base=[BaseModelType.StableDiffusion1, BaseModelType.StableDiffusionXL],
ui_model_type=ModelType.T2IAdapter,
)
weight: Union[float, list[float]] = InputField(
default=1, ge=0, description="The weight given to the T2I-Adapter", title="Weight"

View File

@@ -0,0 +1,112 @@
# Copyright (c) 2024, Lincoln D. Stein and the InvokeAI Development Team
"""Z-Image Control invocation for spatial conditioning."""
from pydantic import BaseModel, Field
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
InputField,
OutputField,
)
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType
class ZImageControlField(BaseModel):
"""A Z-Image control conditioning field for spatial control (Canny, HED, Depth, Pose, MLSD)."""
image_name: str = Field(description="The name of the preprocessed control image")
control_model: ModelIdentifierField = Field(description="The Z-Image ControlNet adapter model")
control_context_scale: float = Field(
default=0.75,
ge=0.0,
le=2.0,
description="The strength of the control signal. Recommended range: 0.65-0.80.",
)
begin_step_percent: float = Field(
default=0.0,
ge=0.0,
le=1.0,
description="When the control is first applied (% of total steps)",
)
end_step_percent: float = Field(
default=1.0,
ge=0.0,
le=1.0,
description="When the control is last applied (% of total steps)",
)
@invocation_output("z_image_control_output")
class ZImageControlOutput(BaseInvocationOutput):
"""Z-Image Control output containing control configuration."""
control: ZImageControlField = OutputField(description="Z-Image control conditioning")
@invocation(
"z_image_control",
title="Z-Image ControlNet",
tags=["image", "z-image", "control", "controlnet"],
category="control",
version="1.1.0",
classification=Classification.Prototype,
)
class ZImageControlInvocation(BaseInvocation):
"""Configure Z-Image ControlNet for spatial conditioning.
Takes a preprocessed control image (e.g., Canny edges, depth map, pose)
and a Z-Image ControlNet adapter model to enable spatial control.
Supports 5 control modes: Canny, HED, Depth, Pose, MLSD.
Recommended control_context_scale: 0.65-0.80.
"""
image: ImageField = InputField(
description="The preprocessed control image (Canny, HED, Depth, Pose, or MLSD)",
)
control_model: ModelIdentifierField = InputField(
description=FieldDescriptions.controlnet_model,
title="Control Model",
ui_model_base=BaseModelType.ZImage,
ui_model_type=ModelType.ControlNet,
)
control_context_scale: float = InputField(
default=0.75,
ge=0.0,
le=2.0,
description="Strength of the control signal. Recommended range: 0.65-0.80.",
title="Control Scale",
)
begin_step_percent: float = InputField(
default=0.0,
ge=0.0,
le=1.0,
description="When the control is first applied (% of total steps)",
)
end_step_percent: float = InputField(
default=1.0,
ge=0.0,
le=1.0,
description="When the control is last applied (% of total steps)",
)
def invoke(self, context: InvocationContext) -> ZImageControlOutput:
return ZImageControlOutput(
control=ZImageControlField(
image_name=self.image.image_name,
control_model=self.control_model,
control_context_scale=self.control_context_scale,
begin_step_percent=self.begin_step_percent,
end_step_percent=self.end_step_percent,
)
)
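
As a standalone illustration of the bounds enforced on ZImageControlField above, the sketch below reproduces its pydantic validation; a plain string stands in for ModelIdentifierField so the example runs without the InvokeAI backend.

from pydantic import BaseModel, Field, ValidationError

class ControlFieldSketch(BaseModel):
    image_name: str
    control_model: str  # stand-in for ModelIdentifierField
    control_context_scale: float = Field(default=0.75, ge=0.0, le=2.0)
    begin_step_percent: float = Field(default=0.0, ge=0.0, le=1.0)
    end_step_percent: float = Field(default=1.0, ge=0.0, le=1.0)

ok = ControlFieldSketch(image_name="canny_01.png", control_model="z-image-canny-adapter")
print(ok.control_context_scale)  # 0.75, the recommended default

try:
    ControlFieldSketch(image_name="x", control_model="y", control_context_scale=2.5)
except ValidationError:
    print("rejected: control_context_scale must be between 0.0 and 2.0")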

View File

@@ -0,0 +1,770 @@
import inspect
import math
from contextlib import ExitStack
from typing import Callable, Iterator, Optional, Tuple
import einops
import torch
import torchvision.transforms as tv_transforms
from diffusers.schedulers.scheduling_utils import SchedulerMixin
from PIL import Image
from torchvision.transforms.functional import resize as tv_resize
from tqdm import tqdm
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
DenoiseMaskField,
FieldDescriptions,
Input,
InputField,
LatentsField,
ZImageConditioningField,
)
from invokeai.app.invocations.model import TransformerField, VAEField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.invocations.z_image_control import ZImageControlField
from invokeai.app.invocations.z_image_image_to_latents import ZImageImageToLatentsInvocation
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.schedulers import ZIMAGE_SCHEDULER_LABELS, ZIMAGE_SCHEDULER_MAP, ZIMAGE_SCHEDULER_NAME_VALUES
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelFormat
from invokeai.backend.patches.layer_patcher import LayerPatcher
from invokeai.backend.patches.lora_conversions.z_image_lora_constants import Z_IMAGE_LORA_TRANSFORMER_PREFIX
from invokeai.backend.patches.model_patch_raw import ModelPatchRaw
from invokeai.backend.rectified_flow.rectified_flow_inpaint_extension import RectifiedFlowInpaintExtension
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ZImageConditioningInfo
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.z_image.extensions.regional_prompting_extension import ZImageRegionalPromptingExtension
from invokeai.backend.z_image.text_conditioning import ZImageTextConditioning
from invokeai.backend.z_image.z_image_control_adapter import ZImageControlAdapter
from invokeai.backend.z_image.z_image_controlnet_extension import (
ZImageControlNetExtension,
z_image_forward_with_control,
)
from invokeai.backend.z_image.z_image_transformer_patch import patch_transformer_for_regional_prompting
@invocation(
"z_image_denoise",
title="Denoise - Z-Image",
tags=["image", "z-image"],
category="image",
version="1.4.0",
classification=Classification.Prototype,
)
class ZImageDenoiseInvocation(BaseInvocation):
"""Run the denoising process with a Z-Image model.
Supports regional prompting by connecting multiple conditioning inputs with masks.
"""
# If latents is provided, this means we are doing image-to-image.
latents: Optional[LatentsField] = InputField(
default=None, description=FieldDescriptions.latents, input=Input.Connection
)
# denoise_mask is used for image-to-image inpainting. Only the masked region is modified.
denoise_mask: Optional[DenoiseMaskField] = InputField(
default=None, description=FieldDescriptions.denoise_mask, input=Input.Connection
)
denoising_start: float = InputField(default=0.0, ge=0, le=1, description=FieldDescriptions.denoising_start)
denoising_end: float = InputField(default=1.0, ge=0, le=1, description=FieldDescriptions.denoising_end)
add_noise: bool = InputField(default=True, description="Add noise based on denoising start.")
transformer: TransformerField = InputField(
description=FieldDescriptions.z_image_model, input=Input.Connection, title="Transformer"
)
positive_conditioning: ZImageConditioningField | list[ZImageConditioningField] = InputField(
description=FieldDescriptions.positive_cond, input=Input.Connection
)
negative_conditioning: ZImageConditioningField | list[ZImageConditioningField] | None = InputField(
default=None, description=FieldDescriptions.negative_cond, input=Input.Connection
)
# Z-Image-Turbo works best without CFG (guidance_scale=1.0)
guidance_scale: float = InputField(
default=1.0,
ge=1.0,
description="Guidance scale for classifier-free guidance. 1.0 = no CFG (recommended for Z-Image-Turbo). "
"Values > 1.0 amplify guidance.",
title="Guidance Scale",
)
width: int = InputField(default=1024, multiple_of=16, description="Width of the generated image.")
height: int = InputField(default=1024, multiple_of=16, description="Height of the generated image.")
# Z-Image-Turbo uses 8 steps by default
steps: int = InputField(default=8, gt=0, description="Number of denoising steps. 8 recommended for Z-Image-Turbo.")
seed: int = InputField(default=0, description="Randomness seed for reproducibility.")
# Z-Image Control support
control: Optional[ZImageControlField] = InputField(
default=None,
description="Z-Image control conditioning for spatial control (Canny, HED, Depth, Pose, MLSD).",
input=Input.Connection,
)
# VAE for encoding control images (required when using control)
vae: Optional[VAEField] = InputField(
default=None,
description=FieldDescriptions.vae + " Required for control conditioning.",
input=Input.Connection,
)
# Scheduler selection for the denoising process
scheduler: ZIMAGE_SCHEDULER_NAME_VALUES = InputField(
default="euler",
description="Scheduler (sampler) for the denoising process. Euler is the default and recommended for "
"Z-Image-Turbo. Heun is 2nd-order (better quality, 2x slower). LCM is optimized for few steps.",
ui_choice_labels=ZIMAGE_SCHEDULER_LABELS,
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = self._run_diffusion(context)
latents = latents.detach().to("cpu")
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)
def _prep_inpaint_mask(self, context: InvocationContext, latents: torch.Tensor) -> torch.Tensor | None:
"""Prepare the inpaint mask."""
if self.denoise_mask is None:
return None
mask = context.tensors.load(self.denoise_mask.mask_name)
# Invert mask: 0.0 = regions to denoise, 1.0 = regions to preserve
mask = 1.0 - mask
_, _, latent_height, latent_width = latents.shape
mask = tv_resize(
img=mask,
size=[latent_height, latent_width],
interpolation=tv_transforms.InterpolationMode.BILINEAR,
antialias=False,
)
mask = mask.to(device=latents.device, dtype=latents.dtype)
return mask
def _load_text_conditioning(
self,
context: InvocationContext,
cond_field: ZImageConditioningField | list[ZImageConditioningField],
img_height: int,
img_width: int,
dtype: torch.dtype,
device: torch.device,
) -> list[ZImageTextConditioning]:
"""Load Z-Image text conditioning with optional regional masks.
Args:
context: The invocation context.
cond_field: Single conditioning field or list of fields.
img_height: Height of the image token grid (H // patch_size).
img_width: Width of the image token grid (W // patch_size).
dtype: Target dtype.
device: Target device.
Returns:
List of ZImageTextConditioning objects with embeddings and masks.
"""
# Normalize to a list
cond_list = [cond_field] if isinstance(cond_field, ZImageConditioningField) else cond_field
text_conditionings: list[ZImageTextConditioning] = []
for cond in cond_list:
# Load the text embeddings
cond_data = context.conditioning.load(cond.conditioning_name)
assert len(cond_data.conditionings) == 1
z_image_conditioning = cond_data.conditionings[0]
assert isinstance(z_image_conditioning, ZImageConditioningInfo)
z_image_conditioning = z_image_conditioning.to(dtype=dtype, device=device)
prompt_embeds = z_image_conditioning.prompt_embeds
# Load the mask, if provided
mask: torch.Tensor | None = None
if cond.mask is not None:
mask = context.tensors.load(cond.mask.tensor_name)
mask = mask.to(device=device)
mask = ZImageRegionalPromptingExtension.preprocess_regional_prompt_mask(
mask, img_height, img_width, dtype, device
)
text_conditionings.append(ZImageTextConditioning(prompt_embeds=prompt_embeds, mask=mask))
return text_conditionings
def _get_noise(
self,
batch_size: int,
num_channels_latents: int,
height: int,
width: int,
dtype: torch.dtype,
device: torch.device,
seed: int,
) -> torch.Tensor:
"""Generate initial noise tensor."""
# Generate noise as float32 on CPU for maximum compatibility,
# then cast to target dtype/device
rand_device = "cpu"
rand_dtype = torch.float32
return torch.randn(
batch_size,
num_channels_latents,
int(height) // LATENT_SCALE_FACTOR,
int(width) // LATENT_SCALE_FACTOR,
device=rand_device,
dtype=rand_dtype,
generator=torch.Generator(device=rand_device).manual_seed(seed),
).to(device=device, dtype=dtype)
def _calculate_shift(
self,
image_seq_len: int,
base_image_seq_len: int = 256,
max_image_seq_len: int = 4096,
base_shift: float = 0.5,
max_shift: float = 1.15,
) -> float:
"""Calculate timestep shift based on image sequence length.
Based on diffusers ZImagePipeline.calculate_shift method.
"""
m = (max_shift - base_shift) / (max_image_seq_len - base_image_seq_len)
b = base_shift - m * base_image_seq_len
mu = image_seq_len * m + b
return mu
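# Worked example: a 1024x1024 output gives a 128x128 latent, a 64x64 token grid and
# image_seq_len = 4096 = max_image_seq_len, so mu = max_shift = 1.15; smaller images
# interpolate linearly down toward base_shift = 0.5 at image_seq_len = 256.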
def _get_sigmas(self, mu: float, num_steps: int) -> list[float]:
"""Generate sigma schedule with time shift.
Based on FlowMatchEulerDiscreteScheduler with shift.
Generates num_steps + 1 sigma values (including terminal 0.0).
"""
def time_shift(mu: float, sigma: float, t: float) -> float:
"""Apply time shift to a single timestep value."""
if t <= 0:
return 0.0
if t >= 1:
return 1.0
return math.exp(mu) / (math.exp(mu) + (1 / t - 1) ** sigma)
# Generate values linearly spaced from 1.0 down to 0.0 (the endpoints are handled
# explicitly inside time_shift), then apply the time shift
sigmas = []
for i in range(num_steps + 1):
t = 1.0 - i / num_steps # Goes from 1.0 to 0.0
sigma = time_shift(mu, 1.0, t)
sigmas.append(sigma)
return sigmas
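# Illustration: with mu = 1.15, the midpoint t = 0.5 maps to
# exp(1.15) / (exp(1.15) + 1) ~= 0.76 rather than 0.5, so the shifted schedule
# spends more of the step budget at high-noise sigmas.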
def _run_diffusion(self, context: InvocationContext) -> torch.Tensor:
device = TorchDevice.choose_torch_device()
inference_dtype = TorchDevice.choose_bfloat16_safe_dtype(device)
transformer_info = context.models.load(self.transformer.transformer)
# Calculate image token grid dimensions
patch_size = 2 # Z-Image uses patch_size=2
latent_height = self.height // LATENT_SCALE_FACTOR
latent_width = self.width // LATENT_SCALE_FACTOR
img_token_height = latent_height // patch_size
img_token_width = latent_width // patch_size
img_seq_len = img_token_height * img_token_width
# Load positive conditioning with regional masks
pos_text_conditionings = self._load_text_conditioning(
context=context,
cond_field=self.positive_conditioning,
img_height=img_token_height,
img_width=img_token_width,
dtype=inference_dtype,
device=device,
)
# Create regional prompting extension
regional_extension = ZImageRegionalPromptingExtension.from_text_conditionings(
text_conditionings=pos_text_conditionings,
img_seq_len=img_seq_len,
)
# Get the concatenated prompt embeddings for the transformer
pos_prompt_embeds = regional_extension.regional_text_conditioning.prompt_embeds
# Load negative conditioning if provided and guidance_scale != 1.0
# CFG formula: pred = pred_uncond + cfg_scale * (pred_cond - pred_uncond)
# At cfg_scale=1.0: pred = pred_cond (no effect, skip uncond computation)
# This matches FLUX's convention where 1.0 means "no CFG"
neg_prompt_embeds: torch.Tensor | None = None
do_classifier_free_guidance = (
not math.isclose(self.guidance_scale, 1.0) and self.negative_conditioning is not None
)
if do_classifier_free_guidance:
assert self.negative_conditioning is not None
# Load all negative conditionings and concatenate embeddings
# Note: We ignore masks for negative conditioning as regional negative prompting is not fully supported
neg_text_conditionings = self._load_text_conditioning(
context=context,
cond_field=self.negative_conditioning,
img_height=img_token_height,
img_width=img_token_width,
dtype=inference_dtype,
device=device,
)
# Concatenate all negative embeddings
neg_prompt_embeds = torch.cat([tc.prompt_embeds for tc in neg_text_conditionings], dim=0)
# Calculate shift based on image sequence length
mu = self._calculate_shift(img_seq_len)
# Generate sigma schedule with time shift
sigmas = self._get_sigmas(mu, self.steps)
# Apply denoising_start and denoising_end clipping
if self.denoising_start > 0 or self.denoising_end < 1:
# Calculate start and end indices based on denoising range
total_sigmas = len(sigmas)
start_idx = int(self.denoising_start * (total_sigmas - 1))
end_idx = int(self.denoising_end * (total_sigmas - 1)) + 1
sigmas = sigmas[start_idx:end_idx]
total_steps = len(sigmas) - 1
# Load input latents if provided (image-to-image)
init_latents = context.tensors.load(self.latents.latents_name) if self.latents else None
if init_latents is not None:
init_latents = init_latents.to(device=device, dtype=inference_dtype)
# Generate initial noise
num_channels_latents = 16 # Z-Image uses 16 latent channels
noise = self._get_noise(
batch_size=1,
num_channels_latents=num_channels_latents,
height=self.height,
width=self.width,
dtype=inference_dtype,
device=device,
seed=self.seed,
)
# Prepare input latent image
if init_latents is not None:
if self.add_noise:
# Noise the init_latents by the appropriate amount for the first timestep.
s_0 = sigmas[0]
latents = s_0 * noise + (1.0 - s_0) * init_latents
else:
latents = init_latents
else:
if self.denoising_start > 1e-5:
raise ValueError("denoising_start should be 0 when initial latents are not provided.")
latents = noise
# Short-circuit if no denoising steps
if total_steps <= 0:
return latents
# Prepare inpaint extension
inpaint_mask = self._prep_inpaint_mask(context, latents)
inpaint_extension: RectifiedFlowInpaintExtension | None = None
if inpaint_mask is not None:
if init_latents is None:
raise ValueError("Initial latents are required when using an inpaint mask (image-to-image inpainting)")
inpaint_extension = RectifiedFlowInpaintExtension(
init_latents=init_latents,
inpaint_mask=inpaint_mask,
noise=noise,
)
step_callback = self._build_step_callback(context)
# Initialize the diffusers scheduler if not using built-in Euler
scheduler: SchedulerMixin | None = None
use_scheduler = self.scheduler != "euler"
if use_scheduler:
scheduler_class = ZIMAGE_SCHEDULER_MAP[self.scheduler]
scheduler = scheduler_class(
num_train_timesteps=1000,
shift=1.0,
)
# Set timesteps - LCM should use num_inference_steps (it has its own sigma schedule),
# while other schedulers can use custom sigmas if supported
is_lcm = self.scheduler == "lcm"
set_timesteps_sig = inspect.signature(scheduler.set_timesteps)
if not is_lcm and "sigmas" in set_timesteps_sig.parameters:
# Convert sigmas list to tensor for scheduler
scheduler.set_timesteps(sigmas=sigmas, device=device)
else:
# LCM or scheduler doesn't support custom sigmas - use num_inference_steps
scheduler.set_timesteps(num_inference_steps=total_steps, device=device)
# For Heun scheduler, the number of actual steps may differ
num_scheduler_steps = len(scheduler.timesteps)
else:
num_scheduler_steps = total_steps
with ExitStack() as exit_stack:
# Get transformer config to determine if it's quantized
transformer_config = context.models.get_config(self.transformer.transformer)
# Determine if the model is quantized.
# If the model is quantized, then we need to apply the LoRA weights as sidecar layers. This results in
# slower inference than direct patching, but is agnostic to the quantization format.
if transformer_config.format in [ModelFormat.Diffusers, ModelFormat.Checkpoint]:
model_is_quantized = False
elif transformer_config.format in [ModelFormat.GGUFQuantized]:
model_is_quantized = True
else:
raise ValueError(f"Unsupported Z-Image model format: {transformer_config.format}")
# Load transformer - always use base transformer, control is handled via extension
(cached_weights, transformer) = exit_stack.enter_context(transformer_info.model_on_device())
# Prepare control extension if control is provided
control_extension: ZImageControlNetExtension | None = None
if self.control is not None:
# Load control adapter using context manager (proper GPU memory management)
control_model_info = context.models.load(self.control.control_model)
(_, control_adapter) = exit_stack.enter_context(control_model_info.model_on_device())
assert isinstance(control_adapter, ZImageControlAdapter)
# Get control_in_dim from adapter config (16 for V1, 33 for V2.0)
adapter_config = control_adapter.config
control_in_dim = adapter_config.get("control_in_dim", 16)
num_control_blocks = adapter_config.get("num_control_blocks", 6)
# Log control configuration for debugging
version = "V2.0" if control_in_dim > 16 else "V1"
context.util.signal_progress(
f"Using Z-Image ControlNet {version} (Extension): control_in_dim={control_in_dim}, "
f"num_blocks={num_control_blocks}, scale={self.control.control_context_scale}"
)
# Load and prepare control image - must be VAE-encoded!
if self.vae is None:
raise ValueError("VAE is required when using Z-Image Control. Connect a VAE to the 'vae' input.")
control_image = context.images.get_pil(self.control.image_name)
# Resize control image to match output dimensions
control_image = control_image.convert("RGB")
control_image = control_image.resize((self.width, self.height), Image.Resampling.LANCZOS)
# Convert to tensor format for VAE encoding
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
control_image_tensor = image_resized_to_grid_as_tensor(control_image)
if control_image_tensor.dim() == 3:
control_image_tensor = einops.rearrange(control_image_tensor, "c h w -> 1 c h w")
# Encode control image through VAE to get latents
vae_info = context.models.load(self.vae.vae)
control_latents = ZImageImageToLatentsInvocation.vae_encode(
vae_info=vae_info,
image_tensor=control_image_tensor,
)
# Move to inference device/dtype
control_latents = control_latents.to(device=device, dtype=inference_dtype)
# Add frame dimension: [B, C, H, W] -> [C, 1, H, W] (single image)
control_latents = control_latents.squeeze(0).unsqueeze(1)
# Prepare control_cond based on control_in_dim
# V1: 16 channels (just control latents)
# V2.0: 33 channels = 16 control + 16 reference + 1 mask
# - Channels 0-15: control image latents (from VAE encoding)
# - Channels 16-31: reference/inpaint image latents (zeros for pure control)
# - Channel 32: inpaint mask (1.0 = don't inpaint, 0.0 = inpaint region)
# For pure control (no inpainting), we set mask=1 to tell model "use control, don't inpaint"
c, f, h, w = control_latents.shape
if c < control_in_dim:
padding_channels = control_in_dim - c
if padding_channels == 17:
# V2.0: 16 reference channels (zeros) + 1 mask channel (ones)
ref_padding = torch.zeros(
(16, f, h, w),
device=device,
dtype=inference_dtype,
)
# Mask channel = 1.0 means "don't inpaint this region, use control signal"
mask_channel = torch.ones(
(1, f, h, w),
device=device,
dtype=inference_dtype,
)
control_latents = torch.cat([control_latents, ref_padding, mask_channel], dim=0)
else:
# Generic padding with zeros for other cases
zero_padding = torch.zeros(
(padding_channels, f, h, w),
device=device,
dtype=inference_dtype,
)
control_latents = torch.cat([control_latents, zero_padding], dim=0)
# Create control extension (adapter is already on device from model_on_device)
control_extension = ZImageControlNetExtension(
control_adapter=control_adapter,
control_cond=control_latents,
weight=self.control.control_context_scale,
begin_step_percent=self.control.begin_step_percent,
end_step_percent=self.control.end_step_percent,
)
# Apply LoRA models to the transformer.
# Note: We apply the LoRA after the transformer has been moved to its target device for faster patching.
exit_stack.enter_context(
LayerPatcher.apply_smart_model_patches(
model=transformer,
patches=self._lora_iterator(context),
prefix=Z_IMAGE_LORA_TRANSFORMER_PREFIX,
dtype=inference_dtype,
cached_weights=cached_weights,
force_sidecar_patching=model_is_quantized,
)
)
# Apply regional prompting patch if we have regional masks
exit_stack.enter_context(
patch_transformer_for_regional_prompting(
transformer=transformer,
regional_attn_mask=regional_extension.regional_attn_mask,
img_seq_len=img_seq_len,
)
)
# Denoising loop - supports both built-in Euler and diffusers schedulers
# Track user-facing step for progress (accounts for Heun's double steps)
user_step = 0
if use_scheduler and scheduler is not None:
# Use diffusers scheduler for stepping
# Use tqdm with total_steps (user-facing steps) not num_scheduler_steps (internal steps)
# This ensures the progress bar shows 1/8, 2/8, etc. even when the scheduler uses more internal steps
pbar = tqdm(total=total_steps, desc="Denoising")
for step_index in range(num_scheduler_steps):
sched_timestep = scheduler.timesteps[step_index]
# Convert scheduler timestep (0-1000) to normalized sigma (0-1)
sigma_curr = sched_timestep.item() / scheduler.config.num_train_timesteps
# For Heun scheduler, track if we're in first or second order step
is_heun = hasattr(scheduler, "state_in_first_order")
in_first_order = scheduler.state_in_first_order if is_heun else True
# Timestep tensor for Z-Image model
# The model expects t=0 at start (noise) and t=1 at end (clean)
model_t = 1.0 - sigma_curr
timestep = torch.tensor([model_t], device=device, dtype=inference_dtype).expand(latents.shape[0])
# Run transformer for positive prediction
latent_model_input = latents.to(transformer.dtype)
latent_model_input = latent_model_input.unsqueeze(2) # Add frame dimension
latent_model_input_list = list(latent_model_input.unbind(dim=0))
# Determine if control should be applied at this step
apply_control = control_extension is not None and control_extension.should_apply(
user_step, total_steps
)
# Run forward pass
if apply_control:
model_out_list, _ = z_image_forward_with_control(
transformer=transformer,
x=latent_model_input_list,
t=timestep,
cap_feats=[pos_prompt_embeds],
control_extension=control_extension,
)
else:
model_output = transformer(
x=latent_model_input_list,
t=timestep,
cap_feats=[pos_prompt_embeds],
)
model_out_list = model_output[0]
noise_pred_cond = torch.stack([t.float() for t in model_out_list], dim=0)
noise_pred_cond = noise_pred_cond.squeeze(2)
noise_pred_cond = -noise_pred_cond # Z-Image uses v-prediction with negation
# Apply CFG if enabled
if do_classifier_free_guidance and neg_prompt_embeds is not None:
if apply_control:
model_out_list_uncond, _ = z_image_forward_with_control(
transformer=transformer,
x=latent_model_input_list,
t=timestep,
cap_feats=[neg_prompt_embeds],
control_extension=control_extension,
)
else:
model_output_uncond = transformer(
x=latent_model_input_list,
t=timestep,
cap_feats=[neg_prompt_embeds],
)
model_out_list_uncond = model_output_uncond[0]
noise_pred_uncond = torch.stack([t.float() for t in model_out_list_uncond], dim=0)
noise_pred_uncond = noise_pred_uncond.squeeze(2)
noise_pred_uncond = -noise_pred_uncond
noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_cond - noise_pred_uncond)
else:
noise_pred = noise_pred_cond
# Use scheduler.step() for the update
step_output = scheduler.step(model_output=noise_pred, timestep=sched_timestep, sample=latents)
latents = step_output.prev_sample
# Get sigma_prev for inpainting (next sigma value)
if step_index + 1 < len(scheduler.sigmas):
sigma_prev = scheduler.sigmas[step_index + 1].item()
else:
sigma_prev = 0.0
if inpaint_extension is not None:
latents = inpaint_extension.merge_intermediate_latents_with_init_latents(latents, sigma_prev)
# For Heun, only increment user step after second-order step completes
if is_heun:
if not in_first_order:
user_step += 1
# Only call step_callback if we haven't exceeded total_steps
if user_step <= total_steps:
pbar.update(1)
step_callback(
PipelineIntermediateState(
step=user_step,
order=2,
total_steps=total_steps,
timestep=int(sigma_curr * 1000),
latents=latents,
),
)
else:
# For LCM and other first-order schedulers
user_step += 1
# Only call step_callback if we haven't exceeded total_steps
# (LCM scheduler may have more internal steps than user-facing steps)
if user_step <= total_steps:
pbar.update(1)
step_callback(
PipelineIntermediateState(
step=user_step,
order=1,
total_steps=total_steps,
timestep=int(sigma_curr * 1000),
latents=latents,
),
)
pbar.close()
else:
# Original Euler implementation (default, optimized for Z-Image)
for step_idx in tqdm(range(total_steps)):
sigma_curr = sigmas[step_idx]
sigma_prev = sigmas[step_idx + 1]
# Timestep tensor for Z-Image model
# The model expects t=0 at start (noise) and t=1 at end (clean)
# Sigma goes from 1 (noise) to 0 (clean), so model_t = 1 - sigma
model_t = 1.0 - sigma_curr
timestep = torch.tensor([model_t], device=device, dtype=inference_dtype).expand(latents.shape[0])
# Run transformer for positive prediction
# Z-Image transformer expects: x as list of [C, 1, H, W] tensors, t, cap_feats as list
# Prepare latent input: [B, C, H, W] -> [B, C, 1, H, W] -> list of [C, 1, H, W]
latent_model_input = latents.to(transformer.dtype)
latent_model_input = latent_model_input.unsqueeze(2) # Add frame dimension
latent_model_input_list = list(latent_model_input.unbind(dim=0))
# Determine if control should be applied at this step
apply_control = control_extension is not None and control_extension.should_apply(
step_idx, total_steps
)
# Run forward pass - use custom forward with control if extension is active
if apply_control:
model_out_list, _ = z_image_forward_with_control(
transformer=transformer,
x=latent_model_input_list,
t=timestep,
cap_feats=[pos_prompt_embeds],
control_extension=control_extension,
)
else:
model_output = transformer(
x=latent_model_input_list,
t=timestep,
cap_feats=[pos_prompt_embeds],
)
model_out_list = model_output[0] # Extract list of tensors from tuple
noise_pred_cond = torch.stack([t.float() for t in model_out_list], dim=0)
noise_pred_cond = noise_pred_cond.squeeze(2) # Remove frame dimension
noise_pred_cond = -noise_pred_cond # Z-Image uses v-prediction with negation
# Apply CFG if enabled
if do_classifier_free_guidance and neg_prompt_embeds is not None:
if apply_control:
model_out_list_uncond, _ = z_image_forward_with_control(
transformer=transformer,
x=latent_model_input_list,
t=timestep,
cap_feats=[neg_prompt_embeds],
control_extension=control_extension,
)
else:
model_output_uncond = transformer(
x=latent_model_input_list,
t=timestep,
cap_feats=[neg_prompt_embeds],
)
model_out_list_uncond = model_output_uncond[0] # Extract list of tensors from tuple
noise_pred_uncond = torch.stack([t.float() for t in model_out_list_uncond], dim=0)
noise_pred_uncond = noise_pred_uncond.squeeze(2)
noise_pred_uncond = -noise_pred_uncond
noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_cond - noise_pred_uncond)
else:
noise_pred = noise_pred_cond
# Euler step
latents_dtype = latents.dtype
latents = latents.to(dtype=torch.float32)
latents = latents + (sigma_prev - sigma_curr) * noise_pred
latents = latents.to(dtype=latents_dtype)
if inpaint_extension is not None:
latents = inpaint_extension.merge_intermediate_latents_with_init_latents(latents, sigma_prev)
step_callback(
PipelineIntermediateState(
step=step_idx + 1,
order=1,
total_steps=total_steps,
timestep=int(sigma_curr * 1000),
latents=latents,
),
)
return latents
def _build_step_callback(self, context: InvocationContext) -> Callable[[PipelineIntermediateState], None]:
def step_callback(state: PipelineIntermediateState) -> None:
context.util.sd_step_callback(state, BaseModelType.ZImage)
return step_callback
def _lora_iterator(self, context: InvocationContext) -> Iterator[Tuple[ModelPatchRaw, float]]:
"""Iterate over LoRA models to apply to the transformer."""
for lora in self.transformer.loras:
lora_info = context.models.load(lora.lora)
if not isinstance(lora_info.model, ModelPatchRaw):
raise TypeError(
f"Expected ModelPatchRaw for LoRA '{lora.lora.key}', got {type(lora_info.model).__name__}. "
"The LoRA model may be corrupted or incompatible."
)
yield (lora_info.model, lora.weight)
del lora_info
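
To make the Euler branch of the denoising loop concrete, here is a self-contained sketch of the same update, latents = latents + (sigma_prev - sigma_curr) * v, applied to a toy velocity field whose clean target is all zeros. The sigma values are the 8-step mu = 1.15 schedule produced by _get_sigmas (rounded); the toy model merely stands in for the Z-Image transformer.

import torch

def toy_velocity(x: torch.Tensor, sigma: float) -> torch.Tensor:
    # Exact rectified-flow velocity when the clean image is all zeros:
    # x_sigma = sigma * noise, so dx/dsigma = x / sigma.
    return x / sigma

sigmas = [1.0, 0.96, 0.90, 0.84, 0.76, 0.65, 0.51, 0.31, 0.0]
latents = torch.randn(1, 16, 8, 8)  # 16 latent channels, tiny spatial size for the demo

for i in range(len(sigmas) - 1):
    sigma_curr, sigma_prev = sigmas[i], sigmas[i + 1]
    v = toy_velocity(latents, sigma_curr)
    latents = latents + (sigma_prev - sigma_curr) * v  # same Euler update as above

print(latents.abs().max())  # tensor(0.) - the sample is carried all the way to the target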

View File

@@ -0,0 +1,110 @@
from typing import Union
import einops
import torch
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
Input,
InputField,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.modules.autoencoder import AutoEncoder as FluxAutoEncoder
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.vae_working_memory import estimate_vae_working_memory_flux
# Z-Image can use either the Diffusers AutoencoderKL or the FLUX AutoEncoder
ZImageVAE = Union[AutoencoderKL, FluxAutoEncoder]
@invocation(
"z_image_i2l",
title="Image to Latents - Z-Image",
tags=["image", "latents", "vae", "i2l", "z-image"],
category="image",
version="1.1.0",
classification=Classification.Prototype,
)
class ZImageImageToLatentsInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates latents from an image using Z-Image VAE (supports both Diffusers and FLUX VAE)."""
image: ImageField = InputField(description="The image to encode.")
vae: VAEField = InputField(description=FieldDescriptions.vae, input=Input.Connection)
@staticmethod
def vae_encode(vae_info: LoadedModel, image_tensor: torch.Tensor) -> torch.Tensor:
if not isinstance(vae_info.model, (AutoencoderKL, FluxAutoEncoder)):
raise TypeError(
f"Expected AutoencoderKL or FluxAutoEncoder for Z-Image VAE, got {type(vae_info.model).__name__}. "
"Ensure you are using a compatible VAE model."
)
# Estimate working memory needed for VAE encode
estimated_working_memory = estimate_vae_working_memory_flux(
operation="encode",
image_tensor=image_tensor,
vae=vae_info.model,
)
with vae_info.model_on_device(working_mem_bytes=estimated_working_memory) as (_, vae):
if not isinstance(vae, (AutoencoderKL, FluxAutoEncoder)):
raise TypeError(
f"Expected AutoencoderKL or FluxAutoEncoder, got {type(vae).__name__}. "
"VAE model type changed unexpectedly after loading."
)
vae_dtype = next(iter(vae.parameters())).dtype
image_tensor = image_tensor.to(device=TorchDevice.choose_torch_device(), dtype=vae_dtype)
with torch.inference_mode():
if isinstance(vae, FluxAutoEncoder):
# FLUX VAE handles scaling internally
generator = torch.Generator(device=TorchDevice.choose_torch_device()).manual_seed(0)
latents = vae.encode(image_tensor, sample=True, generator=generator)
else:
# AutoencoderKL - needs manual scaling
vae.disable_tiling()
image_tensor_dist = vae.encode(image_tensor).latent_dist
latents: torch.Tensor = image_tensor_dist.sample().to(dtype=vae.dtype)
# Apply scaling_factor and shift_factor from VAE config
# Z-Image uses: latents = (latents - shift_factor) * scaling_factor
scaling_factor = vae.config.scaling_factor
shift_factor = getattr(vae.config, "shift_factor", None)
if shift_factor is not None:
latents = latents - shift_factor
latents = latents * scaling_factor
return latents
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
image = context.images.get_pil(self.image.image_name)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
vae_info = context.models.load(self.vae.vae)
if not isinstance(vae_info.model, (AutoencoderKL, FluxAutoEncoder)):
raise TypeError(
f"Expected AutoencoderKL or FluxAutoEncoder for Z-Image VAE, got {type(vae_info.model).__name__}. "
"Ensure you are using a compatible VAE model."
)
context.util.signal_progress("Running VAE")
latents = self.vae_encode(vae_info=vae_info, image_tensor=image_tensor)
latents = latents.to("cpu")
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)
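
The spatial bookkeeping shared by the Z-Image nodes (VAE downsampling, 16 latent channels, and 2x2 patching into transformer tokens) can be summarized in a small helper. The helper itself is hypothetical, and LATENT_SCALE_FACTOR is assumed to be the usual 8; the other constants come from the code above.

LATENT_SCALE_FACTOR = 8
PATCH_SIZE = 2
NUM_LATENT_CHANNELS = 16

def latent_and_token_shape(height: int, width: int) -> tuple[tuple[int, int, int, int], int]:
    latent_h, latent_w = height // LATENT_SCALE_FACTOR, width // LATENT_SCALE_FACTOR
    img_seq_len = (latent_h // PATCH_SIZE) * (latent_w // PATCH_SIZE)
    return (1, NUM_LATENT_CHANNELS, latent_h, latent_w), img_seq_len

print(latent_and_token_shape(1024, 1024))  # ((1, 16, 128, 128), 4096)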

View File

@@ -0,0 +1,111 @@
from contextlib import nullcontext
from typing import Union
import torch
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from einops import rearrange
from PIL import Image
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
InputField,
LatentsField,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.modules.autoencoder import AutoEncoder as FluxAutoEncoder
from invokeai.backend.stable_diffusion.extensions.seamless import SeamlessExt
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.vae_working_memory import estimate_vae_working_memory_flux
# Z-Image can use either the Diffusers AutoencoderKL or the FLUX AutoEncoder
ZImageVAE = Union[AutoencoderKL, FluxAutoEncoder]
@invocation(
"z_image_l2i",
title="Latents to Image - Z-Image",
tags=["latents", "image", "vae", "l2i", "z-image"],
category="latents",
version="1.1.0",
classification=Classification.Prototype,
)
class ZImageLatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents using Z-Image VAE (supports both Diffusers and FLUX VAE)."""
latents: LatentsField = InputField(description=FieldDescriptions.latents, input=Input.Connection)
vae: VAEField = InputField(description=FieldDescriptions.vae, input=Input.Connection)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.tensors.load(self.latents.latents_name)
vae_info = context.models.load(self.vae.vae)
if not isinstance(vae_info.model, (AutoencoderKL, FluxAutoEncoder)):
raise TypeError(
f"Expected AutoencoderKL or FluxAutoEncoder for Z-Image VAE, got {type(vae_info.model).__name__}. "
"Ensure you are using a compatible VAE model."
)
is_flux_vae = isinstance(vae_info.model, FluxAutoEncoder)
# Estimate working memory needed for VAE decode
estimated_working_memory = estimate_vae_working_memory_flux(
operation="decode",
image_tensor=latents,
vae=vae_info.model,
)
# FLUX VAE doesn't support seamless, so only apply for AutoencoderKL
seamless_context = (
nullcontext() if is_flux_vae else SeamlessExt.static_patch_model(vae_info.model, self.vae.seamless_axes)
)
with seamless_context, vae_info.model_on_device(working_mem_bytes=estimated_working_memory) as (_, vae):
context.util.signal_progress("Running VAE")
if not isinstance(vae, (AutoencoderKL, FluxAutoEncoder)):
raise TypeError(
f"Expected AutoencoderKL or FluxAutoEncoder, got {type(vae).__name__}. "
"VAE model type changed unexpectedly after loading."
)
vae_dtype = next(iter(vae.parameters())).dtype
latents = latents.to(device=TorchDevice.choose_torch_device(), dtype=vae_dtype)
# Disable tiling for AutoencoderKL
if isinstance(vae, AutoencoderKL):
vae.disable_tiling()
# Clear memory as VAE decode can request a lot
TorchDevice.empty_cache()
with torch.inference_mode():
if isinstance(vae, FluxAutoEncoder):
# FLUX VAE handles scaling internally
img = vae.decode(latents)
else:
# AutoencoderKL - Apply scaling_factor and shift_factor from VAE config
# Z-Image uses: latents = latents / scaling_factor + shift_factor
scaling_factor = vae.config.scaling_factor
shift_factor = getattr(vae.config, "shift_factor", None)
latents = latents / scaling_factor
if shift_factor is not None:
latents = latents + shift_factor
img = vae.decode(latents, return_dict=False)[0]
img = img.clamp(-1, 1)
img = rearrange(img[0], "c h w -> h w c")
img_pil = Image.fromarray((127.5 * (img + 1.0)).byte().cpu().numpy())
TorchDevice.empty_cache()
image_dto = context.images.save(image=img_pil)
return ImageOutput.build(image_dto)
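
A standalone check that the decode scaling above inverts the encode scaling used by the Image to Latents node: encode applies z = (z_raw - shift_factor) * scaling_factor, decode applies z_raw = z / scaling_factor + shift_factor. The factor values below are illustrative stand-ins for whatever the loaded VAE config provides.

import torch

scaling_factor = 0.3611
shift_factor = 0.1159

z_raw = torch.randn(1, 16, 128, 128)
z_scaled = (z_raw - shift_factor) * scaling_factor     # encode path (z_image_i2l)
z_restored = z_scaled / scaling_factor + shift_factor  # decode path (z_image_l2i)
print(torch.allclose(z_raw, z_restored, atol=1e-6))    # True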

View File

@@ -0,0 +1,153 @@
from typing import Optional
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField
from invokeai.app.invocations.model import LoRAField, ModelIdentifierField, Qwen3EncoderField, TransformerField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType
@invocation_output("z_image_lora_loader_output")
class ZImageLoRALoaderOutput(BaseInvocationOutput):
"""Z-Image LoRA Loader Output"""
transformer: Optional[TransformerField] = OutputField(
default=None, description=FieldDescriptions.transformer, title="Z-Image Transformer"
)
qwen3_encoder: Optional[Qwen3EncoderField] = OutputField(
default=None, description=FieldDescriptions.qwen3_encoder, title="Qwen3 Encoder"
)
@invocation(
"z_image_lora_loader",
title="Apply LoRA - Z-Image",
tags=["lora", "model", "z-image"],
category="model",
version="1.0.0",
)
class ZImageLoRALoaderInvocation(BaseInvocation):
"""Apply a LoRA model to a Z-Image transformer and/or Qwen3 text encoder."""
lora: ModelIdentifierField = InputField(
description=FieldDescriptions.lora_model,
title="LoRA",
ui_model_base=BaseModelType.ZImage,
ui_model_type=ModelType.LoRA,
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
transformer: TransformerField | None = InputField(
default=None,
description=FieldDescriptions.transformer,
input=Input.Connection,
title="Z-Image Transformer",
)
qwen3_encoder: Qwen3EncoderField | None = InputField(
default=None,
title="Qwen3 Encoder",
description=FieldDescriptions.qwen3_encoder,
input=Input.Connection,
)
def invoke(self, context: InvocationContext) -> ZImageLoRALoaderOutput:
lora_key = self.lora.key
if not context.models.exists(lora_key):
raise ValueError(f"Unknown lora: {lora_key}!")
# Check for existing LoRAs with the same key.
if self.transformer and any(lora.lora.key == lora_key for lora in self.transformer.loras):
raise ValueError(f'LoRA "{lora_key}" already applied to transformer.')
if self.qwen3_encoder and any(lora.lora.key == lora_key for lora in self.qwen3_encoder.loras):
raise ValueError(f'LoRA "{lora_key}" already applied to Qwen3 encoder.')
output = ZImageLoRALoaderOutput()
# Attach LoRA layers to the models.
if self.transformer is not None:
output.transformer = self.transformer.model_copy(deep=True)
output.transformer.loras.append(
LoRAField(
lora=self.lora,
weight=self.weight,
)
)
if self.qwen3_encoder is not None:
output.qwen3_encoder = self.qwen3_encoder.model_copy(deep=True)
output.qwen3_encoder.loras.append(
LoRAField(
lora=self.lora,
weight=self.weight,
)
)
return output
@invocation(
"z_image_lora_collection_loader",
title="Apply LoRA Collection - Z-Image",
tags=["lora", "model", "z-image"],
category="model",
version="1.0.0",
)
class ZImageLoRACollectionLoader(BaseInvocation):
"""Applies a collection of LoRAs to a Z-Image transformer."""
loras: Optional[LoRAField | list[LoRAField]] = InputField(
default=None, description="LoRA models and weights. May be a single LoRA or collection.", title="LoRAs"
)
transformer: Optional[TransformerField] = InputField(
default=None,
description=FieldDescriptions.transformer,
input=Input.Connection,
title="Transformer",
)
qwen3_encoder: Qwen3EncoderField | None = InputField(
default=None,
title="Qwen3 Encoder",
description=FieldDescriptions.qwen3_encoder,
input=Input.Connection,
)
def invoke(self, context: InvocationContext) -> ZImageLoRALoaderOutput:
output = ZImageLoRALoaderOutput()
loras = self.loras if isinstance(self.loras, list) else [self.loras]
added_loras: list[str] = []
if self.transformer is not None:
output.transformer = self.transformer.model_copy(deep=True)
if self.qwen3_encoder is not None:
output.qwen3_encoder = self.qwen3_encoder.model_copy(deep=True)
for lora in loras:
if lora is None:
continue
if lora.lora.key in added_loras:
continue
if not context.models.exists(lora.lora.key):
raise Exception(f"Unknown lora: {lora.lora.key}!")
if lora.lora.base is not BaseModelType.ZImage:
raise ValueError(
f"LoRA '{lora.lora.key}' is for {lora.lora.base.value if lora.lora.base else 'unknown'} models, "
"not Z-Image models. Ensure you are using a Z-Image compatible LoRA."
)
added_loras.append(lora.lora.key)
if self.transformer is not None and output.transformer is not None:
output.transformer.loras.append(lora)
if self.qwen3_encoder is not None and output.qwen3_encoder is not None:
output.qwen3_encoder.loras.append(lora)
return output
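
The collection loader above accepts either a single LoRAField or a list and skips entries whose key has already been added. The same normalize-then-dedup behaviour in miniature, with plain (key, weight) tuples standing in for LoRAField:

def normalize_and_dedup(loras):
    items = loras if isinstance(loras, list) else [loras]
    seen: set[str] = set()
    out = []
    for item in items:
        if item is None or item[0] in seen:
            continue
        seen.add(item[0])
        out.append(item)
    return out

print(normalize_and_dedup(("style-lora", 0.75)))                         # [('style-lora', 0.75)]
print(normalize_and_dedup([("style-lora", 0.75), ("style-lora", 1.0)]))  # duplicate key dropped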

View File

@@ -0,0 +1,135 @@
from typing import Optional
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField
from invokeai.app.invocations.model import (
ModelIdentifierField,
Qwen3EncoderField,
TransformerField,
VAEField,
)
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelFormat, ModelType, SubModelType
@invocation_output("z_image_model_loader_output")
class ZImageModelLoaderOutput(BaseInvocationOutput):
"""Z-Image base model loader output."""
transformer: TransformerField = OutputField(description=FieldDescriptions.transformer, title="Transformer")
qwen3_encoder: Qwen3EncoderField = OutputField(description=FieldDescriptions.qwen3_encoder, title="Qwen3 Encoder")
vae: VAEField = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation(
"z_image_model_loader",
title="Main Model - Z-Image",
tags=["model", "z-image"],
category="model",
version="3.0.0",
classification=Classification.Prototype,
)
class ZImageModelLoaderInvocation(BaseInvocation):
"""Loads a Z-Image model, outputting its submodels.
Similar to FLUX, you can mix and match components:
- Transformer: From Z-Image main model (GGUF quantized or Diffusers format)
- VAE: Separate FLUX VAE (shared with FLUX models) or from a Diffusers Z-Image model
- Qwen3 Encoder: Separate Qwen3Encoder model or from a Diffusers Z-Image model
"""
model: ModelIdentifierField = InputField(
description=FieldDescriptions.z_image_model,
input=Input.Direct,
ui_model_base=BaseModelType.ZImage,
ui_model_type=ModelType.Main,
title="Transformer",
)
vae_model: Optional[ModelIdentifierField] = InputField(
default=None,
description="Standalone VAE model. Z-Image uses the same VAE as FLUX (16-channel). "
"If not provided, VAE will be loaded from the Qwen3 Source model.",
input=Input.Direct,
ui_model_base=BaseModelType.Flux,
ui_model_type=ModelType.VAE,
title="VAE",
)
qwen3_encoder_model: Optional[ModelIdentifierField] = InputField(
default=None,
description="Standalone Qwen3 Encoder model. "
"If not provided, encoder will be loaded from the Qwen3 Source model.",
input=Input.Direct,
ui_model_type=ModelType.Qwen3Encoder,
title="Qwen3 Encoder",
)
qwen3_source_model: Optional[ModelIdentifierField] = InputField(
default=None,
description="Diffusers Z-Image model to extract VAE and/or Qwen3 encoder from. "
"Use this if you don't have separate VAE/Qwen3 models. "
"Ignored if both VAE and Qwen3 Encoder are provided separately.",
input=Input.Direct,
ui_model_base=BaseModelType.ZImage,
ui_model_type=ModelType.Main,
ui_model_format=ModelFormat.Diffusers,
title="Qwen3 Source (Diffusers)",
)
def invoke(self, context: InvocationContext) -> ZImageModelLoaderOutput:
# Transformer always comes from the main model
transformer = self.model.model_copy(update={"submodel_type": SubModelType.Transformer})
# Determine VAE source
if self.vae_model is not None:
# Use standalone FLUX VAE
vae = self.vae_model.model_copy(update={"submodel_type": SubModelType.VAE})
elif self.qwen3_source_model is not None:
# Extract from Diffusers Z-Image model
self._validate_diffusers_format(context, self.qwen3_source_model, "Qwen3 Source")
vae = self.qwen3_source_model.model_copy(update={"submodel_type": SubModelType.VAE})
else:
raise ValueError(
"No VAE source provided. Either set 'VAE' to a FLUX VAE model, "
"or set 'Qwen3 Source' to a Diffusers Z-Image model."
)
# Determine Qwen3 Encoder source
if self.qwen3_encoder_model is not None:
# Use standalone Qwen3 Encoder
qwen3_tokenizer = self.qwen3_encoder_model.model_copy(update={"submodel_type": SubModelType.Tokenizer})
qwen3_encoder = self.qwen3_encoder_model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
elif self.qwen3_source_model is not None:
# Extract from Diffusers Z-Image model
self._validate_diffusers_format(context, self.qwen3_source_model, "Qwen3 Source")
qwen3_tokenizer = self.qwen3_source_model.model_copy(update={"submodel_type": SubModelType.Tokenizer})
qwen3_encoder = self.qwen3_source_model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
else:
raise ValueError(
"No Qwen3 Encoder source provided. Either set 'Qwen3 Encoder' to a standalone model, "
"or set 'Qwen3 Source' to a Diffusers Z-Image model."
)
return ZImageModelLoaderOutput(
transformer=TransformerField(transformer=transformer, loras=[]),
qwen3_encoder=Qwen3EncoderField(tokenizer=qwen3_tokenizer, text_encoder=qwen3_encoder),
vae=VAEField(vae=vae),
)
def _validate_diffusers_format(
self, context: InvocationContext, model: ModelIdentifierField, model_name: str
) -> None:
"""Validate that a model is in Diffusers format."""
config = context.models.get_config(model)
if config.format != ModelFormat.Diffusers:
raise ValueError(
f"The {model_name} model must be a Diffusers format Z-Image model. "
f"The selected model '{config.name}' is in {config.format.value} format."
)
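
The loader's source-resolution order, restated as a minimal sketch: a standalone VAE or Qwen3 encoder wins, otherwise each falls back to the Diffusers "Qwen3 Source" model, and the loader raises if neither is available. Plain strings stand in for ModelIdentifierField.

def resolve_sources(
    vae_model: str | None, qwen3_encoder_model: str | None, qwen3_source_model: str | None
) -> tuple[str, str]:
    vae = vae_model or qwen3_source_model
    if vae is None:
        raise ValueError("No VAE source provided.")
    encoder = qwen3_encoder_model or qwen3_source_model
    if encoder is None:
        raise ValueError("No Qwen3 Encoder source provided.")
    return vae, encoder

print(resolve_sources("flux-vae", None, "z-image-diffusers"))  # ('flux-vae', 'z-image-diffusers')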

View File

@@ -0,0 +1,110 @@
import torch
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
InputField,
ZImageConditioningField,
)
from invokeai.app.invocations.primitives import ZImageConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
ConditioningFieldData,
ZImageConditioningInfo,
)
@invocation(
"z_image_seed_variance_enhancer",
title="Seed Variance Enhancer - Z-Image",
tags=["conditioning", "z-image", "variance", "seed"],
category="conditioning",
version="1.0.0",
classification=Classification.Prototype,
)
class ZImageSeedVarianceEnhancerInvocation(BaseInvocation):
"""Adds seed-based noise to Z-Image conditioning to increase variance between seeds.
Z-Image-Turbo can produce relatively similar images with different seeds,
making it harder to explore variations of a prompt. This node injects
seed-based noise into the text embeddings to increase visual variation
while keeping results reproducible for a given seed.
The noise strength is auto-calibrated relative to the embedding's standard
deviation, ensuring consistent results across different prompts.
"""
conditioning: ZImageConditioningField = InputField(
description=FieldDescriptions.cond,
input=Input.Connection,
title="Conditioning",
)
seed: int = InputField(
default=0,
ge=0,
description="Seed for reproducible noise generation. Different seeds produce different noise patterns.",
)
strength: float = InputField(
default=0.1,
ge=0.0,
le=2.0,
description="Noise strength as multiplier of embedding std. 0=off, 0.1=subtle, 0.5=strong.",
)
randomize_percent: float = InputField(
default=50.0,
ge=1.0,
le=100.0,
description="Percentage of embedding values to add noise to (1-100). Lower values create more selective noise patterns.",
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ZImageConditioningOutput:
# Load conditioning data
cond_data = context.conditioning.load(self.conditioning.conditioning_name)
assert len(cond_data.conditionings) == 1, "Expected exactly one conditioning tensor"
z_image_conditioning = cond_data.conditionings[0]
assert isinstance(z_image_conditioning, ZImageConditioningInfo), "Expected ZImageConditioningInfo"
# Early return if strength is zero (no modification needed)
if self.strength == 0:
return ZImageConditioningOutput(conditioning=self.conditioning)
# Clone embeddings to avoid modifying the original
prompt_embeds = z_image_conditioning.prompt_embeds.clone()
# Calculate actual noise strength based on embedding statistics
# This auto-calibration ensures consistent results across different prompts
embed_std = torch.std(prompt_embeds).item()
actual_strength = self.strength * embed_std
# Generate deterministic noise using the seed
generator = torch.Generator(device=prompt_embeds.device)
generator.manual_seed(self.seed)
noise = torch.rand(
prompt_embeds.shape, generator=generator, device=prompt_embeds.device, dtype=prompt_embeds.dtype
)
noise = noise * 2 - 1 # Scale to [-1, 1)
noise = noise * actual_strength
# Create selective mask for noise application
generator.manual_seed(self.seed + 1)
noise_mask = torch.bernoulli(
torch.ones_like(prompt_embeds) * (self.randomize_percent / 100.0),
generator=generator,
).bool()
# Apply noise only to masked positions
prompt_embeds = prompt_embeds + (noise * noise_mask)
# Save modified conditioning
new_conditioning = ZImageConditioningInfo(prompt_embeds=prompt_embeds)
conditioning_data = ConditioningFieldData(conditionings=[new_conditioning])
conditioning_name = context.conditioning.save(conditioning_data)
return ZImageConditioningOutput(
conditioning=ZImageConditioningField(
conditioning_name=conditioning_name,
mask=self.conditioning.mask,
)
)
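
A self-contained restatement of the noise injection above, handy for checking the reproducibility claim: the same seed always yields the same enhanced embeddings, and a different seed yields different ones. Tensor sizes are arbitrary.

import torch

def enhance(embeds: torch.Tensor, seed: int, strength: float = 0.1, randomize_percent: float = 50.0) -> torch.Tensor:
    actual_strength = strength * torch.std(embeds).item()  # auto-calibrate to the embedding scale
    gen = torch.Generator().manual_seed(seed)
    noise = (torch.rand(embeds.shape, generator=gen, dtype=embeds.dtype) * 2 - 1) * actual_strength
    gen.manual_seed(seed + 1)
    mask = torch.bernoulli(torch.full_like(embeds, randomize_percent / 100.0), generator=gen).bool()
    return embeds + noise * mask

embeds = torch.randn(512, 2048)
print(torch.equal(enhance(embeds, seed=123), enhance(embeds, seed=123)))  # True
print(torch.equal(enhance(embeds, seed=123), enhance(embeds, seed=124)))  # False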

View File

@@ -0,0 +1,197 @@
from contextlib import ExitStack
from typing import Iterator, Optional, Tuple
import torch
from transformers import PreTrainedModel, PreTrainedTokenizerBase
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
InputField,
TensorField,
UIComponent,
ZImageConditioningField,
)
from invokeai.app.invocations.model import Qwen3EncoderField
from invokeai.app.invocations.primitives import ZImageConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.patches.layer_patcher import LayerPatcher
from invokeai.backend.patches.lora_conversions.z_image_lora_constants import Z_IMAGE_LORA_QWEN3_PREFIX
from invokeai.backend.patches.model_patch_raw import ModelPatchRaw
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
ConditioningFieldData,
ZImageConditioningInfo,
)
from invokeai.backend.util.devices import TorchDevice
# Z-Image max sequence length based on diffusers default
Z_IMAGE_MAX_SEQ_LEN = 512
@invocation(
"z_image_text_encoder",
title="Prompt - Z-Image",
tags=["prompt", "conditioning", "z-image"],
category="conditioning",
version="1.1.0",
classification=Classification.Prototype,
)
class ZImageTextEncoderInvocation(BaseInvocation):
"""Encodes and preps a prompt for a Z-Image image.
Supports regional prompting by connecting a mask input.
"""
prompt: str = InputField(description="Text prompt to encode.", ui_component=UIComponent.Textarea)
qwen3_encoder: Qwen3EncoderField = InputField(
title="Qwen3 Encoder",
description=FieldDescriptions.qwen3_encoder,
input=Input.Connection,
)
mask: Optional[TensorField] = InputField(
default=None,
description="A mask defining the region that this conditioning prompt applies to.",
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ZImageConditioningOutput:
prompt_embeds = self._encode_prompt(context, max_seq_len=Z_IMAGE_MAX_SEQ_LEN)
conditioning_data = ConditioningFieldData(conditionings=[ZImageConditioningInfo(prompt_embeds=prompt_embeds)])
conditioning_name = context.conditioning.save(conditioning_data)
return ZImageConditioningOutput(
conditioning=ZImageConditioningField(conditioning_name=conditioning_name, mask=self.mask)
)
def _encode_prompt(self, context: InvocationContext, max_seq_len: int) -> torch.Tensor:
"""Encode prompt using Qwen3 text encoder.
Based on the ZImagePipeline._encode_prompt method from diffusers.
"""
prompt = self.prompt
device = TorchDevice.choose_torch_device()
text_encoder_info = context.models.load(self.qwen3_encoder.text_encoder)
tokenizer_info = context.models.load(self.qwen3_encoder.tokenizer)
with ExitStack() as exit_stack:
(_, text_encoder) = exit_stack.enter_context(text_encoder_info.model_on_device())
(_, tokenizer) = exit_stack.enter_context(tokenizer_info.model_on_device())
# Apply LoRA models to the text encoder
lora_dtype = TorchDevice.choose_bfloat16_safe_dtype(device)
exit_stack.enter_context(
LayerPatcher.apply_smart_model_patches(
model=text_encoder,
patches=self._lora_iterator(context),
prefix=Z_IMAGE_LORA_QWEN3_PREFIX,
dtype=lora_dtype,
)
)
context.util.signal_progress("Running Qwen3 text encoder")
if not isinstance(text_encoder, PreTrainedModel):
raise TypeError(
f"Expected PreTrainedModel for text encoder, got {type(text_encoder).__name__}. "
"The Qwen3 encoder model may be corrupted or incompatible."
)
if not isinstance(tokenizer, PreTrainedTokenizerBase):
raise TypeError(
f"Expected PreTrainedTokenizerBase for tokenizer, got {type(tokenizer).__name__}. "
"The Qwen3 tokenizer may be corrupted or incompatible."
)
# Apply chat template similar to diffusers ZImagePipeline
# The chat template formats the prompt for the Qwen3 model
try:
prompt_formatted = tokenizer.apply_chat_template(
[{"role": "user", "content": prompt}],
tokenize=False,
add_generation_prompt=True,
enable_thinking=True,
)
except (AttributeError, TypeError) as e:
# Fallback if tokenizer doesn't support apply_chat_template or enable_thinking
context.logger.warning(f"Chat template failed ({e}), using raw prompt.")
prompt_formatted = prompt
# Tokenize the formatted prompt
text_inputs = tokenizer(
prompt_formatted,
padding="max_length",
max_length=max_seq_len,
truncation=True,
return_attention_mask=True,
return_tensors="pt",
)
text_input_ids = text_inputs.input_ids
attention_mask = text_inputs.attention_mask
if not isinstance(text_input_ids, torch.Tensor):
raise TypeError(
f"Expected torch.Tensor for input_ids, got {type(text_input_ids).__name__}. "
"Tokenizer returned unexpected type."
)
if not isinstance(attention_mask, torch.Tensor):
raise TypeError(
f"Expected torch.Tensor for attention_mask, got {type(attention_mask).__name__}. "
"Tokenizer returned unexpected type."
)
# Check for truncation
untruncated_ids = tokenizer(prompt_formatted, padding="longest", return_tensors="pt").input_ids
if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
text_input_ids, untruncated_ids
):
removed_text = tokenizer.batch_decode(untruncated_ids[:, max_seq_len - 1 : -1])
context.logger.warning(
f"The following part of your input was truncated because `max_sequence_length` is set to "
f"{max_seq_len} tokens: {removed_text}"
)
# Get hidden states from the text encoder
# Use the second-to-last hidden state like diffusers does
prompt_mask = attention_mask.to(device).bool()
outputs = text_encoder(
text_input_ids.to(device),
attention_mask=prompt_mask,
output_hidden_states=True,
)
# Validate hidden_states output
if not hasattr(outputs, "hidden_states") or outputs.hidden_states is None:
raise RuntimeError(
"Text encoder did not return hidden_states. "
"Ensure output_hidden_states=True is supported by this model."
)
if len(outputs.hidden_states) < 2:
raise RuntimeError(
f"Expected at least 2 hidden states from text encoder, got {len(outputs.hidden_states)}. "
"This may indicate an incompatible model or configuration."
)
prompt_embeds = outputs.hidden_states[-2]
# Z-Image expects a 2D tensor [seq_len, hidden_dim] with only valid tokens
# Based on diffusers ZImagePipeline implementation:
# embeddings_list.append(prompt_embeds[i][prompt_masks[i]])
# Since batch_size=1, we take the first item and filter by mask
prompt_embeds = prompt_embeds[0][prompt_mask[0]]
if not isinstance(prompt_embeds, torch.Tensor):
raise TypeError(
f"Expected torch.Tensor for prompt embeddings, got {type(prompt_embeds).__name__}. "
"Text encoder returned unexpected type."
)
return prompt_embeds
def _lora_iterator(self, context: InvocationContext) -> Iterator[Tuple[ModelPatchRaw, float]]:
"""Iterate over LoRA models to apply to the Qwen3 text encoder."""
for lora in self.qwen3_encoder.loras:
lora_info = context.models.load(lora.lora)
if not isinstance(lora_info.model, ModelPatchRaw):
raise TypeError(
f"Expected ModelPatchRaw for LoRA '{lora.lora.key}', got {type(lora_info.model).__name__}. "
"The LoRA model may be corrupted or incompatible."
)
yield (lora_info.model, lora.weight)
del lora_info
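The key step in _encode_prompt above is the final reshaping: the encoder's second-to-last hidden state is a padded [batch, max_seq_len, hidden_dim] tensor, while Z-Image expects a 2D [num_valid_tokens, hidden_dim] tensor containing only the non-padding positions. A minimal sketch of that step with dummy tensors (shapes and values are illustrative, not taken from the node above):

import torch

# Illustrative shapes only; the real values come from the Qwen3 tokenizer and encoder.
batch, max_seq_len, hidden_dim = 1, 512, 2048
hidden_states = torch.randn(batch, max_seq_len, hidden_dim)  # stand-in for outputs.hidden_states[-2]

# Attention mask: True for real tokens, False for padding (here, 37 "real" tokens).
num_valid = 37
prompt_mask = torch.zeros(batch, max_seq_len, dtype=torch.bool)
prompt_mask[0, :num_valid] = True

# Boolean indexing drops the padded positions, yielding a [num_valid, hidden_dim] tensor,
# matching the diffusers ZImagePipeline behaviour referenced in the comments above.
prompt_embeds = hidden_states[0][prompt_mask[0]]
assert prompt_embeds.shape == (num_valid, hidden_dim)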

View File

@@ -49,3 +49,11 @@ class BoardImageRecordStorageBase(ABC):
) -> int:
"""Gets the number of images for a board."""
pass
@abstractmethod
def get_asset_count_for_board(
self,
board_id: str,
) -> int:
"""Gets the number of assets for a board."""
pass

View File

@@ -3,6 +3,8 @@ from typing import Optional, cast
from invokeai.app.services.board_image_records.board_image_records_base import BoardImageRecordStorageBase
from invokeai.app.services.image_records.image_records_common import (
ASSETS_CATEGORIES,
IMAGE_CATEGORIES,
ImageCategory,
ImageRecord,
deserialize_image_record,
@@ -151,15 +153,38 @@ class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
def get_image_count_for_board(self, board_id: str) -> int:
with self._db.transaction() as cursor:
# Convert the enum values to unique list of strings
category_strings = [c.value for c in set(IMAGE_CATEGORIES)]
# Create the correct length of placeholders
placeholders = ",".join("?" * len(category_strings))
cursor.execute(
"""--sql
f"""--sql
SELECT COUNT(*)
FROM board_images
INNER JOIN images ON board_images.image_name = images.image_name
WHERE images.is_intermediate = FALSE
WHERE images.is_intermediate = FALSE AND images.image_category IN ( {placeholders} )
AND board_images.board_id = ?;
""",
(board_id,),
(*category_strings, board_id),
)
count = cast(int, cursor.fetchone()[0])
return count
def get_asset_count_for_board(self, board_id: str) -> int:
with self._db.transaction() as cursor:
# Convert the enum values to unique list of strings
category_strings = [c.value for c in set(ASSETS_CATEGORIES)]
# Create the correct length of placeholders
placeholders = ",".join("?" * len(category_strings))
cursor.execute(
f"""--sql
SELECT COUNT(*)
FROM board_images
INNER JOIN images ON board_images.image_name = images.image_name
WHERE images.is_intermediate = FALSE AND images.image_category IN ( {placeholders} )
AND board_images.board_id = ?;
""",
(*category_strings, board_id),
)
count = cast(int, cursor.fetchone()[0])
return count
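Both count queries above rely on the same pattern: one ? placeholder is generated per category so the IN (...) clause stays fully parameterised, even though the f-string interpolates the placeholder text itself. A standalone sketch of the pattern with sqlite3 (the table and category names here are made up for illustration):

import sqlite3

categories = ["general", "control", "mask"]  # hypothetical category values
placeholders = ",".join("?" * len(categories))  # -> "?,?,?"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (name TEXT, category TEXT, board_id TEXT)")
conn.executemany(
    "INSERT INTO images VALUES (?, ?, ?)",
    [("a.png", "general", "b1"), ("b.png", "mask", "b1"), ("c.png", "other", "b1")],
)

# Only the placeholder string is interpolated; every value is still a bound parameter.
count = conn.execute(
    f"SELECT COUNT(*) FROM images WHERE category IN ({placeholders}) AND board_id = ?",
    (*categories, "b1"),
).fetchone()[0]
print(count)  # 2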

View File

@@ -26,8 +26,6 @@ class BoardRecord(BaseModelExcludeNull):
"""The name of the cover image of the board."""
archived: bool = Field(description="Whether or not the board is archived.")
"""Whether or not the board is archived."""
is_private: Optional[bool] = Field(default=None, description="Whether the board is private.")
"""Whether the board is private."""
def deserialize_board_record(board_dict: dict) -> BoardRecord:
@@ -42,7 +40,6 @@ def deserialize_board_record(board_dict: dict) -> BoardRecord:
updated_at = board_dict.get("updated_at", get_iso_timestamp())
deleted_at = board_dict.get("deleted_at", get_iso_timestamp())
archived = board_dict.get("archived", False)
is_private = board_dict.get("is_private", False)
return BoardRecord(
board_id=board_id,
@@ -52,7 +49,6 @@ def deserialize_board_record(board_dict: dict) -> BoardRecord:
updated_at=updated_at,
deleted_at=deleted_at,
archived=archived,
is_private=is_private,
)

View File

@@ -12,12 +12,17 @@ class BoardDTO(BoardRecord):
"""The URL of the thumbnail of the most recent image in the board."""
image_count: int = Field(description="The number of images in the board.")
"""The number of images in the board."""
asset_count: int = Field(description="The number of assets in the board.")
"""The number of assets in the board."""
def board_record_to_dto(board_record: BoardRecord, cover_image_name: Optional[str], image_count: int) -> BoardDTO:
def board_record_to_dto(
board_record: BoardRecord, cover_image_name: Optional[str], image_count: int, asset_count: int
) -> BoardDTO:
"""Converts a board record to a board DTO."""
return BoardDTO(
**board_record.model_dump(exclude={"cover_image_name"}),
cover_image_name=cover_image_name,
image_count=image_count,
asset_count=asset_count,
)
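The DTO conversion above uses pydantic's model_dump(exclude=...) so the record's own cover_image_name is dropped and the freshly resolved one (plus the new counts) is supplied explicitly. A simplified illustration with stand-in models (these are not the real BoardRecord/BoardDTO classes):

from typing import Optional
from pydantic import BaseModel

class BoardRecordLike(BaseModel):
    """Stand-in for BoardRecord, reduced to the fields needed here."""
    board_id: str
    board_name: str
    cover_image_name: Optional[str] = None

class BoardDTOLike(BoardRecordLike):
    """Stand-in for BoardDTO: the record plus derived display fields."""
    image_count: int
    asset_count: int

def to_dto(record: BoardRecordLike, cover_image_name: Optional[str], image_count: int, asset_count: int) -> BoardDTOLike:
    # Exclude the record's cover_image_name so the explicitly resolved value wins.
    return BoardDTOLike(
        **record.model_dump(exclude={"cover_image_name"}),
        cover_image_name=cover_image_name,
        image_count=image_count,
        asset_count=asset_count,
    )

print(to_dto(BoardRecordLike(board_id="b1", board_name="Demo"), "cover.png", image_count=3, asset_count=1))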

View File

@@ -17,7 +17,7 @@ class BoardService(BoardServiceABC):
board_name: str,
) -> BoardDTO:
board_record = self.__invoker.services.board_records.save(board_name)
return board_record_to_dto(board_record, None, 0)
return board_record_to_dto(board_record, None, 0, 0)
def get_dto(self, board_id: str) -> BoardDTO:
board_record = self.__invoker.services.board_records.get(board_id)
@@ -27,7 +27,8 @@ class BoardService(BoardServiceABC):
else:
cover_image_name = None
image_count = self.__invoker.services.board_image_records.get_image_count_for_board(board_id)
return board_record_to_dto(board_record, cover_image_name, image_count)
asset_count = self.__invoker.services.board_image_records.get_asset_count_for_board(board_id)
return board_record_to_dto(board_record, cover_image_name, image_count, asset_count)
def update(
self,
@@ -42,7 +43,8 @@ class BoardService(BoardServiceABC):
cover_image_name = None
image_count = self.__invoker.services.board_image_records.get_image_count_for_board(board_id)
return board_record_to_dto(board_record, cover_image_name, image_count)
asset_count = self.__invoker.services.board_image_records.get_asset_count_for_board(board_id)
return board_record_to_dto(board_record, cover_image_name, image_count, asset_count)
def delete(self, board_id: str) -> None:
self.__invoker.services.board_records.delete(board_id)
@@ -67,7 +69,8 @@ class BoardService(BoardServiceABC):
cover_image_name = None
image_count = self.__invoker.services.board_image_records.get_image_count_for_board(r.board_id)
board_dtos.append(board_record_to_dto(r, cover_image_name, image_count))
asset_count = self.__invoker.services.board_image_records.get_asset_count_for_board(r.board_id)
board_dtos.append(board_record_to_dto(r, cover_image_name, image_count, asset_count))
return OffsetPaginatedResults[BoardDTO](items=board_dtos, offset=offset, limit=limit, total=len(board_dtos))
@@ -84,6 +87,7 @@ class BoardService(BoardServiceABC):
cover_image_name = None
image_count = self.__invoker.services.board_image_records.get_image_count_for_board(r.board_id)
board_dtos.append(board_record_to_dto(r, cover_image_name, image_count))
asset_count = self.__invoker.services.board_image_records.get_asset_count_for_board(r.board_id)
board_dtos.append(board_record_to_dto(r, cover_image_name, image_count, asset_count))
return board_dtos

View File

@@ -150,4 +150,15 @@ class BulkDownloadService(BulkDownloadBase):
def _is_valid_path(self, path: Union[str, Path]) -> bool:
"""Validates the path given for a bulk download."""
path = path if isinstance(path, Path) else Path(path)
return path.exists()
# Resolve the path to handle any path traversal attempts (e.g., ../)
resolved_path = path.resolve()
# The path may not traverse out of the bulk downloads folder or its subfolders
does_not_traverse = resolved_path.parent == self._bulk_downloads_folder.resolve()
# The path must exist and be a .zip file
does_exist = resolved_path.exists()
is_zip_file = resolved_path.suffix == ".zip"
return does_exist and is_zip_file and does_not_traverse
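The hardened _is_valid_path resolves the candidate path before comparing, so any ../ segments collapse and a traversal attempt ends up with a parent other than the bulk downloads folder. A self-contained sketch of the same check, using a temporary directory in place of the real downloads folder:

import tempfile
from pathlib import Path

def is_valid_download_path(path: str | Path, downloads_folder: Path) -> bool:
    """Illustrative restatement of the check: resolve first, then compare parents."""
    resolved = Path(path).resolve()
    # resolve() collapses any "../" segments, so a traversal attempt ends up with a
    # parent other than the downloads folder and is rejected before the file check.
    inside_folder = resolved.parent == downloads_folder.resolve()
    return resolved.exists() and resolved.suffix == ".zip" and inside_folder

with tempfile.TemporaryDirectory() as tmp:
    downloads = Path(tmp) / "bulk_downloads"  # hypothetical folder layout
    downloads.mkdir()
    (downloads / "export.zip").touch()

    print(is_valid_download_path(downloads / "export.zip", downloads))          # True
    print(is_valid_download_path(downloads / ".." / "outside.zip", downloads))  # False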

View File

@@ -0,0 +1,42 @@
from abc import ABC, abstractmethod
class ClientStatePersistenceABC(ABC):
"""
Base class for client persistence implementations.
This class defines the interface for persisting client data.
"""
@abstractmethod
def set_by_key(self, queue_id: str, key: str, value: str) -> str:
"""
Set a key-value pair for the client.
Args:
key (str): The key to set.
value (str): The value to set for the key.
Returns:
str: The value that was set.
"""
pass
@abstractmethod
def get_by_key(self, queue_id: str, key: str) -> str | None:
"""
Get the value for a specific key of the client.
Args:
key (str): The key to retrieve the value for.
Returns:
str | None: The value associated with the key, or None if the key does not exist.
"""
pass
@abstractmethod
def delete(self, queue_id: str) -> None:
"""
Delete all client state.
"""
pass

View File

@@ -0,0 +1,65 @@
import json
from invokeai.app.services.client_state_persistence.client_state_persistence_base import ClientStatePersistenceABC
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
class ClientStatePersistenceSqlite(ClientStatePersistenceABC):
"""
SQLite implementation of client state persistence.
Stores the full client state as a single JSON blob in the client_state table.
"""
def __init__(self, db: SqliteDatabase) -> None:
super().__init__()
self._db = db
self._default_row_id = 1
def start(self, invoker: Invoker) -> None:
self._invoker = invoker
def _get(self) -> dict[str, str] | None:
with self._db.transaction() as cursor:
cursor.execute(
f"""
SELECT data FROM client_state
WHERE id = {self._default_row_id}
"""
)
row = cursor.fetchone()
if row is None:
return None
return json.loads(row[0])
def set_by_key(self, queue_id: str, key: str, value: str) -> str:
state = self._get() or {}
state.update({key: value})
with self._db.transaction() as cursor:
cursor.execute(
f"""
INSERT INTO client_state (id, data)
VALUES ({self._default_row_id}, ?)
ON CONFLICT(id) DO UPDATE
SET data = excluded.data;
""",
(json.dumps(state),),
)
return value
def get_by_key(self, queue_id: str, key: str) -> str | None:
state = self._get()
if state is None:
return None
return state.get(key, None)
def delete(self, queue_id: str) -> None:
with self._db.transaction() as cursor:
cursor.execute(
f"""
DELETE FROM client_state
WHERE id = {self._default_row_id}
"""
)
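The SQLite implementation keeps the entire client state as one JSON blob in a single fixed row, using INSERT ... ON CONFLICT(id) DO UPDATE so the first write inserts the row and every later write overwrites it. A minimal sketch of that upsert pattern (schema loosely copied from the code above; requires SQLite 3.24+ for ON CONFLICT):

import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client_state (id INTEGER PRIMARY KEY, data TEXT)")

def set_state(state: dict[str, str]) -> None:
    # Row 1 is the only row; the upsert inserts it once, then keeps overwriting it.
    conn.execute(
        """
        INSERT INTO client_state (id, data)
        VALUES (1, ?)
        ON CONFLICT(id) DO UPDATE SET data = excluded.data;
        """,
        (json.dumps(state),),
    )

set_state({"theme": "dark"})
set_state({"theme": "dark", "locale": "it"})
row = conn.execute("SELECT data FROM client_state WHERE id = 1").fetchone()
print(json.loads(row[0]))  # {'theme': 'dark', 'locale': 'it'}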

View File

@@ -85,6 +85,7 @@ class InvokeAIAppConfig(BaseSettings):
max_cache_ram_gb: The maximum amount of CPU RAM to use for model caching in GB. If unset, the limit will be configured based on the available RAM. In most cases, it is recommended to leave this unset.
max_cache_vram_gb: The amount of VRAM to use for model caching in GB. If unset, the limit will be configured based on the available VRAM and the device_working_mem_gb. In most cases, it is recommended to leave this unset.
log_memory_usage: If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.
model_cache_keep_alive_min: How long to keep models in cache after last use, in minutes. A value of 0 (the default) means models are kept in cache indefinitely. If no model generations occur within the timeout period, the model cache is cleared using the same logic as the 'Clear Model Cache' button.
device_working_mem_gb: The amount of working memory to keep available on the compute device (in GB). Has no effect if running on CPU. If you are experiencing OOM errors, try increasing this value.
enable_partial_loading: Enable partial loading of models. This enables models to run with reduced VRAM requirements (at the cost of slower speed) by streaming the model from RAM to VRAM as it's used. In some edge cases, partial loading can cause models to run more slowly if they were previously being fully loaded into VRAM.
keep_ram_copy_of_weights: Whether to keep a full RAM copy of a model's weights when the model is loaded in VRAM. Keeping a RAM copy increases average RAM usage, but speeds up model switching and LoRA patching (assuming there is sufficient RAM). Set this to False if RAM pressure is consistently high.
@@ -107,6 +108,8 @@ class InvokeAIAppConfig(BaseSettings):
hashing_algorithm: Model hashing algorithm for model installs. 'blake3_multi' is best for SSDs. 'blake3_single' is best for spinning disk HDDs. 'random' disables hashing, instead assigning a UUID to models. Useful when using a memory db to reduce model installation time, or if you don't care about storing stable hashes for models. Alternatively, any other hashlib algorithm is accepted, though these are not nearly as performant as blake3.<br>Valid values: `blake3_multi`, `blake3_single`, `random`, `md5`, `sha1`, `sha224`, `sha256`, `sha384`, `sha512`, `blake2b`, `blake2s`, `sha3_224`, `sha3_256`, `sha3_384`, `sha3_512`, `shake_128`, `shake_256`
remote_api_tokens: List of regular expression and token pairs used when downloading models from URLs. The download URL is tested against the regex, and if it matches, the token is provided as a Bearer token.
scan_models_on_startup: Scan the models directory on startup, registering orphaned models. This is typically only used in conjunction with `use_memory_db` for testing purposes.
unsafe_disable_picklescan: UNSAFE. Disable the picklescan security check during model installation. Recommended only for development and testing purposes. This will allow arbitrary code execution during model installation, so should never be used in production.
allow_unknown_models: Allow installation of models that we are unable to identify. If enabled, models will be marked as `unknown` in the database, and will not have any metadata associated with them. If disabled, unknown models will be rejected during installation.
"""
_root: Optional[Path] = PrivateAttr(default=None)
@@ -163,9 +166,10 @@ class InvokeAIAppConfig(BaseSettings):
max_cache_ram_gb: Optional[float] = Field(default=None, gt=0, description="The maximum amount of CPU RAM to use for model caching in GB. If unset, the limit will be configured based on the available RAM. In most cases, it is recommended to leave this unset.")
max_cache_vram_gb: Optional[float] = Field(default=None, ge=0, description="The amount of VRAM to use for model caching in GB. If unset, the limit will be configured based on the available VRAM and the device_working_mem_gb. In most cases, it is recommended to leave this unset.")
log_memory_usage: bool = Field(default=False, description="If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.")
model_cache_keep_alive_min: float = Field(default=0, ge=0, description="How long to keep models in cache after last use, in minutes. A value of 0 (the default) means models are kept in cache indefinitely. If no model generations occur within the timeout period, the model cache is cleared using the same logic as the 'Clear Model Cache' button.")
device_working_mem_gb: float = Field(default=3, description="The amount of working memory to keep available on the compute device (in GB). Has no effect if running on CPU. If you are experiencing OOM errors, try increasing this value.")
enable_partial_loading: bool = Field(default=False, description="Enable partial loading of models. This enables models to run with reduced VRAM requirements (at the cost of slower speed) by streaming the model from RAM to VRAM as it's used. In some edge cases, partial loading can cause models to run more slowly if they were previously being fully loaded into VRAM.")
keep_ram_copy_of_weights: bool = Field(default=True, description="Whether to keep a full RAM copy of a model's weights when the model is loaded in VRAM. Keeping a RAM copy increases average RAM usage, but speeds up model switching and LoRA patching (assuming there is sufficient RAM). Set this to False if RAM pressure is consistently high.")
# Deprecated CACHE configs
ram: Optional[float] = Field(default=None, gt=0, description="DEPRECATED: This setting is no longer used. It has been replaced by `max_cache_ram_gb`, but most users will not need to use this config since automatic cache size limits should work well in most cases. This config setting will be removed once the new model cache behavior is stable.")
vram: Optional[float] = Field(default=None, ge=0, description="DEPRECATED: This setting is no longer used. It has been replaced by `max_cache_vram_gb`, but most users will not need to use this config since automatic cache size limits should work well in most cases. This config setting will be removed once the new model cache behavior is stable.")
@@ -196,6 +200,8 @@ class InvokeAIAppConfig(BaseSettings):
hashing_algorithm: HASHING_ALGORITHMS = Field(default="blake3_single", description="Model hashing algorithm for model installs. 'blake3_multi' is best for SSDs. 'blake3_single' is best for spinning disk HDDs. 'random' disables hashing, instead assigning a UUID to models. Useful when using a memory db to reduce model installation time, or if you don't care about storing stable hashes for models. Alternatively, any other hashlib algorithm is accepted, though these are not nearly as performant as blake3.")
remote_api_tokens: Optional[list[URLRegexTokenPair]] = Field(default=None, description="List of regular expression and token pairs used when downloading models from URLs. The download URL is tested against the regex, and if it matches, the token is provided as a Bearer token.")
scan_models_on_startup: bool = Field(default=False, description="Scan the models directory on startup, registering orphaned models. This is typically only used in conjunction with `use_memory_db` for testing purposes.")
unsafe_disable_picklescan: bool = Field(default=False, description="UNSAFE. Disable the picklescan security check during model installation. Recommended only for development and testing purposes. This will allow arbitrary code execution during model installation, so should never be used in production.")
allow_unknown_models: bool = Field(default=True, description="Allow installation of models that we are unable to identify. If enabled, models will be marked as `unknown` in the database, and will not have any metadata associated with them. If disabled, unknown models will be rejected during installation.")
# fmt: on
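The new model_cache_keep_alive_min setting turns the model cache into an idle-timeout cache: if no generation touches it within the configured window, it is cleared the same way the 'Clear Model Cache' button does, while 0 keeps the current keep-forever behaviour. A rough sketch of that decision logic (the function and parameter names are hypothetical, not the actual cache implementation):

import time

def should_clear_cache(last_use_ts: float, keep_alive_min: float, now: float | None = None) -> bool:
    """Hypothetical helper: True when the keep-alive window has elapsed."""
    if keep_alive_min <= 0:
        # 0 (the default) means models stay cached indefinitely.
        return False
    now = time.time() if now is None else now
    return (now - last_use_ts) >= keep_alive_min * 60

# Example: last generation 10 minutes ago with a 5-minute keep-alive -> clear the cache.
print(should_clear_cache(last_use_ts=time.time() - 600, keep_alive_min=5.0))  # True
print(should_clear_cache(last_use_ts=time.time() - 600, keep_alive_min=0.0))  # False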

View File

@@ -44,8 +44,8 @@ if TYPE_CHECKING:
SessionQueueItem,
SessionQueueStatus,
)
from invokeai.backend.model_manager import SubModelType
from invokeai.backend.model_manager.config import AnyModelConfig
from invokeai.backend.model_manager.configs.factory import AnyModelConfig
from invokeai.backend.model_manager.taxonomy import SubModelType
class EventServiceBase:

View File

@@ -16,8 +16,8 @@ from invokeai.app.services.session_queue.session_queue_common import (
)
from invokeai.app.services.shared.graph import AnyInvocation, AnyInvocationOutput
from invokeai.app.util.misc import get_timestamp
from invokeai.backend.model_manager import SubModelType
from invokeai.backend.model_manager.config import AnyModelConfig
from invokeai.backend.model_manager.configs.factory import AnyModelConfig
from invokeai.backend.model_manager.taxonomy import SubModelType
if TYPE_CHECKING:
from invokeai.app.services.download.download_base import DownloadJob
@@ -195,8 +195,6 @@ class InvocationErrorEvent(InvocationEventBase):
error_type: str = Field(description="The error type")
error_message: str = Field(description="The error message")
error_traceback: str = Field(description="The error traceback")
user_id: Optional[str] = Field(default=None, description="The ID of the user who created the invocation")
project_id: Optional[str] = Field(default=None, description="The ID of the user who created the invocation")
@classmethod
def build(
@@ -219,8 +217,6 @@ class InvocationErrorEvent(InvocationEventBase):
error_type=error_type,
error_message=error_message,
error_traceback=error_traceback,
user_id=getattr(queue_item, "user_id", None),
project_id=getattr(queue_item, "project_id", None),
)
@@ -234,14 +230,13 @@ class QueueItemStatusChangedEvent(QueueItemEventBase):
error_type: Optional[str] = Field(default=None, description="The error type, if any")
error_message: Optional[str] = Field(default=None, description="The error message, if any")
error_traceback: Optional[str] = Field(default=None, description="The error traceback, if any")
created_at: Optional[str] = Field(default=None, description="The timestamp when the queue item was created")
updated_at: Optional[str] = Field(default=None, description="The timestamp when the queue item was last updated")
created_at: str = Field(description="The timestamp when the queue item was created")
updated_at: str = Field(description="The timestamp when the queue item was last updated")
started_at: Optional[str] = Field(default=None, description="The timestamp when the queue item was started")
completed_at: Optional[str] = Field(default=None, description="The timestamp when the queue item was completed")
batch_status: BatchStatus = Field(description="The status of the batch")
queue_status: SessionQueueStatus = Field(description="The status of the queue")
session_id: str = Field(description="The ID of the session (aka graph execution state)")
credits: Optional[float] = Field(default=None, description="The total credits used for this queue item")
@classmethod
def build(
@@ -258,13 +253,12 @@ class QueueItemStatusChangedEvent(QueueItemEventBase):
error_type=queue_item.error_type,
error_message=queue_item.error_message,
error_traceback=queue_item.error_traceback,
created_at=str(queue_item.created_at) if queue_item.created_at else None,
updated_at=str(queue_item.updated_at) if queue_item.updated_at else None,
created_at=str(queue_item.created_at),
updated_at=str(queue_item.updated_at),
started_at=str(queue_item.started_at) if queue_item.started_at else None,
completed_at=str(queue_item.completed_at) if queue_item.completed_at else None,
batch_status=batch_status,
queue_status=queue_status,
credits=queue_item.credits,
)
@@ -546,11 +540,18 @@ class ModelInstallCompleteEvent(ModelEventBase):
source: ModelSource = Field(description="Source of the model; local path, repo_id or url")
key: str = Field(description="Model config record key")
total_bytes: Optional[int] = Field(description="Size of the model (may be None for installation of a local path)")
config: AnyModelConfig = Field(description="The installed model's config")
@classmethod
def build(cls, job: "ModelInstallJob") -> "ModelInstallCompleteEvent":
assert job.config_out is not None
return cls(id=job.id, source=job.source, key=(job.config_out.key), total_bytes=job.total_bytes)
return cls(
id=job.id,
source=job.source,
key=(job.config_out.key),
total_bytes=job.total_bytes,
config=job.config_out,
)
@payload_schema.register

Some files were not shown because too many files have changed in this diff.