* WIP: feat(flux2): add FLUX.2 Klein model support
- Add new invocation nodes for FLUX.2:
  - flux2_denoise: denoising invocation for FLUX.2
  - flux2_klein_model_loader: model loader for the Klein architecture
  - flux2_klein_text_encoder: text encoder for Qwen3-based encoding
  - flux2_vae_decode: VAE decoder for FLUX.2
- Add backend support:
  - New flux2 module with denoise and sampling utilities
  - Extended model manager configs for FLUX.2 models
  - Updated model loaders for the Klein architecture
- Update frontend:
  - Extended graph builder for FLUX.2 support
  - Added FLUX.2 model types and configurations
  - Updated readiness checks and UI components
* fix(flux2): correct VAE decode with proper BN denormalization
FLUX.2 VAE uses Batch Normalization in the patchified latent space
(128 channels). The decode must:
1. Patchify latents from (B, 32, H, W) to (B, 128, H/2, W/2)
2. Apply BN denormalization using running_mean/running_var
3. Unpatchify back to (B, 32, H, W) for VAE decode
Also fixed image normalization from [-1, 1] to [0, 255].
This fixes washed-out colors in generated FLUX.2 Klein images.
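A minimal sketch of this decode path (shapes from the description above; whether patchify is exactly a pixel-unshuffle, and the BN buffer names, are assumptions):
```python
import torch
import torch.nn.functional as F

def denormalize_latents(latents: torch.Tensor, bn) -> torch.Tensor:
    """latents: (B, 32, H, W); bn supplies running_mean, running_var, eps."""
    # 1. patchify: (B, 32, H, W) -> (B, 128, H/2, W/2)
    x = F.pixel_unshuffle(latents, downscale_factor=2)
    # 2. BN denormalization per channel: x * sqrt(var + eps) + mean
    std = torch.sqrt(bn.running_var + bn.eps).view(1, -1, 1, 1)
    x = x * std + bn.running_mean.view(1, -1, 1, 1)
    # 3. unpatchify back to (B, 32, H, W) for the VAE decoder
    return F.pixel_shuffle(x, upscale_factor=2)
```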
* feat(flux2): add FLUX.2 Klein model support with ComfyUI checkpoint compatibility
- Add FLUX.2 transformer loader with BFL-to-diffusers weight conversion
- Fix AdaLayerNorm scale-shift swap for final_layer.adaLN_modulation weights
- Add VAE batch normalization handling for FLUX.2 latent normalization
- Add Qwen3 text encoder loader with ComfyUI FP8 quantization support
- Add frontend components for FLUX.2 Klein model selection
- Update configs and schema for FLUX.2 model types
* Chore Ruff
* Fix FLUX.1 VAE probing
* Fix Windows paths in schema.ts
* Add 4B and 9B Klein to Starter Models.
* feat(flux2): add non-commercial license indicator for FLUX.2 Klein 9B
- Add isFlux2Klein9BMainModelConfig and isNonCommercialMainModelConfig functions
- Update MainModelPicker and InitialStateMainModelPicker to show license icon
- Update license tooltip text to include FLUX.2 Klein 9B
* feat(flux2): add Klein/Qwen3 variant support and encoder filtering
Backend:
- Add klein_4b/klein_9b variants for FLUX.2 Klein models
- Add qwen3_4b/qwen3_8b variants for Qwen3 encoder models
- Validate encoder variant matches Klein model (4B↔4B, 9B↔8B)
- Auto-detect Qwen3 variant from hidden_size during probing
Frontend:
- Show variant field for all model types in ModelView
- Filter Qwen3 encoder dropdown to only show compatible variants
- Update variant type definitions (zFlux2VariantType, zQwen3VariantType)
- Remove unused exports (isFluxDevMainModelConfig, isFlux2Klein9BMainModelConfig)
* Chore Ruff
* feat(flux2): add Klein 9B Base (undistilled) variant support
Distinguish between FLUX.2 Klein 9B (distilled) and Klein 9B Base (undistilled)
models by checking guidance_embeds in diffusers config or guidance_in keys in
safetensors. Klein 9B Base requires more steps but offers higher quality.
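A sketch of that detection logic (config field and key names as stated above):
```python
def is_klein_base(diffusers_config: dict | None = None, checkpoint_keys=()) -> bool:
    """Detect the undistilled Klein 9B Base variant."""
    if diffusers_config is not None:
        # Base (undistilled) keeps guidance embeddings; distilled omits them
        return bool(diffusers_config.get("guidance_embeds", False))
    # checkpoint path: look for guidance_in.* keys in the state dict
    return any(k.startswith("guidance_in") for k in checkpoint_keys)
```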
* feat(flux2): improve diffusers compatibility and distilled model support
Backend changes:
- Update text encoder layers from [9,18,27] to (10,20,30) matching diffusers
- Use apply_chat_template with system message instead of manual formatting
- Change position IDs from ones to zeros to match diffusers implementation
- Add get_schedule_flux2() with empirical mu computation for proper schedule shifting
- Add txt_embed_scale parameter for Qwen3 embedding magnitude control
- Add shift_schedule toggle for base (28+ steps) vs distilled (4 steps) models
- Zero out guidance_embedder weights for Klein models without guidance_embeds
UI changes:
- Clear Klein VAE and Qwen3 encoder when switching away from flux2 base
- Clear Qwen3 encoder when switching between different Klein model variants
- Add toast notification informing user to select compatible encoder
* feat(flux2): fix distilled model scheduling with proper dynamic shifting
- Configure scheduler with FLUX.2 Klein parameters from scheduler_config.json
(use_dynamic_shifting=True, shift=3.0, time_shift_type="exponential")
- Pass mu parameter to scheduler.set_timesteps() for resolution-aware shifting
- Remove manual shift_schedule parameter (scheduler handles this automatically)
- Simplify get_schedule_flux2() to return linear sigmas only
- Remove txt_embed_scale parameter (no longer needed)
This matches the diffusers Flux2KleinPipeline behavior where the
FlowMatchEulerDiscreteScheduler applies dynamic timestep shifting
based on image resolution via the mu parameter.
Fixes 4-step distilled Klein 9B model quality issues.
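A sketch of this setup, assuming the diffusers FlowMatchEulerDiscreteScheduler API and calculate_shift-style constants (the token-count formula is also an assumption):
```python
from diffusers import FlowMatchEulerDiscreteScheduler

scheduler = FlowMatchEulerDiscreteScheduler(
    use_dynamic_shifting=True, shift=3.0, time_shift_type="exponential"
)

def compute_mu(image_seq_len: int, base_len: int = 256, max_len: int = 4096,
               base_shift: float = 0.5, max_shift: float = 1.15) -> float:
    # linearly interpolate the shift by token count, as in diffusers
    m = (max_shift - base_shift) / (max_len - base_len)
    return image_seq_len * m + (base_shift - m * base_len)

num_steps, height, width = 28, 1024, 1024
sigmas = [1.0 - i / num_steps for i in range(num_steps)]  # linear sigmas
image_seq_len = (height // 16) * (width // 16)            # packed latent tokens
scheduler.set_timesteps(sigmas=sigmas, mu=compute_mu(image_seq_len))
```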
* fix(ui): fix FLUX.1 graph building with posCondCollect node lookup
The posCondCollect node was created with getPrefixedId() which generates
a random suffix (e.g., 'pos_cond_collect:abc123'), but g.getNode() was
called with the plain string 'pos_cond_collect', causing a node lookup
failure.
Fix by declaring posCondCollect as a module-scoped variable and
referencing it directly instead of using g.getNode().
* Remove Flux2 Klein Base from Starter Models
* Remove Logging
* Add Default Values for Flux2 Klein and add variant as additional info to from_base
* Add migrations for the z-image qwen3 encoder without a variant value
* Add img2img, inpainting and outpainting support for FLUX.2 Klein
- Add flux2_vae_encode invocation for encoding images to FLUX.2 latents
- Integrate inpaint_extension into FLUX.2 denoise loop for proper mask handling
- Apply BN normalization to init_latents and noise for consistency in inpainting
- Use manual Euler stepping for img2img/inpaint to preserve the exact timestep schedule (sketched below)
- Add flux2_img2img, flux2_inpaint, flux2_outpaint generation modes
- Expand starter models with FP8 variants, standalone transformers, and separate VAE/encoders
- Fix outpainting to always use full denoising (0-1) since strength doesn't apply
- Improve error messages in model loader with clear guidance for standalone models
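The manual Euler stepping with mask handling, roughly (names illustrative; the real loop lives in the FLUX.2 denoise invocation):
```python
# sigmas run from 1.0 (pure noise) down to 0.0 (clean image);
# inpaint_mask is 1.0 where the image should be regenerated
for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
    velocity = transformer(latents, timestep=sigma, **cond)  # hypothetical call
    latents = latents + (sigma_next - sigma) * velocity      # Euler step
    if inpaint_mask is not None:
        # re-noise the known region to the new noise level, then merge
        known = (1.0 - sigma_next) * init_latents + sigma_next * noise
        latents = inpaint_mask * latents + (1.0 - inpaint_mask) * known
```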
* Add GGUF quantized model support and Diffusers VAE loader for FLUX.2 Klein
- Add Main_GGUF_Flux2_Config for GGUF-quantized FLUX.2 transformer models
- Add VAE_Diffusers_Flux2_Config for FLUX.2 VAE in diffusers format
- Add Flux2GGUFCheckpointModel loader with BFL-to-diffusers conversion
- Add Flux2VAEDiffusersLoader for AutoencoderKLFlux2
- Add FLUX.2 Klein 4B/9B hardware requirements to documentation
- Update starter model descriptions to clarify dependencies install together
- Update frontend schema for new model configs
* Fix FLUX.2 model detection and add FP8 weight dequantization support
- Improve FLUX.2 variant detection for GGUF/checkpoint models (BFL format keys)
- Fix guidance_embeds logic: distilled=False, undistilled=True
- Add FP8 weight dequantization for ComfyUI-style quantized models (sketched below)
- Prevent FLUX.2 models from being misidentified as FLUX.1
- Preserve user-editable fields (name, description, etc.) on model reidentify
- Improve Qwen3Encoder detection by variant in starter models
- Add defensive checks for tensor operations
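The dequantization step, roughly (whether the scale is per-tensor or per-channel, and the target dtype, are assumptions about the ComfyUI format):
```python
import torch

FP8_DTYPES = (torch.float8_e4m3fn, torch.float8_e5m2)

def maybe_dequantize_fp8(weight: torch.Tensor, scale: torch.Tensor | None) -> torch.Tensor:
    if weight.dtype in FP8_DTYPES:
        w = weight.to(torch.bfloat16)  # upcast from float8
        return w * scale if scale is not None else w
    return weight
```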
* Chore ruff format
* Chore Typegen
* Fix FLUX.2 Klein 9B model loading by detecting hidden_size from weights
Previously num_attention_heads was hardcoded to 24, which is correct for
Klein 4B but causes size mismatches when loading Klein 9B checkpoints.
Now dynamically calculates num_attention_heads from the hidden_size
dimension of context_embedder weights:
- Klein 4B: hidden_size=3072 → num_attention_heads=24
- Klein 9B: hidden_size=4096 → num_attention_heads=32
Fixes both Checkpoint and GGUF loaders for FLUX.2 models.
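A sketch of the calculation (the exact state-dict key and the 128-dim head size are assumptions consistent with the numbers above):
```python
HEAD_DIM = 128  # assumed: 3072 / 24 == 4096 / 32 == 128

def infer_num_attention_heads(state_dict) -> int:
    # context_embedder projects text features to the transformer width,
    # so its output dimension reveals hidden_size
    hidden_size = state_dict["context_embedder.weight"].shape[0]
    return hidden_size // HEAD_DIM  # Klein 4B: 3072 -> 24, Klein 9B: 4096 -> 32
```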
* Only clear Qwen3 encoder when FLUX.2 Klein variant changes
Previously the encoder was cleared whenever switching between any Klein
models, even if they had the same variant. Now compares the variant of
the old and new model and only clears the encoder when switching between
different variants (e.g., klein_4b to klein_9b).
This allows users to switch between different Klein 9B models without
having to re-select the Qwen3 encoder each time.
* Add metadata recall support for FLUX.2 Klein parameters
The scheduler, VAE model, and Qwen3 encoder model were not being
recalled correctly for FLUX.2 Klein images. This adds dedicated
metadata handlers for the Klein-specific parameters.
* Fix FLUX.2 Klein denoising scaling and Z-Image VAE compatibility
- Apply exponential denoising scaling (exponent 0.2) to FLUX.2 Klein,
matching FLUX.1 behavior for more intuitive inpainting strength
- Add isFlux1VAEModelConfig type guard to filter FLUX 1.0 VAEs only
- Restrict Z-Image VAE selection to FLUX 1.0 VAEs, excluding FLUX.2
Klein 32-channel VAEs which are incompatible
* chore pnpm fix
* Add FLUX.2 Klein to starter bundles and documentation
- Add FLUX.2 Klein hardware requirements to quick start guide
- Create flux2_klein_bundle with GGUF Q4 model, VAE, and Qwen3 encoder
- Add "What's New" entry announcing FLUX.2 Klein support
* Add FLUX.2 Klein built-in reference image editing support
FLUX.2 Klein has native multi-reference image editing without requiring
a separate model (unlike FLUX.1 which needs a Kontext model).
Backend changes:
- Add Flux2RefImageExtension for encoding reference images with FLUX.2 VAE
- Apply BN normalization to reference image latents for correct scaling
- Use T-coordinate offset scale=10 like diffusers (T=10, 20, 30...; sketched below)
- Concatenate reference latents with generated image during denoising
- Extract only generated portion in step callback for correct preview
Frontend changes:
- Add flux2_reference_image config type without model field
- Hide model selector for FLUX.2 reference images (built-in support)
- Add type guards to handle configs without model property
- Update validators to skip model validation for FLUX.2
- Add 'flux2' to SUPPORTS_REF_IMAGES_BASE_MODELS
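A sketch of the T-coordinate scheme referenced in the backend list above (the position-id layout is an assumption; only the offset rule comes from the description):
```python
import torch

def reference_t_coord(index: int, scale: int = 10) -> int:
    # the generated image sits at T=0; references at T=10, 20, 30, ...
    return scale * (index + 1)

def build_position_ids(h: int, w: int, t: int) -> torch.Tensor:
    """Hypothetical (T, H, W) id grid for packed latent tokens."""
    hs, ws = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    ts = torch.full_like(hs, t)
    return torch.stack([ts, hs, ws], dim=-1).reshape(-1, 3)
```
Reference latents built this way are concatenated with the generated image along the sequence dimension, and only the generated portion is sliced back out for previews.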
* Chore windows path fix
* Add reference image resizing for FLUX.2 Klein
Resize large reference images to match BFL FLUX.2 sampling.py limits:
- Single reference: max 2024² pixels (~4.1M)
- Multiple references: max 1024² pixels (~1M)
Uses same scaling approach as BFL's cap_pixels() function.
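Roughly, assuming the same semantics as BFL's cap_pixels() (rounding to the latent patch size is omitted here):
```python
import math

def cap_pixels(width: int, height: int, max_pixels: int) -> tuple[int, int]:
    """Uniformly downscale so width * height <= max_pixels, keeping aspect ratio."""
    if width * height <= max_pixels:
        return width, height
    scale = math.sqrt(max_pixels / (width * height))
    return round(width * scale), round(height * scale)

# single reference image:    cap_pixels(w, h, 2024 ** 2)  # ~4.1M pixels
# multiple reference images: cap_pixels(w, h, 1024 ** 2)  # ~1M pixels
```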
* Add user survey section to README
Added a section for new and returning users to take a survey.
* docs: add user survey link to WhatsNew
* Fix formatting issues in WhatsNew.tsx
---------
Co-authored-by: Alexander Eichhorn <alex@eichhorn.dev>
* fix(model_manager): prevent Z-Image LoRAs from being misclassified as main models
Z-Image LoRAs containing keys like `diffusion_model.context_refiner.*` were being
incorrectly classified as main checkpoint models instead of LoRAs. This happened
because the `_has_z_image_keys()` function checked for Z-Image specific keys
(like `context_refiner`) without verifying if the file was actually a LoRA.
Since main models have higher priority than LoRAs in the classification sort order,
the incorrect main model classification would win.
The fix adds detection of LoRA-specific weight suffixes (`.lora_down.weight`,
`.lora_up.weight`, `.lora_A.weight`, `.lora_B.weight`, `.dora_scale`) and returns
False if any are found, ensuring LoRAs are correctly classified.
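A simplified sketch of the guard (the real function checks more Z-Image keys):
```python
LORA_SUFFIXES = (
    ".lora_down.weight", ".lora_up.weight",
    ".lora_A.weight", ".lora_B.weight", ".dora_scale",
)

def _has_z_image_keys(keys) -> bool:
    if any(k.endswith(LORA_SUFFIXES) for k in keys):
        return False  # it's a LoRA; let the LoRA classifier claim it
    return any("context_refiner" in k for k in keys)
```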
* refactor(mm): simplify _has_z_image_keys with early return
Return True directly when a Z-Image key is found instead of using an
intermediate variable.
* feat(z-image): add Seed Variance Enhancer node and Linear UI integration
Add a new conditioning node for Z-Image models that injects seed-based
noise into text embeddings to increase visual variation between seeds.
Backend:
- New invocation: z_image_seed_variance_enhancer.py
- Parameters: strength (0-2), randomize_percent (1-100%), seed
Frontend:
- State management in paramsSlice with selectors and reducers
- UI components in SeedVariance/ folder with toggle and sliders
- Integration in GenerationSettingsAccordion (Advanced Options)
- Graph builder integration in buildZImageGraph.ts
- Metadata recall handlers for remix functionality
- Translations and tooltip descriptions
Based on: github.com/Pfannkuchensack/invokeai-z-image-seed-variance-enhancer
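The idea, as a sketch (the node's actual math may differ; parameter semantics are taken from the list above):
```python
import torch

def enhance_seed_variance(embeds: torch.Tensor, seed: int,
                          strength: float = 1.0,
                          randomize_percent: float = 100.0) -> torch.Tensor:
    """Perturb a seed-selected subset of embedding values so nearby
    seeds produce more distinct conditioning."""
    gen = torch.Generator(device=embeds.device).manual_seed(seed)
    pick = torch.rand(embeds.shape, generator=gen, device=embeds.device)
    noise = torch.randn(embeds.shape, generator=gen, device=embeds.device)
    mask = pick < (randomize_percent / 100.0)
    return embeds + mask * noise * strength
```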
* chore: ruff and typegen fix
* chore: ruff and typegen fix
* Revise seedVarianceStrength explanation
Updated description for seedVarianceStrength.
* Update description for seedVarianceStrength
* fix(z-image): correct noise range comment from [-1, 1] to [-1, 1)
torch.rand() generates [0, 1), so the scaled range excludes 1.
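For illustration:
```python
import torch

u = torch.rand(8)      # uniform samples in [0, 1); 1.0 itself never occurs
noise = u * 2.0 - 1.0  # scaled to [-1, 1): the upper bound stays open
```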
## Summary
This PR removes codeowners from the `/docs` directory, allowing any team
member with repo write permissions to review and approve PRs involving
documentation.
## Related Issues / Discussions
Documentation review is a shared responsibility.
## QA Instructions
None needed.
## Merge Plan
Simple merge.
## Checklist
- [X] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _❗Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
* WIP transform smoothing controls
* Fix transform smoothing control typings
* High-level resize algorithm for transformation
* ESLint fix
* format with prettier
---------
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
* Fix for brush/eraser size not updating on up/down arrow click
* Made further improvements to brush size selection behavior
---------
Co-authored-by: Alexander Eichhorn <alex@eichhorn.dev>
## Summary
This PR fixes the misleading popup message "Canvas is empty" shown when
attempting to extract a region with an empty mask layer. It is replaced
with the correct message "Mask layer is empty". A few other popups were
also redirected to use the translation file.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _❗Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
* feat(z-image): add `add_noise` option to Z-Image Denoise
Add the same `add_noise` option that exists in FLUX Denoise to Z-Image Denoise.
When set to false, no noise is added to the input latents during image-to-image,
allowing for more controlled transformations.
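In flow-matching terms, the flag amounts to something like this (a sketch; names illustrative):
```python
def prepare_img2img_latents(init_latents, noise, t0: float, add_noise: bool):
    # t0 is the starting timestep implied by the image-to-image strength
    if add_noise:
        return (1.0 - t0) * init_latents + t0 * noise  # standard img2img
    return init_latents  # denoise the input directly; no noise injected
```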
## Summary
Add a new "Denoise - Z-Image + Metadata" node
(`ZImageDenoiseMetaInvocation`) that extends the Z-Image denoise node
with metadata output for image recall functionality.
This follows the same pattern as existing `denoise_latents_meta`
(SD1.5/SDXL) and `flux_denoise_meta` (FLUX) nodes.
**Captured metadata:**
- `width` / `height`
- `steps`
- `guidance` (guidance_scale)
- `denoising_start` / `denoising_end`
- `scheduler`
- `model` (transformer)
- `seed`
- `loras` (if applied)
## Related Issues / Discussions
Enables metadata recall for Z-Image generated images, similar to
existing support for SD1.5, SDXL, and FLUX models.
## QA Instructions
1. Create a workflow using the new "Denoise - Z-Image + Metadata" node
2. Connect the metadata output to a "Save Image" node
3. Generate an image
4. Check that metadata is saved with the image (visible in image info
panel)
5. Verify all generation parameters are captured correctly
## Merge Plan
Requires `feature/zimage-scheduler-support` #8705 branch to be merged
first (base branch).
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _❗Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
## Summary
Adds `model_cache_keep_alive_min` config option (minutes, default 5) to
automatically clear model cache after inactivity. Addresses memory
contention when running InvokeAI alongside other GPU applications like
Ollama.
**Implementation:**
- **Config**: New `model_cache_keep_alive_min` field in
`InvokeAIAppConfig` with 5-minute default
- **ModelCache**: Activity tracking on get/lock/unlock/put operations,
threading.Timer for scheduled clearing (sketched after the key behavior
notes below)
- **Thread safety**: Double-check pattern handles race conditions,
daemon threads for clean shutdown
- **Integration**: ModelManagerService passes config to cache, calls
shutdown() on stop
- **Logging**: Smart timeout logging that only shows messages when
unlocked models are actually cleared
- **Tests**: Comprehensive unit tests with properly configured mock
logger
**Usage:**
```yaml
# invokeai.yaml
model_cache_keep_alive_min: 10 # Clear after 10 minutes idle
model_cache_keep_alive_min: 0 # Set to 0 for indefinite caching (old behavior)
```
**Key Behavior:**
- **Default timeout**: 5 minutes - models are automatically cleared
after 5 minutes of inactivity
- Clearing uses the same logic as the "Clear Model Cache" button
(make_room with 1000GB)
- Only clears **unlocked** models (respects models actively in use
during generation)
- Timeout message only appears when models are actually cleared
- Debug logging available for timeout events when no action is taken
- Prevents misleading log entries during active generation
- Users can set to 0 to restore indefinite caching behavior
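A minimal sketch of the keep-alive pattern described above (the real ModelCache integration and locking details differ):
```python
import threading
import time

class CacheKeepAlive:
    def __init__(self, timeout_min: float, clear_fn):
        self._timeout = timeout_min * 60.0
        self._clear_fn = clear_fn
        self._last_activity = time.monotonic()
        self._timer: threading.Timer | None = None
        self._lock = threading.Lock()

    def touch(self) -> None:
        """Call on every get/lock/unlock/put to reset the countdown."""
        with self._lock:
            self._last_activity = time.monotonic()
            if self._timer is not None:
                self._timer.cancel()
            if self._timeout > 0:  # 0 means keep models cached indefinitely
                self._timer = threading.Timer(self._timeout, self._on_timeout)
                self._timer.daemon = True  # don't block interpreter shutdown
                self._timer.start()

    def _on_timeout(self) -> None:
        with self._lock:
            # double-check: activity may have arrived as the timer fired
            if time.monotonic() - self._last_activity >= self._timeout:
                self._clear_fn()  # e.g. make_room over unlocked models
```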
## Related Issues / Discussions
Addresses enhancement request for automatic model unloading from memory
after inactivity period.
## QA Instructions
1. **Test default behavior (5-minute timeout)**:
- Start InvokeAI without explicit config
- Run a generation
- Wait 6 minutes with no activity
- Check logs for "Clearing X unlocked model(s) from cache" message
- Verify cache is empty
2. **Test custom timeout**:
- Set `model_cache_keep_alive_min: 0.1` (6 seconds) in config
- Load a model (run generation)
- Wait 7+ seconds with no activity
- Check logs for "Clearing X unlocked model(s) from cache" message
- Verify cache is empty
3. **Test no timeout (old behavior)**:
- Set `model_cache_keep_alive_min: 0` in config
- Run generations and wait extended periods
- Verify models remain cached indefinitely
4. **Test during active use**:
- Run continuous generations with any timeout setting
- Verify no timeout messages appear during active use (models are
locked)
- After generation completes, wait for timeout and verify unlocked
models are cleared
## Merge Plan
N/A - Additive change with sensible defaults. The 5-minute default
enables automatic memory management while remaining practical for
typical workflows.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [ ] _❗Changes to a redux slice have a corresponding migration_
- [x] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
- Fixes invoke-ai/InvokeAI#6856
Instead of disabling mutually exclusive model selectors, automatically
clear conflicting models when a new selection is made. This applies to
VAE, Qwen3 Encoder, and Qwen3 Source selectors - selecting one now
clears the others. Also applies same logic during metadata recall.
Move Scheduler handler after MainModel in ImageMetadataHandlers so that
base-dependent recall logic (z-image scheduler) works correctly. The
Scheduler handler checks `base === 'z-image'` before dispatching the
z-image scheduler action, but this check failed when Scheduler ran
before MainModel was recalled.
* feat(flux): add scheduler selection for Flux models
Add support for alternative diffusers Flow Matching schedulers:
- Euler (default, 1st order)
- Heun (2nd order, better quality, 2x slower)
- LCM (optimized for few steps)
Backend:
- Add schedulers.py with scheduler type definitions and class mapping (sketched below)
- Modify denoise.py to accept optional scheduler parameter
- Add scheduler InputField to flux_denoise invocation (v4.2.0)
Frontend:
- Add fluxScheduler to Redux state and paramsSlice
- Create ParamFluxScheduler component for Linear UI
- Add scheduler to buildFLUXGraph for generation
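The mapping, roughly (class names are assumptions based on diffusers' flow-matching schedulers; LCM availability depends on the diffusers version):
```python
from diffusers import (
    FlowMatchEulerDiscreteScheduler,
    FlowMatchHeunDiscreteScheduler,
)

FLUX_SCHEDULERS = {
    "euler": FlowMatchEulerDiscreteScheduler,  # default, 1st order
    "heun": FlowMatchHeunDiscreteScheduler,    # 2nd order, 2x model calls
    # "lcm": FlowMatchLCMScheduler,            # few-step generation, newer diffusers
}
```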
* fix(flux): prevent progress percentage overflow with LCM scheduler
LCM scheduler may have more internal timesteps than user-facing steps,
causing user_step to exceed total_steps. This resulted in progress
percentage > 1.0, which caused a pydantic validation error.
Fix: Only call step_callback when user_step <= total_steps.
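The guard, in essence (signature illustrative):
```python
def emit_progress(step_callback, user_step: int, total_steps: int) -> None:
    # LCM can expose more internal timesteps than user-facing steps;
    # skipping the overflow keeps reported progress <= 100%
    if user_step <= total_steps:
        step_callback(user_step, total_steps)
```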
* Ruff format
* fix(flux): remove initial step-0 callback for consistent step count
Remove the initial step_callback at step=0 to match SD/SDXL behavior.
Previously Flux showed N+1 steps (step 0 + N denoising steps), while
SD/SDXL showed only N steps. Now all models display N steps consistently.
* feat(flux): add scheduler support with metadata recall
- Handle LCM scheduler by using num_inference_steps instead of custom sigmas
- Fix progress bar to show user-facing steps instead of internal scheduler steps
- Pass scheduler parameter to Flux denoise node in graph builder
- Add model-aware metadata recall for Flux scheduler
---------
Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>