- Remove negative_prompt, steps, guidance, reference_image_weights,
reference_image_modes from external model nodes (unused by any provider)
- Remove supports_negative_prompt, supports_steps, supports_guidance
from ExternalModelCapabilities
- Add provider_options dict to ExternalGenerationRequest for
provider-specific parameters
- Add OpenAI-specific fields: quality, background, input_fidelity
- Add Gemini-specific fields: temperature, thinking_level
- Add new OpenAI starter models: GPT Image 1.5, GPT Image 1 Mini,
DALL-E 3, DALL-E 2
- Fix OpenAI provider to use output_format (GPT Image) vs
response_format (DALL-E) and send model ID in requests
- Add fixed aspect ratio sizes for OpenAI models (bucketing)
- Add ExternalProviderRateLimitError with retry logic for 429 responses
  (see the sketch after this list)
- Add provider-specific UI components in ExternalSettingsAccordion
- Simplify ParamSteps/ParamGuidance by removing dead external overrides
- Update all backend and frontend tests
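A minimal sketch of the 429 retry behavior, assuming a hypothetical `with_rate_limit_retry` helper and an exponential-backoff policy (only the error class name comes from the bullets above; everything else is illustrative):

```python
import random
import time


class ExternalProviderRateLimitError(Exception):
    """Raised when an external provider responds with HTTP 429."""

    def __init__(self, retry_after: float | None = None):
        super().__init__("provider rate limit hit")
        self.retry_after = retry_after


def with_rate_limit_retry(call, max_retries: int = 3):
    # Honor the provider's Retry-After value when present; otherwise back
    # off exponentially with jitter before retrying the request.
    for attempt in range(max_retries + 1):
        try:
            return call()
        except ExternalProviderRateLimitError as err:
            if attempt == max_retries:
                raise
            time.sleep(err.retry_after or 2**attempt + random.random())
```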
Add combined resolution preset selector for external models that maps
aspect ratio + image size to fixed dimensions. Gemini 3 Pro and 3.1 Flash
now send imageConfig (aspectRatio + imageSize) via generationConfig instead
of text-based aspect ratio hints used by Gemini 2.5 Flash.
Backend: ExternalResolutionPreset model, resolution_presets capability field,
image_size on ExternalGenerationRequest, and Gemini provider imageConfig logic.
Frontend: ExternalSettingsAccordion with combo resolution select, dimension
slider disabling for fixed-size models, and panel schema constraint wiring
for Steps/Guidance/Seed controls.
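A sketch of the combined preset idea, with hypothetical preset values (the real tables live in the resolution_presets capability field):

```python
# (aspect ratio, size label) -> fixed output dimensions; values are illustrative.
RESOLUTION_PRESETS: dict[tuple[str, str], tuple[int, int]] = {
    ("1:1", "1K"): (1024, 1024),
    ("16:9", "1K"): (1344, 768),
    ("1:1", "2K"): (2048, 2048),
}


def build_gemini_image_config(aspect_ratio: str, image_size: str) -> dict:
    # Gemini 3 Pro / 3.1 Flash accept a structured imageConfig; Gemini 2.5
    # Flash instead received a text-based aspect ratio hint in the prompt.
    return {
        "generationConfig": {
            "imageConfig": {"aspectRatio": aspect_ratio, "imageSize": image_size}
        }
    }
```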
* Repair partially loaded Qwen models after cancel to avoid device mismatches
* ruff
* Repair CogView4 text encoder after canceled partial loads
* Avoid MPS CI crash in repair regression test
* Fix MPS device assertion in repair test
LoRAs trained with musubi-tuner (and potentially other trainers) that
only target transformer blocks (double_blocks/single_blocks) without
embedding layers (txt_in/vector_in/context_embedder) were incorrectly
classified as Flux.1. Add fallback detection using the attention
projection hidden_size and MLP ratio from transformer block tensors.
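A rough sketch of the fallback geometry probe, assuming hypothetical tensor key names (real files carry prefixes such as `diffusion_model.`):

```python
def infer_block_geometry(tensors: dict) -> tuple[int, float]:
    # Hypothetical keys; the point is reading shapes, not these exact names.
    qkv = tensors["double_blocks.0.img_attn.qkv.weight"]
    mlp = tensors["double_blocks.0.img_mlp.0.weight"]
    hidden_size = qkv.shape[-1]              # attention projection input width
    mlp_ratio = mlp.shape[0] / hidden_size   # MLP expansion factor
    return hidden_size, mlp_ratio            # together these separate Flux.1 from lookalikes
```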
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
Merged Z-Image checkpoints (e.g. models with LoRAs baked in) may bundle
text encoder weights (text_encoders.*) or other non-transformer keys
alongside the transformer weights. These cause load_state_dict() to fail
with strict=True. Instead of disabling strict mode, explicitly whitelist
valid ZImageTransformer2DModel key prefixes and discard everything else.
Also moves RAM allocation after filtering so it doesn't over-allocate
for discarded keys.
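A condensed sketch of the whitelist filter, assuming a hypothetical prefix list (the real list enumerates ZImageTransformer2DModel modules):

```python
# Hypothetical prefixes for illustration only.
VALID_PREFIXES = ("transformer_blocks.", "context_refiner.", "x_embedder.")


def filter_transformer_keys(state_dict: dict) -> dict:
    # Drop bundled text_encoders.* and other non-transformer keys so that
    # load_state_dict(..., strict=True) still succeeds on merged checkpoints.
    return {k: v for k, v in state_dict.items() if k.startswith(VALID_PREFIXES)}
```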
Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
* Add FLUX.2 LOKR model support (detection and loading) (#88)
Fix BFL LOKR models being misidentified as AIToolkit format
Fix alpha key warning in LOKR QKV split layers
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
* Fix BFL→diffusers key mapping for non-block layers in FLUX.2 LoRA/LoKR
BFL's FLUX.2 model uses different names than diffusers' Flux2Transformer2DModel
for top-level modules (embedders, modulations, output layers). The existing
conversion only handled block-level renames (double_blocks→transformer_blocks),
causing "Failed to find module" warnings for non-block LoRA keys like img_in,
txt_in, modulation.lin, time_in, and final_layer.
---------
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
Co-authored-by: Alexander Eichhorn <alex@eichhorn.dev>
* WIP: Add FLUX.2 Klein LoRA support (BFL PEFT format)
Initial implementation for loading and applying LoRA models trained
with BFL's PEFT format for FLUX.2 Klein transformers.
Changes:
- Add LoRA_Diffusers_Flux2_Config and LoRA_LyCORIS_Flux2_Config
- Add BflPeft format to FluxLoRAFormat taxonomy
- Add flux_bfl_peft_lora_conversion_utils for weight conversion
- Add Flux2KleinLoraLoaderInvocation node
Status: Work in progress - not yet fully tested
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* feat(flux2): add LoRA support for FLUX.2 Klein models
Add BFL PEFT LoRA support for FLUX.2 Klein, including runtime conversion
of BFL-format keys to diffusers format with fused QKV splitting, improved
detection of Klein 4B LoRAs via MLP ratio check, and frontend graph wiring.
* feat(flux2): detect Klein LoRA variant (4B/9B) and filter by compatibility
Auto-detect FLUX.2 Klein LoRA variant from tensor dimensions during model
probe, warn on variant mismatch at load time, and filter the LoRA picker
to only show variant-compatible LoRAs.
* Chore Ruff
* Chore pnpm
* Fix detection and loading of 3 unrecognized Flux.2 Klein LoRA formats
Three Flux.2 Klein LoRAs were either unrecognized or misclassified due to
format detection gaps:
1. PEFT-wrapped BFL format (base_model.model.* prefix) was not recognized
because the detector only accepted the diffusion_model.* prefix.
2. Klein 4B LoRAs with hidden_size=3072 were misidentified as Flux.1 due to
a break statement exiting the detection loop before txt_in/vector_in
dimensions could be checked.
3. Flux2 native diffusers format (to_qkv_mlp_proj, ff.linear_in) was not
detected because the detector only checked for Flux.1 diffusers keys.
Also handles mixed PEFT/standard LoRA suffix formats within the same file.
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
* Fix bare except clauses and mutable default arguments
Replace bare `except:` with `except Exception:` in sqlite_database.py
and mlsd/utils.py to avoid catching KeyboardInterrupt and SystemExit,
which can prevent graceful shutdowns and mask critical errors (PEP 8
E722).
Replace mutable default arguments (lists) with None in
imwatermark/vendor.py to prevent shared state between calls, which
is a known Python gotcha that can cause subtle bugs when default
mutable objects are modified in place.
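Illustrative before/after of both fixes (not the actual project code):

```python
import logging

logger = logging.getLogger(__name__)


def run_query(cursor, sql: str):
    try:
        cursor.execute(sql)
    except Exception:  # was a bare `except:`, which also swallowed KeyboardInterrupt/SystemExit
        logger.warning("query failed", exc_info=True)


# Before: `def mark(item, seen=[])` reused one list across every call.
def mark(item, seen=None):
    if seen is None:
        seen = []  # fresh list per call
    seen.append(item)
    return seen
```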
* add tests for mutable defaults and bare except fixes
* Simplify exception propagation tests
* Remove unused db initialization in error propagation tests
Removed unused database initialization in tests for KeyboardInterrupt and SystemExit.
---------
Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
* feat(z-image): add Z-Image Base (undistilled) model variant support
- Add ZImageVariantType enum with 'turbo' and 'zbase' variants
- Auto-detect variant on import via scheduler_config.json shift value (3.0=turbo, 6.0=zbase)
- Add database migration to populate variant field for existing Z-Image models
- Re-add LCM scheduler with variant-aware filtering (LCM hidden for zbase)
- Auto-reset scheduler to Euler when switching to zbase model if LCM selected
- Update frontend to show/hide LCM option based on model variant
- Add toast notification when scheduler is auto-reset
Z-Image Base models are undistilled and require more steps (28-50) with higher
guidance (3.0-5.0), while Z-Image Turbo is distilled for ~8 steps with CFG 1.0.
LCM scheduler only works with distilled (Turbo) models.
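A minimal sketch of the shift-based probe, assuming a standard diffusers folder layout (the 3.0/6.0 values come from the bullets above; the fallback is an assumption):

```python
import json
from pathlib import Path


def detect_z_image_variant(model_dir: Path) -> str:
    config = json.loads((model_dir / "scheduler" / "scheduler_config.json").read_text())
    shift = config.get("shift")
    if shift == 6.0:
        return "zbase"  # undistilled Base: more steps, higher guidance
    return "turbo"      # shift == 3.0 (distilled)
```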
* Chore ruff format
* Chore fix windows path
* feat(z-image): filter LoRAs by variant compatibility and warn on mismatch
LoRA picker now hides Z-Image LoRAs with incompatible variants (e.g. ZBase
LoRAs when using Turbo model). LoRAs without a variant are always shown.
Backend loaders warn at runtime if a LoRA variant doesn't match the
transformer variant.
* Chore typegen
---------
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
* fix(flux2): Fix image quality degradation at resolutions > 1024x1024
This commit addresses severe quality degradation and artifacts when
generating images larger than 1024x1024 with FLUX.2 Klein models.
Root causes fixed:
1. Dynamic max_image_seq_len in scheduler (flux2_denoise.py)
- Previously hardcoded to 4096 (1024x1024 only)
- Now dynamically calculated based on actual resolution
- Allows proper schedule shifting at all resolutions
2. Smoothed mu calculation discontinuity (sampling_utils.py)
- Eliminated 40-50% mu value drop at seq_len 4300 threshold
- Implemented smooth cosine interpolation (4096-4500 transition zone)
- Gradual blend between low-res and high-res formulas
Impact:
- FLUX.2 Klein 9B: Major quality improvement at high resolutions
- FLUX.2 Klein 4B: Improved quality at high resolutions
- Baseline 1024x1024: Unchanged (no regression)
- All generation modes: T2I and Kontext (reference images)
Fixes: Community-reported quality degradation issue
See: Discord discussions in #garbage-bin and #devchat
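A sketch of the smoothing described above, with `mu_low` and `mu_high` standing in for the existing low-res and high-res formulas:

```python
import math


def smooth_mu(seq_len: int, mu_low, mu_high, lo: int = 4096, hi: int = 4500) -> float:
    # Cosine blend over the 4096-4500 transition zone removes the abrupt
    # 40-50% mu drop at the old hard threshold.
    if seq_len <= lo:
        return mu_low(seq_len)
    if seq_len >= hi:
        return mu_high(seq_len)
    t = (seq_len - lo) / (hi - lo)
    w = 0.5 * (1.0 - math.cos(math.pi * t))  # eases 0 -> 1
    return (1.0 - w) * mu_low(seq_len) + w * mu_high(seq_len)
```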
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix(flux2): Fix high-resolution quality degradation for FLUX.2 Klein
Fixes grid/diamond artifacts and color loss at resolutions > 1024x1024.
Root causes identified and fixed:
- BN normalization was incorrectly applied to random noise input
(diffusers only normalizes image latents from VAE.encode)
- BN denormalization must be applied to output before VAE decode
- mu parameter was resolution-dependent causing over-shifted schedules
at high resolutions (now fixed to 2.02, matching ComfyUI)
Changes:
- Remove BN normalization on noise input (not needed for N(0,1) noise)
- Preserve BN denormalization on denoised output (required for VAE)
- Fix mu to constant 2.02 for all resolutions (matches ComfyUI)
Tested at 2048x2048 with FLUX.2 Klein 4B
* Chore Ruff
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
* fix(flux2): support Heun scheduler for FLUX.2 Klein models
FlowMatchHeunDiscreteScheduler does not support dynamic shifting parameters
(use_dynamic_shifting, base_shift, max_shift, etc.) or sigmas/mu in set_timesteps.
This caused FLUX.2 Klein to fail when using Heun scheduler.
- Create Heun scheduler with only num_train_timesteps and shift parameters
- Use num_inference_steps instead of sigmas for Heun's set_timesteps call
- Euler and LCM schedulers continue to use full dynamic shifting support
* fix(flux2): fix Heun scheduler detection using inspect.signature
The previous hasattr check for state_in_first_order failed because
the attribute doesn't exist before set_timesteps() is called. Now
using inspect.signature to check for sigmas parameter support,
matching the FLUX1 implementation.
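The detection boils down to a signature check along these lines:

```python
import inspect


def supports_custom_sigmas(scheduler) -> bool:
    # Heun's set_timesteps() has no `sigmas` parameter, and attributes like
    # `state_in_first_order` only appear after set_timesteps() runs, so
    # inspecting the signature is the reliable test.
    return "sigmas" in inspect.signature(scheduler.set_timesteps).parameters
```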
---------
Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
* fix(ui): improve DyPE field ordering and add 'On' preset option
- Add ui_order to DyPE fields (100, 101, 102) to group them at bottom of node
- Change DyPEPreset from Enum to Literal type for proper frontend dropdown support
- Add ui_choice_labels for human-readable dropdown options
- Add new 'On' preset to enable DyPE regardless of resolution
- Fix frontend input field sorting to respect ui_order (unordered first, then ordered)
- Bump flux_denoise node version to 4.4.0
* Chore Ruff check fix
* fix(flux): remove .value from dype_preset logging
DyPEPreset is now a Literal type (string) instead of an Enum,
so .value is no longer needed.
* fix(tests): update DyPE tests for Literal type change
Update test imports and assertions to use string constants
instead of Enum attributes since DyPEPreset is now a Literal type.
* feat(flux): add DyPE scale and exponent controls to Linear UI
- Add dype_scale (λs) and dype_exponent (λt) sliders to generation settings
- Add Zod schemas and parameter types for DyPE scale/exponent
- Pass custom values from Linear UI to flux_denoise node
- Fix bug where DyPE was enabled even when preset was "off"
- Add enhanced logging showing all DyPE parameters when enabled
* fix(flux): apply DyPE scale/exponent and add metadata recall
- Fix DyPE scale and exponent parameters not being applied in frequency
computation (compute_vision_yarn_freqs, compute_yarn_freqs now call
get_timestep_mscale)
- Add metadata handlers for dype_scale and dype_exponent to enable
recall from generated images
- Add i18n translations referencing existing parameter labels
* feat(ui): show DyPE scale/exponent only when preset is "on"
- Hide scale/exponent controls in UI when preset is not "on"
- Only parse/recall scale/exponent from metadata when preset is "on"
- Prevents confusion where custom values override preset behavior
* fix(dype): only allow custom scale/exponent with 'on' preset
Presets (auto, 4k) now use their predefined values and ignore
any custom_scale/custom_exponent parameters. Only the 'on' preset
allows manual override of these values.
This matches the frontend UI behavior where the scale/exponent
fields are only shown when 'On' is selected.
* refactor(dype): rename 'on' preset to 'manual'
Rename the 'on' DyPE preset to 'manual' to better reflect its purpose:
allowing users to manually configure scale and exponent values.
Updated in:
- Backend presets (DYPE_PRESET_ON -> DYPE_PRESET_MANUAL)
- Frontend UI labels and options
- Redux slice type definitions
- Zod schema validation
- Tests
* fix(dype): update remaining 'on' references to 'manual'
- Update docstrings, comments, and error messages to use 'manual' preset name
- Simplify FLUX graph builder to always send dype_scale/dype_exponent
- Fix UI condition to show DyPE controls for 'manual' preset
---------
Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
* fix(model_manager): detect Flux VAE by latent space dimensions instead of filename
VAE detection previously relied solely on filename pattern matching, which failed
for Flux VAE files with generic names like "ae.safetensors". Now probes the model's
decoder.conv_in weight shape to determine the latent space dimensions:
- 16 channels -> Flux VAE
- 4 channels -> SD/SDXL VAE (with filename fallback for SD1/SD2/SDXL distinction)
* fix(model_manager): add latent space probing for Flux2 VAE detection
Extend Flux2 VAE detection to also check for 32-dimensional latent space
(decoder.conv_in with 32 input channels) in addition to BatchNorm layers.
This provides more robust detection for Flux2 VAE files regardless of filename.
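A condensed sketch of the resulting probe from both commits (conv weight layout is `(out_ch, in_ch, kH, kW)`, so the second dimension gives the latent channel count):

```python
def detect_vae_family(state_dict: dict) -> str:
    weight = state_dict.get("decoder.conv_in.weight")
    if weight is None:
        return "unknown"
    latent_channels = weight.shape[1]  # conv input channels = latent dims
    if latent_channels == 32:
        return "flux2"  # FLUX.2 (also detectable via its BatchNorm layers)
    if latent_channels == 16:
        return "flux"
    if latent_channels == 4:
        return "sd"  # SD1/SD2/SDXL; filename fallback distinguishes them
    return "unknown"
```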
* Chore Ruff format
---------
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
* docs: add DyPE implementation plan for FLUX high-resolution generation
Add detailed plan for porting ComfyUI-DyPE (Dynamic Position Extrapolation)
to InvokeAI, enabling 4K+ image generation with FLUX models without
training. Estimated effort: 5-7 developer days.
* docs: update DyPE plan with design decisions
- Integrate DyPE directly into FluxDenoise (no separate node)
- Add 4K preset and "auto" mode for automatic activation
- Confirm FLUX Schnell support (same base resolution as Dev)
* docs: add activation threshold for DyPE auto mode
FLUX can handle resolutions up to ~1.5x natively without artifacts.
Set activation_threshold=1536 so DyPE only kicks in above that.
* feat(flux): implement DyPE for high-resolution generation
Add Dynamic Position Extrapolation (DyPE) support to FLUX models,
enabling artifact-free generation at 4K+ resolutions.
New files:
- invokeai/backend/flux/dype/base.py: DyPEConfig and scaling calculations
- invokeai/backend/flux/dype/rope.py: DyPE-enhanced RoPE functions
- invokeai/backend/flux/dype/embed.py: DyPEEmbedND position embedder
- invokeai/backend/flux/dype/presets.py: Presets (off, auto, 4k)
- invokeai/backend/flux/extensions/dype_extension.py: Pipeline integration
Modified files:
- invokeai/backend/flux/denoise.py: Add dype_extension parameter
- invokeai/app/invocations/flux_denoise.py: Add UI parameters
UI parameters:
- dype_preset: off | auto | 4k
- dype_scale: Custom magnitude override (0-8)
- dype_exponent: Custom decay speed override (0-1000)
Auto mode activates DyPE for resolutions > 1536px.
Based on: https://github.com/wildminder/ComfyUI-DyPE
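A sketch of the activation rule (whether the 4k preset bypasses the threshold is an assumption here):

```python
def dype_active(preset: str, width: int, height: int, threshold: int = 1536) -> bool:
    # FLUX handles up to ~1.5x its base resolution natively, so "auto" only
    # activates DyPE above the threshold; "off" disables it entirely.
    if preset == "off":
        return False
    if preset == "auto":
        return max(width, height) > threshold
    return True  # "4k" preset (assumed to always enable DyPE)
```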
* feat(flux): add DyPE preset selector to Linear UI
Add Linear UI integration for FLUX DyPE (Dynamic Position Extrapolation):
- Add ParamFluxDypePreset component with Off/Auto/4K options
- Integrate preset selector in GenerationSettingsAccordion for FLUX models
- Add state management (paramsSlice, types) for fluxDypePreset
- Add dype_preset to FLUX denoise graph builder and metadata
- Add translations for DyPE preset label and popover
- Add zFluxDypePresetField schema definition
Fix DyPE frequency computation:
- Remove incorrect mscale multiplication on frequencies
- Use only NTK-aware theta scaling for position extrapolation
* feat(flux): add DyPE preset to metadata recall
- Add FluxDypePreset handler to ImageMetadataHandlers
- Parse dype_preset from metadata and dispatch setFluxDypePreset on recall
- Add translation key metadata.dypePreset
* chore: remove dype-implementation-plan.md
Remove internal planning document from the branch.
* chore(flux): bump flux_denoise version to 4.3.0
Version bump for dype_preset field addition.
* chore: ruff check fix
* chore: ruff format
* Fix truncated DyPE label in advanced options UI
Shorten the label from "DyPE (High-Res)" to "DyPE" to prevent text truncation in the sidebar. The high-resolution context is preserved in the informational popover tooltip.
* Add DyPE preset to recall parameters in image viewer
The dype_preset metadata was being saved but not displayed in the Recall Parameters tab. Add FluxDypePreset handler to ImageMetadataActions so users can see and recall this parameter.
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
* WIP: feat(flux2): add FLUX 2 Kontext model support
- Add new invocation nodes for FLUX 2:
- flux2_denoise: Denoising invocation for FLUX 2
- flux2_klein_model_loader: Model loader for Klein architecture
- flux2_klein_text_encoder: Text encoder for Qwen3-based encoding
- flux2_vae_decode: VAE decoder for FLUX 2
- Add backend support:
- New flux2 module with denoise and sampling utilities
- Extended model manager configs for FLUX 2 models
- Updated model loaders for Klein architecture
- Update frontend:
- Extended graph builder for FLUX 2 support
- Added FLUX 2 model types and configurations
- Updated readiness checks and UI components
* fix(flux2): correct VAE decode with proper BN denormalization
FLUX.2 VAE uses Batch Normalization in the patchified latent space
(128 channels). The decode must:
1. Patchify latents from (B, 32, H, W) to (B, 128, H/2, W/2)
2. Apply BN denormalization using running_mean/running_var
3. Unpatchify back to (B, 32, H, W) for VAE decode
Also fixed image normalization from [-1, 1] to [0, 255].
This fixes washed-out colors in generated FLUX.2 Klein images.
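A sketch of the decode-side denormalization, assuming a pixel-unshuffle style patchify (the exact layout in the real loader may differ):

```python
import torch


def bn_denormalize(latents: torch.Tensor, mean: torch.Tensor, var: torch.Tensor,
                   eps: float = 1e-5) -> torch.Tensor:
    b, c, h, w = latents.shape  # (B, 32, H, W)
    # 1. Patchify to (B, 128, H/2, W/2) so channels match the BN statistics.
    x = latents.reshape(b, c, h // 2, 2, w // 2, 2)
    x = x.permute(0, 1, 3, 5, 2, 4).reshape(b, c * 4, h // 2, w // 2)
    # 2. Invert batch norm using the stored running statistics.
    x = x * torch.sqrt(var.view(1, -1, 1, 1) + eps) + mean.view(1, -1, 1, 1)
    # 3. Unpatchify back to (B, 32, H, W) for the VAE decoder.
    x = x.reshape(b, c, 2, 2, h // 2, w // 2)
    return x.permute(0, 1, 4, 2, 5, 3).reshape(b, c, h, w)
```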
* feat(flux2): add FLUX.2 Klein model support with ComfyUI checkpoint compatibility
- Add FLUX.2 transformer loader with BFL-to-diffusers weight conversion
- Fix AdaLayerNorm scale-shift swap for final_layer.adaLN_modulation weights
- Add VAE batch normalization handling for FLUX.2 latent normalization
- Add Qwen3 text encoder loader with ComfyUI FP8 quantization support
- Add frontend components for FLUX.2 Klein model selection
- Update configs and schema for FLUX.2 model types
* Chore Ruff
* Fix FLUX.1 VAE probing
* Fix Windows paths in schema.ts
* Add 4B and 9B Klein to Starter Models.
* feat(flux2): add non-commercial license indicator for FLUX.2 Klein 9B
- Add isFlux2Klein9BMainModelConfig and isNonCommercialMainModelConfig functions
- Update MainModelPicker and InitialStateMainModelPicker to show license icon
- Update license tooltip text to include FLUX.2 Klein 9B
* feat(flux2): add Klein/Qwen3 variant support and encoder filtering
Backend:
- Add klein_4b/klein_9b variants for FLUX.2 Klein models
- Add qwen3_4b/qwen3_8b variants for Qwen3 encoder models
- Validate encoder variant matches Klein model (4B↔4B, 9B↔8B)
- Auto-detect Qwen3 variant from hidden_size during probing
Frontend:
- Show variant field for all model types in ModelView
- Filter Qwen3 encoder dropdown to only show compatible variants
- Update variant type definitions (zFlux2VariantType, zQwen3VariantType)
- Remove unused exports (isFluxDevMainModelConfig, isFlux2Klein9BMainModelConfig)
* Chore Ruff
* feat(flux2): add Klein 9B Base (undistilled) variant support
Distinguish between FLUX.2 Klein 9B (distilled) and Klein 9B Base (undistilled)
models by checking guidance_embeds in diffusers config or guidance_in keys in
safetensors. Klein 9B Base requires more steps but offers higher quality.
* feat(flux2): improve diffusers compatibility and distilled model support
Backend changes:
- Update text encoder layers from [9,18,27] to (10,20,30) matching diffusers
- Use apply_chat_template with system message instead of manual formatting
- Change position IDs from ones to zeros to match diffusers implementation
- Add get_schedule_flux2() with empirical mu computation for proper schedule shifting
- Add txt_embed_scale parameter for Qwen3 embedding magnitude control
- Add shift_schedule toggle for base (28+ steps) vs distilled (4 steps) models
- Zero out guidance_embedder weights for Klein models without guidance_embeds
UI changes:
- Clear Klein VAE and Qwen3 encoder when switching away from flux2 base
- Clear Qwen3 encoder when switching between different Klein model variants
- Add toast notification informing user to select compatible encoder
* feat(flux2): fix distilled model scheduling with proper dynamic shifting
- Configure scheduler with FLUX.2 Klein parameters from scheduler_config.json
(use_dynamic_shifting=True, shift=3.0, time_shift_type="exponential")
- Pass mu parameter to scheduler.set_timesteps() for resolution-aware shifting
- Remove manual shift_schedule parameter (scheduler handles this automatically)
- Simplify get_schedule_flux2() to return linear sigmas only
- Remove txt_embed_scale parameter (no longer needed)
This matches the diffusers Flux2KleinPipeline behavior where the
FlowMatchEulerDiscreteScheduler applies dynamic timestep shifting
based on image resolution via the mu parameter.
Fixes 4-step distilled Klein 9B model quality issues.
* fix(ui): fix FLUX.1 graph building with posCondCollect node lookup
The posCondCollect node was created with getPrefixedId() which generates
a random suffix (e.g., 'pos_cond_collect:abc123'), but g.getNode() was
called with the plain string 'pos_cond_collect', causing a node lookup
failure.
Fix by declaring posCondCollect as a module-scoped variable and
referencing it directly instead of using g.getNode().
* Remove Flux2 Klein Base from Starter Models
* Remove Logging
* Add default values for FLUX.2 Klein and include variant as additional info in from_base
* Add migrations for the z-image qwen3 encoder without a variant value
* Add img2img, inpainting and outpainting support for FLUX.2 Klein
- Add flux2_vae_encode invocation for encoding images to FLUX.2 latents
- Integrate inpaint_extension into FLUX.2 denoise loop for proper mask handling
- Apply BN normalization to init_latents and noise for consistency in inpainting
- Use manual Euler stepping for img2img/inpaint to preserve exact timestep schedule
- Add flux2_img2img, flux2_inpaint, flux2_outpaint generation modes
- Expand starter models with FP8 variants, standalone transformers, and separate VAE/encoders
- Fix outpainting to always use full denoising (0-1) since strength doesn't apply
- Improve error messages in model loader with clear guidance for standalone models
* Add GGUF quantized model support and Diffusers VAE loader for FLUX.2 Klein
- Add Main_GGUF_Flux2_Config for GGUF-quantized FLUX.2 transformer models
- Add VAE_Diffusers_Flux2_Config for FLUX.2 VAE in diffusers format
- Add Flux2GGUFCheckpointModel loader with BFL-to-diffusers conversion
- Add Flux2VAEDiffusersLoader for AutoencoderKLFlux2
- Add FLUX.2 Klein 4B/9B hardware requirements to documentation
- Update starter model descriptions to clarify dependencies install together
- Update frontend schema for new model configs
* Fix FLUX.2 model detection and add FP8 weight dequantization support
- Improve FLUX.2 variant detection for GGUF/checkpoint models (BFL format keys)
- Fix guidance_embeds logic: distilled=False, undistilled=True
- Add FP8 weight dequantization for ComfyUI-style quantized models
- Prevent FLUX.2 models from being misidentified as FLUX.1
- Preserve user-editable fields (name, description, etc.) on model reidentify
- Improve Qwen3Encoder detection by variant in starter models
- Add defensive checks for tensor operations
* Chore ruff format
* Chore Typegen
* Fix FLUX.2 Klein 9B model loading by detecting hidden_size from weights
Previously num_attention_heads was hardcoded to 24, which is correct for
Klein 4B but causes size mismatches when loading Klein 9B checkpoints.
Now dynamically calculates num_attention_heads from the hidden_size
dimension of context_embedder weights:
- Klein 4B: hidden_size=3072 → num_attention_heads=24
- Klein 9B: hidden_size=4096 → num_attention_heads=32
Fixes both Checkpoint and GGUF loaders for FLUX.2 models.
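A sketch of the shape-driven inference (the key name follows the commit; head_dim=128 is implied by both variants):

```python
def infer_num_attention_heads(state_dict: dict, head_dim: int = 128) -> int:
    # context_embedder projects text features into the transformer width,
    # so its output dimension reveals the checkpoint's hidden size.
    hidden_size = state_dict["context_embedder.weight"].shape[0]
    return hidden_size // head_dim  # 3072 -> 24 (Klein 4B), 4096 -> 32 (Klein 9B)
```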
* Only clear Qwen3 encoder when FLUX.2 Klein variant changes
Previously the encoder was cleared whenever switching between any Klein
models, even if they had the same variant. Now compares the variant of
the old and new model and only clears the encoder when switching between
different variants (e.g., klein_4b to klein_9b).
This allows users to switch between different Klein 9B models without
having to re-select the Qwen3 encoder each time.
* Add metadata recall support for FLUX.2 Klein parameters
The scheduler, VAE model, and Qwen3 encoder model were not being
recalled correctly for FLUX.2 Klein images. This adds dedicated
metadata handlers for the Klein-specific parameters.
* Fix FLUX.2 Klein denoising scaling and Z-Image VAE compatibility
- Apply exponential denoising scaling (exponent 0.2) to FLUX.2 Klein,
matching FLUX.1 behavior for more intuitive inpainting strength
- Add isFlux1VAEModelConfig type guard to filter FLUX 1.0 VAEs only
- Restrict Z-Image VAE selection to FLUX 1.0 VAEs, excluding FLUX.2
Klein 32-channel VAEs which are incompatible
* chore pnpm fix
* Add FLUX.2 Klein to starter bundles and documentation
- Add FLUX.2 Klein hardware requirements to quick start guide
- Create flux2_klein_bundle with GGUF Q4 model, VAE, and Qwen3 encoder
- Add "What's New" entry announcing FLUX.2 Klein support
* Add FLUX.2 Klein built-in reference image editing support
FLUX.2 Klein has native multi-reference image editing without requiring
a separate model (unlike FLUX.1 which needs a Kontext model).
Backend changes:
- Add Flux2RefImageExtension for encoding reference images with FLUX.2 VAE
- Apply BN normalization to reference image latents for correct scaling
- Use T-coordinate offset scale=10 like diffusers (T=10, 20, 30...)
- Concatenate reference latents with generated image during denoising
- Extract only generated portion in step callback for correct preview
Frontend changes:
- Add flux2_reference_image config type without model field
- Hide model selector for FLUX.2 reference images (built-in support)
- Add type guards to handle configs without model property
- Update validators to skip model validation for FLUX.2
- Add 'flux2' to SUPPORTS_REF_IMAGES_BASE_MODELS
* Chore windows path fix
* Add reference image resizing for FLUX.2 Klein
Resize large reference images to match BFL FLUX.2 sampling.py limits:
- Single reference: max 2024² pixels (~4.1M)
- Multiple references: max 1024² pixels (~1M)
Uses same scaling approach as BFL's cap_pixels() function.
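A sketch of the same scaling idea (BFL's actual cap_pixels() may additionally round to model-friendly multiples). With max_pixels set to 2024**2 for a single reference and 1024**2 for multiple references, this matches the limits above:

```python
import math


def cap_pixels(width: int, height: int, max_pixels: int) -> tuple[int, int]:
    # Downscale uniformly so width * height <= max_pixels, keeping aspect ratio.
    if width * height <= max_pixels:
        return width, height
    scale = math.sqrt(max_pixels / (width * height))
    return int(width * scale), int(height * scale)
```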
* fix(model_manager): prevent Z-Image LoRAs from being misclassified as main models
Z-Image LoRAs containing keys like `diffusion_model.context_refiner.*` were being
incorrectly classified as main checkpoint models instead of LoRAs. This happened
because the `_has_z_image_keys()` function checked for Z-Image specific keys
(like `context_refiner`) without verifying if the file was actually a LoRA.
Since main models have higher priority than LoRAs in the classification sort order,
the incorrect main model classification would win.
The fix adds detection of LoRA-specific weight suffixes (`.lora_down.weight`,
`.lora_up.weight`, `.lora_A.weight`, `.lora_B.weight`, `.dora_scale`) and returns
False if any are found, ensuring LoRAs are correctly classified.
* refactor(mm): simplify _has_z_image_keys with early return
Return True directly when a Z-Image key is found instead of using an
intermediate variable.
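Putting both commits together, the check reduces to something like this (exact signature and key set are assumptions):

```python
LORA_SUFFIXES = (".lora_down.weight", ".lora_up.weight",
                 ".lora_A.weight", ".lora_B.weight", ".dora_scale")


def _has_z_image_keys(keys) -> bool:
    # LoRA-specific suffixes mean this is not a main checkpoint model.
    if any(key.endswith(LORA_SUFFIXES) for key in keys):
        return False
    # Early return on the first Z-Image-specific key.
    return any("context_refiner" in key for key in keys)
```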
* feat(flux): add scheduler selection for Flux models
Add support for alternative diffusers Flow Matching schedulers:
- Euler (default, 1st order)
- Heun (2nd order, better quality, 2x slower)
- LCM (optimized for few steps)
Backend:
- Add schedulers.py with scheduler type definitions and class mapping
- Modify denoise.py to accept optional scheduler parameter
- Add scheduler InputField to flux_denoise invocation (v4.2.0)
Frontend:
- Add fluxScheduler to Redux state and paramsSlice
- Create ParamFluxScheduler component for Linear UI
- Add scheduler to buildFLUXGraph for generation
* fix(flux): prevent progress percentage overflow with LCM scheduler
LCM scheduler may have more internal timesteps than user-facing steps,
causing user_step to exceed total_steps. This resulted in progress
percentage > 1.0, which caused a pydantic validation error.
Fix: Only call step_callback when user_step <= total_steps.
* Ruff format
* fix(flux): remove initial step-0 callback for consistent step count
Remove the initial step_callback at step=0 to match SD/SDXL behavior.
Previously Flux showed N+1 steps (step 0 + N denoising steps), while
SD/SDXL showed only N steps. Now all models display N steps consistently.
* feat(flux): add scheduler support with metadata recall
- Handle LCM scheduler by using num_inference_steps instead of custom sigmas
- Fix progress bar to show user-facing steps instead of internal scheduler steps
- Pass scheduler parameter to Flux denoise node in graph builder
- Add model-aware metadata recall for Flux scheduler
---------
Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
* feat(z-image): add scheduler selection for Z-Image models
Add support for alternative diffusers Flow Matching schedulers for Z-Image:
- Euler (default) - 1st order, optimized for Z-Image-Turbo (8 steps)
- Heun (2nd order) - Better quality, 2x slower
- LCM - Optimized for few-step generation
Backend:
- Extend schedulers.py with Z-Image scheduler types and mapping
- Add scheduler InputField to z_image_denoise invocation (v1.3.0)
- Refactor denoising loop to support diffusers schedulers
Frontend:
- Add zImageScheduler to Redux state in paramsSlice
- Create ParamZImageScheduler component for Linear UI
- Add scheduler to buildZImageGraph for generation
* fix ruff check
* fix(schedulers): prevent progress percentage overflow with LCM scheduler
LCM scheduler may have more internal timesteps than user-facing steps,
causing user_step to exceed total_steps. This resulted in progress
percentage > 1.0, which caused a pydantic validation error.
Fix: Only call step_callback when user_step <= total_steps.
* Ruff format
* fix(schedulers): remove initial step-0 callback for consistent step count
Remove the initial step_callback at step=0 to match SD/SDXL behavior.
Previously Flux/Z-Image showed N+1 steps (step 0 + N denoising steps),
while SD/SDXL showed only N steps. Now all models display N steps
consistently in the server log.
* feat(z-image): add scheduler support with metadata recall
- Handle LCM scheduler by using num_inference_steps instead of custom sigmas
- Fix progress bar to show user-facing steps instead of internal scheduler steps
- Pass scheduler parameter to Z-Image denoise node in graph builder
- Add model-aware metadata recall for Flux and Z-Image schedulers
---------
Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>