Commit Graph

3195 Commits

Author SHA1 Message Date
Lincoln Stein
5cef8bd364 (fix) default timeout to 0 min, to disable timeout feature and restore previous default behavior 2026-01-04 07:01:01 -05:00
Jonathan
e39b880f6d Merge branch 'main' into copilot/add-unload-model-option 2026-01-03 15:41:59 -05:00
Alexander Eichhorn
689953e3cf Feature/zimage scheduler support (#8705)
* feat(flux): add scheduler selection for Flux models

Add support for alternative diffusers Flow Matching schedulers:
- Euler (default, 1st order)
- Heun (2nd order, better quality, 2x slower)
- LCM (optimized for few steps)

Backend:
- Add schedulers.py with scheduler type definitions and class mapping
- Modify denoise.py to accept optional scheduler parameter
- Add scheduler InputField to flux_denoise invocation (v4.2.0)

Frontend:
- Add fluxScheduler to Redux state and paramsSlice
- Create ParamFluxScheduler component for Linear UI
- Add scheduler to buildFLUXGraph for generation

* feat(z-image): add scheduler selection for Z-Image models

Add support for alternative diffusers Flow Matching schedulers for Z-Image:
- Euler (default) - 1st order, optimized for Z-Image-Turbo (8 steps)
- Heun (2nd order) - Better quality, 2x slower
- LCM - Optimized for few-step generation

Backend:
- Extend schedulers.py with Z-Image scheduler types and mapping
- Add scheduler InputField to z_image_denoise invocation (v1.3.0)
- Refactor denoising loop to support diffusers schedulers

Frontend:
- Add zImageScheduler to Redux state in paramsSlice
- Create ParamZImageScheduler component for Linear UI
- Add scheduler to buildZImageGraph for generation

* fix ruff check

* fix(schedulers): prevent progress percentage overflow with LCM scheduler

LCM scheduler may have more internal timesteps than user-facing steps,
causing user_step to exceed total_steps. This resulted in progress
percentage > 1.0, which caused a pydantic validation error.

Fix: Only call step_callback when user_step <= total_steps.
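The guard described above can be sketched as follows (hypothetical function name; the actual callback lives inside the denoise loop):

```python
def report_progress(user_step: int, total_steps: int, step_callback) -> None:
    """Emit progress only for user-facing steps.

    Schedulers such as LCM may run more internal timesteps than the user
    requested, so user_step can exceed total_steps; reporting those extra
    steps would push the progress fraction above 1.0 and trip pydantic
    validation downstream.
    """
    if user_step <= total_steps:
        step_callback(user_step / total_steps)
```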

* Ruff format

* fix(schedulers): remove initial step-0 callback for consistent step count

Remove the initial step_callback at step=0 to match SD/SDXL behavior.
Previously Flux/Z-Image showed N+1 steps (step 0 + N denoising steps),
while SD/SDXL showed only N steps. Now all models display N steps
consistently in the server log.

* feat(z-image): add scheduler support with metadata recall

- Handle LCM scheduler by using num_inference_steps instead of custom sigmas
- Fix progress bar to show user-facing steps instead of internal scheduler steps
- Pass scheduler parameter to Z-Image denoise node in graph builder
- Add model-aware metadata recall for Flux and Z-Image schedulers

---------

Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-01-03 20:37:04 +00:00
Lincoln Stein
87608ade45 (chore) update config docstrings 2026-01-01 19:35:15 -05:00
Lincoln Stein
d44b99ae0a Merge branch 'main' into copilot/add-unload-model-option 2025-12-28 22:39:45 -05:00
blessedcoolant
1675712094 Implement PBR Maps Node (#8700)
* feat: Implement PBR Maps Generation Node

* feat(ui): Add PBR Maps Generation to UI

* chore: fix typegen checks

* chore: possible fix for nvidia 5000 series cards

* fix: Use safetensor models for PBR maps instead of pickles.

* fix: incorrect naming of upconv_block for PBR network

* fix: incorrect naming of displacement map variable

* chore: add relevant docs to the PBR generate function

* fix: clear cuda cache after loading state_dict for PBR maps

* fix: load torch_device only once as multiple models are loaded

* chore(ui): update the filter icon for PBR to CubeBold

More relevant

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-29 02:11:46 +00:00
copilot-swe-agent[bot]
1bd1c76a2c Change default model_cache_keep_alive to 5 minutes
Changed the default value of model_cache_keep_alive from 0 (indefinite)
to 5 minutes as requested. This means models will now be automatically
cleared from cache after 5 minutes of inactivity by default, unless
users explicitly configure a different value.

Users can still set it to 0 in their config to get the old behavior
of keeping models indefinitely.

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2025-12-28 02:11:20 +00:00
Lincoln Stein
a7205e4e36 Merge branch 'main' into copilot/add-unload-model-option 2025-12-25 21:33:59 -05:00
Alexander Eichhorn
65efc3db7d Feature: Add Z-Image-Turbo regional guidance (#8672)
* feat: Add Regional Guidance support for Z-Image model

Implements regional prompting for Z-Image (S3-DiT Transformer) allowing
different prompts to affect different image regions using attention masks.

Backend changes:
- Add ZImageRegionalPromptingExtension for mask preparation
- Add ZImageTextConditioning and ZImageRegionalTextConditioning data classes
- Patch transformer forward to inject 4D regional attention masks
- Use additive float mask (0.0 attend, -inf block) in bfloat16 for compatibility
- Alternate regional/full attention layers for global coherence
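The additive-mask idea can be illustrated in miniature (pure Python for clarity; the real extension builds 4D torch tensors in bfloat16):

```python
NEG_INF = float("-inf")

def build_additive_mask(allowed: list[list[bool]]) -> list[list[float]]:
    """Convert a boolean attention mask into additive form.

    True  -> 0.0   (pair may attend; adding 0 leaves the logit unchanged)
    False -> -inf  (blocked; softmax assigns the pair zero weight)
    """
    return [[0.0 if a else NEG_INF for a in row] for row in allowed]
```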

Frontend changes:
- Update buildZImageGraph to support regional conditioning collectors
- Update addRegions to create z_image_text_encoder nodes for regions
- Update addZImageLoRAs to handle optional negCond when guidance_scale=0
- Add Z-Image validation (no IP adapters, no autoNegative)

* @Pfannkuchensack
Fix windows path again

* ruff check fix

* ruff formatting

* fix(ui): Z-Image CFG guidance_scale check uses > 1 instead of > 0

Changed the guidance_scale check from > 0 to > 1 for Z-Image models.
Since Z-Image uses guidance_scale=1.0 as "no CFG" (matching FLUX convention),
negative conditioning should only be created when guidance_scale > 1.

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-26 02:25:38 +00:00
Lincoln Stein
b9493ddce7 Workaround for Windows being unable to remove tmp directories when installing GGUF files (#8699)
* (bugfix)(mm) work around Windows being unable to rmtree tmp directories after GGUF install

* (style) fix ruff error

* (fix) add workaround for Windows Permission Denied on GGUF file move() call

* (fix) perform torch copy() in GGUF reader to avoid deletion failures on Windows

* (style) fix ruff formatting issues
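A common shape for this kind of Windows workaround looks roughly like the following (an illustrative sketch, not the exact code from the PR):

```python
import os
import shutil
import stat
import time

def rmtree_robust(path: str, retries: int = 3, delay: float = 0.5) -> None:
    """Remove a directory tree, retrying on Windows-style PermissionError.

    Windows can briefly hold open handles (antivirus, indexers) on files
    that were just written, making rmtree fail; clearing the read-only
    bit and retrying after a short delay usually resolves it.
    """
    def _on_error(func, p, _exc_info):
        os.chmod(p, stat.S_IWRITE)  # clear the read-only attribute, then retry
        func(p)

    for attempt in range(retries):
        try:
            shutil.rmtree(path, onerror=_on_error)
            return
        except PermissionError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
```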
2025-12-26 02:02:39 +00:00
Lincoln Stein
5b69403ba8 Merge branch 'main' into copilot/add-unload-model-option 2025-12-24 15:39:46 -05:00
Alexander Eichhorn
4cb9b8d97d Feature: add prompt template node (#8680)
* feat(nodes): add Prompt Template node

Add a new node that applies Style Preset templates to prompts in workflows.
The node takes a style preset ID and positive/negative prompts as inputs,
then replaces {prompt} placeholders in the template with the provided prompts.

This makes Style Preset templates accessible in Workflow mode, enabling
users to apply consistent styling across their workflow-based generations.

* feat(nodes): add StylePresetField for database-driven preset selection

Adds a new StylePresetField type that enables dropdown selection of
style presets from the database in the workflow editor.

Changes:
- Add StylePresetField to backend (fields.py)
- Update Prompt Template node to use StylePresetField instead of string ID
- Add frontend field type definitions (zod schemas, type guards)
- Create StylePresetFieldInputComponent with Combobox
- Register field in InputFieldRenderer and nodesSlice
- Add translations for preset selection

* fix schema.ts on windows.

* chore(api): regenerate schema.ts after merge

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-24 14:33:16 -05:00
Lincoln Stein
a21b7792d8 (chore) regenerate config docstrings 2025-12-24 00:29:48 -05:00
Lincoln Stein
1e15b8c106 Merge branch 'main' into copilot/add-unload-model-option 2025-12-24 00:14:45 -05:00
Alexander Eichhorn
21138e5d52 fix support multi-subfolder downloads for Z-Image Qwen3 encoder (#8692)
* fix(model-install): support multi-subfolder downloads for Z-Image Qwen3 encoder

The Z-Image Qwen3 text encoder requires both text_encoder and tokenizer
subfolders from the HuggingFace repo, but the previous implementation
only downloaded the text_encoder subfolder, causing model identification
to fail.

Changes:
- Add subfolders property to HFModelSource supporting '+' separated paths
- Extend filter_files() and download_urls() to handle multiple subfolders
- Update _multifile_download() to preserve subfolder structure
- Make Qwen3Encoder probe check both nested and direct config.json paths
- Update Qwen3EncoderLoader to handle both directory structures
- Change starter model source to text_encoder+tokenizer

* ruff format

* fix schema description

* fix schema description

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-23 23:39:43 -05:00
copilot-swe-agent[bot]
8d76b4e4d4 Fix ruff whitespace errors and improve timeout logging
- Remove all trailing whitespace (W293 errors)
- Add debug logging when timeout fires but activity detected
- Add debug logging when timeout fires but cache is empty
- Only log "Clearing model cache" message when actually clearing
- Prevents misleading timeout messages during active generation

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2025-12-24 04:05:57 +00:00
copilot-swe-agent[bot]
b16717bbf8 Explicitly pass all ModelCache constructor parameters
- Add explicit storage_device parameter (cpu)
- Add explicit log_memory_usage parameter from config
- Improves code clarity and configuration transparency

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2025-12-24 00:30:51 +00:00
copilot-swe-agent[bot]
9bbd2b3f11 Add model_cache_keep_alive config option and timeout mechanism
- Added model_cache_keep_alive config field (minutes, default 0 = infinite)
- Implemented timeout tracking in ModelCache class
- Added _record_activity() to track model usage
- Added _on_timeout() to auto-clear cache when timeout expires
- Added shutdown() method to clean up timers
- Integrated timeout with get(), lock(), unlock(), and put() operations
- Updated ModelManagerService to pass keep_alive parameter
- Added cleanup in stop() method

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
2025-12-24 00:22:59 +00:00
Alexander Eichhorn
73be5e5d35 Merge branch 'main' into feature/z-image-control 2025-12-22 22:56:30 +01:00
Alexander Eichhorn
2be701cfe3 Feature: Add Tag System for user made Workflows (#8673)
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-12-22 15:41:48 -05:00
blessedcoolant
874b547598 chore: format code for ruff checks 2025-12-23 01:04:22 +05:30
Alexander Eichhorn
3668d5b83b feat(z-image): add Extension-based Z-Image ControlNet support
Implement Z-Image ControlNet as an Extension pattern (similar to FLUX ControlNet)
instead of merging control weights into the base transformer. This provides:
- Lower memory usage (no weight duplication)
- Flexibility to enable/disable control per step
- Cleaner architecture with separate control adapter

Key implementation details:
- ZImageControlNetExtension: computes control hints per denoising step
- z_image_forward_with_control: custom forward pass with hint injection
- patchify_control_context: utility for control image patchification
- ZImageControlAdapter: standalone adapter with control_layers and noise_refiner

Architecture matches original VideoX-Fun implementation:
- Hints computed ONCE using INITIAL unified state (before main layers)
- Hints injected at every other main transformer layer (15 control blocks)
- Control signal added after each designated layer's forward pass

V2.0 ControlNet support (control_in_dim=33):
- Channels 0-15: control image latents
- Channels 16-31: reference image (zeros for pure control)
- Channel 32: inpaint mask (1.0 = don't inpaint, use control signal)
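The V2.0 channel layout can be made concrete with a per-location sketch (pure Python for illustration; the real code patchifies full latent tensors):

```python
def build_v2_control_channels(control_latent: list[float]) -> list[float]:
    """Assemble the 33-channel control input for one spatial location.

    Layout (V2.0 ControlNet, control_in_dim=33):
      channels  0-15: control image latents
      channels 16-31: reference image latents (zeros for pure control)
      channel     32: inpaint mask (1.0 = don't inpaint, use control signal)
    """
    if len(control_latent) != 16:
        raise ValueError("expected 16 control latent channels")
    reference = [0.0] * 16
    mask = [1.0]
    return control_latent + reference + mask
```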
2025-12-21 22:30:28 +01:00
Alexander Eichhorn
1c13ca8159 style: apply ruff formatting 2025-12-21 18:52:12 +01:00
Alexander Eichhorn
3ed0e55d9d fix: resolve linting errors in Z-Image ControlNet support
- Add missing ControlNet_Checkpoint_ZImage_Config import
- Remove unused imports (Any, Dict, ADALN_EMBED_DIM, is_torch_version)
- Add strict=True to zip() calls
- Replace mutable list defaults with immutable tuples
- Replace dict() calls with literal syntax
- Sort imports in z_image_denoise.py
2025-12-21 18:50:43 +01:00
Alexander Eichhorn
8db8aa8594 Add Z-Image ControlNet V2.0 support
VRAM usage is high.

- Auto-detect control_in_dim from adapter weights (16 for V1, 33 for V2.0)
- Auto-detect n_refiner_layers from state dict
- Add zero-padding for V2.0's additional channels
- Use accelerate.init_empty_weights() for efficient model creation
- Add ControlNet_Checkpoint_ZImage_Config to frontend schema
2025-12-21 18:43:02 +01:00
Alexander Eichhorn
456d578f20 WIP not working.
feat: Add Z-Image ControlNet support with spatial conditioning

Add comprehensive ControlNet support for Z-Image models including:

Backend:
- New ControlNet_Checkpoint_ZImage_Config for Z-Image control adapter models
- Z-Image control key detection (_has_z_image_control_keys) to identify control layers
- ZImageControlAdapter loader for standalone control models
- ZImageControlTransformer2DModel combining base transformer with control layers
- Memory-efficient model loading by building combined state dict
2025-12-21 18:43:02 +01:00
Alexander Eichhorn
f417c269d1 fix(vae): Fix dtype mismatch in FP32 VAE decode mode
The previous mixed-precision optimization for FP32 mode only converted
some VAE decoder layers (post_quant_conv, conv_in, mid_block) to the
latents dtype while leaving others (up_blocks, conv_norm_out) in float32.
This caused "expected scalar type Half but found Float" errors after
recent diffusers updates.

Simplify FP32 mode to consistently use float32 for both VAE and latents,
removing the incomplete mixed-precision logic. This trades some VRAM
usage for stability and correctness.

Also removes now-unused attention processor imports.
2025-12-16 15:58:48 +01:00
Alexander Eichhorn
39cdcdc9e8 fix(z-image): remove unused WithMetadata and WithBoard mixins from denoise node
The Z-Image denoise node outputs latents, not images, so these mixins
were unnecessary. Metadata and board handling is correctly done in the
L2I (latents-to-image) node. This aligns with how FLUX denoise works.
2025-12-16 09:41:26 +01:00
blessedcoolant
8785d9a3a9 chore: fix ruff checks 2025-12-14 19:51:22 +05:30
Alexander Eichhorn
ba2475c3f0 fix(z-image): improve device/dtype compatibility and error handling
Add robust device capability detection for bfloat16, replacing hardcoded
dtype with runtime checks that fallback to float16/float32 on unsupported
hardware. This prevents runtime failures on GPUs and CPUs without bfloat16.

Key changes:
- Add TorchDevice.choose_bfloat16_safe_dtype() helper for safe dtype selection
- Fix LoRA device mismatch in layer_patcher.py (add device= to .to() call)
- Replace all assert statements with descriptive exceptions (TypeError/ValueError)
- Add hidden_states bounds check and apply_chat_template fallback in text encoder
- Add GGUF QKV tensor validation (divisible by 3 check)
- Fix CPU noise generation to use float32 for compatibility
- Remove verbose debug logging from LoRA conversion utils
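The fallback order can be sketched as a small decision function; capability flags are passed in explicitly so the sketch stays framework-free (the real helper queries torch at runtime):

```python
def choose_safe_dtype(device_type: str, bf16_supported: bool) -> str:
    """Pick an inference dtype the device can actually execute.

    Prefer bfloat16, fall back to float16 on CUDA devices without bf16
    support, and use float32 on CPU (where half precision is slow or
    unsupported).
    """
    if device_type == "cuda":
        return "bfloat16" if bf16_supported else "float16"
    if device_type == "mps":
        return "float16"  # assumption: treat Apple GPUs as lacking bf16
    return "float32"
```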
2025-12-09 07:37:06 +01:00
Alexander Eichhorn
841372944f feat(z-image): add metadata recall for VAE and Qwen3 encoder
Add support for saving and recalling Z-Image component models (VAE and
Qwen3 Encoder) in image metadata.

Backend:
- Add qwen3_encoder field to CoreMetadataInvocation (version 2.1.0)

Frontend:
- Add vae and qwen3_encoder to Z-Image graph metadata
- Add Qwen3EncoderModel metadata handler for recall
- Add ZImageVAEModel metadata handler (uses zImageVaeModelSelected
  instead of vaeSelected to set Z-Image-specific VAE state)
- Add qwen3Encoder translation key

This enables "Recall Parameters" / "Remix Image" to restore the VAE
and Qwen3 Encoder settings used for Z-Image generations.
2025-12-09 07:12:36 +01:00
Alexander Eichhorn
e9d52734d1 feat(z-image): add single-file checkpoint support for Z-Image models
Add support for loading Z-Image transformer and Qwen3 encoder models
from single-file safetensors format (in addition to existing diffusers
directory format).

Changes:
- Add Main_Checkpoint_ZImage_Config and Main_GGUF_ZImage_Config for
  single-file Z-Image transformer models
- Add Qwen3Encoder_Checkpoint_Config for single-file Qwen3 text encoder
- Add ZImageCheckpointModel and ZImageGGUFCheckpointModel loaders with
  automatic key conversion from original to diffusers format
- Add Qwen3EncoderCheckpointLoader using Qwen3ForCausalLM with fast
  loading via init_empty_weights and proper weight tying for lm_head
- Update z_image_denoise to accept Checkpoint format models
2025-12-09 06:32:51 +01:00
Alexander Eichhorn
9f6d04c690 Merge branch 'main' into feat/z-image-turbo-support 2025-12-05 00:45:02 +01:00
Alexander Eichhorn
280202908a feat: Add GGUF quantized Z-Image support and improve VAE/encoder flexibility
Add comprehensive support for GGUF quantized Z-Image models and improve component flexibility:

Backend:
- New Main_GGUF_ZImage_Config for GGUF quantized Z-Image transformers
- Z-Image key detection (_has_z_image_keys) to identify S3-DiT models
- GGUF quantization detection and sidecar LoRA patching for quantized models
- Qwen3Encoder_Qwen3Encoder_Config for standalone Qwen3 encoder models

Model Loader:
- Split Z-Image model
2025-12-02 20:31:11 +01:00
Alexander Eichhorn
6f9f8e57ac Feature(UI): bulk remove models loras (#8659)
* feat: Add bulk delete functionality for models, LoRAs, and embeddings

Implements a comprehensive bulk deletion feature for the model manager that allows users to select and delete multiple models, LoRAs, and embeddings at once.

Key changes:

Frontend:
- Add multi-selection state management to modelManagerV2 slice
- Update ModelListItem to support Ctrl/Cmd+Click multi-selection with checkboxes
- Create ModelListHeader component showing selection count and bulk actions
- Create BulkDeleteModelsModal for confirming bulk deletions
- Integrate bulk delete UI into ModelList with proper error handling
- Add API mutation for bulk delete operations

Backend:
- Add POST /api/v2/models/i/bulk_delete endpoint
- Implement BulkDeleteModelsRequest and BulkDeleteModelsResponse schemas
- Handle partial failures with detailed error reporting
- Return lists of successfully deleted and failed models

This feature significantly improves user experience when managing large model libraries, especially when restructuring model storage locations.

Fixes issue where users had to delete models individually after moving model files to new storage locations.

* fix: prevent model list header from scrolling with content

* fix: improve error handling in bulk model deletion

- Added proper error serialization using serialize-error for better error logging
- Explicitly defined BulkDeleteModelsResponse type instead of relying on generated schema reference

* refactor: improve code organization in ModelList components

- Reordered imports to follow conventional grouping (external, internal, then third-party utilities)
- Added type assertion for error serialization to satisfy TypeScript
- Extracted inline event handler into named callback function for better readability

* refactor: consolidate Button component props to single line

* feat(ui): enhance model manager bulk selection with select-all and actions menu

- Added select-all checkbox in navigation header with indeterminate state support
- Replaced single delete button with actions dropdown menu for future extensibility
- Made checkboxes always visible instead of conditionally showing on selection
- Moved model filtering logic to ModelListNavigation for select-all functionality
- Improved UX by showing selection state for filtered models only

* fix the wrong path separator from my windows system

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-01 20:09:27 -05:00
Alexander Eichhorn
f05ea28cbd feat: Add Z-Image LoRA support
Add comprehensive LoRA support for Z-Image models including:

Backend:
- New Z-Image LoRA config classes (LoRA_LyCORIS_ZImage_Config, LoRA_Diffusers_ZImage_Config)
- Z-Image LoRA conversion utilities with key mapping for transformer and Qwen3 encoder
- LoRA prefix constants (Z_IMAGE_LORA_TRANSFORMER_PREFIX, Z_IMAGE_LORA_QWEN3_PREFIX)
- LoRA detection logic to distinguish Z-Image from Flux models
- Layer patcher improvements for proper dtype conversion and parameter
2025-12-01 22:23:30 +01:00
Alexander Eichhorn
eb3f1c9a61 feat: Add Z-Image-Turbo model support
Add comprehensive support for Z-Image-Turbo (S3-DiT) models including:

Backend:
- New BaseModelType.ZImage in taxonomy
- Z-Image model config classes (ZImageTransformerConfig, Qwen3TextEncoderConfig)
- Model loader for Z-Image transformer and Qwen3 text encoder
- Z-Image conditioning data structures
- Step callback support for Z-Image with FLUX latent RGB factors

Invocations:
- z_image_model_loader: Load Z-Image transformer and Qwen3 encoder
- z_image_text_encoder: Encode prompts using Qwen3 with chat template
- z_image_denoise: Flow matching denoising with time-shifted sigmas
- z_image_image_to_latents: Encode images to 16-channel latents
- z_image_latents_to_image: Decode latents using FLUX VAE

Frontend:
- Z-Image graph builder for text-to-image generation
- Model picker and validation updates for z-image base type
- CFG scale now allows 0 (required for Z-Image-Turbo)
- Clip skip disabled for Z-Image (uses Qwen3, not CLIP)
- Optimal dimension settings for Z-Image (1024x1024)

Technical details:
- Uses Qwen3 text encoder (not CLIP/T5)
- 16 latent channels with FLUX-compatible VAE
- Flow matching scheduler with dynamic time shift
- 8 inference steps recommended for Turbo variant
- bfloat16 inference dtype
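The time-shifted sigma schedule mentioned above can be sketched with the common FLUX-style shift (the exact schedule Z-Image uses lives in z_image_denoise; this is an illustration):

```python
def time_shift(sigma: float, shift: float = 3.0) -> float:
    """Flow-matching time shift: sigma' = s*sigma / (1 + (s-1)*sigma).

    Fixed points: time_shift(0) == 0 and time_shift(1) == 1; values in
    between are pushed toward 1, so more steps are spent at high noise.
    """
    return shift * sigma / (1.0 + (shift - 1.0) * sigma)

def shifted_sigmas(num_steps: int, shift: float = 3.0) -> list[float]:
    """Linearly spaced sigmas from 1.0 down to 0.0, then time-shifted."""
    raw = [1.0 - i / num_steps for i in range(num_steps + 1)]
    return [time_shift(s, shift) for s in raw]
```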
2025-12-01 00:22:32 +01:00
dunkeroni
5642099a40 Feat: SDXL Color Compensation (#8637)
* feat(nodes/UI): add SDXL color compensation option

* adjust value

* Better warnings on wrong VAE base model

* Restrict XL compensation to XL models

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>

* fix: BaseModelType missing import

* (chore): appease the ruff

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-11-16 14:32:12 +00:00
Jonathan
abcc987f6f Rework graph.py (#8642)
* Rework graph, add documentation

* Minor fixes to README.md

* Updated schema

* Fixed test to match behavior - all nodes executed, parents before children

* Update invokeai/app/services/shared/graph.py

Cleaned up code

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>

* Change silent corrections to enforcing invariants

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2025-11-16 09:10:47 -05:00
Lincoln Stein
2fb4c92310 fix(mm): directory path leakage on scan folder error 2025-10-27 08:54:57 -04:00
psychedelicious
dcfd4ea756 feat(mm): reidentify models
Add route and model record service method to reidentify a model. This
re-probes the model files and replaces the model's config with the new
one if it does not error.
2025-10-16 10:33:02 +11:00
psychedelicious
563da9ee8e feat(mm): write warning README file to models dir 2025-10-16 08:08:44 +11:00
psychedelicious
7cfaca804a chore: ruff 2025-10-15 10:46:16 +11:00
psychedelicious
3d0f29f85f tidy: app "config", settings modal, infill methods
We had an "infill methods" route that long ago told the frontend infill
method, upscale method (model), NSFW checker, and watermark feature
availability.

None of these were used except for the patchmatch check. Removed them,
made the check exclusively for patchmatch, updated related code in redux
app startup listeners and settings modal.
2025-10-15 10:46:16 +11:00
psychedelicious
240dc673e4 tidy: removing unused code paths 6 2025-10-15 10:46:16 +11:00
psychedelicious
b2e93d7be7 tidy: removing unused code paths 5 2025-10-15 10:46:16 +11:00
psychedelicious
906ec4519d tidy: removing unused code paths 2 2025-10-15 10:46:16 +11:00
psychedelicious
7cff5da2c0 tidy: removing unused code paths 1 2025-10-15 10:46:16 +11:00
psychedelicious
454d05bbde refactor: model manager v3 (#8607)
* feat(mm): add UnknownModelConfig

* refactor(ui): move model categorisation-ish logic to central location, simplify model manager models list

* refactor(ui): more cleanup of model categories

* refactor(ui): remove unused excludeSubmodels

I can't remember what this was for and don't see any reference to it.
Maybe it's just remnants from a previous implementation?

* feat(nodes): add unknown as model base

* chore(ui): typegen

* feat(ui): add unknown model base support in ui

* feat(ui): allow changing model type in MM, fix up base and variant selects

* feat(mm): omit model description instead of making it "base type filename model"

* feat(app): add setting to allow unknown models

* feat(ui): allow changing model format in MM

* feat(app): add the installed model config to install complete events

* chore(ui): typegen

* feat(ui): toast warning when installed model is unidentified

* docs: update config docstrings

* chore(ui): typegen

* tests(mm): fix test for MM, leave the UnknownModelConfig class in the list of configs

* tidy(ui): prefer types from zod schemas for model attrs

* chore(ui): lint

* fix(ui): wrong translation string

* feat(mm): normalized model storage

Store models in a flat directory structure. Each model is in a dir named
its unique key (a UUID). Inside that dir is either the model file or the
model dir.

* feat(mm): add migration to flat model storage

* fix(mm): normalized multi-file/diffusers model installation no worky

now worky

* refactor: port MM probes to new api

- Add concept of match certainty to new probe
- Port CLIP Embed models to new API
- Fiddle with stuff

* feat(mm): port TIs to new API

* tidy(mm): remove unused probes

* feat(mm): port spandrel to new API

* fix(mm): parsing for spandrel

* fix(mm): loader for clip embed

* fix(mm): tis use existing weight_files method

* feat(mm): port vae to new API

* fix(mm): vae class inheritance and config_path

* tidy(mm): patcher types and import paths

* feat(mm): better errors when invalid model config found in db

* feat(mm): port t5 to new API

* feat(mm): make config_path optional

* refactor(mm): simplify model classification process

Previously, we had a multi-phase strategy to identify models from their
files on disk:
1. Run each model config classes' `matches()` method on the files. It
checks if the model could possibly be an identified as the candidate
model type. This was intended to be a quick check. Break on the first
match.
2. If we have a match, run the config class's `parse()` method. It
derives some additional model config attrs from the model files. This was
intended to encapsulate heavier operations that may require loading the
model into memory.
3. Derive the common model config attrs, like name, description,
calculate the hash, etc. Some of these are also heavier operations.

This strategy has some issues:
- It is not clear how the pieces fit together. There is some
back-and-forth between different methods and the config base class. It
is hard to trace the flow of logic until you fully wrap your head around
the system and therefore difficult to add a model architecture to the
probe.
- The assumption that we could do quick, lightweight checks before
heavier checks is incorrect. We often _must_ load the model state dict
in the `matches()` method. So there is no practical perf benefit to
splitting up the responsibility of `matches()` and `parse()`.
- Sometimes we need to do the same checks in `matches()` and `parse()`.
In these cases, splitting the logic has a negative perf impact
because we are doing the same work twice.
- As we introduce the concept of an "unknown" model config (i.e. a model
that we cannot identify, but still record in the db; see #8582), we will
_always_ run _all_ the checks for every model. Therefore we need not try
to defer heavier checks or resource-intensive ops like hashing. We are
going to do them anyways.
- There are situations where a model may match multiple configs. One
known case are SD pipeline models with merged LoRAs. In the old probe
API, we relied on the implicit order of checks to know that if a model
matched for pipeline _and_ LoRA, we prefer the pipeline match. But, in
the new API, we do not have this implicit ordering of checks. To resolve
this in a resilient way, we need to get all matches up front, then use
tie-breaker logic to figure out which should win (or add "differential
diagnosis" logic to the matchers).
- Field overrides weren't handled well by this strategy. They were only
applied at the very end, if a model matched successfully. This means we
cannot tell the system "Hey, this model is type X with base Y. Trust me
bro.". We cannot override the match logic. As we move towards letting
users correct mis-identified models (see #8582), this is a requirement.

We can simplify the process significantly and better support "unknown"
models.

Firstly, model config classes now have a single `from_model_on_disk()`
method that attempts to construct an instance of the class from the
model files. This replaces the `matches()` and `parse()` methods.

If we fail to create the config instance, a special exception is raised
that indicates why we think the files cannot be identified as the given
model config class.

Next, the flow for model identification is a bit simpler:
- Derive all the common fields up-front (name, desc, hash, etc).
- Merge in overrides.
- Call `from_model_on_disk()` for every config class, passing in the
fields. Overrides are handled in this method.
- Record the results for each config class and choose the best one.

The identification logic is a bit more verbose, with the special
exceptions and handling of overrides, but it is very clear what is
happening.
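The flow reads roughly like this (hypothetical class and exception names, sketching the loop rather than the real config registry):

```python
class NotAMatchError(Exception):
    """Raised when files cannot be identified as the candidate config class."""

class FooConfig:
    @classmethod
    def from_model_on_disk(cls, mod, fields):
        raise NotAMatchError("state dict lacks foo keys")

class UnknownModelConfig:
    @classmethod
    def from_model_on_disk(cls, mod, fields):
        return cls()

def identify(mod, fields, config_classes):
    """Try every config class, keep the successes, fall back to unknown."""
    matches = []
    for config_cls in config_classes:
        try:
            matches.append(config_cls.from_model_on_disk(mod, fields))
        except NotAMatchError:
            continue  # the real implementation records why each class failed
    # Tie-breaker logic would choose the best of several matches here.
    return matches[0] if matches else UnknownModelConfig.from_model_on_disk(mod, fields)
```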

The one downside I can think of for this strategy is we do need to check
every model type, instead of stopping at the first match. It's a bit
less efficient. In practice, however, this isn't a hot code path, and
the improved clarity is worth far more than perf optimizations that the
end user will likely never notice.

* refactor(mm): remove unused methods in config.py

* refactor(mm): add model config parsing utils

* fix(mm): abstractmethod bork

* tidy(mm): clarify that model id utils are private

* fix(mm): fall back to UnknownModelConfig correctly

* feat(mm): port CLIPVisionDiffusersConfig to new api

* feat(mm): port SigLIPDiffusersConfig to new api

* feat(mm): make match helpers more succinct

* feat(mm): port flux redux to new api

* feat(mm): port ip adapter to new api

* tidy(mm): skip optimistic override handling for now

* refactor(mm): continue iterating on config

* feat(mm): port flux "control lora" and t2i adapter to new api

* tidy(ui): use Extract to get model config types

* fix(mm): t2i base determination

* feat(mm): port cnet to new api

* refactor(mm): add config validation utils, make it all consistent and clean

* feat(mm): wip port of main models to new api

* feat(mm): wip port of main models to new api

* feat(mm): wip port of main models to new api

* docs(mm): add todos

* tidy(mm): removed unused model merge class

* feat(mm): wip port main models to new api

* tidy(mm): clean up model heuristic utils

* tidy(mm): clean up ModelOnDisk caching

* tidy(mm): flux lora format util

* refactor(mm): make config classes narrow

Simpler identification logic, less complexity when adding a new model, fewer
useless attrs that do not relate to the model arch, etc

* refactor(mm): diffusers loras


* feat(mm): consistent naming for all model config classes

* fix(mm): tag generation & scattered probe fixes

* tidy(mm): consistent class names

* refactor(mm): split configs into separate files

* docs(mm): add comments for identification utils

* chore(ui): typegen

* refactor(mm): remove legacy probe, new configs dir structure, update imports

* fix(mm): inverted condition

* docs(mm): update docsstrings in factory.py

* docs(mm): document flux variant attr

* feat(mm): add helper method for legacy configs

* feat(mm): satisfy type checker in flux denoise

* docs(mm): remove extraneous comment

* fix(mm): ensure unknown model configs get unknown attrs

* fix(mm): t5 identification

* fix(mm): sdxl ip adapter identification

* feat(mm): more flexible config matching utils

* fix(mm): clip vision identification

* feat(mm): add sanity checks before probing paths

* docs(mm): add reminder for self for field migrations

* feat(mm): clearer naming for main config class hierarchy

* feat(mm): fix clip vision starter model bases, add ref to actual models

* feat(mm): add model config schema migration logic

* fix(mm): duplicate import

* refactor(mm): split big migration into 3

Split the big migration that did all of these things into 3:

- Migration 22: Remove unique constraint on base/name/type in models
table
- Migration 23: Migrate configs to v6.8.0 schemas
- Migration 24: Normalize file storage

* fix(mm): pop base/type/format when creating unknown model config

* fix(db): migration 22 insert only real cols

* fix(db): migration 23 fall back to unknown model when config change fails

* feat(db): run migrations 23 and 24

* fix(mm): false negative on flux lora

* fix(mm): vae checkpoint probe checking for dir instead of file

* fix(mm): ModelOnDisk skips dirs when looking for weights

Previously a path w/ any of the known weights suffixes would be seen as
a weights file, even if it was a directory. We now check to ensure the
candidate path is actually a file before adding it to the list of
weights.
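The fix amounts to an `is_file()` check alongside the suffix test; a minimal sketch (illustrative helper, not the ModelOnDisk method itself):

```python
from pathlib import Path

WEIGHT_SUFFIXES = {".safetensors", ".bin", ".pt", ".ckpt", ".gguf"}

def find_weight_files(root: str) -> list[Path]:
    """Collect weight files under root, skipping directories.

    A directory named e.g. `model.safetensors` has a weights suffix but is
    not a weights file, so the is_file() check is essential.
    """
    return sorted(
        p for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in WEIGHT_SUFFIXES
    )
```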

* feat(mm): add method to get main model defaults from a base

* feat(mm): do not log when multiple non-unknown model matches

* refactor(mm): continued iteration on model identification

* tests(mm): refactor model identification tests

Overhaul of model identification (probing) tests. Previously we didn't
test the correctness of probing except in a few narrow cases - now we
do.

See tests/model_identification/README.md for a detailed overview of the
new test setup. It includes instructions for adding a new test case. In
brief:

- Download the model you want to add as a test case
- Run a script against it to generate the test model files
- Fill in the expected model type/format/base/etc in the generated test
metadata JSON file

Included test cases:
- All starter models
- A handful of other models that I had installed
- Models present in the previous test cases as smoke tests, now also
tested for correctness

* fix(mm): omit type/format/base when creating unknown config instance

* feat(mm): use ValueError for model id sanity checks

* feat(mm): add flag for updating models to allow class changes

* tests(mm): fix remaining MM tests

* feat: allow users to edit models freely

* feat(ui): add warning for model settings edit

* tests(mm): flux state dict tests

* tidy: remove unused file

* fix(mm): lora state dict loading in model id

* feat(ui): use translation string for model edit warning

* docs(db): update version numbers in migration comments

* chore: bump version to v6.9.0a1

* docs: update model id readme

* tests(mm): attempt to fix windows model id tests

* fix(mm): issue with deleting single file models

* feat(mm): just delete the dir w/ rmtree when deleting model

* tests(mm): windows CI issue

* fix(ui): typegen schema sync

* fix(mm): fixes for migration 23

- Handle CLIP Embed and Main SD models missing variant field
- Handle errors when calling the discriminator function, previously only
handled ValidationError but it could be a ValueError or something else
- Better logging for config migration

* chore: bump version to v6.9.0a2

* chore: bump version to v6.9.0a3
2025-10-15 10:18:53 +11:00
dunkeroni
bd4bb075a5 bump node version to 2.0.0 2025-10-09 17:55:13 +11:00