Commit Graph

44 Commits

Author  SHA1  Message  Date
Billy  3cd4306eec  Update import path  2025-06-26 19:47:06 +10:00
Billy  2832ca300f  Formatting  2025-06-24 07:26:42 +10:00
Billy  de5f413440  Filter bundle_emb for all LoRAs  2025-06-24 07:12:11 +10:00
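The bundle_emb filter above strips bundled-embedding tensors out of a LoRA state dict before the remaining keys are treated as weight patches. A minimal sketch of that kind of filter, assuming bundled embeddings live under keys prefixed with `bundle_emb` (the prefix and helper name are assumptions, not the repository's exact API):

```python
from typing import Dict

import torch


def filter_bundle_emb(state_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
    """Drop bundled-embedding keys so only LoRA weight-patch keys remain.

    Assumes bundled embeddings are stored under a 'bundle_emb' key prefix.
    """
    return {k: v for k, v in state_dict.items() if not k.startswith("bundle_emb")}
```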
Billy  150a876c73  Formatting  2025-06-23 13:52:19 +10:00
Billy  62c3b01e4f  Merge branch 'main' into OMI  2025-06-23 13:52:07 +10:00
Billy  e1157f343b  Support for Flux and SDXL  2025-06-23 13:51:16 +10:00
Billy  4ee54eac1d  Another attempt  2025-06-20 14:10:06 +10:00
Billy  1fd83f5e68  Import  2025-06-19 11:01:50 +10:00
Billy  637487c573  Convert FROM OMI to diffusers  2025-06-19 11:00:27 +10:00
Billy  4e98e7d0a2  Typo: dot should be comma  2025-06-19 10:47:24 +10:00
Billy  12f65d800d  Formatting  2025-06-19 09:40:58 +10:00
Billy  45d09f8f51  Use OMI conversion utils  2025-06-19 09:40:49 +10:00
Billy  9b4fdb493e  Loader  2025-06-18 10:53:54 +10:00
Billy  47e21d6e04  Formatting  2025-06-17 13:56:38 +10:00
Billy  84ab4a1c30  Convert from OMI to default LoRA state dict  2025-06-17 13:56:22 +10:00
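The OMI commits above translate OMI-format LoRA keys into the default key layout the loader already understands. A hypothetical sketch of that kind of key translation; the prefix names and mapping here are illustrative assumptions, not the actual OMI specification or the repository's conversion utils:

```python
from typing import Dict

import torch

# Illustrative only: a hypothetical OMI-prefix -> default-prefix mapping.
_OMI_TO_DEFAULT_PREFIX = {
    "omi.unet.": "lora_unet_",
    "omi.text_encoder.": "lora_te_",
}


def convert_omi_to_default(state_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
    """Rewrite OMI-style keys into the default LoRA key layout."""
    converted: Dict[str, torch.Tensor] = {}
    for key, tensor in state_dict.items():
        for omi_prefix, default_prefix in _OMI_TO_DEFAULT_PREFIX.items():
            if key.startswith(omi_prefix):
                key = default_prefix + key[len(omi_prefix):]
                break
        converted[key] = tensor
    return converted
```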
Kevin Turner  2981591c36  test: add some aitoolkit lora tests  2025-06-16 19:08:11 +10:00
Kevin Turner  ab8c739cd8  fix(LoRA): add ai-toolkit to lora loader  2025-06-16 19:08:11 +10:00
Billy  182580ff69  Imports  2025-03-26 12:55:10 +11:00
Ryan Dick  0db6639b4b  Add FLUX OneTrainer model probing.  2025-01-28 14:51:35 +00:00
Ryan Dick  d30a9ced38  Rename model_cache_default.py -> model_cache.py.  2024-12-24 14:23:18 +00:00
Ryan Dick  e0bfa6157b  Remove ModelCacheBase.  2024-12-24 14:23:18 +00:00
Brandon Rising  c9b2cce627  Add diffusers config object for control loras  2024-12-17 14:01:41 -05:00
Ryan Dick  41664f88db  Rename backend/patches/conversions/ to backend/patches/lora_conversions/  2024-12-17 13:20:19 +00:00
Ryan Dick  42f8d6aa11  Rename backend/lora/ to backend/patches  2024-12-17 13:20:19 +00:00
Brandon Rising  046d19446c  Rename Structural Lora to Control Lora  2024-12-17 07:28:45 -05:00
Brandon Rising  f3b253987f  Initial setup for flux tools control loras  2024-12-17 07:28:45 -05:00
Ryan Dick  e88d3cf2f7  Assume alpha=rank for FLUX diffusers PEFT LoRA models.  2024-09-16 13:57:07 +00:00
Ryan Dick  81fbaf2b8b  Assume LoRA alpha=8 for FLUX diffusers PEFT LoRAs.  2024-09-15 04:39:56 +03:00
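The two alpha commits above concern the LoRA scaling factor: a patched weight is W + (alpha / rank) * (B @ A), so when a PEFT-format file carries no explicit alpha, the loader must assume one. 81fbaf2b8b hard-coded alpha=8; e88d3cf2f7 replaced that with alpha=rank, which makes the scale factor exactly 1. A minimal sketch of the arithmetic (the function and argument names are illustrative):

```python
from typing import Optional

import torch


def apply_lora_delta(
    weight: torch.Tensor,  # (out_features, in_features)
    up: torch.Tensor,      # B: (out_features, rank)
    down: torch.Tensor,    # A: (rank, in_features)
    alpha: Optional[float],
) -> torch.Tensor:
    """Return weight + (alpha / rank) * (up @ down).

    If the file carries no alpha, assume alpha == rank (a scale of 1.0),
    per commit e88d3cf2f7.
    """
    rank = down.shape[0]
    scale = (alpha if alpha is not None else rank) / rank
    return weight + scale * (up @ down)
```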
Ryan Dick  5800e60b06  Add model probe support for FLUX LoRA models in Diffusers format.  2024-09-15 04:39:56 +03:00
Ryan Dick  cf9f30cc56  Rename flux_kohya_lora_conversion_utils.py  2024-09-15 04:39:56 +03:00
Ryan Dick  50c9410121  WIP  2024-09-15 04:39:56 +03:00
Ryan Dick  db61ec4322  Get probing of FLUX LoRA kohya models working.  2024-09-15 04:39:56 +03:00
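Probing here means identifying a LoRA file's format from its state-dict keys rather than from metadata. A rough sketch of the idea, assuming the conventional key patterns (kohya files use lora_up.weight/lora_down.weight, PEFT/diffusers files use lora_A/lora_B); the exact heuristics in the repository may differ:

```python
from typing import Dict

import torch


def probe_lora_format(state_dict: Dict[str, torch.Tensor]) -> str:
    """Guess a LoRA file's format from its key naming conventions."""
    keys = state_dict.keys()
    if any(".lora_up.weight" in k or ".lora_down.weight" in k for k in keys):
        return "kohya"
    if any(".lora_A." in k or ".lora_B." in k for k in keys):
        return "diffusers-peft"
    raise ValueError("Unrecognized LoRA state dict format.")
```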
Ryan Dick  04b37e64ea  Move the responsibilities of 1) state_dict loading from file, and 2) SDXL lora key conversions, out of LoRAModelRaw and into LoRALoader.  2024-09-15 04:39:56 +03:00
Ryan Dick  2b3e4e123d  Split LoRA layer implementations into separate files.  2024-09-12 15:53:30 +00:00
Ryan Dick  9da5925287  Add ruff rule to disallow relative parent imports.  2024-07-04 09:35:37 -04:00
Lincoln Stein  3e0fb45dd7  Load single-file checkpoints directly without conversion (#6510)  2024-06-27 17:31:28 -04:00
    * use model_class.load_singlefile() instead of converting; works, but performance is poor (sketched below)
    * adjust the convert api - not right just yet
    * working, needs sql migrator update
    * rename migration_11 before conflict merge with main
    * Update invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py (Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>)
    * Update invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py (Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>)
    * implement lightweight version-by-version config migration (sketched below)
    * simplified config schema migration code
    * associate sdxl config with sdxl VAEs
    * remove use of original_config_file in load_single_file()
    Co-authored-by: Lincoln Stein <lstein@gmail.com>
    Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
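The PR above loads single-file checkpoints (.safetensors/.ckpt) directly instead of converting them to the diffusers folder layout first. The commit text calls the method model_class.load_singlefile(); in the diffusers library the corresponding entry point is the from_single_file() classmethod, used here as an assumption about what the commit wraps (the path is a placeholder):

```python
from diffusers import StableDiffusionXLPipeline

# Load a single-file checkpoint directly; nothing is converted or
# written back to disk in the diffusers folder format.
pipe = StableDiffusionXLPipeline.from_single_file(
    "/path/to/checkpoint.safetensors"  # placeholder path
)
```

The "lightweight version-by-version config migration" bullet describes stepping a stored config through one migration function per schema version until it reaches the current one. A generic sketch of that pattern; the version numbers and fields are hypothetical, not the repository's actual schema:

```python
from typing import Any, Callable, Dict

CURRENT_VERSION = 3  # hypothetical current schema version

# One migration step per schema version: version N -> version N + 1.
MIGRATIONS: Dict[int, Callable[[Dict[str, Any]], Dict[str, Any]]] = {
    1: lambda cfg: {**cfg, "vae": cfg.get("vae", "default"), "version": 2},
    2: lambda cfg: {**cfg, "scheduler": "ddim", "version": 3},
}


def migrate_config(config: Dict[str, Any]) -> Dict[str, Any]:
    """Apply migrations one version at a time until the config is current."""
    while config.get("version", 1) < CURRENT_VERSION:
        config = MIGRATIONS[config.get("version", 1)](config)
    return config
```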
Lincoln Stein  3d6d89feb4  [mm] Do not write diffuser model to disk when convert_cache set to zero (#6072)  2024-03-29 16:11:08 -04:00
    * pass model config to _load_model
    * make conversion work again
    * do not write diffusers to disk when convert_cache set to 0
    * adding same model to cache twice is a no-op, not an assertion error (see the sketch below)
    * fix issues identified by psychedelicious during pr review
    * following conversion, avoid redundant read of cached submodels
    * fix error introduced while merging
    Co-authored-by: Lincoln Stein <lstein@gmail.com>
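The no-op-on-duplicate behavior above means inserting a model that is already resident must return quietly instead of tripping an assertion. A hypothetical sketch of that contract; the class and method names are illustrative, not InvokeAI's actual cache API:

```python
from typing import Any, Dict


class ModelRAMCache:
    """Illustrative in-memory model cache keyed by model ID."""

    def __init__(self) -> None:
        self._models: Dict[str, Any] = {}

    def put(self, key: str, model: Any) -> None:
        # Adding the same model twice is a no-op, not an assertion error.
        if key in self._models:
            return
        self._models[key] = model
```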
psychedelicious  132790eebe  tidy(nodes): use canonical capitalizations  2024-03-07 10:56:59 +11:00
psychedelicious  7c9128b253  tidy(mm): use canonical capitalization for all model-related enums, classes  2024-03-05 23:50:19 +11:00
    For example, "Lora" -> "LoRA", "Vae" -> "VAE".
psychedelicious  dd9daf8efb  chore: ruff  2024-03-01 10:42:33 +11:00
psychedelicious  5a3195f757  final tidying before marking PR as ready for review  2024-03-01 10:42:33 +11:00
    - Replace AnyModelLoader with ModelLoaderRegistry
    - Fix type check errors in multiple files
    - Remove apparently unneeded `get_model_config_enum()` method from model manager
    - Remove last vestiges of old model manager
    - Updated tests and documentation
    - Resolve conflict with seamless.py
Lincoln Stein  5d612ec095  Tidy names and locations of modules  2024-03-01 10:42:33 +11:00
    - Rename old "model_management" directory to "model_management_OLD" in order to catch dangling references to the original model manager.
    - Caught and fixed most dangling references (still checking)
    - Rename lora, textual_inversion and model_patcher modules
    - Introduce a RawModel base class to simplify the Union returned by the model loaders (see the sketch below).
    - Tidy up the model manager 2-related tests. Add useful fixtures, and a finalizer to the queue and installer fixtures that will stop the services and release threads.
Lincoln Stein  0d3addc69b  added textual inversion and lora loaders  2024-03-01 10:42:33 +11:00
Lincoln Stein  67eb715093  loaders for main, controlnet, ip-adapter, clipvision and t2i  2024-03-01 10:42:33 +11:00