Kent Keirsey | af58a75e97 | Support PEFT Loras with Base_Model.model prefix (#8433) | 2025-08-18 09:14:46 -04:00
    * Support PEFT Loras with Base_Model.model prefix
    * update tests
    * ruff
    * fix python complaints
    * update keys
    * format keys
    * remove unneeded test
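Note: PEFT-exported LoRA state dicts typically prefix every weight name with "base_model.model." (keys of the form base_model.model.<module path>.lora_A.weight), so a loader that matches layers by module path has to strip that prefix before grouping keys. The sketch below only illustrates that convention; it is not the project's actual loader code, and the helper name normalize_peft_keys is made up here.

    # Illustrative sketch only -- not InvokeAI's loader implementation.
    from typing import Dict

    import torch

    PEFT_PREFIX = "base_model.model."

    def normalize_peft_keys(state_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
        """Return a copy of the state dict with the PEFT wrapper prefix removed."""
        normalized: Dict[str, torch.Tensor] = {}
        for key, value in state_dict.items():
            if key.startswith(PEFT_PREFIX):
                key = key[len(PEFT_PREFIX):]
            normalized[key] = value
        return normalized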
Kevin Turner | 312960645b | fix: move AI Toolkit to the bottom of the detection list to avoid disrupting already-working LoRA | 2025-06-16 19:08:11 +10:00
Kevin Turner | a214f4fff5 | fix: group aitoolkit lora layers | 2025-06-16 19:08:11 +10:00
Kevin Turner | 2981591c36 | test: add some aitoolkit lora tests | 2025-06-16 19:08:11 +10:00
Kevin Turner | b08f90c99f | WIP!: …they weren't in diffusers format… | 2025-06-16 19:08:11 +10:00
Kevin Turner | ab8c739cd8 | fix(LoRA): add ai-toolkit to lora loader | 2025-06-16 19:08:11 +10:00
Kevin Turner | 5c5108c28a | feat(LoRA): support AI Toolkit LoRA for FLUX [WIP] | 2025-06-16 19:08:11 +10:00
Kevin Turner | 68108435ae | feat(LoRA): allow LoRA layer patcher to continue past unknown layers | 2025-05-30 13:29:02 +10:00
Kent Keirsey | 1f63b60021 | Implementing support for Non-Standard LoRA Format (#7985) | 2025-05-05 09:40:38 -04:00
    * integrate loRA
    * idk anymore tbh
    * enable fused matrix for quantized models
    * integrate loRA
    * idk anymore tbh
    * enable fused matrix for quantized models
    * ruff fix
    ---------
    Co-authored-by: Sam <bhaskarmdutt@gmail.com>
    Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Billy | 323d409fb6 | Make ruff happy | 2025-03-27 17:47:57 +11:00
Billy | f251722f56 | LoRA classification API | 2025-03-27 17:47:01 +11:00
Billy | f2689598c0 | Formatting | 2025-03-06 09:11:00 +11:00
Ryan Dick | 6c919e1bca | Handle DoRA layer device casting when model is partially-loaded. | 2025-01-28 14:51:35 +00:00
Ryan Dick | 5357d6e08e | Rename ConcatenatedLoRALayer to MergedLayerPatch. And other minor cleanup. | 2025-01-28 14:51:35 +00:00
Ryan Dick | 7fef569e38 | Update frontend graph building logic to support FLUX LoRAs that modify the T5 encoder weights. | 2025-01-28 14:51:35 +00:00
Ryan Dick | e7fb435cc5 | Update DoRALayer with a custom get_parameters() override that 1) applies alpha scaling to delta_v, and 2) warns if the base model is incompatible. | 2025-01-28 14:51:35 +00:00
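Note: in DoRA, the base weight is re-expressed as a learned per-output-channel magnitude times a normalized direction, and the low-rank update (delta_v) is added to the direction before normalization. The sketch below shows the usual merged-weight arithmetic with alpha/rank scaling applied to delta_v, as described in the DoRA paper; it is a simplified illustration, not InvokeAI's DoRALayer, and the tensor layout and normalization axis follow a common Linear-layer convention.

    # Simplified DoRA weight-delta computation (illustrative, not InvokeAI code).
    import torch

    def dora_weight_delta(
        orig_weight: torch.Tensor,  # (out_features, in_features) base weight
        lora_up: torch.Tensor,      # (out_features, rank)
        lora_down: torch.Tensor,    # (rank, in_features)
        alpha: float,
        magnitude: torch.Tensor,    # (out_features, 1) learned magnitude vector
    ) -> torch.Tensor:
        rank = lora_down.shape[0]
        # Alpha scaling is applied to the low-rank direction update (delta_v).
        delta_v = (alpha / rank) * (lora_up @ lora_down)
        direction = orig_weight + delta_v
        # Normalize per output channel, then rescale by the learned magnitude.
        norm = direction.norm(dim=1, keepdim=True)
        new_weight = magnitude * (direction / norm)
        return new_weight - orig_weight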
Ryan Dick | 5d472ac1b8 | Move quantized weight handling for patch layers up from ConcatenatedLoRALayer to CustomModuleMixin. | 2025-01-28 14:51:35 +00:00
Ryan Dick | 28514ba59a | Update ConcatenatedLoRALayer to work with all sub-layer types. | 2025-01-28 14:51:35 +00:00
Ryan Dick | b8eed2bdcb | Relax lora_layers_from_flux_diffusers_grouped_state_dict(...) so that it can work with more LoRA variants (e.g. hada) | 2025-01-28 14:51:35 +00:00
Ryan Dick | 1054283f5c | Fix bug in FLUX T5 Kohya-style LoRA key parsing. | 2025-01-28 14:51:35 +00:00
Ryan Dick | 409b69ee5d | Fix typo in DoRALayer. | 2025-01-28 14:51:35 +00:00
Ryan Dick | 206f261e45 | Add utils for loading FLUX OneTrainer DoRA models. | 2025-01-28 14:51:35 +00:00
Ryan Dick | 7eee4da896 | Further updates to lora_model_from_flux_diffusers_state_dict() so that it can be re-used for OneTrainer LoRAs. | 2025-01-28 14:51:35 +00:00
Ryan Dick | 908976ac08 | Add support for LyCoris-style LoRA keys in lora_model_from_flux_diffusers_state_dict(). Previously, it only supported PEFT-style LoRA keys. | 2025-01-28 14:51:35 +00:00
Ryan Dick | dfa253e75b | Add utils for working with Kohya LoRA keys. | 2025-01-28 14:51:35 +00:00
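Note: Kohya-style LoRA checkpoints flatten the module path into an underscore-separated prefix, with keys of the form lora_unet_<flattened module path>.lora_down.weight plus matching .lora_up.weight and .alpha entries. The sketch below only illustrates grouping such keys by module prefix; group_kohya_keys is a hypothetical helper, not the util added in this commit.

    # Illustrative grouping of Kohya-style LoRA keys (not the project's util).
    from collections import defaultdict
    from typing import Dict

    import torch

    KOHYA_SUFFIXES = (".lora_down.weight", ".lora_up.weight", ".alpha")

    def group_kohya_keys(state_dict: Dict[str, torch.Tensor]) -> Dict[str, Dict[str, torch.Tensor]]:
        """Group keys like 'lora_unet_..._attn_qkv.lora_down.weight' by module prefix."""
        grouped: Dict[str, Dict[str, torch.Tensor]] = defaultdict(dict)
        for key, tensor in state_dict.items():
            for suffix in KOHYA_SUFFIXES:
                if key.endswith(suffix):
                    prefix = key[: -len(suffix)]
                    grouped[prefix][suffix.lstrip(".")] = tensor
                    break
        return dict(grouped)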
Ryan Dick | 4f369e3dfb | First draft of DoRALayer. Not tested yet. | 2025-01-28 14:51:35 +00:00
Ryan Dick | 5bd6428fdd | Add is_state_dict_likely_in_flux_onetrainer_format() util function. | 2025-01-28 14:51:35 +00:00
Ryan Dick | f88c1ba0c3 | Fix bug with some LoRA variants when applied to a BnB NF4 quantized model. Note the previous commit which added a unit test to trigger this bug. | 2025-01-22 09:20:40 +11:00
Ryan Dick | 2619ef53ca | Handle device casting in ia2_layer.py. | 2025-01-07 00:31:00 +00:00
Ryan Dick | 6fd9b0a274 | Delete old sidecar wrapper implementation. This functionality has moved into the custom layers. | 2024-12-29 17:33:08 +00:00
Ryan Dick | 6d49ee839c | Switch the LayerPatcher to use 'custom modules' to manage layer patching. | 2024-12-29 01:18:30 +00:00
Ryan Dick | 2855bb6b41 | Update BaseLayerPatch.get_parameters(...) to accept a dict of orig_parameters rather than orig_module. This will enable compatibility between patching and cpu->gpu streaming. | 2024-12-28 21:12:53 +00:00
Ryan Dick | 6d7314ac0a | Consolidate the LayerPatching patching modes into a single implementation. | 2024-12-24 15:57:54 +00:00
Ryan Dick | 80db9537ff | Rename model_patcher.py -> layer_patcher.py. | 2024-12-24 15:57:54 +00:00
Ryan Dick | 6f926f05b0 | Update apply_smart_model_patches() so that layer restore matches the behavior of non-smart mode. | 2024-12-24 15:57:54 +00:00
Ryan Dick | cefcb340d9 | Add LoRAPatcher.smart_apply_lora_patches() | 2024-12-24 15:57:54 +00:00
Ryan Dick | b272d46056 | Enable ability to control the weight of FLUX Control LoRAs. | 2024-12-17 13:36:10 +00:00
Ryan Dick | dd09509dbd | Rename ModelPatcher -> LayerPatcher to avoid conflicts with another ModelPatcher definition. | 2024-12-17 13:20:19 +00:00
Ryan Dick | 7fad4c9491 | Rename LoRAModelRaw to ModelPatchRaw. | 2024-12-17 13:20:19 +00:00
Ryan Dick | b820862eab | Rename ModelPatcher methods to reflect that they are general model patching methods and are not LoRA-specific. | 2024-12-17 13:20:19 +00:00
Ryan Dick | c604a0956e | Rename LoRAPatcher -> ModelPatcher. | 2024-12-17 13:20:19 +00:00
Ryan Dick | 80f64abd1e | Use a FluxControlLoRALayer when loading FLUX control LoRAs. | 2024-12-17 13:20:19 +00:00
Ryan Dick | 37e3089457 | Push LoRA layer reshaping down into the patch layers and add a new FluxControlLoRALayer type. | 2024-12-17 13:20:19 +00:00
Ryan Dick | fe09f2d27a | Move handling of LoRA scale and patch weight down into the layer patch classes. | 2024-12-17 13:20:19 +00:00
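Note: the conventional scaling for a plain LoRA layer multiplies the low-rank product by the checkpoint's alpha/rank and by the user-facing patch weight (LoRA strength). The snippet below is a generic illustration of that arithmetic; the names do not mirror InvokeAI's layer patch classes.

    # Generic LoRA delta scaling (illustrative only).
    import torch

    def scaled_lora_delta(
        lora_up: torch.Tensor,    # (out_features, rank)
        lora_down: torch.Tensor,  # (rank, in_features)
        alpha: float,
        patch_weight: float = 1.0,
    ) -> torch.Tensor:
        rank = lora_down.shape[0]
        return patch_weight * (alpha / rank) * (lora_up @ lora_down)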
Ryan Dick | e7e3f7e144 | Ensure that patches are on the correct device when used in sidecar wrappers. | 2024-12-17 13:20:19 +00:00
Ryan Dick | 606d58d7db | Add sidecar wrapper for FLUX RMSNorm layers to support SetParameterLayers used by FLUX structural control LoRAs. | 2024-12-17 13:20:19 +00:00
Ryan Dick | c76a448846 | Delete old sidecar_layers/ dir. | 2024-12-17 13:20:19 +00:00
Ryan Dick | 46133b5656 | Switch LoRAPatcher to use the new sidecar_wrappers/ rather than sidecar_layers/. | 2024-12-17 13:20:19 +00:00
Ryan Dick | ac28370fd2 | Break up functions in LoRAPatcher in preparation for more refactoring. | 2024-12-17 13:20:19 +00:00
Ryan Dick | 1e0552c813 | Add optimized implementations for the LinearSidecarWrapper when using LoRALayer or ConcatenatedLoRALayer patch types (since these are the most common). | 2024-12-17 13:20:19 +00:00