156 Commits

Author SHA1 Message Date
Lincoln Stein
e52e7418bb close #3304 2023-04-29 20:07:21 -04:00
Lincoln Stein
73be58a0b5 fix issue #3293 2023-04-29 11:37:07 -04:00
Lincoln Stein
264af3c054 fix crash caused by incorrect conflict resolution 2023-04-24 22:20:12 -04:00
Lincoln Stein
7f7d5894fa Merge branch 'v2.3' into bugfix/lora-incompatibility-handling 2023-04-25 02:51:27 +01:00
Lincoln Stein
40744ed996 Merge branch 'v2.3' into fix_inconsistent_loras 2023-04-22 20:22:32 +01:00
Lincoln Stein
e5188309ec Merge branch 'v2.3' into bugfix/lora-incompatibility-handling 2023-04-20 17:25:09 +01:00
StAlKeR7779
967d853020 Merge branch 'v2.3' into feat/lokr_support 2023-04-16 23:10:45 +03:00
StAlKeR7779
e91117bc74 Add support for lokr lycoris format 2023-04-16 23:05:13 +03:00
Damian Stewart
4d58444153 fix issues and further cleanup 2023-04-16 17:54:21 +02:00
Lincoln Stein
1183bf96ed hotfix to 2.3.4
- Pin diffusers to 0.14
- Small fix to LoRA loading routine that was preventing placement of
  LoRA files in subdirectories.
- Bump version to 2.3.4.post1
2023-04-13 08:48:30 -04:00
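A minimal sketch of the kind of recursive lookup the subdirectory fix above implies; `find_lora_files` and the extension list are illustrative assumptions, not the repository's actual loading routine.

```
from pathlib import Path

def find_lora_files(lora_dir: str) -> dict:
    """Map LoRA basenames to file paths, descending into subdirectories."""
    # rglob() searches recursively, so loras/characters/sushi.safetensors
    # is found alongside loras/sushi.safetensors
    return {
        path.stem: path
        for pattern in ("*.safetensors", "*.pt")
        for path in sorted(Path(lora_dir).rglob(pattern))
    }
```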
Lincoln Stein
afa3cdce27 add a list_compatible_loras() method 2023-04-13 00:11:26 -04:00
Lincoln Stein
6dfbd1c677 implement caching scheme for vector length 2023-04-12 23:56:52 -04:00
Lincoln Stein
2251d3abfe fixup relative path to devices module 2023-04-10 23:44:58 -04:00
Lincoln Stein
0b22a3f34d distinguish LoRA/LyCORIS files by the SD model they were based on
- Attempting to run a prompt with a LoRA based on SD v1.X against a
  model based on v2.X will now throw an
  `IncompatibleModelException`. To import this exception:
  `from ldm.modules.lora_manager import IncompatibleModelException`
  (maybe this should be defined in ModelManager?)

- Enhance `LoraManager.list_loras()` to accept an optional integer
  argument, `token_vector_length`. This filters the returned LoRA
  models to only those that match the indicated length. Use:
  ```
  768 => for models based on SD v1.X
  1024 => for models based on SD v2.X
  ```

  Note that this filtering requires each LoRA file to be opened
  with `safetensors.torch`. It will take ~8s to scan a directory of
  40 files.

- Added new static methods to `ldm.modules.kohya_lora_manager`:
  - check_model_compatibility()
  - vector_length_from_checkpoint()
  - vector_length_from_checkpoint_file()
2023-04-10 23:33:28 -04:00
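A rough sketch of how the vector-length check above might look. The `safetensors.safe_open` call is the library's real API, but the key-matching heuristic (reading the text-encoder hidden size, 768 for SD v1.X vs. 1024 for SD v2.X, from a `lora_down` weight) and both function bodies are assumptions about `kohya_lora_manager`, not its actual code.

```
from safetensors import safe_open

SD1_VECTOR_LENGTH = 768   # text-encoder hidden size for SD v1.X models
SD2_VECTOR_LENGTH = 1024  # text-encoder hidden size for SD v2.X models

def vector_length_from_checkpoint_file(path: str):
    """Guess the token vector length a LoRA file was trained against."""
    with safe_open(path, framework="pt", device="cpu") as f:
        for key in f.keys():
            # text-encoder LoRA keys carry the encoder hidden size in the
            # second dimension of their lora_down weight (rank x in_features)
            if "lora_te" in key and key.endswith("lora_down.weight"):
                return f.get_tensor(key).shape[1]
    return None  # could not determine; treat as unknown

def check_model_compatibility(lora_path: str, model_vector_length: int):
    """Raise when the LoRA and the loaded SD model disagree on vector length."""
    lora_length = vector_length_from_checkpoint_file(lora_path)
    if lora_length is not None and lora_length != model_vector_length:
        # the repository raises IncompatibleModelException here
        raise ValueError(
            f"{lora_path}: built for vector length {lora_length}, "
            f"but the loaded model uses {model_vector_length}"
        )
```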
Lincoln Stein
0784e49d92 code cleanup and change default LoRA weight
- Remove unused (and probably dangerous) `unload_applied_loras()` method
- Remove unused `LoraManager.loras_to_load` attribute
- Change the default LoRA weight to 0.75 when using the WebUI to add a LoRA to the prompt.
2023-04-06 16:34:22 -04:00
Sergey Borisov
baf60948ee Update kohya_lora_manager.py
Bias parsing, fix LoHa parsing and weight calculation
2023-04-06 01:44:20 +03:00
Sergey Borisov
b62cce20b8 Clean up 2023-04-05 20:18:04 +03:00
Sergey Borisov
6a8848b61f Draft implementation of LyCORIS (LoCon and LoHa) 2023-04-05 17:59:29 +03:00
Lincoln Stein
793488e90a sort lora list alphabetically 2023-04-03 16:19:30 -04:00
Lincoln Stein
3a0fed2fda add withLora() readline autocompletion support 2023-04-02 15:35:39 -04:00
blessedcoolant
fad6fc807b fix(ui): LoraManager UI causing overload 2023-04-02 19:37:47 +12:00
Lincoln Stein
16aeb8d640 tweak debugging message for lora unloading 2023-04-01 23:45:36 -04:00
Lincoln Stein
e0bd30b98c more elegant handling of lora context 2023-04-01 23:41:22 -04:00
Lincoln Stein
90f77c047c Update ldm/modules/lora_manager.py
Co-authored-by: neecapp <ryree0@gmail.com>
2023-04-01 23:24:50 -04:00
Lincoln Stein
941fc2297f Update ldm/modules/kohya_lora_manager.py
Co-authored-by: neecapp <ryree0@gmail.com>
2023-04-01 23:23:49 -04:00
Lincoln Stein
110b067c52 Update ldm/modules/kohya_lora_manager.py
Co-authored-by: neecapp <ryree0@gmail.com>
2023-04-01 23:23:29 -04:00
Lincoln Stein
8518f8c2ac LoRA alpha can be 0 2023-04-01 17:28:36 -04:00
Lincoln Stein
605ceb2e95 add support for loras ending with .pt 2023-04-01 17:12:07 -04:00
Lincoln Stein
c9372f919c moved LoRA manager cleanup routines into a context 2023-04-01 16:49:23 -04:00
Lincoln Stein
acd9838559 Merge branch 'v2.3' into feat/lora-support-2.3 2023-04-01 10:55:22 -04:00
Lincoln Stein
1e5a44a474 bump version to 2.3.3 final 2023-04-01 09:43:46 -04:00
Lincoln Stein
879c80022e preliminary LoRA support ready for testing
Instructions:

1. Download LoRA .safetensors files of your choice and place in
   `INVOKEAIROOT/loras`. Unlike the draft version of this, the file
   names can contain underscores and alphanumerics. Names with
   arbitrary unicode characters are not supported.

2. Add `withLora(lora-file-basename,weight)` to your prompt. The
   weight is optional and will default to 1.0. A few examples, assuming
   that a LoRA file named `loras/sushi.safetensors` is present:

```
family sitting at dinner table eating sushi withLora(sushi,0.9)
family sitting at dinner table eating sushi withLora(sushi, 0.75)
family sitting at dinner table eating sushi withLora(sushi)
```

Multiple `withLora()` prompt fragments are allowed. The weight can be
arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher
weights make the LoRA's influence stronger.

In my limited testing, I found it useful to reduce the CFG scale to avoid
oversharpening. I also got better results when running the LoRA on top
of the model it was trained against.

Don't try to load an SD 1.x-trained LoRA into an SD 2.x model, or vice
versa. You will get a nasty stack trace. This needs to be cleaned up.

3. You can change the location of the `loras` directory by passing the
   `--lora_directory` option to `invokeai`.

Documentation can be found in docs/features/LORAS.md.
2023-03-31 00:03:16 -04:00
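A short sketch of pulling `withLora(name,weight)` fragments out of a prompt, following the syntax and the 1.0 default weight described in the commit above; the regex and the `extract_lora_fragments` helper are illustrative, not the project's actual parser.

```
import re

# matches withLora(sushi), withLora(sushi,0.9) and withLora(sushi, 0.75)
WITH_LORA = re.compile(r"withLora\(\s*([^,)]+?)\s*(?:,\s*([0-9.]+)\s*)?\)")

def extract_lora_fragments(prompt: str):
    """Return the prompt without withLora() fragments plus (name, weight) pairs."""
    loras = [
        (match.group(1), float(match.group(2)) if match.group(2) else 1.0)
        for match in WITH_LORA.finditer(prompt)
    ]
    cleaned = WITH_LORA.sub("", prompt).strip()
    return cleaned, loras

cleaned, loras = extract_lora_fragments(
    "family sitting at dinner table eating sushi withLora(sushi, 0.75)"
)
# cleaned == "family sitting at dinner table eating sushi"
# loras   == [("sushi", 0.75)]
```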
Lincoln Stein
ea5f6b9826 Merge branch 'release/2.3.3-rc3' into feat/lora-support-2.3 2023-03-30 22:02:37 -04:00
Lincoln Stein
3d4f4b677f support external legacy config files with no personalization section 2023-03-30 21:39:05 -04:00
Lincoln Stein
c2487e4330 Kohya LoRA models load but generation freezes 2023-03-30 07:38:39 -04:00
Lincoln Stein
071df30597 handle a fourth variant of embedding .pt files
- This variant, exemplified by "easynegative.safetensors", has a single
  'embparam' key containing a Tensor.
- Also refactored code to make it easier to read.
- Handle both pickle and safetensors formats.
2023-03-26 23:40:29 -04:00
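A hedged sketch of loading the embedding variants this commit describes, covering both pickle and safetensors formats; the `'embparam'` key comes from the commit message above, while the `'string_to_param'` layout is a commonly seen textual-inversion .pt format and is an assumption here.

```
import torch
from safetensors.torch import load_file

def load_embedding_tensor(path: str) -> torch.Tensor:
    """Extract the embedding tensor from a .pt or .safetensors file."""
    if path.endswith(".safetensors"):
        data = load_file(path)                       # plain dict of tensors
    else:
        data = torch.load(path, map_location="cpu")  # pickle format
    # single-key variant, e.g. {'embparam': Tensor} as described above
    if isinstance(data, dict) and len(data) == 1:
        (value,) = data.values()
        if isinstance(value, torch.Tensor):
            return value
    # common multi-key .pt layout: {'string_to_param': {'*': Tensor}, ...}
    if isinstance(data, dict) and "string_to_param" in data:
        return next(iter(data["string_to_param"].values()))
    raise ValueError(f"unrecognized embedding layout in {path}")
```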
Jordan
d9c46277ea add peft setup (need to install huggingface/peft) 2023-02-25 20:21:20 -07:00
Jordan
523e44ccfe simplify manager 2023-02-24 01:32:09 -07:00
Jordan
4ce8b1ba21 setup cross conditioning for lora 2023-02-23 19:27:45 -07:00
Jordan
68a3132d81 move legacy lora manager to its own file 2023-02-23 17:41:20 -07:00
Jordan
b69f9d4af1 initial setup of cross attention 2023-02-23 17:30:34 -07:00
Jordan
6a1129ab64 switch all non-diffusers stuff to legacy, and load through compel prompts 2023-02-23 16:48:33 -07:00
Jordan
f64a4db5fa set up legacy class to abstract hacky logic for non-diffusers LoRA and format prompts for compel 2023-02-23 05:56:39 -07:00
Jordan
3f477da46c Merge branch 'add_lora_support' of https://github.com/jordanramstad/InvokeAI into add_lora_support 2023-02-23 01:45:34 -07:00
Jordan
71972c3709 re-enable load attn procs support (no multiplier) 2023-02-23 01:44:13 -07:00
Jordan
d4083221a6 Merge branch 'main' into add_lora_support 2023-02-22 13:28:04 -08:00
Jordan
5b4a241f5c Merge branch 'main' into add_lora_support 2023-02-21 20:38:33 -08:00
Jordan
cd333e414b move key converter to wrapper 2023-02-21 21:38:15 -07:00
Jordan
af3543a8c7 further cleanup and implement wrapper 2023-02-21 20:42:40 -07:00
Jonathan
a461875abd Merge branch 'main' into refactor_use_compel 2023-02-21 21:14:28 -06:00