Commit Graph

1277 Commits

Author SHA1 Message Date
Lincoln Stein
afa3cdce27 add a list_compatible_loras() method 2023-04-13 00:11:26 -04:00
Lincoln Stein
6dfbd1c677 implement caching scheme for vector length 2023-04-12 23:56:52 -04:00
Lincoln Stein
2251d3abfe fixup relative path to devices module 2023-04-10 23:44:58 -04:00
Lincoln Stein
0b22a3f34d distinguish LoRA/LyCORIS files based on what SD model they were based on
- Attempting to run a prompt with a LoRA based on SD v1.X against a
  model based on v2.X will now throw an
  `IncompatibleModelException`. To import this exception:
  `from ldm.modules.lora_manager import IncompatibleModelException`
  (maybe this should be defined in ModelManager?)

- Enhance `LoraManager.list_loras()` to accept an optional integer
  argument, `token_vector_length`. This filters the returned LoRA
  models to those that match the indicated length (see the sketch at
  the end of this message). Use:
  ```
  768 => for models based on SD v1.X
  1024 => for models based on SD v2.X
  ```

  Note that this filtering requires each LoRA file to be opened
  with `safetensors.torch`. It takes ~8s to scan a directory of
  40 files.

- Added new static methods to `ldm.modules.kohya_lora_manager`:
  - check_model_compatibility()
  - vector_length_from_checkpoint()
  - vector_length_from_checkpoint_file()
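
A usage sketch follows. This is illustrative only: the method names come
from this commit, but the import paths and exact signatures are
assumptions, not the actual InvokeAI code.

```
# Hedged sketch: determine which SD family a LoRA file was trained against.
# The import path is an assumption; the commit only says these helpers live
# in ldm.modules.kohya_lora_manager.
from ldm.modules.kohya_lora_manager import vector_length_from_checkpoint_file

SD_V1_LENGTH = 768    # token vector length of SD v1.X-based models
SD_V2_LENGTH = 1024   # token vector length of SD v2.X-based models

length = vector_length_from_checkpoint_file("loras/sushi.safetensors")
if length == SD_V1_LENGTH:
    print("LoRA is compatible with SD v1.X base models")
elif length == SD_V2_LENGTH:
    print("LoRA is compatible with SD v2.X base models")

# Filtering a listing to SD v1.X-compatible LoRAs (LoraManager construction
# details omitted; the keyword argument is the one described above):
#   loras = lora_manager.list_loras(token_vector_length=SD_V1_LENGTH)

# Running an incompatible LoRA/model pair raises IncompatibleModelException,
# importable as: from ldm.modules.lora_manager import IncompatibleModelException
```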
2023-04-10 23:33:28 -04:00
Lincoln Stein
2528e14fe9 raise generation exceptions so that frontend can catch 2023-04-10 14:26:09 -04:00
Lincoln Stein
16ccc807cc control which revision of a diffusers model is downloaded
- Previously the user's preferred precision was used to select which
  version branch of a diffusers model would be downloaded. Half-precision
  would try to download the 'fp16' branch if it existed.

- Turns out that with waifu-diffusion this logic doesn't work, as
  'fp16' gets you waifu-diffusion v1.3, while 'main' gets you
  waifu-diffusion v1.4. Who knew?

- This PR adds a new optional "revision" field to `models.yaml`. This
  can be used to override which diffusers branch is downloaded (see the
  sketch below). In the case of waifu-diffusion, INITIAL_MODELS.yaml now
  specifies the "main" branch.

- This PR also quenches the NSFW nag that downloading diffusers sometimes
  triggers.

- Closes #3160
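
A minimal sketch of the selection logic described above. The function
name and the shape of the config entry are illustrative assumptions, not
the exact InvokeAI implementation.

```
# Hedged sketch: an explicit "revision" in models.yaml overrides the
# precision-based branch choice.
def select_diffusers_revision(model_entry: dict, precision: str) -> str:
    if "revision" in model_entry:      # e.g. waifu-diffusion pinned to "main"
        return model_entry["revision"]
    return "fp16" if precision == "float16" else "main"

# With the pinned revision, half-precision no longer pulls the stale
# 'fp16' branch of waifu-diffusion:
print(select_diffusers_revision(
    {"repo_id": "hakurei/waifu-diffusion", "revision": "main"}, "float16"))
```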
2023-04-09 22:07:55 -04:00
Lincoln Stein
2af511c98a release 2.3.4 2023-04-09 13:31:45 -04:00
Lincoln Stein
c5b34d21e5 Merge branch 'v2.3' into bugfix/truncate-filenames-in-invokeai-batch 2023-04-09 11:29:32 -04:00
StAlKeR7779
894e2e643d Pass extra_conditioning_info in inpaint 2023-04-09 00:50:30 +03:00
Lincoln Stein
5983e65b22 invokeai-batch: truncate image filenames that exceed the filesystem's max filename size
- Closes #3115
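
A sketch of the truncation idea, for illustration only; the actual change
lives in the invokeai-batch script and may differ in detail.

```
import os

def truncate_filename(name: str, directory: str = ".") -> str:
    """Trim a filename to the filesystem's maximum length, keeping the
    extension. Illustrative sketch, not the invokeai-batch code."""
    try:
        max_len = os.pathconf(directory, "PC_NAME_MAX")  # POSIX only
    except (AttributeError, ValueError, OSError):
        max_len = 255  # common default for most filesystems
    if len(name) <= max_len:
        return name
    stem, ext = os.path.splitext(name)
    return stem[: max_len - len(ext)] + ext
```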
2023-04-07 18:20:32 -04:00
Lincoln Stein
1faf9c5cdd bump version 2023-04-07 09:52:32 -04:00
Damian Stewart
b141ab42d3 bump compel version to fix lora + blend 2023-04-07 14:12:22 +02:00
Lincoln Stein
35c4ff8ab0 prevent crash when prompt blend requested 2023-04-06 21:22:47 -04:00
Lincoln Stein
0784e49d92 code cleanup and change default LoRA weight
- Remove unused (and probably dangerous) `unload_applied_loras()` method
- Remove unused `LoraManager.loras_to_load` attribute
- Change the default LoRA weight to 0.75 when using the WebUI to add a LoRA to a prompt.
2023-04-06 16:34:22 -04:00
Damian Stewart
09fe21116b Update shared_invokeai_diffusion.py
add line to docs
2023-04-06 11:01:00 +02:00
Lincoln Stein
3dffa33097 Merge branch 'v2.3' into feat/lora-support-2.3 2023-04-05 21:59:54 -04:00
Sergey Borisov
baf60948ee Update kohya_lora_manager.py
Bias parsing, fix LoHa parsing and weight calculation
2023-04-06 01:44:20 +03:00
Sergey Borisov
b62cce20b8 Clean up 2023-04-05 20:18:04 +03:00
Sergey Borisov
6a8848b61f Draft implementation of LyCORIS (LoCon and LoHa) 2023-04-05 17:59:29 +03:00
Lincoln Stein
c8fa01908c remove app tests
- removed app directory (a 3.0 feature), so app tests had to go too
- fixed regular expression in the concepts lib which was causing deprecation warnings
2023-04-04 23:41:26 -04:00
Lincoln Stein
cb1d433f30 create loras directory at update time 2023-04-04 22:47:15 -04:00
Lincoln Stein
ad5142d6f7 remove nodes app directory
- This was inadvertently included in the PR during a rebase from main
2023-04-04 06:45:51 -04:00
Lincoln Stein
793488e90a sort lora list alphabetically 2023-04-03 16:19:30 -04:00
Lincoln Stein
afcb278e66 fix crash when no extra conditioning provided (redux) 2023-04-02 19:43:56 -04:00
Lincoln Stein
0a0e44b51e fix crash when no extra conditioning provided 2023-04-02 17:13:08 -04:00
Lincoln Stein
d4d3441a52 save name of last model to disk whenever model changes
- this allows invokeai to restore the last used model on startup, even
  after a crash or keyboard interrupt.
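
A minimal sketch of the idea; the file name, location, and function names
are illustrative assumptions rather than the actual InvokeAI persistence code.

```
from pathlib import Path

# Hypothetical location for the marker file.
LAST_MODEL_FILE = Path("~/invokeai/.last_model").expanduser()

def save_last_model(model_name: str) -> None:
    # Written every time the model changes, so a crash or Ctrl-C still
    # leaves the most recent choice on disk.
    LAST_MODEL_FILE.write_text(model_name)

def restore_last_model(default: str = "stable-diffusion-1.5") -> str:
    # Illustrative fallback name; returned when no marker file exists yet.
    return LAST_MODEL_FILE.read_text().strip() if LAST_MODEL_FILE.exists() else default
```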
2023-04-02 15:46:39 -04:00
Lincoln Stein
3a0fed2fda add withLora() readline autocompletion support 2023-04-02 15:35:39 -04:00
blessedcoolant
fad6fc807b fix(ui): LoraManager UI causing overload 2023-04-02 19:37:47 +12:00
Lincoln Stein
d7b2dbba66 limit number of suggested concepts to those with at least 6 likes 2023-04-02 00:31:55 -04:00
Lincoln Stein
16aeb8d640 tweak debugging message for lora unloading 2023-04-01 23:45:36 -04:00
Lincoln Stein
e0bd30b98c more elegant handling of lora context 2023-04-01 23:41:22 -04:00
Lincoln Stein
90f77c047c Update ldm/modules/lora_manager.py
Co-authored-by: neecapp <ryree0@gmail.com>
2023-04-01 23:24:50 -04:00
Lincoln Stein
941fc2297f Update ldm/modules/kohya_lora_manager.py
Co-authored-by: neecapp <ryree0@gmail.com>
2023-04-01 23:23:49 -04:00
Lincoln Stein
110b067c52 Update ldm/modules/kohya_lora_manager.py
Co-authored-by: neecapp <ryree0@gmail.com>
2023-04-01 23:23:29 -04:00
Lincoln Stein
8518f8c2ac LoRA alpha can be 0 2023-04-01 17:28:36 -04:00
Lincoln Stein
605ceb2e95 add support for loras ending with .pt 2023-04-01 17:12:07 -04:00
Lincoln Stein
b632b35079 remove direct legacy checkpoint rendering capabilities 2023-04-01 17:08:30 -04:00
Lincoln Stein
c9372f919c moved LoRA manager cleanup routines into a context 2023-04-01 16:49:23 -04:00
Lincoln Stein
acd9838559 Merge branch 'v2.3' into feat/lora-support-2.3 2023-04-01 10:55:22 -04:00
Lincoln Stein
1e5a44a474 bump version to 2.3.3 final 2023-04-01 09:43:46 -04:00
Lincoln Stein
78ea5d773d Update ldm/invoke/config/invokeai_update.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-04-01 09:43:02 -04:00
Lincoln Stein
74ff73ffc8 default --ckpt_convert to true 2023-03-31 01:51:45 -04:00
Lincoln Stein
993baadc22 making this a prerelease for zipfile purposes 2023-03-31 00:44:39 -04:00
Lincoln Stein
8fbe019273 Merge branch 'release/2.3.3-rc3' into feat/lora-support-2.3 2023-03-31 00:33:47 -04:00
Lincoln Stein
352805d607 fix for python 3.9 2023-03-31 00:33:10 -04:00
Lincoln Stein
879c80022e preliminary LoRA support ready for testing
Instructions:

1. Download LoRA .safetensors files of your choice and place in
   `INVOKEAIROOT/loras`. Unlike the draft version of this, the file
   names can contain underscores and alphanumerics. Names with
   arbitrary unicode characters are not supported.

2. Add `withLora(lora-file-basename,weight)` to your prompt. The
   weight is optional and will default to 1.0. A few examples, assuming
   that a LoRA file named `loras/sushi.safetensors` is present:

```
family sitting at dinner table eating sushi withLora(sushi,0.9)
family sitting at dinner table eating sushi withLora(sushi, 0.75)
family sitting at dinner table eating sushi withLora(sushi)
```

Multiple `withLora()` prompt fragments are allowed. The weight can be
arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher
weights make the LoRA's influence stronger. (A parsing sketch appears at
the end of this message.)

In my limited testing, I found it useful to reduce the CFG scale to avoid
oversharpening. I also got better results when running the LoRA on top of
the same base model it was trained against.

Don't try to load an SD 1.x-trained LoRA into an SD 2.x model, or vice
versa. You will get a nasty stack trace. This needs to be cleaned up.

3. You can change the location of the `loras` directory by passing the
   `--lora_directory` option to `invokeai`.

Documentation can be found in docs/features/LORAS.md.
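
A sketch of how `withLora()` fragments could be pulled out of a prompt.
The regex and helper names are illustrative, not the parser InvokeAI
actually uses.

```
import re

# Matches withLora(name) or withLora(name, weight); whitespace around the
# comma is tolerated, as in the examples above.
WITH_LORA = re.compile(r"withLora\(\s*([\w.-]+)\s*(?:,\s*([0-9.]+)\s*)?\)")

def extract_loras(prompt: str):
    """Return (cleaned_prompt, [(lora_name, weight), ...]); default weight 1.0."""
    loras = [(m.group(1), float(m.group(2)) if m.group(2) else 1.0)
             for m in WITH_LORA.finditer(prompt)]
    return WITH_LORA.sub("", prompt).strip(), loras

print(extract_loras("family sitting at dinner table eating sushi withLora(sushi, 0.75)"))
# -> ('family sitting at dinner table eating sushi', [('sushi', 0.75)])
```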
2023-03-31 00:03:16 -04:00
Lincoln Stein
ea5f6b9826 Merge branch 'release/2.3.3-rc3' into feat/lora-support-2.3 2023-03-30 22:02:37 -04:00
Lincoln Stein
4145e27ce6 move personalization fallback section into a static method 2023-03-30 21:53:19 -04:00
Lincoln Stein
3d4f4b677f support external legacy config files with no personalization section 2023-03-30 21:39:05 -04:00
Lincoln Stein
249173faf5 remove extraneous warnings about overwriting trigger terms 2023-03-30 20:37:10 -04:00