Both @mauwii and @keturn have been offline for some time. I am
temporarily removing them from CODEOWNERS so that they will not be
responsible for code reviews until they wish to/are able to re-engage
fully.
Note that I have volunteered @GreggHelt2 to be a codeowner of the
generation backend code, replacing @keturn. Let me know if you're
uncomfortable with this.
- Pin diffusers to 0.14
- Small fix to the LoRA loading routine; a bug was preventing placement of
LoRA files in subdirectories.
- Bump version to 2.3.4.post1
- The previous PR to truncate long filenames doesn't work on Windows due to
its lack of support for `os.pathconf()`. This works around the limitation by
hardcoding the value of `PC_NAME_MAX` when `pathconf()` is unavailable (see
the sketch below).
- The `multiprocessing` `send()` and `recv()` methods weren't working
properly on Windows due to issues involving `utf-8` encoding and
pickling/unpickling. Changed these calls to `send_bytes()` and
`recv_bytes()`, which seems to fix the issue.
Not fully tested on Windows since I lack a GPU machine to test on, but it
is working on CPU.
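A minimal sketch of both workarounds, assuming a helper name and fallback
constant of my own choosing:
```
import os
from multiprocessing import Pipe

def max_filename_length(directory: str) -> int:
    """Longest allowed filename on the filesystem containing `directory`.

    os.pathconf() does not exist on Windows, so fall back to a hardcoded
    PC_NAME_MAX of 255 (the usual NTFS limit) when it is unavailable.
    """
    try:
        return os.pathconf(directory, "PC_NAME_MAX")
    except (AttributeError, ValueError, OSError):
        return 255

# Exchange explicitly encoded bytes instead of pickled objects; this is
# the pattern that the send_bytes()/recv_bytes() change moves to.
parent_conn, child_conn = Pipe()
child_conn.send_bytes("a prompt with unicode: café".encode("utf-8"))
print(parent_conn.recv_bytes().decode("utf-8"))
```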
- Attempting to run a prompt with a LoRA based on SD v1.X against a
model based on v2.X will now throw an
`IncompatibleModelException`. To import this exception:
`from ldm.modules.lora_manager import IncompatibleModelException`
(maybe this should be defined in ModelManager?)
- Enhance `LoraManager.list_loras()` to accept an optional integer
argument, `token_vector_length`. This filters the returned LoRA
models to only those matching the indicated length. Use:
```
768 => for models based on SD v1.X
1024 => for models based on SD v2.X
```
Note that this filtering requires each LoRA file to be opened
with `safetensors`; scanning a directory of 40 files takes ~8s.
- Added new static methods to `ldm.modules.kohya_lora_manager` (used in the
sketch after this list):
- `check_model_compatibility()`
- `vector_length_from_checkpoint()`
- `vector_length_from_checkpoint_file()`
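A hedged sketch of how these pieces might fit together; the `LoraManager`
constructor, the `apply_lora()` call, and the exact import locations of the
static methods are assumptions, not the confirmed API:
```
from ldm.modules.lora_manager import IncompatibleModelException, LoraManager
from ldm.modules.kohya_lora_manager import vector_length_from_checkpoint_file

model = ...  # the currently loaded model/pipeline; details omitted
manager = LoraManager(model)  # assumed constructor; real signature may differ

# Keep only LoRAs with 768-dim token vectors, i.e. those built for SD v1.X.
# This is slow: each file is opened with safetensors to read its shapes.
v1_loras = manager.list_loras(token_vector_length=768)

# Inspect a single file directly (path illustrative):
print(vector_length_from_checkpoint_file("loras/sushi.safetensors"))

try:
    manager.apply_lora("sushi", weight=0.9)  # hypothetical apply call
except IncompatibleModelException:
    print("This LoRA was trained against a different SD base version.")
```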
- Previously the user's preferred precision was used to select which
version branch of a diffusers model would be downloaded. Half-precision
would try to download the 'fp16' branch if it existed.
- Turns out that with waifu-diffusion this logic doesn't work, as
'fp16' gets you waifu-diffusion v1.3, while 'main' gets you
waifu-diffusion v1.4. Who knew?
- This PR adds a new optional "revision" field to `models.yaml`. This
can be used to override the diffusers branch version. In the case of
Waifu Diffusion, INITIAL_MODELS.yaml now specifies the "main" branch (see
the sketch below).
- This PR also quenches the NSFW nag that downloading diffusers sometimes
triggers.
- Closes #3160
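For illustration, a `models.yaml` stanza using the new field might look like
this; every key except `revision` is shown only for context:
```
waifu-diffusion-1.4:
  description: Waifu Diffusion v1.4
  format: diffusers
  repo_id: hakurei/waifu-diffusion
  revision: main   # new optional field; overrides the diffusers branch
```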
Some quick bug fixes related to the UI for the 2.3.4 release.
**Features:**
- Added the ability to add Textual Inversions to the Negative Prompt
using the UI.
- Added the ability to clear Textual Inversions and Loras from Prompt
and Negative Prompt with a single click.
- Textual Inversions now have status pips indicating whether they are
used in the Main Prompt, the Negative Prompt, or both.
**Fixes:**
- Fixes #3138
- Fixes #3144
- Fixed `usePrompt` not updating the Lora and TI count in prompt /
negative prompt.
- Fixed the TI regex not respecting names in substrings.
- Fixed trailing spaces when adding and removing LoRAs and TIs.
- Fixed an issue with the TI regex not respecting the `<` and `>` used
by HuggingFace concepts.
- Some other minor bug fixes.
NOTE: This PR works with `diffusers` models **only**. As a result
InvokeAI is now converting all legacy checkpoint/safetensors files into
diffusers models on the fly. This introduces a bit of extra delay when
loading legacy models. You can avoid this by converting the files to
diffusers either at import time, or after the fact.
# Instructions:
1. Download LoRA .safetensors files of your choice and place in
`INVOKEAIROOT/loras`. Unlike the draft version of this PR, the file
names can now contain underscores and hyphens. Names with arbitrary
Unicode characters are not supported.
2. Add `withLora(lora-file-basename,weight)` to your prompt. The weight
is optional and will default to 1.0. A few examples, assuming that a
LoRA file named `loras/sushi.safetensors` is present:
```
family sitting at dinner table eating sushi withLora(sushi,0.9)
family sitting at dinner table eating sushi withLora(sushi, 0.75)
family sitting at dinner table eating sushi withLora(sushi)
```
Multiple `withLora()` prompt fragments are allowed. The weight can be
arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher
weights make the LoRA's influence stronger. The last version of the
syntax, which uses the default weight of 1.0, is waiting on the next
version of the Compel library to be released and may not work at this
time.
In my limited testing, I found it useful to reduce the CFG to avoid
oversharpening. I also got better results when running the LoRA on top
of the model on which it was trained.
Don't try to load an SD 1.x-trained LoRA into an SD 2.x model, or vice
versa. You will get a nasty stack trace. This needs to be cleaned up.
3. You can change the location of the `loras` directory by passing the
`--lora_directory` option to `invokeai`.
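For example (the path is illustrative):
```
invokeai --lora_directory /path/to/alternate/loras
```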
Documentation can be found in docs/features/LORAS.md.
Note that this PR incorporates the unmerged 2.3.3 PR code (#3058) and
bumps the version number up to 2.3.4a0.
A zillion thanks to @felorhik, @neecapp and many others for this
implementation. @blessedcoolant and I just did a little tidying up.