Compare commits

..

227 Commits

Author SHA1 Message Date
Lincoln Stein
020fb6c1e4 bump version number for release 2.3.5 2023-04-24 23:12:32 -04:00
Lincoln Stein
c63ef500b6 Merge branch 'v2.3' into bugfix/control-model-revision 2023-04-25 03:58:00 +01:00
Lincoln Stein
994a76aeaa [Enhancement] distinguish v1 from v2 LoRA models (#3175)
# Distinguish LoRA/LyCORIS files based on what version of SD they were
built on top of

- Attempting to run a prompt with a LoRA based on SD v1.X against a
model based on v2.X will now throw an `IncompatibleModelException`. To
import this exception:
`from ldm.modules.lora_manager import IncompatibleModelException` (maybe
this should be defined in ModelManager?)
    
- Enhance `LoraManager.list_loras()` to accept an optional integer
argument, `token_vector_length`. This will filter the returned LoRA
models to return only those that match the indicated length. Use:
      ```
      768 => for models based on SD v1.X
      1024 => for models based on SD v2.X
      ```
Note that this filtering requires each LoRA file to be opened by
`torch.safetensors`. It will take ~8s to scan a directory of 40 files.
    
- Added new static methods to `ldm.modules.kohya_lora_manager`:
      - check_model_compatibility()
      - vector_length_from_checkpoint()
      - vector_length_from_checkpoint_file()

- You can now create subdirectories within the `loras` directory and
organize the model files.
2023-04-25 03:57:45 +01:00
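As a rough illustration of the vector-length heuristic described in the commit above, here is a minimal sketch. It is not the InvokeAI implementation; the kohya-style key names and the `manager.list_loras()` call at the end are assumptions.

```py
from typing import Optional

from safetensors import safe_open

def guess_token_vector_length(lora_path: str) -> Optional[int]:
    """Return 768 (SD v1.x) or 1024 (SD v2.x), judged from text-encoder LoRA shapes."""
    with safe_open(lora_path, framework="pt", device="cpu") as f:
        for key in f.keys():
            # kohya-style text-encoder down-projection weights (assumed key pattern)
            # have shape (rank, hidden_size), where hidden_size is 768 or 1024
            if key.startswith("lora_te_") and "q_proj" in key and key.endswith("lora_down.weight"):
                return f.get_tensor(key).shape[1]
    return None  # no text-encoder layers found; compatibility cannot be judged

# Hypothetical usage mirroring the API described above:
# v1_loras = manager.list_loras(token_vector_length=768)
```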
Lincoln Stein
144dfe4a5b Merge branch 'v2.3' into bugfix/lora-incompatibility-handling 2023-04-25 03:54:46 +01:00
Lincoln Stein
5dbc63e2ae Revert "improvements to the installation and upgrade processes" (#3266)
Reverts invoke-ai/InvokeAI#3186
2023-04-25 03:54:04 +01:00
Lincoln Stein
c6ae1edc82 Revert "improvements to the installation and upgrade processes" 2023-04-24 22:53:43 -04:00
Lincoln Stein
2cd0e036ac Merge branch 'v2.3' into bugfix/lora-incompatibility-handling 2023-04-25 03:24:25 +01:00
Lincoln Stein
5f1d311c52 Merge branch 'v2.3' into bugfix/control-model-revision 2023-04-25 03:23:12 +01:00
Lincoln Stein
c088cf0344 improvements to the installation and upgrade processes (#3186)
- Moved all postinstallation config file and model munging code out of
the CLI and into a separate script named `invokeai-postinstall`

- Fixed two calls to `shutil.copytree()` so that they don't try to
preserve the file mode of the copied files. This is necessary to run
correctly in a Nix environment (see thread at
https://discord.com/channels/1020123559063990373/1091716696965918732/1095662756738371615)

- Update the installer so that an existing virtual environment will be
updated, not overwritten.

- Pin npyscreen version to see if this fixes issues people have had with
installing this module.
2023-04-25 03:20:58 +01:00
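A minimal sketch of the `shutil.copytree()` change described above; the paths are hypothetical and the exact call sites in InvokeAI may differ. The point is that the default `copy_function` (`shutil.copy2`) preserves the source file mode, which breaks when copying out of a read-only store such as Nix's, while `shutil.copyfile` copies only the contents.

```py
import shutil

shutil.copytree(
    "configs/stable-diffusion",                      # hypothetical source
    "/home/user/invokeai/configs/stable-diffusion",  # hypothetical destination
    copy_function=shutil.copyfile,  # do not preserve the (possibly read-only) file mode
    dirs_exist_ok=True,
)
```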
Lincoln Stein
264af3c054 fix crash caused by incorrect conflict resolution 2023-04-24 22:20:12 -04:00
Lincoln Stein
b332432a88 Merge branch 'v2.3' into lstein/bugfix/improve-update-handling 2023-04-25 03:09:12 +01:00
Lincoln Stein
7f7d5894fa Merge branch 'v2.3' into bugfix/lora-incompatibility-handling 2023-04-25 02:51:27 +01:00
Lincoln Stein
96c39b61cf Enable LoRAs to patch the text_encoder as well as the unet (#3214)
Load LoRAs during compel's text embedding encode pass in case there are
requested LoRAs which also want to patch the text encoder.

Also generally cleanup the attention processor patching stuff. It's
still a mess, but at least now it's a *stateless* mess.
2023-04-24 23:22:51 +01:00
Lincoln Stein
40744ed996 Merge branch 'v2.3' into fix_inconsistent_loras 2023-04-22 20:22:32 +01:00
Lincoln Stein
f36452d650 rebuild front end 2023-04-20 12:27:08 -04:00
Lincoln Stein
e5188309ec Merge branch 'v2.3' into bugfix/lora-incompatibility-handling 2023-04-20 17:25:09 +01:00
Lincoln Stein
a9e8005a92 CODEOWNERS update - 2.3 branch (#3230)
Both @mauwii and @keturn have been offline for some time. I am
temporarily removing them from CODEOWNERS so that they will not be
responsible for code reviews until they wish to/are able to re-engage
fully.

Note that I have volunteered @GreggHelt2 to be a codeowner of the
generation backend code, replacing @keturn . Let me know if you're
uncomfortable with this.
2023-04-20 17:19:51 +01:00
Lincoln Stein
c2e6d98e66 Merge branch 'v2.3' into dev/codeowner-fix-2.3 2023-04-20 17:19:30 +01:00
Lincoln Stein
40d9b5dc27 [Feature] Add support for LoKR LyCORIS format (#3216)
It's like LoHa, but uses a Kronecker product instead of a Hadamard product.
https://github.com/KohakuBlueleaf/LyCORIS#lokr

I tested it on these two LoKRs:
https://civitai.com/models/34518/unofficial-vspo-yakumo-beni
https://civitai.com/models/35136/mika-pikazo-lokr

More test models are hard to find, as it's a new format.
Better to test with https://github.com/invoke-ai/InvokeAI/pull/3214

Also slightly refactored the forward function.
LyCORIS also has an (IA)^3 format, but I can't find examples in this
format, and even on the LyCORIS page it's marked as experimental. So, until
there are some test examples, I prefer not to add it.
2023-04-19 22:51:33 +01:00
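For orientation, a minimal sketch of how the two LyCORIS variants rebuild a weight delta from their factor matrices; shapes and scaling are simplified and this is not the InvokeAI code.

```py
import torch

def loha_delta(w1a, w1b, w2a, w2b, scale=1.0):
    # LoHa: element-wise (Hadamard) product of two low-rank products
    return (w1a @ w1b) * (w2a @ w2b) * scale

def lokr_delta(w1, w2, scale=1.0):
    # LoKr: Kronecker product of two (typically much smaller) factors
    return torch.kron(w1, w2) * scale
```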
Lincoln Stein
1a704efff1 update codeowners in response to team changes 2023-04-18 19:30:52 -04:00
Lincoln Stein
f49d2619be Merge branch 'v2.3' into fix_inconsistent_loras 2023-04-18 19:09:35 -04:00
Lincoln Stein
da96ec9dd5 Merge branch 'v2.3' into feat/lokr_support 2023-04-18 19:08:03 -04:00
Lincoln Stein
298ccda365 fix the "import from directory" function in console model installer (#3211)
- This was inadvertently broken when we stopped supporting direct
loading of checkpoint models.
- Now fixed.
- May fix #3209
2023-04-17 23:04:27 -04:00
StAlKeR7779
967d853020 Merge branch 'v2.3' into feat/lokr_support 2023-04-16 23:10:45 +03:00
StAlKeR7779
e91117bc74 Add support for lokr lycoris format 2023-04-16 23:05:13 +03:00
Damian Stewart
4d58444153 fix issues and further cleanup 2023-04-16 17:54:21 +02:00
Damian Stewart
3667eb4d0d activate LoRAs when generating prompt embeddings; also cleanup attention stuff 2023-04-16 17:03:31 +02:00
Lincoln Stein
203a7157e1 fix the "import from directory" function in console model installer
- This was inadvertently broken when we stopped supporting direct
  loading of checkpoint models.
- Now fixed.
2023-04-15 21:07:02 -04:00
Lincoln Stein
6365a7c790 Merge branch 'v2.3' into lstein/bugfix/improve-update-handling 2023-04-13 22:49:41 -04:00
Lincoln Stein
5fcb3d90e4 fix missing files variable 2023-04-13 22:49:04 -04:00
Lincoln Stein
2c449bfb34 Merge branch 'v2.3' into bugfix/lora-incompatibility-handling 2023-04-13 22:23:59 -04:00
Lincoln Stein
8fb4b05556 change lora and TI list dynamically when model changes 2023-04-13 22:22:43 -04:00
Lincoln Stein
4d7289b20f explicitly set permissions of config files 2023-04-13 22:03:52 -04:00
Lincoln Stein
d81584c8fd hotfix to 2.3.4 (#3188)
- Pin diffusers to 0.14
- Small fix to LoRA loading routine that was preventing placement of
LoRA files in subdirectories.
- Bump version to 2.3.4.post1
2023-04-13 12:39:16 -04:00
Lincoln Stein
1183bf96ed hotfix to 2.3.4
- Pin diffusers to 0.14
- Small fix to LoRA loading routine that was preventing placement of
  LoRA files in subdirectories.
- Bump version to 2.3.4.post1
2023-04-13 08:48:30 -04:00
Lincoln Stein
d81394cda8 fix directory permissions after install 2023-04-13 08:39:47 -04:00
Lincoln Stein
0eda1a03e1 pin diffusers to 0.14 2023-04-13 00:40:26 -04:00
Lincoln Stein
be7e067c95 getLoraModels event filters loras by compatibility 2023-04-13 00:31:11 -04:00
Lincoln Stein
afa3cdce27 add a list_compatible_loras() method 2023-04-13 00:11:26 -04:00
Lincoln Stein
6dfbd1c677 implement caching scheme for vector length 2023-04-12 23:56:52 -04:00
Lincoln Stein
a775c7730e improvements to the installation and upgrade processes
- Moved all postinstallation config file and model munging code out
  of the CLI and into a separate script named `invokeai-postinstall`

- Fixed two calls to `shutil.copytree()` so that they don't try to preserve
  the file mode of the copied files. This is necessary to run correctly
  in a Nix environment
  (see thread at https://discord.com/channels/1020123559063990373/1091716696965918732/1095662756738371615)

- Update the installer so that an existing virtual environment will be
  updated, not overwritten.

- Pin npyscreen version to see if this fixes issues people have had with
  installing this module.
2023-04-12 22:40:53 -04:00
Lincoln Stein
018d5dab53 [Bugfix] make invokeai-batch work on windows (#3164)
- The previous PR to truncate long filenames won't work on Windows due to
lack of support for os.pathconf(). This works around the limitation by
hardcoding the value for PC_NAME_MAX when pathconf is unavailable.
- The `multiprocessing` send() and recv() methods weren't working
properly on Windows due to issues involving `utf-8` encoding and
pickling/unpickling. Changed these calls to `send_bytes()` and
`recv_bytes()`, which seems to fix the issue.

Not fully tested on Windows since I lack a GPU machine to test on, but
it is working on CPU.
2023-04-11 11:37:39 -04:00
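A minimal sketch of the bytes-based messaging described above; the JSON payload is an assumption for illustration, not InvokeAI's actual wire format.

```py
import json
from multiprocessing import Pipe

parent_conn, child_conn = Pipe()

# send explicit UTF-8 bytes instead of letting send()/recv() pickle objects,
# which is what was misbehaving on Windows
parent_conn.send_bytes(json.dumps({"command": "banana sushi -S 42"}).encode("utf-8"))
message = json.loads(child_conn.recv_bytes().decode("utf-8"))
print(message["command"])
```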
Lincoln Stein
96a5de30e3 Merge branch 'v2.3' into bugfix/pathconf-on-windows 2023-04-11 11:11:20 -04:00
Lincoln Stein
2251d3abfe fixup relative path to devices module 2023-04-10 23:44:58 -04:00
Lincoln Stein
0b22a3f34d distinguish LoRA/LyCORIS files based on what SD model they were based on
- Attempting to run a prompt with a LoRA based on SD v1.X against a
  model based on v2.X will now throw an
  `IncompatibleModelException`. To import this exception:
  `from ldm.modules.lora_manager import IncompatibleModelException`
  (maybe this should be defined in ModelManager?)

- Enhance `LoraManager.list_loras()` to accept an optional integer
  argument, `token_vector_length`. This will filter the returned LoRA
  models to return only those that match the indicated length. Use:
  ```
  768 => for models based on SD v1.X
  1024 => for models based on SD v2.X
  ```

  Note that this filtering requires each LoRA file to be opened
  by `torch.safetensors`. It will take ~8s to scan a directory of
  40 files.

- Added new static methods to `ldm.modules.kohya_lora_manager`:
  - check_model_compatibility()
  - vector_length_from_checkpoint()
  - vector_length_from_checkpoint_file()
2023-04-10 23:33:28 -04:00
Lincoln Stein
2528e14fe9 raise generation exceptions so that frontend can catch 2023-04-10 14:26:09 -04:00
Lincoln Stein
4d62d5b802 [Bugfix] detect running invoke before updating (#3163)
This PR addresses the issue that when `invokeai-update` is run on a
Windows system, and an instance of InvokeAI is open and running, the
user's `.venv` can get corrupted.

Issue first reported here:


https://discord.com/channels/1020123559063990373/1094688269356249108/1094688434750230628
2023-04-09 22:29:46 -04:00
Lincoln Stein
17de5c7008 Merge branch 'v2.3' into bugfix/pathconf-on-windows 2023-04-09 22:10:24 -04:00
Lincoln Stein
f95403dcda Merge branch 'v2.3' into bugfix/detect-running-invoke-before-updating 2023-04-09 22:09:17 -04:00
Lincoln Stein
16ccc807cc control which revision of a diffusers model is downloaded
- Previously the user's preferred precision was used to select which
  version branch of a diffusers model would be downloaded. Half-precision
  would try to download the 'fp16' branch if it existed.

- Turns out that with waifu-diffusion this logic doesn't work, as
  'fp16' gets you waifu-diffusion v1.3, while 'main' gets you
  waifu-diffusion v1.4. Who knew?

- This PR adds a new optional "revision" field to `models.yaml`. This
  can be used to override the diffusers branch version. In the case of
  Waifu diffusion, INITIAL_MODELS.yaml now specifies the "main" branch.

- This PR also quenches the NSFW nag that downloading diffusers sometimes
  triggers.

- Closes #3160
2023-04-09 22:07:55 -04:00
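A minimal sketch of what the new `revision` override amounts to at the diffusers level; the repo id is just an example and the `models.yaml` plumbing is not shown.

```py
from diffusers import StableDiffusionPipeline

# Force the "main" branch regardless of the user's precision preference,
# instead of letting half-precision installs reach for the "fp16" branch.
pipe = StableDiffusionPipeline.from_pretrained(
    "hakurei/waifu-diffusion",  # example repo id
    revision="main",
)
```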
Lincoln Stein
e54d060d17 send and receive messages as bytes, not objects 2023-04-09 18:17:55 -04:00
Lincoln Stein
a01f1d4940 workaround no os.pathconf() on Windows platforms
- The previous PR to truncate long filenames won't work on Windows
  due to lack of support for os.pathconf(). This works around the
  limitation by hardcoding the value for PC_NAME_MAX when pathconf
  is unavailable.
2023-04-09 17:45:34 -04:00
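A minimal sketch of the fallback described above; the hard-coded 255 is an assumption (a typical limit for common filesystems), not necessarily the value InvokeAI uses.

```py
import os

def max_filename_length(directory: str) -> int:
    try:
        return os.pathconf(directory, "PC_NAME_MAX")
    except (AttributeError, ValueError, OSError):
        # os.pathconf() does not exist on Windows
        return 255

# e.g. truncate a generated image filename to fit:
# filename = filename[: max_filename_length(outdir)]
```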
Lincoln Stein
1873817ac9 adjustments for windows 2023-04-09 17:24:47 -04:00
Lincoln Stein
31333a736c check if invokeai is running before trying to update
- On Windows systems, updating the .venv while InvokeAI is using it leads to
  corruption.
2023-04-09 16:57:14 -04:00
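A rough sketch of what an "is InvokeAI running?" check could look like; `psutil` is used here purely for illustration and this is not necessarily how the updater actually performs the check.

```py
import psutil

def invokeai_appears_to_be_running() -> bool:
    for proc in psutil.process_iter(attrs=["cmdline"]):
        try:
            cmdline = proc.info.get("cmdline") or []
            if any("invokeai" in part for part in cmdline):
                return True
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return False
```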
Lincoln Stein
03274b6da6 fix extracting loras from legacy blends (#3161) 2023-04-09 16:43:35 -04:00
Damian Stewart
0646649c05 fix extracting loras from legacy blends 2023-04-09 22:21:44 +02:00
Lincoln Stein
2af511c98a release 2.3.4 2023-04-09 13:31:45 -04:00
Lincoln Stein
f0039cc70a [Bugfix] truncate filenames in invokeai batch that exceed max filename length (#3143)
- This prevents `invokeai-batch` from trying to create image files whose
names would exceed PC_NAME_MAX.
- Closes #3115
2023-04-09 12:36:10 -04:00
Lincoln Stein
8fa7d5ca64 Merge branch 'v2.3' into bugfix/truncate-filenames-in-invokeai-batch 2023-04-09 12:16:06 -04:00
Lincoln Stein
d90aa42799 [WebUI] 2.3.4 UI Bug Fixes (#3139)
Some quick bug fixes related to the UI for the 2.3.4. release.

**Features:**

- Added the ability to now add Textual Inversions to the Negative Prompt
using the UI.
- Added the ability to clear Textual Inversions and Loras from Prompt
and Negative Prompt with a single click.
- Textual Inversions now have status pips - indicating whether they are
used in the Main Prompt, Negative Prompt or both.

**Fixes**

- Fixes #3138
- Fixes #3144
- Fixed `usePrompt` not updating the Lora and TI count in prompt /
negative prompt.
- Fixed the TI regex not respecting names in substrings.
- Fixed trailing spaces when adding and removing loras and TI's.
- Fixed an issue with the TI regex not respecting the `<` and `>` used
by HuggingFace concepts.
- Some other minor bug fixes.
2023-04-09 12:07:41 -04:00
Lincoln Stein
c5b34d21e5 Merge branch 'v2.3' into bugfix/truncate-filenames-in-invokeai-batch 2023-04-09 11:29:32 -04:00
blessedcoolant
40a4867143 Merge branch 'v2.3' into 234-ui-bugfixes 2023-04-09 15:56:44 +12:00
Lincoln Stein
4b25f80427 [Bugfix] Pass extra_conditioning_info in inpaint, so lora can be initialized (#3151) 2023-04-08 21:17:53 -04:00
StAlKeR7779
894e2e643d Pass extra_conditioning_info in inpaint 2023-04-09 00:50:30 +03:00
blessedcoolant
a38ff1a16b build(ui): Test Build (2.3.4 Feat Updates) 2023-04-09 07:37:41 +12:00
blessedcoolant
41f268b475 feat(ui): Improve TI & Lora UI 2023-04-09 07:35:19 +12:00
blessedcoolant
b3ae3f595f fix(ui): Fixed Use Prompt not detecting Loras / TI Count 2023-04-09 03:44:17 +12:00
blessedcoolant
29962613d8 chore(ui): Move Lora & TI Managers to Prompt Extras 2023-04-08 22:47:30 +12:00
blessedcoolant
1170cee1d8 fix(ui): Options panel sliding because of long Lora or TI names 2023-04-08 16:48:28 +12:00
Lincoln Stein
5983e65b22 invokeai-batch: truncate image filenames that exceed filesystem's max filename size
- Closes #3115
2023-04-07 18:20:32 -04:00
blessedcoolant
bc724fcdc3 fix(ui): Fix Main Width Slider being read only. 2023-04-08 04:15:55 +12:00
Lincoln Stein
1faf9c5cdd bump version 2023-04-07 09:52:32 -04:00
Lincoln Stein
6d1f8e6997 [FEATURE] Lora support in 2.3 (#3072)
NOTE: This PR works with `diffusers` models **only**. As a result
InvokeAI is now converting all legacy checkpoint/safetensors files into
diffusers models on the fly. This introduces a bit of extra delay when
loading legacy models. You can avoid this by converting the files to
diffusers either at import time, or after the fact.

# Instructions:

1. Download LoRA .safetensors files of your choice and place in
`INVOKEAIROOT/loras`. Unlike the draft version of this PR, the file
names can now contain underscores and hyphens. Names with arbitrary
unicode characters are not supported.

2. Add `withLora(lora-file-basename,weight)` to your prompt. The weight
is optional and will default to 1.0. A few examples, assuming that a
LoRA file named `loras/sushi.safetensors` is present:

```
family sitting at dinner table eating sushi withLora(sushi,0.9)
family sitting at dinner table eating sushi withLora(sushi, 0.75)
family sitting at dinner table eating sushi withLora(sushi)
```

Multiple `withLora()` prompt fragments are allowed. The weight can be
arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher
weights make the LoRA's influence stronger. The last version of the
syntax, which uses the default weight of 1.0, is waiting on the next
version of the Compel library to be released and may not work at this
time.

In my limited testing, I found it useful to reduce the CFG to avoid
oversharpening. Also I got better results when running the LoRA on top
of the model on which it was based during training.

Don't try to load a SD 1.x-trained LoRA into a SD 2.x model, and vice
versa. You will get a nasty stack trace. This needs to be cleaned up.

3. You can change the location of the `loras` directory by passing the
`--lora_directory` option to `invokeai`.

Documentation can be found in docs/features/LORAS.md.

Note that this PR incorporates the unmerged 2.3.3 PR code (#3058) and
bumps the version number up to 2.3.4a0.

A zillion thanks to @felorhik, @neecapp and many others for this
implementation. @blessedcoolant and I just did a little tidying up.
2023-04-07 09:37:28 -04:00
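To make the syntax concrete, a tiny sketch of extracting `withLora()` fragments from a prompt string; the regex is illustrative only (per the notes above, the real prompt handling goes through the Compel library).

```py
import re

LORA_RE = re.compile(r"withLora\(\s*([^,\)]+?)\s*(?:,\s*([\d.]+)\s*)?\)")

prompt = "family sitting at dinner table eating sushi withLora(sushi,0.9)"
for name, weight in LORA_RE.findall(prompt):
    print(name, float(weight) if weight else 1.0)  # -> sushi 0.9
```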
Damian Stewart
b141ab42d3 bump compel version to fix lora + blend 2023-04-07 14:12:22 +02:00
Lincoln Stein
0590bd6626 Merge branch 'v2.3' into feat/lora-support-2.3 2023-04-06 22:30:08 -04:00
Lincoln Stein
35c4ff8ab0 prevent crash when prompt blend requested 2023-04-06 21:22:47 -04:00
Lincoln Stein
0784e49d92 code cleanup and change default LoRA weight
- Remove unused (and probably dangerous) `unload_applied_loras()` method
- Remove unused `LoraManager.loras_to_load` attribute
- Change default LoRA weight to 0.75 when using WebUI to add a LoRA to prompt.
2023-04-06 16:34:22 -04:00
Damian Stewart
09fe21116b Update shared_invokeai_diffusion.py
add line to docs
2023-04-06 11:01:00 +02:00
Lincoln Stein
b185931f84 [Bugfix] Pip - Access is denied during installation (#3123)
Currently, for Python 3.9, the installer runs the pip upgrade command like this:
`pip install --upgrade pip`
Because the pip binary is locked as a running process, this leads to an error (at
least on Windows):
```
ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'e:\invokeai\.venv\scripts\pip.exe'
Check the permissions.
```
To prevent this, the recommended command to upgrade pip is:
`python -m pip install --upgrade pip`
which does not lock the pip executable.
2023-04-05 23:50:50 -04:00
Lincoln Stein
1a4d229650 Merge branch 'v2.3' into bugfix/pip-upgrade 2023-04-05 22:44:58 -04:00
Lincoln Stein
e9d2205976 rebuild frontend 2023-04-05 22:03:52 -04:00
Lincoln Stein
4b624dccf0 Merge branch 'feat/lora-support-2.3' of github.com:invoke-ai/InvokeAI into feat/lora-support-2.3 2023-04-05 22:02:01 -04:00
Lincoln Stein
3dffa33097 Merge branch 'v2.3' into feat/lora-support-2.3 2023-04-05 21:59:54 -04:00
Lincoln Stein
ab9756b8d2 [FEATURE] LyCORIS support in 2.3 (#3118)
Implementation of LyCORIS (extended LoRA), which comes in two formats: LoCon and
LoHa ([info1](https://github.com/KohakuBlueleaf/LyCORIS/blob/locon-archive/README.md),
[info2](https://github.com/KohakuBlueleaf/LyCORIS/blob/main/Algo.md)).

It works, but I found two slightly different implementations of the forward
function for LoHa. Both work, but I don't know which is better.

The two functions generate the same images if the `self.org_module.weight.data`
addition is removed from the LyCORIS implementation, but which one is correct?
2023-04-05 21:58:56 -04:00
Sergey Borisov
4b74b51ffe Fix naming 2023-04-06 04:55:10 +03:00
Sergey Borisov
0a020e1c06 Change pip upgrade command 2023-04-06 04:24:25 +03:00
Sergey Borisov
baf60948ee Update kohya_lora_manager.py
Bias parsing, fix LoHa parsing and weight calculation
2023-04-06 01:44:20 +03:00
Lincoln Stein
4e4fa1b71d [Enhancement] save name of last model to disk whenever model changes (#3102)
- this allows invokeai to restore the last used model on startup, even
after a crash or keyboard interrupt.
2023-04-05 17:37:10 -04:00
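A minimal sketch of the behavior described above; the file name and location are assumptions, not the actual paths InvokeAI writes to.

```py
from pathlib import Path
from typing import Optional

def save_last_model(root: Path, model_name: str) -> None:
    (root / ".last_model").write_text(model_name)

def restore_last_model(root: Path) -> Optional[str]:
    marker = root / ".last_model"
    return marker.read_text().strip() if marker.exists() else None
```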
Lincoln Stein
7bd870febb decrease minimum number of likes to 5 2023-04-05 15:51:58 -04:00
Sergey Borisov
b62cce20b8 Clean up 2023-04-05 20:18:04 +03:00
Sergey Borisov
6a8848b61f Draft implementation of LyCORIS (LoCon and LoHa) 2023-04-05 17:59:29 +03:00
Lincoln Stein
c8fa01908c remove app tests
- removed app directory (a 3.0 feature), so app tests had to go too
- fixed regular expression in the concepts lib which was causing deprecation warnings
2023-04-04 23:41:26 -04:00
Lincoln Stein
261be4e2e5 adjust debouncing timeout; fix duplicated ti triggers in menu 2023-04-04 23:15:09 -04:00
Lincoln Stein
e0695234e7 bump compel version 2023-04-04 22:47:54 -04:00
Lincoln Stein
cb1d433f30 create loras directory at update time 2023-04-04 22:47:15 -04:00
Lincoln Stein
e3772f674d sort loras and TIs in case-insensitive fashion 2023-04-04 11:24:10 -04:00
Lincoln Stein
ad5142d6f7 remove nodes app directory
- This was inadvertently included in the PR when rebased from main
2023-04-04 06:45:51 -04:00
Lincoln Stein
fc4b76c8b9 change label for HF concepts library option 2023-04-03 16:54:54 -04:00
Lincoln Stein
1e6d804104 Merge branch 'feat/lora-support-2.3' of github.com:invoke-ai/InvokeAI into feat/lora-support-2.3 2023-04-03 16:20:00 -04:00
Lincoln Stein
793488e90a sort lora list alphabetically 2023-04-03 16:19:30 -04:00
blessedcoolant
11cd8d026f build: Frontend (Lora Support) 2023-04-04 04:35:19 +12:00
blessedcoolant
25faec8d70 feat(ui): Make HuggingFace Concepts display optional 2023-04-04 04:29:56 +12:00
blessedcoolant
a14fc3ace5 fix: Fix Lora / TI Prompt Interaction 2023-04-04 04:29:13 +12:00
Lincoln Stein
667dee7b22 add scrollbars to textual inversion button menu 2023-04-03 08:39:47 -04:00
Lincoln Stein
f75a20b218 rebuild frontend 2023-04-02 23:34:15 -04:00
Lincoln Stein
8246e4abf2 fix cpu overload issue with TI trigger button 2023-04-02 23:33:21 -04:00
Lincoln Stein
afcb278e66 fix crash when no extra conditioning provided (redux) 2023-04-02 19:43:56 -04:00
Lincoln Stein
0a0e44b51e fix crash when no extra conditioning provided 2023-04-02 17:13:08 -04:00
Lincoln Stein
d4d3441a52 save name of last model to disk whenever model changes
- this allows invokeai to restore the last used model on startup, even
  after a crash or keyboard interrupt.
2023-04-02 15:46:39 -04:00
Lincoln Stein
3a0fed2fda add withLora() readline autocompletion support 2023-04-02 15:35:39 -04:00
blessedcoolant
fad6fc807b fix(ui): LoraManager UI causing overload 2023-04-02 19:37:47 +12:00
Lincoln Stein
63ecdb19fe rebuild frontend 2023-04-02 00:34:33 -04:00
Lincoln Stein
d7b2dbba66 limit number of suggested concepts to those with at least 6 likes 2023-04-02 00:31:55 -04:00
Lincoln Stein
16aeb8d640 tweak debugging message for lora unloading 2023-04-01 23:45:36 -04:00
Lincoln Stein
e0bd30b98c more elegant handling of lora context 2023-04-01 23:41:22 -04:00
Lincoln Stein
90f77c047c Update ldm/modules/lora_manager.py
Co-authored-by: neecapp <ryree0@gmail.com>
2023-04-01 23:24:50 -04:00
Lincoln Stein
941fc2297f Update ldm/modules/kohya_lora_manager.py
Co-authored-by: neecapp <ryree0@gmail.com>
2023-04-01 23:23:49 -04:00
Lincoln Stein
110b067c52 Update ldm/modules/kohya_lora_manager.py
Co-authored-by: neecapp <ryree0@gmail.com>
2023-04-01 23:23:29 -04:00
Lincoln Stein
71e4addd10 add debugging to where spinloop is occurring 2023-04-01 23:12:10 -04:00
Lincoln Stein
67435da996 added a button to retrieve textual inversion triggers; but causes high browser load 2023-04-01 22:57:54 -04:00
Lincoln Stein
8518f8c2ac LoRA alpha can be 0 2023-04-01 17:28:36 -04:00
Lincoln Stein
d3b63ca0fe detect lora files with .pt suffix 2023-04-01 17:25:54 -04:00
Lincoln Stein
605ceb2e95 add support for loras ending with .pt 2023-04-01 17:12:07 -04:00
Lincoln Stein
b632b35079 remove direct legacy checkpoint rendering capabilities 2023-04-01 17:08:30 -04:00
Lincoln Stein
c9372f919c moved LoRA manager cleanup routines into a context 2023-04-01 16:49:23 -04:00
Lincoln Stein
acd9838559 Merge branch 'v2.3' into feat/lora-support-2.3 2023-04-01 10:55:22 -04:00
Lincoln Stein
fd74f51384 Release 2.3.3 (#3058)
(note that this is actually release candidate 7, but I made the mistake
of including an old rc number in the branch and can't easily change it)

## Updating Root directory

- Introduced new mechanism for updating the root directory when
necessary. Currently only used to update the invoke.sh script using new
dialog colors.
- Fixed ROCm torch module version number

## Loading legacy 2.0/2.1 models
- Due to not converting the torch.dtype precision correctly, the
`load_pipeline_from_original_stable_diffusion_ckpt()` was returning
models of dtype float32 regardless of the precision setting. This caused
a precision mismatch crash.
- Problem now fixed (also see #3057 for the same fix to `main`)

## Support for a fourth textual inversion embedding file format
- This variant, exemplified by "easynegative.safetensors" has a single
'embparam' key containing a Tensor.
- Also refactored code to make it easier to read.
- Handle both pickle and safetensor formats.

## Persistent model selection
- To be consistent with WebUI parameter behavior, the currently selected
model is saved on exit and restored on restart for both WebUI and CLI

## Bug fixes
- Name of VAE cache directory was "hug", not "hub". This is fixed.

## VAE fixes
- Allow custom VAEs to be assigned to a legacy model by placing a
like-named vae file adjacent to the checkpoint file.
- The custom VAE will be picked up and incorporated into the diffusers
model if the user chooses to convert/optimize.

## Custom config file loading
- Some of the civitai models instruct users to place a custom .yaml file
adjacent to the checkpoint file. This generally wasn't working because
some of the .yaml files use FrozenCLIPEmbedder rather than
WeightedFrozenCLIPEmbedder, and our FrozenCLIPEmbedder class doesn't
handle the `personalization_config` section used by the textual
inversion manager. Other .yaml files don't have the
`personalization_config` section at all. Both of these issues are
fixed. #1685

## Consistent pytorch version
- There was an inconsistency between the pytorch version requirement in
`pyproject.toml` and the requirement in the installer (which does a
little jiggery-pokery to load torch with the right CUDA/ROCm version
prior to the main pip install). This was causing torch to be installed,
then uninstalled, and reinstalled with a different version number. This
is now fixed.
2023-04-01 10:17:43 -04:00
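A minimal sketch of the "like-named VAE next to the checkpoint" convention mentioned in the VAE fixes above; the accepted suffixes are assumptions.

```py
from pathlib import Path
from typing import Optional

def find_adjacent_vae(checkpoint_path: str) -> Optional[Path]:
    ckpt = Path(checkpoint_path)
    for suffix in (".vae.pt", ".vae.ckpt", ".vae.safetensors"):
        candidate = ckpt.with_name(ckpt.stem + suffix)  # e.g. model.ckpt -> model.vae.pt
        if candidate.exists():
            return candidate
    return None
```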
Lincoln Stein
1e5a44a474 bump version to 2.3.3 final 2023-04-01 09:43:46 -04:00
Lincoln Stein
78ea5d773d Update ldm/invoke/config/invokeai_update.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-04-01 09:43:02 -04:00
Lincoln Stein
7547784e98 Update installer/lib/installer.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-04-01 09:41:38 -04:00
Lincoln Stein
e82641d5f9 Update installer/lib/installer.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-04-01 09:41:25 -04:00
blessedcoolant
beff122d90 build(ui): Add Lora To Other Tabs
Sorry, my bad. Forgot to add it to Image To Image and Unified Canvas. Done now.
2023-04-01 00:26:23 +13:00
blessedcoolant
dabf56bee8 feat: Add Lora Manager to remaining tabs 2023-04-01 00:24:58 +13:00
blessedcoolant
4faf902ec4 build(ui): Rebuild Frontend - Add Lora WebUI
Typescript was broken for some reason. Fixed it and also did a clean build that passes lints.
2023-04-01 00:20:07 +13:00
blessedcoolant
2c5c20c8a0 localization(ui): Localize Lora Stuff 2023-04-01 00:18:41 +13:00
blessedcoolant
a8b9458de2 fix: LoraManager UI not returning a component 2023-04-01 00:17:22 +13:00
blessedcoolant
274d6238fa fix: Typescript being broken 2023-04-01 00:11:20 +13:00
blessedcoolant
10400761f0 build(ui): Add Lora to WebUI 2023-04-01 00:01:01 +13:00
blessedcoolant
b598b844e4 fix(ui): Missing Colors
husky was causing issues
2023-03-31 23:58:06 +13:00
blessedcoolant
8554f81e57 feat(ui): Add Lora To WebUI 2023-03-31 23:53:47 +13:00
Lincoln Stein
74ff73ffc8 default --ckpt_convert to true 2023-03-31 01:51:45 -04:00
Lincoln Stein
993baadc22 making this a prerelease for zipfile purposes 2023-03-31 00:44:39 -04:00
Lincoln Stein
ccfb0b94b9 added @EgoringKosmos recipe for fixing ROCm installs 2023-03-31 00:38:30 -04:00
Lincoln Stein
8fbe019273 Merge branch 'release/2.3.3-rc3' into feat/lora-support-2.3 2023-03-31 00:33:47 -04:00
Lincoln Stein
352805d607 fix for python 3.9 2023-03-31 00:33:10 -04:00
Lincoln Stein
879c80022e preliminary LoRA support ready for testing
Instructions:

1. Download LoRA .safetensors files of your choice and place in
   `INVOKEAIROOT/loras`. Unlike the draft version of this, the file
   names can contain underscores and alphanumerics. Names with
   arbitrary unicode characters are not supported.

2. Add `withLora(lora-file-basename,weight)` to your prompt. The
   weight is optional and will default to 1.0. A few examples, assuming
   that a LoRA file named `loras/sushi.safetensors` is present:

```
family sitting at dinner table eating sushi withLora(sushi,0.9)
family sitting at dinner table eating sushi withLora(sushi, 0.75)
family sitting at dinner table eating sushi withLora(sushi)
```

Multiple `withLora()` prompt fragments are allowed. The weight can be
arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher
weights make the LoRA's influence stronger.

In my limited testing, I found it useful to reduce the CFG to avoid
oversharpening. Also I got better results when running the LoRA on top
of the model on which it was based during training.

Don't try to load a SD 1.x-trained LoRA into a SD 2.x model, and vice
versa. You will get a nasty stack trace. This needs to be cleaned up.

3. You can change the location of the `loras` directory by passing the
   `--lora_directory` option to `invokeai`.

Documentation can be found in docs/features/LORAS.md.
2023-03-31 00:03:16 -04:00
Lincoln Stein
ea5f6b9826 Merge branch 'release/2.3.3-rc3' into feat/lora-support-2.3 2023-03-30 22:02:37 -04:00
Lincoln Stein
4145e27ce6 move personalization fallback section into a static method 2023-03-30 21:53:19 -04:00
Lincoln Stein
3d4f4b677f support external legacy config files with no personalization section 2023-03-30 21:39:05 -04:00
Lincoln Stein
249173faf5 remove extraneous warnings about overwriting trigger terms 2023-03-30 20:37:10 -04:00
Lincoln Stein
794ef868af fix incorrect loading of external VAEs
- Closes #3073
2023-03-30 18:50:27 -04:00
Lincoln Stein
a1ed22517f reenable line completion during CLI edit_model cmd 2023-03-30 15:54:10 -04:00
Lincoln Stein
3765ee9b59 make invokeai-model-install work with editable install 2023-03-30 14:32:35 -04:00
Lincoln Stein
91e4c60876 add solution to ROCm fail-to-install error 2023-03-30 13:50:23 -04:00
Lincoln Stein
46e578e1ef Merge branch 'release/2.3.3-rc3' of github.com:invoke-ai/InvokeAI into release/2.3.3-rc3 2023-03-30 13:22:26 -04:00
Lincoln Stein
3a8ef0a00c make CONCEPTS documentation title more meaningful 2023-03-30 13:21:50 -04:00
Lincoln Stein
2a586f3179 upgrade compel to work with lora syntax 2023-03-30 08:08:33 -04:00
Lincoln Stein
6ce24846eb merge with 2.3 release candidate 6 2023-03-30 07:39:54 -04:00
Lincoln Stein
c2487e4330 Kohya lora models load but generate freezes 2023-03-30 07:38:39 -04:00
Lincoln Stein
cf262dd2ea Update installer/lib/installer.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-03-29 12:44:02 -04:00
Lincoln Stein
5a8d66ab02 merge lora support 2023-03-28 23:54:17 -04:00
Lincoln Stein
b0b0c48d8a bump version to 2.3.3 2023-03-28 23:20:05 -04:00
Lincoln Stein
8404e06d77 update documentation
- Add link to Statcomm's visual guide to docs (his permission pending)
- Update the what's new sections.
2023-03-28 17:52:22 -04:00
Lincoln Stein
a91d01c27a enhancements to update routines
- Allow invokeai-update to update using a release, tag or branch.
- Allow CLI's root directory update routine to update directory
  contents regardless of whether current version is released.
- In model importation routine, clarify wording of instructions when user is
  asked to choose the type of model being imported.
2023-03-28 15:58:36 -04:00
Lincoln Stein
5eeca47887 bump rc version number 2023-03-28 13:08:38 -04:00
Lincoln Stein
66b361294b update embedding file documentation 2023-03-28 12:24:01 -04:00
Lincoln Stein
0fb1e79a0b update model installation documentation 2023-03-28 12:07:47 -04:00
Lincoln Stein
14f1efaf4f launch --model supersedes persistent model 2023-03-28 10:53:32 -04:00
Lincoln Stein
23aa17e387 fix typo in name of vae cache 2023-03-28 10:48:03 -04:00
Lincoln Stein
f23cc54e1b save and restore selected model on startup/exit 2023-03-28 10:39:19 -04:00
Lincoln Stein
e3d992d5d7 add metadata dump script 2023-03-28 10:01:31 -04:00
Lincoln Stein
bb972b2e3d Add support for yet another TI embedding file format (2.3 version) (#3045)
- This variant, exemplified by "easynegative.safetensors" has a single
'embparam' key containing a Tensor.
- Also refactored code to make it easier to read.
- Handle both pickle and safetensor formats.
2023-03-28 00:46:30 -04:00
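A minimal sketch of format-tolerant embedding loading; the key names are taken from the description above and from the common A1111-style layout, and are assumptions rather than an exhaustive list.

```py
import torch
from safetensors.torch import load_file

def load_embedding_tensor(path: str) -> torch.Tensor:
    if path.endswith(".safetensors"):
        data = load_file(path, device="cpu")
    else:
        data = torch.load(path, map_location="cpu")  # pickle-based .pt / .bin
    if "embparam" in data:            # the fourth variant described above
        return data["embparam"]
    if "string_to_param" in data:     # common A1111-style .pt layout
        return next(iter(data["string_to_param"].values()))
    raise ValueError(f"Unrecognized embedding format: {path}")
```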
Lincoln Stein
41a8fdea53 fix bugs in online ckpt conversion of 2.0 models
This commit fixes bugs related to the on-the-fly conversion and loading of
legacy checkpoint models built on SD-2.0 base.

- When legacy checkpoints built on SD-2.0 models were converted
  on-the-fly using --ckpt_convert, generation would crash with a
  precision incompatibility error.

- In addition, broken logic was causing some 2.0-derived ckpt files to
  be converted into diffusers and then processed through the legacy
  generation routines - not good.
2023-03-28 00:11:37 -04:00
Lincoln Stein
a78ff86e42 Merge branch 'v2.3' into enhance/handle-another-embedding-variant 2023-03-27 22:38:36 -04:00
Lincoln Stein
071df30597 handle a fourth variant of embedding .pt files
- This variant, exemplified by "easynegative.safetensors" has a single
  'embparam' key containing a Tensor.
- Also refactored code to make it easier to read.
- Handle both pickle and safetensor formats.
2023-03-26 23:40:29 -04:00
Jordan
9cf7e5f634 Merge branch 'main' into add_lora_support 2023-02-25 19:21:31 -08:00
Jordan
d9c46277ea add peft setup (need to install huggingface/peft) 2023-02-25 20:21:20 -07:00
blessedcoolant
c22d529528 Add node-based invocation system (#1650)
This PR adds the core of the node-based invocation system first
discussed in https://github.com/invoke-ai/InvokeAI/discussions/597 and
implements it through a basic CLI and API. This supersedes #1047, which
was too far behind to rebase.

## Architecture

### Invocations
The core of the new system is **invocations**, found in
`/ldm/invoke/app/invocations`. These represent individual nodes of
execution, each with inputs and outputs. Core invocations are already
implemented (`txt2img`, `img2img`, `upscale`, `face_restore`) as well as
a debug invocation (`show_image`). To implement a new invocation, all
that is required is to add a new implementation in this folder (there is
a markdown document describing the specifics, though it is slightly
out-of-date).

### Sessions
Invocations and links between them are maintained in a **session**.
These can be queued for invocation (either the next ready node, or all
nodes). Some notes:
* Sessions may be added to at any time (including after invocation), but
may not be modified.
* Links are always added with a node, and are always links from existing
nodes to the new node. These links can be relative "history" links, e.g.
`-1` to link from a previously executed node, and can link either
specific outputs, or can opportunistically link all matching outputs by
name and type by using `*`.
* There are no iteration/looping constructs. Most needs for this could
be solved by either duplicating nodes or cloning sessions. This is open
for discussion, but is a difficult problem to solve in a way that
doesn't make the code even more complex/confusing (especially regarding
node ids and history).

### Services
These make up the core the invocation system, found in
`/ldm/invoke/app/services`. One of the key design philosophies here is
that most components should be replaceable when possible. For example,
if someone wants to use cloud storage for their images, they should be
able to replace the image storage service easily.

The services are broken down as follows (several of these are
intentionally implemented with an initial simple/naïve approach):
* Invoker: Responsible for creating and executing **sessions** and
managing services used to do so.
* Session Manager: Manages session history. An on-disk implementation is
provided, which stores sessions as json files on disk, and caches
recently used sessions for quick access.
* Image Storage: Stores images of multiple types. An on-disk
implementation is provided, which stores images on disk and retains
recently used images in an in-memory cache.
* Invocation Queue: Used to queue invocations for execution. An
in-memory implementation is provided.
* Events: An event system, primarily used with socket.io to support
future web UI integration.

## Apps

Apps are available through the `/scripts/invoke-new.py` script (to-be
integrated/renamed).

### CLI
```
python scripts/invoke-new.py
```

Implements a simple CLI. The CLI creates a single session, and
automatically links all inputs to the previous node's output. Commands
are automatically generated from all invocations, with command options
being automatically generated from invocation inputs. Help is also
available for the cli and for each command, and is very verbose.
Additionally, the CLI supports command piping for single-line entry of
multiple commands. Example:

```
> txt2img --prompt "a cat eating sushi" --steps 20 --seed 1234 | upscale | show_image
```

### API
```
python scripts/invoke-new.py --api --host 0.0.0.0
```

Implements an API using FastAPI with Socket.io support for signaling.
API documentation is available at `http://localhost:9090/docs` or
`http://localhost:9090/redoc`. This includes OpenAPI schema for all
available invocations, session interaction APIs, and image APIs.
Socket.io signals are per-session, and can be subscribed to by session
id. These aren't currently auto-documented, though the code for event
emission is centralized in `/ldm/invoke/app/services/events.py`.

A very simple test html and script are available at
`http://localhost:9090/static/test.html` This demonstrates creating a
session from a graph, invoking it, and receiving signals from Socket.io.

## What's left?

* There are a number of features not currently covered by invocations. I
kept the set of invocations small during core development in order to
simplify refactoring as I went. Now that the invocation code has
stabilized, I'd love some help filling those out!
* There's no image metadata generated. It would be fairly
straightforward (and would make good sense) to serialize either a
session and node reference into an image, or the entire node into the
image. There are a lot of questions to answer around source images,
linked images, etc. though. This history is all stored in the session as
well, and with complex sessions, the metadata in an image may lose its
value. This needs some further discussion.
* We need a list of features (both current and future) that would be
difficult to implement without looping constructs so we can have a good
conversation around it. I'm really hoping we can avoid needing
looping/iteration in the graph execution, since it'll necessitate
separating an execution of a graph into its own concept/system, and will
further complicate the system.
* The API likely needs further filling out to support the UI. I think
using the new API for the current UI is possible, and potentially
interesting, since it could work like the new/demo CLI in a "single
operation at a time" workflow. I don't know how compatible that will be
with our UI goals though. It would be nice to support only a single API
though.
* Deeper separation of systems. I intentionally tried to not touch
Generate or other systems too much, but a lot could be gained by
breaking those apart. Even breaking apart Args into two pieces (command
line arguments and the parser for the current CLI) would make it easier
to maintain. This is probably in the future though.
2023-02-26 12:25:41 +13:00
Kyle Schouviller
cd98d88fe7 [nodes] Removed InvokerServices, simplying service model 2023-02-24 20:11:28 -08:00
Kyle Schouviller
34e3aa1f88 parent 9eed1919c2
author Kyle Schouviller <kyle0654@hotmail.com> 1669872800 -0800
committer Kyle Schouviller <kyle0654@hotmail.com> 1676240900 -0800

Adding base node architecture

Fix type annotation errors

Runs and generates, but breaks in saving session

Fix default model value setting. Fix deprecation warning.

Fixed node api

Adding markdown docs

Simplifying Generate construction in apps

[nodes] A few minor changes (#2510)

* Pin api-related requirements

* Remove confusing extra CORS origins list

* Adds response models for HTTP 200

[nodes] Adding graph_execution_state to soon replace session. Adding tests with pytest.

Minor typing fixes

[nodes] Fix some small output query hookups

[node] Fixing some additional typing issues

[nodes] Move and expand graph code. Add base item storage and sqlite implementation.

Update startup to match new code

[nodes] Add callbacks to item storage

[nodes] Adding an InvocationContext object to use for invocations to provide easier extensibility

[nodes] New execution model that handles iteration

[nodes] Fixing the CLI

[nodes] Adding a note to the CLI

[nodes] Split processing thread into separate service

[node] Add error message on node processing failure

Removing old files and duplicated packages

Adding python-multipart
2023-02-24 18:57:02 -08:00
psychedelicious
49ffb64ef3 ui: translations update from weblate (#2804)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-02-25 10:09:37 +11:00
Jordan
ef822902d4 Merge branch 'main' into add_lora_support 2023-02-24 12:06:31 -08:00
Gabriel Mackievicz Telles
ec14e2db35 translationBot(ui): update translation (Portuguese (Brazil))
Currently translated at 91.8% (431 of 469 strings)

Co-authored-by: Gabriel Mackievicz Telles <telles.gabriel@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt_BR/
Translation: InvokeAI/Web UI
2023-02-24 17:54:54 +01:00
Jeff Mahoney
5725fcb3e0 translationBot(ui): added translation (Romanian)
Co-authored-by: Jeff Mahoney <jbmahoney@gmail.com>
2023-02-24 17:54:54 +01:00
gallegonovato
1447b6df96 translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (469 of 469 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-02-24 17:54:54 +01:00
Lincoln Stein
e700da23d8 Sync main with v2.3.1 (#2792)
This PR will bring `main` up to date with released v2.3.1
2023-02-24 11:54:46 -05:00
Jordan
036ca31282 Merge pull request #4 from damian0815/pr/2712
tweaks and small refactors
2023-02-24 03:49:41 -08:00
Damian Stewart
7dbe027b18 tweaks and small refactors 2023-02-24 12:46:57 +01:00
Jordan
523e44ccfe simplify manager 2023-02-24 01:32:09 -07:00
Jordan
6a7948466e Merge branch 'main' into add_lora_support 2023-02-23 18:33:52 -08:00
Jordan
4ce8b1ba21 setup cross conditioning for lora 2023-02-23 19:27:45 -07:00
Jordan
68a3132d81 move legacy lora manager to its own file 2023-02-23 17:41:20 -07:00
Jordan
b69f9d4af1 initial setup of cross attention 2023-02-23 17:30:34 -07:00
Jordan
6a1129ab64 switch all non-diffusers stuff to legacy, and load through compel prompts 2023-02-23 16:48:33 -07:00
Jordan
8e1fd92e7f Merge branch 'main' into add_lora_support 2023-02-23 15:15:46 -08:00
Jordan
f64a4db5fa setup legacy class to abstract hacky logic for non-diffusers lora and format prompt for compel 2023-02-23 05:56:39 -07:00
Jordan
3f477da46c Merge branch 'add_lora_support' of https://github.com/jordanramstad/InvokeAI into add_lora_support 2023-02-23 01:45:34 -07:00
Jordan
71972c3709 re-enable load attn procs support (no multiplier) 2023-02-23 01:44:13 -07:00
Jordan
d4083221a6 Merge branch 'main' into add_lora_support 2023-02-22 13:28:04 -08:00
Jordan
5b4a241f5c Merge branch 'main' into add_lora_support 2023-02-21 20:38:33 -08:00
Jordan
cd333e414b move key converter to wrapper 2023-02-21 21:38:15 -07:00
Jordan
af3543a8c7 further cleanup and implement wrapper 2023-02-21 20:42:40 -07:00
Jordan
686f6ef8d6 Merge branch 'main' into add_lora_support 2023-02-21 18:35:11 -08:00
Jordan
f70b7272f3 cleanup / concept of loading through diffusers 2023-02-21 19:33:39 -07:00
Jordan
24d92979db fix typo 2023-02-21 02:08:02 -07:00
Jordan
c669336d6b Update lora_manager.py 2023-02-21 02:05:11 -07:00
Jordan
5529309e73 adjusting back to hooks, forcing to be last in execution 2023-02-21 01:34:06 -07:00
Jordan
49c0516602 change hook to override 2023-02-20 23:45:57 -07:00
Jordan
c1c62f770f Merge branch 'main' into add_lora_support 2023-02-20 20:33:59 -08:00
Jordan
e2b6dfeeb9 Update generate.py 2023-02-20 21:33:20 -07:00
Jordan
8f527c2b2d Merge pull request #2 from jordanramstad/prompt-fix
fix prompt
2023-02-20 20:11:00 -08:00
neecapp
3732af63e8 fix prompt 2023-02-20 23:06:05 -05:00
Jordan
de89041779 optimize functions for unloading 2023-02-20 17:02:36 -07:00
Jordan
488326dd95 Merge branch 'add_lora_support' of https://github.com/jordanramstad/InvokeAI into add_lora_support 2023-02-20 16:50:16 -07:00
Jordan
c3edede73f add notes and adjust functions 2023-02-20 16:49:59 -07:00
Jordan
6e730bd654 Merge branch 'main' into add_lora_support 2023-02-20 15:34:52 -08:00
Jordan
884a5543c7 adjust loader to use a settings dict 2023-02-20 16:33:53 -07:00
Jordan
ac972ebbe3 update prompt setup so lora's can be loaded in other ways 2023-02-20 16:06:30 -07:00
Jordan
3c6c18b34c cleanup suggestions from neecap 2023-02-20 15:19:29 -07:00
Jordan
8f6e43d4a4 code cleanup 2023-02-20 14:06:58 -07:00
Jordan
404000bf93 Merge pull request #1 from neecapp/add_lora_support
Rewrite lora manager with hooks
2023-02-20 12:31:03 -08:00
neecapp
e744774171 Rewrite lora manager with hooks 2023-02-20 13:49:16 -05:00
Jordan
096e1d3a5d start of rewrite for add / remove 2023-02-20 02:37:44 -07:00
Jordan
82e4d5aed2 change to new method to load safetensors 2023-02-19 17:33:24 -07:00
Jordan
5a7145c485 Create convert_lora.py 2023-02-18 23:18:41 -07:00
Jordan
afc8639c25 add pending support for safetensors with cloneofsimo/lora 2023-02-18 21:07:34 -07:00
Jordan
141be95c2c initial setup of lora support 2023-02-18 05:29:04 -07:00
91 changed files with 4459 additions and 3678 deletions

.coveragerc (new file): 6 lines changed

@@ -0,0 +1,6 @@
[run]
omit='.env/*'
source='.'
[report]
show_missing = true

.github/CODEOWNERS (vendored): 34 lines changed

@@ -1,13 +1,13 @@
# continuous integration
/.github/workflows/ @mauwii @lstein @blessedcoolant
/.github/workflows/ @lstein @blessedcoolant
# documentation
/docs/ @lstein @mauwii @blessedcoolant
mkdocs.yml @mauwii @lstein
/docs/ @lstein @blessedcoolant
mkdocs.yml @lstein @ebr
# installation and configuration
/pyproject.toml @mauwii @lstein @ebr
/docker/ @mauwii
/pyproject.toml @lstein @ebr
/docker/ @lstein
/scripts/ @ebr @lstein @blessedcoolant
/installer/ @ebr @lstein
ldm/invoke/config @lstein @ebr
@@ -21,13 +21,13 @@ invokeai/configs @lstein @ebr @blessedcoolant
# generation and model management
/ldm/*.py @lstein @blessedcoolant
/ldm/generate.py @lstein @keturn
/ldm/generate.py @lstein @gregghelt2
/ldm/invoke/args.py @lstein @blessedcoolant
/ldm/invoke/ckpt* @lstein @blessedcoolant
/ldm/invoke/ckpt_generator @lstein @blessedcoolant
/ldm/invoke/CLI.py @lstein @blessedcoolant
/ldm/invoke/config @lstein @ebr @mauwii @blessedcoolant
/ldm/invoke/generator @keturn @damian0815
/ldm/invoke/config @lstein @ebr @blessedcoolant
/ldm/invoke/generator @gregghelt2 @damian0815
/ldm/invoke/globals.py @lstein @blessedcoolant
/ldm/invoke/merge_diffusers.py @lstein @blessedcoolant
/ldm/invoke/model_manager.py @lstein @blessedcoolant
@@ -36,17 +36,17 @@ invokeai/configs @lstein @ebr @blessedcoolant
/ldm/invoke/restoration @lstein @blessedcoolant
# attention, textual inversion, model configuration
/ldm/models @damian0815 @keturn @blessedcoolant
/ldm/models @damian0815 @gregghelt2 @blessedcoolant
/ldm/modules/textual_inversion_manager.py @lstein @blessedcoolant
/ldm/modules/attention.py @damian0815 @keturn
/ldm/modules/diffusionmodules @damian0815 @keturn
/ldm/modules/distributions @damian0815 @keturn
/ldm/modules/ema.py @damian0815 @keturn
/ldm/modules/attention.py @damian0815 @gregghelt2
/ldm/modules/diffusionmodules @damian0815 @gregghelt2
/ldm/modules/distributions @damian0815 @gregghelt2
/ldm/modules/ema.py @damian0815 @gregghelt2
/ldm/modules/embedding_manager.py @lstein
/ldm/modules/encoders @damian0815 @keturn
/ldm/modules/image_degradation @damian0815 @keturn
/ldm/modules/losses @damian0815 @keturn
/ldm/modules/x_transformer.py @damian0815 @keturn
/ldm/modules/encoders @damian0815 @gregghelt2
/ldm/modules/image_degradation @damian0815 @gregghelt2
/ldm/modules/losses @damian0815 @gregghelt2
/ldm/modules/x_transformer.py @damian0815 @gregghelt2
# Nodes
apps/ @Kyle0654 @jpphoto

.gitignore (vendored): 1 line changed

@@ -68,6 +68,7 @@ htmlcov/
.cache
nosetests.xml
coverage.xml
cov.xml
*.cover
*.py,cover
.hypothesis/

.pytest.ini (new file): 5 lines changed

@@ -0,0 +1,5 @@
[pytest]
DJANGO_SETTINGS_MODULE = webtas.settings
; python_files = tests.py test_*.py *_tests.py
addopts = --cov=. --cov-config=.coveragerc --cov-report xml:cov.xml


@@ -0,0 +1,93 @@
# Invoke.AI Architecture
```mermaid
flowchart TB
subgraph apps[Applications]
webui[WebUI]
cli[CLI]
subgraph webapi[Web API]
api[HTTP API]
sio[Socket.IO]
end
end
subgraph invoke[Invoke]
direction LR
invoker
services
sessions
invocations
end
subgraph core[AI Core]
Generate
end
webui --> webapi
webapi --> invoke
cli --> invoke
invoker --> services & sessions
invocations --> services
sessions --> invocations
services --> core
%% Styles
classDef sg fill:#5028C8,font-weight:bold,stroke-width:2,color:#fff,stroke:#14141A
classDef default stroke-width:2px,stroke:#F6B314,color:#fff,fill:#14141A
class apps,webapi,invoke,core sg
```
## Applications
Applications are built on top of the invoke framework. They should construct `invoker` and then interact through it. They should avoid interacting directly with core code in order to support a variety of configurations.
### Web UI
The Web UI is built on top of an HTTP API built with [FastAPI](https://fastapi.tiangolo.com/) and [Socket.IO](https://socket.io/). The frontend code is found in `/frontend` and the backend code is found in `/ldm/invoke/app/api_app.py` and `/ldm/invoke/app/api/`. The code is further organized as such:
| Component | Description |
| --- | --- |
| api_app.py | Sets up the API app, annotates the OpenAPI spec with additional data, and runs the API |
| dependencies | Creates all invoker services and the invoker, and provides them to the API |
| events | An eventing system that could in the future be adapted to support horizontal scale-out |
| sockets | The Socket.IO interface - handles listening to and emitting session events (events are defined in the events service module) |
| routers | API definitions for different areas of API functionality |
### CLI
The CLI is built automatically from invocation metadata, and also supports invocation piping and auto-linking. Code is available in `/ldm/invoke/app/cli_app.py`.
## Invoke
The Invoke framework provides the interface to the underlying AI systems and is built with flexibility and extensibility in mind. There are four major concepts: invoker, sessions, invocations, and services.
### Invoker
The invoker (`/ldm/invoke/app/services/invoker.py`) is the primary interface through which applications interact with the framework. Its primary purpose is to create, manage, and invoke sessions. It also maintains two sets of services:
- **invocation services**, which are used by invocations to interact with core functionality.
- **invoker services**, which are used by the invoker to manage sessions and manage the invocation queue.
### Sessions
Invocations and links between them form a graph, which is maintained in a session. Sessions can be queued for invocation, which will execute their graph (either the next ready invocation, or all invocations). Sessions also maintain execution history for the graph (including storage of any outputs). An invocation may be added to a session at any time, and there is capability to add an entire graph at once, as well as to automatically link new invocations to previous invocations. Invocations cannot be deleted or modified once added.
The session graph does not support looping. This is left as an application problem to prevent additional complexity in the graph.
### Invocations
Invocations represent individual units of execution, with inputs and outputs. All invocations are located in `/ldm/invoke/app/invocations`, and are all automatically discovered and made available in the applications. These are the primary way to expose new functionality in Invoke.AI, and the [implementation guide](INVOCATIONS.md) explains how to add new invocations.
### Services
Services provide invocations access to AI Core functionality and other necessary functionality (e.g. image storage). These are available in `/ldm/invoke/app/services`. As a general rule, new services should provide an interface as an abstract base class, and may provide a lightweight local implementation by default in their module. The goal for all services should be to enable the usage of different implementations (e.g. using cloud storage for image storage), but should not load any module dependencies unless that implementation has been used (i.e. don't import anything that won't be used, especially if it's expensive to import).
## AI Core
The AI Core is represented by the rest of the code base (i.e. the code outside of `/ldm/invoke/app/`).


@@ -0,0 +1,105 @@
# Invocations
Invocations represent a single operation, its inputs, and its outputs. These operations and their outputs can be chained together to generate and modify images.
## Creating a new invocation
To create a new invocation, either find the appropriate module file in `/ldm/invoke/app/invocations` to add your invocation to, or create a new one in that folder. All invocations in that folder will be discovered and made available to the CLI and API automatically. Invocations make use of [typing](https://docs.python.org/3/library/typing.html) and [pydantic](https://pydantic-docs.helpmanual.io/) for validation and integration into the CLI and API.
An invocation looks like this:
```py
class UpscaleInvocation(BaseInvocation):
"""Upscales an image."""
type: Literal['upscale'] = 'upscale'
# Inputs
image: Union[ImageField,None] = Field(description="The input image")
strength: float = Field(default=0.75, gt=0, le=1, description="The strength")
level: Literal[2,4] = Field(default=2, description = "The upscale level")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get(self.image.image_type, self.image.image_name)
results = context.services.generate.upscale_and_reconstruct(
image_list = [[image, 0]],
upscale = (self.level, self.strength),
strength = 0.0, # GFPGAN strength
save_original = False,
image_callback = None,
)
# Results are image and seed, unwrap for now
# TODO: can this return multiple results?
image_type = ImageType.RESULT
image_name = context.services.images.create_name(context.graph_execution_state_id, self.id)
context.services.images.save(image_type, image_name, results[0][0])
return ImageOutput(
image = ImageField(image_type = image_type, image_name = image_name)
)
```
Each portion is important to implement correctly.
### Class definition and type
```py
class UpscaleInvocation(BaseInvocation):
"""Upscales an image."""
type: Literal['upscale'] = 'upscale'
```
All invocations must derive from `BaseInvocation`. They should have a docstring that declares what they do in a single, short line. They should also have a `type` with a type hint that's `Literal["command_name"]`, where `command_name` is what the user will type on the CLI or use in the API to create this invocation. The `command_name` must be unique. The `type` must be assigned to the value of the literal in the type hint.
### Inputs
```py
    # Inputs
    image: Union[ImageField,None] = Field(description="The input image")
    strength: float = Field(default=0.75, gt=0, le=1, description="The strength")
    level: Literal[2,4] = Field(default=2, description="The upscale level")
```
Inputs consist of three parts: a name, a type hint, and a `Field` with default, description, and validation information. For example:
| Part | Value | Description |
| ---- | ----- | ----------- |
| Name | `strength` | This field is referred to as `strength` |
| Type Hint | `float` | This field must be of type `float` |
| Field | `Field(default=0.75, gt=0, le=1, description="The strength")` | The default value is `0.75`, the value must be in the range (0,1], and help text will show "The strength" for this field. |
Notice that `image` has type `Union[ImageField,None]`. The `Union` allows this field to be parsed with `None` as a value, which enables linking to previous invocations. All fields should either provide a default value or allow `None` as a value, so that they can be overwritten with a linked output from another invocation.
The special type `ImageField` is also used here. All images are passed as `ImageField`, which protects them from pydantic validation errors (since images only ever come from links).
Finally, note that for all linking, the `type` of the linked fields must match. If the `name` also matches, then the field can be **automatically linked** to the matching output of a previous invocation.
### Invoke Function
```py
    def invoke(self, context: InvocationContext) -> ImageOutput:
        image = context.services.images.get(self.image.image_type, self.image.image_name)
        results = context.services.generate.upscale_and_reconstruct(
            image_list = [[image, 0]],
            upscale = (self.level, self.strength),
            strength = 0.0, # GFPGAN strength
            save_original = False,
            image_callback = None,
        )

        # Results are image and seed, unwrap for now
        image_type = ImageType.RESULT
        image_name = context.services.images.create_name(context.graph_execution_state_id, self.id)
        context.services.images.save(image_type, image_name, results[0][0])
        return ImageOutput(
            image = ImageField(image_type = image_type, image_name = image_name)
        )
```
The `invoke` function is the last portion of an invocation. It is provided an `InvocationContext` which contains services to perform work as well as a `session_id` for use as needed. It should return a class with output values that derives from `BaseInvocationOutput`.
Before being called, the invocation will have all of its fields set from defaults, inputs, and finally links (overriding in that order).
Assume that this invocation may be running simultaneously with other invocations, may be running on another machine, or in other interesting scenarios. If you need functionality, please provide it as a service in the `InvocationServices` class, and make sure it can be overridden.
### Outputs
```py
class ImageOutput(BaseInvocationOutput):
    """Base class for invocations that output an image"""
    type: Literal['image'] = 'image'

    image: ImageField = Field(default=None, description="The output image")
```
Output classes look like an invocation class without the invoke method. Prefer to use an existing output class if available, and prefer to name inputs the same as outputs when possible, to promote automatic invocation linking.
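To tie these pieces together, here is a sketch of a hypothetical invocation that both consumes and produces an `image` field, so it can be auto-linked on both sides. It is illustrative only and assumes the image service returns a PIL image:
```py
from PIL import ImageFilter


class BlurInvocation(BaseInvocation):
    """Blurs an image."""
    type: Literal['blur'] = 'blur'

    # Inputs: naming this field "image" lets it auto-link to any ImageOutput
    image: Union[ImageField, None] = Field(description="The input image")
    radius: float = Field(default=2.0, gt=0, description="The blur radius")

    def invoke(self, context: InvocationContext) -> ImageOutput:
        image = context.services.images.get(self.image.image_type, self.image.image_name)
        blurred = image.filter(ImageFilter.GaussianBlur(self.radius))

        image_type = ImageType.RESULT
        image_name = context.services.images.create_name(
            context.graph_execution_state_id, self.id
        )
        context.services.images.save(image_type, image_name, blurred)
        return ImageOutput(
            image=ImageField(image_type=image_type, image_name=image_name)
        )
```
Because both its input and its output are named `image`, this invocation can be automatically linked after any invocation that produces an `ImageOutput`.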

View File

@@ -1,5 +1,5 @@
---
title: Concepts Library
title: Styles and Subjects
---
# :material-library-shelves: The Hugging Face Concepts Library and Importing Textual Inversion files
@@ -25,10 +25,14 @@ library which downloads and merges TI files automatically upon request. You can
also install your own or others' TI files by placing them in a designated
directory.
You may also be interested in using [LoRA Models](LORAS.md) to
generate images with specialized styles and subjects.
### An Example
Here are a few examples to illustrate how it works. All these images were
generated using the command-line client and the Stable Diffusion 1.5 model:
Here are a few examples to illustrate how Textual Inversion works. All
these images were generated using the command-line client and the
Stable Diffusion 1.5 model:
| Japanese gardener | Japanese gardener <ghibli-face> | Japanese gardener <hoi4-leaders> | Japanese gardener <cartoona-animals> |
| :--------------------------------: | :-----------------------------------: | :------------------------------------: | :----------------------------------------: |
@@ -109,21 +113,50 @@ For example, TI files generated by the Hugging Face toolkit share the name
`learned_embedding.bin`. You can use subdirectories to keep them distinct.
At startup time, InvokeAI will scan the `embeddings` directory and load any TI
files it finds there. At startup you will see a message similar to this one:
files it finds there. At startup you will see messages similar to these:
```bash
>> Current embedding manager terms: *, <HOI4-Leader>, <princess-knight>
>> Loading embeddings from /data/lstein/invokeai-2.3/embeddings
| Loading v1 embedding file: style-hamunaptra
| Loading v4 embedding file: embeddings/learned_embeds-steps-500.bin
| Loading v2 embedding file: lfa
| Loading v3 embedding file: easynegative
| Loading v1 embedding file: rem_rezero
| Loading v2 embedding file: midj-strong
| Loading v4 embedding file: anime-background-style-v2/learned_embeds.bin
| Loading v4 embedding file: kamon-style/learned_embeds.bin
** Notice: kamon-style/learned_embeds.bin was trained on a model with an incompatible token dimension: 768 vs 1024.
>> Textual inversion triggers: <anime-background-style-v2>, <easynegative>, <lfa>, <midj-strong>, <milo>, Rem3-2600, Style-Hamunaptra
```
Note the `*` trigger term. This is a placeholder term that many early TI
tutorials taught people to use rather than a more descriptive term.
Unfortunately, if you have multiple TI files that all use this term, only the
first one loaded will be triggered by use of the term.
Textual Inversion embeddings trained on version 1.X stable diffusion
models are incompatible with version 2.X models and vice-versa.
To avoid this problem, you can use the `merge_embeddings.py` script to merge two
or more TI files together. If it encounters a collision of terms, the script
will prompt you to select new terms that do not collide. See
[Textual Inversion](TEXTUAL_INVERSION.md) for details.
After the embeddings load, InvokeAI will print out a list of all the
recognized trigger terms. To trigger the term, include it in the
prompt exactly as written, including angle brackets if any and
respecting the capitalization.
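For example, using two of the trigger terms from the listing above (the prompt text itself is just an illustration):
```
a cottage garden at dawn <anime-background-style-v2>
portrait of an explorer, Style-Hamunaptra
```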
There are at least four different embedding file formats, and each uses
a different convention for the trigger terms. In some cases, the
trigger term is specified in the file contents and may or may not be
surrounded by angle brackets. In the example above, `Rem3-2600`,
`Style-Hamunaptra`, and `<midj-strong>` were specified this way and
there is no easy way to change the term.
In other cases the trigger term is not contained within the embedding
file. In this case, InvokeAI constructs a trigger term consisting of
the base name of the file (without the file extension) surrounded by
angle brackets. In the example above `<easynegative>` is such a file
(the filename was `easynegative.safetensors`). In such cases, you can
change the trigger term simply by renaming the file.
## Training your own Textual Inversion models
InvokeAI provides a script that lets you train your own Textual
Inversion embeddings using a small number (about a half-dozen) images
of your desired style or subject. Please see [Textual
Inversion](TEXTUAL_INVERSION.md) for details.
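For command-line users this is exposed as the `invokeai-ti` script (listed in the command summary later in this changeset); a typical console-forms invocation looks like this:
```bash
invokeai-ti --gui
```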
## Further Reading

docs/features/LORAS.md Normal file
View File

@@ -0,0 +1,100 @@
---
title: Low-Rank Adaptation (LoRA) Models
---
# :material-library-shelves: Using Low-Rank Adaptation (LoRA) Models
## Introduction
LoRA is a technique for fine-tuning Stable Diffusion models using much
less time and memory than traditional training techniques. The
resulting model files are much smaller than full model files, and can
be used to generate specialized styles and subjects.
LoRAs are built on top of Stable Diffusion v1.x or 2.x checkpoint or
diffusers models. To load a LoRA, you include its name in the text
prompt using a simple syntax described below. While you will generally
get the best results when you use the same model the LoRA was trained
on, they will work to a greater or lesser extent with other models.
The major caveat is that a LoRA built on top of an SD v1.x model cannot
be used with a v2.x model, and vice-versa. If you try, you will get an
error! You may refer to multiple LoRAs in your prompt.
When you apply a LoRA in a prompt you can specify a weight. The higher
the weight, the more influence it will have on the image. Useful
weights usually fall between 0.0 and 1.0, with 0.5 to 1.0 being the
most typical range. However, you can specify a higher weight if you
wish. Like models, each LoRA has a slightly different useful weight
range and will interact with other generation parameters such as the
CFG scale, step count and sampler. The author of the LoRA will often
provide guidance on the best settings, but feel free to experiment. Be
aware that it often helps to reduce the CFG value when using LoRAs.
## Installing LoRAs
This is very easy! Download a LoRA model file from your favorite site
(e.g. [CIVITAI](https://civitai.com)) and place it in the `loras`
folder in the InvokeAI root directory (usually `~/invokeai/loras` on
Linux/Macintosh machines, and `C:\Users\your-name\invokeai\loras` on
Windows systems). If the `loras` folder does not already exist, just
create it. The vast majority of LoRA models use the Kohya file format,
which is a type of `.safetensors` file.
You may change where InvokeAI looks for the `loras` folder by passing the
`--lora_directory` option to the `invoke.sh`/`invoke.bat` launcher, or
by placing the option in `invokeai.init`. For example:
```
invoke.sh --lora_directory=C:\Users\your-name\SDModels\lora
```
## Using a LoRA in your prompt
To activate a LoRA use the syntax `withLora(my-lora-name,weight)`
somewhere in the text of the prompt. The position doesn't matter; use
whatever is most comfortable for you.
For example, if you have a LoRA named `parchment_people.safetensors`
in your `loras` directory, you can load it with a weight of 0.9 with a
prompt like this one:
```
family sitting at dinner table withLora(parchment_people,0.9)
```
Add additional `withLora()` phrases to load more LoRAs.
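For instance, with a second (hypothetical) LoRA file named `stained_glass.safetensors` in your `loras` directory:
```
family sitting at dinner table withLora(parchment_people,0.9) withLora(stained_glass,0.5)
```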
You may omit the weight entirely to default to a weight of 1.0:
```
family sitting at dinner table withLora(parchment_people)
```
If you watch the console as your prompt executes, you will see
messages relating to the loading and execution of the LoRA. If things
don't work as expected, note down the console messages and report them
on the InvokeAI Issues pages or Discord channel.
That's pretty much all you need to know!
## Training Kohya Models
InvokeAI cannot currently train LoRA models, but it can load and use
existing ones to generate images. While there are several LoRA
model file formats, the predominant one is ["Kohya"
format](https://github.com/kohya-ss/sd-scripts), written by [Kohya
S.](https://github.com/kohya-ss). InvokeAI provides support for this
format. For creating your own Kohya models, we recommend the Windows
GUI written by former InvokeAI-team member
[bmaltais](https://github.com/bmaltais), which can be found at
[kohya_ss](https://github.com/bmaltais/kohya_ss).
We can also recommend the [HuggingFace DreamBooth Training
UI](https://huggingface.co/spaces/lora-library/LoRA-DreamBooth-Training-UI),
a paid service that supports both Textual Inversion and LoRA training.
You may also be interested in [Textual
Inversion](TEXTUAL_INVERSION.md) training, which is supported by
InvokeAI as a text console and command-line tool.

View File

@@ -20,6 +20,8 @@ title: Overview
Scriptable access to InvokeAI's features.
- [Visual Manual for InvokeAI](https://docs.google.com/presentation/d/e/2PACX-1vSE90aC7bVVg0d9KXVMhy-Wve-wModgPFp7AGVTOCgf4xE03SnV24mjdwldolfCr59D_35oheHe4Cow/pub?start=false&loop=true&delayms=60000) (contributed by Statcomm)
- Image Generation
- [Prompt Engineering](PROMPTS.md)

View File

@@ -142,6 +142,10 @@ This method is recommended for those familiar with running Docker containers
- [WebUI overview](features/WEB.md)
- [WebUI hotkey reference guide](features/WEBUIHOTKEYS.md)
- [WebUI Unified Canvas for Img2Img, inpainting and outpainting](features/UNIFIED_CANVAS.md)
- [Visual Manual for InvokeAI v2.3.1](https://docs.google.com/presentation/d/e/2PACX-1vSE90aC7bVVg0d9KXVMhy-Wve-wModgPFp7AGVTOCgf4xE03SnV24mjdwldolfCr59D_35oheHe4Cow/pub?start=false&loop=true&delayms=60000) (contributed by Statcomm)
<!-- separator -->
<!-- separator -->
### The InvokeAI Command Line Interface
@@ -155,6 +159,7 @@ This method is recommended for those familiar with running Docker containers
- [Inpainting](features/INPAINTING.md)
- [Outpainting](features/OUTPAINTING.md)
- [Adding custom styles and subjects](features/CONCEPTS.md)
- [Using LoRA models](features/LORAS.md)
- [Upscaling and Face Reconstruction](features/POSTPROCESS.md)
- [Embiggen upscaling](features/EMBIGGEN.md)
- [Other Features](features/OTHER.md)
@@ -165,7 +170,7 @@ This method is recommended for those familiar with running Docker containers
- [Installing](installation/050_INSTALLING_MODELS.md)
- [Model Merging](features/MODEL_MERGING.md)
- [Style/Subject Concepts and Embeddings](features/CONCEPTS.md)
- [Adding custom styles and subjects via embeddings](features/CONCEPTS.md)
- [Textual Inversion](features/TEXTUAL_INVERSION.md)
- [Not Safe for Work (NSFW) Checker](features/NSFW.md)
<!-- separator -->
@@ -177,6 +182,154 @@ This method is recommended for those familiar with running Docker containers
## :octicons-log-16: Latest Changes
### v2.3.3 <small>(29 March 2023)</small>
#### Bug Fixes
1. When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
2. Textual inversion will select an appropriate batchsize based on whether `xformers` is active, and will default to `xformers` enabled if the library is detected.
3. The batch script log file names have been fixed to be compatible with Windows.
4. Occasional corruption of the `.next_prefix` file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
5. An infinite loop when opening the developer's console from within the `invoke.sh` script has been corrected.
#### Enhancements
1. It is now possible to load and run several community-contributed SD-2.0 based models, including the infamous "Illuminati" model.
2. The "NegativePrompts" embedding file, and others like it, can now be loaded by placing it in the InvokeAI `embeddings` directory.
3. If no `--model` is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
4. On Linux systems, the `invoke.sh` launcher now uses a prettier console-based interface. To take advantage of it, install the `dialog` package using your package manager (e.g. `sudo apt install dialog`).
5. When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model following this example:
```
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt # or my-favorite-model.vae.safetensors
```
### v2.3.2 <small>(13 March 2023)</small>
#### Bugfixes
Since version 2.3.1 the following bugs have been fixed:
1. Black images appearing for potential NSFW images when generating with legacy checkpoint models and both `--no-nsfw_checker` and `--ckpt_convert` turned on.
2. Black images appearing when generating from models fine-tuned on Stable-Diffusion-2-1-base. When importing V2-derived models, you may be asked to select whether the model was derived from a "base" model (512 pixels) or the 768-pixel SD-2.1 model.
3. The "Use All" button was not restoring the Hi-Res Fix setting on the WebUI
4. When using the model installer console app, models failed to import correctly when importing from directories with spaces in their names. A similar issue with the output directory was also fixed.
5. Crashes that occurred during model merging.
6. Restore previous naming of Stable Diffusion base and 768 models.
7. Upgraded to latest versions of `diffusers`, `transformers`, `safetensors` and `accelerate` libraries upstream. We hope that this will fix the `assertion NDArray > 2**32` issue that MacOS users have had when generating images larger than 768x768 pixels. Please report back.
As part of the upgrade to `diffusers`, the location of the diffusers-based models has changed from `models/diffusers` to `models/hub`. When you launch InvokeAI for the first time, it will prompt you to OK a one-time move. This should be quick and harmless, but if you have modified your `models/diffusers` directory in some way, for example using symlinks, you may wish to cancel the migration and make appropriate adjustments.
#### New "Invokeai-batch" script
2.3.2 introduces a new command-line only script called
`invokeai-batch` that can be used to generate hundreds of images from
prompts and settings that vary systematically. This can be used to try
the same prompt across multiple combinations of models, steps, CFG
settings and so forth. It also allows you to template prompts and
generate a combinatorial list like:
```
a shack in the mountains, photograph
a shack in the mountains, watercolor
a shack in the mountains, oil painting
a chalet in the mountains, photograph
a chalet in the mountains, watercolor
a chalet in the mountains, oil painting
a shack in the desert, photograph
...
```
If you have a system with multiple GPUs, or a single GPU with lots of
VRAM, you can parallelize generation across the combinatorial set,
reducing wait times and using your system's resources efficiently
(make sure you have good GPU cooling).
To try `invokeai-batch` out, launch the "developer's console" using
the `invoke` launcher script, or activate the invokeai virtual
environment manually. From the console, give the command
`invokeai-batch --help` in order to learn how the script works and
create your first template file for dynamic prompt generation.
### v2.3.1 <small>(26 February 2023)</small>
This is primarily a bugfix release, but it does provide several new features that will improve the user experience.
#### Enhanced support for model management
InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.
There are three ways of accessing the model management features:
1. ***From the WebUI***, click on the cube to the right of the model selection menu. This will bring up a form that allows you to import models individually from your local disk or scan a directory for models to import.
![image](https://user-images.githubusercontent.com/111189/220638091-918492cc-0719-4194-b033-3741e8289b30.png)
2. **Using the Model Installer App**
Choose option (5) _download and install models_ from the `invoke` launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or import checkpoint, safetensors, and diffusers models from a local disk or the internet. The example below shows importing two checkpoint URLs from popular SD sites and a HuggingFace diffusers model using its Repository ID. It also shows how to designate a folder to be scanned at startup time for new models to import.
Command-line users can start this app using the command `invokeai-model-install`.
![image](https://user-images.githubusercontent.com/111189/220660363-22ff3a2e-8082-410e-a818-d2b3a0529bac.png)
3. **Using the Command Line Client (CLI)**
The `!install_model` and `!convert_model` commands have been enhanced to allow entering of URLs and local directories to scan and import. The first command installs .ckpt and .safetensors files as-is. The second one converts them into the faster diffusers format before installation.
Internally InvokeAI is able to probe the contents of a .ckpt or .safetensors file to distinguish among v1.x, v2.x and inpainting models. This means that you do **not** need to include "inpaint" in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into diffusers format the first time you use them.
Please see [INSTALLING MODELS](https://invoke-ai.github.io/InvokeAI/installation/050_INSTALLING_MODELS/) for more information on model management.
#### An Improved Installer Experience
The installer now launches a console-based UI for setting and changing commonly-used startup options:
![image](https://user-images.githubusercontent.com/111189/220644777-3d3a90ca-f9e2-4e6d-93da-cbdd66bf12f3.png)
After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching `invoke.sh`/`invoke.bat` and entering option (6) _change InvokeAI startup options_
Command-line users can launch the new configure app using `invokeai-configure`.
This release also comes with a renewed updater. To do an update without going through a whole reinstallation, launch `invoke.sh` or `invoke.bat` and choose option (9) _update InvokeAI_ . This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or any released or unreleased version you choose by selecting the tag or branch of the desired version.
![image](https://user-images.githubusercontent.com/111189/220650124-30a77137-d9cd-406e-a87d-d8283f99a4b3.png)
Command-line users can run this interface by typing `invokeai-update`
#### Image Symmetry Options
There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting _Symmetry_ from the image generation settings, or within the CLI by using the options `--h_symmetry_time_pct` and `--v_symmetry_time_pct` (these can be abbreviated to `--h_sym` and `--v_sym` like all other options).
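As a hypothetical example (assuming the option takes a fraction between 0 and 1 of the total step count, as its name suggests), a CLI command might look like this:
```
invoke> a stained glass window of a rose -s 50 --h_symmetry_time_pct 0.5
```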
![image](https://user-images.githubusercontent.com/111189/220658687-47fd0f2c-7069-4d95-aec9-7196fceb360d.png)
#### A New Unified Canvas Look
This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select _Use Canvas Beta Layout_:
![image](https://user-images.githubusercontent.com/111189/220646958-b7eca95e-dc39-4cd2-b277-63eac98ed446.png)
Refresh the screen and go to the Unified Canvas (left side of screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself:
![image](https://user-images.githubusercontent.com/111189/220647560-4a9265a1-6926-44f9-9d08-e1ef2ce61ff8.png)
#### Model conversion and merging within the WebUI
The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the `invoke.sh`/`invoke.bat` scripts.
#### An easier way to contribute translations to the WebUI
We have migrated our translation efforts to [Weblate](https://hosted.weblate.org/engage/invokeai/), a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief [translation guide](https://github.com/invoke-ai/InvokeAI/blob/v2.3.1/docs/other/TRANSLATION.md) for more information on how to contribute.
#### Numerous internal bugfixes and performance issues
This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to `diffusers 0.13.0`, and using the `compel` library for prompt parsing. See [Detailed Change Log](#full-change-log) for a detailed list of bugs caught and squished.
#### Summary of InvokeAI command line scripts (all accessible via the launcher menu)
| Command | Description |
|--------------------------|---------------------------------------------------------------------|
| `invokeai` | Command line interface |
| `invokeai --web` | Web interface |
| `invokeai-model-install` | Model installer with console forms-based front end |
| `invokeai-ti --gui` | Textual inversion, with a console forms-based front end |
| `invokeai-merge --gui` | Model merging, with a console forms-based front end |
| `invokeai-configure` | Startup configuration; can also be used to reinstall support models |
| `invokeai-update` | InvokeAI software updater |
### v2.3.0 <small>(9 February 2023)</small>
#### Migration to Stable Diffusion `diffusers` models

View File

@@ -77,7 +77,7 @@ machine. To test, open up a terminal window and issue the following
command:
```
rocm-smi
rocminfo
```
If you get a table labeled "ROCm System Management Interface" the
@@ -95,9 +95,17 @@ recent version of Ubuntu, 22.04. However, this [community-contributed
recipe](https://novaspirit.github.io/amdgpu-rocm-ubu22/) is reported
to work well.
After installation, please run `rocm-smi` a second time to confirm
After installation, please run `rocminfo` a second time to confirm
that the driver is present and the GPU is recognized. You may need to
do a reboot in order to load the driver.
do a reboot in order to load the driver. In addition, if you see
errors relating to your username not being a member of the `render`
group, you may fix this by adding yourself to this group with the command:
```
sudo usermod -a -G render myUserName
```
(Thanks to @EgoringKosmos for the usermod recipe.)
### Linux Install with a ROCm-docker Container

View File

@@ -11,7 +11,7 @@ The model checkpoint files ('\*.ckpt') are the Stable Diffusion
captioned images gathered from multiple sources.
Originally there was only a single Stable Diffusion weights file,
which many people named `model.ckpt`. Now there are dozens or more
which many people named `model.ckpt`. Now there are hundreds
that have been fine-tuned to provide particular styles, genres, or
other features. In addition, there are several new formats that
improve on the original checkpoint format: a `.safetensors` format
@@ -29,9 +29,10 @@ and performance are being made at a rapid pace. Among other features
is the ability to download and install a `diffusers` model just by
providing its HuggingFace repository ID.
While InvokeAI will continue to support `.ckpt` and `.safetensors`
While InvokeAI will continue to support legacy `.ckpt` and `.safetensors`
models for the near future, these are deprecated and support will
likely be withdrawn at some point in the not-too-distant future.
be withdrawn in version 3.0, after which all legacy models will be
converted into diffusers at the time they are loaded.
This manual will guide you through installing and configuring model
weight files and converting legacy `.ckpt` and `.safetensors` files
@@ -89,15 +90,18 @@ aware that CIVITAI hosts many models that generate NSFW content.
!!! note
InvokeAI 2.3.x does not support directly importing and
running Stable Diffusion version 2 checkpoint models. You may instead
convert them into `diffusers` models using the conversion methods
described below.
running Stable Diffusion version 2 checkpoint models. If you
try to import them, they will be automatically
converted into `diffusers` models on the fly. This adds about 20s
to loading time. To avoid this overhead, you are encouraged to
use one of the conversion methods described below to convert them
permanently.
## Installation
There are multiple ways to install and manage models:
1. The `invokeai-configure` script which will download and install them for you.
1. The `invokeai-model-install` script which will download and install them for you.
2. The command-line tool (CLI) has commands that allow you to import, configure and modify
model files.
@@ -105,14 +109,41 @@ There are multiple ways to install and manage models:
3. The web interface (WebUI) has a GUI for importing and managing
models.
### Installation via `invokeai-configure`
### Installation via `invokeai-model-install`
From the `invoke` launcher, choose option (6) "re-run the configure
script to download new models." This will launch the same script that
prompted you to select models at install time. You can use this to add
models that you skipped the first time around. It is all right to
specify a model that was previously downloaded; the script will just
confirm that the files are complete.
From the `invoke` launcher, choose option (5) "Download and install
models." This will launch the same script that prompted you to select
models at install time. You can use this to add models that you
skipped the first time around. It is all right to specify a model that
was previously downloaded; the script will just confirm that the files
are complete.
This script allows you to load third-party models. Look for a large text
entry box labeled "IMPORT LOCAL AND REMOTE MODELS." In this box, you
can cut and paste one or more of any of the following:
1. A URL that points to a downloadable .ckpt or .safetensors file.
2. A file path pointing to a .ckpt or .safetensors file.
3. A diffusers model repo_id (from HuggingFace) in the format
"owner/repo_name".
4. A directory path pointing to a diffusers model directory.
5. A directory path pointing to a directory containing a bunch of
.ckpt and .safetensors files. All will be imported.
You can enter multiple items into the textbox, each one on a separate
line. You can paste into the textbox using ctrl-shift-V or by dragging
and dropping a file/directory from the desktop into the box.
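For example, the box might contain entries like these (the URL, paths and repository ID below are illustrative only):
```bash
https://example.org/sd_models/martians.safetensors
C:\Users\fred\Downloads\fantasy-landscape.ckpt
runwayml/stable-diffusion-v1-5
/home/fred/checkpoints/
```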
The script also lets you designate a directory that will be scanned
for new model files each time InvokeAI starts up. These models will be
added automatically.
Lastly, the script gives you a checkbox option to convert legacy models
into diffusers, or to run the legacy model directly. If you choose to
convert, the original .ckpt/.safetensors file will **not** be deleted,
but a new diffusers directory will be created, using twice your disk
space. However, the diffusers version will load faster, and will be
compatible with InvokeAI 3.0.
### Installation via the CLI
@@ -144,19 +175,15 @@ invoke> !import_model https://example.org/sd_models/martians.safetensors
For this to work, the URL must not be password-protected. Otherwise
you will receive a 404 error.
When you import a legacy model, the CLI will first ask you what type
of model this is. You can indicate whether it is a model based on
Stable Diffusion 1.x (1.4 or 1.5), one based on Stable Diffusion 2.x,
or a 1.x inpainting model. Be careful to indicate the correct model
type, or it will not load correctly. You can correct the model type
after the fact using the `!edit_model` command.
The system will then ask you a few other questions about the model,
including what size image it was trained on (usually 512x512), what
name and description you wish to use for it, and whether you would
like to install a custom VAE (variable autoencoder) file for the
model. For recent models, the answer to the VAE question is usually
"no," but it won't hurt to answer "yes".
When you import a legacy model, the CLI will try to figure out what
type of model it is and select the correct load configuration file.
However, one thing it can't do is to distinguish between Stable
Diffusion 2.x models trained on 512x512 vs 768x768 images. In this
case, the CLI will pop up a menu of choices, asking you to select
which type of model it is. Please consult the model documentation to
identify the correct answer, as loading with the wrong configuration
will lead to black images. You can correct the model type after the
fact using the `!edit_model` command.
After importing, the model will load. If this is successful, you will
be asked if you want to keep the model loaded in memory to start
@@ -211,129 +238,6 @@ description for the model, whether to make this the default model that
is loaded at InvokeAI startup time, and whether to replace its
VAE. Generally the answer to the latter question is "no".
### Specifying a configuration file for legacy checkpoints
Some checkpoint files come with instructions to use a specific .yaml
configuration file. For InvokeAI to load this file correctly, please put
the config file in the same directory as the corresponding `.ckpt` or
`.safetensors` file and make sure the file has the same basename as
the weights file. Here is an example:
```bash
wonderful-model-v2.ckpt
wonderful-model-v2.yaml
```
Similarly, to use a custom VAE, name the VAE like this:
```bash
wonderful-model-v2.vae.pt
```
### Converting legacy models into `diffusers`
The CLI `!convert_model` command will convert a `.safetensors` or `.ckpt`
model file into `diffusers` and install it. This will enable the model
to load and run faster without loss of image quality.
The usage is identical to `!import_model`. You may point the command
to either a downloaded model file on disk, or to a (non-password
protected) URL:
```bash
invoke> !convert_model C:/Users/fred/Downloads/martians.safetensors
```
After a successful conversion, the CLI will offer you the option of
deleting the original `.ckpt` or `.safetensors` file.
### Optimizing a previously-installed model
Lastly, if you have previously installed a `.ckpt` or `.safetensors`
file and wish to convert it into a `diffusers` model, you can do this
without re-downloading and converting the original file using the
`!optimize_model` command. Simply pass the short name of an existing
installed model:
```bash
invoke> !optimize_model martians-v1.0
```
The model will be converted into `diffusers` format and replace the
previously installed version. You will again be offered the
opportunity to delete the original `.ckpt` or `.safetensors` file.
### Related CLI Commands
There are a whole series of additional model management commands in
the CLI that you can read about in [Command-Line
Interface](../features/CLI.md). These include:
* `!models` - List all installed models
* `!switch <model name>` - Switch to the indicated model
* `!edit_model <model name>` - Edit the indicated model to change its name, description or other properties
* `!del_model <model name>` - Delete the indicated model
### Manually editing `configs/models.yaml`
If you are comfortable with a text editor then you may simply edit `models.yaml`
directly.
You will need to download the desired `.ckpt/.safetensors` file and
place it somewhere on your machine's filesystem. Alternatively, for a
`diffusers` model, record the repo_id or download the whole model
directory. Then using a **text** editor (e.g. the Windows Notepad
application), open the file `configs/models.yaml`, and add a new
stanza that follows this model:
#### A legacy model
A legacy `.ckpt` or `.safetensors` entry will look like this:
```yaml
arabian-nights-1.0:
  description: A great fine-tune in Arabian Nights style
  weights: ./path/to/arabian-nights-1.0.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  format: ckpt
  width: 512
  height: 512
  default: false
```
Note that `format` is `ckpt` for both `.ckpt` and `.safetensors` files.
#### A diffusers model
A stanza for a `diffusers` model will look like this for a HuggingFace
model with a repository ID:
```yaml
arabian-nights-1.1:
  description: An even better fine-tune of the Arabian Nights
  repo_id: captahab/arabian-nights-1.1
  format: diffusers
  default: true
```
And for a downloaded directory:
```yaml
arabian-nights-1.1:
  description: An even better fine-tune of the Arabian Nights
  path: /path/to/captahab-arabian-nights-1.1
  format: diffusers
  default: true
```
There is additional syntax for indicating an external VAE to use with
this model. See `INITIAL_MODELS.yaml` and `models.yaml` for examples.
After you save the modified `models.yaml` file relaunch
`invokeai`. The new model will now be available for your use.
### Installation via the WebUI
To access the WebUI Model Manager, click on the button that looks like
@@ -413,3 +317,143 @@ And here is what the same argument looks like in `invokeai.init`:
--no-nsfw_checker
--autoconvert /home/fred/stable-diffusion-checkpoints
```
### Specifying a configuration file for legacy checkpoints
Some checkpoint files come with instructions to use a specific .yaml
configuration file. For InvokeAI to load this file correctly, please put
the config file in the same directory as the corresponding `.ckpt` or
`.safetensors` file and make sure the file has the same basename as
the model file. Here is an example:
```bash
wonderful-model-v2.ckpt
wonderful-model-v2.yaml
```
This is not needed for `diffusers` models, which come with their own
pre-packaged configuration.
### Specifying a custom VAE file for legacy checkpoints
To associate a custom VAE with a legacy file, place the VAE file in
the same directory as the corresponding `.ckpt` or
`.safetensors` file and make sure the file has the same basename as
the model file. Use the suffix `.vae.pt` for VAE checkpoint files, and
`.vae.safetensors` for VAE safetensors files. There is no requirement
that both the model and the VAE follow the same format.
Example:
```bash
wonderful-model-v2.pt
wonderful-model-v2.vae.safetensors
```
### Converting legacy models into `diffusers`
The CLI `!convert_model` command will convert a `.safetensors` or `.ckpt`
model file into `diffusers` and install it. This will enable the model
to load and run faster without loss of image quality.
The usage is identical to `!import_model`. You may point the command
to either a downloaded model file on disk, or to a (non-password
protected) URL:
```bash
invoke> !convert_model C:/Users/fred/Downloads/martians.safetensors
```
After a successful conversion, the CLI will offer you the option of
deleting the original `.ckpt` or `.safetensors` file.
### Optimizing a previously-installed model
Lastly, if you have previously installed a `.ckpt` or `.safetensors`
file and wish to convert it into a `diffusers` model, you can do this
without re-downloading and converting the original file using the
`!optimize_model` command. Simply pass the short name of an existing
installed model:
```bash
invoke> !optimize_model martians-v1.0
```
The model will be converted into `diffusers` format and replace the
previously installed version. You will again be offered the
opportunity to delete the original `.ckpt` or `.safetensors` file.
Alternatively you can use the WebUI's model manager to handle diffusers
optimization. Select the legacy model you wish to convert, and then
look for a button labeled "Convert to Diffusers" in the upper right of
the window.
### Related CLI Commands
There are a whole series of additional model management commands in
the CLI that you can read about in [Command-Line
Interface](../features/CLI.md). These include:
* `!models` - List all installed models
* `!switch <model name>` - Switch to the indicated model
* `!edit_model <model name>` - Edit the indicated model to change its name, description or other properties
* `!del_model <model name>` - Delete the indicated model
### Manually editing `configs/models.yaml`
If you are comfortable with a text editor then you may simply edit `models.yaml`
directly.
You will need to download the desired `.ckpt/.safetensors` file and
place it somewhere on your machine's filesystem. Alternatively, for a
`diffusers` model, record the repo_id or download the whole model
directory. Then using a **text** editor (e.g. the Windows Notepad
application), open the file `configs/models.yaml`, and add a new
stanza that follows this model:
#### A legacy model
A legacy `.ckpt` or `.safetensors` entry will look like this:
```yaml
arabian-nights-1.0:
  description: A great fine-tune in Arabian Nights style
  weights: ./path/to/arabian-nights-1.0.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  format: ckpt
  width: 512
  height: 512
  default: false
```
Note that `format` is `ckpt` for both `.ckpt` and `.safetensors` files.
#### A diffusers model
A stanza for a `diffusers` model will look like this for a HuggingFace
model with a repository ID:
```yaml
arabian-nights-1.1:
  description: An even better fine-tune of the Arabian Nights
  repo_id: captahab/arabian-nights-1.1
  format: diffusers
  default: true
```
And for a downloaded directory:
```yaml
arabian-nights-1.1:
  description: An even better fine-tune of the Arabian Nights
  path: /path/to/captahab-arabian-nights-1.1
  format: diffusers
  default: true
```
There is additional syntax for indicating an external VAE to use with
this model. See `INITIAL_MODELS.yaml` and `models.yaml` for examples.
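For instance, a legacy stanza might add a `vae:` line along these lines (the path is illustrative; consult `INITIAL_MODELS.yaml` for the exact syntax used by your installation):
```yaml
arabian-nights-1.0:
  description: A great fine-tune in Arabian Nights style
  weights: ./path/to/arabian-nights-1.0.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  format: ckpt
  width: 512
  height: 512
  default: false
```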
After you save the modified `models.yaml` file relaunch
`invokeai`. The new model will now be available for your use.

View File

@@ -23,14 +23,16 @@ We thank them for all of their time and hard work.
* @damian0815 - Attention Systems and Gameplay Engineer
* @mauwii (Matthias Wild) - Continuous integration and product maintenance engineer
* @Netsvetaev (Artur Netsvetaev) - UI/UX Developer
* @tildebyte - General gadfly and resident (self-appointed) know-it-all
* @keturn - Lead for Diffusers port
* @ebr (Eugene Brodsky) - Cloud/DevOps/Software engineer; your friendly neighbourhood cluster-autoscaler
* @jpphoto (Jonathan Pollack) - Inference and rendering engine optimization
* @genomancer (Gregg Helt) - Model training and merging
* @gogurtenjoyer - User support and testing
* @whosawwhatsis - User support and testing
## **Contributions by**
- [tildebyte](https://github.com/tildebyte)
- [Sean McLellan](https://github.com/Oceanswave)
- [Kevin Gibbons](https://github.com/bakkot)
- [Tesseract Cat](https://github.com/TesseractCat)
@@ -78,6 +80,7 @@ We thank them for all of their time and hard work.
- [psychedelicious](https://github.com/psychedelicious)
- [damian0815](https://github.com/damian0815)
- [Eugene Brodsky](https://github.com/ebr)
- [Statcomm](https://github.com/statcomm)
## **Original CompVis Authors**

View File

@@ -144,8 +144,8 @@ class Installer:
from plumbum import FG, local
pip = local[get_pip_from_venv(venv_dir)]
pip[ "install", "--upgrade", "pip"] & FG
python = local[get_python_from_venv(venv_dir)]
python[ "-m", "pip", "install", "--upgrade", "pip"] & FG
return venv_dir
@@ -242,8 +242,8 @@ class InvokeAiInstance:
from plumbum import FG, local
# Note that we're installing pinned versions of torch and
# torchvision here, which may not correspond to what is
# in pyproject.toml. This is a hack to prevent torch 2.0 from
# torchvision here, which *should* correspond to what is
# in pyproject.toml. This is to prevent torch 2.0 from
# being installed and immediately uninstalled and replaced with 1.13
pip = local[self.pip]
@@ -252,7 +252,7 @@ class InvokeAiInstance:
"install",
"--require-virtualenv",
"torch~=1.13.1",
"torchvision>=0.14.1",
"torchvision~=0.14.1",
"--force-reinstall",
"--find-links" if find_links is not None else None,
find_links,
@@ -384,7 +384,7 @@ class InvokeAiInstance:
os.chmod(dest, 0o0755)
if OS == "Linux":
shutil.copy(Path(__file__).parent / '..' / "templates" / "dialogrc", self.runtime / '.dialogrc')
shutil.copy(Path(__file__).parents[1] / "templates" / "dialogrc", self.runtime / '.dialogrc')
def update(self):
pass
@@ -412,6 +412,22 @@ def get_pip_from_venv(venv_path: Path) -> str:
return str(venv_path.expanduser().resolve() / pip)
def get_python_from_venv(venv_path: Path) -> str:
"""
Given a path to a virtual environment, get the absolute path to the `python` executable
in a cross-platform fashion. Does not validate that the python executable
actually exists in the virtualenv.
:param venv_path: Path to the virtual environment
:type venv_path: Path
:return: Absolute path to the python executable
:rtype: str
"""
python = "Scripts\python.exe" if OS == "Windows" else "bin/python"
return str(venv_path.expanduser().resolve() / python)
def set_sys_path(venv_path: Path) -> None:
"""
Given a path to a virtual environment, set the sys.path, in a cross-platform fashion,

View File

@@ -25,15 +25,23 @@ from invokeai.backend.modules.parameters import parameters_to_command
import invokeai.frontend.dist as frontend
from ldm.generate import Generate
from ldm.invoke.args import Args, APP_ID, APP_VERSION, calculate_init_img_hash
from ldm.invoke.conditioning import get_tokens_for_prompt_object, get_prompt_structure, split_weighted_subprompts, \
get_tokenizer
from ldm.invoke.concepts_lib import HuggingFaceConceptsLibrary
from ldm.invoke.conditioning import (
get_tokens_for_prompt_object,
get_prompt_structure,
split_weighted_subprompts,
)
from ldm.invoke.generator.diffusers_pipeline import PipelineIntermediateState
from ldm.invoke.generator.inpaint import infill_methods
from ldm.invoke.globals import Globals, global_converted_ckpts_dir
from ldm.invoke.globals import (
Globals,
global_converted_ckpts_dir,
global_models_dir,
)
from ldm.invoke.pngwriter import PngWriter, retrieve_metadata
from compel.prompt_parser import Blend
from ldm.invoke.globals import global_models_dir
from ldm.invoke.merge_diffusers import merge_diffusion_models
from ldm.modules.lora_manager import LoraManager
# Loading Arguments
opt = Args()
@@ -193,8 +201,7 @@ class InvokeAIWebServer:
(width, height) = pil_image.size
thumbnail_path = save_thumbnail(
pil_image, os.path.basename(
file_path), self.thumbnail_image_path
pil_image, os.path.basename(file_path), self.thumbnail_image_path
)
response = {
@@ -224,7 +231,7 @@ class InvokeAIWebServer:
server="flask_socketio",
width=1600,
height=1000,
port=self.port
port=self.port,
).run()
except KeyboardInterrupt:
import sys
@@ -265,16 +272,14 @@ class InvokeAIWebServer:
# location for "finished" images
self.result_path = args.outdir
# temporary path for intermediates
self.intermediate_path = os.path.join(
self.result_path, "intermediates/")
self.intermediate_path = os.path.join(self.result_path, "intermediates/")
# path for user-uploaded init images and masks
self.init_image_path = os.path.join(self.result_path, "init-images/")
self.mask_image_path = os.path.join(self.result_path, "mask-images/")
# path for temp images e.g. gallery generations which are not committed
self.temp_image_path = os.path.join(self.result_path, "temp-images/")
# path for thumbnail images
self.thumbnail_image_path = os.path.join(
self.result_path, "thumbnails/")
self.thumbnail_image_path = os.path.join(self.result_path, "thumbnails/")
# txt log
self.log_path = os.path.join(self.result_path, "invoke_log.txt")
# make all output paths
@@ -299,21 +304,22 @@ class InvokeAIWebServer:
config["infill_methods"] = infill_methods()
socketio.emit("systemConfig", config)
@socketio.on('searchForModels')
@socketio.on("searchForModels")
def handle_search_models(search_folder: str):
try:
if not search_folder:
socketio.emit(
"foundModels",
{'search_folder': None, 'found_models': None},
{"search_folder": None, "found_models": None},
)
else:
search_folder, found_models = self.generate.model_manager.search_models(
search_folder)
(
search_folder,
found_models,
) = self.generate.model_manager.search_models(search_folder)
socketio.emit(
"foundModels",
{'search_folder': search_folder,
'found_models': found_models},
{"search_folder": search_folder, "found_models": found_models},
)
except Exception as e:
self.handle_exceptions(e)
@@ -322,11 +328,11 @@ class InvokeAIWebServer:
@socketio.on("addNewModel")
def handle_add_model(new_model_config: dict):
try:
model_name = new_model_config['name']
del new_model_config['name']
model_name = new_model_config["name"]
del new_model_config["name"]
model_attributes = new_model_config
if len(model_attributes['vae']) == 0:
del model_attributes['vae']
if len(model_attributes["vae"]) == 0:
del model_attributes["vae"]
update = False
current_model_list = self.generate.model_manager.list_models()
if model_name in current_model_list:
@@ -335,14 +341,20 @@ class InvokeAIWebServer:
print(f">> Adding New Model: {model_name}")
self.generate.model_manager.add_model(
model_name=model_name, model_attributes=model_attributes, clobber=True)
model_name=model_name,
model_attributes=model_attributes,
clobber=True,
)
self.generate.model_manager.commit(opt.conf)
new_model_list = self.generate.model_manager.list_models()
socketio.emit(
"newModelAdded",
{"new_model_name": model_name,
"model_list": new_model_list, 'update': update},
{
"new_model_name": model_name,
"model_list": new_model_list,
"update": update,
},
)
print(f">> New Model Added: {model_name}")
except Exception as e:
@@ -357,8 +369,10 @@ class InvokeAIWebServer:
updated_model_list = self.generate.model_manager.list_models()
socketio.emit(
"modelDeleted",
{"deleted_model_name": model_name,
"model_list": updated_model_list},
{
"deleted_model_name": model_name,
"model_list": updated_model_list,
},
)
print(f">> Model Deleted: {model_name}")
except Exception as e:
@@ -383,41 +397,48 @@ class InvokeAIWebServer:
except Exception as e:
self.handle_exceptions(e)
@socketio.on('convertToDiffusers')
@socketio.on("convertToDiffusers")
def convert_to_diffusers(model_to_convert: dict):
try:
if (model_info := self.generate.model_manager.model_info(model_name=model_to_convert['model_name'])):
if 'weights' in model_info:
ckpt_path = Path(model_info['weights'])
original_config_file = Path(model_info['config'])
model_name = model_to_convert['model_name']
model_description = model_info['description']
if model_info := self.generate.model_manager.model_info(
model_name=model_to_convert["model_name"]
):
if "weights" in model_info:
ckpt_path = Path(model_info["weights"])
original_config_file = Path(model_info["config"])
model_name = model_to_convert["model_name"]
model_description = model_info["description"]
else:
self.socketio.emit(
"error", {"message": "Model is not a valid checkpoint file"})
"error", {"message": "Model is not a valid checkpoint file"}
)
else:
self.socketio.emit(
"error", {"message": "Could not retrieve model info."})
"error", {"message": "Could not retrieve model info."}
)
if not ckpt_path.is_absolute():
ckpt_path = Path(Globals.root, ckpt_path)
if original_config_file and not original_config_file.is_absolute():
original_config_file = Path(
Globals.root, original_config_file)
original_config_file = Path(Globals.root, original_config_file)
diffusers_path = Path(
ckpt_path.parent.absolute(),
f'{model_name}_diffusers'
ckpt_path.parent.absolute(), f"{model_name}_diffusers"
)
if model_to_convert['save_location'] == 'root':
if model_to_convert["save_location"] == "root":
diffusers_path = Path(
global_converted_ckpts_dir(), f'{model_name}_diffusers')
global_converted_ckpts_dir(), f"{model_name}_diffusers"
)
if model_to_convert['save_location'] == 'custom' and model_to_convert['custom_location'] is not None:
if (
model_to_convert["save_location"] == "custom"
and model_to_convert["custom_location"] is not None
):
diffusers_path = Path(
model_to_convert['custom_location'], f'{model_name}_diffusers')
model_to_convert["custom_location"], f"{model_name}_diffusers"
)
if diffusers_path.exists():
shutil.rmtree(diffusers_path)
@@ -435,54 +456,91 @@ class InvokeAIWebServer:
new_model_list = self.generate.model_manager.list_models()
socketio.emit(
"modelConverted",
{"new_model_name": model_name,
"model_list": new_model_list, 'update': True},
{
"new_model_name": model_name,
"model_list": new_model_list,
"update": True,
},
)
print(f">> Model Converted: {model_name}")
except Exception as e:
self.handle_exceptions(e)
@socketio.on('mergeDiffusersModels')
@socketio.on("mergeDiffusersModels")
def merge_diffusers_models(model_merge_info: dict):
try:
models_to_merge = model_merge_info['models_to_merge']
models_to_merge = model_merge_info["models_to_merge"]
model_ids_or_paths = [
self.generate.model_manager.model_name_or_path(x) for x in models_to_merge]
self.generate.model_manager.model_name_or_path(x)
for x in models_to_merge
]
merged_pipe = merge_diffusion_models(
model_ids_or_paths, model_merge_info['alpha'], model_merge_info['interp'], model_merge_info['force'])
model_ids_or_paths,
model_merge_info["alpha"],
model_merge_info["interp"],
model_merge_info["force"],
)
dump_path = global_models_dir() / 'merged_models'
if model_merge_info['model_merge_save_path'] is not None:
dump_path = Path(model_merge_info['model_merge_save_path'])
dump_path = global_models_dir() / "merged_models"
if model_merge_info["model_merge_save_path"] is not None:
dump_path = Path(model_merge_info["model_merge_save_path"])
os.makedirs(dump_path, exist_ok=True)
dump_path = dump_path / model_merge_info['merged_model_name']
dump_path = dump_path / model_merge_info["merged_model_name"]
merged_pipe.save_pretrained(dump_path, safe_serialization=1)
merged_model_config = dict(
model_name=model_merge_info['merged_model_name'],
model_name=model_merge_info["merged_model_name"],
description=f'Merge of models {", ".join(models_to_merge)}',
commit_to_conf=opt.conf
commit_to_conf=opt.conf,
)
if vae := self.generate.model_manager.config[models_to_merge[0]].get("vae", None):
print(
f">> Using configured VAE assigned to {models_to_merge[0]}")
if vae := self.generate.model_manager.config[models_to_merge[0]].get(
"vae", None
):
print(f">> Using configured VAE assigned to {models_to_merge[0]}")
merged_model_config.update(vae=vae)
self.generate.model_manager.import_diffuser_model(
dump_path, **merged_model_config)
dump_path, **merged_model_config
)
new_model_list = self.generate.model_manager.list_models()
socketio.emit(
"modelsMerged",
{"merged_models": models_to_merge,
"merged_model_name": model_merge_info['merged_model_name'],
"model_list": new_model_list, 'update': True},
{
"merged_models": models_to_merge,
"merged_model_name": model_merge_info["merged_model_name"],
"model_list": new_model_list,
"update": True,
},
)
print(f">> Models Merged: {models_to_merge}")
print(
f">> New Model Added: {model_merge_info['merged_model_name']}")
print(f">> New Model Added: {model_merge_info['merged_model_name']}")
except Exception as e:
self.handle_exceptions(e)
@socketio.on("getLoraModels")
def get_lora_models():
try:
model = self.generate.model
lora_mgr = LoraManager(model)
loras = lora_mgr.list_compatible_loras()
found_loras = []
for lora in sorted(loras, key=str.casefold):
found_loras.append({"name":lora,"location":str(loras[lora])})
socketio.emit("foundLoras", found_loras)
except Exception as e:
self.handle_exceptions(e)
@socketio.on("getTextualInversionTriggers")
def get_ti_triggers():
try:
local_triggers = self.generate.model.textual_inversion_manager.get_all_trigger_strings()
locals = [{'name': x} for x in sorted(local_triggers, key=str.casefold)]
concepts = HuggingFaceConceptsLibrary().list_concepts(minimum_likes=5)
concepts = [{'name': f'<{x}>'} for x in sorted(concepts, key=str.casefold) if f'<{x}>' not in local_triggers]
socketio.emit("foundTextualInversionTriggers", {'local_triggers': locals, 'huggingface_concepts': concepts})
except Exception as e:
self.handle_exceptions(e)
@@ -500,7 +558,8 @@ class InvokeAIWebServer:
os.remove(thumbnail_path)
except Exception as e:
socketio.emit(
"error", {"message": f"Unable to delete {f}: {str(e)}"})
"error", {"message": f"Unable to delete {f}: {str(e)}"}
)
pass
socketio.emit("tempFolderEmptied")
@@ -511,8 +570,7 @@ class InvokeAIWebServer:
def save_temp_image_to_gallery(url):
try:
image_path = self.get_image_path_from_url(url)
new_path = os.path.join(
self.result_path, os.path.basename(image_path))
new_path = os.path.join(self.result_path, os.path.basename(image_path))
shutil.copy2(image_path, new_path)
if os.path.splitext(new_path)[1] == ".png":
@@ -525,8 +583,7 @@ class InvokeAIWebServer:
(width, height) = pil_image.size
thumbnail_path = save_thumbnail(
pil_image, os.path.basename(
new_path), self.thumbnail_image_path
pil_image, os.path.basename(new_path), self.thumbnail_image_path
)
image_array = [
@@ -585,8 +642,7 @@ class InvokeAIWebServer:
(width, height) = pil_image.size
thumbnail_path = save_thumbnail(
pil_image, os.path.basename(
path), self.thumbnail_image_path
pil_image, os.path.basename(path), self.thumbnail_image_path
)
image_array.append(
@@ -605,7 +661,8 @@ class InvokeAIWebServer:
)
except Exception as e:
socketio.emit(
"error", {"message": f"Unable to load {path}: {str(e)}"})
"error", {"message": f"Unable to load {path}: {str(e)}"}
)
pass
socketio.emit(
@@ -655,8 +712,7 @@ class InvokeAIWebServer:
(width, height) = pil_image.size
thumbnail_path = save_thumbnail(
pil_image, os.path.basename(
path), self.thumbnail_image_path
pil_image, os.path.basename(path), self.thumbnail_image_path
)
image_array.append(
@@ -676,7 +732,8 @@ class InvokeAIWebServer:
except Exception as e:
print(f">> Unable to load {path}")
socketio.emit(
"error", {"message": f"Unable to load {path}: {str(e)}"})
"error", {"message": f"Unable to load {path}: {str(e)}"}
)
pass
socketio.emit(
@@ -710,10 +767,9 @@ class InvokeAIWebServer:
printable_parameters["init_mask"][:64] + "..."
)
print(
f'\n>> Image Generation Parameters:\n\n{printable_parameters}\n')
print(f'>> ESRGAN Parameters: {esrgan_parameters}')
print(f'>> Facetool Parameters: {facetool_parameters}')
print(f"\n>> Image Generation Parameters:\n\n{printable_parameters}\n")
print(f">> ESRGAN Parameters: {esrgan_parameters}")
print(f">> Facetool Parameters: {facetool_parameters}")
self.generate_images(
generation_parameters,
@@ -750,11 +806,9 @@ class InvokeAIWebServer:
if postprocessing_parameters["type"] == "esrgan":
progress.set_current_status("common.statusUpscalingESRGAN")
elif postprocessing_parameters["type"] == "gfpgan":
progress.set_current_status(
"common.statusRestoringFacesGFPGAN")
progress.set_current_status("common.statusRestoringFacesGFPGAN")
elif postprocessing_parameters["type"] == "codeformer":
progress.set_current_status(
"common.statusRestoringFacesCodeFormer")
progress.set_current_status("common.statusRestoringFacesCodeFormer")
socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
@@ -919,8 +973,7 @@ class InvokeAIWebServer:
init_img_url = generation_parameters["init_img"]
original_bounding_box = generation_parameters["bounding_box"].copy(
)
original_bounding_box = generation_parameters["bounding_box"].copy()
initial_image = dataURL_to_image(
generation_parameters["init_img"]
@@ -997,8 +1050,9 @@ class InvokeAIWebServer:
elif generation_parameters["generation_mode"] == "img2img":
init_img_url = generation_parameters["init_img"]
init_img_path = self.get_image_path_from_url(init_img_url)
generation_parameters["init_img"] = Image.open(
init_img_path).convert('RGB')
generation_parameters["init_img"] = Image.open(init_img_path).convert(
"RGB"
)
def image_progress(sample, step):
if self.canceled.is_set():
@@ -1058,8 +1112,7 @@ class InvokeAIWebServer:
)
if generation_parameters["progress_latents"]:
image = self.generate.sample_to_lowres_estimated_image(
sample)
image = self.generate.sample_to_lowres_estimated_image(sample)
(width, height) = image.size
width *= 8
height *= 8
@@ -1078,8 +1131,7 @@ class InvokeAIWebServer:
},
)
self.socketio.emit(
"progressUpdate", progress.to_formatted_dict())
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
def image_done(image, seed, first_seed, attention_maps_image=None):
@@ -1106,8 +1158,7 @@ class InvokeAIWebServer:
progress.set_current_status("common.statusGenerationComplete")
self.socketio.emit(
"progressUpdate", progress.to_formatted_dict())
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
all_parameters = generation_parameters
@@ -1118,8 +1169,7 @@ class InvokeAIWebServer:
and all_parameters["variation_amount"] > 0
):
first_seed = first_seed or seed
this_variation = [
[seed, all_parameters["variation_amount"]]]
this_variation = [[seed, all_parameters["variation_amount"]]]
all_parameters["with_variations"] = (
prior_variations + this_variation
)
@@ -1135,14 +1185,13 @@ class InvokeAIWebServer:
if esrgan_parameters:
progress.set_current_status("common.statusUpscaling")
progress.set_current_status_has_steps(False)
self.socketio.emit(
"progressUpdate", progress.to_formatted_dict())
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
image = self.esrgan.process(
image=image,
upsampler_scale=esrgan_parameters["level"],
denoise_str=esrgan_parameters['denoise_str'],
denoise_str=esrgan_parameters["denoise_str"],
strength=esrgan_parameters["strength"],
seed=seed,
)
@@ -1150,7 +1199,7 @@ class InvokeAIWebServer:
postprocessing = True
all_parameters["upscale"] = [
esrgan_parameters["level"],
esrgan_parameters['denoise_str'],
esrgan_parameters["denoise_str"],
esrgan_parameters["strength"],
]
@@ -1159,15 +1208,14 @@ class InvokeAIWebServer:
if facetool_parameters:
if facetool_parameters["type"] == "gfpgan":
progress.set_current_status(
"common.statusRestoringFacesGFPGAN")
progress.set_current_status("common.statusRestoringFacesGFPGAN")
elif facetool_parameters["type"] == "codeformer":
progress.set_current_status(
"common.statusRestoringFacesCodeFormer")
"common.statusRestoringFacesCodeFormer"
)
progress.set_current_status_has_steps(False)
self.socketio.emit(
"progressUpdate", progress.to_formatted_dict())
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
if facetool_parameters["type"] == "gfpgan":
@@ -1197,8 +1245,7 @@ class InvokeAIWebServer:
all_parameters["facetool_type"] = facetool_parameters["type"]
progress.set_current_status("common.statusSavingImage")
self.socketio.emit(
"progressUpdate", progress.to_formatted_dict())
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
# restore the stashed URLs and discard the paths; we are about to send the result to the client
@@ -1215,8 +1262,7 @@ class InvokeAIWebServer:
if generation_parameters["generation_mode"] == "unifiedCanvas":
all_parameters["bounding_box"] = original_bounding_box
metadata = self.parameters_to_generated_image_metadata(
all_parameters)
metadata = self.parameters_to_generated_image_metadata(all_parameters)
command = parameters_to_command(all_parameters)
@@ -1246,22 +1292,27 @@ class InvokeAIWebServer:
if progress.total_iterations > progress.current_iteration:
progress.set_current_step(1)
progress.set_current_status(
"common.statusIterationComplete")
progress.set_current_status("common.statusIterationComplete")
progress.set_current_status_has_steps(False)
else:
progress.mark_complete()
self.socketio.emit(
"progressUpdate", progress.to_formatted_dict())
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
parsed_prompt, _ = get_prompt_structure(
generation_parameters["prompt"])
tokens = None if type(parsed_prompt) is Blend else \
get_tokens_for_prompt_object(get_tokenizer(self.generate.model), parsed_prompt)
attention_maps_image_base64_url = None if attention_maps_image is None \
    else image_to_dataURL(attention_maps_image)
parsed_prompt, _ = get_prompt_structure(generation_parameters["prompt"])
tokens = (
None
if type(parsed_prompt) is Blend
else get_tokens_for_prompt_object(
self.generate.model.tokenizer, parsed_prompt
)
)
attention_maps_image_base64_url = (
None
if attention_maps_image is None
else image_to_dataURL(attention_maps_image)
)
self.socketio.emit(
"generationResult",
@@ -1293,7 +1344,7 @@ class InvokeAIWebServer:
self.generate.prompt2image(
**generation_parameters,
step_callback=diffusers_step_callback_adapter,
image_callback=image_done
image_callback=image_done,
)
except KeyboardInterrupt:
@@ -1416,8 +1467,7 @@ class InvokeAIWebServer:
self, parameters, original_image_path
):
try:
current_metadata = retrieve_metadata(
original_image_path)["sd-metadata"]
current_metadata = retrieve_metadata(original_image_path)["sd-metadata"]
postprocessing_metadata = {}
"""
@@ -1457,8 +1507,7 @@ class InvokeAIWebServer:
postprocessing_metadata
)
else:
current_metadata["image"]["postprocessing"] = [
postprocessing_metadata]
current_metadata["image"]["postprocessing"] = [postprocessing_metadata]
return current_metadata
@@ -1554,8 +1603,7 @@ class InvokeAIWebServer:
)
elif "thumbnails" in url:
return os.path.abspath(
os.path.join(self.thumbnail_image_path,
os.path.basename(url))
os.path.join(self.thumbnail_image_path, os.path.basename(url))
)
else:
return os.path.abspath(
@@ -1601,7 +1649,7 @@ class InvokeAIWebServer:
except Exception as e:
self.handle_exceptions(e)
def handle_exceptions(self, exception, emit_key: str = 'error'):
def handle_exceptions(self, exception, emit_key: str = "error"):
self.socketio.emit(emit_key, {"message": (str(exception))})
print("\n")
traceback.print_exc()

View File

@@ -80,7 +80,8 @@ trinart-2.0:
repo_id: stabilityai/sd-vae-ft-mse
recommended: False
waifu-diffusion-1.4:
description: An SD-1.5 model trained on 680k anime/manga-style images (2.13 GB)
description: An SD-2.1 model trained on 5.4M anime/manga-style images (4.27 GB)
revision: main
repo_id: hakurei/waifu-diffusion
format: diffusers
vae:

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -5,8 +5,8 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>InvokeAI - A Stable Diffusion Toolkit</title>
<link rel="shortcut icon" type="icon" href="./assets/favicon-0d253ced.ico" />
<script type="module" crossorigin src="./assets/index-c09cf9ca.js"></script>
<link rel="stylesheet" href="./assets/index-14cb2922.css">
<script type="module" crossorigin src="./assets/index-b12e648e.js"></script>
<link rel="stylesheet" href="./assets/index-2ab0eb58.css">
</head>
<body>

View File

@@ -327,6 +327,13 @@
"addModel": "Add Model",
"updateModel": "Update Model",
"availableModels": "Available Models",
"addLora": "Add Lora",
"clearLoras": "Clear Loras",
"noLoraModels": "No Loras Found",
"addTextualInversionTrigger": "Add Textual Inversion",
"addTIToNegative": "Add To Negative",
"clearTextualInversions": "Clear Textual Inversions",
"noTextualInversionTriggers": "No Textual Inversions Found",
"search": "Search",
"load": "Load",
"active": "active",
@@ -483,6 +490,7 @@
"useCanvasBeta": "Use Canvas Beta Layout",
"enableImageDebugging": "Enable Image Debugging",
"useSlidersForAll": "Use Sliders For All Options",
"showHuggingFaceConcepts": "Show Textual Inversions from HF Concepts Library",
"resetWebUI": "Reset Web UI",
"resetWebUIDesc1": "Resetting the web UI only resets the browser's local cache of your images and remembered settings. It does not delete any images from disk.",
"resetWebUIDesc2": "If images aren't showing up in the gallery or something else isn't working, please try resetting before submitting an issue on GitHub.",

View File

@@ -15,8 +15,8 @@
"postinstall": "patch-package"
},
"dependencies": {
"@chakra-ui/icons": "^2.0.17",
"@chakra-ui/react": "^2.5.1",
"@chakra-ui/icons": "^2.0.18",
"@chakra-ui/react": "^2.5.5",
"@emotion/cache": "^11.10.5",
"@emotion/react": "^11.10.6",
"@emotion/styled": "^11.10.6",
@@ -52,6 +52,7 @@
"redux-persist": "^6.0.0",
"socket.io": "^4.6.0",
"socket.io-client": "^4.6.0",
"typescript": "^5.0.3",
"use-image": "^1.1.0",
"uuid": "^9.0.0",
"yarn": "^1.22.19"
@@ -61,8 +62,8 @@
"@types/react": "^18.0.28",
"@types/react-dom": "^18.0.11",
"@types/react-transition-group": "^4.4.5",
"@typescript-eslint/eslint-plugin": "^5.52.0",
"@typescript-eslint/parser": "^5.52.0",
"@typescript-eslint/eslint-plugin": "^5.57.0",
"@typescript-eslint/parser": "^5.57.0",
"babel-plugin-transform-imports": "^2.0.0",
"eslint": "^8.34.0",
"eslint-config-prettier": "^8.6.0",

View File

@@ -327,6 +327,13 @@
"addModel": "Add Model",
"updateModel": "Update Model",
"availableModels": "Available Models",
"addLora": "Add Lora",
"clearLoras": "Clear Loras",
"noLoraModels": "No Loras Found",
"addTextualInversionTrigger": "Add Textual Inversion",
"addTIToNegative": "Add To Negative",
"clearTextualInversions": "Clear Textual Inversions",
"noTextualInversionTriggers": "No Textual Inversions Found",
"search": "Search",
"load": "Load",
"active": "active",
@@ -483,6 +490,7 @@
"useCanvasBeta": "Use Canvas Beta Layout",
"enableImageDebugging": "Enable Image Debugging",
"useSlidersForAll": "Use Sliders For All Options",
"showHuggingFaceConcepts": "Show Textual Inversions from HF Concepts Library",
"resetWebUI": "Reset Web UI",
"resetWebUIDesc1": "Resetting the web UI only resets the browser's local cache of your images and remembered settings. It does not delete any images from disk.",
"resetWebUIDesc2": "If images aren't showing up in the gallery or something else isn't working, please try resetting before submitting an issue on GitHub.",

View File

@@ -271,6 +271,23 @@ export declare type FoundModelResponse = {
found_models: FoundModel[];
};
export declare type FoundLora = {
name: string;
location: string;
};
export declare type FoundTextualInversionTriggers = {
name: string;
location: string;
};
export declare type FoundLorasResponse = FoundLora[];
export declare type FoundTextualInversionTriggersResponse = {
local_triggers: FoundTextualInversionTriggers[];
huggingface_concepts: FoundTextualInversionTriggers[];
};
export declare type SystemStatusResponse = SystemStatus;
export declare type SystemConfigResponse = SystemConfig;

View File

@@ -52,6 +52,12 @@ export const requestModelChange = createAction<string>(
'socketio/requestModelChange'
);
export const getLoraModels = createAction<undefined>('socketio/getLoraModels');
export const getTextualInversionTriggers = createAction<undefined>(
'socketio/getTextualInversionTriggers'
);
export const saveStagingAreaImageToGallery = createAction<string>(
'socketio/saveStagingAreaImageToGallery'
);

View File

@@ -196,6 +196,12 @@ const makeSocketIOEmitters = (
dispatch(modelChangeRequested());
socketio.emit('requestModelChange', modelName);
},
emitGetLoraModels: () => {
socketio.emit('getLoraModels');
},
emitGetTextualInversionTriggers: () => {
socketio.emit('getTextualInversionTriggers');
},
emitSaveStagingAreaImageToGallery: (url: string) => {
socketio.emit('requestSaveStagingAreaImageToGallery', url);
},

View File

@@ -11,6 +11,7 @@ import {
errorOccurred,
processingCanceled,
setCurrentStatus,
setFoundLoras,
setFoundModels,
setIsCancelable,
setIsConnected,
@@ -19,6 +20,8 @@ import {
setSearchFolder,
setSystemConfig,
setSystemStatus,
setFoundLocalTextualInversionTriggers,
setFoundHuggingFaceTextualInversionTriggers,
} from 'features/system/store/systemSlice';
import {
@@ -30,12 +33,18 @@ import {
setIntermediateImage,
} from 'features/gallery/store/gallerySlice';
import {
getLoraModels,
getTextualInversionTriggers,
} from 'app/socketio/actions';
import type { RootState } from 'app/store';
import { addImageToStagingArea } from 'features/canvas/store/canvasSlice';
import {
clearInitialImage,
setHuggingFaceTextualInversionConcepts,
setInfillMethod,
setInitialImage,
setLocalTextualInversionTriggers,
setMaskPath,
} from 'features/parameters/store/generationSlice';
import { tabMap } from 'features/ui/store/tabMap';
@@ -458,6 +467,8 @@ const makeSocketIOListeners = (
const { model_name, model_list } = data;
dispatch(setModelList(model_list));
dispatch(setCurrentStatus(i18n.t('common.statusModelChanged')));
dispatch(getLoraModels());
dispatch(getTextualInversionTriggers());
dispatch(setIsProcessing(false));
dispatch(setIsCancelable(true));
dispatch(
@@ -482,6 +493,37 @@ const makeSocketIOListeners = (
})
);
},
onFoundLoras: (data: InvokeAI.FoundLorasResponse) => {
dispatch(setFoundLoras(data));
},
onFoundTextualInversionTriggers: (
data: InvokeAI.FoundTextualInversionTriggersResponse
) => {
const localTriggers = data.local_triggers;
const huggingFaceConcepts = data.huggingface_concepts;
dispatch(setFoundLocalTextualInversionTriggers(localTriggers));
dispatch(
setFoundHuggingFaceTextualInversionTriggers(huggingFaceConcepts)
);
// Assign Local TI's
const foundLocalTINames: string[] = [];
localTriggers.forEach((textualInversion) => {
foundLocalTINames.push(textualInversion.name);
});
dispatch(setLocalTextualInversionTriggers(foundLocalTINames));
// Assign HuggingFace Concepts
const foundHuggingFaceConceptNames: string[] = [];
huggingFaceConcepts.forEach((concept) => {
foundHuggingFaceConceptNames.push(concept.name);
});
dispatch(
setHuggingFaceTextualInversionConcepts(foundHuggingFaceConceptNames)
);
},
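// Round-trip sketch (assumed flow, pieced together from the changes in this
// diff): after onModelChanged runs, getLoraModels() and
// getTextualInversionTriggers() are dispatched; the middleware turns them into
// socketio.emit('getLoraModels') / emit('getTextualInversionTriggers'); the
// backend answers with 'foundLoras' and 'foundTextualInversionTriggers', which
// land in the two listeners above and populate the system slice. A 'foundLoras'
// payload is a list of { name, location } objects, e.g.
// [{ name: 'fantasyStyle', location: '/loras/fantasyStyle.safetensors' }]
// (the names and path here are purely illustrative).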
onTempFolderEmptied: () => {
dispatch(
addToast({

View File

@@ -51,6 +51,8 @@ export const socketioMiddleware = () => {
onModelConverted,
onModelsMerged,
onModelChangeFailed,
onFoundLoras,
onFoundTextualInversionTriggers,
onTempFolderEmptied,
} = makeSocketIOListeners(store);
@@ -69,6 +71,8 @@ export const socketioMiddleware = () => {
emitConvertToDiffusers,
emitMergeDiffusersModels,
emitRequestModelChange,
emitGetLoraModels,
emitGetTextualInversionTriggers,
emitSaveStagingAreaImageToGallery,
emitRequestEmptyTempFolder,
} = makeSocketIOEmitters(store, socketio);
@@ -145,6 +149,17 @@ export const socketioMiddleware = () => {
onModelChangeFailed(data);
});
socketio.on('foundLoras', (data: InvokeAI.FoundLorasResponse) => {
onFoundLoras(data);
});
socketio.on(
'foundTextualInversionTriggers',
(data: InvokeAI.FoundTextualInversionTriggersResponse) => {
onFoundTextualInversionTriggers(data);
}
);
socketio.on('tempFolderEmptied', () => {
onTempFolderEmptied();
});
@@ -226,6 +241,16 @@ export const socketioMiddleware = () => {
break;
}
case 'socketio/getLoraModels': {
emitGetLoraModels();
break;
}
case 'socketio/getTextualInversionTriggers': {
emitGetTextualInversionTriggers();
break;
}
case 'socketio/saveStagingAreaImageToGallery': {
emitSaveStagingAreaImageToGallery(action.payload);
break;

View File

@@ -7,13 +7,14 @@ import {
MenuButtonProps,
MenuListProps,
MenuItemProps,
Text,
} from '@chakra-ui/react';
import { MouseEventHandler, ReactNode } from 'react';
import { MdArrowDropDown, MdArrowDropUp } from 'react-icons/md';
import IAIButton from './IAIButton';
import IAIIconButton from './IAIIconButton';
interface IAIMenuItem {
export interface IAIMenuItem {
item: ReactNode | string;
onClick: MouseEventHandler<HTMLButtonElement> | undefined;
}
@@ -43,6 +44,7 @@ export default function IAISimpleMenu(props: IAIMenuProps) {
const renderMenuItems = () => {
const menuItemsToRender: ReactNode[] = [];
menuItems.forEach((menuItem, index) => {
menuItemsToRender.push(
<MenuItem
@@ -82,12 +84,17 @@ export default function IAISimpleMenu(props: IAIMenuProps) {
fontSize="1.5rem"
{...menuButtonProps}
>
{menuType === 'regular' && buttonText}
{menuType === 'regular' && (
<Text fontSize="0.9rem">{buttonText}</Text>
)}
</MenuButton>
<MenuList
zIndex={15}
padding={0}
borderRadius="0.5rem"
overflow="scroll"
maxWidth={'22.5rem'}
maxHeight={500}
backgroundColor="var(--background-color-secondary)"
color="var(--text-color-secondary)"
borderColor="var(--border-color)"

View File

@@ -34,7 +34,6 @@ export default function MainWidth() {
withSliderMarks
sliderMarkRightOffset={-8}
inputWidth="6.2rem"
inputReadOnly
sliderNumberInputProps={{ max: 15360 }}
/>
) : (

View File

@@ -0,0 +1,86 @@
import { Box, Flex } from '@chakra-ui/react';
import { getLoraModels } from 'app/socketio/actions';
import { useAppDispatch, useAppSelector } from 'app/storeHooks';
import IAIIconButton from 'common/components/IAIIconButton';
import IAISimpleMenu, { IAIMenuItem } from 'common/components/IAISimpleMenu';
import {
setClearLoras,
setLorasInUse,
} from 'features/parameters/store/generationSlice';
import { useEffect } from 'react';
import { useTranslation } from 'react-i18next';
import { MdClear } from 'react-icons/md';
export default function LoraManager() {
const dispatch = useAppDispatch();
const foundLoras = useAppSelector((state) => state.system.foundLoras);
const lorasInUse = useAppSelector((state) => state.generation.lorasInUse);
const { t } = useTranslation();
const handleLora = (lora: string) => {
dispatch(setLorasInUse(lora));
};
useEffect(() => {
dispatch(getLoraModels());
}, [dispatch]);
const renderLoraOption = (lora: string) => {
const thisloraExists = lorasInUse.includes(lora);
const loraExistsStyle = {
fontWeight: 'bold',
color: 'var(--context-menu-active-item)',
};
return <Box style={thisloraExists ? loraExistsStyle : {}}>{lora}</Box>;
};
const numOfActiveLoras = () => {
const foundLoraNames: string[] = [];
foundLoras?.forEach((lora) => {
foundLoraNames.push(lora.name);
});
return foundLoraNames.filter((lora) => lorasInUse.includes(lora)).length;
};
const makeLoraItems = () => {
const lorasFound: IAIMenuItem[] = [];
foundLoras?.forEach((lora) => {
if (lora.name !== ' ') {
const newLoraItem: IAIMenuItem = {
item: renderLoraOption(lora.name),
onClick: () => handleLora(lora.name),
};
lorasFound.push(newLoraItem);
}
});
return lorasFound;
};
return foundLoras && foundLoras?.length > 0 ? (
<Flex columnGap={2}>
<IAISimpleMenu
menuItems={makeLoraItems()}
menuType="regular"
buttonText={`${t('modelManager.addLora')} (${numOfActiveLoras()})`}
menuButtonProps={{ width: '100%', padding: '0 1rem' }}
/>
<IAIIconButton
icon={<MdClear />}
tooltip={t('modelManager.clearLoras')}
aria-label={t('modelManager.clearLoras')}
onClick={() => dispatch(setClearLoras())}
/>
</Flex>
) : (
<Box
background="var(--btn-base-color)"
padding={2}
textAlign="center"
borderRadius={4}
fontWeight="bold"
>
{t('modelManager.noLoraModels')}
</Box>
);
}

View File

@@ -0,0 +1,12 @@
import { Flex } from '@chakra-ui/react';
import LoraManager from './LoraManager/LoraManager';
import TextualInversionManager from './TextualInversionManager/TextualInversionManager';
export default function PromptExtras() {
return (
<Flex flexDir="column" rowGap={2}>
<LoraManager />
<TextualInversionManager />
</Flex>
);
}

View File

@@ -0,0 +1,163 @@
import { Box, Flex } from '@chakra-ui/react';
import { getTextualInversionTriggers } from 'app/socketio/actions';
import { RootState } from 'app/store';
import { useAppDispatch, useAppSelector } from 'app/storeHooks';
import IAIIconButton from 'common/components/IAIIconButton';
import IAISimpleMenu, { IAIMenuItem } from 'common/components/IAISimpleMenu';
import {
setAddTIToNegative,
setClearTextualInversions,
setTextualInversionsInUse,
} from 'features/parameters/store/generationSlice';
import { useEffect } from 'react';
import { useTranslation } from 'react-i18next';
import { MdArrowDownward, MdClear } from 'react-icons/md';
export default function TextualInversionManager() {
const dispatch = useAppDispatch();
const textualInversionsInUse = useAppSelector(
(state: RootState) => state.generation.textualInversionsInUse
);
const negativeTextualInversionsInUse = useAppSelector(
(state: RootState) => state.generation.negativeTextualInversionsInUse
);
const foundLocalTextualInversionTriggers = useAppSelector(
(state) => state.system.foundLocalTextualInversionTriggers
);
const foundHuggingFaceTextualInversionTriggers = useAppSelector(
(state) => state.system.foundHuggingFaceTextualInversionTriggers
);
const localTextualInversionTriggers = useAppSelector(
(state) => state.generation.localTextualInversionTriggers
);
const huggingFaceTextualInversionConcepts = useAppSelector(
(state) => state.generation.huggingFaceTextualInversionConcepts
);
const shouldShowHuggingFaceConcepts = useAppSelector(
(state) => state.ui.shouldShowHuggingFaceConcepts
);
const addTIToNegative = useAppSelector(
(state) => state.generation.addTIToNegative
);
const { t } = useTranslation();
useEffect(() => {
dispatch(getTextualInversionTriggers());
}, [dispatch]);
const handleTextualInversion = (textual_inversion: string) => {
dispatch(setTextualInversionsInUse(textual_inversion));
};
const TIPip = ({ color }: { color: string }) => {
return (
<Box width={2} height={2} borderRadius={9999} backgroundColor={color}>
{' '}
</Box>
);
};
const renderTextualInversionOption = (textual_inversion: string) => {
return (
<Flex alignItems="center" columnGap={1}>
{textual_inversion}
{textualInversionsInUse.includes(textual_inversion) && (
<TIPip color="var(--context-menu-active-item)" />
)}
{negativeTextualInversionsInUse.includes(textual_inversion) && (
<TIPip color="var(--status-bad-color)" />
)}
</Flex>
);
};
const numOfActiveTextualInversions = () => {
const allTextualInversions = localTextualInversionTriggers.concat(
huggingFaceTextualInversionConcepts
);
return allTextualInversions.filter(
(ti) =>
textualInversionsInUse.includes(ti) ||
negativeTextualInversionsInUse.includes(ti)
).length;
};
const makeTextualInversionItems = () => {
const textualInversionsFound: IAIMenuItem[] = [];
foundLocalTextualInversionTriggers?.forEach((textualInversion) => {
if (textualInversion.name !== ' ') {
const newTextualInversionItem: IAIMenuItem = {
item: renderTextualInversionOption(textualInversion.name),
onClick: () => handleTextualInversion(textualInversion.name),
};
textualInversionsFound.push(newTextualInversionItem);
}
});
if (shouldShowHuggingFaceConcepts) {
foundHuggingFaceTextualInversionTriggers?.forEach((textualInversion) => {
if (textualInversion.name !== ' ') {
const newTextualInversionItem: IAIMenuItem = {
item: renderTextualInversionOption(textualInversion.name),
onClick: () => handleTextualInversion(textualInversion.name),
};
textualInversionsFound.push(newTextualInversionItem);
}
});
}
return textualInversionsFound;
};
return foundLocalTextualInversionTriggers &&
(foundLocalTextualInversionTriggers?.length > 0 ||
(foundHuggingFaceTextualInversionTriggers &&
foundHuggingFaceTextualInversionTriggers?.length > 0 &&
shouldShowHuggingFaceConcepts)) ? (
<Flex columnGap={2}>
<IAISimpleMenu
menuItems={makeTextualInversionItems()}
menuType="regular"
buttonText={`${t(
'modelManager.addTextualInversionTrigger'
)} (${numOfActiveTextualInversions()})`}
menuButtonProps={{
width: '100%',
padding: '0 1rem',
}}
/>
<IAIIconButton
icon={<MdArrowDownward />}
style={{
backgroundColor: addTIToNegative ? 'var(--btn-delete-image)' : '',
}}
tooltip={t('modelManager.addTIToNegative')}
aria-label={t('modelManager.addTIToNegative')}
onClick={() => dispatch(setAddTIToNegative(!addTIToNegative))}
/>
<IAIIconButton
icon={<MdClear />}
tooltip={t('modelManager.clearTextualInversions')}
aria-label={t('modelManager.clearTextualInversions')}
onClick={() => dispatch(setClearTextualInversions())}
/>
</Flex>
) : (
<Box
background="var(--btn-base-color)"
padding={2}
textAlign="center"
borderRadius={4}
fontWeight="bold"
>
{t('modelManager.noTextualInversionTriggers')}
</Box>
);
}

View File

@@ -1,24 +1,43 @@
import { FormControl, Textarea } from '@chakra-ui/react';
import type { RootState } from 'app/store';
import { useAppDispatch, useAppSelector } from 'app/storeHooks';
import { setNegativePrompt } from 'features/parameters/store/generationSlice';
import {
handlePromptCheckers,
setNegativePrompt,
} from 'features/parameters/store/generationSlice';
import { useTranslation } from 'react-i18next';
import { ChangeEvent, useState } from 'react';
const NegativePromptInput = () => {
const negativePrompt = useAppSelector(
(state: RootState) => state.generation.negativePrompt
);
const [promptTimer, setPromptTimer] = useState<number | undefined>(undefined);
const dispatch = useAppDispatch();
const { t } = useTranslation();
const handleNegativeChangePrompt = (e: ChangeEvent<HTMLTextAreaElement>) => {
dispatch(setNegativePrompt(e.target.value));
// Debounce Prompt UI Checking
clearTimeout(promptTimer);
const newPromptTimer = window.setTimeout(() => {
dispatch(
handlePromptCheckers({ prompt: e.target.value, toNegative: true })
);
}, 500);
setPromptTimer(newPromptTimer);
};
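// Debounce sketch: each keystroke updates the negative prompt immediately via
// setNegativePrompt, but clears the previous timer and starts a fresh 500 ms
// one, so handlePromptCheckers (which re-scans the text for lora / textual
// inversion syntax) only fires once typing pauses. PromptInput below applies
// the same pattern to the positive prompt with toNegative: false.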
return (
<FormControl>
<Textarea
id="negativePrompt"
name="negativePrompt"
value={negativePrompt}
onChange={(e) => dispatch(setNegativePrompt(e.target.value))}
onChange={handleNegativeChangePrompt}
background="var(--prompt-bg-color)"
placeholder={t('parameters.negativePrompts')}
_placeholder={{ fontSize: '0.8rem' }}

View File

@@ -2,12 +2,13 @@ import { FormControl, Textarea } from '@chakra-ui/react';
import { generateImage } from 'app/socketio/actions';
import { RootState } from 'app/store';
import { useAppDispatch, useAppSelector } from 'app/storeHooks';
import { ChangeEvent, KeyboardEvent, useRef } from 'react';
import { ChangeEvent, KeyboardEvent, useRef, useState } from 'react';
import { createSelector } from '@reduxjs/toolkit';
import { readinessSelector } from 'app/selectors/readinessSelector';
import {
GenerationState,
handlePromptCheckers,
setPrompt,
} from 'features/parameters/store/generationSlice';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
@@ -40,11 +41,21 @@ const PromptInput = () => {
const { isReady } = useAppSelector(readinessSelector);
const promptRef = useRef<HTMLTextAreaElement>(null);
const [promptTimer, setPromptTimer] = useState<number | undefined>(undefined);
const { t } = useTranslation();
const handleChangePrompt = (e: ChangeEvent<HTMLTextAreaElement>) => {
dispatch(setPrompt(e.target.value));
// Debounce Prompt UI Checking
clearTimeout(promptTimer);
const newPromptTimer = window.setTimeout(() => {
dispatch(
handlePromptCheckers({ prompt: e.target.value, toNegative: false })
);
}, 500);
setPromptTimer(newPromptTimer);
};
useHotkeys(

View File

@@ -3,7 +3,11 @@ import { getPromptAndNegative } from 'common/util/getPromptAndNegative';
import * as InvokeAI from 'app/invokeai';
import promptToString from 'common/util/promptToString';
import { useAppDispatch } from 'app/storeHooks';
import { setNegativePrompt, setPrompt } from '../store/generationSlice';
import {
handlePromptCheckers,
setNegativePrompt,
setPrompt,
} from '../store/generationSlice';
// TECHDEBT: We have two metadata prompt formats and need to handle recalling either of them.
// This hook provides a function to do that.
@@ -20,6 +24,10 @@ const useSetBothPrompts = () => {
dispatch(setPrompt(prompt));
dispatch(setNegativePrompt(negativePrompt));
dispatch(handlePromptCheckers({ prompt: prompt, toNegative: false }));
dispatch(
handlePromptCheckers({ prompt: negativePrompt, toNegative: true })
);
};
};

View File

@@ -1,4 +1,4 @@
import type { PayloadAction } from '@reduxjs/toolkit';
import { PayloadAction } from '@reduxjs/toolkit';
import { createSlice } from '@reduxjs/toolkit';
import * as InvokeAI from 'app/invokeai';
import { getPromptAndNegative } from 'common/util/getPromptAndNegative';
@@ -17,6 +17,12 @@ export interface GenerationState {
perlin: number;
prompt: string;
negativePrompt: string;
lorasInUse: string[];
huggingFaceTextualInversionConcepts: string[];
localTextualInversionTriggers: string[];
textualInversionsInUse: string[];
negativeTextualInversionsInUse: string[];
addTIToNegative: boolean;
sampler: string;
seamBlur: number;
seamless: boolean;
@@ -48,6 +54,12 @@ const initialGenerationState: GenerationState = {
perlin: 0,
prompt: '',
negativePrompt: '',
lorasInUse: [],
huggingFaceTextualInversionConcepts: [],
localTextualInversionTriggers: [],
textualInversionsInUse: [],
negativeTextualInversionsInUse: [],
addTIToNegative: false,
sampler: 'k_lms',
seamBlur: 16,
seamless: false,
@@ -71,12 +83,99 @@ const initialGenerationState: GenerationState = {
const initialState: GenerationState = initialGenerationState;
const loraExists = (state: GenerationState, lora: string) => {
const loraRegex = new RegExp(`withLora\\(${lora},?\\s*([^\\)]+)?\\)`);
if (state.prompt.match(loraRegex)) return true;
return false;
};
const getTIRegex = (textualInversion: string) => {
if (textualInversion.includes('<') || textualInversion.includes('>')) {
return new RegExp(`${textualInversion}`);
} else {
return new RegExp(`\\b${textualInversion}\\b`);
}
};
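// Worked example (illustrative trigger names, not part of the diff):
// angle-bracketed triggers are matched literally anywhere in the text, while
// plain-word triggers only match as a whole word, so a longer word containing
// the trigger does not count. Triggers containing other regex metacharacters
// are not escaped here.
//   getTIRegex('<ghibli-style>').test('a photo, <ghibli-style>'); // true
//   getTIRegex('easynegative').test('easynegatives');             // false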
const textualInversionExists = (
state: GenerationState,
textualInversion: string
) => {
const textualInversionRegex = getTIRegex(textualInversion);
if (!state.addTIToNegative) {
if (state.prompt.match(textualInversionRegex)) return true;
} else {
if (state.negativePrompt.match(textualInversionRegex)) return true;
}
return false;
};
const handleTypedTICheck = (
state: GenerationState,
newPrompt: string,
toNegative: boolean
) => {
let textualInversionsInUse = !toNegative
? [...state.textualInversionsInUse]
: [...state.negativeTextualInversionsInUse]; // Get Words In Prompt
const textualInversionRegex = /([\w<>!@%&*_-]+)/g; // Scan For Each Word
const textualInversionMatches = [
...newPrompt.matchAll(textualInversionRegex),
]; // Match All Words
if (textualInversionMatches.length > 0) {
textualInversionsInUse = []; // Reset Textual Inversions In Use
textualInversionMatches.forEach((textualInversionMatch) => {
const textualInversionName = textualInversionMatch[0];
if (
  !textualInversionsInUse.includes(textualInversionName) &&
  (state.localTextualInversionTriggers.includes(textualInversionName) ||
    state.huggingFaceTextualInversionConcepts.includes(textualInversionName))
) {
textualInversionsInUse.push(textualInversionName); // Add Textual Inversions In Prompt
}
});
} else {
textualInversionsInUse = []; // If No Matches, Remove Textual Inversions In Use
}
if (!toNegative) {
state.textualInversionsInUse = textualInversionsInUse;
} else {
state.negativeTextualInversionsInUse = textualInversionsInUse;
}
};
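// Worked example (illustrative names, not part of the diff): with
// localTextualInversionTriggers = ['easynegative'] and
// huggingFaceTextualInversionConcepts = ['<ghibli-style>'], typing
// "portrait, <ghibli-style>, easynegative" tokenizes into
// ['portrait', '<ghibli-style>', 'easynegative']; only the known triggers are
// kept, so textualInversionsInUse (or the negative list when toNegative is
// true) ends up as ['<ghibli-style>', 'easynegative'].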
const handleTypedLoraCheck = (state: GenerationState, newPrompt: string) => {
let lorasInUse = [...state.lorasInUse]; // Get Loras In Prompt
const loraRegex = /withLora\(([^\\)]+)\)/g; // Scan For Lora Syntax
const loraMatches = [...newPrompt.matchAll(loraRegex)]; // Match All Lora Syntaxes
if (loraMatches.length > 0) {
lorasInUse = []; // Reset Loras In Use
loraMatches.forEach((loraMatch) => {
const loraName = loraMatch[1].split(',')[0];
if (!lorasInUse.includes(loraName)) lorasInUse.push(loraName); // Add Loras In Prompt
});
} else {
lorasInUse = []; // If No Matches, Remove Loras In Use
}
state.lorasInUse = lorasInUse;
};
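// Worked example (illustrative lora names, not part of the diff): the prompt
// "a castle withLora(fantasyStyle,0.75) withLora(detail)" yields two matches;
// each capture group is split on ',' so only the model name survives, leaving
// state.lorasInUse === ['fantasyStyle', 'detail']. A prompt without any
// withLora(...) call resets lorasInUse to [].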
export const generationSlice = createSlice({
name: 'generation',
initialState,
reducers: {
setPrompt: (state, action: PayloadAction<string | InvokeAI.Prompt>) => {
const newPrompt = action.payload;
if (typeof newPrompt === 'string') {
state.prompt = newPrompt;
} else {
@@ -94,6 +193,136 @@ export const generationSlice = createSlice({
state.negativePrompt = promptToString(newPrompt);
}
},
handlePromptCheckers: (
state,
action: PayloadAction<{
prompt: string | InvokeAI.Prompt;
toNegative: boolean;
}>
) => {
const newPrompt = action.payload.prompt;
if (typeof newPrompt === 'string') {
if (!action.payload.toNegative) handleTypedLoraCheck(state, newPrompt);
handleTypedTICheck(state, newPrompt, action.payload.toNegative);
}
},
setLorasInUse: (state, action: PayloadAction<string>) => {
const newLora = action.payload;
const loras = [...state.lorasInUse];
if (loraExists(state, newLora)) {
const loraRegex = new RegExp(
`withLora\\(${newLora},?\\s*([^\\)]+)?\\)`,
'g'
);
const newPrompt = state.prompt.replaceAll(loraRegex, '');
state.prompt = newPrompt.trim();
if (loras.includes(newLora)) {
const newLoraIndex = loras.indexOf(newLora);
if (newLoraIndex > -1) loras.splice(newLoraIndex, 1);
}
} else {
state.prompt = `${state.prompt.trim()} withLora(${newLora},0.75)`;
if (!loras.includes(newLora)) loras.push(newLora);
}
state.lorasInUse = loras;
},
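// Usage sketch (illustrative name, not part of the diff): picking a lora from
// the menu dispatches setLorasInUse('fantasyStyle'); if the prompt does not
// already contain it, ' withLora(fantasyStyle,0.75)' is appended and the name
// is tracked in lorasInUse. Dispatching the same action again strips the
// withLora(...) call from the prompt and untracks the name, so each menu item
// behaves as a toggle.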
setClearLoras: (state) => {
const lorasInUse = [...state.lorasInUse];
lorasInUse.forEach((lora) => {
const loraRegex = new RegExp(
`withLora\\(${lora},?\\s*([^\\)]+)?\\)`,
'g'
);
const newPrompt = state.prompt.replaceAll(loraRegex, '');
state.prompt = newPrompt.trim();
});
state.lorasInUse = [];
},
setTextualInversionsInUse: (state, action: PayloadAction<string>) => {
const newTextualInversion = action.payload;
const textualInversions = [...state.textualInversionsInUse];
const negativeTextualInversions = [
...state.negativeTextualInversionsInUse,
];
if (textualInversionExists(state, newTextualInversion)) {
const textualInversionRegex = getTIRegex(newTextualInversion);
if (!state.addTIToNegative) {
const newPrompt = state.prompt.replace(textualInversionRegex, '');
state.prompt = newPrompt.trim();
const newTIIndex = textualInversions.indexOf(newTextualInversion);
if (newTIIndex > -1) textualInversions.splice(newTIIndex, 1);
} else {
const newPrompt = state.negativePrompt.replace(
textualInversionRegex,
''
);
state.negativePrompt = newPrompt.trim();
const newTIIndex =
negativeTextualInversions.indexOf(newTextualInversion);
if (newTIIndex > -1) negativeTextualInversions.splice(newTIIndex, 1);
}
} else {
if (!state.addTIToNegative) {
state.prompt = `${state.prompt.trim()} ${newTextualInversion}`;
textualInversions.push(newTextualInversion);
} else {
state.negativePrompt = `${state.negativePrompt.trim()} ${newTextualInversion}`;
negativeTextualInversions.push(newTextualInversion);
}
}
state.textualInversionsInUse = textualInversions;
state.negativeTextualInversionsInUse = negativeTextualInversions;
},
setClearTextualInversions: (state) => {
const textualInversions = [...state.textualInversionsInUse];
const negativeTextualInversions = [
...state.negativeTextualInversionsInUse,
];
textualInversions.forEach((ti) => {
const textualInversionRegex = getTIRegex(ti);
const newPrompt = state.prompt.replace(textualInversionRegex, '');
state.prompt = newPrompt.trim();
});
negativeTextualInversions.forEach((ti) => {
const textualInversionRegex = getTIRegex(ti);
const newPrompt = state.negativePrompt.replace(
textualInversionRegex,
''
);
state.negativePrompt = newPrompt.trim();
});
state.textualInversionsInUse = [];
state.negativeTextualInversionsInUse = [];
},
setAddTIToNegative: (state, action: PayloadAction<boolean>) => {
state.addTIToNegative = action.payload;
},
setLocalTextualInversionTriggers: (
state,
action: PayloadAction<string[]>
) => {
state.localTextualInversionTriggers = action.payload;
},
setHuggingFaceTextualInversionConcepts: (
state,
action: PayloadAction<string[]>
) => {
state.huggingFaceTextualInversionConcepts = action.payload;
},
setIterations: (state, action: PayloadAction<number>) => {
state.iterations = action.payload;
},
@@ -374,6 +603,14 @@ export const {
setPerlin,
setPrompt,
setNegativePrompt,
handlePromptCheckers,
setLorasInUse,
setClearLoras,
setHuggingFaceTextualInversionConcepts,
setLocalTextualInversionTriggers,
setTextualInversionsInUse,
setAddTIToNegative,
setClearTextualInversions,
setSampler,
setSeamBlur,
setSeamless,

View File

@@ -31,6 +31,7 @@ import {
} from 'features/system/store/systemSlice';
import { uiSelector } from 'features/ui/store/uiSelectors';
import {
setShouldShowHuggingFaceConcepts,
setShouldUseCanvasBetaLayout,
setShouldUseSliders,
} from 'features/ui/store/uiSlice';
@@ -52,7 +53,11 @@ const selector = createSelector(
enableImageDebugging,
} = system;
const { shouldUseCanvasBetaLayout, shouldUseSliders } = ui;
const {
shouldUseCanvasBetaLayout,
shouldUseSliders,
shouldShowHuggingFaceConcepts,
} = ui;
return {
shouldDisplayInProgressType,
@@ -63,6 +68,7 @@ const selector = createSelector(
enableImageDebugging,
shouldUseCanvasBetaLayout,
shouldUseSliders,
shouldShowHuggingFaceConcepts,
};
},
{
@@ -107,6 +113,7 @@ const SettingsModal = ({ children }: SettingsModalProps) => {
enableImageDebugging,
shouldUseCanvasBetaLayout,
shouldUseSliders,
shouldShowHuggingFaceConcepts,
} = useAppSelector(selector);
/**
@@ -206,6 +213,14 @@ const SettingsModal = ({ children }: SettingsModalProps) => {
dispatch(setShouldUseSliders(e.target.checked))
}
/>
<IAISwitch
styleClass="settings-modal-item"
label={t('settings.showHuggingFaceConcepts')}
isChecked={shouldShowHuggingFaceConcepts}
onChange={(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldShowHuggingFaceConcepts(e.target.checked))
}
/>
</div>
<div className="settings-modal-items">

View File

@@ -51,6 +51,13 @@ export interface SystemState
toastQueue: UseToastOptions[];
searchFolder: string | null;
foundModels: InvokeAI.FoundModel[] | null;
foundLoras: InvokeAI.FoundLora[] | null;
foundLocalTextualInversionTriggers:
| InvokeAI.FoundTextualInversionTriggers[]
| null;
foundHuggingFaceTextualInversionTriggers:
| InvokeAI.FoundTextualInversionTriggers[]
| null;
openModel: string | null;
cancelOptions: {
cancelType: CancelType;
@@ -93,6 +100,9 @@ const initialSystemState: SystemState = {
toastQueue: [],
searchFolder: null,
foundModels: null,
foundLoras: null,
foundLocalTextualInversionTriggers: null,
foundHuggingFaceTextualInversionTriggers: null,
openModel: null,
cancelOptions: {
cancelType: 'immediate',
@@ -262,6 +272,24 @@ export const systemSlice = createSlice({
) => {
state.foundModels = action.payload;
},
setFoundLoras: (
state,
action: PayloadAction<InvokeAI.FoundLora[] | null>
) => {
state.foundLoras = action.payload;
},
setFoundLocalTextualInversionTriggers: (
state,
action: PayloadAction<InvokeAI.FoundTextualInversionTriggers[] | null>
) => {
state.foundLocalTextualInversionTriggers = action.payload;
},
setFoundHuggingFaceTextualInversionTriggers: (
state,
action: PayloadAction<InvokeAI.FoundTextualInversionTriggers[] | null>
) => {
state.foundHuggingFaceTextualInversionTriggers = action.payload;
},
setOpenModel: (state, action: PayloadAction<string | null>) => {
state.openModel = action.payload;
},
@@ -303,6 +331,9 @@ export const {
setProcessingIndeterminateTask,
setSearchFolder,
setFoundModels,
setFoundLoras,
setFoundLocalTextualInversionTriggers,
setFoundHuggingFaceTextualInversionTriggers,
setOpenModel,
setCancelType,
setCancelAfter,

View File

@@ -18,6 +18,7 @@ import PromptInput from 'features/parameters/components/PromptInput/PromptInput'
import InvokeOptionsPanel from 'features/ui/components/InvokeParametersPanel';
import { useTranslation } from 'react-i18next';
import ImageToImageOptions from './ImageToImageOptions';
import PromptExtras from 'features/parameters/components/PromptInput/Extras/PromptExtras';
export default function ImageToImagePanel() {
const { t } = useTranslation();
@@ -63,6 +64,7 @@ export default function ImageToImagePanel() {
<Flex flexDir="column" rowGap="0.5rem">
<PromptInput />
<NegativePromptInput />
<PromptExtras />
</Flex>
<ProcessButtons />
<MainSettings />

View File

@@ -17,6 +17,7 @@ import NegativePromptInput from 'features/parameters/components/PromptInput/Nega
import PromptInput from 'features/parameters/components/PromptInput/PromptInput';
import InvokeOptionsPanel from 'features/ui/components/InvokeParametersPanel';
import { useTranslation } from 'react-i18next';
import PromptExtras from 'features/parameters/components/PromptInput/Extras/PromptExtras';
export default function TextToImagePanel() {
const { t } = useTranslation();
@@ -62,6 +63,7 @@ export default function TextToImagePanel() {
<Flex flexDir="column" rowGap="0.5rem">
<PromptInput />
<NegativePromptInput />
<PromptExtras />
</Flex>
<ProcessButtons />
<MainSettings />

View File

@@ -17,6 +17,7 @@ import NegativePromptInput from 'features/parameters/components/PromptInput/Nega
import PromptInput from 'features/parameters/components/PromptInput/PromptInput';
import InvokeOptionsPanel from 'features/ui/components/InvokeParametersPanel';
import { useTranslation } from 'react-i18next';
import PromptExtras from 'features/parameters/components/PromptInput/Extras/PromptExtras';
export default function UnifiedCanvasPanel() {
const { t } = useTranslation();
@@ -73,6 +74,7 @@ export default function UnifiedCanvasPanel() {
<Flex flexDir="column" rowGap="0.5rem">
<PromptInput />
<NegativePromptInput />
<PromptExtras />
</Flex>
<ProcessButtons />
<MainSettings />

View File

@@ -15,6 +15,7 @@ const initialtabsState: UIState = {
shouldUseCanvasBetaLayout: false,
shouldShowExistingModelsInSearch: false,
shouldUseSliders: false,
shouldShowHuggingFaceConcepts: false,
addNewModelUIOption: null,
};
@@ -70,6 +71,12 @@ export const uiSlice = createSlice({
setShouldUseSliders: (state, action: PayloadAction<boolean>) => {
state.shouldUseSliders = action.payload;
},
setShouldShowHuggingFaceConcepts: (
state,
action: PayloadAction<boolean>
) => {
state.shouldShowHuggingFaceConcepts = action.payload;
},
setAddNewModelUIOption: (state, action: PayloadAction<AddNewModelType>) => {
state.addNewModelUIOption = action.payload;
},
@@ -88,6 +95,7 @@ export const {
setShouldUseCanvasBetaLayout,
setShouldShowExistingModelsInSearch,
setShouldUseSliders,
setShouldShowHuggingFaceConcepts,
setAddNewModelUIOption,
} = uiSlice.actions;

View File

@@ -12,5 +12,6 @@ export interface UIState {
shouldUseCanvasBetaLayout: boolean;
shouldShowExistingModelsInSearch: boolean;
shouldUseSliders: boolean;
shouldShowHuggingFaceConcepts: boolean;
addNewModelUIOption: AddNewModelType;
}

View File

@@ -8,7 +8,6 @@
--accent-color-bright: rgb(104, 60, 230);
--accent-color-hover: var(--accent-color-bright);
// App Colors
--root-bg-color: rgb(10, 10, 10);
--background-color: rgb(26, 26, 32);
--background-color-light: rgb(40, 44, 48);
@@ -119,6 +118,7 @@
--context-menu-bg-color: rgb(46, 48, 58);
--context-menu-box-shadow: none;
--context-menu-bg-color-hover: rgb(30, 32, 42);
--context-menu-active-item: var(--accent-color-bright);
// Shadows
--floating-button-drop-shadow-color: var(--accent-color);

View File

@@ -117,6 +117,7 @@
--context-menu-bg-color: rgb(46, 48, 58);
--context-menu-box-shadow: none;
--context-menu-bg-color-hover: rgb(30, 32, 42);
--context-menu-active-item: var(--accent-color-bright);
// Shadows
--floating-button-drop-shadow-color: var(--accent-color);

View File

@@ -114,6 +114,7 @@
--context-menu-box-shadow: 0px 10px 38px -10px rgba(22, 23, 24, 0.35),
0px 10px 20px -15px rgba(22, 23, 24, 0.2);
--context-menu-bg-color-hover: var(--background-color-secondary);
--context-menu-active-item: rgb(0, 0, 0);
// Shadows
--floating-button-drop-shadow-color: rgba(0, 0, 0, 0.7);

File diff suppressed because one or more lines are too long

View File

@@ -56,26 +56,26 @@
"@babel/helper-validator-identifier" "^7.19.1"
to-fast-properties "^2.0.0"
"@chakra-ui/accordion@2.1.9":
version "2.1.9"
resolved "https://registry.yarnpkg.com/@chakra-ui/accordion/-/accordion-2.1.9.tgz#20fa86d94dc034251df2f7c8595ae4dd541a29d9"
integrity sha512-a9CKIAUHezc0f5FR/SQ4GVxnWuIb2HbDTxTEKTp58w/J9pecIbJaNrJ5TUZ0MVbDU9jkgO9RsZ29jkja8PomAw==
"@chakra-ui/accordion@2.1.11":
version "2.1.11"
resolved "https://registry.yarnpkg.com/@chakra-ui/accordion/-/accordion-2.1.11.tgz#c6df0100c543645d0631df3aefde2ea2b8ed6313"
integrity sha512-mfVPmqETp9pyRDHJ33AdF19oHv/LyxVzQJtlxUByuvs8Cj9QQZ2LQLg5kejm+b3mj03A7A6yfbuo3RNaI4Bhsg==
dependencies:
"@chakra-ui/descendant" "3.0.13"
"@chakra-ui/descendant" "3.0.14"
"@chakra-ui/icon" "3.0.16"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-use-controllable-state" "2.0.8"
"@chakra-ui/react-use-merge-refs" "2.0.7"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/transition" "2.0.15"
"@chakra-ui/transition" "2.0.16"
"@chakra-ui/alert@2.0.17":
version "2.0.17"
resolved "https://registry.yarnpkg.com/@chakra-ui/alert/-/alert-2.0.17.tgz#b129732ec308db6a6a1afa7c06a6595ad853c967"
integrity sha512-0Y5vw+HkeXpwbL1roVpSSNM6luMRmUbwduUSHEA4OnX1ismvsDb1ZBfpi4Vxp6w8euJ2Uj6df3krbd5tbCP6tg==
"@chakra-ui/alert@2.1.0":
version "2.1.0"
resolved "https://registry.yarnpkg.com/@chakra-ui/alert/-/alert-2.1.0.tgz#7a234ac6426231b39243088648455cbcf1cbdf24"
integrity sha512-OcfHwoXI5VrmM+tHJTHT62Bx6TfyfCxSa0PWUOueJzSyhlUOKBND5we6UtrOB7D0jwX45qKKEDJOLG5yCG21jQ==
dependencies:
"@chakra-ui/icon" "3.0.16"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/spinner" "2.0.13"
@@ -84,23 +84,23 @@
resolved "https://registry.yarnpkg.com/@chakra-ui/anatomy/-/anatomy-2.1.2.tgz#ea66b1841e7195da08ddc862daaa3f3e56e565f5"
integrity sha512-pKfOS/mztc4sUXHNc8ypJ1gPWSolWT770jrgVRfolVbYlki8y5Y+As996zMF6k5lewTu6j9DQequ7Cc9a69IVQ==
"@chakra-ui/avatar@2.2.5":
version "2.2.5"
resolved "https://registry.yarnpkg.com/@chakra-ui/avatar/-/avatar-2.2.5.tgz#50eb7cc5a172d394b301fa0abd5f607b7f5d3563"
integrity sha512-TEHXuGE79+fEn61qJ7J/A0Ec+WjyNwobrDTATcLg9Zx2/WEMmZNfrWIAlI5ANQAwVbdSWeGVbyoLAK5mbcrE0A==
"@chakra-ui/avatar@2.2.8":
version "2.2.8"
resolved "https://registry.yarnpkg.com/@chakra-ui/avatar/-/avatar-2.2.8.tgz#a6e16accb2bb9c879f197090ccc9df1ff42992a6"
integrity sha512-uBs9PMrqyK111tPIYIKnOM4n3mwgKqGpvYmtwBnnbQLTNLg4gtiWWVbpTuNMpyu1av0xQYomjUt8Doed8w6p8g==
dependencies:
"@chakra-ui/image" "2.0.15"
"@chakra-ui/react-children-utils" "2.0.6"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/breadcrumb@2.1.4":
version "2.1.4"
resolved "https://registry.yarnpkg.com/@chakra-ui/breadcrumb/-/breadcrumb-2.1.4.tgz#0d249dc2a92639bd2bf46d097dd5445112bd2367"
integrity sha512-vyBx5TAxPnHhb0b8nyRGfqyjleD//9mySFhk96c9GL+T6YDO4swHw5y/kvDv3Ngc/iRwJ9hdI49PZKwPxLqsEg==
"@chakra-ui/breadcrumb@2.1.5":
version "2.1.5"
resolved "https://registry.yarnpkg.com/@chakra-ui/breadcrumb/-/breadcrumb-2.1.5.tgz#a43b22cc8005291a615696a8c88efc37064562f3"
integrity sha512-p3eQQrHQBkRB69xOmNyBJqEdfCrMt+e0eOH+Pm/DjFWfIVIbnIaFbmDCeWClqlLa21Ypc6h1hR9jEmvg8kmOog==
dependencies:
"@chakra-ui/react-children-utils" "2.0.6"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/breakpoint-utils@2.0.8":
@@ -110,12 +110,12 @@
dependencies:
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/button@2.0.16":
version "2.0.16"
resolved "https://registry.yarnpkg.com/@chakra-ui/button/-/button-2.0.16.tgz#ff315b57ee47c3511a6507fcfb6f00bb93e2ac7d"
integrity sha512-NjuTKa7gNhnGSUutKuTc8HoAOe9WWIigpciBG7yj3ok67kg8bXtSzPyQFZlgTY6XGdAckWTT+Do4tvhwa5LA+g==
"@chakra-ui/button@2.0.18":
version "2.0.18"
resolved "https://registry.yarnpkg.com/@chakra-ui/button/-/button-2.0.18.tgz#c13d2e404e22a9873ba5373fde494bedafe32fdd"
integrity sha512-E3c99+lOm6ou4nQVOTLkG+IdOPMjsQK+Qe7VyP8A/xeAMFONuibrWPRPpprr4ZkB4kEoLMfNuyH2+aEza3ScUA==
dependencies:
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-use-merge-refs" "2.0.7"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/spinner" "2.0.13"
@@ -127,13 +127,13 @@
dependencies:
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/checkbox@2.2.10":
version "2.2.10"
resolved "https://registry.yarnpkg.com/@chakra-ui/checkbox/-/checkbox-2.2.10.tgz#e4f773e7d2464f1d6e9d18dd88b679290cb33171"
integrity sha512-vzxEjw99qj7loxAdP1WuHNt4EAvj/t6cc8oxyOB2mEvkAzhxI34rLR+3zWDuHWsmhyUO+XEDh4FiWdR+DK5Siw==
"@chakra-ui/checkbox@2.2.14":
version "2.2.14"
resolved "https://registry.yarnpkg.com/@chakra-ui/checkbox/-/checkbox-2.2.14.tgz#902acc99a9a80c1c304788a230cf36f8116e8260"
integrity sha512-uqo6lFWLqYBujPglrvRhTAErtuIXpmdpc5w0W4bjK7kyvLhxOpUh1hlDb2WoqlNpfRn/OaNeF6VinPnf9BJL8w==
dependencies:
"@chakra-ui/form-control" "2.0.17"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/form-control" "2.0.18"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-types" "2.0.7"
"@chakra-ui/react-use-callback-ref" "2.0.7"
"@chakra-ui/react-use-controllable-state" "2.0.8"
@@ -142,7 +142,7 @@
"@chakra-ui/react-use-update-effect" "2.0.7"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/visually-hidden" "2.0.15"
"@zag-js/focus-visible" "0.2.1"
"@zag-js/focus-visible" "0.2.2"
"@chakra-ui/clickable@2.0.14":
version "2.0.14"
@@ -180,17 +180,17 @@
"@chakra-ui/react-use-callback-ref" "2.0.7"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/css-reset@2.0.12":
version "2.0.12"
resolved "https://registry.yarnpkg.com/@chakra-ui/css-reset/-/css-reset-2.0.12.tgz#6eebcbe9e971facd215e174e063ace29f647a045"
integrity sha512-Q5OYIMvqTl2vZ947kIYxcS5DhQXeStB84BzzBd6C10wOx1gFUu9pL+jLpOnHR3hhpWRMdX5o7eT+gMJWIYUZ0Q==
"@chakra-ui/css-reset@2.1.1":
version "2.1.1"
resolved "https://registry.yarnpkg.com/@chakra-ui/css-reset/-/css-reset-2.1.1.tgz#c61f3d2103c13e62a86fd2d359682092e961852c"
integrity sha512-jwEOfIAWmQsnChHQTW/eRE+dfE4MjmhvSvoUug5nkV1pI7veC/20noFlIZxzi82EbiQI8Fs0+Jnusgxr2yaOHA==
"@chakra-ui/descendant@3.0.13":
version "3.0.13"
resolved "https://registry.yarnpkg.com/@chakra-ui/descendant/-/descendant-3.0.13.tgz#e883a2233ee07fe1ae6c014567824c0f79df11cf"
integrity sha512-9nzxZVxUSMc4xPL5fSaRkEOQjDQWUGjGvrZI7VzWk9eq63cojOtIxtWMSW383G9148PzWJjJYt30Eud5tdZzlg==
"@chakra-ui/descendant@3.0.14":
version "3.0.14"
resolved "https://registry.yarnpkg.com/@chakra-ui/descendant/-/descendant-3.0.14.tgz#fe8bac3f0e1ffe562e3e73eac393dbf222d57e13"
integrity sha512-+Ahvp9H4HMpfScIv9w1vaecGz7qWAaK1YFHHolz/SIsGLaLGlbdp+5UNabQC7L6TUnzzJDQDxzwif78rTD7ang==
dependencies:
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-use-merge-refs" "2.0.7"
"@chakra-ui/dom-utils@2.0.6":
@@ -198,12 +198,12 @@
resolved "https://registry.yarnpkg.com/@chakra-ui/dom-utils/-/dom-utils-2.0.6.tgz#68f49f3b4a0bdebd5e416d6fd2c012c9ad64b76a"
integrity sha512-PVtDkPrDD5b8aoL6Atg7SLjkwhWb7BwMcLOF1L449L3nZN+DAO3nyAh6iUhZVJyunELj9d0r65CDlnMREyJZmA==
"@chakra-ui/editable@2.0.19":
version "2.0.19"
resolved "https://registry.yarnpkg.com/@chakra-ui/editable/-/editable-2.0.19.tgz#1af2fe3c215111f61f7872fb5f599f4d8da24e7d"
integrity sha512-YxRJsJ2JQd42zfPBgTKzIhg1HugT+gfQz1ZosmUN+IZT9YZXL2yodHTUz6Lee04Vc/CdEqgBFLuREXEUNBfGtA==
"@chakra-ui/editable@2.0.21":
version "2.0.21"
resolved "https://registry.yarnpkg.com/@chakra-ui/editable/-/editable-2.0.21.tgz#bc74510470d6d455844438e540851896d3879132"
integrity sha512-oYuXbHnggxSYJN7P9Pn0Scs9tPC91no4z1y58Oe+ILoJKZ+bFAEHtL7FEISDNJxw++MEukeFu7GU1hVqmdLsKQ==
dependencies:
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-types" "2.0.7"
"@chakra-ui/react-use-callback-ref" "2.0.7"
"@chakra-ui/react-use-controllable-state" "2.0.8"
@@ -226,13 +226,13 @@
"@chakra-ui/dom-utils" "2.0.6"
react-focus-lock "^2.9.2"
"@chakra-ui/form-control@2.0.17":
version "2.0.17"
resolved "https://registry.yarnpkg.com/@chakra-ui/form-control/-/form-control-2.0.17.tgz#2f710325e77ce35067337616d440f903b137bdd5"
integrity sha512-34ptCaJ2LNvQNOlB6MAKsmH1AkT1xo7E+3Vw10Urr81yTOjDTM/iU6vG3JKPfRDMyXeowPjXmutlnuk72SSjRg==
"@chakra-ui/form-control@2.0.18":
version "2.0.18"
resolved "https://registry.yarnpkg.com/@chakra-ui/form-control/-/form-control-2.0.18.tgz#1923f293afde70b2b07ca731d98fef3660098c56"
integrity sha512-I0a0jG01IAtRPccOXSNugyRdUAe8Dy40ctqedZvznMweOXzbMCF1m+sHPLdWeWC/VI13VoAispdPY0/zHOdjsQ==
dependencies:
"@chakra-ui/icon" "3.0.16"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-types" "2.0.7"
"@chakra-ui/react-use-merge-refs" "2.0.7"
"@chakra-ui/shared-utils" "2.0.5"
@@ -254,10 +254,10 @@
dependencies:
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/icons@^2.0.17":
version "2.0.17"
resolved "https://registry.yarnpkg.com/@chakra-ui/icons/-/icons-2.0.17.tgz#625a46d169707aad36d65c04a4626a422f92e5ae"
integrity sha512-HMJP0WrJgAmFR9+Xh/CBH0nVnGMsJ4ZC8MK6tMgxPKd9/muvn0I4hsicHqdPlLpmB0TlxlhkBAKaVMtOdz6F0w==
"@chakra-ui/icons@^2.0.18":
version "2.0.18"
resolved "https://registry.yarnpkg.com/@chakra-ui/icons/-/icons-2.0.18.tgz#6f859d2e0d8f31fea9cb2e6507d65eb65cb95cf5"
integrity sha512-E/+DF/jw7kdN4/XxCZRnr4FdMXhkl50Q34MVwN9rADWMwPK9uSZPGyC7HOx6rilo7q4bFjYDH3yRj9g+VfbVkg==
dependencies:
"@chakra-ui/icon" "3.0.16"
@@ -269,27 +269,27 @@
"@chakra-ui/react-use-safe-layout-effect" "2.0.5"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/input@2.0.20":
version "2.0.20"
resolved "https://registry.yarnpkg.com/@chakra-ui/input/-/input-2.0.20.tgz#8db3ec46b52be901c94599b3659a9003bdb2dd07"
integrity sha512-ypmsy4n4uNBVgn6Gd24Zrpi+qRf/T9WEzWkysuYC9Qfxo+i7yuf3snp7XmBy8KSGVSiXE11eO8ZN5oCg6Xg0jg==
"@chakra-ui/input@2.0.21":
version "2.0.21"
resolved "https://registry.yarnpkg.com/@chakra-ui/input/-/input-2.0.21.tgz#a7e55ea6fa32ae39c0f6ec44ca2189933fda9eb5"
integrity sha512-AIWjjg6MgcOtlvKmVoZfPPfgF+sBSWL3Zq2HSCAMvS6h7jfxz/Xv0UTFGPk5F4Wt0YHT7qMySg0Jsm0b78HZJg==
dependencies:
"@chakra-ui/form-control" "2.0.17"
"@chakra-ui/form-control" "2.0.18"
"@chakra-ui/object-utils" "2.0.8"
"@chakra-ui/react-children-utils" "2.0.6"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/layout@2.1.16":
version "2.1.16"
resolved "https://registry.yarnpkg.com/@chakra-ui/layout/-/layout-2.1.16.tgz#9d90f25cf9f0537d19cd36a417f7ddc1461e8591"
integrity sha512-QFS3feozIGsvB0H74lUocev55aRF26eNrdmhfJifwikZAiq+zzZAMdBdNU9UJhHClnMOU8/iGZ0MF7ti4zQS1A==
"@chakra-ui/layout@2.1.18":
version "2.1.18"
resolved "https://registry.yarnpkg.com/@chakra-ui/layout/-/layout-2.1.18.tgz#f5dba687dfced9145d495f3a21edb5672df6bb73"
integrity sha512-F4Gh2e+DGdaWdWT5NZduIFD9NM7Bnuh8sXARFHWPvIu7yvAwZ3ddqC9GK4F3qUngdmkJxDLWQqRSwSh96Lxbhw==
dependencies:
"@chakra-ui/breakpoint-utils" "2.0.8"
"@chakra-ui/icon" "3.0.16"
"@chakra-ui/object-utils" "2.0.8"
"@chakra-ui/react-children-utils" "2.0.6"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/lazy-utils@2.0.5":
@@ -311,17 +311,17 @@
"@chakra-ui/react-env" "3.0.0"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/menu@2.1.9":
version "2.1.9"
resolved "https://registry.yarnpkg.com/@chakra-ui/menu/-/menu-2.1.9.tgz#2f3239a9b2855fd77fc317d9e6b904c1ad50d7c6"
integrity sha512-ue5nD4QJcl3H3UwN0zZNJmH89XUebnvEdW6THAUL41hDjJ0J/Fjpg9Sgzwug2aBbBXBNbVMsUuhcCj6x91d+IQ==
"@chakra-ui/menu@2.1.12":
version "2.1.12"
resolved "https://registry.yarnpkg.com/@chakra-ui/menu/-/menu-2.1.12.tgz#ab83b7a5165bd31a6c68328d7f65a79e3412c48d"
integrity sha512-ylNK1VJlr/3/EGg9dLPZ87cBJJjeiYXeU/gOAphsKXMnByrXWhbp4YVnyyyha2KZ0zEw0aPU4nCZ+A69aT9wrg==
dependencies:
"@chakra-ui/clickable" "2.0.14"
"@chakra-ui/descendant" "3.0.13"
"@chakra-ui/descendant" "3.0.14"
"@chakra-ui/lazy-utils" "2.0.5"
"@chakra-ui/popper" "3.0.13"
"@chakra-ui/react-children-utils" "2.0.6"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-use-animation-state" "2.0.8"
"@chakra-ui/react-use-controllable-state" "2.0.8"
"@chakra-ui/react-use-disclosure" "2.0.8"
@@ -330,33 +330,33 @@
"@chakra-ui/react-use-outside-click" "2.0.7"
"@chakra-ui/react-use-update-effect" "2.0.7"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/transition" "2.0.15"
"@chakra-ui/transition" "2.0.16"
"@chakra-ui/modal@2.2.9":
version "2.2.9"
resolved "https://registry.yarnpkg.com/@chakra-ui/modal/-/modal-2.2.9.tgz#aad65a2c60aa974e023f8b3facc0e79eb742e006"
integrity sha512-nTfNp7XsVwn5+xJOtstoFA8j0kq/9sJj7KesyYzjEDaMKvCZvIOntRYowoydho43jb4+YC7ebKhp0KOIINS0gg==
"@chakra-ui/modal@2.2.11":
version "2.2.11"
resolved "https://registry.yarnpkg.com/@chakra-ui/modal/-/modal-2.2.11.tgz#8a964288759f3d681e23bfc3a837a3e2c7523f8e"
integrity sha512-2J0ZUV5tEzkPiawdkgPz6bmex7NXAde1VXooMwdvK+vuT8PV3U61yorTJOZVLdw7TjjI1Yo94mzsp6UwBud43Q==
dependencies:
"@chakra-ui/close-button" "2.0.17"
"@chakra-ui/focus-lock" "2.0.16"
"@chakra-ui/portal" "2.0.15"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/portal" "2.0.16"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-types" "2.0.7"
"@chakra-ui/react-use-merge-refs" "2.0.7"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/transition" "2.0.15"
"@chakra-ui/transition" "2.0.16"
aria-hidden "^1.2.2"
react-remove-scroll "^2.5.5"
"@chakra-ui/number-input@2.0.18":
version "2.0.18"
resolved "https://registry.yarnpkg.com/@chakra-ui/number-input/-/number-input-2.0.18.tgz#072a00ef869ebafa4960cfdee8caae8208864289"
integrity sha512-cPkyAFFHHzeFBselrT1BtjlzMkJ6TKrTDUnHFlzqXy6aqeXuhrjFhMfXucjedSpOqedsP9ZbKFTdIAhu9DdL/A==
"@chakra-ui/number-input@2.0.19":
version "2.0.19"
resolved "https://registry.yarnpkg.com/@chakra-ui/number-input/-/number-input-2.0.19.tgz#82d4522036904c04d07e7050822fc522f9b32233"
integrity sha512-HDaITvtMEqOauOrCPsARDxKD9PSHmhWywpcyCSOX0lMe4xx2aaGhU0QQFhsJsykj8Er6pytMv6t0KZksdDv3YA==
dependencies:
"@chakra-ui/counter" "2.0.14"
"@chakra-ui/form-control" "2.0.17"
"@chakra-ui/form-control" "2.0.18"
"@chakra-ui/icon" "3.0.16"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-types" "2.0.7"
"@chakra-ui/react-use-callback-ref" "2.0.7"
"@chakra-ui/react-use-event-listener" "2.0.7"
@@ -376,27 +376,27 @@
resolved "https://registry.yarnpkg.com/@chakra-ui/object-utils/-/object-utils-2.0.8.tgz#307f927f6434f99feb32ba92bdf451a6b59a6199"
integrity sha512-2upjT2JgRuiupdrtBWklKBS6tqeGMA77Nh6Q0JaoQuH/8yq+15CGckqn3IUWkWoGI0Fg3bK9LDlbbD+9DLw95Q==
"@chakra-ui/pin-input@2.0.19":
version "2.0.19"
resolved "https://registry.yarnpkg.com/@chakra-ui/pin-input/-/pin-input-2.0.19.tgz#f9b196174f0518feec5c1ee3fcaf2134c301148a"
integrity sha512-6O7s4vWz4cqQ6zvMov9sYj6ZqWAsTxR/MNGe3DNgu1zWQg8veNCYtj1rNGhNS3eZNUMAa8uM2dXIphGTP53Xow==
"@chakra-ui/pin-input@2.0.20":
version "2.0.20"
resolved "https://registry.yarnpkg.com/@chakra-ui/pin-input/-/pin-input-2.0.20.tgz#5bf115bf4282b69fc6532a9c542cbf41f815d200"
integrity sha512-IHVmerrtHN8F+jRB3W1HnMir1S1TUCWhI7qDInxqPtoRffHt6mzZgLZ0izx8p1fD4HkW4c1d4/ZLEz9uH9bBRg==
dependencies:
"@chakra-ui/descendant" "3.0.13"
"@chakra-ui/descendant" "3.0.14"
"@chakra-ui/react-children-utils" "2.0.6"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-use-controllable-state" "2.0.8"
"@chakra-ui/react-use-merge-refs" "2.0.7"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/popover@2.1.8":
version "2.1.8"
resolved "https://registry.yarnpkg.com/@chakra-ui/popover/-/popover-2.1.8.tgz#e906ce0533693d735b6e13a3a6ffe16d8e0a9ab4"
integrity sha512-ob7fAz+WWmXIq7iGHVB3wDKzZTj+T+noYBT/U1Q+jIf+jMr2WOpJLTfb0HTZcfhvn4EBFlfBg7Wk5qbXNaOn7g==
"@chakra-ui/popover@2.1.9":
version "2.1.9"
resolved "https://registry.yarnpkg.com/@chakra-ui/popover/-/popover-2.1.9.tgz#890cc0dfc5022757715ccf772ec194e7a409275f"
integrity sha512-OMJ12VVs9N32tFaZSOqikkKPtwAVwXYsES/D1pff/amBrE3ngCrpxJSIp4uvTdORfIYDojJqrR52ZplDKS9hRQ==
dependencies:
"@chakra-ui/close-button" "2.0.17"
"@chakra-ui/lazy-utils" "2.0.5"
"@chakra-ui/popper" "3.0.13"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-types" "2.0.7"
"@chakra-ui/react-use-animation-state" "2.0.8"
"@chakra-ui/react-use-disclosure" "2.0.8"
@@ -414,53 +414,53 @@
"@chakra-ui/react-use-merge-refs" "2.0.7"
"@popperjs/core" "^2.9.3"
"@chakra-ui/portal@2.0.15":
version "2.0.15"
resolved "https://registry.yarnpkg.com/@chakra-ui/portal/-/portal-2.0.15.tgz#21e1f97c4407fc15df8c365cb5cf799dac73ce41"
integrity sha512-z8v7K3j1/nMuBzp2+wRIIw7s/eipVtnXLdjK5yqbMxMRa44E8Mu5VNJLz3aQFLHXEUST+ifqrjImQeli9do6LQ==
"@chakra-ui/portal@2.0.16":
version "2.0.16"
resolved "https://registry.yarnpkg.com/@chakra-ui/portal/-/portal-2.0.16.tgz#e5ce3f9d9e559f17a95276e0c006d0e9b7703442"
integrity sha512-bVID0qbQ0l4xq38LdqAN4EKD4/uFkDnXzFwOlviC9sl0dNhzICDb1ltuH/Adl1d2HTMqyN60O3GO58eHy7plnQ==
dependencies:
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-use-safe-layout-effect" "2.0.5"
"@chakra-ui/progress@2.1.5":
version "2.1.5"
resolved "https://registry.yarnpkg.com/@chakra-ui/progress/-/progress-2.1.5.tgz#eb6a47adf2bff93971262d163461d390782a04ff"
integrity sha512-jj5Vp4lxUchuwp4RPCepM0yAyKi344bgsOd3Apd+ldxclDcewPc82fbwDu7g/Xv27LqJkT+7E/SlQy04wGrk0g==
"@chakra-ui/progress@2.1.6":
version "2.1.6"
resolved "https://registry.yarnpkg.com/@chakra-ui/progress/-/progress-2.1.6.tgz#398db20440979c37adb0a34821f805ae3471873b"
integrity sha512-hHh5Ysv4z6bK+j2GJbi/FT9CVyto2PtNUNwBmr3oNMVsoOUMoRjczfXvvYqp0EHr9PCpxqrq7sRwgQXUzhbDSw==
dependencies:
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/provider@2.1.2":
version "2.1.2"
resolved "https://registry.yarnpkg.com/@chakra-ui/provider/-/provider-2.1.2.tgz#b025cb718826b003b3c9535b6961e8f3be70ebd5"
integrity sha512-4lLlz8QuJv00BhfyKzWpzfoti9MDOdJ/MqXixJV/EZ02RMBOdE9qy9bSz/WckPC2MVhtRUuwMkxH+0QY21PXuw==
"@chakra-ui/provider@2.2.2":
version "2.2.2"
resolved "https://registry.yarnpkg.com/@chakra-ui/provider/-/provider-2.2.2.tgz#a798d1c243f33e00c85763834a7350e0d1c643ad"
integrity sha512-UVwnIDnAWq1aKroN5AF+OpNpUqLVeIUk7tKvX3z4CY9FsPFFi6LTEhRHdhpwaU1Tau3Tf9agEu5URegpY7S8BA==
dependencies:
"@chakra-ui/css-reset" "2.0.12"
"@chakra-ui/portal" "2.0.15"
"@chakra-ui/css-reset" "2.1.1"
"@chakra-ui/portal" "2.0.16"
"@chakra-ui/react-env" "3.0.0"
"@chakra-ui/system" "2.5.1"
"@chakra-ui/system" "2.5.5"
"@chakra-ui/utils" "2.0.15"
"@chakra-ui/radio@2.0.19":
version "2.0.19"
resolved "https://registry.yarnpkg.com/@chakra-ui/radio/-/radio-2.0.19.tgz#8d5c02eae8eddbced4476b1b50921ade62f0a744"
integrity sha512-PlJiV59eGSmeKP4v/4+ccQUWGRd0cjPKkj/p3L+UbOf8pl9dWm8y9kIeL5TYbghQSDv0nzkrH4+yMnnDTZjdMQ==
"@chakra-ui/radio@2.0.22":
version "2.0.22"
resolved "https://registry.yarnpkg.com/@chakra-ui/radio/-/radio-2.0.22.tgz#fad0ce7c9ba4051991ed517cac4cfe526d6d47d9"
integrity sha512-GsQ5WAnLwivWl6gPk8P1x+tCcpVakCt5R5T0HumF7DGPXKdJbjS+RaFySrbETmyTJsKY4QrfXn+g8CWVrMjPjw==
dependencies:
"@chakra-ui/form-control" "2.0.17"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/form-control" "2.0.18"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-types" "2.0.7"
"@chakra-ui/react-use-merge-refs" "2.0.7"
"@chakra-ui/shared-utils" "2.0.5"
"@zag-js/focus-visible" "0.2.1"
"@zag-js/focus-visible" "0.2.2"
"@chakra-ui/react-children-utils@2.0.6":
version "2.0.6"
resolved "https://registry.yarnpkg.com/@chakra-ui/react-children-utils/-/react-children-utils-2.0.6.tgz#6c480c6a60678fcb75cb7d57107c7a79e5179b92"
integrity sha512-QVR2RC7QsOsbWwEnq9YduhpqSFnZGvjjGREV8ygKi8ADhXh93C8azLECCUVgRJF2Wc+So1fgxmjLcbZfY2VmBA==
"@chakra-ui/react-context@2.0.7":
version "2.0.7"
resolved "https://registry.yarnpkg.com/@chakra-ui/react-context/-/react-context-2.0.7.tgz#f79a2b072d04d4280ec8799dc03a8a1af521ca2e"
integrity sha512-i7EGmSU+h2GB30cwrKB4t1R5BMHyGoJM5L2Zz7b+ZUX4aAqyPcfe97wPiQB6Rgr1ImGXrUeov4CDVrRZ2FPgLQ==
"@chakra-ui/react-context@2.0.8":
version "2.0.8"
resolved "https://registry.yarnpkg.com/@chakra-ui/react-context/-/react-context-2.0.8.tgz#5e0ed33ac3995875a21dea0e12b0ee5fc4c2e3cc"
integrity sha512-tRTKdn6lCTXM6WPjSokAAKCw2ioih7Eg8cNgaYRSwKBck8nkz9YqxgIIEj3dJD7MGtpl24S/SNI98iRWkRwR/A==
"@chakra-ui/react-env@3.0.0":
version "3.0.0"
@@ -568,12 +568,12 @@
resolved "https://registry.yarnpkg.com/@chakra-ui/react-use-safe-layout-effect/-/react-use-safe-layout-effect-2.0.5.tgz#6cf388c37fd2a42b5295a292e149b32f860a00a7"
integrity sha512-MwAQBz3VxoeFLaesaSEN87reVNVbjcQBDex2WGexAg6hUB6n4gc1OWYH/iXp4tzp4kuggBNhEHkk9BMYXWfhJQ==
"@chakra-ui/react-use-size@2.0.9":
version "2.0.9"
resolved "https://registry.yarnpkg.com/@chakra-ui/react-use-size/-/react-use-size-2.0.9.tgz#00717867b98a24c3bdcfaa0c3e70732404193486"
integrity sha512-Jce7QmO1jlQZq+Y77VKckWzroRnajChzUQ8xhLQZO6VbYvrpg3cu+X2QCz3G+MZzB+1/hnvvAqmZ+uJLd8rEJg==
"@chakra-ui/react-use-size@2.0.10":
version "2.0.10"
resolved "https://registry.yarnpkg.com/@chakra-ui/react-use-size/-/react-use-size-2.0.10.tgz#6131950852490c06e5fb3760bf64097c8057391f"
integrity sha512-fdIkH14GDnKQrtQfxX8N3gxbXRPXEl67Y3zeD9z4bKKcQUAYIMqs0MsPZY+FMpGQw8QqafM44nXfL038aIrC5w==
dependencies:
"@zag-js/element-size" "0.3.1"
"@zag-js/element-size" "0.3.2"
"@chakra-ui/react-use-timeout@2.0.5":
version "2.0.5"
@@ -594,69 +594,69 @@
dependencies:
"@chakra-ui/utils" "2.0.15"
"@chakra-ui/react@^2.5.1":
version "2.5.1"
resolved "https://registry.yarnpkg.com/@chakra-ui/react/-/react-2.5.1.tgz#05414db2b512bd4402e42eecc6b915d85102c576"
integrity sha512-ugkaqfcNMb9L4TkalWiF3rnqfr0TlUUD46JZaDIZiORVisaSwXTZTQrVfG40VghhaJT28rnC5WtiE8kd567ZBQ==
"@chakra-ui/react@^2.5.5":
version "2.5.5"
resolved "https://registry.yarnpkg.com/@chakra-ui/react/-/react-2.5.5.tgz#5ae2450ec0d10d63e1314747466f21cf542032ff"
integrity sha512-aBVMUtdWv2MrptD/tKSqICPsuJ+I+jvauegffO1qPUDlK3RrXIDeOHkLGWohgXNcjY5bGVWguFEzJm97//0ooQ==
dependencies:
"@chakra-ui/accordion" "2.1.9"
"@chakra-ui/alert" "2.0.17"
"@chakra-ui/avatar" "2.2.5"
"@chakra-ui/breadcrumb" "2.1.4"
"@chakra-ui/button" "2.0.16"
"@chakra-ui/accordion" "2.1.11"
"@chakra-ui/alert" "2.1.0"
"@chakra-ui/avatar" "2.2.8"
"@chakra-ui/breadcrumb" "2.1.5"
"@chakra-ui/button" "2.0.18"
"@chakra-ui/card" "2.1.6"
"@chakra-ui/checkbox" "2.2.10"
"@chakra-ui/checkbox" "2.2.14"
"@chakra-ui/close-button" "2.0.17"
"@chakra-ui/control-box" "2.0.13"
"@chakra-ui/counter" "2.0.14"
"@chakra-ui/css-reset" "2.0.12"
"@chakra-ui/editable" "2.0.19"
"@chakra-ui/css-reset" "2.1.1"
"@chakra-ui/editable" "2.0.21"
"@chakra-ui/focus-lock" "2.0.16"
"@chakra-ui/form-control" "2.0.17"
"@chakra-ui/form-control" "2.0.18"
"@chakra-ui/hooks" "2.1.6"
"@chakra-ui/icon" "3.0.16"
"@chakra-ui/image" "2.0.15"
"@chakra-ui/input" "2.0.20"
"@chakra-ui/layout" "2.1.16"
"@chakra-ui/input" "2.0.21"
"@chakra-ui/layout" "2.1.18"
"@chakra-ui/live-region" "2.0.13"
"@chakra-ui/media-query" "3.2.12"
"@chakra-ui/menu" "2.1.9"
"@chakra-ui/modal" "2.2.9"
"@chakra-ui/number-input" "2.0.18"
"@chakra-ui/pin-input" "2.0.19"
"@chakra-ui/popover" "2.1.8"
"@chakra-ui/menu" "2.1.12"
"@chakra-ui/modal" "2.2.11"
"@chakra-ui/number-input" "2.0.19"
"@chakra-ui/pin-input" "2.0.20"
"@chakra-ui/popover" "2.1.9"
"@chakra-ui/popper" "3.0.13"
"@chakra-ui/portal" "2.0.15"
"@chakra-ui/progress" "2.1.5"
"@chakra-ui/provider" "2.1.2"
"@chakra-ui/radio" "2.0.19"
"@chakra-ui/portal" "2.0.16"
"@chakra-ui/progress" "2.1.6"
"@chakra-ui/provider" "2.2.2"
"@chakra-ui/radio" "2.0.22"
"@chakra-ui/react-env" "3.0.0"
"@chakra-ui/select" "2.0.18"
"@chakra-ui/select" "2.0.19"
"@chakra-ui/skeleton" "2.0.24"
"@chakra-ui/slider" "2.0.21"
"@chakra-ui/slider" "2.0.23"
"@chakra-ui/spinner" "2.0.13"
"@chakra-ui/stat" "2.0.17"
"@chakra-ui/styled-system" "2.6.1"
"@chakra-ui/switch" "2.0.22"
"@chakra-ui/system" "2.5.1"
"@chakra-ui/table" "2.0.16"
"@chakra-ui/tabs" "2.1.8"
"@chakra-ui/tag" "2.0.17"
"@chakra-ui/textarea" "2.0.18"
"@chakra-ui/theme" "2.2.5"
"@chakra-ui/theme-utils" "2.0.11"
"@chakra-ui/toast" "6.0.1"
"@chakra-ui/tooltip" "2.2.6"
"@chakra-ui/transition" "2.0.15"
"@chakra-ui/stat" "2.0.18"
"@chakra-ui/styled-system" "2.8.0"
"@chakra-ui/switch" "2.0.26"
"@chakra-ui/system" "2.5.5"
"@chakra-ui/table" "2.0.17"
"@chakra-ui/tabs" "2.1.9"
"@chakra-ui/tag" "3.0.0"
"@chakra-ui/textarea" "2.0.19"
"@chakra-ui/theme" "3.0.1"
"@chakra-ui/theme-utils" "2.0.15"
"@chakra-ui/toast" "6.1.1"
"@chakra-ui/tooltip" "2.2.7"
"@chakra-ui/transition" "2.0.16"
"@chakra-ui/utils" "2.0.15"
"@chakra-ui/visually-hidden" "2.0.15"
"@chakra-ui/select@2.0.18":
version "2.0.18"
resolved "https://registry.yarnpkg.com/@chakra-ui/select/-/select-2.0.18.tgz#4eb6092610067c1b4131353fe39b4466e251395b"
integrity sha512-1d2lUT5LM6oOs5x4lzBh4GFDuXX62+lr+sgV7099g951/5UNbb0CS2hSZHsO7yZThLNbr7QTWZvAOAayVcGzdw==
"@chakra-ui/select@2.0.19":
version "2.0.19"
resolved "https://registry.yarnpkg.com/@chakra-ui/select/-/select-2.0.19.tgz#957e95a17a890d8c0a851e2f00a8d8dd17932d66"
integrity sha512-eAlFh+JhwtJ17OrB6fO6gEAGOMH18ERNrXLqWbYLrs674Le7xuREgtuAYDoxUzvYXYYTTdOJtVbcHGriI3o6rA==
dependencies:
"@chakra-ui/form-control" "2.0.17"
"@chakra-ui/form-control" "2.0.18"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/shared-utils@2.0.5":
@@ -673,20 +673,20 @@
"@chakra-ui/react-use-previous" "2.0.5"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/slider@2.0.21":
version "2.0.21"
resolved "https://registry.yarnpkg.com/@chakra-ui/slider/-/slider-2.0.21.tgz#f65b15bf0d5f827699ff9a2d6faee35006e2bfce"
integrity sha512-Mm76yJxEqJl21+3waEcKg3tM8Y4elJ7mcViN6Brj35PTfzUJfSJxeBGo1nLPJ+X5jLj7o/L4kfBmUk3lY4QYEQ==
"@chakra-ui/slider@2.0.23":
version "2.0.23"
resolved "https://registry.yarnpkg.com/@chakra-ui/slider/-/slider-2.0.23.tgz#9130c7aee8ca876be64d1aeba6b84fe421c8207b"
integrity sha512-/eyRUXLla+ZdBUPXpakE3SAS2JS8mIJR6qcUYiPVKSpRAi6tMyYeQijAXn2QC1AUVd2JrG8Pz+1Jy7Po3uA7cA==
dependencies:
"@chakra-ui/number-utils" "2.0.7"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-types" "2.0.7"
"@chakra-ui/react-use-callback-ref" "2.0.7"
"@chakra-ui/react-use-controllable-state" "2.0.8"
"@chakra-ui/react-use-latest-ref" "2.0.5"
"@chakra-ui/react-use-merge-refs" "2.0.7"
"@chakra-ui/react-use-pan-event" "2.0.9"
"@chakra-ui/react-use-size" "2.0.9"
"@chakra-ui/react-use-size" "2.0.10"
"@chakra-ui/react-use-update-effect" "2.0.7"
"@chakra-ui/spinner@2.0.13":
@@ -696,82 +696,82 @@
dependencies:
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/stat@2.0.17":
version "2.0.17"
resolved "https://registry.yarnpkg.com/@chakra-ui/stat/-/stat-2.0.17.tgz#2cd712cc7e0d58d9cbd542deea911f1b0925074f"
integrity sha512-PhD+5oVLWjQmGLfeZSmexp3AtLcaggWBwoMZ4z8QMZIQzf/fJJWMk0bMqxlpTv8ORDkfY/4ImuFB/RJHvcqlcA==
"@chakra-ui/stat@2.0.18":
version "2.0.18"
resolved "https://registry.yarnpkg.com/@chakra-ui/stat/-/stat-2.0.18.tgz#9e5d21d162b7cf2cf92065c19291ead2d4660772"
integrity sha512-wKyfBqhVlIs9bkSerUc6F9KJMw0yTIEKArW7dejWwzToCLPr47u+CtYO6jlJHV6lRvkhi4K4Qc6pyvtJxZ3VpA==
dependencies:
"@chakra-ui/icon" "3.0.16"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/styled-system@2.6.1":
version "2.6.1"
resolved "https://registry.yarnpkg.com/@chakra-ui/styled-system/-/styled-system-2.6.1.tgz#302d496d34c0b7b30c646a7e3c9b113a2f4588da"
integrity sha512-jy/1dVi1LxjoRCm+Eo5mqBgvPy5SCWMlIcz6GbIZBDpkGeKZwtqrZLjekxxLBCy8ORY+kJlUB0FT6AzVR/1tjw==
"@chakra-ui/styled-system@2.8.0":
version "2.8.0"
resolved "https://registry.yarnpkg.com/@chakra-ui/styled-system/-/styled-system-2.8.0.tgz#c02aa7b4a15bd826c19d055cd226bd44f7470f26"
integrity sha512-bmRv/8ACJGGKGx84U1npiUddwdNifJ+/ETklGwooS5APM0ymwUtBYZpFxjYNJrqvVYpg3mVY6HhMyBVptLS7iA==
dependencies:
"@chakra-ui/shared-utils" "2.0.5"
csstype "^3.0.11"
lodash.mergewith "4.6.2"
"@chakra-ui/switch@2.0.22":
version "2.0.22"
resolved "https://registry.yarnpkg.com/@chakra-ui/switch/-/switch-2.0.22.tgz#7b35e2b10ea4cf91fb49f5175b4335c61dcd25b3"
integrity sha512-+/Yy6y7VFD91uSPruF8ZvePi3tl5D8UNVATtWEQ+QBI92DLSM+PtgJ2F0Y9GMZ9NzMxpZ80DqwY7/kqcPCfLvw==
"@chakra-ui/switch@2.0.26":
version "2.0.26"
resolved "https://registry.yarnpkg.com/@chakra-ui/switch/-/switch-2.0.26.tgz#b93eeafd788e47c21222524adceffe9ef62602d6"
integrity sha512-x62lF6VazSZJQuVxosChVR6+0lIJe8Pxgkl/C9vxjhp2yVYb3mew5tcX/sDOu0dYZy8ro/9hMfGkdN4r9xEU8A==
dependencies:
"@chakra-ui/checkbox" "2.2.10"
"@chakra-ui/checkbox" "2.2.14"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/system@2.5.1":
version "2.5.1"
resolved "https://registry.yarnpkg.com/@chakra-ui/system/-/system-2.5.1.tgz#bc03a11ae31e795966c7618280548d5cd866f47e"
integrity sha512-4+86OrcSoq7lGkm5fh+sJ3IWXSTzjz+HOllRbCW2Rtnmcg7ritiXVNV2VygEg2DrCcx5+tNqRHDM764zW+AEug==
"@chakra-ui/system@2.5.5":
version "2.5.5"
resolved "https://registry.yarnpkg.com/@chakra-ui/system/-/system-2.5.5.tgz#b8b070d07ca9b0190363100396eea02cca754cec"
integrity sha512-52BIp/Zyvefgxn5RTByfkTeG4J+y81LWEjWm8jCaRFsLVm8IFgqIrngtcq4I7gD5n/UKbneHlb4eLHo4uc5yDQ==
dependencies:
"@chakra-ui/color-mode" "2.1.12"
"@chakra-ui/object-utils" "2.0.8"
"@chakra-ui/react-utils" "2.0.12"
"@chakra-ui/styled-system" "2.6.1"
"@chakra-ui/theme-utils" "2.0.11"
"@chakra-ui/styled-system" "2.8.0"
"@chakra-ui/theme-utils" "2.0.15"
"@chakra-ui/utils" "2.0.15"
react-fast-compare "3.2.0"
react-fast-compare "3.2.1"
"@chakra-ui/table@2.0.16":
version "2.0.16"
resolved "https://registry.yarnpkg.com/@chakra-ui/table/-/table-2.0.16.tgz#e69736cba5cfb218c5e40592ad9280c6e32f6fe7"
integrity sha512-vWDXZ6Ad3Aj66curp1tZBHvCfQHX2FJ4ijLiqGgQszWFIchfhJ5vMgEBJaFMZ+BN1draAjuRTZqaQefOApzvRg==
"@chakra-ui/table@2.0.17":
version "2.0.17"
resolved "https://registry.yarnpkg.com/@chakra-ui/table/-/table-2.0.17.tgz#ad394dc6dcbe5a8a9e6d899997ecca3471603977"
integrity sha512-OScheTEp1LOYvTki2NFwnAYvac8siAhW9BI5RKm5f5ORL2gVJo4I72RUqE0aKe1oboxgm7CYt5afT5PS5cG61A==
dependencies:
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/tabs@2.1.8":
version "2.1.8"
resolved "https://registry.yarnpkg.com/@chakra-ui/tabs/-/tabs-2.1.8.tgz#e83071380f9a3633810308d45de51be7a74f5eb9"
integrity sha512-B7LeFN04Ny2jsSy5TFOQxnbZ6ITxGxLxsB2PE0vvQjMSblBrUryOxdjw80HZhfiw6od0ikK9CeKQOIt9QCguSw==
"@chakra-ui/tabs@2.1.9":
version "2.1.9"
resolved "https://registry.yarnpkg.com/@chakra-ui/tabs/-/tabs-2.1.9.tgz#2e5214cb453c6cc0c240e82bd88af1042fc6fe0e"
integrity sha512-Yf8e0kRvaGM6jfkJum0aInQ0U3ZlCafmrYYni2lqjcTtThqu+Yosmo3iYlnullXxCw5MVznfrkb9ySvgQowuYg==
dependencies:
"@chakra-ui/clickable" "2.0.14"
"@chakra-ui/descendant" "3.0.13"
"@chakra-ui/descendant" "3.0.14"
"@chakra-ui/lazy-utils" "2.0.5"
"@chakra-ui/react-children-utils" "2.0.6"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-use-controllable-state" "2.0.8"
"@chakra-ui/react-use-merge-refs" "2.0.7"
"@chakra-ui/react-use-safe-layout-effect" "2.0.5"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/tag@2.0.17":
version "2.0.17"
resolved "https://registry.yarnpkg.com/@chakra-ui/tag/-/tag-2.0.17.tgz#97adb86db190ddb3526060b78c590392e0ac8b4c"
integrity sha512-A47zE9Ft9qxOJ+5r1cUseKRCoEdqCRzFm0pOtZgRcckqavglk75Xjgz8HbBpUO2zqqd49MlqdOwR8o87fXS1vg==
"@chakra-ui/tag@3.0.0":
version "3.0.0"
resolved "https://registry.yarnpkg.com/@chakra-ui/tag/-/tag-3.0.0.tgz#d86cdab59bb3ff7fc628c2dbe7a5ff1b36bd3e96"
integrity sha512-YWdMmw/1OWRwNkG9pX+wVtZio+B89odaPj6XeMn5nfNN8+jyhIEpouWv34+CO9G0m1lupJTxPSfgLAd7cqXZMA==
dependencies:
"@chakra-ui/icon" "3.0.16"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/textarea@2.0.18":
version "2.0.18"
resolved "https://registry.yarnpkg.com/@chakra-ui/textarea/-/textarea-2.0.18.tgz#da6d629b465f65bbc7b48039c2e48a4ae1d853b4"
integrity sha512-aGHHb29vVifO0OtcK/k8cMykzjOKo/coDTU0NJqz7OOLAWIMNV2eGenvmO1n9tTZbmbqHiX+Sa1nPRX+pd14lg==
"@chakra-ui/textarea@2.0.19":
version "2.0.19"
resolved "https://registry.yarnpkg.com/@chakra-ui/textarea/-/textarea-2.0.19.tgz#470b459f9cb3255d2abbe07d46b0a5b60a6a32c5"
integrity sha512-adJk+qVGsFeJDvfn56CcJKKse8k7oMGlODrmpnpTdF+xvlsiTM+1GfaJvgNSpHHuQFdz/A0z1uJtfGefk0G2ZA==
dependencies:
"@chakra-ui/form-control" "2.0.17"
"@chakra-ui/form-control" "2.0.18"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/theme-tools@2.0.17":
@@ -783,57 +783,57 @@
"@chakra-ui/shared-utils" "2.0.5"
color2k "^2.0.0"
"@chakra-ui/theme-utils@2.0.11":
version "2.0.11"
resolved "https://registry.yarnpkg.com/@chakra-ui/theme-utils/-/theme-utils-2.0.11.tgz#c01b1d14fdd63326d1ad11fd8f0872921ea43872"
integrity sha512-lBAay6Sq3/fl7exd3mFxWAbzgdQowytor0fnlHrpNStn1HgFjXukwsf6356XQOie2Vd8qaMM7qZtMh4AiC0dcg==
"@chakra-ui/theme-utils@2.0.15":
version "2.0.15"
resolved "https://registry.yarnpkg.com/@chakra-ui/theme-utils/-/theme-utils-2.0.15.tgz#968a5e8c47bb403323fe67049c7b751a6e47f069"
integrity sha512-UuxtEgE7gwMTGDXtUpTOI7F5X0iHB9ekEOG5PWPn2wWBL7rlk2JtPI7UP5Um5Yg6vvBfXYGK1ySahxqsgf+87g==
dependencies:
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/styled-system" "2.6.1"
"@chakra-ui/theme" "2.2.5"
"@chakra-ui/styled-system" "2.8.0"
"@chakra-ui/theme" "3.0.1"
lodash.mergewith "4.6.2"
"@chakra-ui/theme@2.2.5":
version "2.2.5"
resolved "https://registry.yarnpkg.com/@chakra-ui/theme/-/theme-2.2.5.tgz#18ed1755ff27c1ff1f1a77083ffc546c361c926e"
integrity sha512-hYASZMwu0NqEv6PPydu+F3I+kMNd44yR4TwjR/lXBz/LEh64L6UPY6kQjebCfgdVtsGdl3HKg+eLlfa7SvfRgw==
"@chakra-ui/theme@3.0.1":
version "3.0.1"
resolved "https://registry.yarnpkg.com/@chakra-ui/theme/-/theme-3.0.1.tgz#151fc5d1e23d0fd0cd29d28acf8f6017269c13fc"
integrity sha512-92kDm/Ux/51uJqhRKevQo/O/rdwucDYcpHg2QuwzdAxISCeYvgtl2TtgOOl5EnqEP0j3IEAvZHZUlv8TTbawaw==
dependencies:
"@chakra-ui/anatomy" "2.1.2"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/theme-tools" "2.0.17"
"@chakra-ui/toast@6.0.1":
version "6.0.1"
resolved "https://registry.yarnpkg.com/@chakra-ui/toast/-/toast-6.0.1.tgz#726b67a57cdd592320bb3f450c66d007a2a1d902"
integrity sha512-ej2kJXvu/d2h6qnXU5D8XTyw0qpsfmbiU7hUffo/sPxkz89AUOQ08RUuUmB1ssW/FZcQvNMJ5WgzCTKHGBxtxw==
"@chakra-ui/toast@6.1.1":
version "6.1.1"
resolved "https://registry.yarnpkg.com/@chakra-ui/toast/-/toast-6.1.1.tgz#7ca78f38069bc87fa75b64de76c8fc2758bdf419"
integrity sha512-JtjIKkPVjEu8okGGCipCxNVgK/15h5AicTATZ6RbG2MsHmr4GfKG3fUCvpbuZseArqmLqGLQZQJjVE9vJzaSkQ==
dependencies:
"@chakra-ui/alert" "2.0.17"
"@chakra-ui/alert" "2.1.0"
"@chakra-ui/close-button" "2.0.17"
"@chakra-ui/portal" "2.0.15"
"@chakra-ui/react-context" "2.0.7"
"@chakra-ui/portal" "2.0.16"
"@chakra-ui/react-context" "2.0.8"
"@chakra-ui/react-use-timeout" "2.0.5"
"@chakra-ui/react-use-update-effect" "2.0.7"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/styled-system" "2.6.1"
"@chakra-ui/theme" "2.2.5"
"@chakra-ui/styled-system" "2.8.0"
"@chakra-ui/theme" "3.0.1"
"@chakra-ui/tooltip@2.2.6":
version "2.2.6"
resolved "https://registry.yarnpkg.com/@chakra-ui/tooltip/-/tooltip-2.2.6.tgz#a38f9ff2dd8a574c8cf49526c3846533455f8ddd"
integrity sha512-4cbneidZ5+HCWge3OZzewRQieIvhDjSsl+scrl4Scx7E0z3OmqlTIESU5nGIZDBLYqKn/UirEZhqaQ33FOS2fw==
"@chakra-ui/tooltip@2.2.7":
version "2.2.7"
resolved "https://registry.yarnpkg.com/@chakra-ui/tooltip/-/tooltip-2.2.7.tgz#7c305efb057a5fe4694b1b8d82395aec776d8f57"
integrity sha512-ImUJ6NnVqARaYqpgtO+kzucDRmxo8AF3jMjARw0bx2LxUkKwgRCOEaaRK5p5dHc0Kr6t5/XqjDeUNa19/sLauA==
dependencies:
"@chakra-ui/popper" "3.0.13"
"@chakra-ui/portal" "2.0.15"
"@chakra-ui/portal" "2.0.16"
"@chakra-ui/react-types" "2.0.7"
"@chakra-ui/react-use-disclosure" "2.0.8"
"@chakra-ui/react-use-event-listener" "2.0.7"
"@chakra-ui/react-use-merge-refs" "2.0.7"
"@chakra-ui/shared-utils" "2.0.5"
"@chakra-ui/transition@2.0.15":
version "2.0.15"
resolved "https://registry.yarnpkg.com/@chakra-ui/transition/-/transition-2.0.15.tgz#c640df2ea82f5ad58c55a6e1a7c338f377cb96d8"
integrity sha512-o9LBK/llQfUDHF/Ty3cQ6nShpekKTqHUoJlUOzNKhoTsNpoRerr9v0jwojrX1YI02KtVjfhFU6PiqXlDfREoNw==
"@chakra-ui/transition@2.0.16":
version "2.0.16"
resolved "https://registry.yarnpkg.com/@chakra-ui/transition/-/transition-2.0.16.tgz#498c91e6835bb5d950fd1d1402f483b85f7dcd87"
integrity sha512-E+RkwlPc3H7P1crEXmXwDXMB2lqY2LLia2P5siQ4IEnRWIgZXlIw+8Em+NtHNgusel2N+9yuB0wT9SeZZeZ3CQ==
dependencies:
"@chakra-ui/shared-utils" "2.0.5"
@@ -1081,6 +1081,18 @@
resolved "https://registry.yarnpkg.com/@esbuild/win32-x64/-/win32-x64-0.16.17.tgz#c5a1a4bfe1b57f0c3e61b29883525c6da3e5c091"
integrity sha512-y+EHuSchhL7FjHgvQL/0fnnFmO4T1bhvWANX6gcnqTjtnKWbTvUMCpGnv2+t+31d7RzyEAYAd4u2fnIhHL6N/Q==
"@eslint-community/eslint-utils@^4.2.0":
version "4.4.0"
resolved "https://registry.yarnpkg.com/@eslint-community/eslint-utils/-/eslint-utils-4.4.0.tgz#a23514e8fb9af1269d5f7788aa556798d61c6b59"
integrity sha512-1/sA4dwrzBAyeUoQ6oxahHKmrZvsnLCg4RfxW3ZFGGmQkSNQPFNLV9CUEFQP1x9EYXHTo5p6xdhZM1Ne9p/AfA==
dependencies:
eslint-visitor-keys "^3.3.0"
"@eslint-community/regexpp@^4.4.0":
version "4.5.0"
resolved "https://registry.yarnpkg.com/@eslint-community/regexpp/-/regexpp-4.5.0.tgz#f6f729b02feee2c749f57e334b7a1b5f40a81724"
integrity sha512-vITaYzIcNmjn5tF5uxcZ/ft7/RXGrMUIS9HalWckEOF6ESiwXKoMzAQf2UW0aVd6rnOeExTJVd5hmWXucBKGXQ==
"@eslint/eslintrc@^1.4.1":
version "1.4.1"
resolved "https://registry.yarnpkg.com/@eslint/eslintrc/-/eslintrc-1.4.1.tgz#af58772019a2d271b7e2d4c23ff4ddcba3ccfb3e"
@@ -1768,47 +1780,47 @@
resolved "https://registry.yarnpkg.com/@types/uuid/-/uuid-9.0.0.tgz#53ef263e5239728b56096b0a869595135b7952d2"
integrity sha512-kr90f+ERiQtKWMz5rP32ltJ/BtULDI5RVO0uavn1HQUOwjx0R1h0rnDYNL0CepF1zL5bSY6FISAfd9tOdDhU5Q==
"@typescript-eslint/eslint-plugin@^5.52.0":
version "5.52.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/eslint-plugin/-/eslint-plugin-5.52.0.tgz#5fb0d43574c2411f16ea80f5fc335b8eaa7b28a8"
integrity sha512-lHazYdvYVsBokwCdKOppvYJKaJ4S41CgKBcPvyd0xjZNbvQdhn/pnJlGtQksQ/NhInzdaeaSarlBjDXHuclEbg==
"@typescript-eslint/eslint-plugin@^5.57.0":
version "5.57.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/eslint-plugin/-/eslint-plugin-5.57.0.tgz#52c8a7a4512f10e7249ca1e2e61f81c62c34365c"
integrity sha512-itag0qpN6q2UMM6Xgk6xoHa0D0/P+M17THnr4SVgqn9Rgam5k/He33MA7/D7QoJcdMxHFyX7U9imaBonAX/6qA==
dependencies:
"@typescript-eslint/scope-manager" "5.52.0"
"@typescript-eslint/type-utils" "5.52.0"
"@typescript-eslint/utils" "5.52.0"
"@eslint-community/regexpp" "^4.4.0"
"@typescript-eslint/scope-manager" "5.57.0"
"@typescript-eslint/type-utils" "5.57.0"
"@typescript-eslint/utils" "5.57.0"
debug "^4.3.4"
grapheme-splitter "^1.0.4"
ignore "^5.2.0"
natural-compare-lite "^1.4.0"
regexpp "^3.2.0"
semver "^7.3.7"
tsutils "^3.21.0"
"@typescript-eslint/parser@^5.52.0":
version "5.52.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/parser/-/parser-5.52.0.tgz#73c136df6c0133f1d7870de7131ccf356f5be5a4"
integrity sha512-e2KiLQOZRo4Y0D/b+3y08i3jsekoSkOYStROYmPUnGMEoA0h+k2qOH5H6tcjIc68WDvGwH+PaOrP1XRzLJ6QlA==
"@typescript-eslint/parser@^5.57.0":
version "5.57.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/parser/-/parser-5.57.0.tgz#f675bf2cd1a838949fd0de5683834417b757e4fa"
integrity sha512-orrduvpWYkgLCyAdNtR1QIWovcNZlEm6yL8nwH/eTxWLd8gsP+25pdLHYzL2QdkqrieaDwLpytHqycncv0woUQ==
dependencies:
"@typescript-eslint/scope-manager" "5.52.0"
"@typescript-eslint/types" "5.52.0"
"@typescript-eslint/typescript-estree" "5.52.0"
"@typescript-eslint/scope-manager" "5.57.0"
"@typescript-eslint/types" "5.57.0"
"@typescript-eslint/typescript-estree" "5.57.0"
debug "^4.3.4"
"@typescript-eslint/scope-manager@5.52.0":
version "5.52.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/scope-manager/-/scope-manager-5.52.0.tgz#a993d89a0556ea16811db48eabd7c5b72dcb83d1"
integrity sha512-AR7sxxfBKiNV0FWBSARxM8DmNxrwgnYMPwmpkC1Pl1n+eT8/I2NAUPuwDy/FmDcC6F8pBfmOcaxcxRHspgOBMw==
"@typescript-eslint/scope-manager@5.57.0":
version "5.57.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/scope-manager/-/scope-manager-5.57.0.tgz#79ccd3fa7bde0758059172d44239e871e087ea36"
integrity sha512-NANBNOQvllPlizl9LatX8+MHi7bx7WGIWYjPHDmQe5Si/0YEYfxSljJpoTyTWFTgRy3X8gLYSE4xQ2U+aCozSw==
dependencies:
"@typescript-eslint/types" "5.52.0"
"@typescript-eslint/visitor-keys" "5.52.0"
"@typescript-eslint/types" "5.57.0"
"@typescript-eslint/visitor-keys" "5.57.0"
"@typescript-eslint/type-utils@5.52.0":
version "5.52.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/type-utils/-/type-utils-5.52.0.tgz#9fd28cd02e6f21f5109e35496df41893f33167aa"
integrity sha512-tEKuUHfDOv852QGlpPtB3lHOoig5pyFQN/cUiZtpw99D93nEBjexRLre5sQZlkMoHry/lZr8qDAt2oAHLKA6Jw==
"@typescript-eslint/type-utils@5.57.0":
version "5.57.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/type-utils/-/type-utils-5.57.0.tgz#98e7531c4e927855d45bd362de922a619b4319f2"
integrity sha512-kxXoq9zOTbvqzLbdNKy1yFrxLC6GDJFE2Yuo3KqSwTmDOFjUGeWSakgoXT864WcK5/NAJkkONCiKb1ddsqhLXQ==
dependencies:
"@typescript-eslint/typescript-estree" "5.52.0"
"@typescript-eslint/utils" "5.52.0"
"@typescript-eslint/typescript-estree" "5.57.0"
"@typescript-eslint/utils" "5.57.0"
debug "^4.3.4"
tsutils "^3.21.0"
@@ -1822,13 +1834,18 @@
resolved "https://registry.yarnpkg.com/@typescript-eslint/types/-/types-5.52.0.tgz#19e9abc6afb5bd37a1a9bea877a1a836c0b3241b"
integrity sha512-oV7XU4CHYfBhk78fS7tkum+/Dpgsfi91IIDy7fjCyq2k6KB63M6gMC0YIvy+iABzmXThCRI6xpCEyVObBdWSDQ==
"@typescript-eslint/typescript-estree@5.52.0", "@typescript-eslint/typescript-estree@^5.13.0":
version "5.52.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/typescript-estree/-/typescript-estree-5.52.0.tgz#6408cb3c2ccc01c03c278cb201cf07e73347dfca"
integrity sha512-WeWnjanyEwt6+fVrSR0MYgEpUAuROxuAH516WPjUblIrClzYJj0kBbjdnbQXLpgAN8qbEuGywiQsXUVDiAoEuQ==
"@typescript-eslint/types@5.57.0":
version "5.57.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/types/-/types-5.57.0.tgz#727bfa2b64c73a4376264379cf1f447998eaa132"
integrity sha512-mxsod+aZRSyLT+jiqHw1KK6xrANm19/+VFALVFP5qa/aiJnlP38qpyaTd0fEKhWvQk6YeNZ5LGwI1pDpBRBhtQ==
"@typescript-eslint/typescript-estree@5.57.0":
version "5.57.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/typescript-estree/-/typescript-estree-5.57.0.tgz#ebcd0ee3e1d6230e888d88cddf654252d41e2e40"
integrity sha512-LTzQ23TV82KpO8HPnWuxM2V7ieXW8O142I7hQTxWIHDcCEIjtkat6H96PFkYBQqGFLW/G/eVVOB9Z8rcvdY/Vw==
dependencies:
"@typescript-eslint/types" "5.52.0"
"@typescript-eslint/visitor-keys" "5.52.0"
"@typescript-eslint/types" "5.57.0"
"@typescript-eslint/visitor-keys" "5.57.0"
debug "^4.3.4"
globby "^11.1.0"
is-glob "^4.0.3"
@@ -1848,18 +1865,31 @@
semver "^7.3.5"
tsutils "^3.21.0"
"@typescript-eslint/utils@5.52.0":
"@typescript-eslint/typescript-estree@^5.13.0":
version "5.52.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/utils/-/utils-5.52.0.tgz#b260bb5a8f6b00a0ed51db66bdba4ed5e4845a72"
integrity sha512-As3lChhrbwWQLNk2HC8Ree96hldKIqk98EYvypd3It8Q1f8d5zWyIoaZEp2va5667M4ZyE7X8UUR+azXrFl+NA==
resolved "https://registry.yarnpkg.com/@typescript-eslint/typescript-estree/-/typescript-estree-5.52.0.tgz#6408cb3c2ccc01c03c278cb201cf07e73347dfca"
integrity sha512-WeWnjanyEwt6+fVrSR0MYgEpUAuROxuAH516WPjUblIrClzYJj0kBbjdnbQXLpgAN8qbEuGywiQsXUVDiAoEuQ==
dependencies:
"@typescript-eslint/types" "5.52.0"
"@typescript-eslint/visitor-keys" "5.52.0"
debug "^4.3.4"
globby "^11.1.0"
is-glob "^4.0.3"
semver "^7.3.7"
tsutils "^3.21.0"
"@typescript-eslint/utils@5.57.0":
version "5.57.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/utils/-/utils-5.57.0.tgz#eab8f6563a2ac31f60f3e7024b91bf75f43ecef6"
integrity sha512-ps/4WohXV7C+LTSgAL5CApxvxbMkl9B9AUZRtnEFonpIxZDIT7wC1xfvuJONMidrkB9scs4zhtRyIwHh4+18kw==
dependencies:
"@eslint-community/eslint-utils" "^4.2.0"
"@types/json-schema" "^7.0.9"
"@types/semver" "^7.3.12"
"@typescript-eslint/scope-manager" "5.52.0"
"@typescript-eslint/types" "5.52.0"
"@typescript-eslint/typescript-estree" "5.52.0"
"@typescript-eslint/scope-manager" "5.57.0"
"@typescript-eslint/types" "5.57.0"
"@typescript-eslint/typescript-estree" "5.57.0"
eslint-scope "^5.1.1"
eslint-utils "^3.0.0"
semver "^7.3.7"
"@typescript-eslint/visitor-keys@4.33.0":
@@ -1878,6 +1908,14 @@
"@typescript-eslint/types" "5.52.0"
eslint-visitor-keys "^3.3.0"
"@typescript-eslint/visitor-keys@5.57.0":
version "5.57.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/visitor-keys/-/visitor-keys-5.57.0.tgz#e2b2f4174aff1d15eef887ce3d019ecc2d7a8ac1"
integrity sha512-ery2g3k0hv5BLiKpPuwYt9KBkAp2ugT6VvyShXdLOkax895EC55sP0Tx5L0fZaQueiK3fBLvHVvEl3jFS5ia+g==
dependencies:
"@typescript-eslint/types" "5.57.0"
eslint-visitor-keys "^3.3.0"
"@vitejs/plugin-react-swc@^3.2.0":
version "3.2.0"
resolved "https://registry.yarnpkg.com/@vitejs/plugin-react-swc/-/plugin-react-swc-3.2.0.tgz#7c4f6e116a296c27f680d05750f9dbf798cf7709"
@@ -1890,15 +1928,15 @@
resolved "https://registry.yarnpkg.com/@yarnpkg/lockfile/-/lockfile-1.1.0.tgz#e77a97fbd345b76d83245edcd17d393b1b41fb31"
integrity sha512-GpSwvyXOcOOlV70vbnzjj4fW5xW/FdUF6nQEt1ENy7m4ZCczi1+/buVUPAqmGfqznsORNFzUMjctTIp8a9tuCQ==
"@zag-js/element-size@0.3.1":
version "0.3.1"
resolved "https://registry.yarnpkg.com/@zag-js/element-size/-/element-size-0.3.1.tgz#f9f6ae98355e2250d18d0f6e2f1134a0ae4c6a2f"
integrity sha512-jR5j4G//bRzcxwAACWi9EfITnwjNmn10LxF4NmALrdZU7/PNWP3uUCdhCxd/0SCyeiJXUl0yvD57rWAbKPs1nw==
"@zag-js/element-size@0.3.2":
version "0.3.2"
resolved "https://registry.yarnpkg.com/@zag-js/element-size/-/element-size-0.3.2.tgz#ebb76af2a024230482406db41344598d1a9f54f4"
integrity sha512-bVvvigUGvAuj7PCkE5AbzvTJDTw5f3bg9nQdv+ErhVN8SfPPppLJEmmWdxqsRzrHXgx8ypJt/+Ty0kjtISVDsQ==
"@zag-js/focus-visible@0.2.1":
version "0.2.1"
resolved "https://registry.yarnpkg.com/@zag-js/focus-visible/-/focus-visible-0.2.1.tgz#bf4f1009f4fd35a9728dfaa9214d8cb318fe8b1e"
integrity sha512-19uTjoZGP4/Ax7kSNhhay9JA83BirKzpqLkeEAilrpdI1hE5xuq6q+tzJOsrMOOqJrm7LkmZp5lbsTQzvK2pYg==
"@zag-js/focus-visible@0.2.2":
version "0.2.2"
resolved "https://registry.yarnpkg.com/@zag-js/focus-visible/-/focus-visible-0.2.2.tgz#56233480ca1275d3218fb2e10696a33d1a6b9e64"
integrity sha512-0j2gZq8HiZ51z4zNnSkF1iSkqlwRDvdH+son3wHdoz+7IUdMN/5Exd4TxMJ+gq2Of1DiXReYLL9qqh2PdQ4wgA==
accepts@~1.3.4:
version "1.3.8"
@@ -4510,10 +4548,10 @@ react-dropzone@^14.2.3:
file-selector "^0.6.0"
prop-types "^15.8.1"
react-fast-compare@3.2.0:
version "3.2.0"
resolved "https://registry.yarnpkg.com/react-fast-compare/-/react-fast-compare-3.2.0.tgz#641a9da81b6a6320f270e89724fb45a0b39e43bb"
integrity sha512-rtGImPZ0YyLrscKI9xTpV8psd6I8VAtjKCzQDlzyDvqJA8XOW78TXYQwNRNd8g8JZnDu8q9Fu/1v4HPAVwVdHA==
react-fast-compare@3.2.1:
version "3.2.1"
resolved "https://registry.yarnpkg.com/react-fast-compare/-/react-fast-compare-3.2.1.tgz#53933d9e14f364281d6cba24bfed7a4afb808b5f"
integrity sha512-xTYf9zFim2pEif/Fw16dBiXpe0hoy5PxcD8+OwBnTtNLfIm3g6WxhKNurY+6OmdH1u6Ta/W/Vl6vjbYP1MFnDg==
react-fast-compare@^2.0.1:
version "2.0.4"
@@ -5311,6 +5349,11 @@ typescript@^4.0.0, typescript@^4.5.5:
resolved "https://registry.yarnpkg.com/typescript/-/typescript-4.9.5.tgz#095979f9bcc0d09da324d58d03ce8f8374cbe65a"
integrity sha512-1FXk9E2Hm+QzZQ7z+McJiHL4NW1F2EzMu9Nq9i3zAaGqibafqYwCVU6WyWAuyQRRzOlxou8xZSyXLEN8oKj24g==
typescript@^5.0.3:
version "5.0.3"
resolved "https://registry.yarnpkg.com/typescript/-/typescript-5.0.3.tgz#fe976f0c826a88d0a382007681cbb2da44afdedf"
integrity sha512-xv8mOEDnigb/tN9PSMTwSEqAnUvkoXMQlicOb0IUVDBSQCgBSaAAROUZYy2IcUy5qU6XajK5jjjO7TMWqBTKZA==
unbox-primitive@^1.0.2:
version "1.0.2"
resolved "https://registry.yarnpkg.com/unbox-primitive/-/unbox-primitive-1.0.2.tgz#29032021057d5e6cdbd08c5129c226dff8ed6f9e"

View File

@@ -22,6 +22,7 @@ import transformers
from diffusers.pipeline_utils import DiffusionPipeline
from diffusers.utils.import_utils import is_xformers_available
from omegaconf import OmegaConf
from pathlib import Path
from PIL import Image, ImageOps
from pytorch_lightning import logging, seed_everything
@@ -528,6 +529,11 @@ class Generate:
log_tokens=self.log_tokenization,
)
# untested code, so commented out for now
# if self.model.peft_manager:
# self.model = self.model.peft_manager.load(self.model, self.model.unet.dtype)
init_image, mask_image = self._make_images(
init_img,
init_mask,
@@ -627,9 +633,8 @@ class Generate:
except RuntimeError:
# Clear the CUDA cache on an exception
self.clear_cuda_cache()
print(traceback.format_exc(), file=sys.stderr)
print(">> Could not generate image.")
print("** Could not generate image.")
raise
toc = time.time()
print("\n>> Usage stats:")
@@ -909,12 +914,8 @@ class Generate:
return self._load_generator(".omnibus", "Omnibus")
def _load_generator(self, module, class_name):
if self.is_legacy_model(self.model_name):
mn = f"ldm.invoke.ckpt_generator{module}"
cn = f"Ckpt{class_name}"
else:
mn = f"ldm.invoke.generator{module}"
cn = class_name
mn = f"ldm.invoke.generator{module}"
cn = class_name
module = importlib.import_module(mn)
constructor = getattr(module, cn)
return constructor(self.model, self.precision)
@@ -976,7 +977,7 @@ class Generate:
self.generators = {}
seed_everything(random.randrange(0, np.iinfo(np.uint32).max))
if self.embedding_path is not None:
if self.embedding_path and not model_data.get("ti_embeddings_loaded"):
print(f'>> Loading embeddings from {self.embedding_path}')
for root, _, files in os.walk(self.embedding_path):
for name in files:
@@ -984,14 +985,24 @@ class Generate:
self.model.textual_inversion_manager.load_textual_inversion(
ti_path, defer_injecting_tokens=True
)
print(
f'>> Textual inversion triggers: {", ".join(sorted(self.model.textual_inversion_manager.get_all_trigger_strings()))}'
)
model_data["ti_embeddings_loaded"] = True
print(
f'>> Textual inversion triggers: {", ".join(sorted(self.model.textual_inversion_manager.get_all_trigger_strings()))}'
)
self.model_name = model_name
self._set_sampler() # requires self.model_name to be set first
self._save_last_used_model(model_name)
return self.model
def _save_last_used_model(self,model_name:str):
"""
Save name of the last model used.
"""
model_file_path = Path(Globals.root,'.last_model')
with open(model_file_path,'w') as f:
f.write(model_name)
def load_huggingface_concepts(self, concepts: list[str]):
self.model.textual_inversion_manager.load_huggingface_concepts(concepts)
@@ -1032,6 +1043,8 @@ class Generate:
image_callback=None,
prefix=None,
):
results = []
for r in image_list:
image, seed = r
try:
@@ -1085,6 +1098,10 @@ class Generate:
else:
r[0] = image
results.append([image, seed])
return results
def apply_textmask(
self, image_path: str, prompt: str, callback, threshold: float = 0.5
):
@@ -1113,9 +1130,6 @@ class Generate:
def sample_to_lowres_estimated_image(self, samples):
return self._make_base().sample_to_lowres_estimated_image(samples)
def is_legacy_model(self, model_name) -> bool:
return self.model_manager.is_legacy(model_name)
def _set_sampler(self):
if isinstance(self.model, DiffusionPipeline):
return self._set_scheduler()

View File

@@ -64,7 +64,7 @@ def main():
Globals.internet_available = args.internet_available and check_internet()
Globals.disable_xformers = not args.xformers
Globals.sequential_guidance = args.sequential_guidance
Globals.ckpt_convert = args.ckpt_convert
Globals.ckpt_convert = True # always true as of 2.3.4 for LoRA support
# run any post-install patches needed
run_patches()
@@ -110,6 +110,9 @@ def main():
else:
embedding_path = None
if opt.lora_path:
Globals.lora_models_dir = opt.lora_path
# migrate legacy models
ModelManager.migrate_models()
@@ -126,11 +129,13 @@ def main():
print(f"{e}. Aborting.")
sys.exit(-1)
model = opt.model or retrieve_last_used_model()
# creating a Generate object:
try:
gen = Generate(
conf=opt.conf,
model=opt.model,
model=model,
sampler_name=opt.sampler_name,
embedding_path=embedding_path,
full_precision=opt.full_precision,
@@ -166,14 +171,13 @@ def main():
if path := opt.autoimport:
gen.model_manager.heuristic_import(
str(path),
convert=False,
commit_to_conf=opt.conf,
config_file_callback=lambda x: _pick_configuration_file(completer,x),
)
if path := opt.autoconvert:
gen.model_manager.heuristic_import(
str(path), convert=True, commit_to_conf=opt.conf
str(path), commit_to_conf=opt.conf
)
# web server loops forever
@@ -635,7 +639,7 @@ def set_default_output_dir(opt: Args, completer: Completer):
completer.set_default_dir(opt.outdir)
def import_model(model_path: str, gen, opt, completer, convert=False):
def import_model(model_path: str, gen, opt, completer):
"""
model_path can be (1) a URL to a .ckpt file; (2) a local .ckpt file path;
(3) a huggingface repository id; or (4) a local directory containing a
@@ -666,7 +670,6 @@ def import_model(model_path: str, gen, opt, completer, convert=False):
model_path,
model_name=model_name,
description=model_desc,
convert=convert,
config_file_callback=lambda x: _pick_configuration_file(completer,x),
)
if not imported_name:
@@ -772,14 +775,10 @@ def convert_model(model_name_or_path: Union[Path, str], gen, opt, completer):
original_config_file = Path(model_info["config"])
model_name = model_name_or_path
model_description = model_info["description"]
vae = model_info["vae"]
vae_path = model_info.get("vae")
else:
print(f"** {model_name_or_path} is not a legacy .ckpt weights file")
return
if vae_repo := ldm.invoke.model_manager.VAE_TO_REPO_ID.get(Path(vae).stem):
vae_repo = dict(repo_id=vae_repo)
else:
vae_repo = None
model_name = manager.convert_and_import(
ckpt_path,
diffusers_path=Path(
@@ -788,11 +787,11 @@ def convert_model(model_name_or_path: Union[Path, str], gen, opt, completer):
model_name=model_name,
model_description=model_description,
original_config_file=original_config_file,
vae=vae_repo,
vae_path=vae_path,
)
else:
try:
import_model(model_name_or_path, gen, opt, completer, convert=True)
import_model(model_name_or_path, gen, opt, completer)
except KeyboardInterrupt:
return
@@ -834,6 +833,7 @@ def edit_model(model_name: str, gen, opt, completer):
print(f"\n>> Editing model {model_name} from configuration file {opt.conf}")
new_name = _get_model_name(manager.list_models(), completer, model_name)
completer.complete_extensions(('.yaml','.ckpt','.safetensors','.pt'))
for attribute in info.keys():
if type(info[attribute]) != str:
continue
@@ -841,6 +841,7 @@ def edit_model(model_name: str, gen, opt, completer):
continue
completer.set_line(info[attribute])
info[attribute] = input(f"{attribute}: ") or info[attribute]
completer.complete_extensions(None)
if info["format"] == "diffusers":
vae = info.get("vae", dict(repo_id=None, path=None, subfolder=None))
@@ -1287,6 +1288,17 @@ def check_internet() -> bool:
except:
return False
def retrieve_last_used_model()->str:
"""
Return name of the last model used.
"""
model_file_path = Path(Globals.root,'.last_model')
if not model_file_path.exists():
return None
with open(model_file_path,'r') as f:
return f.readline()
# This routine performs any patch-ups needed after installation
def run_patches():
install_missing_config_files()
@@ -1326,11 +1338,14 @@ def do_version_update(root_version: version.Version, app_version: Union[str, ver
from release to release. This is not an elegant solution. Instead, the
launcher should be moved into the source tree and installed using pip.
"""
if root_version < version.Version('v2.3.4'):
dest = Path(Globals.root,'loras')
dest.mkdir(exist_ok=True)
if root_version < version.Version('v2.3.3'):
if sys.platform == "linux":
print('>> Downloading new version of launcher script and its config file')
from ldm.util import download_with_progress_bar
url_base = f'https://raw.githubusercontent.com/invoke-ai/InvokeAI/release/v{str(app_version)}/installer/templates/'
url_base = f'https://raw.githubusercontent.com/invoke-ai/InvokeAI/v{str(app_version)}/installer/templates/'
dest = Path(Globals.root,'invoke.sh.in')
assert download_with_progress_bar(url_base+'invoke.sh.in',dest)
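Note on the model-memory change in the diffs above: `_save_last_used_model()` (added to the `Generate` class) and `retrieve_last_used_model()` (added to the CLI) round-trip the name of the last loaded model through a `.last_model` marker file under the runtime root. A minimal, self-contained sketch of that round-trip follows; the temporary directory and the model name `stable-diffusion-1.5` are hypothetical stand-ins for `Globals.root` and a real configured model, and the helpers only illustrate the idea rather than reproduce the exact implementation.

```python
from pathlib import Path
from typing import Optional
import tempfile

# Hypothetical stand-in for Globals.root (the InvokeAI runtime directory).
root = Path(tempfile.mkdtemp())

def save_last_used_model(model_name: str) -> None:
    # Persist the name of the model that was just switched to.
    (root / ".last_model").write_text(model_name)

def retrieve_last_used_model() -> Optional[str]:
    # Return the previously saved model name, or None on a first run.
    marker = root / ".last_model"
    return marker.read_text().strip() if marker.exists() else None

save_last_used_model("stable-diffusion-1.5")   # hypothetical model name
assert retrieve_last_used_model() == "stable-diffusion-1.5"
```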

View File

@@ -1,2 +1 @@
__version__='2.3.3-rc2'
__version__='2.3.5-rc1'

View File

@@ -522,8 +522,8 @@ class Args(object):
'--ckpt_convert',
action=argparse.BooleanOptionalAction,
dest='ckpt_convert',
default=False,
help='Load legacy ckpt files as diffusers. Pass --no-ckpt-convert to inhibit this behavior',
default=True,
help='Deprecated option. Legacy ckpt files are now always converted to diffusers when loaded.'
)
model_group.add_argument(
'--internet',
@@ -654,6 +654,13 @@ class Args(object):
type=str,
help='Path to a directory containing .bin and/or .pt files, or a single .bin/.pt file. You may use subdirectories. (default is ROOTDIR/embeddings)'
)
render_group.add_argument(
'--lora_directory',
dest='lora_path',
default='loras',
type=str,
help='Path to a directory containing LoRA files; subdirectories are not supported. (default is ROOTDIR/loras)'
)
render_group.add_argument(
'--embeddings',
action=argparse.BooleanOptionalAction,
@@ -791,12 +798,12 @@ class Args(object):
*Model manipulation*
!models -- list models in configs/models.yaml
!switch <model_name> -- switch to model named <model_name>
!import_model /path/to/weights/file.ckpt -- adds a .ckpt model to your config
!import_model /path/to/weights/file -- imports a model from a ckpt or safetensors file
!import_model /path/to/weights/ -- interactively import models from a directory
!import_model http://path_to_model.ckpt -- downloads and adds a .ckpt model to your config
!import_model hakurei/waifu-diffusion -- downloads and adds a diffusers model to your config
!optimize_model <model_name> -- converts a .ckpt model to a diffusers model
!convert_model /path/to/weights/file.ckpt -- converts a .ckpt file path to a diffusers model
!import_model http://path_to_model -- downloads and adds a ckpt or safetensors model to your config
!import_model hakurei/waifu-diffusion -- downloads and adds a diffusers model to your config using its repo_id
!optimize_model <model_name> -- converts a .ckpt/.safetensors model to a diffusers model
!convert_model /path/to/weights/file -- converts a .ckpt file path to a diffusers model and adds to your config
!edit_model <model_name> -- edit a model's description
!del_model <model_name> -- delete a model
"""

View File

@@ -1,4 +0,0 @@
'''
Initialization file for the ldm.invoke.generator package
'''
from .base import CkptGenerator

View File

@@ -1,335 +0,0 @@
'''
Base class for ldm.invoke.ckpt_generator.*
including img2img, txt2img, and inpaint
THESE MODULES ARE TRANSITIONAL AND WILL BE REMOVED AT A FUTURE DATE
WHEN LEGACY CKPT MODEL SUPPORT IS DISCONTINUED.
'''
import torch
import numpy as np
import random
import os
import os.path as osp
import traceback
from tqdm import tqdm, trange
from PIL import Image, ImageFilter, ImageChops
import cv2 as cv
from einops import rearrange, repeat
from pathlib import Path
from pytorch_lightning import seed_everything
import invokeai.assets.web as web_assets
from ldm.invoke.devices import choose_autocast
from ldm.models.diffusion.cross_attention_map_saving import AttentionMapSaver
from ldm.util import rand_perlin_2d
downsampling = 8
CAUTION_IMG = 'caution.png'
class CkptGenerator():
def __init__(self, model, precision):
self.model = model
self.precision = precision
self.seed = None
self.latent_channels = model.channels
self.downsampling_factor = downsampling # BUG: should come from model or config
self.safety_checker = None
self.perlin = 0.0
self.threshold = 0
self.variation_amount = 0
self.with_variations = []
self.use_mps_noise = False
self.free_gpu_mem = None
self.caution_img = None
# this is going to be overridden in img2img.py, txt2img.py and inpaint.py
def get_make_image(self,prompt,**kwargs):
"""
Returns a function returning an image derived from the prompt and the initial image
Return value depends on the seed at the time you call it
"""
raise NotImplementedError("image_iterator() must be implemented in a descendent class")
def set_variation(self, seed, variation_amount, with_variations):
self.seed = seed
self.variation_amount = variation_amount
self.with_variations = with_variations
def generate(self,prompt,init_image,width,height,sampler, iterations=1,seed=None,
image_callback=None, step_callback=None, threshold=0.0, perlin=0.0,
safety_checker:dict=None,
attention_maps_callback = None,
free_gpu_mem: bool=False,
**kwargs):
scope = choose_autocast(self.precision)
self.safety_checker = safety_checker
self.free_gpu_mem = free_gpu_mem
attention_maps_images = []
attention_maps_callback = lambda saver: attention_maps_images.append(saver.get_stacked_maps_image())
make_image = self.get_make_image(
prompt,
sampler = sampler,
init_image = init_image,
width = width,
height = height,
step_callback = step_callback,
threshold = threshold,
perlin = perlin,
attention_maps_callback = attention_maps_callback,
**kwargs
)
results = []
seed = seed if seed is not None and seed >= 0 else self.new_seed()
first_seed = seed
seed, initial_noise = self.generate_initial_noise(seed, width, height)
# There used to be an additional self.model.ema_scope() here, but it breaks
# the inpaint-1.5 model. Not sure what it did.... ?
with scope(self.model.device.type):
for n in trange(iterations, desc='Generating'):
x_T = None
if self.variation_amount > 0:
seed_everything(seed)
target_noise = self.get_noise(width,height)
x_T = self.slerp(self.variation_amount, initial_noise, target_noise)
elif initial_noise is not None:
# i.e. we specified particular variations
x_T = initial_noise
else:
seed_everything(seed)
try:
x_T = self.get_noise(width,height)
except:
print('** An error occurred while getting initial noise **')
print(traceback.format_exc())
image = make_image(x_T)
if self.safety_checker is not None:
image = self.safety_check(image)
results.append([image, seed])
if image_callback is not None:
attention_maps_image = None if len(attention_maps_images)==0 else attention_maps_images[-1]
image_callback(image, seed, first_seed=first_seed, attention_maps_image=attention_maps_image)
seed = self.new_seed()
return results
def sample_to_image(self,samples)->Image.Image:
"""
Given samples returned from a sampler, converts
it into a PIL Image
"""
x_samples = self.model.decode_first_stage(samples)
x_samples = torch.clamp((x_samples + 1.0) / 2.0, min=0.0, max=1.0)
if len(x_samples) != 1:
raise Exception(
f'>> expected to get a single image, but got {len(x_samples)}')
x_sample = 255.0 * rearrange(
x_samples[0].cpu().numpy(), 'c h w -> h w c'
)
return Image.fromarray(x_sample.astype(np.uint8))
# write an approximate RGB image from latent samples for a single step to PNG
def repaste_and_color_correct(self, result: Image.Image, init_image: Image.Image, init_mask: Image.Image, mask_blur_radius: int = 8) -> Image.Image:
if init_image is None or init_mask is None:
return result
# Get the original alpha channel of the mask if there is one.
# Otherwise it is some other black/white image format ('1', 'L' or 'RGB')
pil_init_mask = init_mask.getchannel('A') if init_mask.mode == 'RGBA' else init_mask.convert('L')
pil_init_image = init_image.convert('RGBA') # Add an alpha channel if one doesn't exist
# Build an image with only visible pixels from source to use as reference for color-matching.
init_rgb_pixels = np.asarray(init_image.convert('RGB'), dtype=np.uint8)
init_a_pixels = np.asarray(pil_init_image.getchannel('A'), dtype=np.uint8)
init_mask_pixels = np.asarray(pil_init_mask, dtype=np.uint8)
# Get numpy version of result
np_image = np.asarray(result, dtype=np.uint8)
# Mask and calculate mean and standard deviation
mask_pixels = init_a_pixels * init_mask_pixels > 0
np_init_rgb_pixels_masked = init_rgb_pixels[mask_pixels, :]
np_image_masked = np_image[mask_pixels, :]
if np_init_rgb_pixels_masked.size > 0:
init_means = np_init_rgb_pixels_masked.mean(axis=0)
init_std = np_init_rgb_pixels_masked.std(axis=0)
gen_means = np_image_masked.mean(axis=0)
gen_std = np_image_masked.std(axis=0)
# Color correct
np_matched_result = np_image.copy()
np_matched_result[:,:,:] = (((np_matched_result[:,:,:].astype(np.float32) - gen_means[None,None,:]) / gen_std[None,None,:]) * init_std[None,None,:] + init_means[None,None,:]).clip(0, 255).astype(np.uint8)
matched_result = Image.fromarray(np_matched_result, mode='RGB')
else:
matched_result = Image.fromarray(np_image, mode='RGB')
# Blur the mask out (into init image) by specified amount
if mask_blur_radius > 0:
nm = np.asarray(pil_init_mask, dtype=np.uint8)
nmd = cv.erode(nm, kernel=np.ones((3,3), dtype=np.uint8), iterations=int(mask_blur_radius / 2))
pmd = Image.fromarray(nmd, mode='L')
blurred_init_mask = pmd.filter(ImageFilter.BoxBlur(mask_blur_radius))
else:
blurred_init_mask = pil_init_mask
multiplied_blurred_init_mask = ImageChops.multiply(blurred_init_mask, self.pil_image.split()[-1])
# Paste original on color-corrected generation (using blurred mask)
matched_result.paste(init_image, (0,0), mask = multiplied_blurred_init_mask)
return matched_result
def sample_to_lowres_estimated_image(self,samples):
# originally adapted from code by @erucipe and @keturn here:
# https://discuss.huggingface.co/t/decoding-latents-to-rgb-without-upscaling/23204/7
# these updated numbers for v1.5 are from @torridgristle
v1_5_latent_rgb_factors = torch.tensor([
# R G B
[ 0.3444, 0.1385, 0.0670], # L1
[ 0.1247, 0.4027, 0.1494], # L2
[-0.3192, 0.2513, 0.2103], # L3
[-0.1307, -0.1874, -0.7445] # L4
], dtype=samples.dtype, device=samples.device)
latent_image = samples[0].permute(1, 2, 0) @ v1_5_latent_rgb_factors
latents_ubyte = (((latent_image + 1) / 2)
.clamp(0, 1) # change scale from -1..1 to 0..1
.mul(0xFF) # to 0..255
.byte()).cpu()
return Image.fromarray(latents_ubyte.numpy())
def generate_initial_noise(self, seed, width, height):
initial_noise = None
if self.variation_amount > 0 or len(self.with_variations) > 0:
# use fixed initial noise plus random noise per iteration
seed_everything(seed)
initial_noise = self.get_noise(width,height)
for v_seed, v_weight in self.with_variations:
seed = v_seed
seed_everything(seed)
next_noise = self.get_noise(width,height)
initial_noise = self.slerp(v_weight, initial_noise, next_noise)
if self.variation_amount > 0:
random.seed() # reset RNG to an actually random state, so we can get a random seed for variations
seed = random.randrange(0,np.iinfo(np.uint32).max)
return (seed, initial_noise)
else:
return (seed, None)
# returns a tensor filled with random numbers from a normal distribution
def get_noise(self,width,height):
"""
Returns a tensor filled with random numbers, either from a normal distribution
(txt2img) or from the latent image (img2img, inpaint)
"""
raise NotImplementedError("get_noise() must be implemented in a descendent class")
def get_perlin_noise(self,width,height):
fixdevice = 'cpu' if (self.model.device.type == 'mps') else self.model.device
return torch.stack([rand_perlin_2d((height, width), (8, 8), device = self.model.device).to(fixdevice) for _ in range(self.latent_channels)], dim=0).to(self.model.device)
def new_seed(self):
self.seed = random.randrange(0, np.iinfo(np.uint32).max)
return self.seed
def slerp(self, t, v0, v1, DOT_THRESHOLD=0.9995):
'''
Spherical linear interpolation
Args:
t (float/np.ndarray): Float value between 0.0 and 1.0
v0 (np.ndarray): Starting vector
v1 (np.ndarray): Final vector
DOT_THRESHOLD (float): Threshold for considering the two vectors as
collinear. Not recommended to alter this.
Returns:
v2 (np.ndarray): Interpolation vector between v0 and v1
'''
inputs_are_torch = False
if not isinstance(v0, np.ndarray):
inputs_are_torch = True
v0 = v0.detach().cpu().numpy()
if not isinstance(v1, np.ndarray):
inputs_are_torch = True
v1 = v1.detach().cpu().numpy()
dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
if np.abs(dot) > DOT_THRESHOLD:
v2 = (1 - t) * v0 + t * v1
else:
theta_0 = np.arccos(dot)
sin_theta_0 = np.sin(theta_0)
theta_t = theta_0 * t
sin_theta_t = np.sin(theta_t)
s0 = np.sin(theta_0 - theta_t) / sin_theta_0
s1 = sin_theta_t / sin_theta_0
v2 = s0 * v0 + s1 * v1
if inputs_are_torch:
v2 = torch.from_numpy(v2).to(self.model.device)
return v2
def safety_check(self,image:Image.Image):
'''
If the CompViz safety checker flags an NSFW image, we
blur it out.
'''
import diffusers
checker = self.safety_checker['checker']
extractor = self.safety_checker['extractor']
features = extractor([image], return_tensors="pt")
features.to(self.model.device)
# unfortunately checker requires the numpy version, so we have to convert back
x_image = np.array(image).astype(np.float32) / 255.0
x_image = x_image[None].transpose(0, 3, 1, 2)
diffusers.logging.set_verbosity_error()
checked_image, has_nsfw_concept = checker(images=x_image, clip_input=features.pixel_values)
if has_nsfw_concept[0]:
print('** An image with potential non-safe content has been detected. A blurred image will be returned. **')
return self.blur(image)
else:
return image
def blur(self,input):
blurry = input.filter(filter=ImageFilter.GaussianBlur(radius=32))
try:
caution = self.get_caution_img()
if caution:
blurry.paste(caution,(0,0),caution)
except FileNotFoundError:
pass
return blurry
def get_caution_img(self):
path = None
if self.caution_img:
return self.caution_img
path = Path(web_assets.__path__[0]) / CAUTION_IMG
caution = Image.open(path)
self.caution_img = caution.resize((caution.width // 2, caution.height //2))
return self.caution_img
# this is a handy routine for debugging use. Given a generated sample,
# convert it into a PNG image and store it at the indicated path
def save_sample(self, sample, filepath):
image = self.sample_to_image(sample)
dirname = os.path.dirname(filepath) or '.'
if not os.path.exists(dirname):
print(f'** creating directory {dirname}')
os.makedirs(dirname, exist_ok=True)
image.save(filepath,'PNG')
def torch_dtype(self)->torch.dtype:
return torch.float16 if self.precision == 'float16' else torch.float32


@@ -1,501 +0,0 @@
'''
ldm.invoke.ckpt_generator.embiggen descends from ldm.invoke.ckpt_generator
and generates with ldm.invoke.ckpt_generator.img2img
'''
import torch
import numpy as np
from tqdm import trange
from PIL import Image
from ldm.invoke.ckpt_generator.base import CkptGenerator
from ldm.invoke.ckpt_generator.img2img import CkptImg2Img
from ldm.invoke.devices import choose_autocast
from ldm.models.diffusion.ddim import DDIMSampler
class CkptEmbiggen(CkptGenerator):
def __init__(self, model, precision):
super().__init__(model, precision)
self.init_latent = None
# Replace generate because Embiggen doesn't need/use most of what it does normally
def generate(self,prompt,iterations=1,seed=None,
image_callback=None, step_callback=None,
**kwargs):
scope = choose_autocast(self.precision)
make_image = self.get_make_image(
prompt,
step_callback = step_callback,
**kwargs
)
results = []
seed = seed if seed else self.new_seed()
# Noise will be generated by the Img2Img generator when called
with scope(self.model.device.type), self.model.ema_scope():
for n in trange(iterations, desc='Generating'):
# make_image will call Img2Img which will do the equivalent of get_noise itself
image = make_image()
results.append([image, seed])
if image_callback is not None:
image_callback(image, seed, prompt_in=prompt)
seed = self.new_seed()
return results
@torch.no_grad()
def get_make_image(
self,
prompt,
sampler,
steps,
cfg_scale,
ddim_eta,
conditioning,
init_img,
strength,
width,
height,
embiggen,
embiggen_tiles,
step_callback=None,
**kwargs
):
"""
Returns a function returning an image derived from the prompt and multi-stage twice-baked potato layering over the img2img on the initial image
Return value depends on the seed at the time you call it
"""
assert not sampler.uses_inpainting_model(), "--embiggen is not supported by inpainting models"
# Construct embiggen arg array, and sanity check arguments
if embiggen == None: # embiggen can also be called with just embiggen_tiles
embiggen = [1.0] # If not specified, assume no scaling
elif embiggen[0] < 0:
embiggen[0] = 1.0
print(
'>> Embiggen scaling factor cannot be negative, fell back to the default of 1.0 !')
if len(embiggen) < 2:
embiggen.append(0.75)
elif embiggen[1] > 1.0 or embiggen[1] < 0:
embiggen[1] = 0.75
print('>> Embiggen upscaling strength for ESRGAN must be between 0 and 1, fell back to the default of 0.75 !')
if len(embiggen) < 3:
embiggen.append(0.25)
elif embiggen[2] < 0:
embiggen[2] = 0.25
print('>> Overlap size for Embiggen must be a positive ratio between 0 and 1 OR a number of pixels, fell back to the default of 0.25 !')
# Convert tiles from their user-friendly count-from-one to count-from-zero, because we need to do modulo math
# and then sort them, because... people.
if embiggen_tiles:
embiggen_tiles = list(map(lambda n: n-1, embiggen_tiles))
embiggen_tiles.sort()
if strength >= 0.5:
print(f'* WARNING: Embiggen may produce mirror motifs if the strength (-f) is too high (currently {strength}). Try values between 0.35-0.45.')
# Prep img2img generator, since we wrap over it
gen_img2img = CkptImg2Img(self.model,self.precision)
# Open original init image (not a tensor) to manipulate
with Image.open(init_img) as img:
initsuperimage = img.convert('RGB')
# Size of the target super init image in pixels
initsuperwidth, initsuperheight = initsuperimage.size
# Increase by scaling factor if not already resized, using ESRGAN as able
if embiggen[0] != 1.0:
initsuperwidth = round(initsuperwidth*embiggen[0])
initsuperheight = round(initsuperheight*embiggen[0])
if embiggen[1] > 0: # No point in ESRGAN upscaling if strength is set zero
from ldm.invoke.restoration.realesrgan import ESRGAN
esrgan = ESRGAN()
print(
f'>> ESRGAN upscaling init image prior to cutting with Embiggen with strength {embiggen[1]}')
if embiggen[0] > 2:
initsuperimage = esrgan.process(
initsuperimage,
embiggen[1], # upscale strength
self.seed,
4, # upscale scale
)
else:
initsuperimage = esrgan.process(
initsuperimage,
embiggen[1], # upscale strength
self.seed,
2, # upscale scale
)
# We could keep recursively re-running ESRGAN for a requested embiggen[0] larger than 4x
# but from personal experience it doesn't greatly improve anything after 4x
# Resize to target scaling factor resolution
initsuperimage = initsuperimage.resize(
(initsuperwidth, initsuperheight), Image.Resampling.LANCZOS)
# Use width and height as tile widths and height
# Determine buffer size in pixels
if embiggen[2] < 1:
if embiggen[2] < 0:
embiggen[2] = 0
overlap_size_x = round(embiggen[2] * width)
overlap_size_y = round(embiggen[2] * height)
else:
overlap_size_x = round(embiggen[2])
overlap_size_y = round(embiggen[2])
# With overall image width and height known, determine how many tiles we need
def ceildiv(a, b):
return -1 * (-a // b)
# X and Y need to be determined independently (we may have savings on one based on the buffer pixel count)
# (initsuperwidth - width) is the area remaining to the right that we need to layer tiles over to fill
# (width - overlap_size_x) is how much new we can fill with a single tile
emb_tiles_x = 1
emb_tiles_y = 1
if (initsuperwidth - width) > 0:
emb_tiles_x = ceildiv(initsuperwidth - width,
width - overlap_size_x) + 1
if (initsuperheight - height) > 0:
emb_tiles_y = ceildiv(initsuperheight - height,
height - overlap_size_y) + 1
# Sanity
assert emb_tiles_x > 1 or emb_tiles_y > 1, f'ERROR: Based on the requested dimensions of {initsuperwidth}x{initsuperheight} and tiles of {width}x{height} you don\'t need to Embiggen! Check your arguments.'
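# --- illustrative aside (not part of the original file) ----------------------
# A worked example of the overlap and tile-count arithmetic above, using
# hypothetical numbers: a 1024x1024 super-image, 512x512 tiles and the default
# 0.25 overlap ratio.
def _ceildiv(a, b):
    return -1 * (-a // b)

example_super_w = example_super_h = 1024
example_tile_w = example_tile_h = 512
example_overlap = 0.25
overlap_px_x = round(example_overlap * example_tile_w)         # 128 px shared border
overlap_px_y = round(example_overlap * example_tile_h)         # 128 px

# the first tile covers 512 px; every further tile adds (512 - 128) fresh px
tiles_x = _ceildiv(example_super_w - example_tile_w,
                   example_tile_w - overlap_px_x) + 1          # -> 3
tiles_y = _ceildiv(example_super_h - example_tile_h,
                   example_tile_h - overlap_px_y) + 1          # -> 3
# so a 3x3 grid of overlapping tiles is generated and later blended together
# ------------------------------------------------------------------------------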
# Prep alpha layers --------------
# https://stackoverflow.com/questions/69321734/how-to-create-different-transparency-like-gradient-with-python-pil
# agradientL is Left-side transparent
agradientL = Image.linear_gradient('L').rotate(
90).resize((overlap_size_x, height))
# agradientT is Top-side transparent
agradientT = Image.linear_gradient('L').resize((width, overlap_size_y))
# radial corner is the left-top corner, made full circle then cut to just the left-top quadrant
agradientC = Image.new('L', (256, 256))
for y in range(256):
for x in range(256):
# Find distance to lower right corner (numpy takes arrays)
distanceToLR = np.sqrt([(255 - x) ** 2 + (255 - y) ** 2])[0]
# Clamp values to max 255
if distanceToLR > 255:
distanceToLR = 255
#Place the pixel as invert of distance
agradientC.putpixel((x, y), round(255 - distanceToLR))
# Create an alternative asymmetric diagonal corner to use on "trailing" intersections to prevent hard edges
# Fits for a left-fading gradient on the bottom side and full opacity on the right side.
agradientAsymC = Image.new('L', (256, 256))
for y in range(256):
for x in range(256):
value = round(max(0, x-(255-y)) * (255 / max(1,y)))
#Clamp values
value = max(0, value)
value = min(255, value)
agradientAsymC.putpixel((x, y), value)
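# --- illustrative aside (not part of the original file) ----------------------
# The two pixel loops above can also be expressed with numpy broadcasting; this
# is an equivalent sketch of the same formulas, not a drop-in replacement.
import numpy as np
from PIL import Image

yy, xx = np.mgrid[0:256, 0:256].astype(np.float32)   # yy = row (y), xx = column (x)

dist_to_lr = np.sqrt((255 - xx) ** 2 + (255 - yy) ** 2)
radial = np.clip(255 - dist_to_lr, 0, 255).round().astype(np.uint8)
agradientC_vectorized = Image.fromarray(radial, mode='L')

asym = np.maximum(0, xx - (255 - yy)) * (255.0 / np.maximum(1.0, yy))
agradientAsymC_vectorized = Image.fromarray(
    np.clip(asym, 0, 255).round().astype(np.uint8), mode='L')
# ------------------------------------------------------------------------------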
# Create alpha layers default fully white
alphaLayerL = Image.new("L", (width, height), 255)
alphaLayerT = Image.new("L", (width, height), 255)
alphaLayerLTC = Image.new("L", (width, height), 255)
# Paste gradients into alpha layers
alphaLayerL.paste(agradientL, (0, 0))
alphaLayerT.paste(agradientT, (0, 0))
alphaLayerLTC.paste(agradientL, (0, 0))
alphaLayerLTC.paste(agradientT, (0, 0))
alphaLayerLTC.paste(agradientC.resize((overlap_size_x, overlap_size_y)), (0, 0))
# make masks with an asymmetric upper-right corner so when the curved transparent corner of the next tile
# to its right is placed it doesn't reveal a hard trailing semi-transparent edge in the overlapping space
alphaLayerTaC = alphaLayerT.copy()
alphaLayerTaC.paste(agradientAsymC.rotate(270).resize((overlap_size_x, overlap_size_y)), (width - overlap_size_x, 0))
alphaLayerLTaC = alphaLayerLTC.copy()
alphaLayerLTaC.paste(agradientAsymC.rotate(270).resize((overlap_size_x, overlap_size_y)), (width - overlap_size_x, 0))
if embiggen_tiles:
# Individual unconnected sides
alphaLayerR = Image.new("L", (width, height), 255)
alphaLayerR.paste(agradientL.rotate(
180), (width - overlap_size_x, 0))
alphaLayerB = Image.new("L", (width, height), 255)
alphaLayerB.paste(agradientT.rotate(
180), (0, height - overlap_size_y))
alphaLayerTB = Image.new("L", (width, height), 255)
alphaLayerTB.paste(agradientT, (0, 0))
alphaLayerTB.paste(agradientT.rotate(
180), (0, height - overlap_size_y))
alphaLayerLR = Image.new("L", (width, height), 255)
alphaLayerLR.paste(agradientL, (0, 0))
alphaLayerLR.paste(agradientL.rotate(
180), (width - overlap_size_x, 0))
# Sides and corner Layers
alphaLayerRBC = Image.new("L", (width, height), 255)
alphaLayerRBC.paste(agradientL.rotate(
180), (width - overlap_size_x, 0))
alphaLayerRBC.paste(agradientT.rotate(
180), (0, height - overlap_size_y))
alphaLayerRBC.paste(agradientC.rotate(180).resize(
(overlap_size_x, overlap_size_y)), (width - overlap_size_x, height - overlap_size_y))
alphaLayerLBC = Image.new("L", (width, height), 255)
alphaLayerLBC.paste(agradientL, (0, 0))
alphaLayerLBC.paste(agradientT.rotate(
180), (0, height - overlap_size_y))
alphaLayerLBC.paste(agradientC.rotate(90).resize(
(overlap_size_x, overlap_size_y)), (0, height - overlap_size_y))
alphaLayerRTC = Image.new("L", (width, height), 255)
alphaLayerRTC.paste(agradientL.rotate(
180), (width - overlap_size_x, 0))
alphaLayerRTC.paste(agradientT, (0, 0))
alphaLayerRTC.paste(agradientC.rotate(270).resize(
(overlap_size_x, overlap_size_y)), (width - overlap_size_x, 0))
# All but X layers
alphaLayerABT = Image.new("L", (width, height), 255)
alphaLayerABT.paste(alphaLayerLBC, (0, 0))
alphaLayerABT.paste(agradientL.rotate(
180), (width - overlap_size_x, 0))
alphaLayerABT.paste(agradientC.rotate(180).resize(
(overlap_size_x, overlap_size_y)), (width - overlap_size_x, height - overlap_size_y))
alphaLayerABL = Image.new("L", (width, height), 255)
alphaLayerABL.paste(alphaLayerRTC, (0, 0))
alphaLayerABL.paste(agradientT.rotate(
180), (0, height - overlap_size_y))
alphaLayerABL.paste(agradientC.rotate(180).resize(
(overlap_size_x, overlap_size_y)), (width - overlap_size_x, height - overlap_size_y))
alphaLayerABR = Image.new("L", (width, height), 255)
alphaLayerABR.paste(alphaLayerLBC, (0, 0))
alphaLayerABR.paste(agradientT, (0, 0))
alphaLayerABR.paste(agradientC.resize(
(overlap_size_x, overlap_size_y)), (0, 0))
alphaLayerABB = Image.new("L", (width, height), 255)
alphaLayerABB.paste(alphaLayerRTC, (0, 0))
alphaLayerABB.paste(agradientL, (0, 0))
alphaLayerABB.paste(agradientC.resize(
(overlap_size_x, overlap_size_y)), (0, 0))
# All-around layer
alphaLayerAA = Image.new("L", (width, height), 255)
alphaLayerAA.paste(alphaLayerABT, (0, 0))
alphaLayerAA.paste(agradientT, (0, 0))
alphaLayerAA.paste(agradientC.resize(
(overlap_size_x, overlap_size_y)), (0, 0))
alphaLayerAA.paste(agradientC.rotate(270).resize(
(overlap_size_x, overlap_size_y)), (width - overlap_size_x, 0))
# Clean up temporary gradients
del agradientL
del agradientT
del agradientC
def make_image():
# Make main tiles -------------------------------------------------
if embiggen_tiles:
print(f'>> Making {len(embiggen_tiles)} Embiggen tiles...')
else:
print(
f'>> Making {(emb_tiles_x * emb_tiles_y)} Embiggen tiles ({emb_tiles_x}x{emb_tiles_y})...')
emb_tile_store = []
# Although we could use the same seed for every tile for determinism, at higher strengths this may
# produce duplicated structures for each tile and make the tiling effect more obvious;
# instead, track and iterate a local seed that we pass to Img2Img
seed = self.seed
seedintlimit = np.iinfo(np.uint32).max - 1 # only retrieve this one from numpy
for tile in range(emb_tiles_x * emb_tiles_y):
# Don't iterate on first tile
if tile != 0:
if seed < seedintlimit:
seed += 1
else:
seed = 0
# Determine if this is a re-run and replace
if embiggen_tiles and not tile in embiggen_tiles:
continue
# Get row and column entries
emb_row_i = tile // emb_tiles_x
emb_column_i = tile % emb_tiles_x
# Determine bounds to cut up the init image
# Determine upper-left point
if emb_column_i + 1 == emb_tiles_x:
left = initsuperwidth - width
else:
left = round(emb_column_i * (width - overlap_size_x))
if emb_row_i + 1 == emb_tiles_y:
top = initsuperheight - height
else:
top = round(emb_row_i * (height - overlap_size_y))
right = left + width
bottom = top + height
# Cropped image of above dimension (does not modify the original)
newinitimage = initsuperimage.crop((left, top, right, bottom))
# DEBUG:
# newinitimagepath = init_img[0:-4] + f'_emb_Ti{tile}.png'
# newinitimage.save(newinitimagepath)
if embiggen_tiles:
print(
f'Making tile #{tile + 1} ({embiggen_tiles.index(tile) + 1} of {len(embiggen_tiles)} requested)')
else:
print(
f'Starting {tile + 1} of {(emb_tiles_x * emb_tiles_y)} tiles')
# create a torch tensor from an Image
newinitimage = np.array(
newinitimage).astype(np.float32) / 255.0
newinitimage = newinitimage[None].transpose(0, 3, 1, 2)
newinitimage = torch.from_numpy(newinitimage)
newinitimage = 2.0 * newinitimage - 1.0
newinitimage = newinitimage.to(self.model.device)
tile_results = gen_img2img.generate(
prompt,
iterations = 1,
seed = seed,
sampler = DDIMSampler(self.model, device=self.model.device),
steps = steps,
cfg_scale = cfg_scale,
conditioning = conditioning,
ddim_eta = ddim_eta,
image_callback = None, # called only after the final image is generated
step_callback = step_callback, # called after each intermediate image is generated
width = width,
height = height,
init_image = newinitimage, # notice that init_image is different from init_img
mask_image = None,
strength = strength,
)
emb_tile_store.append(tile_results[0][0])
# DEBUG (but also has other uses): worth saving if you want tiles without a transparency overlap to composite manually
# emb_tile_store[-1].save(init_img[0:-4] + f'_emb_To{tile}.png')
del newinitimage
# Sanity check we have them all
if len(emb_tile_store) == (emb_tiles_x * emb_tiles_y) or (embiggen_tiles != [] and len(emb_tile_store) == len(embiggen_tiles)):
outputsuperimage = Image.new(
"RGBA", (initsuperwidth, initsuperheight))
if embiggen_tiles:
outputsuperimage.alpha_composite(
initsuperimage.convert('RGBA'), (0, 0))
for tile in range(emb_tiles_x * emb_tiles_y):
if embiggen_tiles:
if tile in embiggen_tiles:
intileimage = emb_tile_store.pop(0)
else:
continue
else:
intileimage = emb_tile_store[tile]
intileimage = intileimage.convert('RGBA')
# Get row and column entries
emb_row_i = tile // emb_tiles_x
emb_column_i = tile % emb_tiles_x
if emb_row_i == 0 and emb_column_i == 0 and not embiggen_tiles:
left = 0
top = 0
else:
# Determine upper-left point
if emb_column_i + 1 == emb_tiles_x:
left = initsuperwidth - width
else:
left = round(emb_column_i *
(width - overlap_size_x))
if emb_row_i + 1 == emb_tiles_y:
top = initsuperheight - height
else:
top = round(emb_row_i * (height - overlap_size_y))
# Handle gradients for various conditions
# Handle emb_rerun case
if embiggen_tiles:
# top of image
if emb_row_i == 0:
if emb_column_i == 0:
if (tile+1) in embiggen_tiles: # Look-ahead right
if (tile+emb_tiles_x) not in embiggen_tiles: # Look-ahead down
intileimage.putalpha(alphaLayerB)
# Otherwise do nothing on this tile
elif (tile+emb_tiles_x) in embiggen_tiles: # Look-ahead down only
intileimage.putalpha(alphaLayerR)
else:
intileimage.putalpha(alphaLayerRBC)
elif emb_column_i == emb_tiles_x - 1:
if (tile+emb_tiles_x) in embiggen_tiles: # Look-ahead down
intileimage.putalpha(alphaLayerL)
else:
intileimage.putalpha(alphaLayerLBC)
else:
if (tile+1) in embiggen_tiles: # Look-ahead right
if (tile+emb_tiles_x) in embiggen_tiles: # Look-ahead down
intileimage.putalpha(alphaLayerL)
else:
intileimage.putalpha(alphaLayerLBC)
elif (tile+emb_tiles_x) in embiggen_tiles: # Look-ahead down only
intileimage.putalpha(alphaLayerLR)
else:
intileimage.putalpha(alphaLayerABT)
# bottom of image
elif emb_row_i == emb_tiles_y - 1:
if emb_column_i == 0:
if (tile+1) in embiggen_tiles: # Look-ahead right
intileimage.putalpha(alphaLayerTaC)
else:
intileimage.putalpha(alphaLayerRTC)
elif emb_column_i == emb_tiles_x - 1:
# No tiles to look ahead to
intileimage.putalpha(alphaLayerLTC)
else:
if (tile+1) in embiggen_tiles: # Look-ahead right
intileimage.putalpha(alphaLayerLTaC)
else:
intileimage.putalpha(alphaLayerABB)
# vertical middle of image
else:
if emb_column_i == 0:
if (tile+1) in embiggen_tiles: # Look-ahead right
if (tile+emb_tiles_x) in embiggen_tiles: # Look-ahead down
intileimage.putalpha(alphaLayerTaC)
else:
intileimage.putalpha(alphaLayerTB)
elif (tile+emb_tiles_x) in embiggen_tiles: # Look-ahead down only
intileimage.putalpha(alphaLayerRTC)
else:
intileimage.putalpha(alphaLayerABL)
elif emb_column_i == emb_tiles_x - 1:
if (tile+emb_tiles_x) in embiggen_tiles: # Look-ahead down
intileimage.putalpha(alphaLayerLTC)
else:
intileimage.putalpha(alphaLayerABR)
else:
if (tile+1) in embiggen_tiles: # Look-ahead right
if (tile+emb_tiles_x) in embiggen_tiles: # Look-ahead down
intileimage.putalpha(alphaLayerLTaC)
else:
intileimage.putalpha(alphaLayerABR)
elif (tile+emb_tiles_x) in embiggen_tiles: # Look-ahead down only
intileimage.putalpha(alphaLayerABB)
else:
intileimage.putalpha(alphaLayerAA)
# Handle normal tiling case (much simpler - since we tile left to right, top to bottom)
else:
if emb_row_i == 0 and emb_column_i >= 1:
intileimage.putalpha(alphaLayerL)
elif emb_row_i >= 1 and emb_column_i == 0:
if emb_column_i + 1 == emb_tiles_x: # If we don't have anything that can be placed to the right
intileimage.putalpha(alphaLayerT)
else:
intileimage.putalpha(alphaLayerTaC)
else:
if emb_column_i + 1 == emb_tiles_x: # If we don't have anything that can be placed to the right
intileimage.putalpha(alphaLayerLTC)
else:
intileimage.putalpha(alphaLayerLTaC)
# Layer tile onto final image
outputsuperimage.alpha_composite(intileimage, (left, top))
else:
print('Error: could not find all Embiggen output tiles in memory. Something must have gone wrong with img2img generation.')
# after internal loops and patching up return Embiggen image
return outputsuperimage
# end of function declaration
return make_image


@@ -1,97 +0,0 @@
'''
ldm.invoke.ckpt_generator.img2img descends from ldm.invoke.ckpt_generator
'''
import torch
import numpy as np
import PIL
from torch import Tensor
from PIL import Image
from ldm.invoke.devices import choose_autocast
from ldm.invoke.ckpt_generator.base import CkptGenerator
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.shared_invokeai_diffusion import InvokeAIDiffuserComponent
class CkptImg2Img(CkptGenerator):
def __init__(self, model, precision):
super().__init__(model, precision)
self.init_latent = None # by get_noise()
def get_make_image(self,prompt,sampler,steps,cfg_scale,ddim_eta,
conditioning,init_image,strength,step_callback=None,threshold=0.0,perlin=0.0,**kwargs):
"""
Returns a function returning an image derived from the prompt and the initial image
Return value depends on the seed at the time you call it.
"""
self.perlin = perlin
sampler.make_schedule(
ddim_num_steps=steps, ddim_eta=ddim_eta, verbose=False
)
if isinstance(init_image, PIL.Image.Image):
init_image = self._image_to_tensor(init_image.convert('RGB'))
scope = choose_autocast(self.precision)
with scope(self.model.device.type):
self.init_latent = self.model.get_first_stage_encoding(
self.model.encode_first_stage(init_image)
) # move to latent space
t_enc = int(strength * steps)
uc, c, extra_conditioning_info = conditioning
def make_image(x_T):
# encode (scaled latent)
z_enc = sampler.stochastic_encode(
self.init_latent,
torch.tensor([t_enc - 1]).to(self.model.device),
noise=x_T
)
if self.free_gpu_mem and self.model.model.device != self.model.device:
self.model.model.to(self.model.device)
# decode it
samples = sampler.decode(
z_enc,
c,
t_enc,
img_callback = step_callback,
unconditional_guidance_scale=cfg_scale,
unconditional_conditioning=uc,
init_latent = self.init_latent, # changes how noising is performed in ksampler
extra_conditioning_info = extra_conditioning_info,
all_timesteps_count = steps
)
if self.free_gpu_mem:
self.model.model.to("cpu")
return self.sample_to_image(samples)
return make_image
def get_noise(self,width,height):
device = self.model.device
init_latent = self.init_latent
assert init_latent is not None,'call to get_noise() when init_latent not set'
if device.type == 'mps':
x = torch.randn_like(init_latent, device='cpu').to(device)
else:
x = torch.randn_like(init_latent, device=device)
if self.perlin > 0.0:
shape = init_latent.shape
x = (1-self.perlin)*x + self.perlin*self.get_perlin_noise(shape[3], shape[2])
return x
def _image_to_tensor(self, image:Image, normalize:bool=True)->Tensor:
image = np.array(image).astype(np.float32) / 255.0
if len(image.shape) == 2: # 'L' image, as in a mask
image = image[None,None]
else: # 'RGB' image
image = image[None].transpose(0, 3, 1, 2)
image = torch.from_numpy(image)
if normalize:
image = 2.0 * image - 1.0
return image.to(self.model.device)
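# --- illustrative aside (not part of the original file) ----------------------
# A standalone shape/range check of the conversion _image_to_tensor() performs,
# using a hypothetical 64x64 solid-red image and no model or device.
import numpy as np
import torch
from PIL import Image

demo = Image.new('RGB', (64, 64), color=(255, 0, 0))
arr = np.array(demo).astype(np.float32) / 255.0        # (64, 64, 3), values in [0, 1]
arr = arr[None].transpose(0, 3, 1, 2)                   # (1, 3, 64, 64), NCHW layout
tensor = torch.from_numpy(arr) * 2.0 - 1.0              # normalized to [-1, 1]
assert tensor.shape == (1, 3, 64, 64)
assert float(tensor.max()) == 1.0 and float(tensor.min()) == -1.0
# ------------------------------------------------------------------------------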


@@ -1,358 +0,0 @@
'''
ldm.invoke.ckpt_generator.inpaint descends from ldm.invoke.ckpt_generator
'''
import math
import torch
import torchvision.transforms as T
import numpy as np
import cv2 as cv
import PIL
from PIL import Image, ImageFilter, ImageOps, ImageChops
from skimage.exposure.histogram_matching import match_histograms
from einops import rearrange, repeat
from ldm.invoke.devices import choose_autocast
from ldm.invoke.ckpt_generator.img2img import CkptImg2Img
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.ksampler import KSampler
from ldm.invoke.generator.base import downsampling
from ldm.util import debug_image
from ldm.invoke.patchmatch import PatchMatch
from ldm.invoke.globals import Globals
def infill_methods()->list[str]:
methods = list()
if PatchMatch.patchmatch_available():
methods.append('patchmatch')
methods.append('tile')
return methods
class CkptInpaint(CkptImg2Img):
def __init__(self, model, precision):
self.init_latent = None
self.pil_image = None
self.pil_mask = None
self.mask_blur_radius = 0
self.infill_method = None
super().__init__(model, precision)
# Outpaint support code
def get_tile_images(self, image: np.ndarray, width=8, height=8):
_nrows, _ncols, depth = image.shape
_strides = image.strides
nrows, _m = divmod(_nrows, height)
ncols, _n = divmod(_ncols, width)
if _m != 0 or _n != 0:
return None
return np.lib.stride_tricks.as_strided(
np.ravel(image),
shape=(nrows, ncols, height, width, depth),
strides=(height * _strides[0], width * _strides[1], *_strides),
writeable=False
)
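# --- illustrative aside (not part of the original file) ----------------------
# A tiny check of the striding logic above on a 4x4 RGB array split into 2x2
# tiles; the numbers are hypothetical and only illustrate the view layout.
import numpy as np

demo = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
tile_h = tile_w = 2
n_rows, n_cols = demo.shape[0] // tile_h, demo.shape[1] // tile_w
demo_tiles = np.lib.stride_tricks.as_strided(
    np.ravel(demo),
    shape=(n_rows, n_cols, tile_h, tile_w, demo.shape[2]),
    strides=(tile_h * demo.strides[0], tile_w * demo.strides[1], *demo.strides),
    writeable=False,
)
assert demo_tiles.shape == (2, 2, 2, 2, 3)
assert np.array_equal(demo_tiles[0, 1], demo[0:2, 2:4])   # tile at row 0, column 1
# ------------------------------------------------------------------------------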
def infill_patchmatch(self, im: Image.Image) -> Image:
if im.mode != 'RGBA':
return im
# Skip patchmatch if patchmatch isn't available
if not PatchMatch.patchmatch_available():
return im
# Patchmatch (note, we may want to expose patch_size? Increasing it significantly impacts performance though)
im_patched_np = PatchMatch.inpaint(im.convert('RGB'), ImageOps.invert(im.split()[-1]), patch_size = 3)
im_patched = Image.fromarray(im_patched_np, mode = 'RGB')
return im_patched
def tile_fill_missing(self, im: Image.Image, tile_size: int = 16, seed: int = None) -> Image:
# Only fill if there's an alpha layer
if im.mode != 'RGBA':
return im
a = np.asarray(im, dtype=np.uint8)
tile_size = (tile_size, tile_size)
# Get the image as tiles of a specified size
tiles = self.get_tile_images(a,*tile_size).copy()
# Get the mask as tiles
tiles_mask = tiles[:,:,:,:,3]
# Find any mask tiles with any fully transparent pixels (we will be replacing these later)
tmask_shape = tiles_mask.shape
tiles_mask = tiles_mask.reshape(math.prod(tiles_mask.shape))
n,ny = (math.prod(tmask_shape[0:2])), math.prod(tmask_shape[2:])
tiles_mask = (tiles_mask > 0)
tiles_mask = tiles_mask.reshape((n,ny)).all(axis = 1)
# Get RGB tiles in single array and filter by the mask
tshape = tiles.shape
tiles_all = tiles.reshape((math.prod(tiles.shape[0:2]), * tiles.shape[2:]))
filtered_tiles = tiles_all[tiles_mask]
if len(filtered_tiles) == 0:
return im
# Find all invalid tiles and replace with a random valid tile
replace_count = (tiles_mask == False).sum()
rng = np.random.default_rng(seed = seed)
tiles_all[np.logical_not(tiles_mask)] = filtered_tiles[rng.choice(filtered_tiles.shape[0], replace_count),:,:,:]
# Convert back to an image
tiles_all = tiles_all.reshape(tshape)
tiles_all = tiles_all.swapaxes(1,2)
st = tiles_all.reshape((math.prod(tiles_all.shape[0:2]), math.prod(tiles_all.shape[2:4]), tiles_all.shape[4]))
si = Image.fromarray(st, mode='RGBA')
return si
def mask_edge(self, mask: Image, edge_size: int, edge_blur: int) -> Image:
npimg = np.asarray(mask, dtype=np.uint8)
# Detect any partially transparent regions
npgradient = np.uint8(255 * (1.0 - np.floor(np.abs(0.5 - np.float32(npimg) / 255.0) * 2.0)))
# Detect hard edges
npedge = cv.Canny(npimg, threshold1=100, threshold2=200)
# Combine
npmask = npgradient + npedge
# Expand
npmask = cv.dilate(npmask, np.ones((3,3), np.uint8), iterations = int(edge_size / 2))
new_mask = Image.fromarray(npmask)
if edge_blur > 0:
new_mask = new_mask.filter(ImageFilter.BoxBlur(edge_blur))
return ImageOps.invert(new_mask)
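# --- illustrative aside (not part of the original file) ----------------------
# Standalone sketch of the seam-mask construction in mask_edge() above, run on a
# synthetic 64x64 hard mask (white square on black); the sizes are hypothetical.
import numpy as np
import cv2 as cv
from PIL import Image, ImageFilter, ImageOps

hard = np.zeros((64, 64), dtype=np.uint8)
hard[16:48, 16:48] = 255                                   # hard-edged masked region
edge_size, edge_blur = 6, 2

gradient = np.uint8(255 * (1.0 - np.floor(np.abs(0.5 - np.float32(hard) / 255.0) * 2.0)))
edges = cv.Canny(hard, threshold1=100, threshold2=200)     # outline of the square
band = cv.dilate(gradient + edges, np.ones((3, 3), np.uint8), iterations=edge_size // 2)
seam_mask = ImageOps.invert(Image.fromarray(band).filter(ImageFilter.BoxBlur(edge_blur)))
# seam_mask is white everywhere except a soft dark band along the square's edge,
# which is the region the follow-up seam pass concentrates on.
# ------------------------------------------------------------------------------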
def seam_paint(self,
im: Image.Image,
seam_size: int,
seam_blur: int,
prompt,sampler,steps,cfg_scale,ddim_eta,
conditioning,strength,
noise,
step_callback
) -> Image.Image:
hard_mask = self.pil_image.split()[-1].copy()
mask = self.mask_edge(hard_mask, seam_size, seam_blur)
make_image = self.get_make_image(
prompt,
sampler,
steps,
cfg_scale,
ddim_eta,
conditioning,
init_image = im.copy().convert('RGBA'),
mask_image = mask.convert('RGB'), # Code currently requires an RGB mask
strength = strength,
mask_blur_radius = 0,
seam_size = 0,
step_callback = step_callback,
inpaint_width = im.width,
inpaint_height = im.height
)
seam_noise = self.get_noise(im.width, im.height)
result = make_image(seam_noise)
return result
@torch.no_grad()
def get_make_image(self,prompt,sampler,steps,cfg_scale,ddim_eta,
conditioning,init_image,mask_image,strength,
mask_blur_radius: int = 8,
# Seam settings - when 0, doesn't fill seam
seam_size: int = 0,
seam_blur: int = 0,
seam_strength: float = 0.7,
seam_steps: int = 10,
tile_size: int = 32,
step_callback=None,
inpaint_replace=False, enable_image_debugging=False,
infill_method = None,
inpaint_width=None,
inpaint_height=None,
**kwargs):
"""
Returns a function returning an image derived from the prompt and
the initial image + mask. Return value depends on the seed at
the time you call it. kwargs are 'init_latent' and 'strength'
"""
self.enable_image_debugging = enable_image_debugging
self.infill_method = infill_method or infill_methods()[0] # The infill method to use
self.inpaint_width = inpaint_width
self.inpaint_height = inpaint_height
if isinstance(init_image, PIL.Image.Image):
self.pil_image = init_image.copy()
# Do infill
if infill_method == 'patchmatch' and PatchMatch.patchmatch_available():
init_filled = self.infill_patchmatch(self.pil_image.copy())
else: # if infill_method == 'tile': # Only two methods right now, so always use 'tile' if not patchmatch
init_filled = self.tile_fill_missing(
self.pil_image.copy(),
seed = self.seed,
tile_size = tile_size
)
init_filled.paste(init_image, (0,0), init_image.split()[-1])
# Resize if requested for inpainting
if inpaint_width and inpaint_height:
init_filled = init_filled.resize((inpaint_width, inpaint_height))
debug_image(init_filled, "init_filled", debug_status=self.enable_image_debugging)
# Create init tensor
init_image = self._image_to_tensor(init_filled.convert('RGB'))
if isinstance(mask_image, PIL.Image.Image):
self.pil_mask = mask_image.copy()
debug_image(mask_image, "mask_image BEFORE multiply with pil_image", debug_status=self.enable_image_debugging)
mask_image = ImageChops.multiply(mask_image, self.pil_image.split()[-1].convert('RGB'))
self.pil_mask = mask_image
# Resize if requested for inpainting
if inpaint_width and inpaint_height:
mask_image = mask_image.resize((inpaint_width, inpaint_height))
debug_image(mask_image, "mask_image AFTER multiply with pil_image", debug_status=self.enable_image_debugging)
mask_image = mask_image.resize(
(
mask_image.width // downsampling,
mask_image.height // downsampling
),
resample=Image.Resampling.NEAREST
)
mask_image = self._image_to_tensor(mask_image,normalize=False)
self.mask_blur_radius = mask_blur_radius
# klms samplers not supported yet, so ignore previous sampler
if isinstance(sampler,KSampler):
print(
f">> Using recommended DDIM sampler for inpainting."
)
sampler = DDIMSampler(self.model, device=self.model.device)
sampler.make_schedule(
ddim_num_steps=steps, ddim_eta=ddim_eta, verbose=False
)
mask_image = mask_image[0][0].unsqueeze(0).repeat(4,1,1).unsqueeze(0)
mask_image = repeat(mask_image, '1 ... -> b ...', b=1)
scope = choose_autocast(self.precision)
with scope(self.model.device.type):
self.init_latent = self.model.get_first_stage_encoding(
self.model.encode_first_stage(init_image)
) # move to latent space
t_enc = int(strength * steps)
# todo: support cross-attention control
uc, c, _ = conditioning
print(f">> target t_enc is {t_enc} steps")
@torch.no_grad()
def make_image(x_T):
# encode (scaled latent)
z_enc = sampler.stochastic_encode(
self.init_latent,
torch.tensor([t_enc - 1]).to(self.model.device),
noise=x_T
)
# to replace masked area with latent noise, weighted by inpaint_replace strength
if inpaint_replace > 0.0:
print(f'>> inpaint will replace what was under the mask with a strength of {inpaint_replace}')
l_noise = self.get_noise(kwargs['width'],kwargs['height'])
inverted_mask = 1.0-mask_image # there will be 1s where the mask is
masked_region = (1.0-inpaint_replace) * inverted_mask * z_enc + inpaint_replace * inverted_mask * l_noise
z_enc = z_enc * mask_image + masked_region
if self.free_gpu_mem and self.model.model.device != self.model.device:
self.model.model.to(self.model.device)
# decode it
samples = sampler.decode(
z_enc,
c,
t_enc,
img_callback = step_callback,
unconditional_guidance_scale = cfg_scale,
unconditional_conditioning = uc,
mask = mask_image,
init_latent = self.init_latent
)
result = self.sample_to_image(samples)
# Seam paint if this is our first pass (seam_size set to 0 during seam painting)
if seam_size > 0:
old_image = self.pil_image or init_image
old_mask = self.pil_mask or mask_image
result = self.seam_paint(
result,
seam_size,
seam_blur,
prompt,
sampler,
seam_steps,
cfg_scale,
ddim_eta,
conditioning,
seam_strength,
x_T,
step_callback)
# Restore original settings
self.get_make_image(prompt,sampler,steps,cfg_scale,ddim_eta,
conditioning,
old_image,
old_mask,
strength,
mask_blur_radius, seam_size, seam_blur, seam_strength,
seam_steps, tile_size, step_callback,
inpaint_replace, enable_image_debugging,
inpaint_width = inpaint_width,
inpaint_height = inpaint_height,
infill_method = infill_method,
**kwargs)
return result
return make_image
def sample_to_image(self, samples)->Image.Image:
gen_result = super().sample_to_image(samples).convert('RGB')
debug_image(gen_result, "gen_result", debug_status=self.enable_image_debugging)
# Resize if necessary
if self.inpaint_width and self.inpaint_height:
gen_result = gen_result.resize(self.pil_image.size)
if self.pil_image is None or self.pil_mask is None:
return gen_result
corrected_result = super().repaste_and_color_correct(gen_result, self.pil_image, self.pil_mask, self.mask_blur_radius)
debug_image(corrected_result, "corrected_result", debug_status=self.enable_image_debugging)
return corrected_result


@@ -1,175 +0,0 @@
"""omnibus module to be used with the runwayml 9-channel custom inpainting model"""
import torch
import numpy as np
from einops import repeat
from PIL import Image, ImageOps, ImageChops
from ldm.invoke.devices import choose_autocast
from ldm.invoke.ckpt_generator.base import downsampling
from ldm.invoke.ckpt_generator.img2img import CkptImg2Img
from ldm.invoke.ckpt_generator.txt2img import CkptTxt2Img
class CkptOmnibus(CkptImg2Img,CkptTxt2Img):
def __init__(self, model, precision):
super().__init__(model, precision)
self.pil_mask = None
self.pil_image = None
def get_make_image(
self,
prompt,
sampler,
steps,
cfg_scale,
ddim_eta,
conditioning,
width,
height,
init_image = None,
mask_image = None,
strength = None,
step_callback=None,
threshold=0.0,
perlin=0.0,
mask_blur_radius: int = 8,
**kwargs):
"""
Returns a function returning an image derived from the prompt and the initial image
Return value depends on the seed at the time you call it.
"""
self.perlin = perlin
num_samples = 1
sampler.make_schedule(
ddim_num_steps=steps, ddim_eta=ddim_eta, verbose=False
)
if isinstance(init_image, Image.Image):
self.pil_image = init_image
if init_image.mode != 'RGB':
init_image = init_image.convert('RGB')
init_image = self._image_to_tensor(init_image)
if isinstance(mask_image, Image.Image):
self.pil_mask = mask_image
mask_image = ImageChops.multiply(mask_image.convert('L'), self.pil_image.split()[-1])
mask_image = self._image_to_tensor(ImageOps.invert(mask_image), normalize=False)
self.mask_blur_radius = mask_blur_radius
t_enc = steps
if init_image is not None and mask_image is not None: # inpainting
masked_image = init_image * (1 - mask_image) # masked image is the image masked by mask - masked regions zero
elif init_image is not None: # img2img
scope = choose_autocast(self.precision)
with scope(self.model.device.type):
self.init_latent = self.model.get_first_stage_encoding(
self.model.encode_first_stage(init_image)
) # move to latent space
# create a completely black mask (1s)
mask_image = torch.ones(1, 1, init_image.shape[2], init_image.shape[3], device=self.model.device)
# and the masked image is just a copy of the original
masked_image = init_image
else: # txt2img
init_image = torch.zeros(1, 3, height, width, device=self.model.device)
mask_image = torch.ones(1, 1, height, width, device=self.model.device)
masked_image = init_image
self.init_latent = init_image
height = init_image.shape[2]
width = init_image.shape[3]
model = self.model
def make_image(x_T):
with torch.no_grad():
scope = choose_autocast(self.precision)
with scope(self.model.device.type):
batch = self.make_batch_sd(
init_image,
mask_image,
masked_image,
prompt=prompt,
device=model.device,
num_samples=num_samples,
)
c = model.cond_stage_model.encode(batch["txt"])
c_cat = list()
for ck in model.concat_keys:
cc = batch[ck].float()
if ck != model.masked_image_key:
bchw = [num_samples, 4, height//8, width//8]
cc = torch.nn.functional.interpolate(cc, size=bchw[-2:])
else:
cc = model.get_first_stage_encoding(model.encode_first_stage(cc))
c_cat.append(cc)
c_cat = torch.cat(c_cat, dim=1)
# cond
cond={"c_concat": [c_cat], "c_crossattn": [c]}
# uncond cond
uc_cross = model.get_unconditional_conditioning(num_samples, "")
uc_full = {"c_concat": [c_cat], "c_crossattn": [uc_cross]}
shape = [model.channels, height//8, width//8]
samples, _ = sampler.sample(
batch_size = 1,
S = steps,
x_T = x_T,
conditioning = cond,
shape = shape,
verbose = False,
unconditional_guidance_scale = cfg_scale,
unconditional_conditioning = uc_full,
eta = 1.0,
img_callback = step_callback,
threshold = threshold,
)
if self.free_gpu_mem:
self.model.model.to("cpu")
return self.sample_to_image(samples)
return make_image
def make_batch_sd(
self,
image,
mask,
masked_image,
prompt,
device,
num_samples=1):
batch = {
"image": repeat(image.to(device=device), "1 ... -> n ...", n=num_samples),
"txt": num_samples * [prompt],
"mask": repeat(mask.to(device=device), "1 ... -> n ...", n=num_samples),
"masked_image": repeat(masked_image.to(device=device), "1 ... -> n ...", n=num_samples),
}
return batch
def get_noise(self, width:int, height:int):
if self.init_latent is not None:
height = self.init_latent.shape[2]
width = self.init_latent.shape[3]
return CkptTxt2Img.get_noise(self,width,height)
def sample_to_image(self, samples)->Image.Image:
gen_result = super().sample_to_image(samples).convert('RGB')
if self.pil_image is None or self.pil_mask is None:
return gen_result
if self.pil_image.size != self.pil_mask.size:
return gen_result
corrected_result = super(CkptImg2Img, self).repaste_and_color_correct(gen_result, self.pil_image, self.pil_mask, self.mask_blur_radius)
return corrected_result


@@ -1,90 +0,0 @@
'''
ldm.invoke.ckpt_generator.txt2img inherits from ldm.invoke.ckpt_generator
'''
import torch
import numpy as np
from ldm.invoke.ckpt_generator.base import CkptGenerator
from ldm.models.diffusion.shared_invokeai_diffusion import InvokeAIDiffuserComponent
import gc
class CkptTxt2Img(CkptGenerator):
def __init__(self, model, precision):
super().__init__(model, precision)
@torch.no_grad()
def get_make_image(self,prompt,sampler,steps,cfg_scale,ddim_eta,
conditioning,width,height,step_callback=None,threshold=0.0,perlin=0.0,
attention_maps_callback=None,
**kwargs):
"""
Returns a function returning an image derived from the prompt and the initial image
Return value depends on the seed at the time you call it
kwargs are 'width' and 'height'
"""
self.perlin = perlin
uc, c, extra_conditioning_info = conditioning
@torch.no_grad()
def make_image(x_T):
shape = [
self.latent_channels,
height // self.downsampling_factor,
width // self.downsampling_factor,
]
if self.free_gpu_mem and self.model.model.device != self.model.device:
self.model.model.to(self.model.device)
sampler.make_schedule(ddim_num_steps=steps, ddim_eta=ddim_eta, verbose=False)
samples, _ = sampler.sample(
batch_size = 1,
S = steps,
x_T = x_T,
conditioning = c,
shape = shape,
verbose = False,
unconditional_guidance_scale = cfg_scale,
unconditional_conditioning = uc,
extra_conditioning_info = extra_conditioning_info,
eta = ddim_eta,
img_callback = step_callback,
threshold = threshold,
attention_maps_callback = attention_maps_callback,
)
if self.free_gpu_mem:
self.model.model.to('cpu')
self.model.cond_stage_model.device = 'cpu'
self.model.cond_stage_model.to('cpu')
gc.collect()
torch.cuda.empty_cache()
return self.sample_to_image(samples)
return make_image
# returns a tensor filled with random numbers from a normal distribution
def get_noise(self,width,height):
device = self.model.device
if self.use_mps_noise or device.type == 'mps':
x = torch.randn([1,
self.latent_channels,
height // self.downsampling_factor,
width // self.downsampling_factor],
dtype=self.torch_dtype(),
device='cpu').to(device)
else:
x = torch.randn([1,
self.latent_channels,
height // self.downsampling_factor,
width // self.downsampling_factor],
dtype=self.torch_dtype(),
device=device)
if self.perlin > 0.0:
x = (1-self.perlin)*x + self.perlin*self.get_perlin_noise(width // self.downsampling_factor, height // self.downsampling_factor)
return x


@@ -1,182 +0,0 @@
'''
ldm.invoke.ckpt_generator.txt2img2img inherits from ldm.invoke.ckpt_generator
'''
import torch
import numpy as np
import math
import gc
from ldm.invoke.ckpt_generator.base import CkptGenerator
from ldm.invoke.ckpt_generator.omnibus import CkptOmnibus
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.shared_invokeai_diffusion import InvokeAIDiffuserComponent
from PIL import Image
class CkptTxt2Img2Img(CkptGenerator):
def __init__(self, model, precision):
super().__init__(model, precision)
self.init_latent = None # for get_noise()
@torch.no_grad()
def get_make_image(self,prompt,sampler,steps,cfg_scale,ddim_eta,
conditioning,width,height,strength,step_callback=None,**kwargs):
"""
Returns a function returning an image derived from the prompt and the initial image
Return value depends on the seed at the time you call it
kwargs are 'width' and 'height'
"""
uc, c, extra_conditioning_info = conditioning
scale_dim = min(width, height)
scale = 512 / scale_dim
init_width = math.ceil(scale * width / 64) * 64
init_height = math.ceil(scale * height / 64) * 64
@torch.no_grad()
def make_image(x_T):
shape = [
self.latent_channels,
init_height // self.downsampling_factor,
init_width // self.downsampling_factor,
]
sampler.make_schedule(
ddim_num_steps=steps, ddim_eta=ddim_eta, verbose=False
)
#x = self.get_noise(init_width, init_height)
x = x_T
if self.free_gpu_mem and self.model.model.device != self.model.device:
self.model.model.to(self.model.device)
samples, _ = sampler.sample(
batch_size = 1,
S = steps,
x_T = x,
conditioning = c,
shape = shape,
verbose = False,
unconditional_guidance_scale = cfg_scale,
unconditional_conditioning = uc,
eta = ddim_eta,
img_callback = step_callback,
extra_conditioning_info = extra_conditioning_info
)
print(
f"\n>> Interpolating from {init_width}x{init_height} to {width}x{height} using DDIM sampling"
)
# resizing
samples = torch.nn.functional.interpolate(
samples,
size=(height // self.downsampling_factor, width // self.downsampling_factor),
mode="bilinear"
)
t_enc = int(strength * steps)
ddim_sampler = DDIMSampler(self.model, device=self.model.device)
ddim_sampler.make_schedule(
ddim_num_steps=steps, ddim_eta=ddim_eta, verbose=False
)
z_enc = ddim_sampler.stochastic_encode(
samples,
torch.tensor([t_enc-1]).to(self.model.device),
noise=self.get_noise(width,height,False)
)
# decode it
samples = ddim_sampler.decode(
z_enc,
c,
t_enc,
img_callback = step_callback,
unconditional_guidance_scale=cfg_scale,
unconditional_conditioning=uc,
extra_conditioning_info=extra_conditioning_info,
all_timesteps_count=steps
)
if self.free_gpu_mem:
self.model.model.to('cpu')
self.model.cond_stage_model.device = 'cpu'
self.model.cond_stage_model.to('cpu')
gc.collect()
torch.cuda.empty_cache()
return self.sample_to_image(samples)
# in the case of the inpainting model being loaded, the trick of
# providing an interpolated latent doesn't work, so we transiently
# create a 512x512 PIL image, upscale it, and run the inpainting
# over it in img2img mode. Because the inpainting model is so conservative
# it doesn't change the image (much)
def inpaint_make_image(x_T):
omnibus = CkptOmnibus(self.model,self.precision)
result = omnibus.generate(
prompt,
sampler=sampler,
width=init_width,
height=init_height,
step_callback=step_callback,
steps = steps,
cfg_scale = cfg_scale,
ddim_eta = ddim_eta,
conditioning = conditioning,
**kwargs
)
assert result is not None and len(result)>0,'** txt2img failed **'
image = result[0][0]
interpolated_image = image.resize((width,height),resample=Image.Resampling.LANCZOS)
print(kwargs.pop('init_image',None))
result = omnibus.generate(
prompt,
sampler=sampler,
init_image=interpolated_image,
width=width,
height=height,
seed=result[0][1],
step_callback=step_callback,
steps = steps,
cfg_scale = cfg_scale,
ddim_eta = ddim_eta,
conditioning = conditioning,
**kwargs
)
return result[0][0]
if sampler.uses_inpainting_model():
return inpaint_make_image
else:
return make_image
# returns a tensor filled with random numbers from a normal distribution
def get_noise(self,width,height,scale = True):
# print(f"Get noise: {width}x{height}")
if scale:
trained_square = 512 * 512
actual_square = width * height
scale = math.sqrt(trained_square / actual_square)
scaled_width = math.ceil(scale * width / 64) * 64
scaled_height = math.ceil(scale * height / 64) * 64
else:
scaled_width = width
scaled_height = height
device = self.model.device
if self.use_mps_noise or device.type == 'mps':
return torch.randn([1,
self.latent_channels,
scaled_height // self.downsampling_factor,
scaled_width // self.downsampling_factor],
device='cpu').to(device)
else:
return torch.randn([1,
self.latent_channels,
scaled_height // self.downsampling_factor,
scaled_width // self.downsampling_factor],
device=device)
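# --- illustrative aside (not part of the original file) ----------------------
# Worked example of the area-preserving rescale in get_noise() above, assuming a
# hypothetical 768x512 request, 4 latent channels and a downsampling factor of 8.
import math

req_w, req_h = 768, 512
noise_scale = math.sqrt((512 * 512) / (req_w * req_h))        # ~0.8165
scaled_w = math.ceil(noise_scale * req_w / 64) * 64            # 640
scaled_h = math.ceil(noise_scale * req_h / 64) * 64            # 448
latent_noise_shape = (1, 4, scaled_h // 8, scaled_w // 8)      # (1, 4, 56, 80)
# the first sampling pass therefore runs close to the 512x512 training resolution,
# and the result is later interpolated/decoded up to the requested 768x512
# ------------------------------------------------------------------------------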


@@ -1037,10 +1037,10 @@ def convert_open_clip_checkpoint(checkpoint):
return text_model
def replace_checkpoint_vae(checkpoint, vae_path:str):
if vae_path.endswith(".safetensors"):
vae_ckpt = load_file(vae_path)
else:
if Path(vae_path).suffix in ['.pt','.ckpt']:
vae_ckpt = torch.load(vae_path, map_location="cpu")
else:
vae_ckpt = load_file(vae_path)
state_dict = vae_ckpt['state_dict'] if "state_dict" in vae_ckpt else vae_ckpt
for vae_key in state_dict:
new_key = f'first_stage_model.{vae_key}'
@@ -1264,10 +1264,10 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
cache_dir=cache_dir,
)
pipe = pipeline_class(
vae=vae,
text_encoder=text_model,
vae=vae.to(precision),
text_encoder=text_model.to(precision),
tokenizer=tokenizer,
unet=unet,
unet=unet.to(precision),
scheduler=scheduler,
safety_checker=None,
feature_extractor=None,


@@ -24,13 +24,15 @@ class HuggingFaceConceptsLibrary(object):
self.concepts_loaded = dict()
self.triggers = dict() # concept name to trigger phrase
self.concept_names = dict() # trigger phrase to concept name
self.match_trigger = re.compile('(<[\w\- >]+>)') # trigger is slightly less restrictive than HF concept name
self.match_concept = re.compile('<([\w\-]+)>') # HF concept name can only contain A-Za-z0-9_-
self.match_trigger = re.compile('(<[a-zA-Z0-9_\- >]+>)') # trigger is slightly less restrictive than HF concept name
self.match_concept = re.compile('<([a-zA-Z0-9_\-]+)>') # HF concept name can only contain A-Za-z0-9_-
def list_concepts(self)->list:
def list_concepts(self, minimum_likes: int=0)->list:
'''
Return a list of all the concepts by name, without the 'sd-concepts-library' part.
Also adds local concepts in invokeai/embeddings folder.
If minimum_likes is provided, then only concepts that have received at least that
many "likes" will be returned.
'''
local_concepts_now = self.get_local_concepts(os.path.join(self.root, 'embeddings'))
local_concepts_to_add = set(local_concepts_now).difference(set(self.local_concepts))
@@ -44,7 +46,7 @@ class HuggingFaceConceptsLibrary(object):
else:
try:
models = self.hf_api.list_models(filter=ModelFilter(model_name='sd-concepts-library/'))
self.concept_list = [a.id.split('/')[1] for a in models]
self.concept_list = [a.id.split('/')[1] for a in models if a.likes>=minimum_likes]
# when init, add all in dir. when not init, add only concepts added between init and now
self.concept_list.extend(list(local_concepts_to_add))
except Exception as e:
@@ -181,7 +183,7 @@ class HuggingFaceConceptsLibrary(object):
print(f'>> Downloading {repo_id}...',end='')
try:
for file in ('README.md','learned_embeds.bin','token_identifier.txt','type_of_concept.txt'):
for file in ('learned_embeds.bin','token_identifier.txt'):
url = hf_hub_url(repo_id, file)
request.urlretrieve(url, os.path.join(dest,file),reporthook=tally_download_size)
except ul_error.HTTPError as e:

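# --- illustrative aside (not part of the original diff) ----------------------
# A quick check of the tightened trigger/concept patterns above, applied to a
# hypothetical prompt string.
import re

match_trigger = re.compile(r'(<[a-zA-Z0-9_\- >]+>)')
match_concept = re.compile(r'<([a-zA-Z0-9_\-]+)>')

prompt = 'a portrait in the style of <birb-style>, highly detailed'
assert match_trigger.findall(prompt) == ['<birb-style>']
assert match_concept.findall(prompt) == ['birb-style']
# ------------------------------------------------------------------------------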

@@ -12,21 +12,13 @@ from typing import Union, Optional, Any
from transformers import CLIPTokenizer
from compel import Compel
from compel.prompt_parser import FlattenedPrompt, Blend, Fragment, CrossAttentionControlSubstitute, PromptParser
from compel.prompt_parser import FlattenedPrompt, Blend, Fragment, CrossAttentionControlSubstitute, PromptParser, \
Conjunction
from .devices import torch_dtype
from .generator.diffusers_pipeline import StableDiffusionGeneratorPipeline
from ..models.diffusion.shared_invokeai_diffusion import InvokeAIDiffuserComponent
from ldm.invoke.globals import Globals
def get_tokenizer(model) -> CLIPTokenizer:
# TODO remove legacy ckpt fallback handling
return (getattr(model, 'tokenizer', None) # diffusers
or model.cond_stage_model.tokenizer) # ldm
def get_text_encoder(model) -> Any:
# TODO remove legacy ckpt fallback handling
return (getattr(model, 'text_encoder', None) # diffusers
or UnsqueezingLDMTransformer(model.cond_stage_model.transformer)) # ldm
class UnsqueezingLDMTransformer:
def __init__(self, ldm_transformer):
self.ldm_transformer = ldm_transformer
@@ -40,40 +32,61 @@ class UnsqueezingLDMTransformer:
return insufficiently_unsqueezed_tensor.unsqueeze(0)
def get_uc_and_c_and_ec(prompt_string, model, log_tokens=False, skip_normalize_legacy_blend=False):
def get_uc_and_c_and_ec(prompt_string,
model: StableDiffusionGeneratorPipeline,
log_tokens=False, skip_normalize_legacy_blend=False):
# lazy-load any deferred textual inversions.
# this might take a couple of seconds the first time a textual inversion is used.
model.textual_inversion_manager.create_deferred_token_ids_for_any_trigger_terms(prompt_string)
tokenizer = get_tokenizer(model)
text_encoder = get_text_encoder(model)
compel = Compel(tokenizer=tokenizer,
text_encoder=text_encoder,
compel = Compel(tokenizer=model.tokenizer,
text_encoder=model.text_encoder,
textual_inversion_manager=model.textual_inversion_manager,
dtype_for_device_getter=torch_dtype)
# get rid of any newline characters
prompt_string = prompt_string.replace("\n", " ")
positive_prompt_string, negative_prompt_string = split_prompt_to_positive_and_negative(prompt_string)
legacy_blend = try_parse_legacy_blend(positive_prompt_string, skip_normalize_legacy_blend)
positive_prompt: FlattenedPrompt|Blend
positive_conjunction: Conjunction
if legacy_blend is not None:
positive_prompt = legacy_blend
positive_conjunction = legacy_blend
else:
positive_prompt = Compel.parse_prompt_string(positive_prompt_string)
negative_prompt: FlattenedPrompt|Blend = Compel.parse_prompt_string(negative_prompt_string)
positive_conjunction = Compel.parse_prompt_string(positive_prompt_string)
positive_prompt = positive_conjunction.prompts[0]
should_use_lora_manager = True
lora_weights = positive_conjunction.lora_weights
lora_conditions = None
if model.peft_manager:
should_use_lora_manager = model.peft_manager.should_use(lora_weights)
if not should_use_lora_manager:
model.peft_manager.set_loras(lora_weights)
if model.lora_manager and should_use_lora_manager:
lora_conditions = model.lora_manager.set_loras_conditions(lora_weights)
negative_conjunction = Compel.parse_prompt_string(negative_prompt_string)
negative_prompt: FlattenedPrompt | Blend = negative_conjunction.prompts[0]
tokens_count = get_max_token_count(model.tokenizer, positive_prompt)
if log_tokens or getattr(Globals, "log_tokenization", False):
log_tokenization(positive_prompt, negative_prompt, tokenizer=tokenizer)
log_tokenization(positive_prompt, negative_prompt, tokenizer=model.tokenizer)
c, options = compel.build_conditioning_tensor_for_prompt_object(positive_prompt)
uc, _ = compel.build_conditioning_tensor_for_prompt_object(negative_prompt)
tokens_count = get_max_token_count(tokenizer, positive_prompt)
# some LoRA models also mess with the text encoder, so they must be active while compel builds conditioning tensors
lora_conditioning_ec = InvokeAIDiffuserComponent.ExtraConditioningInfo(tokens_count_including_eos_bos=tokens_count,
lora_conditions=lora_conditions)
with InvokeAIDiffuserComponent.custom_attention_context(model.unet,
extra_conditioning_info=lora_conditioning_ec,
step_count=-1):
c, options = compel.build_conditioning_tensor_for_prompt_object(positive_prompt)
uc, _ = compel.build_conditioning_tensor_for_prompt_object(negative_prompt)
# now build the "real" ec
ec = InvokeAIDiffuserComponent.ExtraConditioningInfo(tokens_count_including_eos_bos=tokens_count,
cross_attention_control_args=options.get(
'cross_attention_control', None))
'cross_attention_control', None),
lora_conditions=lora_conditions)
return uc, c, ec
@@ -81,12 +94,14 @@ def get_prompt_structure(prompt_string, skip_normalize_legacy_blend: bool = Fals
Union[FlattenedPrompt, Blend], FlattenedPrompt):
positive_prompt_string, negative_prompt_string = split_prompt_to_positive_and_negative(prompt_string)
legacy_blend = try_parse_legacy_blend(positive_prompt_string, skip_normalize_legacy_blend)
positive_prompt: FlattenedPrompt|Blend
positive_conjunction: Conjunction
if legacy_blend is not None:
positive_prompt = legacy_blend
positive_conjunction = legacy_blend
else:
positive_prompt = Compel.parse_prompt_string(positive_prompt_string)
negative_prompt: FlattenedPrompt|Blend = Compel.parse_prompt_string(negative_prompt_string)
positive_conjunction = Compel.parse_prompt_string(positive_prompt_string)
positive_prompt = positive_conjunction.prompts[0]
negative_conjunction = Compel.parse_prompt_string(negative_prompt_string)
negative_prompt: FlattenedPrompt|Blend = negative_conjunction.prompts[0]
return positive_prompt, negative_prompt
@@ -203,18 +218,26 @@ def log_tokenization_for_text(text, tokenizer, display_label=None):
print(f'{discarded}\x1b[0m')
def try_parse_legacy_blend(text: str, skip_normalize: bool=False) -> Optional[Blend]:
def try_parse_legacy_blend(text: str, skip_normalize: bool=False) -> Optional[Conjunction]:
weighted_subprompts = split_weighted_subprompts(text, skip_normalize=skip_normalize)
if len(weighted_subprompts) <= 1:
return None
strings = [x[0] for x in weighted_subprompts]
weights = [x[1] for x in weighted_subprompts]
pp = PromptParser()
parsed_conjunctions = [pp.parse_conjunction(x) for x in strings]
flattened_prompts = [x.prompts[0] for x in parsed_conjunctions]
flattened_prompts = []
weights = []
loras = []
for i, x in enumerate(parsed_conjunctions):
if len(x.prompts)>0:
flattened_prompts.append(x.prompts[0])
weights.append(weighted_subprompts[i][1])
if len(x.lora_weights)>0:
loras.extend(x.lora_weights)
return Blend(prompts=flattened_prompts, weights=weights, normalize_weights=not skip_normalize)
return Conjunction([Blend(prompts=flattened_prompts, weights=weights, normalize_weights=not skip_normalize)],
lora_weights = loras)
def split_weighted_subprompts(text, skip_normalize=False)->list:


@@ -655,6 +655,7 @@ def initialize_rootdir(root: str, yes_to_all: bool = False):
"models",
"configs",
"embeddings",
"loras",
"text-inversion-output",
"text-inversion-training-data",
):


@@ -4,18 +4,19 @@ pip install <path_to_git_source>.
'''
import os
import platform
import psutil
import requests
from rich import box, print
from rich.console import Console, Group, group
from rich.console import Console, group
from rich.panel import Panel
from rich.prompt import Prompt
from rich.style import Style
from rich.syntax import Syntax
from rich.text import Text
from ldm.invoke import __version__
INVOKE_AI_SRC="https://github.com/invoke-ai/InvokeAI/archive"
INVOKE_AI_TAG="https://github.com/invoke-ai/InvokeAI/archive/refs/tags"
INVOKE_AI_BRANCH="https://github.com/invoke-ai/InvokeAI/archive/refs/heads"
INVOKE_AI_REL="https://api.github.com/repos/invoke-ai/InvokeAI/releases"
OS = platform.uname().system
@@ -30,6 +31,19 @@ else:
def get_versions()->dict:
return requests.get(url=INVOKE_AI_REL).json()
def invokeai_is_running()->bool:
for p in psutil.process_iter():
try:
cmdline = p.cmdline()
matches = [x for x in cmdline if x.endswith(('invokeai','invokeai.exe'))]
if matches:
print(f':exclamation: [bold red]An InvokeAI instance appears to be running as process {p.pid}[/red bold]')
return True
except psutil.AccessDenied:
continue
return False
def welcome(versions: dict):
@group()
@@ -41,7 +55,8 @@ def welcome(versions: dict):
yield '[bold yellow]Options:'
yield f'''[1] Update to the latest official release ([italic]{versions[0]['tag_name']}[/italic])
[2] Update to the bleeding-edge development version ([italic]main[/italic])
[3] Manually enter the tag or branch name you wish to update'''
[3] Manually enter the [bold]tag name[/bold] for the version you wish to update to
[4] Manually enter the [bold]branch name[/bold] for the version you wish to update to'''
console.rule()
print(
@@ -59,20 +74,33 @@ def welcome(versions: dict):
def main():
versions = get_versions()
if invokeai_is_running():
print(f':exclamation: [bold red]Please terminate all running instances of InvokeAI before updating.[/red bold]')
return
welcome(versions)
tag = None
choice = Prompt.ask('Choice:',choices=['1','2','3'],default='1')
branch = None
release = None
choice = Prompt.ask('Choice:',choices=['1','2','3','4'],default='1')
if choice=='1':
tag = versions[0]['tag_name']
release = versions[0]['tag_name']
elif choice=='2':
tag = 'main'
release = 'main'
elif choice=='3':
tag = Prompt.ask('Enter an InvokeAI tag or branch name')
tag = Prompt.ask('Enter an InvokeAI tag name')
elif choice=='4':
branch = Prompt.ask('Enter an InvokeAI branch name')
print(f':crossed_fingers: Upgrading to [yellow]{tag}[/yellow]')
cmd = f'pip install {INVOKE_AI_SRC}/{tag}.zip --use-pep517 --upgrade'
print(f':crossed_fingers: Upgrading to [yellow]{tag if tag else release}[/yellow]')
if release:
cmd = f'pip install {INVOKE_AI_SRC}/{release}.zip --use-pep517 --upgrade'
elif tag:
cmd = f'pip install {INVOKE_AI_TAG}/{tag}.zip --use-pep517 --upgrade'
else:
cmd = f'pip install {INVOKE_AI_BRANCH}/{branch}.zip --use-pep517 --upgrade'
print('')
print('')
if os.system(cmd)==0:

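# --- illustrative aside (not part of the original diff) ----------------------
# What the constructed upgrade command ends up looking like for each menu
# choice, with a hypothetical tag and branch name.
INVOKE_AI_SRC = "https://github.com/invoke-ai/InvokeAI/archive"
INVOKE_AI_TAG = "https://github.com/invoke-ai/InvokeAI/archive/refs/tags"
INVOKE_AI_BRANCH = "https://github.com/invoke-ai/InvokeAI/archive/refs/heads"

release, tag, branch = "v2.3.5", "v2.3.5", "v2.3"
cmd_release = f'pip install {INVOKE_AI_SRC}/{release}.zip --use-pep517 --upgrade'   # choices 1 and 2
cmd_tag     = f'pip install {INVOKE_AI_TAG}/{tag}.zip --use-pep517 --upgrade'       # choice 3
cmd_branch  = f'pip install {INVOKE_AI_BRANCH}/{branch}.zip --use-pep517 --upgrade' # choice 4
# ------------------------------------------------------------------------------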

@@ -196,16 +196,6 @@ class addModelsForm(npyscreen.FormMultiPage):
scroll_exit=True,
)
self.nextrely += 1
self.convert_models = self.add_widget_intelligent(
npyscreen.TitleSelectOne,
name="== CONVERT IMPORTED MODELS INTO DIFFUSERS==",
values=["Keep original format", "Convert to diffusers"],
value=0,
begin_entry_at=4,
max_height=4,
hidden=True, # will appear when imported models box is edited
scroll_exit=True,
)
self.cancel = self.add_widget_intelligent(
npyscreen.ButtonPress,
name="CANCEL",
@@ -240,8 +230,6 @@ class addModelsForm(npyscreen.FormMultiPage):
self.show_directory_fields.addVisibleWhenSelected(i)
self.show_directory_fields.when_value_edited = self._clear_scan_directory
self.import_model_paths.when_value_edited = self._show_hide_convert
self.autoload_directory.when_value_edited = self._show_hide_convert
def resize(self):
super().resize()
@@ -252,13 +240,6 @@ class addModelsForm(npyscreen.FormMultiPage):
if not self.show_directory_fields.value:
self.autoload_directory.value = ""
def _show_hide_convert(self):
model_paths = self.import_model_paths.value or ""
autoload_directory = self.autoload_directory.value or ""
self.convert_models.hidden = (
len(model_paths) == 0 and len(autoload_directory) == 0
)
def _get_starter_model_labels(self) -> List[str]:
window_width, window_height = get_terminal_size()
label_width = 25
@@ -318,7 +299,6 @@ class addModelsForm(npyscreen.FormMultiPage):
.scan_directory: Path to a directory of models to scan and import
.autoscan_on_startup: True if invokeai should scan and import at startup time
.import_model_paths: list of URLs, repo_ids and file paths to import
.convert_to_diffusers: if True, convert legacy checkpoints into diffusers
"""
# we're using a global here rather than storing the result in the parentapp
# due to some bug in npyscreen that is causing attributes to be lost
@@ -354,7 +334,6 @@ class addModelsForm(npyscreen.FormMultiPage):
# URLs and the like
selections.import_model_paths = self.import_model_paths.value.split()
selections.convert_to_diffusers = self.convert_models.value[0] == 1
class AddModelApplication(npyscreen.NPSAppManaged):
@@ -367,7 +346,6 @@ class AddModelApplication(npyscreen.NPSAppManaged):
scan_directory=None,
autoscan_on_startup=None,
import_model_paths=None,
convert_to_diffusers=None,
)
def onStart(self):
@@ -387,7 +365,6 @@ def process_and_execute(opt: Namespace, selections: Namespace):
directory_to_scan = selections.scan_directory
scan_at_startup = selections.autoscan_on_startup
potential_models_to_install = selections.import_model_paths
convert_to_diffusers = selections.convert_to_diffusers
install_requested_models(
install_initial_models=models_to_install,
@@ -395,7 +372,6 @@ def process_and_execute(opt: Namespace, selections: Namespace):
scan_directory=Path(directory_to_scan) if directory_to_scan else None,
external_models=potential_models_to_install,
scan_at_startup=scan_at_startup,
convert_to_diffusers=convert_to_diffusers,
precision="float32"
if opt.full_precision
else choose_precision(torch.device(choose_torch_device())),

View File

@@ -11,6 +11,7 @@ from tempfile import TemporaryFile
import requests
from diffusers import AutoencoderKL
from diffusers import logging as dlogging
from huggingface_hub import hf_hub_url
from omegaconf import OmegaConf
from omegaconf.dictconfig import DictConfig
@@ -29,7 +30,13 @@ Model_dir = "models"
Weights_dir = "ldm/stable-diffusion-v1/"
# the initial "configs" dir is now bundled in the `invokeai.configs` package
Dataset_path = Path(configs.__path__[0]) / "INITIAL_MODELS.yaml"
Dataset_path = None
for path in configs.__path__:
file = Path(path, "INITIAL_MODELS.yaml")
if file.exists():
Dataset_path = file
break
assert Dataset_path,f"Could not find the file INITIAL_MODELS.yaml in {configs.__path__}"
# initial models omegaconf
Datasets = None
@@ -62,7 +69,6 @@ def install_requested_models(
scan_directory: Path = None,
external_models: List[str] = None,
scan_at_startup: bool = False,
convert_to_diffusers: bool = False,
precision: str = "float16",
purge_deleted: bool = False,
config_file_path: Path = None,
@@ -105,20 +111,20 @@ def install_requested_models(
if len(external_models)>0:
print("== INSTALLING EXTERNAL MODELS ==")
for path_url_or_repo in external_models:
print(f'DEBUG: path_url_or_repo = {path_url_or_repo}')
try:
model_manager.heuristic_import(
path_url_or_repo,
convert=convert_to_diffusers,
config_file_callback=_pick_configuration_file,
commit_to_conf=config_file_path
)
except KeyboardInterrupt:
sys.exit(-1)
except Exception:
pass
except Exception as e:
print(f'An exception has occurred: {str(e)}')
if scan_at_startup and scan_directory.is_dir():
argument = '--autoconvert' if convert_to_diffusers else '--autoimport'
argument = '--autoconvert'
initfile = Path(Globals.root, Globals.initfile)
replacement = Path(Globals.root, f'{Globals.initfile}.new')
directory = str(scan_directory).replace('\\','/')
@@ -290,13 +296,21 @@ def _download_diffusion_weights(
mconfig: DictConfig, access_token: str, precision: str = "float32"
):
repo_id = mconfig["repo_id"]
revision = mconfig.get('revision',None)
model_class = (
StableDiffusionGeneratorPipeline
if mconfig.get("format", None) == "diffusers"
else AutoencoderKL
)
extra_arg_list = [{"revision": "fp16"}, {}] if precision == "float16" else [{}]
extra_arg_list = [{"revision": revision}] if revision \
else [{"revision": "fp16"}, {}] if precision == "float16" \
else [{}]
path = None
# quench safety checker warnings
verbosity = dlogging.get_verbosity()
dlogging.set_verbosity_error()
for extra_args in extra_arg_list:
try:
path = download_from_hf(
@@ -312,6 +326,7 @@ def _download_diffusion_weights(
print(f"An unexpected error occurred while downloading the model: {e})")
if path:
break
dlogging.set_verbosity(verbosity)
return path
@@ -442,6 +457,8 @@ def new_config_file_contents(
stanza["description"] = mod["description"]
stanza["repo_id"] = mod["repo_id"]
stanza["format"] = mod["format"]
if "revision" in mod:
stanza["revision"] = mod["revision"]
# diffusers don't need width and height (probably .ckpt doesn't either)
# so we no longer require these in INITIAL_MODELS.yaml
if "width" in mod:
@@ -466,10 +483,9 @@ def new_config_file_contents(
conf[model] = stanza
# if no default model was chosen, then we select the first
# one in the list
# if no default model was chosen, then we select the first one in the list
if not default_selected:
conf[list(successfully_downloaded.keys())[0]]["default"] = True
conf[list(conf.keys())[0]]["default"] = True
return OmegaConf.to_yaml(conf)

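The optional `revision` key that INITIAL_MODELS.yaml entries can now carry takes precedence over the fp16 fallback when fetching diffusers weights, and it is copied into the generated models.yaml stanza. Below is a rough sketch of the argument-list selection, mirroring `_download_diffusion_weights`; the config values are illustrative only.

```
# Sketch of the revision-selection logic in _download_diffusion_weights;
# the example config values are illustrative, not real model entries.
def download_args(mconfig: dict, precision: str = "float32") -> list[dict]:
    revision = mconfig.get("revision")
    if revision:                    # an explicit revision from the config wins
        return [{"revision": revision}]
    if precision == "float16":      # otherwise try the fp16 branch, then the default
        return [{"revision": "fp16"}, {}]
    return [{}]

print(download_args({"repo_id": "some-org/some-model", "revision": "main"}))
# [{'revision': 'main'}]
print(download_args({"repo_id": "some-org/some-model"}, precision="float16"))
# [{'revision': 'fp16'}, {}]
```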
View File

@@ -99,8 +99,9 @@ def expand_prompts(
sequence = 0
for command in commands:
sequence += 1
parent_conn.send(
command + f' --fnformat="dp.{sequence:04}.{{prompt}}.png"'
format = _get_fn_format(outdir, sequence)
parent_conn.send_bytes(
(command + f' --fnformat="{format}"').encode('utf-8')
)
parent_conn.close()
else:
@@ -110,7 +111,20 @@ def expand_prompts(
for p in children:
p.terminate()
def _get_fn_format(directory:str, sequence:int)->str:
"""
Get a filename that doesn't exceed filename length restrictions
on the current platform.
"""
try:
max_length = os.pathconf(directory,'PC_NAME_MAX')
except:
max_length = 255
prefix = f'dp.{sequence:04}.'
suffix = '.png'
max_length -= len(prefix)+len(suffix)
return f'{prefix}{{prompt:0.{max_length}}}{suffix}'
class MessageToStdin(object):
def __init__(self, connection: Connection):
self.connection = connection
@@ -119,7 +133,7 @@ class MessageToStdin(object):
def readline(self) -> str:
try:
if len(self.linebuffer) == 0:
message = self.connection.recv()
message = self.connection.recv_bytes().decode('utf-8')
self.linebuffer = message.split("\n")
result = self.linebuffer.pop(0)
return result

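`expand_prompts` now sends its commands across the pipe as UTF-8 bytes and builds the `--fnformat` template through `_get_fn_format`, which sizes the prompt placeholder so the final filename stays within the platform's limit. A small standalone sketch of that computation, assuming a typical 255-character filename limit:

```
import os

# Standalone sketch of _get_fn_format: compute how many characters of the prompt
# can appear in the filename once the "dp.NNNN." prefix and ".png" suffix are added.
def fn_format(directory: str, sequence: int) -> str:
    try:
        max_length = os.pathconf(directory, "PC_NAME_MAX")   # unavailable on Windows
    except (AttributeError, ValueError, OSError):
        max_length = 255
    prefix = f"dp.{sequence:04}."
    suffix = ".png"
    max_length -= len(prefix) + len(suffix)
    # the {prompt:0.N} placeholder is intended to cap the prompt text at N characters
    # when the template is filled in downstream
    return f"{prefix}{{prompt:0.{max_length}}}{suffix}"

print(fn_format(".", 1))   # dp.0001.{prompt:0.243}.png on a 255-character filesystem
```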
View File

@@ -29,6 +29,8 @@ from typing_extensions import ParamSpec
from ldm.invoke.globals import Globals
from ldm.models.diffusion.shared_invokeai_diffusion import InvokeAIDiffuserComponent, PostprocessingSettings
from ldm.modules.textual_inversion_manager import TextualInversionManager
from ldm.modules.lora_manager import LoraManager
from ldm.modules.peft_manager import PeftManager
from ..devices import normalize_device, CPU_DEVICE
from ..offloading import LazilyLoadedModelGroup, FullyLoadedModelGroup, ModelGroup
from ...models.diffusion.cross_attention_map_saving import AttentionMapSaver
@@ -289,11 +291,14 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
safety_checker=safety_checker,
feature_extractor=feature_extractor,
)
self.lora_manager = LoraManager(self)
self.peft_manager = PeftManager()
self.invokeai_diffuser = InvokeAIDiffuserComponent(self.unet, self._unet_forward, is_running_diffusers=True)
use_full_precision = (precision == 'float32' or precision == 'autocast')
self.textual_inversion_manager = TextualInversionManager(tokenizer=self.tokenizer,
text_encoder=self.text_encoder,
full_precision=use_full_precision)
# InvokeAI's interface for text embeddings and whatnot
self.embeddings_provider = EmbeddingsProvider(
tokenizer=self.tokenizer,
@@ -462,8 +467,9 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
if additional_guidance is None:
additional_guidance = []
extra_conditioning_info = conditioning_data.extra
with self.invokeai_diffuser.custom_attention_context(extra_conditioning_info=extra_conditioning_info,
step_count=len(self.scheduler.timesteps)
with InvokeAIDiffuserComponent.custom_attention_context(self.invokeai_diffuser.model,
extra_conditioning_info=extra_conditioning_info,
step_count=len(self.scheduler.timesteps)
):
yield PipelineIntermediateState(run_id=run_id, step=-1, timestep=self.scheduler.num_train_timesteps,

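`custom_attention_context` is now a classmethod that receives the unet explicitly: on entry it applies any requested LoRA conditions and installs the cross-attention-control processors, and on exit it restores the original attention processors and unloads the LoRAs. A hedged sketch of the call pattern used in the pipeline above; `pipeline` and `conditioning_data` are placeholders for real objects.

```
# Sketch of the new context-manager call style; `pipeline` and `conditioning_data`
# are placeholders, not concrete objects.
from ldm.models.diffusion.shared_invokeai_diffusion import InvokeAIDiffuserComponent

def denoise(pipeline, conditioning_data, run_id):
    extra_conditioning_info = conditioning_data.extra
    with InvokeAIDiffuserComponent.custom_attention_context(
        pipeline.invokeai_diffuser.model,            # the unet
        extra_conditioning_info=extra_conditioning_info,
        step_count=len(pipeline.scheduler.timesteps),
    ):
        # LoRA conditions and cross-attention-control processors are active here
        # and are removed again when the block exits
        ...
```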
View File

@@ -255,8 +255,8 @@ class Inpaint(Img2Img):
pipeline.scheduler = sampler
# todo: support cross-attention control
uc, c, _ = conditioning
conditioning_data = (ConditioningData(uc, c, cfg_scale)
uc, c, extra_conditioning_info = conditioning
conditioning_data = (ConditioningData(uc, c, cfg_scale, extra_conditioning_info)
.add_scheduler_args_if_applicable(pipeline.scheduler, eta=ddim_eta))

View File

@@ -23,6 +23,7 @@ Globals = Namespace()
Globals.initfile = 'invokeai.init'
Globals.models_file = 'models.yaml'
Globals.models_dir = 'models'
Globals.lora_models_dir = 'loras'
Globals.config_dir = 'configs'
Globals.autoscan_dir = 'weights'
Globals.converted_ckpts_dir = 'converted_ckpts'
@@ -61,7 +62,7 @@ Globals.sequential_guidance = False
Globals.full_precision = False
# whether we should convert ckpt files into diffusers models on the fly
Globals.ckpt_convert = False
Globals.ckpt_convert = True
# logging tokenization everywhere
Globals.log_tokenization = False
@@ -75,6 +76,11 @@ def global_config_dir()->Path:
def global_models_dir()->Path:
return Path(Globals.root, Globals.models_dir)
def global_lora_models_dir()->Path:
return Path(Globals.lora_models_dir) \
if Path(Globals.lora_models_dir).is_absolute() \
else Path(Globals.root, Globals.lora_models_dir)
def global_autoscan_dir()->Path:
return Path(Globals.root, Globals.autoscan_dir)

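`Globals.lora_models_dir` defaults to `loras` and is resolved against the InvokeAI root unless an absolute path is configured. A small sketch of the resolution rule in `global_lora_models_dir()`; the paths are examples only.

```
from pathlib import Path

# Sketch of global_lora_models_dir(); root and the override are example values.
def resolve_lora_dir(root: str, configured: str = "loras") -> Path:
    p = Path(configured)
    return p if p.is_absolute() else Path(root, configured)

print(resolve_lora_dir("/home/me/invokeai"))                 # /home/me/invokeai/loras
print(resolve_lora_dir("/home/me/invokeai", "/data/loras"))  # /data/loras
```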
ldm/invoke/invokeai_metadata.py (new executable file, 29 lines)
View File

@@ -0,0 +1,29 @@
#!/usr/bin/env python
import sys
import json
from ldm.invoke.pngwriter import retrieve_metadata
def main():
if len(sys.argv) < 2:
print("Usage: file2prompt.py <file1.png> <file2.png> <file3.png>...")
print("This script opens up the indicated invoke.py-generated PNG file(s) and prints out their metadata.")
exit(-1)
filenames = sys.argv[1:]
for f in filenames:
try:
metadata = retrieve_metadata(f)
print(f'{f}:\n',json.dumps(metadata['sd-metadata'], indent=4))
except FileNotFoundError:
sys.stderr.write(f'{f} not found\n')
continue
except PermissionError:
sys.stderr.write(f'{f} could not be opened due to inadequate permissions\n')
continue
if __name__ == "__main__":
try:
main()
except KeyboardInterrupt:
pass

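The new `invokeai_metadata.py` script dumps the `sd-metadata` block embedded in InvokeAI-generated PNGs. The same lookup can be done programmatically through `retrieve_metadata`; a brief sketch, with an example filename:

```
import json
from ldm.invoke.pngwriter import retrieve_metadata

# Print the generation parameters stored in an InvokeAI PNG; the path is an example.
metadata = retrieve_metadata("outputs/000001.1234567890.png")
print(json.dumps(metadata["sd-metadata"], indent=4))
```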
View File

@@ -19,7 +19,7 @@ import warnings
from enum import Enum
from pathlib import Path
from shutil import move, rmtree
from typing import Any, Callable, Optional, Union
from typing import Any, Callable, Optional, Union, List
import safetensors
import safetensors.torch
@@ -46,12 +46,7 @@ class SDLegacyType(Enum):
V2_v = 5
UNKNOWN = 99
DEFAULT_MAX_MODELS = 2
VAE_TO_REPO_ID = { # hack, see note in convert_and_import()
"vae-ft-mse-840000-ema-pruned": "stabilityai/sd-vae-ft-mse",
}
class ModelManager(object):
def __init__(
@@ -108,11 +103,7 @@ class ModelManager(object):
requested_model = self.models[model_name]["model"]
print(f">> Retrieving model {model_name} from system RAM cache")
self.models[model_name]["model"] = self._model_from_cpu(requested_model)
width = self.models[model_name]["width"]
height = self.models[model_name]["height"]
hash = self.models[model_name]["hash"]
else: # we're about to load a new model, so potentially offload the least recently used one
else:
requested_model, width, height, hash = self._load_model(model_name)
self.models[model_name] = {
"model": requested_model,
@@ -123,13 +114,8 @@ class ModelManager(object):
self.current_model = model_name
self._push_newest_model(model_name)
return {
"model": requested_model,
"width": width,
"height": height,
"hash": hash,
}
return self.models[model_name]
def default_model(self) -> str | None:
"""
Returns the name of the default model, or None
@@ -166,19 +152,6 @@ class ModelManager(object):
"""
return list(self.config.keys())
def is_legacy(self, model_name: str) -> bool:
"""
Return true if this is a legacy (.ckpt) model
"""
# if we are converting legacy files automatically, then
# there are no legacy ckpts!
if Globals.ckpt_convert:
return False
info = self.model_info(model_name)
if "weights" in info and info["weights"].endswith((".ckpt", ".safetensors")):
return True
return False
def list_models(self) -> dict:
"""
Return a dict of models in the format:
@@ -379,105 +352,49 @@ class ModelManager(object):
if not os.path.isabs(weights):
weights = os.path.normpath(os.path.join(Globals.root, weights))
# check whether this is a v2 file and force conversion
convert = Globals.ckpt_convert or self.is_v2_config(config)
if matching_config := self._scan_for_matching_file(Path(weights),suffixes=['.yaml']):
print(f' | Using external config file {matching_config}')
config = matching_config
# if converting automatically to diffusers, then we do the conversion and return
# a diffusers pipeline
if convert:
print(
f">> Converting legacy checkpoint {model_name} into a diffusers model..."
)
from ldm.invoke.ckpt_to_diffuser import load_pipeline_from_original_stable_diffusion_ckpt
self.offload_model(self.current_model)
if vae_config := self._choose_diffusers_vae(model_name):
vae = self._load_vae(vae_config)
if self._has_cuda():
torch.cuda.empty_cache()
pipeline = load_pipeline_from_original_stable_diffusion_ckpt(
checkpoint_path=weights,
original_config_file=config,
vae=vae,
return_generator_pipeline=True,
precision=torch.float16
if self.precision == "float16"
else torch.float32,
)
if self.sequential_offload:
pipeline.enable_offload_submodels(self.device)
else:
pipeline.to(self.device)
return (
pipeline,
width,
height,
"NOHASH",
)
# for usage statistics
if self._has_cuda():
torch.cuda.reset_peak_memory_stats()
torch.cuda.empty_cache()
# this does the work
if not os.path.isabs(config):
config = os.path.join(Globals.root, config)
omega_config = OmegaConf.load(config)
with open(weights, "rb") as f:
weight_bytes = f.read()
model_hash = self._cached_sha256(weights, weight_bytes)
sd = None
if weights.endswith(".ckpt"):
self.scan_model(model_name, weights)
sd = torch.load(io.BytesIO(weight_bytes), map_location="cpu")
else:
sd = safetensors.torch.load(weight_bytes)
del weight_bytes
# merged models from auto11 merge board are flat for some reason
if "state_dict" in sd:
sd = sd["state_dict"]
print(" | Forcing garbage collection prior to loading new model")
gc.collect()
model = instantiate_from_config(omega_config.model)
model.load_state_dict(sd, strict=False)
if self.precision == "float16":
print(" | Using faster float16 precision")
model = model.to(torch.float16)
else:
print(" | Using more accurate float32 precision")
# look and load a matching vae file. Code borrowed from AUTOMATIC1111 modules/sd_models.py
# get the path to the custom vae, if any
vae_path = None
# first we use whatever is in the config file
if vae:
if not os.path.isabs(vae):
vae = os.path.normpath(os.path.join(Globals.root, vae))
if os.path.exists(vae):
print(f" | Loading VAE weights from: {vae}")
if vae.endswith((".ckpt", ".pt")):
self.scan_model(vae, vae)
vae_ckpt = torch.load(vae, map_location="cpu")
else:
vae_ckpt = safetensors.torch.load_file(vae)
vae_dict = {k: v for k, v in vae_ckpt.items() if k[0:4] != "loss"}
model.first_stage_model.load_state_dict(vae_dict, strict=False)
else:
print(f" | VAE file {vae} not found. Skipping.")
path = Path(vae if os.path.isabs(vae) else os.path.normpath(os.path.join(Globals.root, vae)))
if path.exists():
vae_path = path
# then we look for a file with the same basename
vae_path = vae_path or self._scan_for_matching_file(Path(weights))
# Do the conversion and return a diffusers pipeline
print(
f">> Converting legacy checkpoint {model_name} into a diffusers model..."
)
from ldm.invoke.ckpt_to_diffuser import load_pipeline_from_original_stable_diffusion_ckpt
model.to(self.device)
# model.to doesn't change the cond_stage_model.device used to move the tokenizer output, so set it here
model.cond_stage_model.device = self.device
if self._has_cuda():
torch.cuda.empty_cache()
pipeline = load_pipeline_from_original_stable_diffusion_ckpt(
checkpoint_path=weights,
original_config_file=config,
vae_path=vae_path,
return_generator_pipeline=True,
precision=torch.float16
if self.precision == "float16"
else torch.float32,
)
if self.sequential_offload:
pipeline.enable_offload_submodels(self.device)
else:
pipeline.to(self.device)
model.eval()
return (
pipeline,
width,
height,
"NOHASH",
)
for module in model.modules():
if isinstance(module, (torch.nn.Conv2d, torch.nn.ConvTranspose2d)):
module._orig_padding_mode = module.padding_mode
return model, width, height, model_hash
def _load_diffusers_model(self, mconfig):
name_or_path = self.model_name_or_path(mconfig)
@@ -500,9 +417,9 @@ class ModelManager(object):
pipeline_args.update(cache_dir=global_cache_dir("hub"))
if using_fp16:
pipeline_args.update(torch_dtype=torch.float16)
fp_args_list = [{"revision": "fp16"}, {}]
else:
fp_args_list = [{}]
revision = mconfig.get('revision') or ('fp16' if using_fp16 else None)
fp_args_list = [{"revision": revision}] if revision else []
fp_args_list.append({})
verbosity = dlogging.get_verbosity()
dlogging.set_verbosity_error()
@@ -544,6 +461,8 @@ class ModelManager(object):
return pipeline, width, height, model_hash
def is_v2_config(self, config: Path) -> bool:
if not os.path.isabs(config):
config = os.path.join(Globals.root, config)
try:
mconfig = OmegaConf.load(config)
return (
@@ -758,7 +677,6 @@ class ModelManager(object):
def heuristic_import(
self,
path_url_or_repo: str,
convert: bool = False,
model_name: str = None,
description: str = None,
model_config_file: Path = None,
@@ -780,9 +698,6 @@ class ModelManager(object):
The model_name and/or description can be provided. If not, they will
be generated automatically.
If convert is true, legacy models will be converted to diffusers
before importing.
If commit_to_conf is provided, the newly loaded model will be written
to the `models.yaml` file at the indicated path. Otherwise, the changes
will only remain in memory.
@@ -818,7 +733,6 @@ class ModelManager(object):
print(f" | {thing} appears to be a diffusers file on disk")
model_name = self.import_diffuser_model(
thing,
vae=dict(repo_id="stabilityai/sd-vae-ft-mse"),
model_name=model_name,
description=description,
commit_to_conf=commit_to_conf,
@@ -839,7 +753,6 @@ class ModelManager(object):
):
if model_name := self.heuristic_import(
str(m),
convert,
commit_to_conf=commit_to_conf,
config_file_callback=config_file_callback,
):
@@ -906,11 +819,11 @@ class ModelManager(object):
)
elif model_type == SDLegacyType.V2:
print(
f"** {thing} is a V2 checkpoint file, but its parameterization cannot be determined. Please provide configuration file path."
f"** {thing} is a V2 checkpoint file, but its parameterization cannot be determined. Please provide the configuration file type or path."
)
else:
print(
f"** {thing} is a legacy checkpoint file but not a known Stable Diffusion model. Please provide configuration file path."
f"** {thing} is a legacy checkpoint file but not a known Stable Diffusion model. Please provide the configuration file type or path."
)
if not model_config_file and config_file_callback:
@@ -918,52 +831,28 @@ class ModelManager(object):
if not model_config_file:
return
if self.is_v2_config(model_config_file):
convert = True
print(" | This SD-v2 model will be converted to diffusers format for use")
# look for a custom vae
vae_path = None
for suffix in ["pt", "ckpt", "safetensors"]:
if (model_path.with_suffix(f".vae.{suffix}")).exists():
vae_path = model_path.with_suffix(f".vae.{suffix}")
print(f" | Using VAE file {vae_path.name}")
if (vae_path := self._scan_for_matching_file(model_path)):
print(f" | Using VAE file {vae_path.name}")
diffuser_path = Path(
Globals.root, "models", Globals.converted_ckpts_dir, model_path.stem
)
vae = None if vae_path else dict(repo_id="stabilityai/sd-vae-ft-mse")
if convert:
diffuser_path = Path(
Globals.root, "models", Globals.converted_ckpts_dir, model_path.stem
)
model_name = self.convert_and_import(
model_path,
diffusers_path=diffuser_path,
vae=vae,
vae_path=vae_path,
model_name=model_name,
model_description=description,
original_config_file=model_config_file,
commit_to_conf=commit_to_conf,
scan_needed=False,
)
# in the event that this file was downloaded automatically prior to conversion
# we do not keep the original .ckpt/.safetensors around
if is_temporary:
model_path.unlink(missing_ok=True)
else:
model_name = self.import_ckpt_model(
model_path,
config=model_config_file,
model_name=model_name,
model_description=description,
vae=str(
vae_path
or Path(
Globals.root,
"models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt",
)
),
commit_to_conf=commit_to_conf,
)
model_name = self.convert_and_import(
model_path,
diffusers_path=diffuser_path,
vae=vae,
vae_path=vae_path,
model_name=model_name,
model_description=description,
original_config_file=model_config_file,
commit_to_conf=commit_to_conf,
scan_needed=False,
)
# in the event that this file was downloaded automatically prior to conversion
# we do not keep the original .ckpt/.safetensors around
if is_temporary:
model_path.unlink(missing_ok=True)
if commit_to_conf:
self.commit(commit_to_conf)
return model_name
@@ -1006,14 +895,17 @@ class ModelManager(object):
try:
# By passing the specified VAE to the conversion function, the autoencoder
# will be built into the model rather than tacked on afterward via the config file
vae_model = self._load_vae(vae) if vae else None
vae_model=None
if vae:
vae_model=self._load_vae(vae)
vae_path=None
convert_ckpt_to_diffusers(
ckpt_path,
diffusers_path,
extract_ema=True,
original_config_file=original_config_file,
vae=vae_model,
vae_path=str(vae_path) if vae_path else None,
vae_path=vae_path,
scan_needed=scan_needed,
)
print(
@@ -1060,36 +952,6 @@ class ModelManager(object):
return search_folder, found_models
def _choose_diffusers_vae(
self, model_name: str, vae: str = None
) -> Union[dict, str]:
# In the event that the original entry is using a custom ckpt VAE, we try to
# map that VAE onto a diffuser VAE using a hard-coded dictionary.
# I would prefer to do this differently: We load the ckpt model into memory, swap the
# VAE in memory, and then pass that to convert_ckpt_to_diffusers() so that the swapped
# VAE is built into the model. However, when I tried this I got obscure key errors.
if vae:
return vae
if model_name in self.config and (
vae_ckpt_path := self.model_info(model_name).get("vae", None)
):
vae_basename = Path(vae_ckpt_path).stem
diffusers_vae = None
if diffusers_vae := VAE_TO_REPO_ID.get(vae_basename, None):
print(
f">> {vae_basename} VAE corresponds to known {diffusers_vae} diffusers version"
)
vae = {"repo_id": diffusers_vae}
else:
print(
f'** Custom VAE "{vae_basename}" found, but corresponding diffusers model unknown'
)
print(
'** Using "stabilityai/sd-vae-ft-mse"; If this isn\'t right, please edit the model config'
)
vae = {"repo_id": "stabilityai/sd-vae-ft-mse"}
return vae
def _make_cache_room(self) -> None:
num_loaded_models = len(self.models)
if num_loaded_models >= self.max_loaded_models:
@@ -1351,6 +1213,22 @@ class ModelManager(object):
f.write(hash)
return hash
@classmethod
def _scan_for_matching_file(
self,model_path: Path,
suffixes: List[str]=['.vae.pt','.vae.ckpt','.vae.safetensors']
)->Path:
"""
Find a file with same basename as the indicated model, but with one
of the suffixes passed.
"""
# look for a custom vae
vae_path = None
for suffix in suffixes:
if model_path.with_suffix(suffix).exists():
vae_path = model_path.with_suffix(suffix)
return vae_path
def _load_vae(self, vae_config) -> AutoencoderKL:
vae_args = {}
try:
@@ -1362,7 +1240,7 @@ class ModelManager(object):
using_fp16 = self.precision == "float16"
vae_args.update(
cache_dir=global_cache_dir("hug"),
cache_dir=global_cache_dir("hub"),
local_files_only=not Globals.internet_available,
)

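With this change, legacy `.ckpt`/`.safetensors` checkpoints are always converted to diffusers on load, and a sidecar VAE or YAML config sitting next to the checkpoint is picked up automatically via `_scan_for_matching_file`. A hedged, standalone sketch of that sidecar lookup; the model filename is invented for illustration.

```
from pathlib import Path
from typing import Optional

# Standalone sketch of _scan_for_matching_file: look for a sibling file that shares
# the checkpoint's basename but carries one of the given suffixes.
def scan_for_matching_file(
    model_path: Path,
    suffixes=(".vae.pt", ".vae.ckpt", ".vae.safetensors"),
) -> Optional[Path]:
    found = None
    for suffix in suffixes:
        candidate = model_path.with_suffix(suffix)
        if candidate.exists():
            found = candidate
    return found

# e.g. my-model.safetensors with my-model.vae.pt and my-model.yaml beside it
vae_path = scan_for_matching_file(Path("models/my-model.safetensors"))
config_path = scan_for_matching_file(Path("models/my-model.safetensors"), suffixes=(".yaml",))
```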
View File

@@ -11,9 +11,11 @@ seeds:
import os
import re
import atexit
from typing import List
from ldm.invoke.args import Args
from ldm.invoke.concepts_lib import HuggingFaceConceptsLibrary
from ldm.invoke.globals import Globals
from ldm.modules.lora_manager import LoraManager
# ---------------readline utilities---------------------
try:
@@ -136,6 +138,9 @@ class Completer(object):
elif re.search('<[\w-]*$',buffer):
self.matches= self._concept_completions(text,state)
elif re.search('withLora\(?[a-zA-Z0-9._-]*$',buffer):
self.matches= self._lora_completions(text,state)
# looking for a model
elif re.match('^'+'|'.join(MODEL_COMMANDS),buffer):
self.matches= self._model_completions(text, state)
@@ -298,6 +303,15 @@ class Completer(object):
matches.sort()
return matches
def _lora_completions(self, text, state)->List[str]:
loras: dict = LoraManager.list_loras()
lora_names = [f'withLora({x},1)' for x in loras.keys()]
matches = list()
for lora in lora_names:
if lora.startswith(text):
matches.append(lora)
return sorted(matches)
def _model_completions(self, text, state, ckpt_only=False):
m = re.search('(!switch\s+)(\w*)',text)
if m:

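Prompt completion now understands the `withLora(...)` syntax: pressing tab after `withLora(` offers each installed LoRA as `withLora(<name>,1)`. A small sketch of the matching step, with a made-up dictionary standing in for `LoraManager.list_loras()`:

```
from typing import List

# Sketch of _lora_completions with a hard-coded stand-in for LoraManager.list_loras();
# the LoRA names are invented for illustration.
def lora_completions(text: str, loras: dict) -> List[str]:
    candidates = [f"withLora({name},1)" for name in loras]
    return sorted(c for c in candidates if c.startswith(text))

fake_loras = {"ghibli-style": None, "pixel-art": None}
print(lora_completions("withLora(g", fake_loras))
# ['withLora(ghibli-style,1)']
```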
View File

@@ -288,16 +288,7 @@ class InvokeAICrossAttentionMixin:
return self.einsum_op_tensor_mem(q, k, v, 32)
def restore_default_cross_attention(model, is_running_diffusers: bool, restore_attention_processor: Optional[AttnProcessor]=None):
if is_running_diffusers:
unet = model
unet.set_attn_processor(restore_attention_processor or CrossAttnProcessor())
else:
remove_attention_function(model)
def override_cross_attention(model, context: Context, is_running_diffusers = False):
def setup_cross_attention_control_attention_processors(unet: UNet2DConditionModel, context: Context):
"""
Inject attention parameters and functions into the passed in model to enable cross attention editing.
@@ -323,24 +314,15 @@ def override_cross_attention(model, context: Context, is_running_diffusers = Fal
context.cross_attention_mask = mask.to(device)
context.cross_attention_index_map = indices.to(device)
if is_running_diffusers:
unet = model
old_attn_processors = unet.attn_processors
if torch.backends.mps.is_available():
# see note in StableDiffusionGeneratorPipeline.__init__ about borked slicing on MPS
unet.set_attn_processor(SwapCrossAttnProcessor())
else:
# try to re-use an existing slice size
default_slice_size = 4
slice_size = next((p.slice_size for p in old_attn_processors.values() if type(p) is SlicedAttnProcessor), default_slice_size)
unet.set_attn_processor(SlicedSwapCrossAttnProcesser(slice_size=slice_size))
return old_attn_processors
old_attn_processors = unet.attn_processors
if torch.backends.mps.is_available():
# see note in StableDiffusionGeneratorPipeline.__init__ about borked slicing on MPS
unet.set_attn_processor(SwapCrossAttnProcessor())
else:
context.register_cross_attention_modules(model)
inject_attention_function(model, context)
return None
# try to re-use an existing slice size
default_slice_size = 4
slice_size = next((p.slice_size for p in old_attn_processors.values() if type(p) is SlicedAttnProcessor), default_slice_size)
unet.set_attn_processor(SlicedSwapCrossAttnProcesser(slice_size=slice_size))
def get_cross_attention_modules(model, which: CrossAttentionType) -> list[tuple[str, InvokeAICrossAttentionMixin]]:

View File

@@ -12,17 +12,6 @@ class DDIMSampler(Sampler):
self.invokeai_diffuser = InvokeAIDiffuserComponent(self.model,
model_forward_callback = lambda x, sigma, cond: self.model.apply_model(x, sigma, cond))
def prepare_to_sample(self, t_enc, **kwargs):
super().prepare_to_sample(t_enc, **kwargs)
extra_conditioning_info = kwargs.get('extra_conditioning_info', None)
all_timesteps_count = kwargs.get('all_timesteps_count', t_enc)
if extra_conditioning_info is not None and extra_conditioning_info.wants_cross_attention_control:
self.invokeai_diffuser.override_cross_attention(extra_conditioning_info, step_count = all_timesteps_count)
else:
self.invokeai_diffuser.restore_default_cross_attention()
# This is the central routine
@torch.no_grad()

View File

@@ -19,7 +19,7 @@ from functools import partial
from tqdm import tqdm
from torchvision.utils import make_grid
from pytorch_lightning.utilities.distributed import rank_zero_only
from omegaconf import ListConfig
from omegaconf import ListConfig, OmegaConf
import urllib
from ldm.modules.textual_inversion_manager import TextualInversionManager
@@ -609,6 +609,7 @@ class DDPM(pl.LightningModule):
opt = torch.optim.AdamW(params, lr=lr)
return opt
class LatentDiffusion(DDPM):
"""main class"""
@@ -617,7 +618,7 @@ class LatentDiffusion(DDPM):
self,
first_stage_config,
cond_stage_config,
personalization_config,
personalization_config=None,
num_timesteps_cond=None,
cond_stage_key='image',
cond_stage_trainable=False,
@@ -675,7 +676,8 @@ class LatentDiffusion(DDPM):
self.model.train = disabled_train
for param in self.model.parameters():
param.requires_grad = False
personalization_config = personalization_config or self._fallback_personalization_config()
self.embedding_manager = self.instantiate_embedding_manager(
personalization_config, self.cond_stage_model
)
@@ -2150,6 +2152,25 @@ class LatentDiffusion(DDPM):
self.emb_ckpt_counter += 500
@classmethod
def _fallback_personalization_config(self)->dict:
"""
This protects us against custom legacy config files that
don't contain the personalization_config section.
"""
return OmegaConf.create(
dict(
target='ldm.modules.embedding_manager.EmbeddingManager',
params=dict(
placeholder_strings=list('*'),
initializer_words=list('sculpture'),
per_image_tokens=False,
num_vectors_per_token=1,
progressive_words=False,
)
)
)
class DiffusionWrapper(pl.LightningModule):
def __init__(self, diff_model_config, conditioning_key):

View File

@@ -38,15 +38,6 @@ class CFGDenoiser(nn.Module):
model_forward_callback=lambda x, sigma, cond: self.inner_model(x, sigma, cond=cond))
def prepare_to_sample(self, t_enc, **kwargs):
extra_conditioning_info = kwargs.get('extra_conditioning_info', None)
if extra_conditioning_info is not None and extra_conditioning_info.wants_cross_attention_control:
self.invokeai_diffuser.override_cross_attention(extra_conditioning_info, step_count = t_enc)
else:
self.invokeai_diffuser.restore_default_cross_attention()
def forward(self, x, sigma, uncond, cond, cond_scale):
next_x = self.invokeai_diffuser.do_diffusion_step(x, sigma, uncond, cond, cond_scale)

View File

@@ -14,17 +14,6 @@ class PLMSSampler(Sampler):
def __init__(self, model, schedule='linear', device=None, **kwargs):
super().__init__(model,schedule,model.num_timesteps, device)
def prepare_to_sample(self, t_enc, **kwargs):
super().prepare_to_sample(t_enc, **kwargs)
extra_conditioning_info = kwargs.get('extra_conditioning_info', None)
all_timesteps_count = kwargs.get('all_timesteps_count', t_enc)
if extra_conditioning_info is not None and extra_conditioning_info.wants_cross_attention_control:
self.invokeai_diffuser.override_cross_attention(extra_conditioning_info, step_count = all_timesteps_count)
else:
self.invokeai_diffuser.restore_default_cross_attention()
# this is the essential routine
@torch.no_grad()

View File

@@ -1,25 +1,36 @@
from contextlib import contextmanager
from dataclasses import dataclass
from math import ceil
from typing import Callable, Optional, Union, Any, Dict
from typing import Callable, Optional, Union, Any
import numpy as np
import torch
from diffusers.models.cross_attention import AttnProcessor
from diffusers import UNet2DConditionModel
from typing_extensions import TypeAlias
from ldm.invoke.globals import Globals
from ldm.models.diffusion.cross_attention_control import Arguments, \
restore_default_cross_attention, override_cross_attention, Context, get_cross_attention_modules, \
CrossAttentionType, SwapCrossAttnContext
from ldm.models.diffusion.cross_attention_control import (
Arguments,
setup_cross_attention_control_attention_processors,
Context,
get_cross_attention_modules,
CrossAttentionType,
SwapCrossAttnContext,
)
from ldm.models.diffusion.cross_attention_map_saving import AttentionMapSaver
from ldm.modules.lora_manager import LoraCondition
ModelForwardCallback: TypeAlias = Union[
# x, t, conditioning, Optional[cross-attention kwargs]
Callable[[torch.Tensor, torch.Tensor, torch.Tensor, Optional[dict[str, Any]]], torch.Tensor],
Callable[[torch.Tensor, torch.Tensor, torch.Tensor], torch.Tensor]
Callable[
[torch.Tensor, torch.Tensor, torch.Tensor, Optional[dict[str, Any]]],
torch.Tensor,
],
Callable[[torch.Tensor, torch.Tensor, torch.Tensor], torch.Tensor],
]
@dataclass(frozen=True)
class PostprocessingSettings:
threshold: float
@@ -29,31 +40,39 @@ class PostprocessingSettings:
class InvokeAIDiffuserComponent:
'''
"""
The aim of this component is to provide a single place for code that can be applied identically to
all InvokeAI diffusion procedures.
At the moment it includes the following features:
* Cross attention control ("prompt2prompt")
* Hybrid conditioning (used for inpainting)
'''
* "LoRA" and "PEFT" augmentations to the unet's attention weights
"""
debug_thresholding = False
sequential_guidance = False
@dataclass
class ExtraConditioningInfo:
tokens_count_including_eos_bos: int
cross_attention_control_args: Optional[Arguments] = None
lora_conditions: Optional[list[LoraCondition]] = None
@property
def wants_cross_attention_control(self):
return self.cross_attention_control_args is not None
@property
def has_lora_conditions(self):
return self.lora_conditions is not None
def __init__(self, model, model_forward_callback: ModelForwardCallback,
is_running_diffusers: bool=False,
):
def __init__(
self,
model,
model_forward_callback: ModelForwardCallback,
is_running_diffusers: bool = False,
):
"""
:param model: the unet model to pass through to cross attention control
:param model_forward_callback: a lambda with arguments (x, sigma, conditioning_to_apply). will be called repeatedly. most likely, this should simply call model.forward(x, sigma, conditioning)
@@ -65,43 +84,44 @@ class InvokeAIDiffuserComponent:
self.cross_attention_control_context = None
self.sequential_guidance = Globals.sequential_guidance
@classmethod
@contextmanager
def custom_attention_context(self,
extra_conditioning_info: Optional[ExtraConditioningInfo],
step_count: int):
do_swap = extra_conditioning_info is not None and extra_conditioning_info.wants_cross_attention_control
old_attn_processor = None
if do_swap:
old_attn_processor = self.override_cross_attention(extra_conditioning_info,
step_count=step_count)
def custom_attention_context(
clss,
unet: UNet2DConditionModel, # note: also may futz with the text encoder depending on requested LoRAs
extra_conditioning_info: Optional[ExtraConditioningInfo],
step_count: int
):
old_attn_processors = None
if extra_conditioning_info and (
extra_conditioning_info.wants_cross_attention_control
| extra_conditioning_info.has_lora_conditions
):
old_attn_processors = unet.attn_processors
# Load lora conditions into the model
if extra_conditioning_info.has_lora_conditions:
for condition in extra_conditioning_info.lora_conditions:
condition() # target model is stored in condition state for some reason
if extra_conditioning_info.wants_cross_attention_control:
cross_attention_control_context = Context(
arguments=extra_conditioning_info.cross_attention_control_args,
step_count=step_count,
)
setup_cross_attention_control_attention_processors(
unet,
cross_attention_control_context,
)
try:
yield None
finally:
if old_attn_processor is not None:
self.restore_default_cross_attention(old_attn_processor)
if old_attn_processors is not None:
unet.set_attn_processor(old_attn_processors)
if extra_conditioning_info and extra_conditioning_info.has_lora_conditions:
for lora_condition in extra_conditioning_info.lora_conditions:
lora_condition.unload()
# TODO resuscitate attention map saving
#self.remove_attention_map_saving()
def override_cross_attention(self, conditioning: ExtraConditioningInfo, step_count: int) -> Dict[str, AttnProcessor]:
"""
setup cross attention .swap control. for diffusers this replaces the attention processor, so
the previous attention processor is returned so that the caller can restore it later.
"""
self.conditioning = conditioning
self.cross_attention_control_context = Context(
arguments=self.conditioning.cross_attention_control_args,
step_count=step_count
)
return override_cross_attention(self.model,
self.cross_attention_control_context,
is_running_diffusers=self.is_running_diffusers)
def restore_default_cross_attention(self, restore_attention_processor: Optional['AttnProcessor']=None):
self.conditioning = None
self.cross_attention_control_context = None
restore_default_cross_attention(self.model,
is_running_diffusers=self.is_running_diffusers,
restore_attention_processor=restore_attention_processor)
# self.remove_attention_map_saving()
def setup_attention_map_saving(self, saver: AttentionMapSaver):
def callback(slice, dim, offset, slice_size, key):
@@ -110,26 +130,40 @@ class InvokeAIDiffuserComponent:
return
saver.add_attention_maps(slice, key)
tokens_cross_attention_modules = get_cross_attention_modules(self.model, CrossAttentionType.TOKENS)
tokens_cross_attention_modules = get_cross_attention_modules(
self.model, CrossAttentionType.TOKENS
)
for identifier, module in tokens_cross_attention_modules:
key = ('down' if identifier.startswith('down') else
'up' if identifier.startswith('up') else
'mid')
key = (
"down"
if identifier.startswith("down")
else "up"
if identifier.startswith("up")
else "mid"
)
module.set_attention_slice_calculated_callback(
lambda slice, dim, offset, slice_size, key=key: callback(slice, dim, offset, slice_size, key))
lambda slice, dim, offset, slice_size, key=key: callback(
slice, dim, offset, slice_size, key
)
)
def remove_attention_map_saving(self):
tokens_cross_attention_modules = get_cross_attention_modules(self.model, CrossAttentionType.TOKENS)
tokens_cross_attention_modules = get_cross_attention_modules(
self.model, CrossAttentionType.TOKENS
)
for _, module in tokens_cross_attention_modules:
module.set_attention_slice_calculated_callback(None)
def do_diffusion_step(self, x: torch.Tensor, sigma: torch.Tensor,
unconditioning: Union[torch.Tensor,dict],
conditioning: Union[torch.Tensor,dict],
unconditional_guidance_scale: float,
step_index: Optional[int]=None,
total_step_count: Optional[int]=None,
):
def do_diffusion_step(
self,
x: torch.Tensor,
sigma: torch.Tensor,
unconditioning: Union[torch.Tensor, dict],
conditioning: Union[torch.Tensor, dict],
unconditional_guidance_scale: float,
step_index: Optional[int] = None,
total_step_count: Optional[int] = None,
):
"""
:param x: current latents
:param sigma: aka t, passed to the internal model to control how much denoising will occur
@@ -140,33 +174,55 @@ class InvokeAIDiffuserComponent:
:return: the new latents after applying the model to x using unscaled unconditioning and CFG-scaled conditioning.
"""
cross_attention_control_types_to_do = []
context: Context = self.cross_attention_control_context
if self.cross_attention_control_context is not None:
percent_through = self.calculate_percent_through(sigma, step_index, total_step_count)
cross_attention_control_types_to_do = context.get_active_cross_attention_control_types_for_step(percent_through)
percent_through = self.calculate_percent_through(
sigma, step_index, total_step_count
)
cross_attention_control_types_to_do = (
context.get_active_cross_attention_control_types_for_step(
percent_through
)
)
wants_cross_attention_control = (len(cross_attention_control_types_to_do) > 0)
wants_cross_attention_control = len(cross_attention_control_types_to_do) > 0
wants_hybrid_conditioning = isinstance(conditioning, dict)
if wants_hybrid_conditioning:
unconditioned_next_x, conditioned_next_x = self._apply_hybrid_conditioning(x, sigma, unconditioning,
conditioning)
unconditioned_next_x, conditioned_next_x = self._apply_hybrid_conditioning(
x, sigma, unconditioning, conditioning
)
elif wants_cross_attention_control:
unconditioned_next_x, conditioned_next_x = self._apply_cross_attention_controlled_conditioning(x, sigma,
unconditioning,
conditioning,
cross_attention_control_types_to_do)
(
unconditioned_next_x,
conditioned_next_x,
) = self._apply_cross_attention_controlled_conditioning(
x,
sigma,
unconditioning,
conditioning,
cross_attention_control_types_to_do,
)
elif self.sequential_guidance:
unconditioned_next_x, conditioned_next_x = self._apply_standard_conditioning_sequentially(
x, sigma, unconditioning, conditioning)
(
unconditioned_next_x,
conditioned_next_x,
) = self._apply_standard_conditioning_sequentially(
x, sigma, unconditioning, conditioning
)
else:
unconditioned_next_x, conditioned_next_x = self._apply_standard_conditioning(
x, sigma, unconditioning, conditioning)
(
unconditioned_next_x,
conditioned_next_x,
) = self._apply_standard_conditioning(
x, sigma, unconditioning, conditioning
)
combined_next_x = self._combine(unconditioned_next_x, conditioned_next_x, unconditional_guidance_scale)
combined_next_x = self._combine(
unconditioned_next_x, conditioned_next_x, unconditional_guidance_scale
)
return combined_next_x
@@ -176,24 +232,33 @@ class InvokeAIDiffuserComponent:
latents: torch.Tensor,
sigma,
step_index,
total_step_count
total_step_count,
) -> torch.Tensor:
if postprocessing_settings is not None:
percent_through = self.calculate_percent_through(sigma, step_index, total_step_count)
latents = self.apply_threshold(postprocessing_settings, latents, percent_through)
latents = self.apply_symmetry(postprocessing_settings, latents, percent_through)
percent_through = self.calculate_percent_through(
sigma, step_index, total_step_count
)
latents = self.apply_threshold(
postprocessing_settings, latents, percent_through
)
latents = self.apply_symmetry(
postprocessing_settings, latents, percent_through
)
return latents
def calculate_percent_through(self, sigma, step_index, total_step_count):
if step_index is not None and total_step_count is not None:
# 🧨diffusers codepath
percent_through = step_index / total_step_count # will never reach 1.0 - this is deliberate
percent_through = (
step_index / total_step_count
) # will never reach 1.0 - this is deliberate
else:
# legacy compvis codepath
# TODO remove when compvis codepath support is dropped
if step_index is None and sigma is None:
raise ValueError(
f"Either step_index or sigma is required when doing cross attention control, but both are None.")
f"Either step_index or sigma is required when doing cross attention control, but both are None."
)
percent_through = self.estimate_percent_through(step_index, sigma)
return percent_through
@@ -204,24 +269,30 @@ class InvokeAIDiffuserComponent:
x_twice = torch.cat([x] * 2)
sigma_twice = torch.cat([sigma] * 2)
both_conditionings = torch.cat([unconditioning, conditioning])
both_results = self.model_forward_callback(x_twice, sigma_twice, both_conditionings)
both_results = self.model_forward_callback(
x_twice, sigma_twice, both_conditionings
)
unconditioned_next_x, conditioned_next_x = both_results.chunk(2)
if conditioned_next_x.device.type == 'mps':
if conditioned_next_x.device.type == "mps":
# prevent a result filled with zeros. seems to be a torch bug.
conditioned_next_x = conditioned_next_x.clone()
return unconditioned_next_x, conditioned_next_x
def _apply_standard_conditioning_sequentially(self, x: torch.Tensor, sigma, unconditioning: torch.Tensor, conditioning: torch.Tensor):
def _apply_standard_conditioning_sequentially(
self,
x: torch.Tensor,
sigma,
unconditioning: torch.Tensor,
conditioning: torch.Tensor,
):
# low-memory sequential path
unconditioned_next_x = self.model_forward_callback(x, sigma, unconditioning)
conditioned_next_x = self.model_forward_callback(x, sigma, conditioning)
if conditioned_next_x.device.type == 'mps':
if conditioned_next_x.device.type == "mps":
# prevent a result filled with zeros. seems to be a torch bug.
conditioned_next_x = conditioned_next_x.clone()
return unconditioned_next_x, conditioned_next_x
def _apply_hybrid_conditioning(self, x, sigma, unconditioning, conditioning):
assert isinstance(conditioning, dict)
assert isinstance(unconditioning, dict)
@@ -236,48 +307,80 @@ class InvokeAIDiffuserComponent:
]
else:
both_conditionings[k] = torch.cat([unconditioning[k], conditioning[k]])
unconditioned_next_x, conditioned_next_x = self.model_forward_callback(x_twice, sigma_twice, both_conditionings).chunk(2)
unconditioned_next_x, conditioned_next_x = self.model_forward_callback(
x_twice, sigma_twice, both_conditionings
).chunk(2)
return unconditioned_next_x, conditioned_next_x
def _apply_cross_attention_controlled_conditioning(self,
x: torch.Tensor,
sigma,
unconditioning,
conditioning,
cross_attention_control_types_to_do):
def _apply_cross_attention_controlled_conditioning(
self,
x: torch.Tensor,
sigma,
unconditioning,
conditioning,
cross_attention_control_types_to_do,
):
if self.is_running_diffusers:
return self._apply_cross_attention_controlled_conditioning__diffusers(x, sigma, unconditioning,
conditioning,
cross_attention_control_types_to_do)
return self._apply_cross_attention_controlled_conditioning__diffusers(
x,
sigma,
unconditioning,
conditioning,
cross_attention_control_types_to_do,
)
else:
return self._apply_cross_attention_controlled_conditioning__compvis(x, sigma, unconditioning, conditioning,
cross_attention_control_types_to_do)
return self._apply_cross_attention_controlled_conditioning__compvis(
x,
sigma,
unconditioning,
conditioning,
cross_attention_control_types_to_do,
)
def _apply_cross_attention_controlled_conditioning__diffusers(self,
x: torch.Tensor,
sigma,
unconditioning,
conditioning,
cross_attention_control_types_to_do):
def _apply_cross_attention_controlled_conditioning__diffusers(
self,
x: torch.Tensor,
sigma,
unconditioning,
conditioning,
cross_attention_control_types_to_do,
):
context: Context = self.cross_attention_control_context
cross_attn_processor_context = SwapCrossAttnContext(modified_text_embeddings=context.arguments.edited_conditioning,
index_map=context.cross_attention_index_map,
mask=context.cross_attention_mask,
cross_attention_types_to_do=[])
cross_attn_processor_context = SwapCrossAttnContext(
modified_text_embeddings=context.arguments.edited_conditioning,
index_map=context.cross_attention_index_map,
mask=context.cross_attention_mask,
cross_attention_types_to_do=[],
)
# no cross attention for unconditioning (negative prompt)
unconditioned_next_x = self.model_forward_callback(x, sigma, unconditioning,
{"swap_cross_attn_context": cross_attn_processor_context})
unconditioned_next_x = self.model_forward_callback(
x,
sigma,
unconditioning,
{"swap_cross_attn_context": cross_attn_processor_context},
)
# do requested cross attention types for conditioning (positive prompt)
cross_attn_processor_context.cross_attention_types_to_do = cross_attention_control_types_to_do
conditioned_next_x = self.model_forward_callback(x, sigma, conditioning,
{"swap_cross_attn_context": cross_attn_processor_context})
cross_attn_processor_context.cross_attention_types_to_do = (
cross_attention_control_types_to_do
)
conditioned_next_x = self.model_forward_callback(
x,
sigma,
conditioning,
{"swap_cross_attn_context": cross_attn_processor_context},
)
return unconditioned_next_x, conditioned_next_x
def _apply_cross_attention_controlled_conditioning__compvis(self, x:torch.Tensor, sigma, unconditioning, conditioning, cross_attention_control_types_to_do):
def _apply_cross_attention_controlled_conditioning__compvis(
self,
x: torch.Tensor,
sigma,
unconditioning,
conditioning,
cross_attention_control_types_to_do,
):
# print('pct', percent_through, ': doing cross attention control on', cross_attention_control_types_to_do)
# slower non-batched path (20% slower on mac MPS)
# We are only interested in using attention maps for conditioned_next_x, but batching them with generation of
@@ -287,24 +390,26 @@ class InvokeAIDiffuserComponent:
# representing batched uncond + cond, but then when it comes to applying the saved attention, the
# wrangler gets an attention tensor which only has shape[0]=8, representing just self.edited_conditionings.)
# todo: give CrossAttentionControl's `wrangler` function more info so it can work with a batched call as well.
context:Context = self.cross_attention_control_context
context: Context = self.cross_attention_control_context
try:
unconditioned_next_x = self.model_forward_callback(x, sigma, unconditioning)
# process x using the original prompt, saving the attention maps
#print("saving attention maps for", cross_attention_control_types_to_do)
# print("saving attention maps for", cross_attention_control_types_to_do)
for ca_type in cross_attention_control_types_to_do:
context.request_save_attention_maps(ca_type)
_ = self.model_forward_callback(x, sigma, conditioning)
context.clear_requests(cleanup=False)
# process x again, using the saved attention maps to control where self.edited_conditioning will be applied
#print("applying saved attention maps for", cross_attention_control_types_to_do)
# print("applying saved attention maps for", cross_attention_control_types_to_do)
for ca_type in cross_attention_control_types_to_do:
context.request_apply_saved_attention_maps(ca_type)
edited_conditioning = self.conditioning.cross_attention_control_args.edited_conditioning
conditioned_next_x = self.model_forward_callback(x, sigma, edited_conditioning)
edited_conditioning = context.arguments.edited_conditioning
conditioned_next_x = self.model_forward_callback(
x, sigma, edited_conditioning
)
context.clear_requests(cleanup=True)
except:
@@ -323,17 +428,21 @@ class InvokeAIDiffuserComponent:
self,
postprocessing_settings: PostprocessingSettings,
latents: torch.Tensor,
percent_through: float
percent_through: float,
) -> torch.Tensor:
if postprocessing_settings.threshold is None or postprocessing_settings.threshold == 0.0:
if (
postprocessing_settings.threshold is None
or postprocessing_settings.threshold == 0.0
):
return latents
threshold = postprocessing_settings.threshold
warmup = postprocessing_settings.warmup
if percent_through < warmup:
current_threshold = threshold + threshold * 5 * (1 - (percent_through / warmup))
current_threshold = threshold + threshold * 5 * (
1 - (percent_through / warmup)
)
else:
current_threshold = threshold
@@ -347,10 +456,14 @@ class InvokeAIDiffuserComponent:
if self.debug_thresholding:
std, mean = [i.item() for i in torch.std_mean(latents)]
outside = torch.count_nonzero((latents < -current_threshold) | (latents > current_threshold))
print(f"\nThreshold: %={percent_through} threshold={current_threshold:.3f} (of {threshold:.3f})\n"
f" | min, mean, max = {minval:.3f}, {mean:.3f}, {maxval:.3f}\tstd={std}\n"
f" | {outside / latents.numel() * 100:.2f}% values outside threshold")
outside = torch.count_nonzero(
(latents < -current_threshold) | (latents > current_threshold)
)
print(
f"\nThreshold: %={percent_through} threshold={current_threshold:.3f} (of {threshold:.3f})\n"
f" | min, mean, max = {minval:.3f}, {mean:.3f}, {maxval:.3f}\tstd={std}\n"
f" | {outside / latents.numel() * 100:.2f}% values outside threshold"
)
if maxval < current_threshold and minval > -current_threshold:
return latents
@@ -363,17 +476,23 @@ class InvokeAIDiffuserComponent:
latents = torch.clone(latents)
maxval = np.clip(maxval * scale, 1, current_threshold)
num_altered += torch.count_nonzero(latents > maxval)
latents[latents > maxval] = torch.rand_like(latents[latents > maxval]) * maxval
latents[latents > maxval] = (
torch.rand_like(latents[latents > maxval]) * maxval
)
if minval < -current_threshold:
latents = torch.clone(latents)
minval = np.clip(minval * scale, -current_threshold, -1)
num_altered += torch.count_nonzero(latents < minval)
latents[latents < minval] = torch.rand_like(latents[latents < minval]) * minval
latents[latents < minval] = (
torch.rand_like(latents[latents < minval]) * minval
)
if self.debug_thresholding:
print(f" | min, , max = {minval:.3f}, , {maxval:.3f}\t(scaled by {scale})\n"
f" | {num_altered / latents.numel() * 100:.2f}% values altered")
print(
f" | min, , max = {minval:.3f}, , {maxval:.3f}\t(scaled by {scale})\n"
f" | {num_altered / latents.numel() * 100:.2f}% values altered"
)
return latents
@@ -381,9 +500,8 @@ class InvokeAIDiffuserComponent:
self,
postprocessing_settings: PostprocessingSettings,
latents: torch.Tensor,
percent_through: float
percent_through: float,
) -> torch.Tensor:
# Reset our last percent through if this is our first step.
if percent_through == 0.0:
self.last_percent_through = 0.0
@@ -393,36 +511,52 @@ class InvokeAIDiffuserComponent:
# Check for out of bounds
h_symmetry_time_pct = postprocessing_settings.h_symmetry_time_pct
if (h_symmetry_time_pct is not None and (h_symmetry_time_pct <= 0.0 or h_symmetry_time_pct > 1.0)):
if h_symmetry_time_pct is not None and (
h_symmetry_time_pct <= 0.0 or h_symmetry_time_pct > 1.0
):
h_symmetry_time_pct = None
v_symmetry_time_pct = postprocessing_settings.v_symmetry_time_pct
if (v_symmetry_time_pct is not None and (v_symmetry_time_pct <= 0.0 or v_symmetry_time_pct > 1.0)):
if v_symmetry_time_pct is not None and (
v_symmetry_time_pct <= 0.0 or v_symmetry_time_pct > 1.0
):
v_symmetry_time_pct = None
dev = latents.device.type
latents.to(device='cpu')
latents.to(device="cpu")
if (
h_symmetry_time_pct != None and
self.last_percent_through < h_symmetry_time_pct and
percent_through >= h_symmetry_time_pct
h_symmetry_time_pct != None
and self.last_percent_through < h_symmetry_time_pct
and percent_through >= h_symmetry_time_pct
):
# Horizontal symmetry occurs on the 3rd dimension of the latent
width = latents.shape[3]
x_flipped = torch.flip(latents, dims=[3])
latents = torch.cat([latents[:, :, :, 0:int(width/2)], x_flipped[:, :, :, int(width/2):int(width)]], dim=3)
latents = torch.cat(
[
latents[:, :, :, 0 : int(width / 2)],
x_flipped[:, :, :, int(width / 2) : int(width)],
],
dim=3,
)
if (
v_symmetry_time_pct != None and
self.last_percent_through < v_symmetry_time_pct and
percent_through >= v_symmetry_time_pct
v_symmetry_time_pct != None
and self.last_percent_through < v_symmetry_time_pct
and percent_through >= v_symmetry_time_pct
):
# Vertical symmetry occurs on the 2nd dimension of the latent
height = latents.shape[2]
y_flipped = torch.flip(latents, dims=[2])
latents = torch.cat([latents[:, :, 0:int(height / 2)], y_flipped[:, :, int(height / 2):int(height)]], dim=2)
latents = torch.cat(
[
latents[:, :, 0 : int(height / 2)],
y_flipped[:, :, int(height / 2) : int(height)],
],
dim=2,
)
self.last_percent_through = percent_through
return latents.to(device=dev)
@@ -430,7 +564,9 @@ class InvokeAIDiffuserComponent:
def estimate_percent_through(self, step_index, sigma):
if step_index is not None and self.cross_attention_control_context is not None:
# percent_through will never reach 1.0 (but this is intended)
return float(step_index) / float(self.cross_attention_control_context.step_count)
return float(step_index) / float(
self.cross_attention_control_context.step_count
)
# find the best possible index of the current sigma in the sigma sequence
smaller_sigmas = torch.nonzero(self.model.sigmas <= sigma)
sigma_index = smaller_sigmas[-1].item() if smaller_sigmas.shape[0] > 0 else 0
@@ -439,33 +575,38 @@ class InvokeAIDiffuserComponent:
return 1.0 - float(sigma_index + 1) / float(self.model.sigmas.shape[0])
# print('estimated percent_through', percent_through, 'from sigma', sigma.item())
# todo: make this work
@classmethod
def apply_conjunction(cls, x, t, forward_func, uc, c_or_weighted_c_list, global_guidance_scale):
def apply_conjunction(
cls, x, t, forward_func, uc, c_or_weighted_c_list, global_guidance_scale
):
x_in = torch.cat([x] * 2)
t_in = torch.cat([t] * 2) # aka sigmas
t_in = torch.cat([t] * 2) # aka sigmas
deltas = None
uncond_latents = None
weighted_cond_list = c_or_weighted_c_list if type(c_or_weighted_c_list) is list else [(c_or_weighted_c_list, 1)]
weighted_cond_list = (
c_or_weighted_c_list
if type(c_or_weighted_c_list) is list
else [(c_or_weighted_c_list, 1)]
)
# below is fugly omg
num_actual_conditionings = len(c_or_weighted_c_list)
conditionings = [uc] + [c for c, weight in weighted_cond_list]
weights = [1] + [weight for c, weight in weighted_cond_list]
chunk_count = ceil(len(conditionings) / 2)
deltas = None
for chunk_index in range(chunk_count):
offset = chunk_index * 2
chunk_size = min(2, len(conditionings) - offset)
if chunk_size == 1:
c_in = conditionings[offset]
latents_a = forward_func(x_in[:-1], t_in[:-1], c_in)
latents_b = None
else:
c_in = torch.cat(conditionings[offset : offset + 2])
latents_a, latents_b = forward_func(x_in, t_in, c_in).chunk(2)
# first chunk is guaranteed to be 2 entries: uncond_latents + first conditioning
@@ -478,11 +619,15 @@ class InvokeAIDiffuserComponent:
deltas = torch.cat((deltas, latents_b - uncond_latents))
# merge the weighted deltas together into a single merged delta
per_delta_weights = torch.tensor(
weights[1:], dtype=deltas.dtype, device=deltas.device
)
normalize = False
if normalize:
per_delta_weights /= torch.sum(per_delta_weights)
reshaped_weights = per_delta_weights.reshape(
per_delta_weights.shape + (1, 1, 1)
)
deltas_merged = torch.sum(deltas * reshaped_weights, dim=0, keepdim=True)
# old_return_value = super().forward(x, sigma, uncond, cond, cond_scale)


@@ -463,6 +463,9 @@ class FrozenCLIPEmbedder(AbstractEncoder):
def encode(self, text, **kwargs):
return self(text, **kwargs)
def set_textual_inversion_manager(self, manager): #TextualInversionManager):
self.textual_inversion_manager = manager
@property
def device(self):
return self.transformer.device
@@ -476,10 +479,6 @@ class WeightedFrozenCLIPEmbedder(FrozenCLIPEmbedder):
fragment_weights_key = "fragment_weights"
return_tokens_key = "return_tokens"
def set_textual_inversion_manager(self, manager): #TextualInversionManager):
# TODO all of the weighting and expanding stuff needs be moved out of this class
self.textual_inversion_manager = manager
def forward(self, text: list, **kwargs):
# TODO all of the weighting and expanding stuff needs be moved out of this class
'''


@@ -0,0 +1,589 @@
import json
from pathlib import Path
from typing import Optional
import torch
from diffusers.models import UNet2DConditionModel
from filelock import FileLock, Timeout
from safetensors.torch import load_file
from torch.utils.hooks import RemovableHandle
from transformers import CLIPTextModel
from ..invoke.globals import global_lora_models_dir
from ..invoke.devices import choose_torch_device
"""
This module supports loading LoRA weights trained with https://github.com/kohya-ss/sd-scripts
To be removed once support for diffusers LoRA weights is well supported
"""
class IncompatibleModelException(Exception):
"Raised when there is an attempt to load a LoRA into a model that is incompatible with it"
pass
class LoRALayer:
lora_name: str
name: str
scale: float
up: torch.nn.Module
mid: Optional[torch.nn.Module] = None
down: torch.nn.Module
def __init__(self, lora_name: str, name: str, rank=4, alpha=1.0):
self.lora_name = lora_name
self.name = name
self.scale = alpha / rank if (alpha and rank) else 1.0
def forward(self, lora, input_h):
if self.mid is None:
weight = self.up(self.down(*input_h))
else:
weight = self.up(self.mid(self.down(*input_h)))
return weight * lora.multiplier * self.scale
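# The forward hook installed by LoRAModuleWrapper adds this low-rank residual,
# up(down(x)) (optionally passed through a mid conv), onto the wrapped module's
# original output, scaled by multiplier * (alpha / rank).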
class LoHALayer:
lora_name: str
name: str
scale: float
w1_a: torch.Tensor
w1_b: torch.Tensor
w2_a: torch.Tensor
w2_b: torch.Tensor
t1: Optional[torch.Tensor] = None
t2: Optional[torch.Tensor] = None
bias: Optional[torch.Tensor] = None
org_module: torch.nn.Module
def __init__(self, lora_name: str, name: str, rank=4, alpha=1.0):
self.lora_name = lora_name
self.name = name
self.scale = alpha / rank if (alpha and rank) else 1.0
def forward(self, lora, input_h):
if type(self.org_module) == torch.nn.Conv2d:
op = torch.nn.functional.conv2d
extra_args = dict(
stride=self.org_module.stride,
padding=self.org_module.padding,
dilation=self.org_module.dilation,
groups=self.org_module.groups,
)
else:
op = torch.nn.functional.linear
extra_args = {}
if self.t1 is None:
weight = (self.w1_a @ self.w1_b) * (self.w2_a @ self.w2_b)
else:
rebuild1 = torch.einsum(
"i j k l, j r, i p -> p r k l", self.t1, self.w1_b, self.w1_a
)
rebuild2 = torch.einsum(
"i j k l, j r, i p -> p r k l", self.t2, self.w2_b, self.w2_a
)
weight = rebuild1 * rebuild2
bias = self.bias if self.bias is not None else 0
return op(
*input_h,
(weight + bias).view(self.org_module.weight.shape),
None,
**extra_args,
) * lora.multiplier * self.scale
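# LoHA rebuilds a full delta-weight as the elementwise (Hadamard) product of two
# low-rank factorizations, (w1_a @ w1_b) * (w2_a @ w2_b), optionally reconstructed
# through the extra cores t1/t2 used for conv layers, and then re-applies the
# original conv/linear op with that delta added to any bias.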
class LoKRLayer:
lora_name: str
name: str
scale: float
w1: Optional[torch.Tensor] = None
w1_a: Optional[torch.Tensor] = None
w1_b: Optional[torch.Tensor] = None
w2: Optional[torch.Tensor] = None
w2_a: Optional[torch.Tensor] = None
w2_b: Optional[torch.Tensor] = None
t2: Optional[torch.Tensor] = None
bias: Optional[torch.Tensor] = None
org_module: torch.nn.Module
def __init__(self, lora_name: str, name: str, rank=4, alpha=1.0):
self.lora_name = lora_name
self.name = name
self.scale = alpha / rank if (alpha and rank) else 1.0
def forward(self, lora, input_h):
if type(self.org_module) == torch.nn.Conv2d:
op = torch.nn.functional.conv2d
extra_args = dict(
stride=self.org_module.stride,
padding=self.org_module.padding,
dilation=self.org_module.dilation,
groups=self.org_module.groups,
)
else:
op = torch.nn.functional.linear
extra_args = {}
w1 = self.w1
if w1 is None:
w1 = self.w1_a @ self.w1_b
w2 = self.w2
if w2 is None:
if self.t2 is None:
w2 = self.w2_a @ self.w2_b
else:
w2 = torch.einsum('i j k l, i p, j r -> p r k l', self.t2, self.w2_a, self.w2_b)
if len(w2.shape) == 4:
w1 = w1.unsqueeze(2).unsqueeze(2)
w2 = w2.contiguous()
weight = torch.kron(w1, w2).reshape(self.org_module.weight.shape)
bias = self.bias if self.bias is not None else 0
return op(
*input_h,
(weight + bias).view(self.org_module.weight.shape),
None,
**extra_args
) * lora.multiplier * self.scale
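# LoKR rebuilds the delta-weight as a Kronecker product, torch.kron(w1, w2), where
# each factor may itself be stored as a low-rank pair (w*_a @ w*_b), keeping the
# checkpoint small while still covering the full weight shape.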
class LoRAModuleWrapper:
unet: UNet2DConditionModel
text_encoder: CLIPTextModel
hooks: list[RemovableHandle]
def __init__(self, unet, text_encoder):
self.unet = unet
self.text_encoder = text_encoder
self.hooks = []
self.text_modules = None
self.unet_modules = None
self.applied_loras = {}
self.loaded_loras = {}
self.UNET_TARGET_REPLACE_MODULE = [
"Transformer2DModel",
"Attention",
"ResnetBlock2D",
"Downsample2D",
"Upsample2D",
"SpatialTransformer",
]
self.TEXT_ENCODER_TARGET_REPLACE_MODULE = [
"ResidualAttentionBlock",
"CLIPAttention",
"CLIPMLP",
]
self.LORA_PREFIX_UNET = "lora_unet"
self.LORA_PREFIX_TEXT_ENCODER = "lora_te"
def find_modules(
prefix, root_module: torch.nn.Module, target_replace_modules
) -> dict[str, torch.nn.Module]:
mapping = {}
for name, module in root_module.named_modules():
if module.__class__.__name__ in target_replace_modules:
for child_name, child_module in module.named_modules():
layer_type = child_module.__class__.__name__
if layer_type == "Linear" or (
layer_type == "Conv2d"
and child_module.kernel_size in [(1, 1), (3, 3)]
):
lora_name = prefix + "." + name + "." + child_name
lora_name = lora_name.replace(".", "_")
mapping[lora_name] = child_module
self.apply_module_forward(child_module, lora_name)
return mapping
if self.text_modules is None:
self.text_modules = find_modules(
self.LORA_PREFIX_TEXT_ENCODER,
text_encoder,
self.TEXT_ENCODER_TARGET_REPLACE_MODULE,
)
if self.unet_modules is None:
self.unet_modules = find_modules(
self.LORA_PREFIX_UNET, unet, self.UNET_TARGET_REPLACE_MODULE
)
def lora_forward_hook(self, name):
wrapper = self
def lora_forward(module, input_h, output):
if len(wrapper.loaded_loras) == 0:
return output
for lora in wrapper.applied_loras.values():
layer = lora.layers.get(name, None)
if layer is None:
continue
output += layer.forward(lora, input_h)
return output
return lora_forward
def apply_module_forward(self, module, name):
handle = module.register_forward_hook(self.lora_forward_hook(name))
self.hooks.append(handle)
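# register_forward_hook() runs lora_forward after the wrapped module's own forward
# pass, so every applied LoRA's contribution is summed onto the module output at
# inference time without touching the base weights.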
def clear_hooks(self):
for hook in self.hooks:
hook.remove()
self.hooks.clear()
def clear_applied_loras(self):
self.applied_loras.clear()
def clear_loaded_loras(self):
self.loaded_loras.clear()
class LoRA:
name: str
layers: dict[str, LoRALayer]
device: torch.device
dtype: torch.dtype
wrapper: LoRAModuleWrapper
multiplier: float
def __init__(self, name: str, device, dtype, wrapper, multiplier=1.0):
self.name = name
self.layers = {}
self.multiplier = multiplier
self.device = device
self.dtype = dtype
self.wrapper = wrapper
def load_from_dict(self, state_dict):
state_dict_groupped = dict()
for key, value in state_dict.items():
stem, leaf = key.split(".", 1)
if stem not in state_dict_groupped:
state_dict_groupped[stem] = dict()
state_dict_groupped[stem][leaf] = value
for stem, values in state_dict_groupped.items():
if stem.startswith(self.wrapper.LORA_PREFIX_TEXT_ENCODER):
wrapped = self.wrapper.text_modules.get(stem, None)
elif stem.startswith(self.wrapper.LORA_PREFIX_UNET):
wrapped = self.wrapper.unet_modules.get(stem, None)
else:
continue
if wrapped is None:
print(f">> Missing layer: {stem}")
continue
# TODO: diff key
bias = None
alpha = None
if "alpha" in values:
alpha = values["alpha"].item()
if (
"bias_indices" in values
and "bias_values" in values
and "bias_size" in values
):
bias = torch.sparse_coo_tensor(
values["bias_indices"],
values["bias_values"],
tuple(values["bias_size"]),
).to(device=self.device, dtype=self.dtype)
# lora and locon
if "lora_down.weight" in values:
value_down = values["lora_down.weight"]
value_mid = values.get("lora_mid.weight", None)
value_up = values["lora_up.weight"]
if type(wrapped) == torch.nn.Conv2d:
if value_mid is not None:
layer_down = torch.nn.Conv2d(
value_down.shape[1], value_down.shape[0], (1, 1), bias=False
)
layer_mid = torch.nn.Conv2d(
value_mid.shape[1],
value_mid.shape[0],
wrapped.kernel_size,
wrapped.stride,
wrapped.padding,
bias=False,
)
else:
layer_down = torch.nn.Conv2d(
value_down.shape[1],
value_down.shape[0],
wrapped.kernel_size,
wrapped.stride,
wrapped.padding,
bias=False,
)
layer_mid = None
layer_up = torch.nn.Conv2d(
value_up.shape[1], value_up.shape[0], (1, 1), bias=False
)
elif type(wrapped) == torch.nn.Linear:
layer_down = torch.nn.Linear(
value_down.shape[1], value_down.shape[0], bias=False
)
layer_mid = None
layer_up = torch.nn.Linear(
value_up.shape[1], value_up.shape[0], bias=False
)
else:
print(
f">> Encountered unknown lora layer module in {self.name}: {stem} - {type(wrapped).__name__}"
)
return
with torch.no_grad():
layer_down.weight.copy_(value_down)
if layer_mid is not None:
layer_mid.weight.copy_(value_mid)
layer_up.weight.copy_(value_up)
layer_down.to(device=self.device, dtype=self.dtype)
if layer_mid is not None:
layer_mid.to(device=self.device, dtype=self.dtype)
layer_up.to(device=self.device, dtype=self.dtype)
rank = value_down.shape[0]
layer = LoRALayer(self.name, stem, rank, alpha)
# layer.bias = bias # TODO: find and debug lora/locon with bias
layer.down = layer_down
layer.mid = layer_mid
layer.up = layer_up
# loha
elif "hada_w1_b" in values:
rank = values["hada_w1_b"].shape[0]
layer = LoHALayer(self.name, stem, rank, alpha)
layer.org_module = wrapped
layer.bias = bias
layer.w1_a = values["hada_w1_a"].to(
device=self.device, dtype=self.dtype
)
layer.w1_b = values["hada_w1_b"].to(
device=self.device, dtype=self.dtype
)
layer.w2_a = values["hada_w2_a"].to(
device=self.device, dtype=self.dtype
)
layer.w2_b = values["hada_w2_b"].to(
device=self.device, dtype=self.dtype
)
if "hada_t1" in values:
layer.t1 = values["hada_t1"].to(
device=self.device, dtype=self.dtype
)
else:
layer.t1 = None
if "hada_t2" in values:
layer.t2 = values["hada_t2"].to(
device=self.device, dtype=self.dtype
)
else:
layer.t2 = None
# lokr
elif "lokr_w1_b" in values or "lokr_w1" in values:
if "lokr_w1_b" in values:
rank = values["lokr_w1_b"].shape[0]
elif "lokr_w2_b" in values:
rank = values["lokr_w2_b"].shape[0]
else:
rank = None # unscaled
layer = LoKRLayer(self.name, stem, rank, alpha)
layer.org_module = wrapped
layer.bias = bias
if "lokr_w1" in values:
layer.w1 = values["lokr_w1"].to(device=self.device, dtype=self.dtype)
else:
layer.w1_a = values["lokr_w1_a"].to(device=self.device, dtype=self.dtype)
layer.w1_b = values["lokr_w1_b"].to(device=self.device, dtype=self.dtype)
if "lokr_w2" in values:
layer.w2 = values["lokr_w2"].to(device=self.device, dtype=self.dtype)
else:
layer.w2_a = values["lokr_w2_a"].to(device=self.device, dtype=self.dtype)
layer.w2_b = values["lokr_w2_b"].to(device=self.device, dtype=self.dtype)
if "lokr_t2" in values:
layer.t2 = values["lokr_t2"].to(device=self.device, dtype=self.dtype)
else:
print(
f">> Encountered unknown lora layer module in {self.name}: {stem} - {type(wrapped).__name__}"
)
return
self.layers[stem] = layer
class KohyaLoraManager:
lora_path = Path(global_lora_models_dir())
vector_length_cache_path = lora_path / '.vectorlength.cache'
def __init__(self, pipe):
self.unet = pipe.unet
self.wrapper = LoRAModuleWrapper(pipe.unet, pipe.text_encoder)
self.text_encoder = pipe.text_encoder
self.device = torch.device(choose_torch_device())
self.dtype = pipe.unet.dtype
def load_lora_module(self, name, path_file, multiplier: float = 1.0):
print(f" | Found lora {name} at {path_file}")
if path_file.suffix == ".safetensors":
checkpoint = load_file(path_file.absolute().as_posix(), device="cpu")
else:
checkpoint = torch.load(path_file, map_location="cpu")
if not self.check_model_compatibility(checkpoint):
raise IncompatibleModelException
lora = LoRA(name, self.device, self.dtype, self.wrapper, multiplier)
lora.load_from_dict(checkpoint)
self.wrapper.loaded_loras[name] = lora
return lora
def apply_lora_model(self, name, mult: float = 1.0):
path_file = None
for suffix in ["ckpt", "safetensors", "pt"]:
path_files = [x for x in Path(self.lora_path).glob(f"**/{name}.{suffix}")]
if len(path_files):
path_file = path_files[0]
print(f" | Loading lora {path_file.name} with weight {mult}")
break
if not path_file:
print(f" ** Unable to find lora: {name}")
return
lora = self.wrapper.loaded_loras.get(name, None)
if lora is None:
lora = self.load_lora_module(name, path_file, mult)
lora.multiplier = mult
self.wrapper.applied_loras[name] = lora
def unload_applied_lora(self, lora_name: str) -> bool:
"""If the indicated LoRA has previously been applied then
unload it and return True. Return False if the LoRA was
not previously applied (for status reporting)
"""
if lora_name in self.wrapper.applied_loras:
del self.wrapper.applied_loras[lora_name]
return True
return False
def unload_lora(self, lora_name: str) -> bool:
if lora_name in self.wrapper.loaded_loras:
del self.wrapper.loaded_loras[lora_name]
return True
return False
def clear_loras(self):
self.wrapper.clear_applied_loras()
def check_model_compatibility(self, checkpoint) -> bool:
"""Checks whether the LoRA checkpoint is compatible with the token vector
length of the model that this manager is associated with.
"""
model_token_vector_length = (
self.text_encoder.get_input_embeddings().weight.data[0].shape[0]
)
lora_token_vector_length = self.vector_length_from_checkpoint(checkpoint)
return model_token_vector_length == lora_token_vector_length
@staticmethod
def vector_length_from_checkpoint(checkpoint: dict) -> int:
"""Return the vector token length for the passed LoRA checkpoint object.
This is used to determine which SD model version the LoRA was based on.
768 -> SDv1
1024-> SDv2
"""
key1 = "lora_te_text_model_encoder_layers_0_mlp_fc1.lora_down.weight"
key2 = "lora_te_text_model_encoder_layers_0_self_attn_k_proj.hada_w1_a"
lora_token_vector_length = (
checkpoint[key1].shape[1]
if key1 in checkpoint
else checkpoint[key2].shape[0]
if key2 in checkpoint
else 768
)
return lora_token_vector_length
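# Example (sketch): decide which base model a LoRA targets from its checkpoint.
#
#     length = KohyaLoraManager.vector_length_from_checkpoint(checkpoint)
#     is_sd_v1 = length == 768   # 1024 would indicate an SD v2.x LoRA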
@classmethod
def vector_length_from_checkpoint_file(self, checkpoint_path: Path) -> int:
with LoraVectorLengthCache(self.vector_length_cache_path) as cache:
if str(checkpoint_path) not in cache:
if checkpoint_path.suffix == ".safetensors":
checkpoint = load_file(
checkpoint_path.absolute().as_posix(), device="cpu"
)
else:
checkpoint = torch.load(checkpoint_path, map_location="cpu")
cache[str(checkpoint_path)] = KohyaLoraManager.vector_length_from_checkpoint(
checkpoint
)
return cache[str(checkpoint_path)]
class LoraVectorLengthCache(object):
def __init__(self, cache_path: Path):
self.cache_path = cache_path
self.lock = FileLock(Path(cache_path.parent, ".cachelock"))
self.cache = {}
def __enter__(self):
self.lock.acquire(timeout=10)
try:
if self.cache_path.exists():
with open(self.cache_path, "r") as json_file:
self.cache = json.load(json_file)
except Timeout:
print(
"** Can't acquire lock on lora vector length cache. Operations will be slower"
)
except (json.JSONDecodeError, OSError):
self.cache_path.unlink()
return self.cache
def __exit__(self, type, value, traceback):
with open(self.cache_path, "w") as json_file:
json.dump(self.cache, json_file)
self.lock.release()
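# LoraVectorLengthCache is used as a context manager: __enter__ takes a file lock and
# loads the JSON cache, __exit__ writes it back and releases the lock, so repeated
# directory scans do not have to re-open every LoRA file.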

ldm/modules/lora_manager.py (new file, 101 lines)

@@ -0,0 +1,101 @@
import os
from diffusers import StableDiffusionPipeline
from pathlib import Path
from diffusers import UNet2DConditionModel, StableDiffusionPipeline
from ldm.invoke.globals import global_lora_models_dir
from .kohya_lora_manager import KohyaLoraManager, IncompatibleModelException
from typing import Optional, Dict
class LoraCondition:
name: str
weight: float
def __init__(self,
name,
weight: float = 1.0,
unet: UNet2DConditionModel=None, # for diffusers format LoRAs
kohya_manager: Optional[KohyaLoraManager]=None, # for KohyaLoraManager-compatible LoRAs
):
self.name = name
self.weight = weight
self.kohya_manager = kohya_manager
self.unet = unet
def __call__(self):
# TODO: make model able to load from huggingface, rather than just local files
path = Path(global_lora_models_dir(), self.name)
if path.is_dir():
if not self.unet:
print(f" ** Unable to load diffusers-format LoRA {self.name}: unet is None")
return
if self.unet.load_attn_procs:
file = Path(path, "pytorch_lora_weights.bin")
if file.is_file():
print(f">> Loading LoRA: {path}")
self.unet.load_attn_procs(path.absolute().as_posix())
else:
print(f" ** Unable to find valid LoRA at: {path}")
else:
print(" ** Invalid Model to load LoRA")
elif self.kohya_manager:
try:
self.kohya_manager.apply_lora_model(self.name,self.weight)
except IncompatibleModelException:
print(f" ** LoRA {self.name} is incompatible with this model; will generate without the LoRA applied.")
else:
print(" ** Unable to load LoRA")
def unload(self):
if self.kohya_manager and self.kohya_manager.unload_applied_lora(self.name):
print(f'>> unloading LoRA {self.name}')
class LoraManager:
def __init__(self, pipe: StableDiffusionPipeline):
# Kohya class handles LoRAs not generated through diffusers
self.kohya = KohyaLoraManager(pipe)
self.unet = pipe.unet
def set_loras_conditions(self, lora_weights: list):
conditions = []
if len(lora_weights) > 0:
for lora in lora_weights:
conditions.append(LoraCondition(lora.model, lora.weight, self.unet, self.kohya))
if len(conditions) > 0:
return conditions
return None
def list_compatible_loras(self)->Dict[str, Path]:
'''
List all the LoRAs in the global lora directory that
are compatible with the current model. Return a dictionary
of the lora basename and its path.
'''
model_length = self.kohya.text_encoder.get_input_embeddings().weight.data[0].shape[0]
return self.list_loras(model_length)
@staticmethod
def list_loras(token_vector_length:int=None)->Dict[str, Path]:
'''List the LoRAs in the global lora directory.
If token_vector_length is provided, then only return
LoRAs that have the indicated length:
768: v1 models
1024: v2 models
'''
path = Path(global_lora_models_dir())
models_found = dict()
for root,_,files in os.walk(path):
for x in files:
name = Path(x).stem
suffix = Path(x).suffix
if suffix not in [".ckpt", ".pt", ".safetensors"]:
continue
path = Path(root,x)
if token_vector_length is None:
models_found[name]=Path(root,x) # unconditional addition
elif token_vector_length == KohyaLoraManager.vector_length_from_checkpoint_file(path):
models_found[name]=Path(root,x) # conditional on the base model matching
return models_found
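# Example (sketch): enumerate only the LoRAs usable with the current pipeline,
# assuming `pipe` is a StableDiffusionPipeline supplied by the caller.
#
#     manager = LoraManager(pipe)
#     for name, path in manager.list_compatible_loras().items():
#         print(name, path)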


@@ -0,0 +1,95 @@
from peft import LoraModel, LoraConfig, set_peft_model_state_dict
import torch
import json
from pathlib import Path
from ldm.invoke.globals import global_lora_models_dir
class LoraPeftModule:
def __init__(self, lora_dir, multiplier: float = 1.0):
self.lora_dir = lora_dir
self.multiplier = multiplier
self.config = self.load_config()
self.checkpoint = self.load_checkpoint()
def load_config(self):
lora_config_file = Path(self.lora_dir, f'lora_config.json')
with open(lora_config_file, "r") as f:
return json.load(f)
def load_checkpoint(self):
return torch.load(Path(self.lora_dir, f'lora.pt'))
def unet(self, text_encoder):
lora_ds = {
k.replace("text_encoder_", ""): v for k, v in self.checkpoint.items() if "text_encoder_" in k
}
config = LoraConfig(**self.config["peft_config"])
model = LoraModel(config, text_encoder)
set_peft_model_state_dict(model, lora_ds)
return model
def text_encoder(self, unet):
lora_ds = {
k: v for k, v in self.checkpoint.items() if "text_encoder_" not in k
}
config = LoraConfig(**self.config["text_encoder_peft_config"])
model = LoraModel(config, unet)
set_peft_model_state_dict(model, lora_ds)
return model
def apply(self, pipe, dtype):
pipe.unet = self.unet(pipe.unet)
if "text_encoder_peft_config" in self.config:
pipe.text_encoder = self.text_encoder(pipe.text_encoder)
if dtype in (torch.float16, torch.bfloat16):
pipe.unet.half()
pipe.text_encoder.half()
return pipe
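# LoraPeftModule wraps the pipeline's unet (and, when a text-encoder config is present,
# its text encoder) in PEFT LoraModel objects, loads the saved state dict into them,
# and halves the patched modules when the pipeline runs in fp16/bf16.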
class PeftManager:
modules: list[LoraPeftModule]
def __init__(self):
self.lora_path = global_lora_models_dir()
self.modules = []
def set_loras(self, lora_weights: list):
if len(lora_weights) > 0:
for lora in lora_weights:
self.add(lora.model, lora.weight)
def add(self, name, multiplier: float = 1.0):
lora_dir = Path(self.lora_path, name)
if lora_dir.exists():
lora_config_file = Path(lora_dir, f'lora_config.json')
lora_checkpoint = Path(lora_dir, f'lora.pt')
if lora_config_file.exists() and lora_checkpoint.exists():
self.modules.append(LoraPeftModule(lora_dir, multiplier))
return
print(f">> Failed to load lora {name}")
def load(self, pipe, dtype):
if len(self.modules) > 0:
for module in self.modules:
pipe = module.apply(pipe, dtype)
return pipe
# Simple check to allow previous functionality
def should_use(self, lora_weights: list):
if len(lora_weights) > 0:
for lora in lora_weights:
lora_dir = Path(self.lora_path, lora.model)
if lora_dir.exists():
lora_config_file = Path(lora_dir, f'lora_config.json')
lora_checkpoint = Path(lora_dir, f'lora.pt')
if lora_config_file.exists() and lora_checkpoint.exists():
return False
return True
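# should_use() returns False as soon as one of the requested LoRAs has a PEFT-format
# directory (lora_config.json plus lora.pt) on disk, and True otherwise, so callers
# can fall back to the kohya loading path when no PEFT checkpoints are present.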


@@ -1,9 +1,9 @@
import os
import traceback
from dataclasses import dataclass
from pathlib import Path
from typing import Optional, Union
import safetensors.torch
import torch
from picklescan.scanner import scan_file_path
from transformers import CLIPTextModel, CLIPTokenizer
@@ -71,21 +71,6 @@ class TextualInversionManager(BaseTextualInversionManager):
if str(ckpt_path).endswith(".DS_Store"):
return
try:
scan_result = scan_file_path(str(ckpt_path))
if scan_result.infected_files == 1:
print(
f"\n### Security Issues Found in Model: {scan_result.issues_count}"
)
print("### For your safety, InvokeAI will not load this embed.")
return
except Exception:
print(
f"### {ckpt_path.parents[0].name}/{ckpt_path.name} is damaged or corrupt."
)
return
embedding_info = self._parse_embedding(str(ckpt_path))
if embedding_info is None:
@@ -96,7 +81,7 @@ class TextualInversionManager(BaseTextualInversionManager):
!= embedding_info["token_dim"]
):
print(
f"** Notice: {ckpt_path.parents[0].name}/{ckpt_path.name} was trained on a model with an incompatible token dimension: {self.text_encoder.get_input_embeddings().weight.data[0].shape[0]} vs {embedding_info['token_dim']}."
f" ** Notice: {ckpt_path.parents[0].name}/{ckpt_path.name} was trained on a model with an incompatible token dimension: {self.text_encoder.get_input_embeddings().weight.data[0].shape[0]} vs {embedding_info['token_dim']}."
)
return
@@ -309,92 +294,72 @@ class TextualInversionManager(BaseTextualInversionManager):
return token_id
def _parse_embedding(self, embedding_file: str):
file_type = embedding_file.split(".")[-1]
if file_type == "pt":
return self._parse_embedding_pt(embedding_file)
elif file_type == "bin":
return self._parse_embedding_bin(embedding_file)
else:
print(f"** Notice: unrecognized embedding file format: {embedding_file}")
def _parse_embedding(self, embedding_file: str)->dict:
suffix = Path(embedding_file).suffix
try:
if suffix in [".pt",".ckpt",".bin"]:
scan_result = scan_file_path(embedding_file)
if scan_result.infected_files == 1:
print(
f" ** Security Issues Found in Model: {scan_result.issues_count}"
)
print(" ** For your safety, InvokeAI will not load this embed.")
return
ckpt = torch.load(embedding_file,map_location="cpu")
else:
ckpt = safetensors.torch.load_file(embedding_file)
except Exception as e:
print(f" ** Notice: unrecognized embedding file format: {embedding_file}: {e}")
return None
def _parse_embedding_pt(self, embedding_file):
embedding_ckpt = torch.load(embedding_file, map_location="cpu")
embedding_info = {}
# Check if valid embedding file
if "string_to_token" and "string_to_param" in embedding_ckpt:
# Catch variants that do not have the expected keys or values.
try:
embedding_info["name"] = embedding_ckpt["name"] or os.path.basename(
os.path.splitext(embedding_file)[0]
)
# Check num of embeddings and warn user only the first will be used
embedding_info["num_of_embeddings"] = len(
embedding_ckpt["string_to_token"]
)
if embedding_info["num_of_embeddings"] > 1:
print(">> More than 1 embedding found. Will use the first one")
embedding = list(embedding_ckpt["string_to_param"].values())[0]
except (AttributeError, KeyError):
return self._handle_broken_pt_variants(embedding_ckpt, embedding_file)
embedding_info["embedding"] = embedding
embedding_info["num_vectors_per_token"] = embedding.size()[0]
embedding_info["token_dim"] = embedding.size()[1]
try:
embedding_info["trained_steps"] = embedding_ckpt["step"]
embedding_info["trained_model_name"] = embedding_ckpt[
"sd_checkpoint_name"
]
embedding_info["trained_model_checksum"] = embedding_ckpt[
"sd_checkpoint"
]
except AttributeError:
print(">> No Training Details Found. Passing ...")
# .pt files found at https://cyberes.github.io/stable-diffusion-textual-inversion-models/
# They are actually .bin files
elif len(embedding_ckpt.keys()) == 1:
embedding_info = self._parse_embedding_bin(embedding_file)
# try to figure out what kind of embedding file it is and parse accordingly
keys = list(ckpt.keys())
if all(x in keys for x in ['string_to_token','string_to_param','name','step']):
return self._parse_embedding_v1(ckpt, embedding_file) # example rem_rezero.pt
elif all(x in keys for x in ['string_to_token','string_to_param']):
return self._parse_embedding_v2(ckpt, embedding_file) # example midj-strong.pt
elif 'emb_params' in keys:
return self._parse_embedding_v3(ckpt, embedding_file) # example easynegative.safetensors
else:
print(">> Invalid embedding format")
embedding_info = None
return self._parse_embedding_v4(ckpt, embedding_file) # usually a '.bin' file
def _parse_embedding_v1(self, embedding_ckpt: dict, file_path: str):
basename = Path(file_path).stem
print(f' | Loading v1 embedding file: {basename}')
embedding_info = {}
embedding_info["name"] = embedding_ckpt["name"]
# Check num of embeddings and warn user only the first will be used
embedding_info["num_of_embeddings"] = len(
embedding_ckpt["string_to_token"]
)
if embedding_info["num_of_embeddings"] > 1:
print(" | More than 1 embedding found. Will use the first one")
embedding = list(embedding_ckpt["string_to_param"].values())[0]
embedding_info["embedding"] = embedding
embedding_info["num_vectors_per_token"] = embedding.size()[0]
embedding_info["token_dim"] = embedding.size()[1]
embedding_info["trained_steps"] = embedding_ckpt["step"]
embedding_info["trained_model_name"] = embedding_ckpt[
"sd_checkpoint_name"
]
embedding_info["trained_model_checksum"] = embedding_ckpt[
"sd_checkpoint"
]
return embedding_info
def _parse_embedding_bin(self, embedding_file):
embedding_ckpt = torch.load(embedding_file, map_location="cpu")
embedding_info = {}
if list(embedding_ckpt.keys()) == 0:
print(">> Invalid concepts file")
embedding_info = None
else:
for token in list(embedding_ckpt.keys()):
embedding_info["name"] = (
token
or f"<{os.path.basename(os.path.splitext(embedding_file)[0])}>"
)
embedding_info["embedding"] = embedding_ckpt[token]
embedding_info[
"num_vectors_per_token"
] = 1 # All Concepts seem to default to 1
embedding_info["token_dim"] = embedding_info["embedding"].size()[0]
return embedding_info
def _handle_broken_pt_variants(
self, embedding_ckpt: dict, embedding_file: str
def _parse_embedding_v2 (
self, embedding_ckpt: dict, file_path: str
) -> dict:
"""
This handles the broken .pt file variants. We only know of one at present.
This handles embedding .pt file variant #2.
"""
basename = Path(file_path).stem
print(f' | Loading v2 embedding file: {basename}')
embedding_info = {}
if isinstance(
list(embedding_ckpt["string_to_token"].values())[0], torch.Tensor
@@ -403,7 +368,7 @@ class TextualInversionManager(BaseTextualInversionManager):
embedding_info["name"] = (
token
if token != "*"
else f"<{os.path.basename(os.path.splitext(embedding_file)[0])}>"
else f"<{basename}>"
)
embedding_info["embedding"] = embedding_ckpt[
"string_to_param"
@@ -413,7 +378,46 @@ class TextualInversionManager(BaseTextualInversionManager):
].shape[0]
embedding_info["token_dim"] = embedding_info["embedding"].size()[1]
else:
print(">> Invalid embedding format")
print(f" ** {basename}: Unrecognized embedding format")
embedding_info = None
return embedding_info
def _parse_embedding_v3(self, embedding_ckpt: dict, file_path: str):
"""
Parse 'version 3' of the .pt textual inversion embedding files.
"""
basename = Path(file_path).stem
print(f' | Loading v3 embedding file: {basename}')
embedding_info = {}
embedding_info["name"] = f'<{basename}>'
embedding_info["num_of_embeddings"] = 1
embedding = embedding_ckpt['emb_params']
embedding_info["embedding"] = embedding
embedding_info["num_vectors_per_token"] = embedding.size()[0]
embedding_info["token_dim"] = embedding.size()[1]
return embedding_info
def _parse_embedding_v4(self, embedding_ckpt: dict, filepath: str):
"""
Parse 'version 4' of the textual inversion embedding files. This one
is usually associated with .bin files trained by HuggingFace diffusers.
"""
basename = Path(filepath).stem
short_path = Path(filepath).parents[0].name+'/'+Path(filepath).name
print(f' | Loading v4 embedding file: {short_path}')
embedding_info = {}
if list(embedding_ckpt.keys()) == 0:
print(f" ** Invalid embeddings file: {short_path}")
embedding_info = None
else:
for token in list(embedding_ckpt.keys()):
embedding_info["name"] = (
token
or f"<{basename}>"
)
embedding_info["embedding"] = embedding_ckpt[token]
embedding_info["num_vectors_per_token"] = 1 # All Concepts seem to default to 1
embedding_info["token_dim"] = embedding_info["embedding"].size()[0]
return embedding_info


@@ -32,13 +32,16 @@ dependencies = [
"albumentations",
"click",
"clip_anytorch",
"compel==0.1.7",
"compel~=1.1.0",
"datasets",
"diffusers[torch]~=0.14",
"diffusers[torch]==0.14",
"dnspython==2.2.1",
"einops",
"eventlet",
"facexlib",
"fastapi==0.85.0",
"fastapi-events==0.6.0",
"fastapi-socketio==0.0.9",
"flask==2.1.3",
"flask_cors==3.0.10",
"flask_socketio==5.3.0",
@@ -54,6 +57,7 @@ dependencies = [
"numpy<1.24",
"omegaconf",
"opencv-python",
"peft",
"picklescan",
"pillow",
"prompt-toolkit",
@@ -61,6 +65,7 @@ dependencies = [
"packaging",
"pypatchmatch",
"pyreadline3",
"python-multipart==0.0.5",
"pytorch-lightning==1.7.7",
"realesrgan",
"requests==2.28.2",
@@ -122,6 +127,7 @@ requires-python = ">=3.9, <3.11"
"invokeai-ti" = "ldm.invoke.training.textual_inversion:main"
"invokeai-update" = "ldm.invoke.config.invokeai_update:main"
"invokeai-batch" = "ldm.invoke.dynamic_prompts:main"
"invokeai-metadata" = "ldm.invoke.invokeai_metadata:main"
[project.urls]
"Bug Reports" = "https://github.com/invoke-ai/InvokeAI/issues"

scripts/invoke-new.py (new file, 20 lines)

@@ -0,0 +1,20 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
import os
import sys
def main():
# Change working directory to the repo root
os.chdir(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
if '--web' in sys.argv:
from ldm.invoke.app.api_app import invoke_api
invoke_api()
else:
# TODO: Parse some top-level args here.
from ldm.invoke.app.cli_app import invoke_cli
invoke_cli()
if __name__ == '__main__':
main()
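# Run this script directly to start the CLI, or pass --web for the API server
# (a sketch; flags other than --web are not parsed here):
#
#     python scripts/invoke-new.py --web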

static/dream_web/test.html (new file, 206 lines)

@@ -0,0 +1,206 @@
<html lang="en">
<head>
<title>InvokeAI Test</title>
<meta charset="utf-8">
<link rel="icon" type="image/x-icon" href="static/dream_web/favicon.ico" />
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<!--<script src="config.js"></script>-->
<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/4.0.1/socket.io.js"
integrity="sha512-q/dWJ3kcmjBLU4Qc47E4A9kTB4m3wuTY7vkFJDTZKjTs8jhyGQnaUrxa0Ytd0ssMZhbNua9hE+E7Qv1j+DyZwA=="
crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/fslightbox/3.0.9/index.js"></script>
<style>
body {
background:#334;
}
#gallery > a.image {
display:inline-block;
width:128px;
height:128px;
margin:8px;
background-size:contain;
background-repeat:no-repeat;
background-position:center;
}
.results {
border-color:green;
border-width:1px;
border-style:solid;
}
.intermediates {
border-color:yellow;
border-width:1px;
border-style:solid;
}
</style>
</head>
<body>
<button onclick="dostuff();">Test Invoke</button>
<div id="gallery"></div>
<div id="textlog">
<textarea id='log' rows=10 cols=60 autofocus>
</textarea>
</div>
<script type="text/javascript">
const socket_url = `ws://${window.location.host}`;
const socket = io(socket_url, {
path: "/ws/socket.io"
});
socket.on('connect', (data) => {
//socket.emit('subscribe', { 'session': 'WcBtYATwT92Mrb9zLgeyNw==' });
});
function addGalleryImage(src, className) {
let gallery = document.getElementById("gallery");
let div = document.createElement("a");
div.href = src;
div.setAttribute('data-fslightbox', '');
div.classList.add("image");
div.classList.add(className);
div.style.backgroundImage = `url('${src}')`;
gallery.appendChild(div);
refreshFsLightbox();
}
function log(event, data) {
document.getElementById("log").value += `${event} => ${JSON.stringify(data)}\n`;
if (data.result?.image?.image_type) {
addGalleryImage(`/api/v1/images/${data.result.image.image_type}/${data.result.image.image_name}`, data.result.image.image_type);
}
if (data.result?.mask?.image_type) {
addGalleryImage(`/api/v1/images/${data.result.mask.image_type}/${data.result.mask.image_name}`, data.result.mask.image_type);
}
console.log(`${event} => ${JSON.stringify(data)}`);
}
socket.on('generator_progress', (data) => log('generator_progress', data));
socket.on('invocation_complete', (data) => log('invocation_complete', data));
socket.on('invocation_started', (data) => log('invocation_started', data));
socket.on('session_complete', (data) => {
log('session_complete', data);
// NOTE: you may not want to unsubscribe if you plan to continue using this session,
// just make sure you unsubscribe eventually
socket.emit('unsubscribe', { 'session': data.session_id });
});
function dostuff() {
let prompt = "hyper detailed 4k cryengine 3D render of a cat in a dune buggy, trending on artstation, soft atmospheric lighting, volumetric lighting, cinematic still, golden hour, crepuscular rays, smooth [grainy]";
let strength = 0.95;
let sampler = 'keuler_a';
let steps = 50;
let seed = -1;
// Start building nodes
var id = 1;
var initialNode = {"id": id.toString(), "type": "txt2img", "prompt": prompt, "sampler": sampler, "steps": steps, "seed": seed};
id++;
var upscaleNode = {"id": id.toString(), "type": "show_image" };
id++
nodes = {};
nodes[initialNode.id] = initialNode;
nodes[upscaleNode.id] = upscaleNode;
links = [
[{ "node_id": initialNode.id, field: "image" },
{ "node_id": upscaleNode.id, field: "image" }]
];
// expandSize = 128;
// for (var i = 0; i < 6; ++i) {
// var i_seed = (seed == -1) ? -1 : seed + i + 1;
// var startid = id - 1;
// var offset = (i < 3) ? -expandSize : (3 * expandSize) + ((i - 3 + 1) * expandSize);
// nodes.push({"id": id.toString(), "type": "crop", "x": offset, "y": 0, "width": 512, "height": 512});
// let id_gen = id;
// links.push({"from_node": {"id": startid.toString(), "field": "image"},"to_node": {"id": id_gen.toString(),"field": "image"}});
// id++;
// nodes.push({"id": id.toString(), "type": "tomask"});
// let id_mask = id;
// links.push({"from_node": {"id": id_gen.toString(), "field": "image"},"to_node": {"id": id_mask.toString(),"field": "image"}});
// id++;
// nodes.push({"id": id.toString(), "type": "blur", "radius": 32});
// let id_mask_blur = id;
// links.push({"from_node": {"id": id_mask.toString(), "field": "mask"},"to_node": {"id": id_mask_blur.toString(),"field": "image"}});
// id++
// nodes.push({"id": id.toString(), "type": "ilerp", "min": 128, "max": 255});
// let id_ilerp = id;
// links.push({"from_node": {"id": id_mask_blur.toString(), "field": "image"},"to_node": {"id": id_ilerp.toString(),"field": "image"}});
// id++
// nodes.push({"id": id.toString(), "type": "cv_inpaint"});
// let id_cv_inpaint = id;
// links.push({"from_node": {"id": id_gen.toString(), "field": "image"},"to_node": {"id": id_cv_inpaint.toString(),"field": "image"}});
// links.push({"from_node": {"id": id_mask.toString(), "field": "mask"},"to_node": {"id": id_cv_inpaint.toString(),"field": "mask"}});
// id++;
// nodes.push({"id": id.toString(), "type": "img2img", "prompt": prompt, "strength": strength, "sampler": sampler, "steps": steps, "seed": i_seed, "color_match": true, "inpaint_replace": inpaint_replace});
// let id_i2i = id;
// links.push({"from_node": {"id": id_cv_inpaint.toString(), "field": "image"},"to_node": {"id": id_i2i.toString(),"field": "image"}});
// links.push({"from_node": {"id": id_ilerp.toString(), "field": "image"},"to_node": {"id": id_i2i.toString(),"field": "mask"}});
// id++;
// nodes.push({"id": id.toString(), "type": "paste", "x": offset, "y": 0});
// let id_paste = id;
// links.push({"from_node": {"id": startid.toString(), "field": "image"},"to_node": {"id": id_paste.toString(),"field": "base_image"}});
// links.push({"from_node": {"id": id_i2i.toString(), "field": "image"},"to_node": {"id": id_paste.toString(),"field": "image"}});
// links.push({"from_node": {"id": id_ilerp.toString(), "field": "image"},"to_node": {"id": id_paste.toString(),"field": "mask"}});
// id++;
// }
var graph = {
"nodes": nodes,
"edges": links
};
// var defaultGraph = {"nodes": [
// {"id": "1", "type": "txt2img", "prompt": prompt},
// {"id": "2", "type": "crop", "x": -256, "y": 128, "width": 512, "height": 512},
// {"id": "3", "type": "tomask"},
// {"id": "4", "type": "cv_inpaint"},
// {"id": "5", "type": "img2img", "prompt": prompt, "strength": 0.9},
// {"id": "6", "type": "paste", "x": -256, "y": 128},
// ],
// "links": [
// {"from_node": {"id": "1","field": "image"},"to_node": {"id": "2","field": "image"}},
// {"from_node": {"id": "2","field": "image"},"to_node": {"id": "3","field": "image"}},
// {"from_node": {"id": "2","field": "image"},"to_node": {"id": "4","field": "image"}},
// {"from_node": {"id": "3","field": "mask"},"to_node": {"id": "4","field": "mask"}},
// {"from_node": {"id": "4","field": "image"},"to_node": {"id": "5","field": "image"}},
// {"from_node": {"id": "3","field": "mask"},"to_node": {"id": "5","field": "mask"}},
// {"from_node": {"id": "1","field": "image"},"to_node": {"id": "6","field": "base_image"}},
// {"from_node": {"id": "5","field": "image"},"to_node": {"id": "6","field": "image"}}
// ]};
fetch('/api/v1/sessions/', {
method: 'POST',
headers: new Headers({'content-type': 'application/json'}),
body: JSON.stringify(graph)
}).then(response => response.json())
.then(data => {
sessionId = data.id;
socket.emit('subscribe', { 'session': sessionId });
fetch(`/api/v1/sessions/${sessionId}/invoke?all=true`, {
method: 'PUT',
headers: new Headers({'content-type': 'application/json'}),
body: ''
});
});
}
</script>
</body>
</html>

tests/__init__.py (new file, 0 lines)