Compare commits

...

68 Commits

Author SHA1 Message Date
Lincoln Stein
aa1538bd70 [Fix] Lora double hook (#3471)
Currently, hooks register multiple times for some modules.
As a result, LoRA is applied multiple times to these modules during generation
and the images look weird.
If you have any other ideas on how to fix this more cleanly, feel free to push.
2023-05-29 20:53:27 -04:00
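For context, the fix above replaces a flat list of hook handles with a mapping keyed by the LoRA module name, so registering the same hook twice becomes a no-op. A minimal, self-contained sketch of that guard (names here are illustrative, not the exact InvokeAI API):

```
import torch
from torch.utils.hooks import RemovableHandle


class HookRegistry:
    """Track forward hooks by key so repeated registration is a no-op."""

    def __init__(self):
        # key -> (module, handle)
        self.hooks: dict[str, tuple[torch.nn.Module, RemovableHandle]] = {}

    def register(self, key: str, module: torch.nn.Module, hook_fn) -> None:
        if key in self.hooks:
            registered_module, _ = self.hooks[key]
            if registered_module is not module:
                raise RuntimeError(f"Trying to register multiple modules to key: {key}")
            return  # same module hooked twice: nothing to do
        handle = module.register_forward_hook(hook_fn)
        self.hooks[key] = (module, handle)

    def clear(self) -> None:
        for _, handle in self.hooks.values():
            handle.remove()
        self.hooks.clear()
```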
Sergey Borisov
9e87a080a8 Avoid double hook registration in lora 2023-05-26 20:37:02 +03:00
Lincoln Stein
f3b2e02921 make updater error message persist in console 2023-05-22 11:16:40 -04:00
Lincoln Stein
fab5df9317 2.3.5 fixes to automatic updating and vae conversions (#3444)
# Minor fixes to the 2.3 branch

This is a proposed `2.3.5.post2` to correct the updater problems in
2.3.5.post1 and make the transition to 3.0.0 easier.

## Updating fixed

The invokeai-update script now recognizes when the user previously
installed xformers and modifies the pip install command to include
xformers as an extra that needs to be updated. This prevents the
problem experienced during the upgrade to `2.3.5.post1`, in which torch
was updated but xformers wasn't.

## VAE autoconversion improved

In addition to looking for instances in which a user has entered a VAE
ckpt into the "vae" field directly, the model manager now also handles
the case in which the user entered a ckpt (rather than a diffusers
model) into the path field. These two cases now both work:

```
vae: models/ldm/stable-diffusion-1/vae-ft-mse-840000-ema-pruned.ckpt
```
and

```
vae:
      path: models/ldm/stable-diffusion-1/vae-ft-mse-840000-ema-pruned.ckpt
```
In addition, if a 32-bit checkpoint VAE is encountered and the user is
running at half precision, the VAE is now converted to 16 bits on the fly.
2023-05-22 10:56:33 -04:00
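The two accepted spellings boil down to the same thing: a checkpoint path that must be detected and converted before loading. A rough sketch of the detection step, assuming a helper that receives the `vae` entry from `models.yaml` either as a bare path or as a mapping with a `path` key (simplified; the real logic lives in the model manager):

```
from pathlib import Path
from typing import Optional, Union


def resolve_vae_ckpt(vae_entry: Union[str, Path, dict, None]) -> Optional[Path]:
    """Return the checkpoint path if the vae entry points at a .ckpt/.pt/.safetensors
    file, either directly or via a nested 'path' key; otherwise None (already diffusers)."""
    candidate = None
    if isinstance(vae_entry, (str, Path)):
        candidate = Path(vae_entry)
    elif isinstance(vae_entry, dict) and vae_entry.get("path"):
        candidate = Path(vae_entry["path"])
    if candidate and candidate.suffix in {".ckpt", ".pt", ".safetensors"}:
        return candidate
    return None


# Both config styles from the PR description resolve to the same file:
ckpt = "models/ldm/stable-diffusion-1/vae-ft-mse-840000-ema-pruned.ckpt"
assert resolve_vae_ckpt(ckpt) == resolve_vae_ckpt({"path": ckpt})
```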
Lincoln Stein
2e21e5b8f3 fixes to automatic updating and vae conversions
This PR makes the following minor fixes to the 2.3 branch:

1. The invokeai-update script now recognizes when the user
previously installed xformers and modifies the pip install command
to include xformers as an extra that needs to be updated.

2. In addition to looking for instances in which a user has entered a
VAE ckpt into the "vae" field directly, it also handles the case in
which the user entered a ckpt into the path field. These two cases
now work:

   vae: models/ldm/stable-diffusion-1/vae-ft-mse-840000-ema-pruned.ckpt

and

   vae:
      path: models/ldm/stable-diffusion-1/vae-ft-mse-840000-ema-pruned.ckpt

3. If a 32-bit checkpoint VAE is encountered and the user is running at half
precision, the VAE is now converted to 16 bits on the fly.
2023-05-21 19:11:43 -04:00
blessedcoolant
0ce628b22e autoconvert legacy VAEs (#3235)
This draft PR implements a system in which, when a diffusers model is
loaded and the model manager detects that the user has assigned a
legacy checkpoint VAE to it, the checkpoint is converted to a diffusers
VAE in RAM.

It is draft because it has not been carefully tested yet, and there are
some edge cases that are not handled properly.
2023-05-19 12:26:25 +12:00
Lincoln Stein
ddcf9a3396 Merge branch 'v2.3' into lstein/enhance/autoconvert-vaes 2023-05-18 13:41:55 -04:00
Lincoln Stein
53f5dfbce1 quench fp16 revision not found error 2023-05-16 23:45:14 -04:00
Lincoln Stein
060ea144a1 quench torch 2.0.0 deprecation warning 2023-05-16 23:45:14 -04:00
Lincoln Stein
31a65b1e5d bump compel version 2023-05-16 23:45:14 -04:00
Lincoln Stein
bdc75be33f 1. update installer torch version
2. bump version to 2.3.5.post1
2023-05-16 23:45:14 -04:00
Lincoln Stein
628d307c69 1. Update the following dependencies
- torch 2.0.0
- xformers 0.0.19
- compel 1.1.3

2. Fix LoRA documentation so that it shows up in mkdocs.
2023-05-16 23:45:14 -04:00
Lincoln Stein
18f0cbd9a7 Turn the HuggingFaceConceptsLib into a singleton to prevent redundant connections (#3337)
This is a partial fix for #3330. It prevents the concepts library from
hitting HuggingFace to download concept library terms every time a model
is changed.

It does not address the issue that the web interface downloads the
concepts even if it is not going to use them.
2023-05-10 00:00:04 -04:00
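The pattern here is a module-level accessor that caches a single instance; the diff below adds `get_hf_concepts_lib()` to `ldm.invoke.concepts_lib` for this purpose. A self-contained sketch with a stand-in class:

```
class HuggingFaceConceptsLibrary:
    """Stand-in for the real class; constructing it hits the HuggingFace hub."""

    def __init__(self):
        print("connecting to the HuggingFace concepts library...")


_singleton = None


def get_hf_concepts_lib() -> HuggingFaceConceptsLibrary:
    """Return one shared instance so model switches don't re-download concept terms."""
    global _singleton
    if _singleton is None:
        _singleton = HuggingFaceConceptsLibrary()
    return _singleton


assert get_hf_concepts_lib() is get_hf_concepts_lib()  # created exactly once
```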
Lincoln Stein
d83d69ccc1 Merge branch 'v2.3' into lstein/bugfix/concept-lib-singleton 2023-05-09 23:38:41 -04:00
Lincoln Stein
332ac72e0e [Bugfix] Update check failing because process disappears (#3334)
Fixes #3228, where the check to see if invokeai is running fails because
a process no longer exists.
2023-05-04 20:32:51 -04:00
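The underlying race is that a process can exit between `psutil` enumerating it and the code inspecting it; the fix is to treat `NoSuchProcess` like `AccessDenied` and skip the entry. A hedged sketch of the hardened check (function name illustrative):

```
import psutil


def invokeai_is_running() -> bool:
    """Return True if any running process command line mentions invokeai."""
    for p in psutil.process_iter():
        try:
            if "invokeai" in " ".join(p.cmdline()):
                print(f"An InvokeAI instance appears to be running as process {p.pid}")
                return True
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            # Process vanished (or is inaccessible) mid-check: skip it.
            continue
    return False
```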
Lincoln Stein
fa886ee9e0 turn the HuggingFaceConceptsLib into a singleton to prevent redundant downloads 2023-05-03 15:12:34 -04:00
Dan Nguyen
03bbb308c9 [Bugfix] Update check failing because process disappears
Fixes #3228, where the check to see if invokeai is running fails because
a process no longer exists.
2023-05-03 10:54:43 -05:00
Lincoln Stein
1dcac3929b Release v2.3.5 (#3309)
# Version 2.3.5
This will be the 2.3.5 release once it is merged into the `v2.3` branch.
Changes on the RC branch are:

- Bump version number
- Fix bug in LoRA path determination (do it at runtime, not at module
load time, or root will get confused); closes #3293.
- Remove dangling debug statement.
2023-05-01 12:40:47 -04:00
Lincoln Stein
d73f1c363c bump version number 2023-05-01 09:28:49 -04:00
Lincoln Stein
e52e7418bb close #3304 2023-04-29 20:07:21 -04:00
Lincoln Stein
73be58a0b5 fix issue #3293 2023-04-29 11:37:07 -04:00
Lincoln Stein
5a7d11bca8 remove debugging statement 2023-04-27 08:21:26 -04:00
Lincoln Stein
5bbf7fe34a [Bugfix] Renames in 0.15.0 diffusers (#3184)
Link to PR in diffusers repository:
https://github.com/huggingface/diffusers/pull/2691

Imports:
`diffusers.models.cross_attention ->
diffusers.models.attention_processor`

Unions:
`AttnProcessor -> AttentionProcessor`

Classes:
| Old name | New name |
| --- | --- |
| CrossAttention | Attention |
| CrossAttnProcessor | AttnProcessor |
| XFormersCrossAttnProcessor | XFormersAttnProcessor |
| CrossAttnAddedKVProcessor | AttnAddedKVProcessor |
| LoRACrossAttnProcessor | LoRAAttnProcessor |
| LoRAXFormersCrossAttnProcessor | LoRAXFormersAttnProcessor |
| FlaxCrossAttention | FlaxAttention |
| AttendExciteCrossAttnProcessor | AttendExciteAttnProcessor |
| Pix2PixZeroCrossAttnProcessor | Pix2PixZeroAttnProcessor |


Also, config values are no longer set as attributes of the object:
https://github.com/huggingface/diffusers/pull/2849
2023-04-27 11:38:27 +01:00
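The PR itself simply switches to the new names. Code that needs to run against both diffusers 0.14 and 0.15 can instead use an import shim; a hedged sketch (not what this PR does):

```
# Prefer the diffusers >= 0.15 names, fall back to the pre-0.15 ones.
try:
    from diffusers.models.attention_processor import Attention, AttnProcessor
except ImportError:  # diffusers < 0.15
    from diffusers.models.cross_attention import CrossAttention as Attention
    from diffusers.models.cross_attention import CrossAttnProcessor as AttnProcessor
```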
Lincoln Stein
bfb968bbe8 Merge branch 'v2.3' into fix/new_diffusers_names 2023-04-26 23:54:37 +01:00
Lincoln Stein
6db72f83a2 bump version number to 2.3.5-rc1 (#3267)
Bump version number for 2.3.5 release candidate.
2023-04-26 23:53:53 +01:00
Sergey Borisov
432e526999 Revert merge changes 2023-04-25 14:49:08 +03:00
Lincoln Stein
830740b93b remove redundant/buggy restore_default_attention() method 2023-04-25 07:05:07 -04:00
StAlKeR7779
ff3f289342 Merge branch 'v2.3' into fix/new_diffusers_names 2023-04-25 13:21:26 +03:00
Lincoln Stein
34abbb3589 Merge branch 'v2.3' into release/v2.3.5 2023-04-25 04:33:09 +01:00
Lincoln Stein
c0eb1a9921 increase sha256 chunksize when calculating model hash (#3162)
- Thanks to @abdBarho, who discovered that increasing the chunksize
dramatically decreases the amount of time to calculate the hash.
2023-04-25 04:25:55 +01:00
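For reference, the change is purely about I/O granularity: hashing the same bytes in 16 MiB reads instead of 4 KiB reads. A minimal sketch of the streaming hash (the 16777216 default matches the chunksize the diff below switches to):

```
import hashlib
from pathlib import Path


def sha256_of_file(path: Path, chunksize: int = 16_777_216) -> str:
    """Hash a file by streaming it in large (16 MiB) chunks instead of 4 KiB ones."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunksize):
            sha.update(chunk)
    return sha.hexdigest()
```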
Lincoln Stein
2ddd0301f4 bump version number to 2.3.5-rc1 2023-04-24 23:24:33 -04:00
Lincoln Stein
ce6629b6f5 Merge branch 'v2.3' into enhance/increase-sha256-chunksize 2023-04-25 03:58:30 +01:00
Lincoln Stein
994a76aeaa [Enhancement] distinguish v1 from v2 LoRA models (#3175)
# Distinguish LoRA/LyCORIS files based on what version of SD they were
built on top of

- Attempting to run a prompt with a LoRA based on SD v1.X against a
model based on v2.X will now throw an `IncompatibleModelException`. To
import this exception:
`from ldm.modules.lora_manager import IncompatibleModelException` (maybe
this should be defined in ModelManager?)
    
- Enhance `LoraManager.list_loras()` to accept an optional integer
argument, `token_vector_length`. This will filter the returned LoRA
models to return only those that match the indicated length. Use:
      ```
      768 => for models based on SD v1.X
      1024 => for models based on SD v2.X
      ```
Note that this filtering requires each LoRA file to be opened with
`safetensors.torch`. It will take ~8s to scan a directory of 40 files.
    
- Added new static methods to `ldm.modules.kohya_lora_manager`:
      - check_model_compatibility()
      - vector_length_from_checkpoint()
      - vector_length_from_checkpoint_file()

- You can now create subdirectories within the `loras` directory and
organize the model files.
2023-04-25 03:57:45 +01:00
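The vector-length check works because Kohya-format LoRAs carry text-encoder layers whose `lora_down` weights have an input width equal to the base model's token embedding size (768 for SD 1.x, 1024 for SD 2.x). A hedged sketch of that inspection; the key naming is assumed from the Kohya convention, and the real implementation lives in `ldm.modules.kohya_lora_manager`:

```
from typing import Optional

from safetensors.torch import load_file


def token_vector_length(lora_path: str) -> Optional[int]:
    """Guess whether a Kohya-format LoRA targets SD 1.x (768) or SD 2.x (1024)
    by inspecting the input width of a text-encoder lora_down weight."""
    state_dict = load_file(lora_path)
    for key, tensor in state_dict.items():
        if key.startswith("lora_te") and key.endswith("lora_down.weight"):
            return tensor.shape[1]  # 768 => SD v1.X, 1024 => SD v2.X
    return None  # no text-encoder layers found; compatibility unknown
```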
Lincoln Stein
144dfe4a5b Merge branch 'v2.3' into bugfix/lora-incompatibility-handling 2023-04-25 03:54:46 +01:00
Lincoln Stein
5dbc63e2ae Revert "improvements to the installation and upgrade processes" (#3266)
Reverts invoke-ai/InvokeAI#3186
2023-04-25 03:54:04 +01:00
Lincoln Stein
c6ae1edc82 Revert "improvements to the installation and upgrade processes" 2023-04-24 22:53:43 -04:00
Lincoln Stein
0f3c456d59 merge with v2.3 2023-04-24 22:51:48 -04:00
Lincoln Stein
2cd0e036ac Merge branch 'v2.3' into bugfix/lora-incompatibility-handling 2023-04-25 03:24:25 +01:00
Lincoln Stein
a45b3387c0 Merge branch 'v2.3' into enhance/increase-sha256-chunksize 2023-04-25 03:22:43 +01:00
Lincoln Stein
c088cf0344 improvements to the installation and upgrade processes (#3186)
- Moved all postinstallation config file and model munging code out of
the CLI and into a separate script named `invokeai-postinstall`

- Fixed two calls to `shutil.copytree()` so that they don't try to
preserve the file mode of the copied files. This is necessary to run
correctly in a Nix environment (see thread at
https://discord.com/channels/1020123559063990373/1091716696965918732/1095662756738371615)

- Update the installer so that an existing virtual environment will be
updated, not overwritten.

- Pin npyscreen version to see if this fixes issues people have had with
installing this module.
2023-04-25 03:20:58 +01:00
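(This PR was reverted by 5dbc63e2ae above, but the `copytree` point stands on its own.) The Nix issue comes from `shutil.copytree` defaulting to `copy2`, which preserves permission bits from a read-only store; passing `copy_function=shutil.copyfile` copies file contents only. A sketch:

```
import shutil
from pathlib import Path


def copy_configs(src: Path, dest: Path) -> None:
    """Copy a config tree without preserving permission bits, so templates coming
    from a read-only location (e.g. the Nix store) stay writable at the destination."""
    shutil.copytree(
        src,
        dest,
        dirs_exist_ok=True,
        copy_function=shutil.copyfile,  # copyfile copies data only; copy2 also copies mode/metadata
    )
```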
Lincoln Stein
264af3c054 fix crash caused by incorrect conflict resolution 2023-04-24 22:20:12 -04:00
Lincoln Stein
7f7d5894fa Merge branch 'v2.3' into bugfix/lora-incompatibility-handling 2023-04-25 02:51:27 +01:00
Lincoln Stein
2a2c86896a pull in diffusers 0.15.1
- Change diffusers dependency to `diffusers~=0.15.0`, which *should*
  restrict updates to non-breaking releases.
2023-04-20 13:29:20 -04:00
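To see what the `~=0.15.0` compatible-release specifier actually admits, the `packaging` library can evaluate it directly (illustrative):

```
from packaging.specifiers import SpecifierSet

spec = SpecifierSet("~=0.15.0")   # equivalent to >=0.15.0, ==0.15.*
print(spec.contains("0.15.1"))    # True  -- patch releases are allowed
print(spec.contains("0.16.0"))    # False -- a potentially breaking minor bump is excluded
```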
Lincoln Stein
f36452d650 rebuild front end 2023-04-20 12:27:08 -04:00
Lincoln Stein
e5188309ec Merge branch 'v2.3' into bugfix/lora-incompatibility-handling 2023-04-20 17:25:09 +01:00
Lincoln Stein
aabe79686e Merge branch 'v2.3' into fix/new_diffusers_names 2023-04-20 17:20:33 +01:00
Lincoln Stein
23d9361528 autoconvert ckpt VAEs assigned to diffusers models 2023-04-19 17:44:27 -04:00
Lincoln Stein
ce22a1577c convert VAEs to diffusers format automatically
- If the user enters a VAE .ckpt path into the VAE field of a
  diffusers model, the VAE will be automatically converted behind
  the scenes into a diffusers version, then loaded.

- This commit is untested (done on an airplane).
2023-04-18 21:20:08 -04:00
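A rough sketch of the in-memory conversion described above, leaning on the `create_vae_diffusers_config` and `convert_ldm_vae_state_dict` helpers that appear in the `ckpt_to_diffuser` changes further down (import path and signatures are assumed from the diff, so treat this as illustrative):

```
from pathlib import Path

import torch
from diffusers import AutoencoderKL
from omegaconf import OmegaConf
from safetensors.torch import load_file

# InvokeAI helpers; signatures assumed from the ckpt_to_diffuser diff below.
from ldm.invoke.ckpt_to_diffuser import (
    convert_ldm_vae_state_dict,
    create_vae_diffusers_config,
)


def convert_vae(vae_path: Path, ldm_config_path: Path) -> AutoencoderKL:
    """Convert a legacy .ckpt/.pt/.safetensors VAE into a diffusers AutoencoderKL in memory."""
    if vae_path.suffix in (".pt", ".ckpt"):
        state_dict = torch.load(vae_path, map_location="cpu")
    else:
        state_dict = load_file(vae_path)
    state_dict = state_dict.get("state_dict", state_dict)

    original_conf = OmegaConf.load(ldm_config_path)  # e.g. v1-inference.yaml
    vae_config = create_vae_diffusers_config(original_conf, image_size=512)
    vae = AutoencoderKL(**vae_config)
    vae.load_state_dict(convert_ldm_vae_state_dict(state_dict, vae_config))
    return vae
```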
Lincoln Stein
216b1c3a4a Merge branch 'v2.3' into fix/new_diffusers_names 2023-04-18 19:37:25 -04:00
Lincoln Stein
47883860a6 Merge branch 'v2.3' into enhance/increase-sha256-chunksize 2023-04-13 23:00:34 -04:00
Lincoln Stein
8f17d17208 Merge branch 'v2.3' into fix/new_diffusers_names 2023-04-13 22:44:05 -04:00
Lincoln Stein
c6ecf3afc5 pin diffusers to 0.15.*, and fix deprecation warning on unet.in_channels 2023-04-13 22:38:50 -04:00
Lincoln Stein
2c449bfb34 Merge branch 'v2.3' into bugfix/lora-incompatibility-handling 2023-04-13 22:23:59 -04:00
Lincoln Stein
8fb4b05556 change lora and TI list dynamically when model changes 2023-04-13 22:22:43 -04:00
StAlKeR7779
0bc5dcc663 Refactor 2023-04-13 16:05:04 +03:00
Lincoln Stein
0eda1a03e1 pin diffusers to 0.14 2023-04-13 00:40:26 -04:00
Lincoln Stein
be7e067c95 getLoraModels event filters loras by compatibility 2023-04-13 00:31:11 -04:00
Lincoln Stein
afa3cdce27 add a list_compatible_loras() method 2023-04-13 00:11:26 -04:00
Lincoln Stein
6dfbd1c677 implement caching scheme for vector length 2023-04-12 23:56:52 -04:00
StAlKeR7779
16c97ca0cb Fix num_train_timesteps in config 2023-04-12 23:57:45 +03:00
StAlKeR7779
e24dd97b80 Fix that config attributes no longer accessible as object attributes 2023-04-12 23:40:14 +03:00
StAlKeR7779
5a54039dd7 Fix imports for diffusers 0.15.0
Imports:
`diffusers.models.cross_attention -> diffusers.models.attention_processor`

Unions:
`AttnProcessor -> AttentionProcessor`

Classes:
| Old name | New name|
| --- | --- |
| CrossAttention | Attention |
| CrossAttnProcessor | AttnProcessor |
| XFormersCrossAttnProcessor | XFormersAttnProcessor |
| CrossAttnAddedKVProcessor | AttnAddedKVProcessor |
| LoRACrossAttnProcessor | LoRAAttnProcessor |
| LoRAXFormersCrossAttnProcessor | LoRAXFormersAttnProcessor |

These classes keep the same names:
`SlicedAttnProcessor, SlicedAttnAddedKVProcessor`
2023-04-12 22:54:25 +03:00
Lincoln Stein
9385edb453 Merge branch 'v2.3' into enhance/increase-sha256-chunksize 2023-04-11 18:51:44 -04:00
Lincoln Stein
2251d3abfe fixup relative path to devices module 2023-04-10 23:44:58 -04:00
Lincoln Stein
0b22a3f34d distinguish LoRA/LyCORIS files based on what SD model they were based on
- Attempting to run a prompt with a LoRA based on SD v1.X against a
  model based on v2.X will now throw an
  `IncompatibleModelException`. To import this exception:
  `from ldm.modules.lora_manager import IncompatibleModelException`
  (maybe this should be defined in ModelManager?)

- Enhance `LoraManager.list_loras()` to accept an optional integer
  argument, `token_vector_length`. This will filter the returned LoRA
  models to return only those that match the indicated length. Use:
  ```
  768 => for models based on SD v1.X
  1024 => for models based on SD v2.X
  ```

  Note that this filtering requires each LoRA file to be opened
  with `safetensors.torch`. It will take ~8s to scan a directory of
  40 files.

- Added new static methods to `ldm.modules.kohya_lora_manager`:
  - check_model_compatibility()
  - vector_length_from_checkpoint()
  - vector_length_from_checkpoint_file()
2023-04-10 23:33:28 -04:00
Lincoln Stein
2528e14fe9 raise generation exceptions so that frontend can catch 2023-04-10 14:26:09 -04:00
Lincoln Stein
16ccc807cc control which revision of a diffusers model is downloaded
- Previously the user's preferred precision was used to select which
  version branch of a diffusers model would be downloaded. Half-precision
  would try to download the 'fp16' branch if it existed.

- Turns out that with waifu-diffusion this logic doesn't work, as
  'fp16' gets you waifu-diffusion v1.3, while 'main' gets you
  waifu-diffusion v1.4. Who knew?

- This PR adds a new optional "revision" field to `models.yaml`. This
  can be used to override the diffusers branch version. In the case of
  Waifu diffusion, INITIAL_MODELS.yaml now specifies the "main" branch.

- This PR also quenches the NSFW nag that downloading diffusers sometimes
  triggers.

- Closes #3160
2023-04-09 22:07:55 -04:00
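The selection logic this enables is small: an explicit `revision` in `models.yaml` wins outright, otherwise half-precision users try the `fp16` branch first and fall back to the default. A sketch mirroring the installer change further down (names illustrative):

```
def revision_candidates(mconfig: dict, precision: str) -> list:
    """Build the list of kwargs dicts tried in order when downloading a diffusers repo."""
    revision = mconfig.get("revision")
    if revision:
        return [{"revision": revision}]
    if precision == "float16":
        return [{"revision": "fp16"}, {}]
    return [{}]


# waifu-diffusion 1.4 pins revision: main, so only that branch is tried:
assert revision_candidates(
    {"repo_id": "hakurei/waifu-diffusion", "revision": "main"}, "float16"
) == [{"revision": "main"}]
```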
Lincoln Stein
66364501d5 increase sha256 chunksize when calculating model hash
- Thanks to @abdBarho, who discovered that increasing the chunksize
  dramatically decreases the amount of time to calculate the hash.
2023-04-09 16:39:16 -04:00
32 changed files with 606 additions and 401 deletions

2
.gitignore vendored
View File

@@ -233,5 +233,3 @@ installer/install.sh
installer/update.bat
installer/update.sh
# no longer stored in source directory
models

View File

@@ -41,6 +41,16 @@ Windows systems). If the `loras` folder does not already exist, just
create it. The vast majority of LoRA models use the Kohya file format,
which is a type of `.safetensors` file.
!!! warning "LoRA Naming Restrictions"
InvokeAI will only recognize LoRA files that contain the
characters a-z, A-Z, 0-9 and the underscore character
_. Other characters, including the hyphen, will cause the
LoRA file not to load. These naming restrictions may be
relaxed in the future, but for now you will need to rename
files that contain hyphens, commas, brackets, and other
non-word characters.
You may change where InvokeAI looks for the `loras` folder by passing the
`--lora_directory` option to the `invoke.sh`/`invoke.bat` launcher, or
by placing the option in `invokeai.init`. For example:

View File

@@ -33,6 +33,11 @@ title: Overview
Restore mangled faces and make images larger with upscaling. Also see
the [Embiggen Upscaling Guide](EMBIGGEN.md).
- The [Using LoRA Models](LORAS.md)
Add custom subjects and styles using HuggingFace's repository of
embeddings.
- The [Concepts Library](CONCEPTS.md)
Add custom subjects and styles using HuggingFace's repository of

View File

@@ -79,7 +79,7 @@ title: Manual Installation, Linux
and obtaining an access token for downloading. It will then download and
install the weights files for you.
Please look [here](../INSTALL_MANUAL.md) for a manual process for doing
Please look [here](../020_INSTALL_MANUAL.md) for a manual process for doing
the same thing.
7. Start generating images!

View File

@@ -75,7 +75,7 @@ Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehan
obtaining an access token for downloading. It will then download and install the
weights files for you.
Please look [here](../INSTALL_MANUAL.md) for a manual process for doing the
Please look [here](../020_INSTALL_MANUAL.md) for a manual process for doing the
same thing.
8. Start generating images!

View File

@@ -0,0 +1,5 @@
mkdocs
mkdocs-material>=8, <9
mkdocs-git-revision-date-localized-plugin
mkdocs-redirects==1.2.0

View File

@@ -132,13 +132,12 @@ class Installer:
# Prefer to copy python executables
# so that updates to system python don't break InvokeAI
if not venv_dir.exists():
try:
venv.create(venv_dir, with_pip=True)
# If installing over an existing environment previously created with symlinks,
# the executables will fail to copy. Keep symlinks in that case
except shutil.SameFileError:
venv.create(venv_dir, with_pip=True, symlinks=True)
try:
venv.create(venv_dir, with_pip=True)
# If installing over an existing environment previously created with symlinks,
# the executables will fail to copy. Keep symlinks in that case
except shutil.SameFileError:
venv.create(venv_dir, with_pip=True, symlinks=True)
# upgrade pip in Python 3.9 environments
if int(platform.python_version_tuple()[1]) == 9:
@@ -244,16 +243,15 @@ class InvokeAiInstance:
# Note that we're installing pinned versions of torch and
# torchvision here, which *should* correspond to what is
# in pyproject.toml. This is to prevent torch 2.0 from
# being installed and immediately uninstalled and replaced with 1.13
# in pyproject.toml.
pip = local[self.pip]
(
pip[
"install",
"--require-virtualenv",
"torch~=1.13.1",
"torchvision~=0.14.1",
"torch~=2.0.0",
"torchvision>=0.14.1",
"--force-reinstall",
"--find-links" if find_links is not None else None,
find_links,

View File

@@ -25,7 +25,7 @@ from invokeai.backend.modules.parameters import parameters_to_command
import invokeai.frontend.dist as frontend
from ldm.generate import Generate
from ldm.invoke.args import Args, APP_ID, APP_VERSION, calculate_init_img_hash
from ldm.invoke.concepts_lib import HuggingFaceConceptsLibrary
from ldm.invoke.concepts_lib import get_hf_concepts_lib
from ldm.invoke.conditioning import (
get_tokens_for_prompt_object,
get_prompt_structure,
@@ -37,11 +37,11 @@ from ldm.invoke.globals import (
Globals,
global_converted_ckpts_dir,
global_models_dir,
global_lora_models_dir,
)
from ldm.invoke.pngwriter import PngWriter, retrieve_metadata
from compel.prompt_parser import Blend
from ldm.invoke.merge_diffusers import merge_diffusion_models
from ldm.modules.lora_manager import LoraManager
# Loading Arguments
opt = Args()
@@ -523,20 +523,12 @@ class InvokeAIWebServer:
@socketio.on("getLoraModels")
def get_lora_models():
try:
lora_path = global_lora_models_dir()
loras = []
for root, _, files in os.walk(lora_path):
models = [
Path(root, x)
for x in files
if Path(x).suffix in [".ckpt", ".pt", ".safetensors"]
]
loras = loras + models
model = self.generate.model
lora_mgr = LoraManager(model)
loras = lora_mgr.list_compatible_loras()
found_loras = []
for lora in sorted(loras, key=lambda s: s.stem.lower()):
location = str(lora.resolve()).replace("\\", "/")
found_loras.append({"name": lora.stem, "location": location})
for lora in sorted(loras, key=str.casefold):
found_loras.append({"name":lora,"location":str(loras[lora])})
socketio.emit("foundLoras", found_loras)
except Exception as e:
self.handle_exceptions(e)
@@ -546,7 +538,7 @@ class InvokeAIWebServer:
try:
local_triggers = self.generate.model.textual_inversion_manager.get_all_trigger_strings()
locals = [{'name': x} for x in sorted(local_triggers, key=str.casefold)]
concepts = HuggingFaceConceptsLibrary().list_concepts(minimum_likes=5)
concepts = get_hf_concepts_lib().list_concepts(minimum_likes=5)
concepts = [{'name': f'<{x}>'} for x in sorted(concepts, key=str.casefold) if f'<{x}>' not in local_triggers]
socketio.emit("foundTextualInversionTriggers", {'local_triggers': locals, 'huggingface_concepts': concepts})
except Exception as e:

View File

@@ -80,7 +80,8 @@ trinart-2.0:
repo_id: stabilityai/sd-vae-ft-mse
recommended: False
waifu-diffusion-1.4:
description: An SD-1.5 model trained on 680k anime/manga-style images (2.13 GB)
description: An SD-2.1 model trained on 5.4M anime/manga-style images (4.27 GB)
revision: main
repo_id: hakurei/waifu-diffusion
format: diffusers
vae:

File diff suppressed because one or more lines are too long

View File

@@ -5,7 +5,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>InvokeAI - A Stable Diffusion Toolkit</title>
<link rel="shortcut icon" type="icon" href="./assets/favicon-0d253ced.ico" />
<script type="module" crossorigin src="./assets/index-f56b39bc.js"></script>
<script type="module" crossorigin src="./assets/index-b12e648e.js"></script>
<link rel="stylesheet" href="./assets/index-2ab0eb58.css">
</head>

View File

@@ -33,6 +33,10 @@ import {
setIntermediateImage,
} from 'features/gallery/store/gallerySlice';
import {
getLoraModels,
getTextualInversionTriggers,
} from 'app/socketio/actions';
import type { RootState } from 'app/store';
import { addImageToStagingArea } from 'features/canvas/store/canvasSlice';
import {
@@ -463,6 +467,8 @@ const makeSocketIOListeners = (
const { model_name, model_list } = data;
dispatch(setModelList(model_list));
dispatch(setCurrentStatus(i18n.t('common.statusModelChanged')));
dispatch(getLoraModels());
dispatch(getTextualInversionTriggers());
dispatch(setIsProcessing(false));
dispatch(setIsCancelable(true));
dispatch(

File diff suppressed because one or more lines are too long

View File

@@ -13,11 +13,16 @@ import time
import traceback
from typing import List
import warnings
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=UserWarning)
import torch
import cv2
import diffusers
import numpy as np
import skimage
import torch
import transformers
from diffusers.pipeline_utils import DiffusionPipeline
from diffusers.utils.import_utils import is_xformers_available
@@ -633,9 +638,8 @@ class Generate:
except RuntimeError:
# Clear the CUDA cache on an exception
self.clear_cuda_cache()
print(traceback.format_exc(), file=sys.stderr)
print(">> Could not generate image.")
print("** Could not generate image.")
raise
toc = time.time()
print("\n>> Usage stats:")
@@ -980,13 +984,15 @@ class Generate:
seed_everything(random.randrange(0, np.iinfo(np.uint32).max))
if self.embedding_path and not model_data.get("ti_embeddings_loaded"):
print(f'>> Loading embeddings from {self.embedding_path}')
for root, _, files in os.walk(self.embedding_path):
for name in files:
ti_path = os.path.join(root, name)
self.model.textual_inversion_manager.load_textual_inversion(
ti_path, defer_injecting_tokens=True
)
model_data["ti_embeddings_loaded"] = True
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=UserWarning)
for root, _, files in os.walk(self.embedding_path):
for name in files:
ti_path = os.path.join(root, name)
self.model.textual_inversion_manager.load_textual_inversion(
ti_path, defer_injecting_tokens=True
)
model_data["ti_embeddings_loaded"] = True
print(
f'>> Textual inversion triggers: {", ".join(sorted(self.model.textual_inversion_manager.get_all_trigger_strings()))}'
)

View File

@@ -4,11 +4,11 @@ import shlex
import sys
import traceback
from argparse import Namespace
from packaging import version
from pathlib import Path
from typing import Union
import click
from compel import PromptParser
if sys.platform == "darwin":
@@ -25,6 +25,7 @@ from .generator.diffusers_pipeline import PipelineIntermediateState
from .globals import Globals, global_config_dir
from .image_util import make_grid
from .log import write_log
from .model_manager import ModelManager
from .pngwriter import PngWriter, retrieve_metadata, write_metadata
from .readline import Completer, get_completer
from ..util import url_attachment_name
@@ -64,6 +65,9 @@ def main():
Globals.sequential_guidance = args.sequential_guidance
Globals.ckpt_convert = True # always true as of 2.3.4 for LoRA support
# run any post-install patches needed
run_patches()
print(f">> Internet connectivity is {Globals.internet_available}")
if not args.conf:
@@ -108,6 +112,9 @@ def main():
if opt.lora_path:
Globals.lora_models_dir = opt.lora_path
# migrate legacy models
ModelManager.migrate_models()
# load the infile as a list of lines
if opt.infile:
try:
@@ -1291,6 +1298,62 @@ def retrieve_last_used_model()->str:
with open(model_file_path,'r') as f:
return f.readline()
# This routine performs any patch-ups needed after installation
def run_patches():
install_missing_config_files()
version_file = Path(Globals.root,'.version')
if version_file.exists():
with open(version_file,'r') as f:
root_version = version.parse(f.readline() or 'v2.3.2')
else:
root_version = version.parse('v2.3.2')
app_version = version.parse(ldm.invoke.__version__)
if root_version < app_version:
try:
do_version_update(root_version, ldm.invoke.__version__)
with open(version_file,'w') as f:
f.write(ldm.invoke.__version__)
except:
print("** Update failed. Will try again on next launch")
def install_missing_config_files():
"""
install ckpt configuration files that may have been added to the
distro after original root directory configuration
"""
import invokeai.configs as conf
from shutil import copyfile
root_configs = Path(global_config_dir(), 'stable-diffusion')
repo_configs = Path(conf.__path__[0], 'stable-diffusion')
for src in repo_configs.iterdir():
dest = root_configs / src.name
if not dest.exists():
copyfile(src,dest)
def do_version_update(root_version: version.Version, app_version: Union[str, version.Version]):
"""
Make any updates to the launcher .sh and .bat scripts that may be needed
from release to release. This is not an elegant solution. Instead, the
launcher should be moved into the source tree and installed using pip.
"""
if root_version < version.Version('v2.3.4'):
dest = Path(Globals.root,'loras')
dest.mkdir(exist_ok=True)
if root_version < version.Version('v2.3.3'):
if sys.platform == "linux":
print('>> Downloading new version of launcher script and its config file')
from ldm.util import download_with_progress_bar
url_base = f'https://raw.githubusercontent.com/invoke-ai/InvokeAI/v{str(app_version)}/installer/templates/'
dest = Path(Globals.root,'invoke.sh.in')
assert download_with_progress_bar(url_base+'invoke.sh.in',dest)
dest.replace(Path(Globals.root,'invoke.sh'))
os.chmod(Path(Globals.root,'invoke.sh'), 0o0755)
dest = Path(Globals.root,'dialogrc')
assert download_with_progress_bar(url_base+'dialogrc',dest)
dest.replace(Path(Globals.root,'.dialogrc'))
if __name__ == '__main__':
main()

View File

@@ -1 +1,3 @@
__version__='2.3.4.post1'
__version__='2.3.5.post2'

View File

@@ -620,7 +620,10 @@ def convert_ldm_vae_checkpoint(checkpoint, config):
for key in keys:
if key.startswith(vae_key):
vae_state_dict[key.replace(vae_key, "")] = checkpoint.get(key)
new_checkpoint = convert_ldm_vae_state_dict(vae_state_dict,config)
return new_checkpoint
def convert_ldm_vae_state_dict(vae_state_dict, config):
new_checkpoint = {}
new_checkpoint["encoder.conv_in.weight"] = vae_state_dict["encoder.conv_in.weight"]

View File

@@ -12,6 +12,14 @@ from urllib import request, error as ul_error
from huggingface_hub import HfFolder, hf_hub_url, ModelSearchArguments, ModelFilter, HfApi
from ldm.invoke.globals import Globals
singleton = None
def get_hf_concepts_lib():
global singleton
if singleton is None:
singleton = HuggingFaceConceptsLibrary()
return singleton
class HuggingFaceConceptsLibrary(object):
def __init__(self, root=None):
'''

View File

@@ -21,6 +21,7 @@ from urllib import request
from shutil import get_terminal_size
import npyscreen
import torch
import transformers
from diffusers import AutoencoderKL
from huggingface_hub import HfFolder
@@ -663,19 +664,8 @@ def initialize_rootdir(root: str, yes_to_all: bool = False):
configs_src = Path(configs.__path__[0])
configs_dest = Path(root) / "configs"
if not os.path.samefile(configs_src, configs_dest):
shutil.copytree(configs_src,
configs_dest,
dirs_exist_ok=True,
copy_function=shutil.copyfile,
)
# Fix up directory permissions so that they are writable
# This can happen when running under Nix environment which
# makes the runtime directory template immutable.
for root,dirs,files in os.walk(os.path.join(root,name)):
for d in dirs:
Path(root,d).chmod(0o775)
for f in files:
Path(root,d).chmod(0o644)
shutil.copytree(configs_src, configs_dest, dirs_exist_ok=True)
# -------------------------------------
def run_console_ui(

View File

@@ -6,6 +6,7 @@ import os
import platform
import psutil
import requests
import pkg_resources
from rich import box, print
from rich.console import Console, group
from rich.panel import Panel
@@ -39,21 +40,11 @@ def invokeai_is_running()->bool:
if matches:
print(f':exclamation: [bold red]An InvokeAI instance appears to be running as process {p.pid}[/red bold]')
return True
except psutil.AccessDenied:
except (psutil.AccessDenied,psutil.NoSuchProcess):
continue
return False
def do_post_install():
'''
Run postinstallation script.
'''
print("Looking for postinstallation script to run on this version...")
try:
from ldm.invoke.config.post_install.py import post_install
post_install()
except:
print("Postinstallation script not available for this version of InvokeAI")
def welcome(versions: dict):
@group()
@@ -82,10 +73,20 @@ def welcome(versions: dict):
)
console.line()
def get_extras():
extras = ''
try:
dist = pkg_resources.get_distribution('xformers')
extras = '[xformers]'
except pkg_resources.DistributionNotFound:
pass
return extras
def main():
versions = get_versions()
if invokeai_is_running():
print(f':exclamation: [bold red]Please terminate all running instances of InvokeAI before updating.[/red bold]')
input('Press any key to continue...')
return
welcome(versions)
@@ -104,20 +105,21 @@ def main():
elif choice=='4':
branch = Prompt.ask('Enter an InvokeAI branch name')
extras = get_extras()
print(f':crossed_fingers: Upgrading to [yellow]{tag if tag else release}[/yellow]')
if release:
cmd = f'pip install {INVOKE_AI_SRC}/{release}.zip --use-pep517 --upgrade'
cmd = f"pip install 'invokeai{extras} @ {INVOKE_AI_SRC}/{release}.zip' --use-pep517 --upgrade"
elif tag:
cmd = f'pip install {INVOKE_AI_TAG}/{tag}.zip --use-pep517 --upgrade'
cmd = f"pip install 'invokeai{extras} @ {INVOKE_AI_TAG}/{tag}.zip' --use-pep517 --upgrade"
else:
cmd = f'pip install {INVOKE_AI_BRANCH}/{branch}.zip --use-pep517 --upgrade'
cmd = f"pip install 'invokeai{extras} @ {INVOKE_AI_BRANCH}/{branch}.zip' --use-pep517 --upgrade"
print('')
print('')
if os.system(cmd)==0:
print(f':heavy_check_mark: Upgrade successful')
else:
print(f':exclamation: [bold red]Upgrade failed[/red bold]')
do_post_install()
if __name__ == "__main__":
try:

View File

@@ -11,6 +11,7 @@ from tempfile import TemporaryFile
import requests
from diffusers import AutoencoderKL
from diffusers import logging as dlogging
from huggingface_hub import hf_hub_url
from omegaconf import OmegaConf
from omegaconf.dictconfig import DictConfig
@@ -110,7 +111,6 @@ def install_requested_models(
if len(external_models)>0:
print("== INSTALLING EXTERNAL MODELS ==")
for path_url_or_repo in external_models:
print(f'DEBUG: path_url_or_repo = {path_url_or_repo}')
try:
model_manager.heuristic_import(
path_url_or_repo,
@@ -295,13 +295,21 @@ def _download_diffusion_weights(
mconfig: DictConfig, access_token: str, precision: str = "float32"
):
repo_id = mconfig["repo_id"]
revision = mconfig.get('revision',None)
model_class = (
StableDiffusionGeneratorPipeline
if mconfig.get("format", None) == "diffusers"
else AutoencoderKL
)
extra_arg_list = [{"revision": "fp16"}, {}] if precision == "float16" else [{}]
extra_arg_list = [{"revision": revision}] if revision \
else [{"revision": "fp16"}, {}] if precision == "float16" \
else [{}]
path = None
# quench safety checker warnings
verbosity = dlogging.get_verbosity()
dlogging.set_verbosity_error()
for extra_args in extra_arg_list:
try:
path = download_from_hf(
@@ -317,6 +325,7 @@ def _download_diffusion_weights(
print(f"An unexpected error occurred while downloading the model: {e})")
if path:
break
dlogging.set_verbosity(verbosity)
return path
@@ -388,19 +397,7 @@ def update_config_file(successfully_downloaded: dict, config_file: Path):
if config_file is default_config_file() and not config_file.parent.exists():
configs_src = Dataset_path.parent
configs_dest = default_config_file().parent
shutil.copytree(configs_src,
configs_dest,
dirs_exist_ok=True,
copy_function=shutil.copyfile,
)
# Fix up directory permissions so that they are writable
# This can happen when running under Nix environment which
# makes the runtime directory template immutable.
for root,dirs,files in os.walk(default_config_file().parent):
for d in dirs:
Path(root,d).chmod(0o775)
for f in files:
Path(root,d).chmod(0o644)
shutil.copytree(configs_src, configs_dest, dirs_exist_ok=True)
yaml = new_config_file_contents(successfully_downloaded, config_file)
@@ -459,6 +456,8 @@ def new_config_file_contents(
stanza["description"] = mod["description"]
stanza["repo_id"] = mod["repo_id"]
stanza["format"] = mod["format"]
if "revision" in mod:
stanza["revision"] = mod["revision"]
# diffusers don't need width and height (probably .ckpt doesn't either)
# so we no longer require these in INITIAL_MODELS.yaml
if "width" in mod:
@@ -483,10 +482,9 @@ def new_config_file_contents(
conf[model] = stanza
# if no default model was chosen, then we select the first
# one in the list
# if no default model was chosen, then we select the first one in the list
if not default_selected:
conf[list(successfully_downloaded.keys())[0]]["default"] = True
conf[list(conf.keys())[0]]["default"] = True
return OmegaConf.to_yaml(conf)

View File

@@ -1,168 +0,0 @@
'''ldm.invoke.config.post_install
This defines a single exportable function, post_install(), which does
post-installation stuff like migrating models directories, adding new
config files, etc.
From the command line, its entry point is invokeai-postinstall.
'''
import os
import sys
from packaging import version
from pathlib import Path
from shutil import move,rmtree,copyfile
from typing import Union
import invokeai.configs as conf
import ldm.invoke
from ..globals import Globals, global_cache_dir, global_config_dir
def post_install():
'''
Do version and model updates, etc.
Should be called once after every version update.
'''
_migrate_models()
_run_patches()
def _migrate_models():
"""
Migrate the ~/invokeai/models directory from the legacy format used through 2.2.5
to the 2.3.0 "diffusers" version. This should be a one-time operation, called at
script startup time.
"""
# Three transformer models to check: bert, clip and safety checker, and
# the diffusers as well
models_dir = Path(Globals.root, "models")
legacy_locations = [
Path(
models_dir,
"CompVis/stable-diffusion-safety-checker/models--CompVis--stable-diffusion-safety-checker",
),
Path("bert-base-uncased/models--bert-base-uncased"),
Path(
"openai/clip-vit-large-patch14/models--openai--clip-vit-large-patch14"
),
]
legacy_locations.extend(list(global_cache_dir("diffusers").glob("*")))
legacy_layout = False
for model in legacy_locations:
legacy_layout = legacy_layout or model.exists()
if not legacy_layout:
return
print(
"""
>> ALERT:
>> The location of your previously-installed diffusers models needs to move from
>> invokeai/models/diffusers to invokeai/models/hub due to a change introduced by
>> diffusers version 0.14. InvokeAI will now move all models from the "diffusers" directory
>> into "hub" and then remove the diffusers directory. This is a quick, safe, one-time
>> operation. However if you have customized either of these directories and need to
>> make adjustments, please press ctrl-C now to abort and relaunch InvokeAI when you are ready.
>> Otherwise press <enter> to continue."""
)
print("** This is a quick one-time operation.")
input("continue> ")
# transformer files get moved into the hub directory
if _is_huggingface_hub_directory_present():
hub = global_cache_dir("hub")
else:
hub = models_dir / "hub"
os.makedirs(hub, exist_ok=True)
for model in legacy_locations:
source = models_dir / model
dest = hub / model.stem
if dest.exists() and not source.exists():
continue
print(f"** {source} => {dest}")
if source.exists():
if dest.is_symlink():
print(f"** Found symlink at {dest.name}. Not migrating.")
elif dest.exists():
if source.is_dir():
rmtree(source)
else:
source.unlink()
else:
move(source, dest)
# now clean up by removing any empty directories
empty = [
root
for root, dirs, files, in os.walk(models_dir)
if not len(dirs) and not len(files)
]
for d in empty:
os.rmdir(d)
print("** Migration is done. Continuing...")
def _is_huggingface_hub_directory_present() -> bool:
return (
os.getenv("HF_HOME") is not None or os.getenv("XDG_CACHE_HOME") is not None
)
# This routine performs any patch-ups needed after installation
def _run_patches():
_install_missing_config_files()
version_file = Path(Globals.root,'.version')
if version_file.exists():
with open(version_file,'r') as f:
root_version = version.parse(f.readline() or 'v2.3.2')
else:
root_version = version.parse('v2.3.2')
app_version = version.parse(ldm.invoke.__version__)
if root_version < app_version:
try:
_do_version_update(root_version, ldm.invoke.__version__)
with open(version_file,'w') as f:
f.write(ldm.invoke.__version__)
except:
print("** Version patching failed. Please try invokeai-postinstall later.")
def _install_missing_config_files():
"""
install ckpt configuration files that may have been added to the
distro after original root directory configuration
"""
root_configs = Path(global_config_dir(), 'stable-diffusion')
repo_configs = None
for f in conf.__path__:
if Path(f, 'stable-diffusion', 'v1-inference.yaml').exists():
repo_configs = Path(f, 'stable-diffusion')
break
if not repo_configs:
return
for src in repo_configs.iterdir():
dest = root_configs / src.name
if not dest.exists():
copyfile(src,dest)
def _do_version_update(root_version: version.Version, app_version: Union[str, version.Version]):
"""
Make any updates to the launcher .sh and .bat scripts that may be needed
from release to release. This is not an elegant solution. Instead, the
launcher should be moved into the source tree and installed using pip.
"""
if root_version < version.Version('v2.3.4'):
dest = Path(Globals.root,'loras')
dest.mkdir(exist_ok=True)
if root_version < version.Version('v2.3.3'):
if sys.platform == "linux":
print('>> Downloading new version of launcher script and its config file')
from ldm.util import download_with_progress_bar
url_base = f'https://raw.githubusercontent.com/invoke-ai/InvokeAI/v{str(app_version)}/installer/templates/'
dest = Path(Globals.root,'invoke.sh.in')
assert download_with_progress_bar(url_base+'invoke.sh.in',dest)
dest.replace(Path(Globals.root,'invoke.sh'))
os.chmod(Path(Globals.root,'invoke.sh'), 0o0755)
dest = Path(Globals.root,'dialogrc')
assert download_with_progress_bar(url_base+'dialogrc',dest)
dest.replace(Path(Globals.root,'.dialogrc'))

View File

@@ -400,8 +400,15 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
@property
def _submodels(self) -> Sequence[torch.nn.Module]:
module_names, _, _ = self.extract_init_dict(dict(self.config))
values = [getattr(self, name) for name in module_names.keys()]
return [m for m in values if isinstance(m, torch.nn.Module)]
submodels = []
for name in module_names.keys():
if hasattr(self, name):
value = getattr(self, name)
else:
value = getattr(self.config, name)
if isinstance(value, torch.nn.Module):
submodels.append(value)
return submodels
def image_from_embeddings(self, latents: torch.Tensor, num_inference_steps: int,
conditioning_data: ConditioningData,
@@ -472,7 +479,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
step_count=len(self.scheduler.timesteps)
):
yield PipelineIntermediateState(run_id=run_id, step=-1, timestep=self.scheduler.num_train_timesteps,
yield PipelineIntermediateState(run_id=run_id, step=-1, timestep=self.scheduler.config.num_train_timesteps,
latents=latents)
batch_size = latents.shape[0]
@@ -756,7 +763,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
@property
def channels(self) -> int:
"""Compatible with DiffusionWrapper"""
return self.unet.in_channels
return self.unet.config.in_channels
def decode_latents(self, latents):
# Explicit call to get the vae loaded, since `decode` isn't the forward method.

View File

@@ -9,7 +9,6 @@ from __future__ import annotations
import contextlib
import gc
import hashlib
import io
import os
import re
import sys
@@ -31,11 +30,10 @@ from huggingface_hub import scan_cache_dir
from omegaconf import OmegaConf
from omegaconf.dictconfig import DictConfig
from picklescan.scanner import scan_file_path
from ldm.invoke.devices import CPU_DEVICE
from ldm.invoke.generator.diffusers_pipeline import StableDiffusionGeneratorPipeline
from ldm.invoke.globals import Globals, global_cache_dir
from ldm.util import ask_user, download_with_resume, instantiate_from_config, url_attachment_name
from ldm.util import ask_user, download_with_resume, url_attachment_name
class SDLegacyType(Enum):
@@ -370,14 +368,9 @@ class ModelManager(object):
print(
f">> Converting legacy checkpoint {model_name} into a diffusers model..."
)
from ldm.invoke.ckpt_to_diffuser import load_pipeline_from_original_stable_diffusion_ckpt
# try:
# if self.list_models()[self.current_model]['status'] == 'active':
# self.offload_model(self.current_model)
# except Exception:
# pass
from .ckpt_to_diffuser import (
load_pipeline_from_original_stable_diffusion_ckpt,
)
if self._has_cuda():
torch.cuda.empty_cache()
pipeline = load_pipeline_from_original_stable_diffusion_ckpt(
@@ -423,9 +416,9 @@ class ModelManager(object):
pipeline_args.update(cache_dir=global_cache_dir("hub"))
if using_fp16:
pipeline_args.update(torch_dtype=torch.float16)
fp_args_list = [{"revision": "fp16"}, {}]
else:
fp_args_list = [{}]
revision = mconfig.get('revision') or ('fp16' if using_fp16 else None)
fp_args_list = [{"revision": revision}] if revision else []
fp_args_list.append({})
verbosity = dlogging.get_verbosity()
dlogging.set_verbosity_error()
@@ -439,7 +432,7 @@ class ModelManager(object):
**fp_args,
)
except OSError as e:
if str(e).startswith("fp16 is not a valid"):
if 'Revision Not Found' in str(e):
pass
else:
print(
@@ -1007,6 +1000,81 @@ class ModelManager(object):
"""
)
@classmethod
def migrate_models(cls):
"""
Migrate the ~/invokeai/models directory from the legacy format used through 2.2.5
to the 2.3.0 "diffusers" version. This should be a one-time operation, called at
script startup time.
"""
# Three transformer models to check: bert, clip and safety checker, and
# the diffusers as well
models_dir = Path(Globals.root, "models")
legacy_locations = [
Path(
models_dir,
"CompVis/stable-diffusion-safety-checker/models--CompVis--stable-diffusion-safety-checker",
),
Path("bert-base-uncased/models--bert-base-uncased"),
Path(
"openai/clip-vit-large-patch14/models--openai--clip-vit-large-patch14"
),
]
legacy_locations.extend(list(global_cache_dir("diffusers").glob("*")))
legacy_layout = False
for model in legacy_locations:
legacy_layout = legacy_layout or model.exists()
if not legacy_layout:
return
print(
"""
>> ALERT:
>> The location of your previously-installed diffusers models needs to move from
>> invokeai/models/diffusers to invokeai/models/hub due to a change introduced by
>> diffusers version 0.14. InvokeAI will now move all models from the "diffusers" directory
>> into "hub" and then remove the diffusers directory. This is a quick, safe, one-time
>> operation. However if you have customized either of these directories and need to
>> make adjustments, please press ctrl-C now to abort and relaunch InvokeAI when you are ready.
>> Otherwise press <enter> to continue."""
)
print("** This is a quick one-time operation.")
input("continue> ")
# transformer files get moved into the hub directory
if cls._is_huggingface_hub_directory_present():
hub = global_cache_dir("hub")
else:
hub = models_dir / "hub"
os.makedirs(hub, exist_ok=True)
for model in legacy_locations:
source = models_dir / model
dest = hub / model.stem
if dest.exists() and not source.exists():
continue
print(f"** {source} => {dest}")
if source.exists():
if dest.is_symlink():
print(f"** Found symlink at {dest.name}. Not migrating.")
elif dest.exists():
if source.is_dir():
rmtree(source)
else:
source.unlink()
else:
move(source, dest)
# now clean up by removing any empty directories
empty = [
root
for root, dirs, files, in os.walk(models_dir)
if not len(dirs) and not len(files)
]
for d in empty:
os.rmdir(d)
print("** Migration is done. Continuing...")
def _resolve_path(
self, source: Union[str, Path], dest_directory: str
) -> Optional[Path]:
@@ -1087,7 +1155,7 @@ class ModelManager(object):
return self.device.type == "cuda"
def _diffuser_sha256(
self, name_or_path: Union[str, Path], chunksize=4096
self, name_or_path: Union[str, Path], chunksize=16777216
) -> Union[str, bytes]:
path = None
if isinstance(name_or_path, Path):
@@ -1161,6 +1229,17 @@ class ModelManager(object):
return vae_path
def _load_vae(self, vae_config) -> AutoencoderKL:
using_fp16 = self.precision == "float16"
dtype = torch.float16 if using_fp16 else torch.float32
# Handle the common case of a user shoving a VAE .ckpt into
# the vae field for a diffusers. We convert it into diffusers
# format and use it.
if isinstance(vae_config,(str,Path)):
return self.convert_vae(vae_config).to(dtype=dtype)
elif isinstance(vae_config,DictConfig) and (vae_path := vae_config.get('path')):
return self.convert_vae(vae_path).to(dtype=dtype)
vae_args = {}
try:
name_or_path = self.model_name_or_path(vae_config)
@@ -1168,7 +1247,6 @@ class ModelManager(object):
return None
if name_or_path is None:
return None
using_fp16 = self.precision == "float16"
vae_args.update(
cache_dir=global_cache_dir("hub"),
@@ -1208,6 +1286,32 @@ class ModelManager(object):
return vae
@staticmethod
def convert_vae(vae_path: Union[Path,str])->AutoencoderKL:
print(" | A checkpoint VAE was detected. Converting to diffusers format.")
vae_path = Path(Globals.root,vae_path).resolve()
from .ckpt_to_diffuser import (
create_vae_diffusers_config,
convert_ldm_vae_state_dict,
)
vae_path = Path(vae_path)
if vae_path.suffix in ['.pt','.ckpt']:
vae_state_dict = torch.load(vae_path, map_location="cpu")
else:
vae_state_dict = safetensors.torch.load_file(vae_path)
if 'state_dict' in vae_state_dict:
vae_state_dict = vae_state_dict['state_dict']
# TODO: see if this works with 1.x inpaint models and 2.x models
config_file_path = Path(Globals.root,"configs/stable-diffusion/v1-inference.yaml")
original_conf = OmegaConf.load(config_file_path)
vae_config = create_vae_diffusers_config(original_conf, image_size=512) # TODO: fix
diffusers_vae = convert_ldm_vae_state_dict(vae_state_dict,vae_config)
vae = AutoencoderKL(**vae_config)
vae.load_state_dict(diffusers_vae)
return vae
@staticmethod
def _delete_model_from_cache(repo_id):
cache_info = scan_cache_dir(global_cache_dir("diffusers"))
@@ -1231,3 +1335,8 @@ class ModelManager(object):
return path
return Path(Globals.root, path).resolve()
@staticmethod
def _is_huggingface_hub_directory_present() -> bool:
return (
os.getenv("HF_HOME") is not None or os.getenv("XDG_CACHE_HOME") is not None
)

View File

@@ -13,7 +13,7 @@ import re
import atexit
from typing import List
from ldm.invoke.args import Args
from ldm.invoke.concepts_lib import HuggingFaceConceptsLibrary
from ldm.invoke.concepts_lib import get_hf_concepts_lib
from ldm.invoke.globals import Globals
from ldm.modules.lora_manager import LoraManager
@@ -287,7 +287,7 @@ class Completer(object):
def _concept_completions(self, text, state):
if self.concepts is None:
# cache Concepts() instance so we can check for updates in concepts_list during runtime.
self.concepts = HuggingFaceConceptsLibrary()
self.concepts = get_hf_concepts_lib()
self.embedding_terms.update(set(self.concepts.list_concepts()))
else:
self.embedding_terms.update(set(self.concepts.list_concepts()))

View File

@@ -14,7 +14,6 @@ from torch import nn
from compel.cross_attention_control import Arguments
from diffusers.models.unet_2d_condition import UNet2DConditionModel
from diffusers.models.cross_attention import AttnProcessor
from ldm.invoke.devices import torch_dtype
@@ -163,7 +162,7 @@ class Context:
class InvokeAICrossAttentionMixin:
"""
Enable InvokeAI-flavoured CrossAttention calculation, which does aggressive low-memory slicing and calls
Enable InvokeAI-flavoured Attention calculation, which does aggressive low-memory slicing and calls
through both to an attention_slice_wrangler and a slicing_strategy_getter for custom attention map wrangling
and dymamic slicing strategy selection.
"""
@@ -178,7 +177,7 @@ class InvokeAICrossAttentionMixin:
Set custom attention calculator to be called when attention is calculated
:param wrangler: Callback, with args (module, suggested_attention_slice, dim, offset, slice_size),
which returns either the suggested_attention_slice or an adjusted equivalent.
`module` is the current CrossAttention module for which the callback is being invoked.
`module` is the current Attention module for which the callback is being invoked.
`suggested_attention_slice` is the default-calculated attention slice
`dim` is -1 if the attenion map has not been sliced, or 0 or 1 for dimension-0 or dimension-1 slicing.
If `dim` is >= 0, `offset` and `slice_size` specify the slice start and length.
@@ -326,7 +325,7 @@ def setup_cross_attention_control_attention_processors(unet: UNet2DConditionMode
def get_cross_attention_modules(model, which: CrossAttentionType) -> list[tuple[str, InvokeAICrossAttentionMixin]]:
from ldm.modules.attention import CrossAttention # avoid circular import
from ldm.modules.attention import CrossAttention # avoid circular import # TODO: rename as in diffusers?
cross_attention_class: type = InvokeAIDiffusersCrossAttention if isinstance(model,UNet2DConditionModel) else CrossAttention
which_attn = "attn1" if which is CrossAttentionType.SELF else "attn2"
attention_module_tuples = [(name,module) for name, module in model.named_modules() if
@@ -432,7 +431,7 @@ def get_mem_free_total(device):
class InvokeAIDiffusersCrossAttention(diffusers.models.attention.CrossAttention, InvokeAICrossAttentionMixin):
class InvokeAIDiffusersCrossAttention(diffusers.models.attention.Attention, InvokeAICrossAttentionMixin):
def __init__(self, **kwargs):
super().__init__(**kwargs)
@@ -457,8 +456,8 @@ class InvokeAIDiffusersCrossAttention(diffusers.models.attention.CrossAttention,
"""
# base implementation
class CrossAttnProcessor:
def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden_states=None, attention_mask=None):
class AttnProcessor:
def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None):
batch_size, sequence_length, _ = hidden_states.shape
attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length)
@@ -487,7 +486,7 @@ from dataclasses import field, dataclass
import torch
from diffusers.models.cross_attention import CrossAttention, CrossAttnProcessor, SlicedAttnProcessor
from diffusers.models.attention_processor import Attention, AttnProcessor, SlicedAttnProcessor
@dataclass
@@ -532,7 +531,7 @@ class SlicedSwapCrossAttnProcesser(SlicedAttnProcessor):
# TODO: dynamically pick slice size based on memory conditions
def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden_states=None, attention_mask=None,
def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None,
# kwargs
swap_cross_attn_context: SwapCrossAttnContext=None):

View File

@@ -5,7 +5,6 @@ from typing import Callable, Optional, Union, Any
import numpy as np
import torch
from diffusers import UNet2DConditionModel
from typing_extensions import TypeAlias

View File

@@ -6,7 +6,7 @@ from torch import nn
import sys
from ldm.invoke.concepts_lib import HuggingFaceConceptsLibrary
from ldm.invoke.concepts_lib import get_hf_concepts_lib
from ldm.data.personalized import per_img_token_list
from transformers import CLIPTokenizer
from functools import partial
@@ -39,7 +39,7 @@ class EmbeddingManager(nn.Module):
super().__init__()
self.embedder = embedder
self.concepts_library=HuggingFaceConceptsLibrary()
self.concepts_library=get_hf_concepts_lib()
self.string_to_token_dict = {}
self.string_to_param_dict = nn.ParameterDict()

View File

@@ -1,15 +1,16 @@
import re
import json
from pathlib import Path
from typing import Optional
from typing import Optional, Dict, Tuple
import torch
from compel import Compel
from diffusers.models import UNet2DConditionModel
from filelock import FileLock, Timeout
from safetensors.torch import load_file
from torch.utils.hooks import RemovableHandle
from transformers import CLIPTextModel
from ldm.invoke.devices import choose_torch_device
from ..invoke.globals import global_lora_models_dir, Globals
from ..invoke.devices import choose_torch_device
"""
This module supports loading LoRA weights trained with https://github.com/kohya-ss/sd-scripts
@@ -17,6 +18,11 @@ To be removed once support for diffusers LoRA weights is well supported
"""
class IncompatibleModelException(Exception):
"Raised when there is an attempt to load a LoRA into a model that is incompatible with it"
pass
class LoRALayer:
lora_name: str
name: str
@@ -39,6 +45,7 @@ class LoRALayer:
return weight * lora.multiplier * self.scale
class LoHALayer:
lora_name: str
name: str
@@ -60,7 +67,6 @@ class LoHALayer:
self.scale = alpha / rank if (alpha and rank) else 1.0
def forward(self, lora, input_h):
if type(self.org_module) == torch.nn.Conv2d:
op = torch.nn.functional.conv2d
extra_args = dict(
@@ -75,11 +81,15 @@ class LoHALayer:
extra_args = {}
if self.t1 is None:
weight = ((self.w1_a @ self.w1_b) * (self.w2_a @ self.w2_b))
weight = (self.w1_a @ self.w1_b) * (self.w2_a @ self.w2_b)
else:
rebuild1 = torch.einsum('i j k l, j r, i p -> p r k l', self.t1, self.w1_b, self.w1_a)
rebuild2 = torch.einsum('i j k l, j r, i p -> p r k l', self.t2, self.w2_b, self.w2_a)
rebuild1 = torch.einsum(
"i j k l, j r, i p -> p r k l", self.t1, self.w1_b, self.w1_a
)
rebuild2 = torch.einsum(
"i j k l, j r, i p -> p r k l", self.t2, self.w2_b, self.w2_a
)
weight = rebuild1 * rebuild2
bias = self.bias if self.bias is not None else 0
@@ -90,7 +100,6 @@ class LoHALayer:
**extra_args,
) * lora.multiplier * self.scale
class LoKRLayer:
lora_name: str
name: str
@@ -157,24 +166,34 @@ class LoKRLayer:
class LoRAModuleWrapper:
unet: UNet2DConditionModel
text_encoder: CLIPTextModel
hooks: list[RemovableHandle]
hooks: Dict[str, Tuple[torch.nn.Module, RemovableHandle]]
def __init__(self, unet, text_encoder):
self.unet = unet
self.text_encoder = text_encoder
self.hooks = []
self.hooks = dict()
self.text_modules = None
self.unet_modules = None
self.applied_loras = {}
self.loaded_loras = {}
self.UNET_TARGET_REPLACE_MODULE = ["Transformer2DModel", "Attention", "ResnetBlock2D", "Downsample2D", "Upsample2D", "SpatialTransformer"]
self.TEXT_ENCODER_TARGET_REPLACE_MODULE = ["ResidualAttentionBlock", "CLIPAttention", "CLIPMLP"]
self.UNET_TARGET_REPLACE_MODULE = [
"Transformer2DModel",
"Attention",
"ResnetBlock2D",
"Downsample2D",
"Upsample2D",
"SpatialTransformer",
]
self.TEXT_ENCODER_TARGET_REPLACE_MODULE = [
"ResidualAttentionBlock",
"CLIPAttention",
"CLIPMLP",
]
self.LORA_PREFIX_UNET = "lora_unet"
self.LORA_PREFIX_TEXT_ENCODER = "lora_te"
def find_modules(
prefix, root_module: torch.nn.Module, target_replace_modules
) -> dict[str, torch.nn.Module]:
@@ -205,12 +224,11 @@ class LoRAModuleWrapper:
self.LORA_PREFIX_UNET, unet, self.UNET_TARGET_REPLACE_MODULE
)
def lora_forward_hook(self, name):
wrapper = self
def lora_forward(module, input_h, output):
if len(wrapper.loaded_loras) == 0:
if len(wrapper.applied_loras) == 0:
return output
for lora in wrapper.applied_loras.values():
@@ -223,11 +241,18 @@ class LoRAModuleWrapper:
return lora_forward
def apply_module_forward(self, module, name):
handle = module.register_forward_hook(self.lora_forward_hook(name))
self.hooks.append(handle)
if name in self.hooks:
registered_module, _ = self.hooks[name]
if registered_module != module:
raise Exception(f"Trying to register multiple modules to lora key: {name}")
# else it's just double hook creation - nothing to do
else:
handle = module.register_forward_hook(self.lora_forward_hook(name))
self.hooks[name] = (module, handle)
def clear_hooks(self):
for hook in self.hooks:
for _, hook in self.hooks.values():
hook.remove()
self.hooks.clear()
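The hunks above are the core of the double-hook fix: `hooks` changes from a list to a dict keyed by the LoRA module name, so registering the same module a second time becomes a no-op instead of stacking a duplicate forward hook that would apply the LoRA delta more than once. A self-contained sketch of the same idempotent-registration pattern, with illustrative names that are not part of this codebase:

```
import torch
from torch.utils.hooks import RemovableHandle

class HookRegistry:
    """Keep at most one forward hook per logical key (sketch only)."""

    def __init__(self):
        self.hooks: dict[str, tuple[torch.nn.Module, RemovableHandle]] = {}

    def register(self, name: str, module: torch.nn.Module, hook_fn) -> None:
        if name in self.hooks:
            registered_module, _ = self.hooks[name]
            if registered_module is not module:
                raise RuntimeError(f"Two different modules claim hook key {name!r}")
            return  # already hooked; a second hook would double-apply the LoRA
        self.hooks[name] = (module, module.register_forward_hook(hook_fn))

    def clear(self) -> None:
        for _, handle in self.hooks.values():
            handle.remove()
        self.hooks.clear()

# Registering the same (name, module) pair twice leaves exactly one hook behind.
layer = torch.nn.Linear(4, 4)
registry = HookRegistry()
registry.register("lora_unet_example", layer, lambda mod, inp, out: out)
registry.register("lora_unet_example", layer, lambda mod, inp, out: out)  # no-op
assert len(layer._forward_hooks) == 1
```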
@@ -238,6 +263,7 @@ class LoRAModuleWrapper:
def clear_loaded_loras(self):
self.loaded_loras.clear()
class LoRA:
name: str
layers: dict[str, LoRALayer]
@@ -263,7 +289,6 @@ class LoRA:
state_dict_groupped[stem] = dict()
state_dict_groupped[stem][leaf] = value
for stem, values in state_dict_groupped.items():
if stem.startswith(self.wrapper.LORA_PREFIX_TEXT_ENCODER):
wrapped = self.wrapper.text_modules.get(stem, None)
@@ -284,34 +309,59 @@ class LoRA:
if "alpha" in values:
alpha = values["alpha"].item()
if "bias_indices" in values and "bias_values" in values and "bias_size" in values:
if (
"bias_indices" in values
and "bias_values" in values
and "bias_size" in values
):
bias = torch.sparse_coo_tensor(
values["bias_indices"],
values["bias_values"],
tuple(values["bias_size"]),
).to(device=self.device, dtype=self.dtype)
# lora and locon
if "lora_down.weight" in values:
value_down = values["lora_down.weight"]
value_mid = values.get("lora_mid.weight", None)
value_up = values["lora_up.weight"]
value_mid = values.get("lora_mid.weight", None)
value_up = values["lora_up.weight"]
if type(wrapped) == torch.nn.Conv2d:
if value_mid is not None:
layer_down = torch.nn.Conv2d(value_down.shape[1], value_down.shape[0], (1, 1), bias=False)
layer_mid = torch.nn.Conv2d(value_mid.shape[1], value_mid.shape[0], wrapped.kernel_size, wrapped.stride, wrapped.padding, bias=False)
layer_down = torch.nn.Conv2d(
value_down.shape[1], value_down.shape[0], (1, 1), bias=False
)
layer_mid = torch.nn.Conv2d(
value_mid.shape[1],
value_mid.shape[0],
wrapped.kernel_size,
wrapped.stride,
wrapped.padding,
bias=False,
)
else:
layer_down = torch.nn.Conv2d(value_down.shape[1], value_down.shape[0], wrapped.kernel_size, wrapped.stride, wrapped.padding, bias=False)
layer_mid = None
layer_down = torch.nn.Conv2d(
value_down.shape[1],
value_down.shape[0],
wrapped.kernel_size,
wrapped.stride,
wrapped.padding,
bias=False,
)
layer_mid = None
layer_up = torch.nn.Conv2d(value_up.shape[1], value_up.shape[0], (1, 1), bias=False)
layer_up = torch.nn.Conv2d(
value_up.shape[1], value_up.shape[0], (1, 1), bias=False
)
elif type(wrapped) == torch.nn.Linear:
layer_down = torch.nn.Linear(value_down.shape[1], value_down.shape[0], bias=False)
layer_mid = None
layer_up = torch.nn.Linear(value_up.shape[1], value_up.shape[0], bias=False)
layer_down = torch.nn.Linear(
value_down.shape[1], value_down.shape[0], bias=False
)
layer_mid = None
layer_up = torch.nn.Linear(
value_up.shape[1], value_up.shape[0], bias=False
)
else:
print(
@@ -319,49 +369,57 @@ class LoRA:
)
return
with torch.no_grad():
layer_down.weight.copy_(value_down)
if layer_mid is not None:
layer_mid.weight.copy_(value_mid)
layer_up.weight.copy_(value_up)
layer_down.to(device=self.device, dtype=self.dtype)
if layer_mid is not None:
layer_mid.to(device=self.device, dtype=self.dtype)
layer_up.to(device=self.device, dtype=self.dtype)
rank = value_down.shape[0]
layer = LoRALayer(self.name, stem, rank, alpha)
#layer.bias = bias # TODO: find and debug lora/locon with bias
# layer.bias = bias # TODO: find and debug lora/locon with bias
layer.down = layer_down
layer.mid = layer_mid
layer.up = layer_up
# loha
elif "hada_w1_b" in values:
rank = values["hada_w1_b"].shape[0]
layer = LoHALayer(self.name, stem, rank, alpha)
layer.org_module = wrapped
layer.bias = bias
layer.w1_a = values["hada_w1_a"].to(device=self.device, dtype=self.dtype)
layer.w1_b = values["hada_w1_b"].to(device=self.device, dtype=self.dtype)
layer.w2_a = values["hada_w2_a"].to(device=self.device, dtype=self.dtype)
layer.w2_b = values["hada_w2_b"].to(device=self.device, dtype=self.dtype)
layer.w1_a = values["hada_w1_a"].to(
device=self.device, dtype=self.dtype
)
layer.w1_b = values["hada_w1_b"].to(
device=self.device, dtype=self.dtype
)
layer.w2_a = values["hada_w2_a"].to(
device=self.device, dtype=self.dtype
)
layer.w2_b = values["hada_w2_b"].to(
device=self.device, dtype=self.dtype
)
if "hada_t1" in values:
layer.t1 = values["hada_t1"].to(device=self.device, dtype=self.dtype)
layer.t1 = values["hada_t1"].to(
device=self.device, dtype=self.dtype
)
else:
layer.t1 = None
if "hada_t2" in values:
layer.t2 = values["hada_t2"].to(device=self.device, dtype=self.dtype)
layer.t2 = values["hada_t2"].to(
device=self.device, dtype=self.dtype
)
else:
layer.t2 = None
@@ -405,14 +463,25 @@ class LoRA:
class KohyaLoraManager:
def __init__(self, pipe, lora_path):
def __init__(self, pipe):
self.vector_length_cache_path = self.lora_path / '.vectorlength.cache'
self.unet = pipe.unet
self.lora_path = lora_path
self.wrapper = LoRAModuleWrapper(pipe.unet, pipe.text_encoder)
self.text_encoder = pipe.text_encoder
self.device = torch.device(choose_torch_device())
self.dtype = pipe.unet.dtype
@classmethod
@property
def lora_path(cls)->Path:
return Path(global_lora_models_dir())
@classmethod
@property
def vector_length_cache_path(cls)->Path:
return cls.lora_path / '.vectorlength.cache'
def load_lora_module(self, name, path_file, multiplier: float = 1.0):
print(f" | Found lora {name} at {path_file}")
if path_file.suffix == ".safetensors":
@@ -420,6 +489,9 @@ class KohyaLoraManager:
else:
checkpoint = torch.load(path_file, map_location="cpu")
if not self.check_model_compatibility(checkpoint):
raise IncompatibleModelException
lora = LoRA(name, self.device, self.dtype, self.wrapper, multiplier)
lora.load_from_dict(checkpoint)
self.wrapper.loaded_loras[name] = lora
@@ -445,13 +517,89 @@ class KohyaLoraManager:
lora.multiplier = mult
self.wrapper.applied_loras[name] = lora
def unload_applied_lora(self, lora_name: str):
def unload_applied_lora(self, lora_name: str) -> bool:
"""If the indicated LoRA has previously been applied then
unload it and return True. Return False if the LoRA was
not previously applied (for status reporting)
"""
if lora_name in self.wrapper.applied_loras:
del self.wrapper.applied_loras[lora_name]
return True
return False
def unload_lora(self, lora_name: str):
def unload_lora(self, lora_name: str) -> bool:
if lora_name in self.wrapper.loaded_loras:
del self.wrapper.loaded_loras[lora_name]
return True
return False
def clear_loras(self):
self.wrapper.clear_applied_loras()
def check_model_compatibility(self, checkpoint) -> bool:
"""Checks whether the LoRA checkpoint is compatible with the token vector
length of the model that this manager is associated with.
"""
model_token_vector_length = (
self.text_encoder.get_input_embeddings().weight.data[0].shape[0]
)
lora_token_vector_length = self.vector_length_from_checkpoint(checkpoint)
return model_token_vector_length == lora_token_vector_length
@staticmethod
def vector_length_from_checkpoint(checkpoint: dict) -> int:
"""Return the vector token length for the passed LoRA checkpoint object.
This is used to determine which SD model version the LoRA was based on.
768 -> SDv1
1024 -> SDv2
"""
key1 = "lora_te_text_model_encoder_layers_0_mlp_fc1.lora_down.weight"
key2 = "lora_te_text_model_encoder_layers_0_self_attn_k_proj.hada_w1_a"
lora_token_vector_length = (
checkpoint[key1].shape[1]
if key1 in checkpoint
else checkpoint[key2].shape[0]
if key2 in checkpoint
else 768
)
return lora_token_vector_length
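The 768/1024 split in the docstring above corresponds to the hidden size of the text encoder: SD 1.x models use the CLIP ViT-L/14 encoder (768-wide token embeddings), while SD 2.x models use an OpenCLIP ViT-H encoder (1024-wide). The model-side number that `check_model_compatibility()` compares against can be read the same way from any loaded text encoder; a small illustrative snippet (the model id is just an example):

```
from transformers import CLIPTextModel

# CLIP ViT-L/14 is the text encoder used by SD 1.x checkpoints.
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
width = text_encoder.get_input_embeddings().weight.data[0].shape[0]
print(width)  # 768 -> a LoRA reporting a vector length of 768 targets SD 1.x
```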
@classmethod
def vector_length_from_checkpoint_file(self, checkpoint_path: Path) -> int:
with LoraVectorLengthCache(self.vector_length_cache_path) as cache:
if str(checkpoint_path) not in cache:
if checkpoint_path.suffix == ".safetensors":
checkpoint = load_file(
checkpoint_path.absolute().as_posix(), device="cpu"
)
else:
checkpoint = torch.load(checkpoint_path, map_location="cpu")
cache[str(checkpoint_path)] = KohyaLoraManager.vector_length_from_checkpoint(
checkpoint
)
return cache[str(checkpoint_path)]
class LoraVectorLengthCache(object):
def __init__(self, cache_path: Path):
self.cache_path = cache_path
self.lock = FileLock(Path(cache_path.parent, ".cachelock"))
self.cache = {}
def __enter__(self):
try:
self.lock.acquire(timeout=10)
if self.cache_path.exists():
with open(self.cache_path, "r") as json_file:
self.cache = json.load(json_file)
except Timeout:
print(
"** Can't acquire lock on lora vector length cache. Operations will be slower"
)
except (json.JSONDecodeError, OSError):
self.cache_path.unlink()
return self.cache
def __exit__(self, type, value, traceback):
with open(self.cache_path, "w") as json_file:
json.dump(self.cache, json_file)
self.lock.release()
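Taken together, `vector_length_from_checkpoint_file()` and `LoraVectorLengthCache` mean each LoRA file only has to be opened once: its vector length is stored in a JSON cache in the LoRA directory, guarded by a file lock (10-second timeout) so concurrent invocations don't corrupt it. A rough usage sketch; the import path and file name are assumptions, not taken from this diff:

```
from pathlib import Path
from ldm.modules.kohya_lora_manager import KohyaLoraManager  # module path assumed

lora_file = Path("models/lora/my_lora.safetensors")  # illustrative path

# Opens the JSON cache under the file lock, reads the checkpoint only on a
# cache miss, and writes the updated cache back when the context manager exits.
length = KohyaLoraManager.vector_length_from_checkpoint_file(lora_file)
base = {768: "SD 1.x", 1024: "SD 2.x"}.get(length, "unknown")
print(f"{lora_file.name}: vector length {length} ({base})")
```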

View File

@@ -1,9 +1,10 @@
import os
from diffusers import StableDiffusionPipeline
from pathlib import Path
from diffusers import UNet2DConditionModel, StableDiffusionPipeline
from ldm.invoke.globals import global_lora_models_dir
from .kohya_lora_manager import KohyaLoraManager
from .kohya_lora_manager import KohyaLoraManager, IncompatibleModelException
from typing import Optional, Dict
class LoraCondition:
@@ -38,19 +39,22 @@ class LoraCondition:
else:
print(" ** Invalid Model to load LoRA")
elif self.kohya_manager:
self.kohya_manager.apply_lora_model(self.name,self.weight)
try:
self.kohya_manager.apply_lora_model(self.name,self.weight)
except IncompatibleModelException:
print(f" ** LoRA {self.name} is incompatible with this model; will generate without the LoRA applied.")
else:
print(" ** Unable to load LoRA")
def unload(self):
if self.kohya_manager:
if self.kohya_manager and self.kohya_manager.unload_applied_lora(self.name):
print(f'>> unloading LoRA {self.name}')
self.kohya_manager.unload_applied_lora(self.name)
class LoraManager:
def __init__(self, pipe: StableDiffusionPipeline):
# Kohya class handles LoRAs not generated through diffusers
self.kohya = KohyaLoraManager(pipe, global_lora_models_dir())
self.kohya = KohyaLoraManager(pipe)
self.unet = pipe.unet
def set_loras_conditions(self, lora_weights: list):
@@ -63,16 +67,35 @@ class LoraManager:
return conditions
return None
def list_compatible_loras(self)->Dict[str, Path]:
'''
List all the LoRAs in the global lora directory that
are compatible with the current model. Return a dictionary
of the lora basename and its path.
'''
model_length = self.kohya.text_encoder.get_input_embeddings().weight.data[0].shape[0]
return self.list_loras(model_length)
@classmethod
def list_loras(self)->Dict[str, Path]:
@staticmethod
def list_loras(token_vector_length:int=None)->Dict[str, Path]:
'''List the LoRAS in the global lora directory.
If token_vector_length is provided, then only return
LoRAS that have the indicated length:
768: v1 models
1024: v2 models
'''
path = Path(global_lora_models_dir())
models_found = dict()
for root,_,files in os.walk(path):
for x in files:
name = Path(x).stem
suffix = Path(x).suffix
if suffix in [".ckpt", ".pt", ".safetensors"]:
models_found[name]=Path(root,x)
if suffix not in [".ckpt", ".pt", ".safetensors"]:
continue
path = Path(root,x)
if token_vector_length is None:
models_found[name]=Path(root,x) # unconditional addition
elif token_vector_length == KohyaLoraManager.vector_length_from_checkpoint_file(path):
models_found[name]=Path(root,x) # conditional on the base model matching
return models_found
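End to end, the new filtering lets a UI or CLI list only the LoRAs that will actually load for the current model. A hedged usage sketch; the module path and model id are illustrative, and an initialized InvokeAI root is assumed so that `global_lora_models_dir()` resolves:

```
from diffusers import StableDiffusionPipeline
from ldm.modules.lora_manager import LoraManager  # module path assumed

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")  # example model
manager = LoraManager(pipe)

# Only LoRAs whose token vector length matches the loaded text encoder (768 here).
for name, path in manager.list_compatible_loras().items():
    print(f"{name}: {path}")

# Everything in the global lora directory, regardless of base model:
all_loras = LoraManager.list_loras()
```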

View File

@@ -3,14 +3,16 @@ from dataclasses import dataclass
from pathlib import Path
from typing import Optional, Union
import safetensors.torch
import torch
import warnings
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=UserWarning)
import safetensors.torch
import torch
from picklescan.scanner import scan_file_path
from transformers import CLIPTextModel, CLIPTokenizer
from compel.embeddings_provider import BaseTextualInversionManager
from ldm.invoke.concepts_lib import HuggingFaceConceptsLibrary
from ldm.invoke.concepts_lib import get_hf_concepts_lib
@dataclass
class TextualInversion:
@@ -34,7 +36,7 @@ class TextualInversionManager(BaseTextualInversionManager):
self.tokenizer = tokenizer
self.text_encoder = text_encoder
self.full_precision = full_precision
self.hf_concepts_library = HuggingFaceConceptsLibrary()
self.hf_concepts_library = get_hf_concepts_lib()
self.trigger_to_sourcefile = dict()
default_textual_inversions: list[TextualInversion] = []
self.textual_inversions = default_textual_inversions
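This file (like the embedding manager above) now obtains the concepts library through `get_hf_concepts_lib()` instead of constructing `HuggingFaceConceptsLibrary()` directly. The accessor's implementation is not part of this diff; assuming it simply hands back one shared instance per process, it could look roughly like the cached factory below (a guess, not the project's code):

```
from functools import lru_cache

from ldm.invoke.concepts_lib import HuggingFaceConceptsLibrary

@lru_cache(maxsize=1)
def get_hf_concepts_lib() -> HuggingFaceConceptsLibrary:
    # Hypothetical sketch: construct the library once and reuse it everywhere.
    return HuggingFaceConceptsLibrary()
```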

View File

@@ -32,9 +32,9 @@ dependencies = [
"albumentations",
"click",
"clip_anytorch",
"compel~=1.1.0",
"compel~=1.1.5",
"datasets",
"diffusers[torch]==0.14",
"diffusers[torch]~=0.16.1",
"dnspython==2.2.1",
"einops",
"eventlet",
@@ -53,7 +53,7 @@ dependencies = [
"imageio-ffmpeg",
"k-diffusion",
"kornia",
"npyscreen~=4.10.5",
"npyscreen",
"numpy<1.24",
"omegaconf",
"opencv-python",
@@ -76,7 +76,7 @@ dependencies = [
"taming-transformers-rom1504",
"test-tube>=0.7.5",
"torch-fidelity",
"torch~=1.13.1",
"torch~=2.0.0",
"torchmetrics",
"torchvision>=0.14.1",
"transformers~=4.26",
@@ -108,7 +108,7 @@ requires-python = ">=3.9, <3.11"
"test" = ["pytest-cov", "pytest>6.0.0"]
"xformers" = [
"triton; sys_platform=='linux'",
"xformers~=0.0.16; sys_platform!='darwin'",
"xformers~=0.0.19; sys_platform!='darwin'",
]
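After updating, the bumped pins above (torch ~=2.0.0, diffusers ~=0.16.1, compel ~=1.1.5, and xformers ~=0.0.19 as an optional extra) can be sanity-checked from the running environment; a small snippet using only the standard library:

```
from importlib.metadata import PackageNotFoundError, version

for pkg in ("torch", "diffusers", "compel", "xformers"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed (xformers is an optional extra)")
```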
[project.scripts]
@@ -128,7 +128,6 @@ requires-python = ">=3.9, <3.11"
"invokeai-update" = "ldm.invoke.config.invokeai_update:main"
"invokeai-batch" = "ldm.invoke.dynamic_prompts:main"
"invokeai-metadata" = "ldm.invoke.invokeai_metadata:main"
"invokeai-postinstall" = "ldm.invoke.config.post_install:post_install"
[project.urls]
"Bug Reports" = "https://github.com/invoke-ai/InvokeAI/issues"