Compare commits

...

31 Commits

Author SHA1 Message Date
Lincoln Stein
fd74f51384 Release 2.3.3 (#3058)
(note that this is actually release candidate 7, but I made the mistake
of including an old rc number in the branch and can't easily change it)

## Updating Root directory

- Introduced new mechanism for updating the root directory when
necessary. Currently only used to update the invoke.sh script using new
dialog colors.
- Fixed ROCm torch module version number

## Loading legacy 2.0/2.1 models
- Because the torch.dtype precision was not being converted correctly,
`load_pipeline_from_original_stable_diffusion_ckpt()` was returning
models of dtype float32 regardless of the precision setting. This caused
a precision mismatch crash.
- Problem now fixed (also see #3057 for the same fix to `main`)

## Support for a fourth textual inversion embedding file format
- This variant, exemplified by "easynegative.safetensors", has a single
'embparam' key containing a Tensor.
- Also refactored code to make it easier to read.
- Handle both pickle and safetensor formats.
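
For illustration, a minimal sketch of what handling this variant involves (the `'embparam'` key name comes from the description above; the helper itself is hypothetical, not InvokeAI's actual loader):

```python
from pathlib import Path

import torch
from safetensors.torch import load_file


def read_ti_tensor(path: str) -> torch.Tensor:
    """Extract the embedding tensor from a TI file, covering the
    single-'embparam'-key variant as well as the classic v1 layout."""
    p = Path(path)
    if p.suffix == ".safetensors":
        data = load_file(p)  # safetensors: no arbitrary pickle execution
    else:
        data = torch.load(p, map_location="cpu")  # legacy pickle format

    if isinstance(data, dict):
        if set(data.keys()) == {"embparam"}:
            return data["embparam"]  # the fourth variant described above
        if "string_to_param" in data:  # classic v1 .pt layout
            return next(iter(data["string_to_param"].values()))
    raise ValueError(f"unrecognized TI file format: {path}")
```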

## Persistent model selection
- To be consistent with WebUI parameter behavior, the currently selected
model is saved on exit and restored on restart for both WebUI and CLI

## Bug fixes
- Name of VAE cache directory was "hug", not "hub". This is fixed.

## VAE fixes
- Allow custom VAEs to be assigned to a legacy model by placing a
like-named vae file adjacent to the checkpoint file.
- The custom VAE will be picked up and incorporated into the diffusers
model if the user chooses to convert/optimize.

## Custom config file loading
- Some of the civitai models instruct users to place a custom .yaml file
adjacent to the checkpoint file. This generally wasn't working because
some of the .yaml files use FrozenCLIPEmbedder rather than
WeightedFrozenCLIPEmbedder, and our FrozenCLIPEmbedder class doesn't
handle the `personalization_config` section used by the textual
inversion manager. Other .yaml files don't have the
`personalization_config` section at all. Both these issues are
fixed. #1685
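
Conceptually, the fix boils down to substituting a stock `personalization_config` whenever the custom .yaml omits one. A hedged sketch of that pattern (the fallback values are illustrative, not the exact ones InvokeAI supplies):

```python
from omegaconf import OmegaConf

# Illustrative stand-in for the section the textual inversion manager expects
FALLBACK_PERSONALIZATION = OmegaConf.create({
    "target": "ldm.modules.embedding_manager.EmbeddingManager",
    "params": {"placeholder_strings": ["*"]},
})


def personalization_section(model_config):
    """Return personalization_config, or a default when the custom .yaml
    (e.g. one shipped alongside a civitai model) lacks the section."""
    params = model_config.model.params
    return params.get("personalization_config") or FALLBACK_PERSONALIZATION
```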

## Consistent pytorch version
- There was an inconsistency between the pytorch version requirement in
`pyproject.toml` and the requirement in the installer (which does a
little jiggery-pokery to load torch with the right CUDA/ROCm version
prior to the main pip install). This was causing torch to be installed,
then uninstalled, and reinstalled with a different version number. This
is now fixed.
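
Both places now use the same compatible-release pin (see the `torch~=1.13.1` hunk in the installer diff below). As a reminder of how PEP 440 `~=` specifiers behave:

```python
from packaging.specifiers import SpecifierSet

spec = SpecifierSet("~=1.13.1")  # the pin applied to torch
print("1.13.2" in spec)  # True  -- patch releases are allowed
print("1.14.0" in spec)  # False -- minor bumps are excluded
print("2.0.0" in spec)   # False -- so torch 2.0 cannot sneak in
```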
2023-04-01 10:17:43 -04:00
Lincoln Stein
1e5a44a474 bump version to 2.3.3 final 2023-04-01 09:43:46 -04:00
Lincoln Stein
78ea5d773d Update ldm/invoke/config/invokeai_update.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-04-01 09:43:02 -04:00
Lincoln Stein
7547784e98 Update installer/lib/installer.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-04-01 09:41:38 -04:00
Lincoln Stein
e82641d5f9 Update installer/lib/installer.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-04-01 09:41:25 -04:00
Lincoln Stein
993baadc22 making this a prerelease for zipfile purposes 2023-03-31 00:44:39 -04:00
Lincoln Stein
ccfb0b94b9 added @EgoringKosmos recipe for fixing ROCm installs 2023-03-31 00:38:30 -04:00
Lincoln Stein
352805d607 fix for python 3.9 2023-03-31 00:33:10 -04:00
Lincoln Stein
4145e27ce6 move personalization fallback section into a static method 2023-03-30 21:53:19 -04:00
Lincoln Stein
3d4f4b677f support external legacy config files with no personalization section 2023-03-30 21:39:05 -04:00
Lincoln Stein
249173faf5 remove extraneous warnings about overwriting trigger terms 2023-03-30 20:37:10 -04:00
Lincoln Stein
794ef868af fix incorrect loading of external VAEs
- Closes #3073
2023-03-30 18:50:27 -04:00
Lincoln Stein
a1ed22517f reenable line completion during CLI edit_model cmd 2023-03-30 15:54:10 -04:00
Lincoln Stein
3765ee9b59 make invokeai-model-install work with editable install 2023-03-30 14:32:35 -04:00
Lincoln Stein
46e578e1ef Merge branch 'release/2.3.3-rc3' of github.com:invoke-ai/InvokeAI into release/2.3.3-rc3 2023-03-30 13:22:26 -04:00
Lincoln Stein
3a8ef0a00c make CONCEPTS documentation title more meaningful 2023-03-30 13:21:50 -04:00
Lincoln Stein
cf262dd2ea Update installer/lib/installer.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-03-29 12:44:02 -04:00
Lincoln Stein
b0b0c48d8a bump version to 2.3.3 2023-03-28 23:20:05 -04:00
Lincoln Stein
8404e06d77 update documentation
- Add link to Statcomm's visual guide to docs (his permission pending)
- Update the what's new sections.
2023-03-28 17:52:22 -04:00
Lincoln Stein
a91d01c27a enhancements to update routines
- Allow invokeai-update to update using a release, tag or branch.
- Allow CLI's root directory update routine to update directory
  contents regardless of whether current version is released.
- In model importation routine, clarify wording of instructions when user is
  asked to choose the type of model being imported.
2023-03-28 15:58:36 -04:00
Lincoln Stein
5eeca47887 bump rc version number 2023-03-28 13:08:38 -04:00
Lincoln Stein
66b361294b update embedding file documentation 2023-03-28 12:24:01 -04:00
Lincoln Stein
0fb1e79a0b update model installation documentation 2023-03-28 12:07:47 -04:00
Lincoln Stein
14f1efaf4f launch --model supersedes persistent model 2023-03-28 10:53:32 -04:00
Lincoln Stein
23aa17e387 fix typo in name of vae cache 2023-03-28 10:48:03 -04:00
Lincoln Stein
f23cc54e1b save and restore selected model on startup/exit 2023-03-28 10:39:19 -04:00
Lincoln Stein
e3d992d5d7 add metadata dump script 2023-03-28 10:01:31 -04:00
Lincoln Stein
bb972b2e3d Add support for yet another TI embedding file format (2.3 version) (#3045)
- This variant, exemplified by "easynegative.safetensors", has a single
'embparam' key containing a Tensor.
- Also refactored code to make it easier to read.
- Handle both pickle and safetensor formats.
2023-03-28 00:46:30 -04:00
Lincoln Stein
41a8fdea53 fix bugs in online ckpt conversion of 2.0 models
This commit fixes bugs related to the on-the-fly conversion and loading of
legacy checkpoint models built on SD-2.0 base.

- When legacy checkpoints built on SD-2.0 models were converted
  on-the-fly using --ckpt_convert, generation would crash with a
  precision incompatibility error.

- In addition, broken logic was causing some 2.0-derived ckpt files to
  be converted into diffusers and then processed through the legacy
  generation routines - not good.
2023-03-28 00:11:37 -04:00
Lincoln Stein
a78ff86e42 Merge branch 'v2.3' into enhance/handle-another-embedding-variant 2023-03-27 22:38:36 -04:00
Lincoln Stein
071df30597 handle a fourth variant of embedding .pt files
- This variant, exemplified by "easynegative.safetensors", has a single
  'embparam' key containing a Tensor.
- Also refactored code to make it easier to read.
- Handle both pickle and safetensor formats.
2023-03-26 23:40:29 -04:00
19 changed files with 693 additions and 382 deletions

View File

@@ -1,5 +1,5 @@
---
title: Concepts Library
title: Styles and Subjects
---
# :material-library-shelves: The Hugging Face Concepts Library and Importing Textual Inversion files
@@ -109,21 +109,43 @@ For example, TI files generated by the Hugging Face toolkit share the name
`learned_embedding.bin`. You can use subdirectories to keep them distinct.
At startup time, InvokeAI will scan the `embeddings` directory and load any TI
files it finds there. At startup you will see a message similar to this one:
files it finds there. At startup you will see messages similar to these:
```bash
>> Current embedding manager terms: *, <HOI4-Leader>, <princess-knight>
>> Loading embeddings from /data/lstein/invokeai-2.3/embeddings
| Loading v1 embedding file: style-hamunaptra
| Loading v4 embedding file: embeddings/learned_embeds-steps-500.bin
| Loading v2 embedding file: lfa
| Loading v3 embedding file: easynegative
| Loading v1 embedding file: rem_rezero
| Loading v2 embedding file: midj-strong
| Loading v4 embedding file: anime-background-style-v2/learned_embeds.bin
| Loading v4 embedding file: kamon-style/learned_embeds.bin
** Notice: kamon-style/learned_embeds.bin was trained on a model with an incompatible token dimension: 768 vs 1024.
>> Textual inversion triggers: <anime-background-style-v2>, <easynegative>, <lfa>, <midj-strong>, <milo>, Rem3-2600, Style-Hamunaptra
```
Note the `*` trigger term. This is a placeholder term that many early TI
tutorials taught people to use rather than a more descriptive term.
Unfortunately, if you have multiple TI files that all use this term, only the
first one loaded will be triggered by use of the term.
Textual Inversion embeddings trained on version 1.X stable diffusion
models are incompatible with version 2.X models and vice-versa.
To avoid this problem, you can use the `merge_embeddings.py` script to merge two
or more TI files together. If it encounters a collision of terms, the script
will prompt you to select new terms that do not collide. See
[Textual Inversion](TEXTUAL_INVERSION.md) for details.
After the embeddings load, InvokeAI will print out a list of all the
recognized trigger terms. To trigger the term, include it in the
prompt exactly as written, including angle brackets if any and
respecting the capitalization.
There are at least four different embedding file formats, and each uses
a different convention for the trigger terms. In some cases, the
trigger term is specified in the file contents and may or may not be
surrounded by angle brackets. In the example above, `Rem3-2600`,
`Style-Hamunaptra`, and `<midj-strong>` were specified this way and
there is no easy way to change the term.
In other cases the trigger term is not contained within the embedding
file. In this case, InvokeAI constructs a trigger term consisting of
the base name of the file (without the file extension) surrounded by
angle brackets. In the example above `<easynegative>` is such a file
(the filename was `easynegative.safetensors`). In such cases, you can
change the trigger term simply by renaming the file.
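A sketch of the filename-derived trigger (illustrative; the real loader's naming logic may differ in detail):

```python
from pathlib import Path

def filename_trigger(ti_file: str) -> str:
    """Build a trigger term from a TI file's basename,
    e.g. 'easynegative.safetensors' -> '<easynegative>'."""
    return f"<{Path(ti_file).stem}>"

print(filename_trigger("embeddings/easynegative.safetensors"))  # <easynegative>
```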
## Further Reading

View File

@@ -20,6 +20,8 @@ title: Overview
Scriptable access to InvokeAI's features.
- [Visual Manual for InvokeAI](https://docs.google.com/presentation/d/e/2PACX-1vSE90aC7bVVg0d9KXVMhy-Wve-wModgPFp7AGVTOCgf4xE03SnV24mjdwldolfCr59D_35oheHe4Cow/pub?start=false&loop=true&delayms=60000) (contributed by Statcomm)
- Image Generation
- [Prompt Engineering](PROMPTS.md)

View File

@@ -142,6 +142,10 @@ This method is recommended for those familiar with running Docker containers
- [WebUI overview](features/WEB.md)
- [WebUI hotkey reference guide](features/WEBUIHOTKEYS.md)
- [WebUI Unified Canvas for Img2Img, inpainting and outpainting](features/UNIFIED_CANVAS.md)
- [Visual Manual for InvokeAI v2.3.1](https://docs.google.com/presentation/d/e/2PACX-1vSE90aC7bVVg0d9KXVMhy-Wve-wModgPFp7AGVTOCgf4xE03SnV24mjdwldolfCr59D_35oheHe4Cow/pub?start=false&loop=true&delayms=60000) (contributed by Statcomm)
<!-- separator -->
<!-- separator -->
### The InvokeAI Command Line Interface
@@ -165,7 +169,7 @@ This method is recommended for those familiar with running Docker containers
- [Installing](installation/050_INSTALLING_MODELS.md)
- [Model Merging](features/MODEL_MERGING.md)
- [Style/Subject Concepts and Embeddings](features/CONCEPTS.md)
- [Adding custom styles and subjects via embeddings](features/CONCEPTS.md)
- [Textual Inversion](features/TEXTUAL_INVERSION.md)
- [Not Safe for Work (NSFW) Checker](features/NSFW.md)
<!-- separator -->
@@ -177,6 +181,154 @@ This method is recommended for those familiar with running Docker containers
## :octicons-log-16: Latest Changes
### v2.3.3 <small>(29 March 2023)</small>
#### Bug Fixes
1. When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
2. Textual inversion will select an appropriate batchsize based on whether `xformers` is active, and will default to `xformers` enabled if the library is detected.
3. The batch script log file names have been fixed to be compatible with Windows.
4. Occasional corruption of the `.next_prefix` file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
5. An infinite loop when opening the developer's console from within the `invoke.sh` script has been corrected.
#### Enhancements
1. It is now possible to load and run several community-contributed SD-2.0 based models, including the infamous "Illuminati" model.
2. The "NegativePrompts" embedding file, and others like it, can now be loaded by placing it in the InvokeAI `embeddings` directory.
3. If no `--model` is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
4. On Linux systems, the `invoke.sh` launcher now uses a prettier console-based interface. To take advantage of it, install the `dialog` package using your package manager (e.g. `sudo apt install dialog`).
5. When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model following this example:
```
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt # or my-favorite-model.vae.safetensors
```
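A minimal sketch of the like-named lookup this performs (the suffix list mirrors the example above; the helper name is hypothetical):

```python
from pathlib import Path

SIDECAR_SUFFIXES = (".yaml", ".vae.pt", ".vae.safetensors")

def find_sidecars(model_path: Path) -> list[Path]:
    """Return config/VAE files sitting beside a checkpoint with the same
    basename, e.g. my-favorite-model.yaml for my-favorite-model.ckpt."""
    return [model_path.with_suffix(s) for s in SIDECAR_SUFFIXES
            if model_path.with_suffix(s).exists()]
```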
### v2.3.2 <small>(13 March 2023)</small>
#### Bugfixes
Since version 2.3.1 the following bugs have been fixed:
1. Black images appearing for potential NSFW images when generating with legacy checkpoint models and both `--no-nsfw_checker` and `--ckpt_convert` turned on.
2. Black images appearing when generating from models fine-tuned on Stable-Diffusion-2-1-base. When importing V2-derived models, you may be asked to select whether the model was derived from a "base" model (512 pixels) or the 768-pixel SD-2.1 model.
3. The "Use All" button was not restoring the Hi-Res Fix setting on the WebUI
4. When using the model installer console app, models failed to import correctly when importing from directories with spaces in their names. A similar issue with the output directory was also fixed.
5. Crashes that occurred during model merging.
6. Restore previous naming of Stable Diffusion base and 768 models.
7. Upgraded to latest versions of `diffusers`, `transformers`, `safetensors` and `accelerate` libraries upstream. We hope that this will fix the `assertion NDArray > 2**32` issue that MacOS users have had when generating images larger than 768x768 pixels. Please report back.
As part of the upgrade to `diffusers`, the location of the diffusers-based models has changed from `models/diffusers` to `models/hub`. When you launch InvokeAI for the first time, it will prompt you to OK a one-time move. This should be quick and harmless, but if you have modified your `models/diffusers` directory in some way, for example using symlinks, you may wish to cancel the migration and make appropriate adjustments.
#### New "Invokeai-batch" script
2.3.2 introduces a new command-line only script called
`invokeai-batch` that can be used to generate hundreds of images from
prompts and settings that vary systematically. This can be used to try
the same prompt across multiple combinations of models, steps, CFG
settings and so forth. It also allows you to template prompts and
generate a combinatorial list like:
```
a shack in the mountains, photograph
a shack in the mountains, watercolor
a shack in the mountains, oil painting
a chalet in the mountains, photograph
a chalet in the mountains, watercolor
a chalet in the mountains, oil painting
a shack in the desert, photograph
...
```
If you have a system with multiple GPUs, or a single GPU with lots of
VRAM, you can parallelize generation across the combinatorial set,
reducing wait times and using your system's resources efficiently
(make sure you have good GPU cooling).
To try `invokeai-batch` out, launch the "developer's console" using
the `invoke` launcher script, or activate the invokeai virtual
environment manually. From the console, give the command
`invokeai-batch --help` in order to learn how the script works and
create your first template file for dynamic prompt generation.
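Conceptually, the expansion is a Cartesian product over the templated fields. A stand-alone illustration (the actual template syntax is described by `invokeai-batch --help`):

```python
from itertools import product

subjects = ["a shack", "a chalet"]
places = ["in the mountains", "in the desert"]
styles = ["photograph", "watercolor", "oil painting"]

for subject, place, style in product(subjects, places, styles):
    print(f"{subject} {place}, {style}")
# a shack in the mountains, photograph
# a shack in the mountains, watercolor
# ...
```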
### v2.3.1 <small>(26 February 2023)</small>
This is primarily a bugfix release, but it does provide several new features that will improve the user experience.
#### Enhanced support for model management
InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.
There are three ways of accessing the model management features:
1. ***From the WebUI***, click on the cube to the right of the model selection menu. This will bring up a form that allows you to import models individually from your local disk or scan a directory for models to import.
![image](https://user-images.githubusercontent.com/111189/220638091-918492cc-0719-4194-b033-3741e8289b30.png)
2. **Using the Model Installer App**
Choose option (5) _download and install models_ from the `invoke` launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or import checkpoint, safetensors, and diffusers models from a local disk or the internet. The example below shows importing two checkpoint URLs from popular SD sites and a HuggingFace diffusers model using its Repository ID. It also shows how to designate a folder to be scanned at startup time for new models to import.
Command-line users can start this app using the command `invokeai-model-install`.
![image](https://user-images.githubusercontent.com/111189/220660363-22ff3a2e-8082-410e-a818-d2b3a0529bac.png)
3. **Using the Command Line Client (CLI)**
The `!install_model` and `!convert_model` commands have been enhanced to allow entering of URLs and local directories to scan and import. The first command installs .ckpt and .safetensors files as-is. The second one converts them into the faster diffusers format before installation.
Internally InvokeAI is able to probe the contents of a .ckpt or .safetensors file to distinguish among v1.x, v2.x and inpainting models. This means that you do **not** need to include "inpaint" in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into a diffusers model the first time you use them.
Please see [INSTALLING MODELS](https://invoke-ai.github.io/InvokeAI/installation/050_INSTALLING_MODELS/) for more information on model management.
#### An Improved Installer Experience
The installer now launches a console-based UI for setting and changing commonly-used startup options:
![image](https://user-images.githubusercontent.com/111189/220644777-3d3a90ca-f9e2-4e6d-93da-cbdd66bf12f3.png)
After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching `invoke.sh`/`invoke.bat` and entering option (6) _change InvokeAI startup options_
Command-line users can launch the new configure app using `invokeai-configure`.
This release also comes with a renewed updater. To do an update without going through a whole reinstallation, launch `invoke.sh` or `invoke.bat` and choose option (9) _update InvokeAI_ . This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or any released or unreleased version you choose by selecting the tag or branch of the desired version.
![image](https://user-images.githubusercontent.com/111189/220650124-30a77137-d9cd-406e-a87d-d8283f99a4b3.png)
Command-line users can run this interface by typing `invokeai-update`
#### Image Symmetry Options
There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting _Symmetry_ from the image generation settings, or within the CLI by using the options `--h_symmetry_time_pct` and `--v_symmetry_time_pct` (these can be abbreviated to `--h_sym` and `--v_sym` like all other options).
![image](https://user-images.githubusercontent.com/111189/220658687-47fd0f2c-7069-4d95-aec9-7196fceb360d.png)
#### A New Unified Canvas Look
This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select _Use Canvas Beta Layout_:
![image](https://user-images.githubusercontent.com/111189/220646958-b7eca95e-dc39-4cd2-b277-63eac98ed446.png)
Refresh the screen and go to the Unified Canvas (left side of screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself:
![image](https://user-images.githubusercontent.com/111189/220647560-4a9265a1-6926-44f9-9d08-e1ef2ce61ff8.png)
#### Model conversion and merging within the WebUI
The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the `invoke.sh`/`invoke.bat` scripts.
#### An easier way to contribute translations to the WebUI
We have migrated our translation efforts to [Weblate](https://hosted.weblate.org/engage/invokeai/), a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief [translation guide](https://github.com/invoke-ai/InvokeAI/blob/v2.3.1/docs/other/TRANSLATION.md) for more information on how to contribute.
#### Numerous internal bugfixes and performance issues
This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to `diffusers 0.13.0`, and using the `compel` library for prompt parsing. See [Detailed Change Log](#full-change-log) for a detailed list of bugs caught and squished.
#### Summary of InvokeAI command line scripts (all accessible via the launcher menu)
| Command | Description |
|--------------------------|---------------------------------------------------------------------|
| `invokeai` | Command line interface |
| `invokeai --web` | Web interface |
| `invokeai-model-install` | Model installer with console forms-based front end |
| `invokeai-ti --gui` | Textual inversion, with a console forms-based front end |
| `invokeai-merge --gui` | Model merging, with a console forms-based front end |
| `invokeai-configure` | Startup configuration; can also be used to reinstall support models |
| `invokeai-update` | InvokeAI software updater |
### v2.3.0 <small>(9 February 2023)</small>
#### Migration to Stable Diffusion `diffusers` models

View File

@@ -77,7 +77,7 @@ machine. To test, open up a terminal window and issue the following
command:
```
rocm-smi
rocminfo
```
If you get a table labeled "ROCm System Management Interface" the
@@ -95,9 +95,17 @@ recent version of Ubuntu, 22.04. However, this [community-contributed
recipe](https://novaspirit.github.io/amdgpu-rocm-ubu22/) is reported
to work well.
After installation, please run `rocm-smi` a second time to confirm
After installation, please run `rocminfo` a second time to confirm
that the driver is present and the GPU is recognized. You may need to
do a reboot in order to load the driver.
do a reboot in order to load the driver. In addition, if you see
errors relating to your username not being a member of the `render`
group, you may fix this by adding yourself to this group with the command:
```
sudo usermod -a -G render myUserName
```
(Thanks to @EgoringKosmos for the usermod recipe.)
### Linux Install with a ROCm-docker Container

View File

@@ -11,7 +11,7 @@ The model checkpoint files ('\*.ckpt') are the Stable Diffusion
captioned images gathered from multiple sources.
Originally there was only a single Stable Diffusion weights file,
which many people named `model.ckpt`. Now there are dozens or more
which many people named `model.ckpt`. Now there are hundreds
that have been fine-tuned to provide particular styles, genres, or
other features. In addition, there are several new formats that
improve on the original checkpoint format: a `.safetensors` format
@@ -29,9 +29,10 @@ and performance are being made at a rapid pace. Among other features
is the ability to download and install a `diffusers` model just by
providing its HuggingFace repository ID.
While InvokeAI will continue to support `.ckpt` and `.safetensors`
While InvokeAI will continue to support legacy `.ckpt` and `.safetensors`
models for the near future, these are deprecated and support will
likely be withdrawn at some point in the not-too-distant future.
be withdrawn in version 3.0, after which all legacy models will be
converted into diffusers at the time they are loaded.
This manual will guide you through installing and configuring model
weight files and converting legacy `.ckpt` and `.safetensors` files
@@ -89,15 +90,18 @@ aware that CIVITAI hosts many models that generate NSFW content.
!!! note
InvokeAI 2.3.x does not support directly importing and
running Stable Diffusion version 2 checkpoint models. You may instead
convert them into `diffusers` models using the conversion methods
described below.
running Stable Diffusion version 2 checkpoint models. If you
try to import them, they will be automatically
converted into `diffusers` models on the fly. This adds about 20s
to loading time. To avoid this overhead, you are encouraged to
use one of the conversion methods described below to convert them
permanently.
## Installation
There are multiple ways to install and manage models:
1. The `invokeai-configure` script which will download and install them for you.
1. The `invokeai-model-install` script which will download and install them for you.
2. The command-line tool (CLI) has commands that allow you to import, configure and modify
models files.
@@ -105,14 +109,41 @@ There are multiple ways to install and manage models:
3. The web interface (WebUI) has a GUI for importing and managing
models.
### Installation via `invokeai-configure`
### Installation via `invokeai-model-install`
From the `invoke` launcher, choose option (6) "re-run the configure
script to download new models." This will launch the same script that
prompted you to select models at install time. You can use this to add
models that you skipped the first time around. It is all right to
specify a model that was previously downloaded; the script will just
confirm that the files are complete.
From the `invoke` launcher, choose option (5) "Download and install
models." This will launch the same script that prompted you to select
models at install time. You can use this to add models that you
skipped the first time around. It is all right to specify a model that
was previously downloaded; the script will just confirm that the files
are complete.
This script allows you to load 3rd-party models. Look for a large text
entry box labeled "IMPORT LOCAL AND REMOTE MODELS." In this box, you
can cut and paste one or more of any of the following:
1. A URL that points to a downloadable .ckpt or .safetensors file.
2. A file path pointing to a .ckpt or .safetensors file.
3. A diffusers model repo_id (from HuggingFace) in the format
"owner/repo_name".
4. A directory path pointing to a diffusers model directory.
5. A directory path pointing to a directory containing a bunch of
.ckpt and .safetensors files. All will be imported.
You can enter multiple items into the textbox, each one on a separate
line. You can paste into the textbox using ctrl-shift-V or by dragging
and dropping a file/directory from the desktop into the box.
The script also lets you designate a directory that will be scanned
for new model files each time InvokeAI starts up. These models will be
added automatically.
Lastly, the script gives you a checkbox option to convert legacy models
into diffusers, or to run the legacy model directly. If you choose to
convert, the original .ckpt/.safetensors file will **not** be deleted,
but a new diffusers directory will be created, using twice your disk
space. However, the diffusers version will load faster, and will be
compatible with InvokeAI 3.0.
### Installation via the CLI
@@ -144,19 +175,15 @@ invoke> !import_model https://example.org/sd_models/martians.safetensors
For this to work, the URL must not be password-protected. Otherwise
you will receive a 404 error.
When you import a legacy model, the CLI will first ask you what type
of model this is. You can indicate whether it is a model based on
Stable Diffusion 1.x (1.4 or 1.5), one based on Stable Diffusion 2.x,
or a 1.x inpainting model. Be careful to indicate the correct model
type, or it will not load correctly. You can correct the model type
after the fact using the `!edit_model` command.
The system will then ask you a few other questions about the model,
including what size image it was trained on (usually 512x512), what
name and description you wish to use for it, and whether you would
like to install a custom VAE (variable autoencoder) file for the
model. For recent models, the answer to the VAE question is usually
"no," but it won't hurt to answer "yes".
When you import a legacy model, the CLI will try to figure out what
type of model it is and select the correct load configuration file.
However, one thing it can't do is to distinguish between Stable
Diffusion 2.x models trained on 512x512 vs 768x768 images. In this
case, the CLI will pop up a menu of choices, asking you to select
which type of model it is. Please consult the model documentation to
identify the correct answer, as loading with the wrong configuration
will lead to black images. You can correct the model type after the
fact using the `!edit_model` command.
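For the curious, a hedged sketch of the kind of state-dict probe involved (the key name and the 1024-dimension heuristic are the ones commonly used to spot OpenCLIP-based SD-2.x checkpoints; InvokeAI's actual probe may differ):

```python
import torch

def looks_like_sd2(ckpt_path: str) -> bool:
    """Heuristic: SD-2.x checkpoints carry an OpenCLIP text encoder whose
    final layer norm is 1024-wide; SD-1.x models use 768."""
    sd = torch.load(ckpt_path, map_location="cpu")
    sd = sd.get("state_dict", sd)
    key = "cond_stage_model.model.ln_final.weight"
    return key in sd and sd[key].shape[0] == 1024
```

Note that nothing in the weights distinguishes the 512-pixel base variant from the 768-pixel one, which is exactly why the menu described above is needed.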
After importing, the model will load. If this is successful, you will
be asked if you want to keep the model loaded in memory to start
@@ -211,129 +238,6 @@ description for the model, whether to make this the default model that
is loaded at InvokeAI startup time, and whether to replace its
VAE. Generally the answer to the latter question is "no".
### Specifying a configuration file for legacy checkpoints
Some checkpoint files come with instructions to use a specific .yaml
configuration file. For InvokeAI to load this file correctly, please put
the config file in the same directory as the corresponding `.ckpt` or
`.safetensors` file and make sure the file has the same basename as
the weights file. Here is an example:
```bash
wonderful-model-v2.ckpt
wonderful-model-v2.yaml
```
Similarly, to use a custom VAE, name the VAE like this:
```bash
wonderful-model-v2.vae.pt
```
### Converting legacy models into `diffusers`
The CLI `!convert_model` command will convert a `.safetensors` or `.ckpt`
model file into `diffusers` and install it. This will enable the model
to load and run faster without loss of image quality.
The usage is identical to `!import_model`. You may point the command
to either a downloaded model file on disk, or to a (non-password
protected) URL:
```bash
invoke> !convert_model C:/Users/fred/Downloads/martians.safetensors
```
After a successful conversion, the CLI will offer you the option of
deleting the original `.ckpt` or `.safetensors` file.
### Optimizing a previously-installed model
Lastly, if you have previously installed a `.ckpt` or `.safetensors`
file and wish to convert it into a `diffusers` model, you can do this
without re-downloading and converting the original file using the
`!optimize_model` command. Simply pass the short name of an existing
installed model:
```bash
invoke> !optimize_model martians-v1.0
```
The model will be converted into `diffusers` format and replace the
previously installed version. You will again be offered the
opportunity to delete the original `.ckpt` or `.safetensors` file.
### Related CLI Commands
There are a whole series of additional model management commands in
the CLI that you can read about in [Command-Line
Interface](../features/CLI.md). These include:
* `!models` - List all installed models
* `!switch <model name>` - Switch to the indicated model
* `!edit_model <model name>` - Edit the indicated model to change its name, description or other properties
* `!del_model <model name>` - Delete the indicated model
### Manually editing `configs/models.yaml`
If you are comfortable with a text editor then you may simply edit `models.yaml`
directly.
You will need to download the desired `.ckpt/.safetensors` file and
place it somewhere on your machine's filesystem. Alternatively, for a
`diffusers` model, record the repo_id or download the whole model
directory. Then using a **text** editor (e.g. the Windows Notepad
application), open the file `configs/models.yaml`, and add a new
stanza that follows one of the examples below:
#### A legacy model
A legacy `.ckpt` or `.safetensors` entry will look like this:
```yaml
arabian-nights-1.0:
description: A great fine-tune in Arabian Nights style
weights: ./path/to/arabian-nights-1.0.ckpt
config: ./configs/stable-diffusion/v1-inference.yaml
format: ckpt
width: 512
height: 512
default: false
```
Note that `format` is `ckpt` for both `.ckpt` and `.safetensors` files.
#### A diffusers model
A stanza for a `diffusers` model will look like this for a HuggingFace
model with a repository ID:
```yaml
arabian-nights-1.1:
description: An even better fine-tune of the Arabian Nights
repo_id: captahab/arabian-nights-1.1
format: diffusers
default: true
```
And for a downloaded directory:
```yaml
arabian-nights-1.1:
description: An even better fine-tune of the Arabian Nights
path: /path/to/captahab-arabian-nights-1.1
format: diffusers
default: true
```
There is additional syntax for indicating an external VAE to use with
this model. See `INITIAL_MODELS.yaml` and `models.yaml` for examples.
After you save the modified `models.yaml` file relaunch
`invokeai`. The new model will now be available for your use.
### Installation via the WebUI
To access the WebUI Model Manager, click on the button that looks like
@@ -413,3 +317,143 @@ And here is what the same argument looks like in `invokeai.init`:
--no-nsfw_checker
--autoconvert /home/fred/stable-diffusion-checkpoints
```
### Specifying a configuration file for legacy checkpoints
Some checkpoint files come with instructions to use a specific .yaml
configuration file. For InvokeAI to load this file correctly, please put
the config file in the same directory as the corresponding `.ckpt` or
`.safetensors` file and make sure the file has the same basename as
the model file. Here is an example:
```bash
wonderful-model-v2.ckpt
wonderful-model-v2.yaml
```
This is not needed for `diffusers` models, which come with their own
pre-packaged configuration.
### Specifying a custom VAE file for legacy checkpoints
To associate a custom VAE with a legacy file, place the VAE file in
the same directory as the corresponding `.ckpt` or
`.safetensors` file and make sure the file has the same basename as
the model file. Use the suffix `.vae.pt` for VAE checkpoint files, and
`.vae.safetensors` for VAE safetensors files. There is no requirement
that both the model and the VAE follow the same format.
Example:
```bash
wonderful-model-v2.pt
wonderful-model-v2.vae.safetensors
```
### Converting legacy models into `diffusers`
The CLI `!convert_model` command will convert a `.safetensors` or `.ckpt`
model file into `diffusers` and install it. This will enable the model
to load and run faster without loss of image quality.
The usage is identical to `!import_model`. You may point the command
to either a downloaded model file on disk, or to a (non-password
protected) URL:
```bash
invoke> !convert_model C:/Users/fred/Downloads/martians.safetensors
```
After a successful conversion, the CLI will offer you the option of
deleting the original `.ckpt` or `.safetensors` file.
### Optimizing a previously-installed model
Lastly, if you have previously installed a `.ckpt` or `.safetensors`
file and wish to convert it into a `diffusers` model, you can do this
without re-downloading and converting the original file using the
`!optimize_model` command. Simply pass the short name of an existing
installed model:
```bash
invoke> !optimize_model martians-v1.0
```
The model will be converted into `diffusers` format and replace the
previously installed version. You will again be offered the
opportunity to delete the original `.ckpt` or `.safetensors` file.
Alternatively you can use the WebUI's model manager to handle diffusers
optimization. Select the legacy model you wish to convert, and then
look for a button labeled "Convert to Diffusers" in the upper right of
the window.
### Related CLI Commands
There are a whole series of additional model management commands in
the CLI that you can read about in [Command-Line
Interface](../features/CLI.md). These include:
* `!models` - List all installed models
* `!switch <model name>` - Switch to the indicated model
* `!edit_model <model name>` - Edit the indicated model to change its name, description or other properties
* `!del_model <model name>` - Delete the indicated model
### Manually editing `configs/models.yaml`
If you are comfortable with a text editor then you may simply edit `models.yaml`
directly.
You will need to download the desired `.ckpt/.safetensors` file and
place it somewhere on your machine's filesystem. Alternatively, for a
`diffusers` model, record the repo_id or download the whole model
directory. Then using a **text** editor (e.g. the Windows Notepad
application), open the file `configs/models.yaml`, and add a new
stanza that follows one of the examples below:
#### A legacy model
A legacy `.ckpt` or `.safetensors` entry will look like this:
```yaml
arabian-nights-1.0:
description: A great fine-tune in Arabian Nights style
weights: ./path/to/arabian-nights-1.0.ckpt
config: ./configs/stable-diffusion/v1-inference.yaml
format: ckpt
width: 512
height: 512
default: false
```
Note that `format` is `ckpt` for both `.ckpt` and `.safetensors` files.
#### A diffusers model
A stanza for a `diffusers` model will look like this for a HuggingFace
model with a repository ID:
```yaml
arabian-nights-1.1:
description: An even better fine-tune of the Arabian Nights
repo_id: captahab/arabian-nights-1.1
format: diffusers
default: true
```
And for a downloaded directory:
```yaml
arabian-nights-1.1:
description: An even better fine-tune of the Arabian Nights
path: /path/to/captahab-arabian-nights-1.1
format: diffusers
default: true
```
There is additional syntax for indicating an external VAE to use with
this model. See `INITIAL_MODELS.yaml` and `models.yaml` for examples.
After you save the modified `models.yaml` file relaunch
`invokeai`. The new model will now be available for your use.
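If you hand-edit the file, a quick structural check before relaunching can save a failed startup. A sketch using OmegaConf, with field expectations inferred from the stanzas above:

```python
from omegaconf import OmegaConf

conf = OmegaConf.load("configs/models.yaml")
for name, stanza in conf.items():
    fmt = stanza.get("format")
    assert fmt in ("ckpt", "diffusers"), f"{name}: unknown format {fmt!r}"
    if fmt == "ckpt":
        assert "weights" in stanza, f"{name}: ckpt stanza needs 'weights'"
    else:
        assert "repo_id" in stanza or "path" in stanza, \
            f"{name}: diffusers stanza needs 'repo_id' or 'path'"
print("models.yaml looks structurally sound")
```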

View File

@@ -23,14 +23,16 @@ We thank them for all of their time and hard work.
* @damian0815 - Attention Systems and Gameplay Engineer
* @mauwii (Matthias Wild) - Continuous integration and product maintenance engineer
* @Netsvetaev (Artur Netsvetaev) - UI/UX Developer
* @tildebyte - General gadfly and resident (self-appointed) know-it-all
* @keturn - Lead for Diffusers port
* @ebr (Eugene Brodsky) - Cloud/DevOps/Software engineer; your friendly neighbourhood cluster-autoscaler
* @jpphoto (Jonathan Pollack) - Inference and rendering engine optimization
* @genomancer (Gregg Helt) - Model training and merging
* @gogurtenjoyer - User support and testing
* @whosawwhatsis - User support and testing
## **Contributions by**
- [tildebyte](https://github.com/tildebyte)
- [Sean McLellan](https://github.com/Oceanswave)
- [Kevin Gibbons](https://github.com/bakkot)
- [Tesseract Cat](https://github.com/TesseractCat)
@@ -78,6 +80,7 @@ We thank them for all of their time and hard work.
- [psychedelicious](https://github.com/psychedelicious)
- [damian0815](https://github.com/damian0815)
- [Eugene Brodsky](https://github.com/ebr)
- [Statcomm](https://github.com/statcomm)
## **Original CompVis Authors**

View File

@@ -242,8 +242,8 @@ class InvokeAiInstance:
from plumbum import FG, local
# Note that we're installing pinned versions of torch and
# torchvision here, which may not correspond to what is
# in pyproject.toml. This is a hack to prevent torch 2.0 from
# torchvision here, which *should* correspond to what is
# in pyproject.toml. This is to prevent torch 2.0 from
# being installed and immediately uninstalled and replaced with 1.13
pip = local[self.pip]
@@ -252,7 +252,7 @@ class InvokeAiInstance:
"install",
"--require-virtualenv",
"torch~=1.13.1",
"torchvision>=0.14.1",
"torchvision~=0.14.1",
"--force-reinstall",
"--find-links" if find_links is not None else None,
find_links,
@@ -384,7 +384,7 @@ class InvokeAiInstance:
os.chmod(dest, 0o0755)
if OS == "Linux":
shutil.copy(Path(__file__).parent / '..' / "templates" / "dialogrc", self.runtime / '.dialogrc')
shutil.copy(Path(__file__).parents[1] / "templates" / "dialogrc", self.runtime / '.dialogrc')
def update(self):
pass

View File

@@ -976,7 +976,7 @@ class Generate:
self.generators = {}
seed_everything(random.randrange(0, np.iinfo(np.uint32).max))
if self.embedding_path is not None:
if self.embedding_path and not model_data.get("ti_embeddings_loaded"):
print(f'>> Loading embeddings from {self.embedding_path}')
for root, _, files in os.walk(self.embedding_path):
for name in files:
@@ -984,9 +984,10 @@ class Generate:
self.model.textual_inversion_manager.load_textual_inversion(
ti_path, defer_injecting_tokens=True
)
print(
f'>> Textual inversion triggers: {", ".join(sorted(self.model.textual_inversion_manager.get_all_trigger_strings()))}'
)
model_data["ti_embeddings_loaded"] = True
print(
f'>> Textual inversion triggers: {", ".join(sorted(self.model.textual_inversion_manager.get_all_trigger_strings()))}'
)
self.model_name = model_name
self._set_sampler() # requires self.model_name to be set first

View File

@@ -126,11 +126,13 @@ def main():
print(f"{e}. Aborting.")
sys.exit(-1)
model = opt.model or retrieve_last_used_model()
# creating a Generate object:
try:
gen = Generate(
conf=opt.conf,
model=opt.model,
model=model,
sampler_name=opt.sampler_name,
embedding_path=embedding_path,
full_precision=opt.full_precision,
@@ -179,6 +181,7 @@ def main():
# web server loops forever
if opt.web or opt.gui:
invoke_ai_web_server_loop(gen, gfpgan, codeformer, esrgan)
save_last_used_model(gen.model_name)
sys.exit(0)
if not infile:
@@ -499,6 +502,7 @@ def main_loop(gen, opt, completer):
print(
f'\nGoodbye!\nYou can start InvokeAI again by running the "invoke.bat" (or "invoke.sh") script from {Globals.root}'
)
save_last_used_model(gen.model_name)
# TO DO: remove repetitive code and the awkward command.replace() trope
@@ -772,14 +776,10 @@ def convert_model(model_name_or_path: Union[Path, str], gen, opt, completer):
original_config_file = Path(model_info["config"])
model_name = model_name_or_path
model_description = model_info["description"]
vae = model_info["vae"]
vae_path = model_info.get("vae")
else:
print(f"** {model_name_or_path} is not a legacy .ckpt weights file")
return
if vae_repo := ldm.invoke.model_manager.VAE_TO_REPO_ID.get(Path(vae).stem):
vae_repo = dict(repo_id=vae_repo)
else:
vae_repo = None
model_name = manager.convert_and_import(
ckpt_path,
diffusers_path=Path(
@@ -788,7 +788,7 @@ def convert_model(model_name_or_path: Union[Path, str], gen, opt, completer):
model_name=model_name,
model_description=model_description,
original_config_file=original_config_file,
vae=vae_repo,
vae_path=vae_path,
)
else:
try:
@@ -834,6 +834,7 @@ def edit_model(model_name: str, gen, opt, completer):
print(f"\n>> Editing model {model_name} from configuration file {opt.conf}")
new_name = _get_model_name(manager.list_models(), completer, model_name)
completer.complete_extensions(('.yaml','.ckpt','.safetensors','.pt'))
for attribute in info.keys():
if type(info[attribute]) != str:
continue
@@ -841,6 +842,7 @@ def edit_model(model_name: str, gen, opt, completer):
continue
completer.set_line(info[attribute])
info[attribute] = input(f"{attribute}: ") or info[attribute]
completer.complete_extensions(None)
if info["format"] == "diffusers":
vae = info.get("vae", dict(repo_id=None, path=None, subfolder=None))
@@ -1287,6 +1289,25 @@ def check_internet() -> bool:
except:
return False
def retrieve_last_used_model()->str:
"""
Return name of the last model used.
"""
model_file_path = Path(Globals.root,'.last_model')
if not model_file_path.exists():
return None
with open(model_file_path,'r') as f:
return f.readline()
def save_last_used_model(model_name:str):
"""
Save name of the last model used.
"""
model_file_path = Path(Globals.root,'.last_model')
with open(model_file_path,'w') as f:
f.write(model_name)
# This routine performs any patch-ups needed after installation
def run_patches():
install_missing_config_files()
@@ -1330,7 +1351,7 @@ def do_version_update(root_version: version.Version, app_version: Union[str, ver
if sys.platform == "linux":
print('>> Downloading new version of launcher script and its config file')
from ldm.util import download_with_progress_bar
url_base = f'https://raw.githubusercontent.com/invoke-ai/InvokeAI/release/v{str(app_version)}/installer/templates/'
url_base = f'https://raw.githubusercontent.com/invoke-ai/InvokeAI/v{str(app_version)}/installer/templates/'
dest = Path(Globals.root,'invoke.sh.in')
assert download_with_progress_bar(url_base+'invoke.sh.in',dest)

View File

@@ -1,2 +1,2 @@
__version__='2.3.3-rc2'
__version__='2.3.3'

View File

@@ -1037,10 +1037,10 @@ def convert_open_clip_checkpoint(checkpoint):
return text_model
def replace_checkpoint_vae(checkpoint, vae_path:str):
if vae_path.endswith(".safetensors"):
vae_ckpt = load_file(vae_path)
else:
if Path(vae_path).suffix in ['.pt','.ckpt']:
vae_ckpt = torch.load(vae_path, map_location="cpu")
else:
vae_ckpt = load_file(vae_path)
state_dict = vae_ckpt['state_dict'] if "state_dict" in vae_ckpt else vae_ckpt
for vae_key in state_dict:
new_key = f'first_stage_model.{vae_key}'
@@ -1264,10 +1264,10 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
cache_dir=cache_dir,
)
pipe = pipeline_class(
vae=vae,
text_encoder=text_model,
vae=vae.to(precision),
text_encoder=text_model.to(precision),
tokenizer=tokenizer,
unet=unet,
unet=unet.to(precision),
scheduler=scheduler,
safety_checker=None,
feature_extractor=None,
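
The `.to(precision)` casts are the heart of the fix: torch modules keep the dtype they were created with, so a float32 submodule fed half-precision latents raises a dtype-mismatch `RuntimeError` at generation time. A tiny demonstration of the cast semantics:

```python
import torch

unet = torch.nn.Conv2d(4, 4, kernel_size=3)   # stand-in module, fp32 params
print(next(unet.parameters()).dtype)          # torch.float32

precision = torch.float16
unet.to(precision)                            # the cast the fix applies
print(next(unet.parameters()).dtype)          # torch.float16
```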

View File

@@ -16,6 +16,8 @@ from rich.text import Text
from ldm.invoke import __version__
INVOKE_AI_SRC="https://github.com/invoke-ai/InvokeAI/archive"
INVOKE_AI_TAG="https://github.com/invoke-ai/InvokeAI/archive/refs/tags"
INVOKE_AI_BRANCH="https://github.com/invoke-ai/InvokeAI/archive/refs/heads"
INVOKE_AI_REL="https://api.github.com/repos/invoke-ai/InvokeAI/releases"
OS = platform.uname().system
@@ -41,7 +43,8 @@ def welcome(versions: dict):
yield '[bold yellow]Options:'
yield f'''[1] Update to the latest official release ([italic]{versions[0]['tag_name']}[/italic])
[2] Update to the bleeding-edge development version ([italic]main[/italic])
[3] Manually enter the tag or branch name you wish to update'''
[3] Manually enter the [bold]tag name[/bold] for the version you wish to update to
[4] Manually enter the [bold]branch name[/bold] for the version you wish to update to'''
console.rule()
print(
@@ -62,17 +65,26 @@ def main():
welcome(versions)
tag = None
choice = Prompt.ask('Choice:',choices=['1','2','3'],default='1')
branch = None
release = None
choice = Prompt.ask('Choice:',choices=['1','2','3','4'],default='1')
if choice=='1':
tag = versions[0]['tag_name']
release = versions[0]['tag_name']
elif choice=='2':
tag = 'main'
release = 'main'
elif choice=='3':
tag = Prompt.ask('Enter an InvokeAI tag or branch name')
tag = Prompt.ask('Enter an InvokeAI tag name')
elif choice=='4':
branch = Prompt.ask('Enter an InvokeAI branch name')
print(f':crossed_fingers: Upgrading to [yellow]{tag}[/yellow]')
cmd = f'pip install {INVOKE_AI_SRC}/{tag}.zip --use-pep517 --upgrade'
print(f':crossed_fingers: Upgrading to [yellow]{tag if tag else release}[/yellow]')
if release:
cmd = f'pip install {INVOKE_AI_SRC}/{release}.zip --use-pep517 --upgrade'
elif tag:
cmd = f'pip install {INVOKE_AI_TAG}/{tag}.zip --use-pep517 --upgrade'
else:
cmd = f'pip install {INVOKE_AI_BRANCH}/{branch}.zip --use-pep517 --upgrade'
print('')
print('')
if os.system(cmd)==0:
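
For reference, the three archive URL shapes the updater now builds (version strings illustrative):

```python
INVOKE_AI_SRC = "https://github.com/invoke-ai/InvokeAI/archive"
INVOKE_AI_TAG = INVOKE_AI_SRC + "/refs/tags"
INVOKE_AI_BRANCH = INVOKE_AI_SRC + "/refs/heads"

release, tag, branch = "v2.3.3", "v2.3.3", "main"
print(f"pip install {INVOKE_AI_SRC}/{release}.zip --use-pep517 --upgrade")
print(f"pip install {INVOKE_AI_TAG}/{tag}.zip --use-pep517 --upgrade")
print(f"pip install {INVOKE_AI_BRANCH}/{branch}.zip --use-pep517 --upgrade")
```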

View File

@@ -29,7 +29,13 @@ Model_dir = "models"
Weights_dir = "ldm/stable-diffusion-v1/"
# the initial "configs" dir is now bundled in the `invokeai.configs` package
Dataset_path = Path(configs.__path__[0]) / "INITIAL_MODELS.yaml"
Dataset_path = None
for path in configs.__path__:
file = Path(path, "INITIAL_MODELS.yaml")
if file.exists():
Dataset_path = file
break
assert Dataset_path, f"Could not find the file INITIAL_MODELS.yaml in {configs.__path__}"
# initial models omegaconf
Datasets = None
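
The loop replaces the old single-path lookup because `configs` is a namespace package whose `__path__` can list several directories (for example under an editable install). A generic sketch of the same first-match pattern:

```python
from pathlib import Path

def first_existing(paths, filename):
    """Return the first `filename` found among the given directories."""
    for p in paths:
        candidate = Path(p, filename)
        if candidate.exists():
            return candidate
    return None

# e.g. first_existing(configs.__path__, "INITIAL_MODELS.yaml")
```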

29
ldm/invoke/invokeai_metadata.py Executable file
View File

@@ -0,0 +1,29 @@
#!/usr/bin/env python
import sys
import json
from ldm.invoke.pngwriter import retrieve_metadata
def main():
if len(sys.argv) < 2:
print("Usage: file2prompt.py <file1.png> <file2.png> <file3.png>...")
print("This script opens up the indicated invoke.py-generated PNG file(s) and prints out their metadata.")
exit(-1)
filenames = sys.argv[1:]
for f in filenames:
try:
metadata = retrieve_metadata(f)
print(f'{f}:\n',json.dumps(metadata['sd-metadata'], indent=4))
except FileNotFoundError:
sys.stderr.write(f'{f} not found\n')
continue
except PermissionError:
sys.stderr.write(f'{f} could not be opened due to inadequate permissions\n')
continue
if __name__ == "__main__":
try:
main()
except KeyboardInterrupt:
pass

View File

@@ -19,7 +19,7 @@ import warnings
from enum import Enum
from pathlib import Path
from shutil import move, rmtree
from typing import Any, Callable, Optional, Union
from typing import Any, Callable, Optional, Union, List
import safetensors
import safetensors.torch
@@ -46,12 +46,7 @@ class SDLegacyType(Enum):
V2_v = 5
UNKNOWN = 99
DEFAULT_MAX_MODELS = 2
VAE_TO_REPO_ID = { # hack, see note in convert_and_import()
"vae-ft-mse-840000-ema-pruned": "stabilityai/sd-vae-ft-mse",
}
class ModelManager(object):
def __init__(
@@ -108,11 +103,7 @@ class ModelManager(object):
requested_model = self.models[model_name]["model"]
print(f">> Retrieving model {model_name} from system RAM cache")
self.models[model_name]["model"] = self._model_from_cpu(requested_model)
width = self.models[model_name]["width"]
height = self.models[model_name]["height"]
hash = self.models[model_name]["hash"]
else: # we're about to load a new model, so potentially offload the least recently used one
else:
requested_model, width, height, hash = self._load_model(model_name)
self.models[model_name] = {
"model": requested_model,
@@ -123,13 +114,8 @@ class ModelManager(object):
self.current_model = model_name
self._push_newest_model(model_name)
return {
"model": requested_model,
"width": width,
"height": height,
"hash": hash,
}
return self.models[model_name]
def default_model(self) -> str | None:
"""
Returns the name of the default model, or None
@@ -172,9 +158,9 @@ class ModelManager(object):
"""
# if we are converting legacy files automatically, then
# there are no legacy ckpts!
if Globals.ckpt_convert:
return False
info = self.model_info(model_name)
if Globals.ckpt_convert or info.format=='diffusers' or self.is_v2_config(info.config):
return False
if "weights" in info and info["weights"].endswith((".ckpt", ".safetensors")):
return True
return False
@@ -382,6 +368,20 @@ class ModelManager(object):
# check whether this is a v2 file and force conversion
convert = Globals.ckpt_convert or self.is_v2_config(config)
if matching_config := self._scan_for_matching_file(Path(weights),suffixes=['.yaml']):
print(f' | Using external config file {matching_config}')
config = matching_config
# get the path to the custom vae, if any
vae_path = None
# first we use whatever is in the config file
if vae:
path = Path(vae if os.path.isabs(vae) else os.path.normpath(os.path.join(Globals.root, vae)))
if path.exists():
vae_path = path
# then we look for a file with the same basename
vae_path = vae_path or self._scan_for_matching_file(Path(weights))
# if converting automatically to diffusers, then we do the conversion and return
# a diffusers pipeline
if convert:
@@ -390,15 +390,18 @@ class ModelManager(object):
)
from ldm.invoke.ckpt_to_diffuser import load_pipeline_from_original_stable_diffusion_ckpt
self.offload_model(self.current_model)
if vae_config := self._choose_diffusers_vae(model_name):
vae = self._load_vae(vae_config)
try:
if self.list_models()[self.current_model]['status'] == 'active':
self.offload_model(self.current_model)
except Exception:
pass
if self._has_cuda():
torch.cuda.empty_cache()
pipeline = load_pipeline_from_original_stable_diffusion_ckpt(
checkpoint_path=weights,
original_config_file=config,
vae=vae,
vae_path=vae_path,
return_generator_pipeline=True,
precision=torch.float16
if self.precision == "float16"
@@ -453,20 +456,17 @@ class ModelManager(object):
print(" | Using more accurate float32 precision")
# look and load a matching vae file. Code borrowed from AUTOMATIC1111 modules/sd_models.py
if vae:
if not os.path.isabs(vae):
vae = os.path.normpath(os.path.join(Globals.root, vae))
if os.path.exists(vae):
print(f" | Loading VAE weights from: {vae}")
if vae.endswith((".ckpt", ".pt")):
self.scan_model(vae, vae)
vae_ckpt = torch.load(vae, map_location="cpu")
else:
vae_ckpt = safetensors.torch.load_file(vae)
vae_dict = {k: v for k, v in vae_ckpt.items() if k[0:4] != "loss"}
model.first_stage_model.load_state_dict(vae_dict, strict=False)
if vae_path:
print(f" | Loading VAE weights from: {vae_path}")
if vae_path.suffix in [".ckpt", ".pt"]:
self.scan_model(vae_path.name, vae_path)
vae_ckpt = torch.load(vae_path, map_location="cpu")
else:
print(f" | VAE file {vae} not found. Skipping.")
vae_ckpt = safetensors.torch.load_file(vae_path)
vae_dict = {k: v for k, v in vae_ckpt["state_dict"].items() if k[0:4] != "loss"}
model.first_stage_model.load_state_dict(vae_dict, strict=False)
else:
print(" | Using VAE built into model.")
model.to(self.device)
# model.to doesn't change the cond_stage_model.device used to move the tokenizer output, so set it here
@@ -544,6 +544,8 @@ class ModelManager(object):
return pipeline, width, height, model_hash
def is_v2_config(self, config: Path) -> bool:
if not os.path.isabs(config):
config = os.path.join(Globals.root, config)
try:
mconfig = OmegaConf.load(config)
return (
@@ -818,7 +820,6 @@ class ModelManager(object):
print(f" | {thing} appears to be a diffusers file on disk")
model_name = self.import_diffuser_model(
thing,
vae=dict(repo_id="stabilityai/sd-vae-ft-mse"),
model_name=model_name,
description=description,
commit_to_conf=commit_to_conf,
@@ -906,11 +907,11 @@ class ModelManager(object):
)
elif model_type == SDLegacyType.V2:
print(
f"** {thing} is a V2 checkpoint file, but its parameterization cannot be determined. Please provide configuration file path."
f"** {thing} is a V2 checkpoint file, but its parameterization cannot be determined. Please provide the configuration file type or path."
)
else:
print(
f"** {thing} is a legacy checkpoint file but not a known Stable Diffusion model. Please provide configuration file path."
f"** {thing} is a legacy checkpoint file but not a known Stable Diffusion model. Please provide the configuration file type or path."
)
if not model_config_file and config_file_callback:
@@ -922,18 +923,14 @@ class ModelManager(object):
convert = True
print(" | This SD-v2 model will be converted to diffusers format for use")
# look for a custom vae
vae_path = None
for suffix in ["pt", "ckpt", "safetensors"]:
if (model_path.with_suffix(f".vae.{suffix}")).exists():
vae_path = model_path.with_suffix(f".vae.{suffix}")
print(f" | Using VAE file {vae_path.name}")
vae = None if vae_path else dict(repo_id="stabilityai/sd-vae-ft-mse")
if (vae_path := self._scan_for_matching_file(model_path)):
print(f" | Using VAE file {vae_path.name}")
if convert:
diffuser_path = Path(
Globals.root, "models", Globals.converted_ckpts_dir, model_path.stem
)
vae = None if vae_path else dict(repo_id="stabilityai/sd-vae-ft-mse")
model_name = self.convert_and_import(
model_path,
diffusers_path=diffuser_path,
@@ -1006,14 +1003,17 @@ class ModelManager(object):
try:
# By passing the specified VAE to the conversion function, the autoencoder
# will be built into the model rather than tacked on afterward via the config file
vae_model = self._load_vae(vae) if vae else None
vae_model = None
if vae:
    vae_model = self._load_vae(vae)
    vae_path = None
convert_ckpt_to_diffusers(
ckpt_path,
diffusers_path,
extract_ema=True,
original_config_file=original_config_file,
vae=vae_model,
vae_path=str(vae_path) if vae_path else None,
vae_path=vae_path,
scan_needed=scan_needed,
)
print(
@@ -1060,36 +1060,6 @@ class ModelManager(object):
return search_folder, found_models
def _choose_diffusers_vae(
self, model_name: str, vae: str = None
) -> Union[dict, str]:
# In the event that the original entry is using a custom ckpt VAE, we try to
# map that VAE onto a diffuser VAE using a hard-coded dictionary.
# I would prefer to do this differently: We load the ckpt model into memory, swap the
# VAE in memory, and then pass that to convert_ckpt_to_diffusers() so that the swapped
# VAE is built into the model. However, when I tried this I got obscure key errors.
if vae:
return vae
if model_name in self.config and (
vae_ckpt_path := self.model_info(model_name).get("vae", None)
):
vae_basename = Path(vae_ckpt_path).stem
diffusers_vae = None
if diffusers_vae := VAE_TO_REPO_ID.get(vae_basename, None):
print(
f">> {vae_basename} VAE corresponds to known {diffusers_vae} diffusers version"
)
vae = {"repo_id": diffusers_vae}
else:
print(
f'** Custom VAE "{vae_basename}" found, but corresponding diffusers model unknown'
)
print(
'** Using "stabilityai/sd-vae-ft-mse"; If this isn\'t right, please edit the model config'
)
vae = {"repo_id": "stabilityai/sd-vae-ft-mse"}
return vae
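The removed helper's mapping strategy, for illustration. The table entry shown is an assumption for the sketch; the real `VAE_TO_REPO_ID` table lives elsewhere in `model_manager.py`:

```python
from pathlib import Path

# hypothetical excerpt; the actual table is defined elsewhere in the module
VAE_TO_REPO_ID = {"vae-ft-mse-840000-ema-pruned": "stabilityai/sd-vae-ft-mse"}

vae_ckpt_path = "models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt"
vae_basename = Path(vae_ckpt_path).stem
# fall back to the default diffusers VAE when the basename is unknown
repo_id = VAE_TO_REPO_ID.get(vae_basename, "stabilityai/sd-vae-ft-mse")
print(repo_id)  # stabilityai/sd-vae-ft-mse
```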
def _make_cache_room(self) -> None:
num_loaded_models = len(self.models)
if num_loaded_models >= self.max_loaded_models:
@@ -1351,6 +1321,22 @@ class ModelManager(object):
f.write(hash)
return hash
@classmethod
def _scan_for_matching_file(
    cls,
    model_path: Path,
    suffixes: List[str] = [".vae.pt", ".vae.ckpt", ".vae.safetensors"],
) -> Optional[Path]:
    """
    Find a file with the same basename as the indicated model, but with
    one of the suffixes passed. Returns None when no such file exists.
    """
# look for a custom vae
vae_path = None
for suffix in suffixes:
if model_path.with_suffix(suffix).exists():
vae_path = model_path.with_suffix(suffix)
return vae_path
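A usage sketch (paths hypothetical): because `Path.with_suffix` swaps the final `.ckpt` for each multi-part suffix in turn, a sibling file named after the model is picked up automatically.

```python
from pathlib import Path

model_path = Path("models/analog-diffusion-1.0.ckpt")  # hypothetical checkpoint
for suffix in (".vae.pt", ".vae.ckpt", ".vae.safetensors"):
    # e.g. models/analog-diffusion-1.0.vae.pt
    print(model_path.with_suffix(suffix))
```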
def _load_vae(self, vae_config) -> AutoencoderKL:
vae_args = {}
try:
@@ -1362,7 +1348,7 @@ class ModelManager(object):
using_fp16 = self.precision == "float16"
vae_args.update(
cache_dir=global_cache_dir("hug"),
cache_dir=global_cache_dir("hub"),
local_files_only=not Globals.internet_available,
)

View File

@@ -19,7 +19,7 @@ from functools import partial
from tqdm import tqdm
from torchvision.utils import make_grid
from pytorch_lightning.utilities.distributed import rank_zero_only
from omegaconf import ListConfig
from omegaconf import ListConfig, OmegaConf
import urllib
from ldm.modules.textual_inversion_manager import TextualInversionManager
@@ -609,6 +609,7 @@ class DDPM(pl.LightningModule):
opt = torch.optim.AdamW(params, lr=lr)
return opt
class LatentDiffusion(DDPM):
"""main class"""
@@ -617,7 +618,7 @@ class LatentDiffusion(DDPM):
self,
first_stage_config,
cond_stage_config,
personalization_config,
personalization_config=None,
num_timesteps_cond=None,
cond_stage_key='image',
cond_stage_trainable=False,
@@ -675,7 +676,8 @@ class LatentDiffusion(DDPM):
self.model.train = disabled_train
for param in self.model.parameters():
param.requires_grad = False
personalization_config = personalization_config or self._fallback_personalization_config()
self.embedding_manager = self.instantiate_embedding_manager(
personalization_config, self.cond_stage_model
)
@@ -2150,6 +2152,25 @@ class LatentDiffusion(DDPM):
self.emb_ckpt_counter += 500
@classmethod
def _fallback_personalization_config(cls) -> dict:
"""
This protects us against custom legacy config files that
don't contain the personalization_config section.
"""
return OmegaConf.create(
dict(
target='ldm.modules.embedding_manager.EmbeddingManager',
params=dict(
placeholder_strings=['*'],
initializer_words=['sculpture'],
per_image_tokens=False,
num_vectors_per_token=1,
progressive_words=False,
)
)
)
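A sketch of how the fallback slots in when loading a legacy config. The YAML fragment is hypothetical; `_fallback_personalization_config` is the method above:

```python
from omegaconf import OmegaConf

# hypothetical legacy .yaml lacking a personalization_config section
legacy = OmegaConf.create("""
model:
  params:
    cond_stage_key: txt
""")
personalization = legacy.model.params.get("personalization_config")
if personalization is None:
    personalization = LatentDiffusion._fallback_personalization_config()
print(personalization.target)  # ldm.modules.embedding_manager.EmbeddingManager
```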
class DiffusionWrapper(pl.LightningModule):
def __init__(self, diff_model_config, conditioning_key):

View File

@@ -463,6 +463,9 @@ class FrozenCLIPEmbedder(AbstractEncoder):
def encode(self, text, **kwargs):
return self(text, **kwargs)
def set_textual_inversion_manager(self, manager):  # manager: TextualInversionManager
self.textual_inversion_manager = manager
@property
def device(self):
return self.transformer.device
@@ -476,10 +479,6 @@ class WeightedFrozenCLIPEmbedder(FrozenCLIPEmbedder):
fragment_weights_key = "fragment_weights"
return_tokens_key = "return_tokens"
def set_textual_inversion_manager(self, manager): #TextualInversionManager):
# TODO all of the weighting and expanding stuff needs be moved out of this class
self.textual_inversion_manager = manager
def forward(self, text: list, **kwargs):
# TODO all of the weighting and expanding stuff needs be moved out of this class
'''

View File

@@ -1,9 +1,9 @@
import os
import traceback
from dataclasses import dataclass
from pathlib import Path
from typing import Optional, Union
import safetensors.torch
import torch
from picklescan.scanner import scan_file_path
from transformers import CLIPTextModel, CLIPTokenizer
@@ -71,21 +71,6 @@ class TextualInversionManager(BaseTextualInversionManager):
if str(ckpt_path).endswith(".DS_Store"):
return
try:
scan_result = scan_file_path(str(ckpt_path))
if scan_result.infected_files == 1:
print(
f"\n### Security Issues Found in Model: {scan_result.issues_count}"
)
print("### For your safety, InvokeAI will not load this embed.")
return
except Exception:
print(
f"### {ckpt_path.parents[0].name}/{ckpt_path.name} is damaged or corrupt."
)
return
embedding_info = self._parse_embedding(str(ckpt_path))
if embedding_info is None:
@@ -96,7 +81,7 @@ class TextualInversionManager(BaseTextualInversionManager):
!= embedding_info["token_dim"]
):
print(
f"** Notice: {ckpt_path.parents[0].name}/{ckpt_path.name} was trained on a model with an incompatible token dimension: {self.text_encoder.get_input_embeddings().weight.data[0].shape[0]} vs {embedding_info['token_dim']}."
f" ** Notice: {ckpt_path.parents[0].name}/{ckpt_path.name} was trained on a model with an incompatible token dimension: {self.text_encoder.get_input_embeddings().weight.data[0].shape[0]} vs {embedding_info['token_dim']}."
)
return
@@ -309,92 +294,72 @@ class TextualInversionManager(BaseTextualInversionManager):
return token_id
def _parse_embedding(self, embedding_file: str):
file_type = embedding_file.split(".")[-1]
if file_type == "pt":
return self._parse_embedding_pt(embedding_file)
elif file_type == "bin":
return self._parse_embedding_bin(embedding_file)
else:
print(f"** Notice: unrecognized embedding file format: {embedding_file}")
def _parse_embedding(self, embedding_file: str) -> dict:
suffix = Path(embedding_file).suffix
try:
if suffix in [".pt",".ckpt",".bin"]:
scan_result = scan_file_path(embedding_file)
if scan_result.infected_files == 1:
print(
f" ** Security Issues Found in Model: {scan_result.issues_count}"
)
print(" ** For your safety, InvokeAI will not load this embed.")
return
ckpt = torch.load(embedding_file,map_location="cpu")
else:
ckpt = safetensors.torch.load_file(embedding_file)
except Exception as e:
print(f" ** Notice: unrecognized embedding file format: {embedding_file}: {e}")
return None
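The load policy above in brief: pickle-based formats can execute code on load, so they are malware-scanned before `torch.load`, while safetensors files are pure data and are loaded directly. A compressed sketch, assuming the same `scan_file_path` import used by this module:

```python
from pathlib import Path

import safetensors.torch
import torch
from picklescan.scanner import scan_file_path

def safe_load_embedding(path: str):
    # pickle formats can execute code on load, so scan them first
    if Path(path).suffix in (".pt", ".ckpt", ".bin"):
        if scan_file_path(path).infected_files:
            return None  # refuse to load a flagged file
        return torch.load(path, map_location="cpu")
    # safetensors is a pure-data format; no scan needed
    return safetensors.torch.load_file(path)
```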
def _parse_embedding_pt(self, embedding_file):
embedding_ckpt = torch.load(embedding_file, map_location="cpu")
embedding_info = {}
# Check if valid embedding file
if "string_to_token" and "string_to_param" in embedding_ckpt:
# Catch variants that do not have the expected keys or values.
try:
embedding_info["name"] = embedding_ckpt["name"] or os.path.basename(
os.path.splitext(embedding_file)[0]
)
# Check num of embeddings and warn user only the first will be used
embedding_info["num_of_embeddings"] = len(
embedding_ckpt["string_to_token"]
)
if embedding_info["num_of_embeddings"] > 1:
print(">> More than 1 embedding found. Will use the first one")
embedding = list(embedding_ckpt["string_to_param"].values())[0]
except (AttributeError, KeyError):
return self._handle_broken_pt_variants(embedding_ckpt, embedding_file)
embedding_info["embedding"] = embedding
embedding_info["num_vectors_per_token"] = embedding.size()[0]
embedding_info["token_dim"] = embedding.size()[1]
try:
embedding_info["trained_steps"] = embedding_ckpt["step"]
embedding_info["trained_model_name"] = embedding_ckpt[
"sd_checkpoint_name"
]
embedding_info["trained_model_checksum"] = embedding_ckpt[
"sd_checkpoint"
]
except AttributeError:
print(">> No Training Details Found. Passing ...")
# .pt files found at https://cyberes.github.io/stable-diffusion-textual-inversion-models/
# They are actually .bin files
elif len(embedding_ckpt.keys()) == 1:
embedding_info = self._parse_embedding_bin(embedding_file)
# try to figure out what kind of embedding file it is and parse accordingly
keys = list(ckpt.keys())
if all(x in keys for x in ['string_to_token','string_to_param','name','step']):
return self._parse_embedding_v1(ckpt, embedding_file) # example rem_rezero.pt
elif all(x in keys for x in ['string_to_token','string_to_param']):
return self._parse_embedding_v2(ckpt, embedding_file) # example midj-strong.pt
elif 'emb_params' in keys:
return self._parse_embedding_v3(ckpt, embedding_file) # example easynegative.safetensors
else:
print(">> Invalid embedding format")
embedding_info = None
return self._parse_embedding_v4(ckpt, embedding_file) # usually a '.bin' file
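The dispatch rule in one place: a summary helper we name `classify_embedding` for illustration only, with the examples taken from the inline comments above.

```python
def classify_embedding(ckpt: dict) -> str:
    """Mirror of the key-signature dispatch in _parse_embedding (illustrative only)."""
    keys = set(ckpt.keys())
    if {"string_to_token", "string_to_param", "name", "step"} <= keys:
        return "v1"  # e.g. rem_rezero.pt
    if {"string_to_token", "string_to_param"} <= keys:
        return "v2"  # e.g. midj-strong.pt
    if "emb_params" in keys:
        return "v3"  # e.g. easynegative.safetensors
    return "v4"      # usually a diffusers-trained .bin: {token: tensor}
```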
def _parse_embedding_v1(self, embedding_ckpt: dict, file_path: str):
basename = Path(file_path).stem
print(f' | Loading v1 embedding file: {basename}')
embedding_info = {}
embedding_info["name"] = embedding_ckpt["name"]
# Check num of embeddings and warn user only the first will be used
embedding_info["num_of_embeddings"] = len(
embedding_ckpt["string_to_token"]
)
if embedding_info["num_of_embeddings"] > 1:
print(" | More than 1 embedding found. Will use the first one")
embedding = list(embedding_ckpt["string_to_param"].values())[0]
embedding_info["embedding"] = embedding
embedding_info["num_vectors_per_token"] = embedding.size()[0]
embedding_info["token_dim"] = embedding.size()[1]
embedding_info["trained_steps"] = embedding_ckpt["step"]
embedding_info["trained_model_name"] = embedding_ckpt[
"sd_checkpoint_name"
]
embedding_info["trained_model_checksum"] = embedding_ckpt[
"sd_checkpoint"
]
return embedding_info
def _parse_embedding_bin(self, embedding_file):
embedding_ckpt = torch.load(embedding_file, map_location="cpu")
embedding_info = {}
if list(embedding_ckpt.keys()) == 0:
print(">> Invalid concepts file")
embedding_info = None
else:
for token in list(embedding_ckpt.keys()):
embedding_info["name"] = (
token
or f"<{os.path.basename(os.path.splitext(embedding_file)[0])}>"
)
embedding_info["embedding"] = embedding_ckpt[token]
embedding_info[
"num_vectors_per_token"
] = 1 # All Concepts seem to default to 1
embedding_info["token_dim"] = embedding_info["embedding"].size()[0]
return embedding_info
def _handle_broken_pt_variants(
self, embedding_ckpt: dict, embedding_file: str
def _parse_embedding_v2(
self, embedding_ckpt: dict, file_path: str
) -> dict:
"""
This handles the broken .pt file variants. We only know of one at present.
This handles embedding .pt file variant #2.
"""
basename = Path(file_path).stem
print(f' | Loading v2 embedding file: {basename}')
embedding_info = {}
if isinstance(
list(embedding_ckpt["string_to_token"].values())[0], torch.Tensor
@@ -403,7 +368,7 @@ class TextualInversionManager(BaseTextualInversionManager):
embedding_info["name"] = (
token
if token != "*"
else f"<{os.path.basename(os.path.splitext(embedding_file)[0])}>"
else f"<{basename}>"
)
embedding_info["embedding"] = embedding_ckpt[
"string_to_param"
@@ -413,7 +378,46 @@ class TextualInversionManager(BaseTextualInversionManager):
].shape[0]
embedding_info["token_dim"] = embedding_info["embedding"].size()[1]
else:
print(">> Invalid embedding format")
print(f" ** {basename}: Unrecognized embedding format")
embedding_info = None
return embedding_info
def _parse_embedding_v3(self, embedding_ckpt: dict, file_path: str):
"""
Parse 'version 3' of the .pt textual inversion embedding files.
"""
basename = Path(file_path).stem
print(f' | Loading v3 embedding file: {basename}')
embedding_info = {}
embedding_info["name"] = f'<{basename}>'
embedding_info["num_of_embeddings"] = 1
embedding = embedding_ckpt['emb_params']
embedding_info["embedding"] = embedding
embedding_info["num_vectors_per_token"] = embedding.size()[0]
embedding_info["token_dim"] = embedding.size()[1]
return embedding_info
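For concreteness, a v3-style file can be reproduced like this (file name and shape values are assumptions; SD-1.x uses token_dim 768):

```python
import safetensors.torch
import torch

# a v3 embedding is a single 'emb_params' tensor of shape
# (num_vectors_per_token, token_dim)
safetensors.torch.save_file(
    {"emb_params": torch.zeros(8, 768)}, "example-negative.safetensors"
)
ckpt = safetensors.torch.load_file("example-negative.safetensors")
print(ckpt["emb_params"].shape)  # torch.Size([8, 768])
```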
def _parse_embedding_v4(self, embedding_ckpt: dict, filepath: str):
"""
Parse 'version 4' of the textual inversion embedding files. This one
is usually associated with .bin files trained by HuggingFace diffusers.
"""
basename = Path(filepath).stem
short_path = Path(filepath).parents[0].name+'/'+Path(filepath).name
print(f' | Loading v4 embedding file: {short_path}')
embedding_info = {}
if len(embedding_ckpt.keys()) == 0:
print(f" ** Invalid embeddings file: {short_path}")
embedding_info = None
else:
for token in list(embedding_ckpt.keys()):
embedding_info["name"] = (
token
or f"<{basename}>"
)
embedding_info["embedding"] = embedding_ckpt[token]
embedding_info["num_vectors_per_token"] = 1 # All Concepts seem to default to 1
embedding_info["token_dim"] = embedding_info["embedding"].size()[0]
return embedding_info
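And a v4 (diffusers-style) file for comparison (token name and dimension are assumptions):

```python
import torch

# a v4 .bin maps a single trigger token to a 1-D tensor of length token_dim
torch.save({"<my-concept>": torch.zeros(768)}, "my-concept.bin")
ckpt = torch.load("my-concept.bin", map_location="cpu")
token, embedding = next(iter(ckpt.items()))
print(token, embedding.shape)  # <my-concept> torch.Size([768])
```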

View File

@@ -122,6 +122,7 @@ requires-python = ">=3.9, <3.11"
"invokeai-ti" = "ldm.invoke.training.textual_inversion:main"
"invokeai-update" = "ldm.invoke.config.invokeai_update:main"
"invokeai-batch" = "ldm.invoke.dynamic_prompts:main"
"invokeai-metadata" = "ldm.invoke.invokeai_metadata:main"
[project.urls]
"Bug Reports" = "https://github.com/invoke-ai/InvokeAI/issues"