mirror of https://github.com/invoke-ai/InvokeAI.git
synced 2026-01-24 05:58:02 -05:00

Compare commits: v2.3.3-rc5 ... v2.3.3 (20 commits)

| SHA1 |
|---|
| fd74f51384 |
| 1e5a44a474 |
| 78ea5d773d |
| 7547784e98 |
| e82641d5f9 |
| 993baadc22 |
| ccfb0b94b9 |
| 352805d607 |
| 4145e27ce6 |
| 3d4f4b677f |
| 249173faf5 |
| 794ef868af |
| a1ed22517f |
| 3765ee9b59 |
| 46e578e1ef |
| 3a8ef0a00c |
| cf262dd2ea |
| b0b0c48d8a |
| 8404e06d77 |
| a91d01c27a |

@@ -1,5 +1,5 @@
 ---
-title: Concepts Library
+title: Styles and Subjects
 ---

 # :material-library-shelves: The Hugging Face Concepts Library and Importing Textual Inversion files

@@ -20,6 +20,8 @@ title: Overview

 Scriptable access to InvokeAI's features.

+- [Visual Manual for InvokeAI](https://docs.google.com/presentation/d/e/2PACX-1vSE90aC7bVVg0d9KXVMhy-Wve-wModgPFp7AGVTOCgf4xE03SnV24mjdwldolfCr59D_35oheHe4Cow/pub?start=false&loop=true&delayms=60000) (contributed by Statcomm)
+
 - Image Generation
 - [Prompt Engineering](PROMPTS.md)

docs/index.md (154 changed lines)

@@ -142,6 +142,10 @@ This method is recommended for those familiar with running Docker containers

 - [WebUI overview](features/WEB.md)
 - [WebUI hotkey reference guide](features/WEBUIHOTKEYS.md)
 - [WebUI Unified Canvas for Img2Img, inpainting and outpainting](features/UNIFIED_CANVAS.md)
+- [Visual Manual for InvokeAI v2.3.1](https://docs.google.com/presentation/d/e/2PACX-1vSE90aC7bVVg0d9KXVMhy-Wve-wModgPFp7AGVTOCgf4xE03SnV24mjdwldolfCr59D_35oheHe4Cow/pub?start=false&loop=true&delayms=60000) (contributed by Statcomm)

 <!-- separator -->

 <!-- separator -->

 ### The InvokeAI Command Line Interface

@@ -165,7 +169,7 @@ This method is recommended for those familiar with running Docker containers

 - [Installing](installation/050_INSTALLING_MODELS.md)
 - [Model Merging](features/MODEL_MERGING.md)
-- [Style/Subject Concepts and Embeddings](features/CONCEPTS.md)
+- [Adding custom styles and subjects via embeddings](features/CONCEPTS.md)
 - [Textual Inversion](features/TEXTUAL_INVERSION.md)
 - [Not Safe for Work (NSFW) Checker](features/NSFW.md)
 <!-- separator -->

@@ -177,6 +181,154 @@ This method is recommended for those familiar with running Docker containers

## :octicons-log-16: Latest Changes

### v2.3.3 <small>(29 March 2023)</small>

#### Bug Fixes

1. When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
2. Textual inversion will select an appropriate batch size based on whether `xformers` is active, and will default to `xformers` enabled if the library is detected.
3. The batch script log file names have been fixed to be compatible with Windows.
4. Occasional corruption of the `.next_prefix` file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
5. An infinite loop when opening the developer's console from within the `invoke.sh` script has been corrected.

#### Enhancements

1. It is now possible to load and run several community-contributed SD-2.0 based models, including the infamous "Illuminati" model.
2. The "NegativePrompts" embedding file, and others like it, can now be loaded by placing it in the InvokeAI `embeddings` directory.
3. If no `--model` is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
4. On Linux systems, the `invoke.sh` launcher now uses a prettier console-based interface. To take advantage of it, install the `dialog` package using your package manager (e.g. `sudo apt install dialog`).
5. When loading legacy models (safetensors/ckpt), you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model, following this example:

```
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt # or my-favorite-model.vae.safetensors
```
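
The lookup behind this convention is simple basename matching. Here is a minimal Python sketch of the idea, modeled on the `_scan_for_matching_file` helper added to `ModelManager` later in this diff (the suffix list comes from that helper; the function name here is just for illustration):

```python
from pathlib import Path
from typing import List, Optional

def find_companion_file(
    model_path: Path,
    suffixes: List[str] = ['.vae.pt', '.vae.ckpt', '.vae.safetensors'],
) -> Optional[Path]:
    """Return a file sharing model_path's basename with one of the given suffixes."""
    for suffix in suffixes:
        candidate = model_path.with_suffix(suffix)  # swaps '.ckpt' for e.g. '.vae.pt'
        if candidate.exists():
            return candidate
    return None

# find_companion_file(Path('my-favorite-model.ckpt')) -> Path('my-favorite-model.vae.pt')
# if that file exists alongside the model, otherwise None.
```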

### v2.3.2 <small>(13 March 2023)</small>

#### Bugfixes

Since version 2.3.1 the following bugs have been fixed:

1. Black images appearing for potential NSFW images when generating with legacy checkpoint models and both `--no-nsfw_checker` and `--ckpt_convert` turned on.
2. Black images appearing when generating from models fine-tuned on Stable-Diffusion-2-1-base. When importing V2-derived models, you may be asked to select whether the model was derived from a "base" model (512 pixels) or the 768-pixel SD-2.1 model.
3. The "Use All" button was not restoring the Hi-Res Fix setting on the WebUI.
4. When using the model installer console app, models failed to import correctly when importing from directories with spaces in their names. A similar issue with the output directory was also fixed.
5. Crashes that occurred during model merging.
6. Restore previous naming of Stable Diffusion base and 768 models.
7. Upgraded to latest versions of `diffusers`, `transformers`, `safetensors` and `accelerate` libraries upstream. We hope that this will fix the `assertion NDArray > 2**32` issue that macOS users have had when generating images larger than 768x768 pixels. Please report back.

As part of the upgrade to `diffusers`, the location of the diffusers-based models has changed from `models/diffusers` to `models/hub`. When you launch InvokeAI for the first time, it will prompt you to OK a one-time move. This should be quick and harmless, but if you have modified your `models/diffusers` directory in some way, for example using symlinks, you may wish to cancel the migration and make appropriate adjustments.

#### New "Invokeai-batch" script

2.3.2 introduces a new command-line-only script called `invokeai-batch` that can be used to generate hundreds of images from prompts and settings that vary systematically. This can be used to try the same prompt across multiple combinations of models, steps, CFG settings and so forth. It also allows you to template prompts and generate a combinatorial list like:

```
a shack in the mountains, photograph
a shack in the mountains, watercolor
a shack in the mountains, oil painting
a chalet in the mountains, photograph
a chalet in the mountains, watercolor
a chalet in the mountains, oil painting
a shack in the desert, photograph
...
```

If you have a system with multiple GPUs, or a single GPU with lots of VRAM, you can parallelize generation across the combinatorial set, reducing wait times and using your system's resources efficiently (make sure you have good GPU cooling).

To try `invokeai-batch` out, launch the "developer's console" using the `invoke` launcher script, or activate the invokeai virtual environment manually. From the console, give the command `invokeai-batch --help` in order to learn how the script works and create your first template file for dynamic prompt generation.
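
As an illustration of the combinatorial expansion shown above, here is a plain-Python sketch of the idea (this is not `invokeai-batch`'s actual template format, which `invokeai-batch --help` documents; the variable names are illustrative):

```python
from itertools import product

subjects = ['a shack', 'a chalet']
places = ['in the mountains', 'in the desert']
styles = ['photograph', 'watercolor', 'oil painting']

# Expand every combination of template fields into a concrete prompt.
prompts = [f'{subject} {place}, {style}'
           for subject, place, style in product(subjects, places, styles)]

for prompt in prompts:
    print(prompt)  # 2 * 2 * 3 = 12 prompts in total
```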

### v2.3.1 <small>(26 February 2023)</small>

This is primarily a bugfix release, but it does provide several new features that will improve the user experience.

#### Enhanced support for model management

InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.

There are three ways of accessing the model management features:

1. ***From the WebUI***, click on the cube to the right of the model selection menu. This will bring up a form that allows you to import models individually from your local disk or scan a directory for models to import.



2. **Using the Model Installer App**

Choose option (5) _download and install models_ from the `invoke` launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or import checkpoint, safetensors, and diffusers models from a local disk or the internet. The example below shows importing two checkpoint URLs from popular SD sites and a HuggingFace diffusers model using its Repository ID. It also shows how to designate a folder to be scanned at startup time for new models to import.

Command-line users can start this app using the command `invokeai-model-install`.



3. **Using the Command Line Client (CLI)**

The `!install_model` and `!convert_model` commands have been enhanced to allow entering of URLs and local directories to scan and import. The first command installs .ckpt and .safetensors files as-is. The second one converts them into the faster diffusers format before installation.

Internally InvokeAI is able to probe the contents of a .ckpt or .safetensors file to distinguish among v1.x, v2.x and inpainting models. This means that you do **not** need to include "inpaint" in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into a diffusers model the first time you use them.
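
For example, a hypothetical CLI session that imports one checkpoint as-is and converts another to diffusers format might look like this (the paths are placeholders, not real models):

```
invoke> !install_model /home/me/models/my-sd15-model.safetensors
invoke> !convert_model /home/me/models/my-other-model.ckpt
```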

Please see [INSTALLING MODELS](https://invoke-ai.github.io/InvokeAI/installation/050_INSTALLING_MODELS/) for more information on model management.

#### An Improved Installer Experience

The installer now launches a console-based UI for setting and changing commonly-used startup options:



After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching `invoke.sh`/`invoke.bat` and entering option (6) _change InvokeAI startup options_.

Command-line users can launch the new configure app using `invokeai-configure`.

This release also comes with a renewed updater. To do an update without going through a whole reinstallation, launch `invoke.sh` or `invoke.bat` and choose option (9) _update InvokeAI_. This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or to any released or unreleased version you choose by selecting the tag or branch of the desired version.



Command-line users can run this interface by typing `invokeai-update` (the summary table below lists it as the software updater).

#### Image Symmetry Options

There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting _Symmetry_ from the image generation settings, or within the CLI by using the options `--h_symmetry_time_pct` and `--v_symmetry_time_pct` (these can be abbreviated to `--h_sym` and `--v_sym` like all other options).


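
As a concrete illustration, a CLI invocation that turns on horizontal mirroring halfway through a 50-step generation might look like this (the prompt and the 0.5 value are illustrative, not a recommendation):

```
invoke> "a symmetrical art deco facade" -s 50 --h_symmetry_time_pct 0.5
```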

#### A New Unified Canvas Look

This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select _Use Canvas Beta Layout_:



Refresh the screen and go to the Unified Canvas (left side of screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself:



#### Model conversion and merging within the WebUI

The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the `invoke.sh`/`invoke.bat` scripts.

#### An easier way to contribute translations to the WebUI

We have migrated our translation efforts to [Weblate](https://hosted.weblate.org/engage/invokeai/), a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief [translation guide](https://github.com/invoke-ai/InvokeAI/blob/v2.3.1/docs/other/TRANSLATION.md) for more information on how to contribute.

#### Numerous internal bugfixes and performance improvements

This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to `diffusers 0.13.0`, and using the `compel` library for prompt parsing. See [Detailed Change Log](#full-change-log) for a detailed list of bugs caught and squished.

#### Summary of InvokeAI command line scripts (all accessible via the launcher menu)

| Command                  | Description                                                         |
|--------------------------|---------------------------------------------------------------------|
| `invokeai`               | Command line interface                                              |
| `invokeai --web`         | Web interface                                                       |
| `invokeai-model-install` | Model installer with console forms-based front end                  |
| `invokeai-ti --gui`      | Textual inversion, with a console forms-based front end             |
| `invokeai-merge --gui`   | Model merging, with a console forms-based front end                 |
| `invokeai-configure`     | Startup configuration; can also be used to reinstall support models |
| `invokeai-update`        | InvokeAI software updater                                           |

### v2.3.0 <small>(9 February 2023)</small>

#### Migration to Stable Diffusion `diffusers` models

@@ -77,7 +77,7 @@ machine. To test, open up a terminal window and issue the following
 command:

 ```
-rocm-smi
+rocminfo
 ```

 If you get a table labeled "ROCm System Management Interface" the

@@ -95,9 +95,17 @@ recent version of Ubuntu, 22.04. However, this [community-contributed
 recipe](https://novaspirit.github.io/amdgpu-rocm-ubu22/) is reported
 to work well.

-After installation, please run `rocm-smi` a second time to confirm
+After installation, please run `rocminfo` a second time to confirm
 that the driver is present and the GPU is recognized. You may need to
-do a reboot in order to load the driver.
+do a reboot in order to load the driver. In addition, if you see
+errors relating to your username not being a member of the `render`
+group, you may fix this by adding yourself to this group with the command:
+
+```
+sudo usermod -a -G render myUserName
+```
+
+(Thanks to @EgoringKosmos for the usermod recipe.)
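
Group membership changes only take effect in new login sessions, so after the `usermod` command above, log out and back in (or reboot). Standard Linux commands can confirm the change (these are general shell utilities, not part of InvokeAI):

```
groups              # the list printed for your user should now include "render"
getent group render # shows the members of the render group
```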

### Linux Install with a ROCm-docker Container

@@ -23,14 +23,16 @@ We thank them for all of their time and hard work.

 * @damian0815 - Attention Systems and Gameplay Engineer
 * @mauwii (Matthias Wild) - Continuous integration and product maintenance engineer
 * @Netsvetaev (Artur Netsvetaev) - UI/UX Developer
 * @tildebyte - General gadfly and resident (self-appointed) know-it-all
 * @keturn - Lead for Diffusers port
 * @ebr (Eugene Brodsky) - Cloud/DevOps/Software engineer; your friendly neighbourhood cluster-autoscaler
 * @jpphoto (Jonathan Pollack) - Inference and rendering engine optimization
 * @genomancer (Gregg Helt) - Model training and merging
+* @gogurtenjoyer - User support and testing
+* @whosawwhatsis - User support and testing

 ## **Contributions by**

 - [tildebyte](https://github.com/tildebyte)
 - [Sean McLellan](https://github.com/Oceanswave)
 - [Kevin Gibbons](https://github.com/bakkot)
 - [Tesseract Cat](https://github.com/TesseractCat)

@@ -78,6 +80,7 @@ We thank them for all of their time and hard work.

 - [psychedelicious](https://github.com/psychedelicious)
 - [damian0815](https://github.com/damian0815)
 - [Eugene Brodsky](https://github.com/ebr)
+- [Statcomm](https://github.com/statcomm)

 ## **Original CompVis Authors**

@@ -242,8 +242,8 @@ class InvokeAiInstance:
         from plumbum import FG, local

         # Note that we're installing pinned versions of torch and
-        # torchvision here, which may not correspond to what is
-        # in pyproject.toml. This is a hack to prevent torch 2.0 from
+        # torchvision here, which *should* correspond to what is
+        # in pyproject.toml. This is to prevent torch 2.0 from
         # being installed and immediately uninstalled and replaced with 1.13
         pip = local[self.pip]

@@ -252,7 +252,7 @@ class InvokeAiInstance:
             "install",
             "--require-virtualenv",
             "torch~=1.13.1",
-            "torchvision>=0.14.1",
+            "torchvision~=0.14.1",
             "--force-reinstall",
             "--find-links" if find_links is not None else None,
             find_links,

@@ -384,7 +384,7 @@ class InvokeAiInstance:
         os.chmod(dest, 0o0755)

         if OS == "Linux":
-            shutil.copy(Path(__file__).parent / '..' / "templates" / "dialogrc", self.runtime / '.dialogrc')
+            shutil.copy(Path(__file__).parents[1] / "templates" / "dialogrc", self.runtime / '.dialogrc')

     def update(self):
         pass
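
Two details are worth noting in the hunks above. First, `~=0.14.1` is PEP 440's compatible-release operator: unlike `>=0.14.1`, it keeps torchvision within the 0.14.x series. A quick check of that semantics (using the `packaging` library, which pip itself builds on):

```python
from packaging.specifiers import SpecifierSet

spec = SpecifierSet("~=0.14.1")
print("0.14.9" in spec)  # True  - any later 0.14.x patch release is acceptable
print("0.15.0" in spec)  # False - a minor-version jump is excluded
```

Second, `Path(__file__).parents[1]` resolves to the same directory as `parent / '..'` but without embedding a literal `..` segment in the resulting path.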

@@ -976,7 +976,7 @@ class Generate:
         self.generators = {}

         seed_everything(random.randrange(0, np.iinfo(np.uint32).max))
-        if self.embedding_path is not None:
+        if self.embedding_path and not model_data.get("ti_embeddings_loaded"):
             print(f'>> Loading embeddings from {self.embedding_path}')
             for root, _, files in os.walk(self.embedding_path):
                 for name in files:

@@ -984,9 +984,10 @@ class Generate:
                     self.model.textual_inversion_manager.load_textual_inversion(
                         ti_path, defer_injecting_tokens=True
                     )
-            print(
-                f'>> Textual inversion triggers: {", ".join(sorted(self.model.textual_inversion_manager.get_all_trigger_strings()))}'
-            )
+            model_data["ti_embeddings_loaded"] = True
+            print(
+                f'>> Textual inversion triggers: {", ".join(sorted(self.model.textual_inversion_manager.get_all_trigger_strings()))}'
+            )

         self.model_name = model_name
         self._set_sampler()  # requires self.model_name to be set first
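
The pattern in this hunk is a memoization guard: a boolean flag stashed in the per-model cache entry so that the embeddings directory is only walked once per model, rather than on every cache hit. A stripped-down sketch of the idea (names are illustrative, not InvokeAI's API):

```python
def ensure_embeddings_loaded(model_data: dict, embedding_path: str) -> None:
    # Skip the expensive directory walk if it already ran for this model.
    if not embedding_path or model_data.get("ti_embeddings_loaded"):
        return
    # ... walk embedding_path and load each textual-inversion file ...
    model_data["ti_embeddings_loaded"] = True  # remember for subsequent calls
```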

@@ -776,14 +776,10 @@ def convert_model(model_name_or_path: Union[Path, str], gen, opt, completer):
         original_config_file = Path(model_info["config"])
         model_name = model_name_or_path
         model_description = model_info["description"]
-        vae = model_info.get("vae")
+        vae_path = model_info.get("vae")
     else:
         print(f"** {model_name_or_path} is not a legacy .ckpt weights file")
         return
-    if vae and (vae_repo := ldm.invoke.model_manager.VAE_TO_REPO_ID.get(Path(vae).stem)):
-        vae_repo = dict(repo_id=vae_repo)
-    else:
-        vae_repo = None
     model_name = manager.convert_and_import(
         ckpt_path,
         diffusers_path=Path(

@@ -792,7 +788,7 @@ def convert_model(model_name_or_path: Union[Path, str], gen, opt, completer):
         model_name=model_name,
         model_description=model_description,
         original_config_file=original_config_file,
-        vae=vae_repo,
+        vae_path=vae_path,
     )
 else:
     try:

@@ -838,6 +834,7 @@ def edit_model(model_name: str, gen, opt, completer):
     print(f"\n>> Editing model {model_name} from configuration file {opt.conf}")
     new_name = _get_model_name(manager.list_models(), completer, model_name)

+    completer.complete_extensions(('.yaml','.ckpt','.safetensors','.pt'))
     for attribute in info.keys():
         if type(info[attribute]) != str:
             continue

@@ -845,6 +842,7 @@ def edit_model(model_name: str, gen, opt, completer):
             continue
         completer.set_line(info[attribute])
         info[attribute] = input(f"{attribute}: ") or info[attribute]
+    completer.complete_extensions(None)

     if info["format"] == "diffusers":
         vae = info.get("vae", dict(repo_id=None, path=None, subfolder=None))

@@ -1353,7 +1351,7 @@ def do_version_update(root_version: version.Version, app_version: Union[str, ver
     if sys.platform == "linux":
         print('>> Downloading new version of launcher script and its config file')
         from ldm.util import download_with_progress_bar
-        url_base = f'https://raw.githubusercontent.com/invoke-ai/InvokeAI/release/v{str(app_version)}/installer/templates/'
+        url_base = f'https://raw.githubusercontent.com/invoke-ai/InvokeAI/v{str(app_version)}/installer/templates/'

     dest = Path(Globals.root,'invoke.sh.in')
     assert download_with_progress_bar(url_base+'invoke.sh.in',dest)
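
Expanding the f-string makes the fix concrete. Assuming the release tag is named `v2.3.3` (the version this compare produces), the corrected `url_base` resolves to

```
https://raw.githubusercontent.com/invoke-ai/InvokeAI/v2.3.3/installer/templates/
```

whereas the old version pointed at a `release/v2.3.3` ref instead.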

@@ -1,2 +1,2 @@
-__version__='2.3.3-rc5'
+__version__='2.3.3'

@@ -1037,10 +1037,10 @@ def convert_open_clip_checkpoint(checkpoint):
     return text_model

 def replace_checkpoint_vae(checkpoint, vae_path:str):
-    if vae_path.endswith(".safetensors"):
-        vae_ckpt = load_file(vae_path)
-    else:
+    if Path(vae_path).suffix in ['.pt','.ckpt']:
         vae_ckpt = torch.load(vae_path, map_location="cpu")
+    else:
+        vae_ckpt = load_file(vae_path)
     state_dict = vae_ckpt['state_dict'] if "state_dict" in vae_ckpt else vae_ckpt
     for vae_key in state_dict:
         new_key = f'first_stage_model.{vae_key}'

@@ -16,6 +16,8 @@ from rich.text import Text
 from ldm.invoke import __version__

 INVOKE_AI_SRC="https://github.com/invoke-ai/InvokeAI/archive"
+INVOKE_AI_TAG="https://github.com/invoke-ai/InvokeAI/archive/refs/tags"
+INVOKE_AI_BRANCH="https://github.com/invoke-ai/InvokeAI/archive/refs/heads"
 INVOKE_AI_REL="https://api.github.com/repos/invoke-ai/InvokeAI/releases"

 OS = platform.uname().system

@@ -41,7 +43,8 @@ def welcome(versions: dict):
     yield '[bold yellow]Options:'
     yield f'''[1] Update to the latest official release ([italic]{versions[0]['tag_name']}[/italic])
 [2] Update to the bleeding-edge development version ([italic]main[/italic])
-[3] Manually enter the tag or branch name you wish to update'''
+[3] Manually enter the [bold]tag name[/bold] for the version you wish to update to
+[4] Manually enter the [bold]branch name[/bold] for the version you wish to update to'''

     console.rule()
     print(

@@ -62,17 +65,26 @@ def main():
     welcome(versions)

     tag = None
-    choice = Prompt.ask('Choice:',choices=['1','2','3'],default='1')
+    branch = None
+    release = None
+    choice = Prompt.ask('Choice:',choices=['1','2','3','4'],default='1')

     if choice=='1':
-        tag = versions[0]['tag_name']
+        release = versions[0]['tag_name']
     elif choice=='2':
-        tag = 'main'
+        release = 'main'
     elif choice=='3':
-        tag = Prompt.ask('Enter an InvokeAI tag or branch name')
+        tag = Prompt.ask('Enter an InvokeAI tag name')
+    elif choice=='4':
+        branch = Prompt.ask('Enter an InvokeAI branch name')

-    print(f':crossed_fingers: Upgrading to [yellow]{tag}[/yellow]')
-    cmd = f'pip install {INVOKE_AI_SRC}/{tag}.zip --use-pep517 --upgrade'
+    print(f':crossed_fingers: Upgrading to [yellow]{tag if tag else release}[/yellow]')
+    if release:
+        cmd = f'pip install {INVOKE_AI_SRC}/{release}.zip --use-pep517 --upgrade'
+    elif tag:
+        cmd = f'pip install {INVOKE_AI_TAG}/{tag}.zip --use-pep517 --upgrade'
+    else:
+        cmd = f'pip install {INVOKE_AI_BRANCH}/{branch}.zip --use-pep517 --upgrade'
     print('')
     if os.system(cmd)==0:
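
Tracing the branches makes the URL scheme clear: GitHub serves release and branch zipballs from different archive paths. Substituting into the f-strings above (with `v2.3.3` and `main` as example values for this compare):

```
# choice 1 (latest official release):
pip install https://github.com/invoke-ai/InvokeAI/archive/v2.3.3.zip --use-pep517 --upgrade

# choice 4 (a named branch):
pip install https://github.com/invoke-ai/InvokeAI/archive/refs/heads/main.zip --use-pep517 --upgrade
```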

@@ -29,7 +29,13 @@ Model_dir = "models"
 Weights_dir = "ldm/stable-diffusion-v1/"

 # the initial "configs" dir is now bundled in the `invokeai.configs` package
-Dataset_path = Path(configs.__path__[0]) / "INITIAL_MODELS.yaml"
+Dataset_path = None
+for path in configs.__path__:
+    file = Path(path, "INITIAL_MODELS.yaml")
+    if file.exists():
+        Dataset_path = file
+        break
+assert Dataset_path, f"Could not find the file INITIAL_MODELS.yaml in {configs.__path__}"

 # initial models omegaconf
 Datasets = None

@@ -19,7 +19,7 @@ import warnings
 from enum import Enum
 from pathlib import Path
 from shutil import move, rmtree
-from typing import Any, Callable, Optional, Union
+from typing import Any, Callable, Optional, Union, List

 import safetensors
 import safetensors.torch

@@ -46,12 +46,7 @@ class SDLegacyType(Enum):
     V2_v = 5
     UNKNOWN = 99

 DEFAULT_MAX_MODELS = 2
-VAE_TO_REPO_ID = {  # hack, see note in convert_and_import()
-    "vae-ft-mse-840000-ema-pruned": "stabilityai/sd-vae-ft-mse",
-}

 class ModelManager(object):
     def __init__(

@@ -108,11 +103,7 @@ class ModelManager(object):
             requested_model = self.models[model_name]["model"]
             print(f">> Retrieving model {model_name} from system RAM cache")
             self.models[model_name]["model"] = self._model_from_cpu(requested_model)
-            width = self.models[model_name]["width"]
-            height = self.models[model_name]["height"]
-            hash = self.models[model_name]["hash"]

-        else:  # we're about to load a new model, so potentially offload the least recently used one
+        else:
             requested_model, width, height, hash = self._load_model(model_name)
             self.models[model_name] = {
                 "model": requested_model,

@@ -123,13 +114,8 @@ class ModelManager(object):

         self.current_model = model_name
         self._push_newest_model(model_name)
-        return {
-            "model": requested_model,
-            "width": width,
-            "height": height,
-            "hash": hash,
-        }
+        return self.models[model_name]

     def default_model(self) -> str | None:
         """
         Returns the name of the default model, or None

@@ -382,6 +368,20 @@ class ModelManager(object):
         # check whether this is a v2 file and force conversion
         convert = Globals.ckpt_convert or self.is_v2_config(config)

+        if matching_config := self._scan_for_matching_file(Path(weights), suffixes=['.yaml']):
+            print(f'   | Using external config file {matching_config}')
+            config = matching_config
+
+        # get the path to the custom vae, if any
+        vae_path = None
+        # first we use whatever is in the config file
+        if vae:
+            path = Path(vae if os.path.isabs(vae) else os.path.normpath(os.path.join(Globals.root, vae)))
+            if path.exists():
+                vae_path = path
+        # then we look for a file with the same basename
+        vae_path = vae_path or self._scan_for_matching_file(Path(weights))

         # if converting automatically to diffusers, then we do the conversion and return
         # a diffusers pipeline
         if convert:

@@ -390,15 +390,18 @@ class ModelManager(object):
         )
         from ldm.invoke.ckpt_to_diffuser import load_pipeline_from_original_stable_diffusion_ckpt

-        self.offload_model(self.current_model)
-        if vae_config := self._choose_diffusers_vae(model_name):
-            vae = self._load_vae(vae_config)
+        try:
+            if self.list_models()[self.current_model]['status'] == 'active':
+                self.offload_model(self.current_model)
+        except Exception:
+            pass

         if self._has_cuda():
             torch.cuda.empty_cache()
         pipeline = load_pipeline_from_original_stable_diffusion_ckpt(
             checkpoint_path=weights,
             original_config_file=config,
-            vae=vae,
+            vae_path=vae_path,
             return_generator_pipeline=True,
             precision=torch.float16
             if self.precision == "float16"

@@ -453,20 +456,17 @@ class ModelManager(object):
             print("   | Using more accurate float32 precision")

         # look and load a matching vae file. Code borrowed from AUTOMATIC1111 modules/sd_models.py
-        if vae:
-            if not os.path.isabs(vae):
-                vae = os.path.normpath(os.path.join(Globals.root, vae))
-            if os.path.exists(vae):
-                print(f"   | Loading VAE weights from: {vae}")
-                if vae.endswith((".ckpt", ".pt")):
-                    self.scan_model(vae, vae)
-                    vae_ckpt = torch.load(vae, map_location="cpu")
-                else:
-                    vae_ckpt = safetensors.torch.load_file(vae)
-                vae_dict = {k: v for k, v in vae_ckpt.items() if k[0:4] != "loss"}
-                model.first_stage_model.load_state_dict(vae_dict, strict=False)
-            else:
-                print(f"   | VAE file {vae} not found. Skipping.")
+        if vae_path:
+            print(f"   | Loading VAE weights from: {vae_path}")
+            if vae_path.suffix in [".ckpt", ".pt"]:
+                self.scan_model(vae_path.name, vae_path)
+                vae_ckpt = torch.load(vae_path, map_location="cpu")
+            else:
+                vae_ckpt = safetensors.torch.load_file(vae_path)
+            vae_dict = {k: v for k, v in vae_ckpt["state_dict"].items() if k[0:4] != "loss"}
+            model.first_stage_model.load_state_dict(vae_dict, strict=False)
         else:
             print("   | Using VAE built into model.")

         model.to(self.device)
         # model.to doesn't change the cond_stage_model.device used to move the tokenizer output, so set it here

@@ -820,7 +820,6 @@ class ModelManager(object):
         print(f"   | {thing} appears to be a diffusers file on disk")
         model_name = self.import_diffuser_model(
             thing,
-            vae=dict(repo_id="stabilityai/sd-vae-ft-mse"),
             model_name=model_name,
             description=description,
             commit_to_conf=commit_to_conf,

@@ -908,11 +907,11 @@ class ModelManager(object):
         )
     elif model_type == SDLegacyType.V2:
         print(
-            f"** {thing} is a V2 checkpoint file, but its parameterization cannot be determined. Please provide configuration file path."
+            f"** {thing} is a V2 checkpoint file, but its parameterization cannot be determined. Please provide the configuration file type or path."
         )
     else:
         print(
-            f"** {thing} is a legacy checkpoint file but not a known Stable Diffusion model. Please provide configuration file path."
+            f"** {thing} is a legacy checkpoint file but not a known Stable Diffusion model. Please provide the configuration file type or path."
         )

     if not model_config_file and config_file_callback:

@@ -924,18 +923,14 @@ class ModelManager(object):
         convert = True
         print("   | This SD-v2 model will be converted to diffusers format for use")

     # look for a custom vae
-    vae_path = None
-    for suffix in ["pt", "ckpt", "safetensors"]:
-        if (model_path.with_suffix(f".vae.{suffix}")).exists():
-            vae_path = model_path.with_suffix(f".vae.{suffix}")
-            print(f"   | Using VAE file {vae_path.name}")
-    vae = None if vae_path else dict(repo_id="stabilityai/sd-vae-ft-mse")
+    if (vae_path := self._scan_for_matching_file(model_path)):
+        print(f"   | Using VAE file {vae_path.name}")

     if convert:
         diffuser_path = Path(
             Globals.root, "models", Globals.converted_ckpts_dir, model_path.stem
         )
+        vae = None if vae_path else dict(repo_id="stabilityai/sd-vae-ft-mse")
         model_name = self.convert_and_import(
             model_path,
             diffusers_path=diffuser_path,

@@ -1008,14 +1003,17 @@ class ModelManager(object):
     try:
         # By passing the specified VAE to the conversion function, the autoencoder
         # will be built into the model rather than tacked on afterward via the config file
-        vae_model = self._load_vae(vae) if vae else None
+        vae_model = None
+        if vae:
+            vae_model = self._load_vae(vae)
+            vae_path = None
         convert_ckpt_to_diffusers(
             ckpt_path,
             diffusers_path,
             extract_ema=True,
             original_config_file=original_config_file,
             vae=vae_model,
-            vae_path=str(vae_path) if vae_path else None,
+            vae_path=vae_path,
             scan_needed=scan_needed,
         )
         print(

@@ -1062,36 +1060,6 @@ class ModelManager(object):

         return search_folder, found_models

-    def _choose_diffusers_vae(
-        self, model_name: str, vae: str = None
-    ) -> Union[dict, str]:
-        # In the event that the original entry is using a custom ckpt VAE, we try to
-        # map that VAE onto a diffuser VAE using a hard-coded dictionary.
-        # I would prefer to do this differently: We load the ckpt model into memory, swap the
-        # VAE in memory, and then pass that to convert_ckpt_to_diffusers() so that the swapped
-        # VAE is built into the model. However, when I tried this I got obscure key errors.
-        if vae:
-            return vae
-        if model_name in self.config and (
-            vae_ckpt_path := self.model_info(model_name).get("vae", None)
-        ):
-            vae_basename = Path(vae_ckpt_path).stem
-            diffusers_vae = None
-            if diffusers_vae := VAE_TO_REPO_ID.get(vae_basename, None):
-                print(
-                    f">> {vae_basename} VAE corresponds to known {diffusers_vae} diffusers version"
-                )
-                vae = {"repo_id": diffusers_vae}
-            else:
-                print(
-                    f'** Custom VAE "{vae_basename}" found, but corresponding diffusers model unknown'
-                )
-                print(
-                    '** Using "stabilityai/sd-vae-ft-mse"; If this isn\'t right, please edit the model config'
-                )
-                vae = {"repo_id": "stabilityai/sd-vae-ft-mse"}
-        return vae

     def _make_cache_room(self) -> None:
         num_loaded_models = len(self.models)
         if num_loaded_models >= self.max_loaded_models:

@@ -1353,6 +1321,22 @@ class ModelManager(object):
         f.write(hash)
         return hash

+    @classmethod
+    def _scan_for_matching_file(
+        self, model_path: Path,
+        suffixes: List[str] = ['.vae.pt', '.vae.ckpt', '.vae.safetensors']
+    ) -> Path:
+        """
+        Find a file with the same basename as the indicated model, but with one
+        of the suffixes passed.
+        """
+        # look for a custom vae
+        vae_path = None
+        for suffix in suffixes:
+            if model_path.with_suffix(suffix).exists():
+                vae_path = model_path.with_suffix(suffix)
+        return vae_path

     def _load_vae(self, vae_config) -> AutoencoderKL:
         vae_args = {}
         try:

@@ -19,7 +19,7 @@ from functools import partial
 from tqdm import tqdm
 from torchvision.utils import make_grid
 from pytorch_lightning.utilities.distributed import rank_zero_only
-from omegaconf import ListConfig
+from omegaconf import ListConfig, OmegaConf
 import urllib

 from ldm.modules.textual_inversion_manager import TextualInversionManager

@@ -609,6 +609,7 @@ class DDPM(pl.LightningModule):
         opt = torch.optim.AdamW(params, lr=lr)
         return opt


 class LatentDiffusion(DDPM):
     """main class"""

@@ -617,7 +618,7 @@ class LatentDiffusion(DDPM):
     def __init__(
         self,
         first_stage_config,
         cond_stage_config,
-        personalization_config,
+        personalization_config=None,
         num_timesteps_cond=None,
         cond_stage_key='image',
         cond_stage_trainable=False,

@@ -675,7 +676,8 @@ class LatentDiffusion(DDPM):
         self.model.train = disabled_train
         for param in self.model.parameters():
             param.requires_grad = False

+        personalization_config = personalization_config or self._fallback_personalization_config()
         self.embedding_manager = self.instantiate_embedding_manager(
             personalization_config, self.cond_stage_model
         )

@@ -2150,6 +2152,25 @@ class LatentDiffusion(DDPM):

         self.emb_ckpt_counter += 500

+    @classmethod
+    def _fallback_personalization_config(self) -> dict:
+        """
+        This protects us against custom legacy config files that
+        don't contain the personalization_config section.
+        """
+        return OmegaConf.create(
+            dict(
+                target='ldm.modules.embedding_manager.EmbeddingManager',
+                params=dict(
+                    placeholder_strings=list('*'),
+                    initializer_words=list('sculpture'),
+                    per_image_tokens=False,
+                    num_vectors_per_token=1,
+                    progressive_words=False,
+                )
+            )
+        )

 class DiffusionWrapper(pl.LightningModule):
     def __init__(self, diff_model_config, conditioning_key):
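
The fallback works because `OmegaConf.create` turns the nested dict into a `DictConfig` supporting the same attribute-style access as a config parsed from YAML, so downstream code cannot tell the difference. A quick standalone illustration (generic example values, unrelated to InvokeAI):

```python
from omegaconf import OmegaConf

cfg = OmegaConf.create(dict(target='pkg.mod.Class', params=dict(n=1)))
print(cfg.target)    # 'pkg.mod.Class' - attribute access, like a YAML-loaded config
print(cfg.params.n)  # 1 - nested dicts become nested DictConfigs
```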

@@ -463,6 +463,9 @@ class FrozenCLIPEmbedder(AbstractEncoder):
     def encode(self, text, **kwargs):
         return self(text, **kwargs)

+    def set_textual_inversion_manager(self, manager): #TextualInversionManager):
+        self.textual_inversion_manager = manager

     @property
     def device(self):
         return self.transformer.device

@@ -476,10 +479,6 @@ class WeightedFrozenCLIPEmbedder(FrozenCLIPEmbedder):
     fragment_weights_key = "fragment_weights"
     return_tokens_key = "return_tokens"

-    def set_textual_inversion_manager(self, manager): #TextualInversionManager):
-        # TODO all of the weighting and expanding stuff needs be moved out of this class
-        self.textual_inversion_manager = manager

     def forward(self, text: list, **kwargs):
         # TODO all of the weighting and expanding stuff needs be moved out of this class
         '''