Mirror of https://github.com/invoke-ai/InvokeAI.git

Compare commits: onnx-testi...lstein/thr

6 commits:

- d894c86db1
- 399d505801
- ad312fa1ec
- 80ce014b1e
- 1fd053b42d
- da187d6a87
.github/workflows/mkdocs-material.yml (vendored, 2 changes)

@@ -43,7 +43,7 @@ jobs:
            --verbose
      - name: deploy to gh-pages
        if: ${{ github.ref == 'refs/heads/main' }}
        if: ${{ github.ref == 'refs/heads/v2.3' }}
        run: |
          python -m \
            mkdocs gh-deploy \
@@ -617,6 +617,8 @@ sections describe what's new for InvokeAI.
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains for
  backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for [inpainting](deprecated/INPAINTING.md) and
  [outpainting](features/OUTPAINTING.md)
- img2img runs on all k\* samplers
- Support for
  [negative prompts](features/PROMPTS.md#negative-and-unconditioned-prompts)
Binary image file not shown. (Before: 1.1 MiB. After: 983 KiB.)
@@ -76,10 +76,10 @@ From top to bottom, these are:
   with outpainting, and modify interior portions of the image with
   inpainting, erase portions of a starting image and have the AI fill in
   the erased region from a text prompt.
4. Node Editor - this panel allows you to create
4. Workflow Management (not yet implemented) - this panel will allow you to create
   pipelines of common operations and combine them into workflows.
5. Model Manager - this panel allows you to import and configure new
   models using URLs, local paths, or HuggingFace diffusers repo_ids.
5. Training (not yet implemented) - this panel will provide an interface to [textual
   inversion training](TEXTUAL_INVERSION.md) and fine tuning.

The inpainting, outpainting and postprocessing tabs are currently in
development. However, limited versions of their features can already be accessed
@@ -37,7 +37,7 @@ guide also covers optimizing models to load quickly.
  Teach an old model new tricks. Merge 2-3 models together to create a
  new model that combines characteristics of the originals.

## * [Textual Inversion](TRAINING.md)
## * [Textual Inversion](TEXTUAL_INVERSION.md)
  Personalize models by adding your own style or subjects.

# Other Features
@@ -146,6 +146,7 @@ This method is recommended for those familiar with running Docker containers
- [Installing](installation/050_INSTALLING_MODELS.md)
- [Model Merging](features/MODEL_MERGING.md)
- [Style/Subject Concepts and Embeddings](features/CONCEPTS.md)
- [Textual Inversion](features/TEXTUAL_INVERSION.md)
- [Not Safe for Work (NSFW) Checker](features/NSFW.md)
<!-- separator -->
### Prompt Engineering
@@ -354,8 +354,8 @@ experimental versions later.

12. **InvokeAI Options**: You can launch InvokeAI with several different command-line arguments that
    customize its behavior. For example, you can change the location of the
    image output directory or balance memory usage vs performance. See
    [Configuration](../features/CONFIGURATION.md) for a full list of the options.
    image output directory, or select your favorite sampler. See the
    [Command-Line Interface](../features/CLI.md) for a full list of the options.

    - To set defaults that will take effect every time you launch InvokeAI,
      use a text editor (e.g. Notepad) to edit the file
@@ -256,7 +256,7 @@ manager, please follow these steps:

10. Render away!

    Browse the [features](../features/index.md) section to learn about all the
    Browse the [features](../features/CLI.md) section to learn about all the
    things you can do with InvokeAI.
@@ -270,7 +270,7 @@ manager, please follow these steps:

12. Other scripts

    The [Textual Inversion](../features/TRAINING.md) script can be launched with the command:
    The [Textual Inversion](../features/TEXTUAL_INVERSION.md) script can be launched with the command:

    ```bash
    invokeai-ti --gui
    ```
@@ -43,7 +43,24 @@ InvokeAI comes with support for a good set of starter models. You'll
find them listed in the master models file
`configs/INITIAL_MODELS.yaml` in the InvokeAI root directory. The
subset that is currently installed is found in
`configs/models.yaml`.
`configs/models.yaml`. As of v2.3.1, the list of starter models is:

|Model Name | HuggingFace Repo ID | Description | URL |
|---------- | ---------- | ----------- | --- |
|stable-diffusion-1.5|runwayml/stable-diffusion-v1-5|Stable Diffusion version 1.5 diffusers model (4.27 GB)|https://huggingface.co/runwayml/stable-diffusion-v1-5 |
|sd-inpainting-1.5|runwayml/stable-diffusion-inpainting|RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)|https://huggingface.co/runwayml/stable-diffusion-inpainting |
|stable-diffusion-2.1|stabilityai/stable-diffusion-2-1|Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)|https://huggingface.co/stabilityai/stable-diffusion-2-1 |
|sd-inpainting-2.0|stabilityai/stable-diffusion-2-inpainting|Stable Diffusion version 2.0 inpainting model (5.21 GB)|https://huggingface.co/stabilityai/stable-diffusion-2-inpainting |
|analog-diffusion-1.0|wavymulder/Analog-Diffusion|An SD-1.5 model trained on diverse analog photographs (2.13 GB)|https://huggingface.co/wavymulder/Analog-Diffusion |
|deliberate-1.0|XpucT/Deliberate|Versatile model that produces detailed images up to 768px (4.27 GB)|https://huggingface.co/XpucT/Deliberate |
|d&d-diffusion-1.0|0xJustin/Dungeons-and-Diffusion|Dungeons & Dragons characters (2.13 GB)|https://huggingface.co/0xJustin/Dungeons-and-Diffusion |
|dreamlike-photoreal-2.0|dreamlike-art/dreamlike-photoreal-2.0|A photorealistic model trained on 768 pixel images based on SD 1.5 (2.13 GB)|https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0 |
|inkpunk-1.0|Envvi/Inkpunk-Diffusion|Stylized illustrations inspired by Gorillaz, FLCL and Shinkawa; prompt with "nvinkpunk" (4.27 GB)|https://huggingface.co/Envvi/Inkpunk-Diffusion |
|openjourney-4.0|prompthero/openjourney|An SD 1.5 model fine tuned on Midjourney; prompt with "mdjrny-v4 style" (2.13 GB)|https://huggingface.co/prompthero/openjourney |
|portrait-plus-1.0|wavymulder/portraitplus|An SD-1.5 model trained on close range portraits of people; prompt with "portrait+" (2.13 GB)|https://huggingface.co/wavymulder/portraitplus |
|seek-art-mega-1.0|coreco/seek.art_MEGA|A general use SD-1.5 "anything" model that supports multiple styles (2.1 GB)|https://huggingface.co/coreco/seek.art_MEGA |
|trinart-2.0|naclbit/trinart_stable_diffusion_v2|An SD-1.5 model finetuned with ~40K assorted high resolution manga/anime-style images (2.13 GB)|https://huggingface.co/naclbit/trinart_stable_diffusion_v2 |
|waifu-diffusion-1.4|hakurei/waifu-diffusion|An SD-1.5 model trained on 680k anime/manga-style images (2.13 GB)|https://huggingface.co/hakurei/waifu-diffusion |

Note that these files are covered by an "Ethical AI" license which
forbids certain uses. When you initially download them, you are asked
@@ -54,7 +71,8 @@ with the model terms by visiting the URLs in the table above.

## Community-Contributed Models

[HuggingFace](https://huggingface.co/models?library=diffusers)
There are too many to list here and more are being contributed every
day. [HuggingFace](https://huggingface.co/models?library=diffusers)
is a great resource for diffusers models, and is also the home of a
[fast-growing repository](https://huggingface.co/sd-concepts-library)
of embedding (".bin") models that add subjects and/or styles to your
@@ -68,106 +86,310 @@ only `.safetensors` and `.ckpt` models, but they can be easily loaded
into InvokeAI and/or converted into optimized `diffusers` models. Be
aware that CIVITAI hosts many models that generate NSFW content.

!!! note

    InvokeAI 2.3.x does not support directly importing and
    running Stable Diffusion version 2 checkpoint models. You may instead
    convert them into `diffusers` models using the conversion methods
    described below.
## Installation

There are two ways to install and manage models:
There are multiple ways to install and manage models:

1. The `invokeai-model-install` script which will download and install
   them for you. In addition to supporting main models, you can install
   ControlNet, LoRA and Textual Inversion models.
1. The `invokeai-configure` script which will download and install them for you.

2. The web interface (WebUI) has a GUI for importing and managing
2. The command-line tool (CLI) has commands that allow you to import, configure and modify
   model files.

3. The web interface (WebUI) has a GUI for importing and managing
   models.

3. By placing models (or symbolic links to models) inside one of the
   InvokeAI root directory's `autoimport` folders.
### Installation via `invokeai-configure`

### Installation via `invokeai-model-install`
From the `invoke` launcher, choose option (6) "re-run the configure
script to download new models." This will launch the same script that
prompted you to select models at install time. You can use this to add
models that you skipped the first time around. It is all right to
specify a model that was previously downloaded; the script will just
confirm that the files are complete.

From the `invoke` launcher, choose option [5] "Download and install
models." This will launch the same script that prompted you to select
models at install time. You can use this to add models that you
skipped the first time around. It is all right to specify a model that
was previously downloaded; the script will just confirm that the files
are complete.

### Installation via the CLI

The installer has different panels for installing main models from
HuggingFace, models from Civitai and other arbitrary web sites,
ControlNet models, LoRA/LyCORIS models, and Textual Inversion
embeddings. Each section has a text box in which you can enter a new
model to install. You can refer to a model using its:
You can install a new model, including any of the community-supported ones, via
the command-line client's `!import_model` command.

1. Local path to the .ckpt, .safetensors or diffusers folder on your local machine
2. A directory on your machine that contains multiple models
3. A URL that points to a downloadable model
4. A HuggingFace repo id
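
For instance, any of the following forms would be accepted in the installer's text box (the paths and URL are illustrative, reusing the examples from elsewhere in this page; only the last line names a real repo id):

```bash
/home/fred/Downloads/martians.safetensors
/home/fred/Downloads/civitai_models/
https://example.org/sd_models/martians.safetensors
runwayml/stable-diffusion-v1-5
```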

#### Installing individual `.ckpt` and `.safetensors` models

Previously-installed models are shown with checkboxes. Uncheck a box
to unregister the model from InvokeAI. Models that are physically
installed inside the InvokeAI root directory will be deleted and
purged (after a confirmation warning). Models that are located outside
the InvokeAI root directory will be unregistered but not deleted.
If the model is already downloaded to your local disk, use
`!import_model /path/to/file.ckpt` to load it. For example:

Note: The installer script uses a console-based text interface that requires
significant amounts of horizontal and vertical space. If the display
looks messed up, just enlarge the terminal window and/or relaunch the
script.

If you wish, you can script model addition and deletion, as well as
listing installed models. Start the "developer's console" and give the
command `invokeai-model-install --help`. This will give you a series
of command-line parameters that will let you control model
installation. Examples:

```bash
# (list all controlnet models)
invokeai-model-install --list controlnet

# (install the model at the indicated URL)
invokeai-model-install --add http://civitai.com/2860

# (delete the named model)
invokeai-model-install --delete sd-1/main/analog-diffusion
```

```bash
invoke> !import_model C:/Users/fred/Downloads/martians.safetensors
```
### Installation via the Web GUI
!!! tip "Forward Slashes"
    On Windows systems, use forward slashes rather than backslashes
    in your file paths.
    If you do use backslashes,
    you must double them like this:
    `C:\\Users\\fred\\Downloads\\martians.safetensors`

To install a new model using the Web GUI, do the following:
Alternatively, you can directly import the file using its URL:

1. Open the InvokeAI Model Manager (cube at the bottom of the
   left-hand panel) and navigate to *Import Models*
```bash
invoke> !import_model https://example.org/sd_models/martians.safetensors
```

2. In the field labeled *Location* type in the path to the model you
   wish to install. You may use a URL, HuggingFace repo id, or a path on
   your local disk.
For this to work, the URL must not be password-protected. Otherwise
you will receive a 404 error.

3. Alternatively, the *Scan for Models* button allows you to paste in
   the path to a folder somewhere on your machine. It will be scanned for
   importable models and prompt you to add the ones of your choice.
When you import a legacy model, the CLI will first ask you what type
of model this is. You can indicate whether it is a model based on
Stable Diffusion 1.x (1.4 or 1.5), one based on Stable Diffusion 2.x,
or a 1.x inpainting model. Be careful to indicate the correct model
type, or it will not load correctly. You can correct the model type
after the fact using the `!edit_model` command.

4. Press *Add Model* and wait for confirmation that the model
   was added.
The system will then ask you a few other questions about the model,
including what size image it was trained on (usually 512x512), what
name and description you wish to use for it, and whether you would
like to install a custom VAE (variable autoencoder) file for the
model. For recent models, the answer to the VAE question is usually
"no," but it won't hurt to answer "yes".

To delete a model, select *Model Manager* to list all the currently
installed models. Press the trash can icons to delete any models you
wish to get rid of. Models whose weights are located inside the
InvokeAI `models` directory will be purged from disk, while those
located outside will be unregistered from InvokeAI, but not deleted.
After importing, the model will load. If this is successful, you will
be asked if you want to keep the model loaded in memory to start
generating immediately. You'll also be asked if you wish to make this
the default model on startup. You can change this later using
`!edit_model`.

You can see where model weights are located by clicking on the model name.
This will bring up an editable info panel showing the model's characteristics,
including the `Model Location` of its files.
#### Importing a batch of `.ckpt` and `.safetensors` models from a directory

### Installation via the `autoimport` function
You may also point `!import_model` to a directory containing a set of
`.ckpt` or `.safetensors` files. They will be imported _en masse_.

In the InvokeAI root directory you will find a series of folders under
`autoimport`, one each for main models, controlnets, embeddings and
LoRAs. Any models that you add to these directories will be scanned
at startup time and registered automatically.
!!! example

You may create symbolic links from these folders to models located
elsewhere on disk and they will be autoimported. You can also create
subfolders and organize them as you wish.

    ```console
    invoke> !import_model C:/Users/fred/Downloads/civitai_models/
    ```

The location of the autoimport directories is controlled by settings
in `invokeai.yaml`. See [Configuration](../features/CONFIGURATION.md).
You will be given the option to import all models found in the
directory, or select which ones to import. If there are subfolders
within the directory, they will be searched for models to import.
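
As a concrete illustration of the autoimport settings mentioned above, a 3.x-era `invokeai.yaml` might carry a stanza along these lines. This is a minimal sketch; the key names are assumptions based on the 3.x configuration schema, not taken from this diff, so confirm them against your own file:

```yaml
InvokeAI:
  Paths:
    autoimport_dir: autoimport/main
    lora_dir: autoimport/lora
    embedding_dir: autoimport/embedding
    controlnet_dir: autoimport/controlnet
```

Relative paths like these are resolved against the InvokeAI root directory, and symbolic links placed inside the folders are picked up by the startup scan just like regular files.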

#### Installing `diffusers` models

You can install a `diffusers` model from the HuggingFace site using
`!import_model` and the HuggingFace repo_id for the model:

```bash
invoke> !import_model andite/anything-v4.0
```

Alternatively, you can download the model to disk and import it from
there. The model may be distributed as a ZIP file, or as a Git
repository:

```bash
invoke> !import_model C:/Users/fred/Downloads/andite--anything-v4.0
```

!!! tip "The CLI supports file path autocompletion"
    Type a bit of the path name and hit ++tab++ in order to get a choice of
    possible completions.

!!! tip "On Windows, you can drag model files onto the command-line"
    Once you have typed in `!import_model `, you can drag the
    model file or directory onto the command-line to insert the model path. This way, you don't need to
    type it or copy/paste. However, you will need to reverse or
    double backslashes as noted above.

Before installing, the CLI will ask you for a short name and
description for the model, whether to make this the default model that
is loaded at InvokeAI startup time, and whether to replace its
VAE. Generally the answer to the latter question is "no".

### Converting legacy models into `diffusers`

The CLI `!convert_model` command will convert a `.safetensors` or `.ckpt`
model file into `diffusers` and install it. This will enable the model
to load and run faster without loss of image quality.

The usage is identical to `!import_model`. You may point the command
to either a downloaded model file on disk, or to a (non-password
protected) URL:

```bash
invoke> !convert_model C:/Users/fred/Downloads/martians.safetensors
```

After a successful conversion, the CLI will offer you the option of
deleting the original `.ckpt` or `.safetensors` file.

### Optimizing a previously-installed model

Lastly, if you have previously installed a `.ckpt` or `.safetensors`
file and wish to convert it into a `diffusers` model, you can do this
without re-downloading and converting the original file using the
`!optimize_model` command. Simply pass the short name of an existing
installed model:

```bash
invoke> !optimize_model martians-v1.0
```

The model will be converted into `diffusers` format and replace the
previously installed version. You will again be offered the
opportunity to delete the original `.ckpt` or `.safetensors` file.

### Related CLI Commands

There are a whole series of additional model management commands in
the CLI that you can read about in [Command-Line
Interface](../features/CLI.md). These include:

* `!models` - List all installed models
* `!switch <model name>` - Switch to the indicated model
* `!edit_model <model name>` - Edit the indicated model to change its name, description or other properties
* `!del_model <model name>` - Delete the indicated model
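
Put together, a short housekeeping session might chain these commands as follows (the model name is illustrative, reusing the example above; no command output is shown):

```bash
invoke> !models
invoke> !switch martians-v1.0
invoke> !edit_model martians-v1.0
invoke> !del_model martians-v1.0
```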

### Manually editing `configs/models.yaml`

If you are comfortable with a text editor then you may simply edit `models.yaml`
directly.

You will need to download the desired `.ckpt/.safetensors` file and
place it somewhere on your machine's filesystem. Alternatively, for a
`diffusers` model, record the repo_id or download the whole model
directory. Then using a **text** editor (e.g. the Windows Notepad
application), open the file `configs/models.yaml`, and add a new
stanza that follows one of these models:

#### A legacy model

A legacy `.ckpt` or `.safetensors` entry will look like this:

```yaml
arabian-nights-1.0:
  description: A great fine-tune in Arabian Nights style
  weights: ./path/to/arabian-nights-1.0.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  format: ckpt
  width: 512
  height: 512
  default: false
```

Note that `format` is `ckpt` for both `.ckpt` and `.safetensors` files.

#### A diffusers model

A stanza for a `diffusers` model will look like this for a HuggingFace
model with a repository ID:

```yaml
arabian-nights-1.1:
  description: An even better fine-tune of the Arabian Nights
  repo_id: captahab/arabian-nights-1.1
  format: diffusers
  default: true
```

And for a downloaded directory:

```yaml
arabian-nights-1.1:
  description: An even better fine-tune of the Arabian Nights
  path: /path/to/captahab-arabian-nights-1.1
  format: diffusers
  default: true
```

There is additional syntax for indicating an external VAE to use with
this model. See `INITIAL_MODELS.yaml` and `models.yaml` for examples.
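
For orientation, in 2.3-era `models.yaml` files the VAE override typically appears as an extra key on the stanza, along these lines. The exact key layout here is an assumption for illustration; check it against the `INITIAL_MODELS.yaml` shipped with your install:

```yaml
arabian-nights-1.1:
  description: An even better fine-tune of the Arabian Nights
  repo_id: captahab/arabian-nights-1.1
  format: diffusers
  vae:
    repo_id: stabilityai/sd-vae-ft-mse
  default: true
```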

After you save the modified `models.yaml` file, relaunch
`invokeai`. The new model will now be available for your use.

### Installation via the WebUI

To access the WebUI Model Manager, click on the button that looks like
a cube in the upper right side of the browser screen. This will bring
up a dialogue that lists the models you have already installed, and
allows you to load, delete or edit them:

<figure markdown>

![]()

</figure>

To add a new model, click on **+ Add New** and select either a
checkpoint/safetensors model, or a diffusers model:

<figure markdown>

![]()

</figure>

In this example, we chose **Add Diffusers**. As shown in the figure
below, a new dialogue prompts you to enter the name to use for the
model, its description, and either the location of the `diffusers`
model on disk, or its Repo ID on the HuggingFace web site. If you
choose to enter a path to disk, the system will autocomplete for you
as you type:

<figure markdown>

![]()

</figure>

Press **Add Model** at the bottom of the dialogue (scrolled out of
sight in the figure), and the model will be downloaded, imported, and
registered in `models.yaml`.

The **Add Checkpoint/Safetensor Model** option is similar, except that
in this case you can choose to scan an entire folder for
checkpoint/safetensors files to import. Simply type in the path of the
directory and press the "Search" icon. This will display the
`.ckpt` and `.safetensors` files found inside the directory and its
subfolders, and allow you to choose which ones to import:

<figure markdown>

![]()

</figure>

## Model Management Startup Options

The `invoke` launcher and the `invokeai` script accept a series of
command-line arguments that modify InvokeAI's behavior when loading
models. These can be provided on the command line, or added to the
InvokeAI root directory's `invokeai.init` initialization file.

The arguments are:

* `--model <model name>` -- Start up with the indicated model loaded
* `--ckpt_convert` -- When a checkpoint/safetensors model is loaded, convert it into a `diffusers` model in memory. This does not permanently save the converted model to disk.
* `--autoconvert <path/to/directory>` -- Scan the indicated directory path for new checkpoint/safetensors files, convert them into `diffusers` models, and import them into InvokeAI.

Here is an example of providing an argument on the command line using
the `invoke.sh` launch script:

```bash
invoke.sh --autoconvert /home/fred/stable-diffusion-checkpoints
```

And here is what the same argument looks like in `invokeai.init`:

```bash
--outdir="/home/fred/invokeai/outputs"
--no-nsfw_checker
--autoconvert /home/fred/stable-diffusion-checkpoints
```
@@ -9,7 +9,6 @@ from fastapi_events.dispatcher import dispatch
from ..services.events import EventServiceBase


class FastAPIEventService(EventServiceBase):
    event_handler_id: int
    __queue: Queue

@@ -28,6 +27,9 @@ class FastAPIEventService(EventServiceBase):
        self.__queue.put(None)

    def dispatch(self, event_name: str, payload: Any) -> None:
        # TODO: Remove next two debugging lines
        from .dependencies import ApiDependencies
        ApiDependencies.invoker.services.logger.debug(f'dispatch {event_name} / {payload}')
        self.__queue.put(dict(event_name=event_name, payload=payload))

    async def __dispatch_from_queue(self, stop_event: threading.Event):
@@ -2,6 +2,7 @@

import pathlib
import threading
from typing import Literal, List, Optional, Union

from fastapi import Body, Path, Query, Response

@@ -127,54 +128,43 @@ async def update_model(
    "/import",
    operation_id="import_model",
    responses= {
        201: {"description" : "The model imported successfully"},
        404: {"description" : "The model could not be found"},
        415: {"description" : "Unrecognized file/folder format"},
        424: {"description" : "The model appeared to import successfully, but could not be found in the model manager"},
        409: {"description" : "There is already a model corresponding to this path or repo_id"},
        200: {"description" : "The path was queued for import"},
    },
    status_code=201,
    response_model=ImportModelResponse
    status_code=200
)
async def import_model(
    location: str = Body(description="A model path, repo_id or URL to import"),
    prediction_type: Optional[Literal['v_prediction','epsilon','sample']] = \
        Body(description='Prediction type for SDv2 checkpoint files', default="v_prediction"),
) -> ImportModelResponse:
    """ Add a model using its local path, repo_id, or remote URL. Model characteristics will be probed and configured automatically """
) -> str:
    """
    Add a model using its local path, repo_id, or remote URL. Model characteristics will be probed and configured automatically.
    This call launches a background thread to process the imported model and always succeeds. Results are reported in the background
    as the following events:
    - model_import_started(import_path:str)
    - model_import_completed(import_path:str, import_info:AddModelResults, success:bool, error:str)
    - download_started(url:str)
    - download_progress(url:str, downloaded_bytes:int, total_bytes:int)
    - download_completed(url:str, status_code:int, download_path:str)
    """

    items_to_import = {location}
    prediction_types = { x.value: x for x in SchedulerPredictionType }
    logger = ApiDependencies.invoker.services.logger
    events = ApiDependencies.invoker.services.events

    try:
        installed_models = ApiDependencies.invoker.services.model_manager.heuristic_import(
            items_to_import = items_to_import,
            prediction_type_helper = lambda x: prediction_types.get(prediction_type)
        )
        info = installed_models.get(location)

        if not info:
            logger.error("Import failed")
            raise HTTPException(status_code=415)

        logger.info(f'Successfully imported {location}, got {info}')
        model_raw = ApiDependencies.invoker.services.model_manager.list_model(
            model_name=info.name,
            base_model=info.base_model,
            model_type=info.model_type
        )
        return parse_obj_as(ImportModelResponse, model_raw)

    except ModelNotFoundException as e:
        import_thread = threading.Thread(target = ApiDependencies.invoker.services.model_manager.heuristic_import,
                                         kwargs = dict(items_to_import = items_to_import,
                                                       prediction_type_helper = lambda x: prediction_types.get(prediction_type),
                                                       event_bus = events,
                                                       )
                                         )
        import_thread.start()
        return 'request queued'
    except Exception as e:
        logger.error(str(e))
        raise HTTPException(status_code=404, detail=str(e))
    except InvalidModelException as e:
        logger.error(str(e))
        raise HTTPException(status_code=415)
    except ValueError as e:
        logger.error(str(e))
        raise HTTPException(status_code=409, detail=str(e))
        raise HTTPException(status_code=500, detail=str(e))
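
For orientation, a client hitting the queued-import variant of this endpoint might look like the sketch below. The route prefix, host, and port are assumptions for illustration; only the request body fields (`location`, `prediction_type`) and the 200 "request queued" behavior come from the code above.

```python
# Hypothetical client for the background-import endpoint sketched above.
import requests

resp = requests.post(
    "http://localhost:9090/api/v1/models/import",  # host/port/prefix assumed
    json={
        "location": "https://example.org/sd_models/martians.safetensors",
        "prediction_type": "epsilon",  # one of: v_prediction, epsilon, sample
    },
)
# The call returns immediately; progress arrives later via the
# model_import_*/download_* events listed in the docstring above.
print(resp.status_code, resp.text)
```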

@models_router.post(
    "/add",
@@ -1,14 +1,6 @@
from typing import Literal, Optional, Union, List, Annotated
from pydantic import BaseModel, Field
import re

from .baseinvocation import BaseInvocation, BaseInvocationOutput, InvocationContext, InvocationConfig
from .model import ClipField

from ...backend.util.devices import torch_dtype
from ...backend.stable_diffusion.diffusion import InvokeAIDiffuserComponent
from ...backend.model_management import BaseModelType, ModelType, SubModelType, ModelPatcher

import torch
from compel import Compel, ReturnedEmbeddingsType
from compel.prompt_parser import (Blend, Conjunction,
@@ -339,8 +331,8 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
    crop_left: int = Field(0, description="")
    target_width: int = Field(1024, description="")
    target_height: int = Field(1024, description="")
    clip: ClipField = Field(None, description="Clip to use")
    clip2: ClipField = Field(None, description="Clip2 to use")
    clip1: ClipField = Field(None, description="Clip to use")
    clip2: ClipField = Field(None, description="Clip to use")

    # Schema customisation
    class Config(InvocationConfig):

@@ -356,7 +348,7 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):

    @torch.no_grad()
    def invoke(self, context: InvocationContext) -> CompelOutput:
        c1, c1_pooled, ec1 = self.run_clip_compel(context, self.clip, self.prompt, False)
        c1, c1_pooled, ec1 = self.run_clip_compel(context, self.clip1, self.prompt, False)
        if self.style.strip() == "":
            c2, c2_pooled, ec2 = self.run_clip_compel(context, self.clip2, self.prompt, True)
        else:
@@ -459,8 +451,8 @@ class SDXLRawPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
    crop_left: int = Field(0, description="")
    target_width: int = Field(1024, description="")
    target_height: int = Field(1024, description="")
    clip: ClipField = Field(None, description="Clip to use")
    clip2: ClipField = Field(None, description="Clip2 to use")
    clip1: ClipField = Field(None, description="Clip to use")
    clip2: ClipField = Field(None, description="Clip to use")

    # Schema customisation
    class Config(InvocationConfig):

@@ -476,7 +468,7 @@ class SDXLRawPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):

    @torch.no_grad()
    def invoke(self, context: InvocationContext) -> CompelOutput:
        c1, c1_pooled, ec1 = self.run_clip_raw(context, self.clip, self.prompt, False)
        c1, c1_pooled, ec1 = self.run_clip_raw(context, self.clip1, self.prompt, False)
        if self.style.strip() == "":
            c2, c2_pooled, ec2 = self.run_clip_raw(context, self.clip2, self.prompt, True)
        else:
@@ -518,8 +518,8 @@ class ImageScaleInvocation(BaseInvocation, PILInvocationConfig):
    type: Literal["img_scale"] = "img_scale"

    # Inputs
    image: Optional[ImageField] = Field(default=None, description="The image to scale")
    scale_factor: Optional[float] = Field(default=2.0, gt=0, description="The factor by which to scale the image")
    image: Optional[ImageField] = Field(default=None, description="The image to scale")
    scale_factor: float = Field(gt=0, description="The factor by which to scale the image")
    resample_mode: PIL_RESAMPLING_MODES = Field(default="bicubic", description="The resampling mode")
    # fmt: on
@@ -22,8 +22,7 @@ from ...backend.stable_diffusion.diffusers_pipeline import (
from ...backend.stable_diffusion.diffusion.shared_invokeai_diffusion import \
    PostprocessingSettings
from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
from ...backend.util.devices import choose_torch_device, torch_dtype
from ...backend.model_management import ModelPatcher
from ...backend.util.devices import torch_dtype
from ..models.image import ImageCategory, ImageField, ResourceOrigin
from .baseinvocation import (BaseInvocation, BaseInvocationOutput,
                             InvocationConfig, InvocationContext)
@@ -54,7 +54,6 @@ class MainModelField(BaseModel):

    model_name: str = Field(description="Name of the model")
    base_model: BaseModelType = Field(description="Base model")
    model_type: ModelType = Field(description="Model Type")


class LoRAModelField(BaseModel):

@@ -222,9 +221,6 @@ class LoraLoaderInvocation(BaseInvocation):
        base_model = self.lora.base_model
        lora_name = self.lora.model_name

        # TODO: ui rewrite
        base_model = BaseModelType.StableDiffusion1

        if not context.services.model_manager.model_exists(
            base_model=base_model,
            model_name=lora_name,
@@ -1,591 +0,0 @@
# Copyright (c) 2023 Borisov Sergey (https://github.com/StAlKeR7779)

from contextlib import ExitStack
from typing import List, Literal, Optional, Union

import re
import inspect

from pydantic import BaseModel, Field, validator
import torch
import numpy as np
from diffusers import ControlNetModel, DPMSolverMultistepScheduler
from diffusers.image_processor import VaeImageProcessor
from diffusers.schedulers import SchedulerMixin as Scheduler

from ..models.image import ImageCategory, ImageField, ResourceOrigin
from ...backend.model_management import ONNXModelPatcher
from ...backend.util import choose_torch_device
from .baseinvocation import (BaseInvocation, BaseInvocationOutput,
                             InvocationConfig, InvocationContext)
from .compel import ConditioningField
from .controlnet_image_processors import ControlField
from .image import ImageOutput
from .model import ModelInfo, UNetField, VaeField

from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.backend import BaseModelType, ModelType, SubModelType
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from ...backend.stable_diffusion import PipelineIntermediateState

from tqdm import tqdm
from .model import ClipField
from .latent import LatentsField, LatentsOutput, build_latents_output, get_scheduler, SAMPLER_NAME_VALUES
from .compel import CompelOutput
ORT_TO_NP_TYPE = {
    "tensor(bool)": np.bool_,
    "tensor(int8)": np.int8,
    "tensor(uint8)": np.uint8,
    "tensor(int16)": np.int16,
    "tensor(uint16)": np.uint16,
    "tensor(int32)": np.int32,
    "tensor(uint32)": np.uint32,
    "tensor(int64)": np.int64,
    "tensor(uint64)": np.uint64,
    "tensor(float16)": np.float16,
    "tensor(float)": np.float32,
    "tensor(double)": np.float64,
}

class ONNXPromptInvocation(BaseInvocation):
    type: Literal["prompt_onnx"] = "prompt_onnx"

    prompt: str = Field(default="", description="Prompt")
    clip: ClipField = Field(None, description="Clip to use")

    def invoke(self, context: InvocationContext) -> CompelOutput:
        tokenizer_info = context.services.model_manager.get_model(
            **self.clip.tokenizer.dict(),
        )
        text_encoder_info = context.services.model_manager.get_model(
            **self.clip.text_encoder.dict(),
        )
        with tokenizer_info as orig_tokenizer,\
             text_encoder_info as text_encoder,\
             ExitStack() as stack:

            #loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.clip.loras]
            loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]

            ti_list = []
            for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", self.prompt):
                name = trigger[1:-1]
                try:
                    ti_list.append(
                        #stack.enter_context(
                        #    context.services.model_manager.get_model(
                        #        model_name=name,
                        #        base_model=self.clip.text_encoder.base_model,
                        #        model_type=ModelType.TextualInversion,
                        #    )
                        #)
                        context.services.model_manager.get_model(
                            model_name=name,
                            base_model=self.clip.text_encoder.base_model,
                            model_type=ModelType.TextualInversion,
                        ).context.model
                    )
                except Exception:
                    #print(e)
                    #import traceback
                    #print(traceback.format_exc())
                    print(f"Warn: trigger: \"{trigger}\" not found")

            with ONNXModelPatcher.apply_lora_text_encoder(text_encoder, loras),\
                 ONNXModelPatcher.apply_ti(orig_tokenizer, text_encoder, ti_list) as (tokenizer, ti_manager):

                text_encoder.create_session()

                # copy from
                # https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L153
                text_inputs = tokenizer(
                    self.prompt,
                    padding="max_length",
                    max_length=tokenizer.model_max_length,
                    truncation=True,
                    return_tensors="np",
                )
                text_input_ids = text_inputs.input_ids
                """
                untruncated_ids = tokenizer(prompt, padding="max_length", return_tensors="np").input_ids

                if not np.array_equal(text_input_ids, untruncated_ids):
                    removed_text = self.tokenizer.batch_decode(
                        untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
                    )
                    logger.warning(
                        "The following part of your input was truncated because CLIP can only handle sequences up to"
                        f" {self.tokenizer.model_max_length} tokens: {removed_text}"
                    )
                """

                prompt_embeds = text_encoder(input_ids=text_input_ids.astype(np.int32))[0]

                text_encoder.release_session()

        conditioning_name = f"{context.graph_execution_state_id}_{self.id}_conditioning"

        # TODO: hacky but works ;D maybe rename latents somehow?
        context.services.latents.save(conditioning_name, (prompt_embeds, None))

        return CompelOutput(
            conditioning=ConditioningField(
                conditioning_name=conditioning_name,
            ),
        )
# Text to image
class ONNXTextToLatentsInvocation(BaseInvocation):
    """Generates latents from conditionings."""

    type: Literal["t2l_onnx"] = "t2l_onnx"

    # Inputs
    # fmt: off
    positive_conditioning: Optional[ConditioningField] = Field(description="Positive conditioning for generation")
    negative_conditioning: Optional[ConditioningField] = Field(description="Negative conditioning for generation")
    noise: Optional[LatentsField] = Field(description="The noise to use")
    steps: int = Field(default=10, gt=0, description="The number of steps to use to generate the image")
    cfg_scale: Union[float, List[float]] = Field(default=7.5, ge=1, description="The Classifier-Free Guidance, higher values may result in a result closer to the prompt", )
    scheduler: SAMPLER_NAME_VALUES = Field(default="euler", description="The scheduler to use" )
    unet: UNetField = Field(default=None, description="UNet submodel")
    #control: Union[ControlField, list[ControlField]] = Field(default=None, description="The control to use")
    #seamless: bool = Field(default=False, description="Whether or not to generate an image that can tile without seams", )
    #seamless_axes: str = Field(default="", description="The axes to tile the image on, 'x' and/or 'y'")
    # fmt: on

    @validator("cfg_scale")
    def ge_one(cls, v):
        """validate that all cfg_scale values are >= 1"""
        if isinstance(v, list):
            for i in v:
                if i < 1:
                    raise ValueError('cfg_scale must be greater than 1')
        else:
            if v < 1:
                raise ValueError('cfg_scale must be greater than 1')
        return v

    # Schema customisation
    class Config(InvocationConfig):
        schema_extra = {
            "ui": {
                "tags": ["latents"],
                "type_hints": {
                    "model": "model",
                    # "cfg_scale": "float",
                    "cfg_scale": "number"
                }
            },
        }

    # based on
    # https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L375
    def invoke(self, context: InvocationContext) -> LatentsOutput:
        c, _ = context.services.latents.get(self.positive_conditioning.conditioning_name)
        uc, _ = context.services.latents.get(self.negative_conditioning.conditioning_name)
        graph_execution_state = context.services.graph_execution_manager.get(
            context.graph_execution_state_id)
        source_node_id = graph_execution_state.prepared_source_mapping[self.id]
        if isinstance(c, torch.Tensor):
            c = c.cpu().numpy()
        if isinstance(uc, torch.Tensor):
            uc = uc.cpu().numpy()
        device = torch.device(choose_torch_device())
        prompt_embeds = np.concatenate([uc, c])

        latents = context.services.latents.get(self.noise.latents_name)
        if isinstance(latents, torch.Tensor):
            latents = latents.cpu().numpy()

        # TODO: better execution device handling
        latents = latents.astype(np.float16)

        # get the initial random noise unless the user supplied it
        do_classifier_free_guidance = True
        #latents_dtype = prompt_embeds.dtype
        #latents_shape = (batch_size * num_images_per_prompt, 4, height // 8, width // 8)
        #if latents.shape != latents_shape:
        #    raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")

        scheduler = get_scheduler(
            context=context,
            scheduler_info=self.unet.scheduler,
            scheduler_name=self.scheduler,
        )

        def torch2numpy(latent: torch.Tensor):
            return latent.cpu().numpy()

        def numpy2torch(latent, device):
            return torch.from_numpy(latent).to(device)

        def dispatch_progress(
                self, context: InvocationContext, source_node_id: str,
                intermediate_state: PipelineIntermediateState) -> None:
            stable_diffusion_step_callback(
                context=context,
                intermediate_state=intermediate_state,
                node=self.dict(),
                source_node_id=source_node_id,
            )

        scheduler.set_timesteps(self.steps)
        latents = latents * np.float64(scheduler.init_noise_sigma)

        extra_step_kwargs = dict()
        if "eta" in set(inspect.signature(scheduler.step).parameters.keys()):
            extra_step_kwargs.update(
                eta=0.0,
            )

        unet_info = context.services.model_manager.get_model(**self.unet.unet.dict())

        with unet_info as unet,\
             ExitStack() as stack:

            #loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.unet.loras]
            loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.unet.loras]

            with ONNXModelPatcher.apply_lora_unet(unet, loras):
                # TODO:
                unet.create_session()

                timestep_dtype = next(
                    (input.type for input in unet.session.get_inputs() if input.name == "timestep"), "tensor(float16)"
                )
                timestep_dtype = ORT_TO_NP_TYPE[timestep_dtype]
                import time
                times = []
                for i in tqdm(range(len(scheduler.timesteps))):
                    t = scheduler.timesteps[i]
                    # expand the latents if we are doing classifier free guidance
                    latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents
                    latent_model_input = scheduler.scale_model_input(numpy2torch(latent_model_input, device), t)
                    latent_model_input = latent_model_input.cpu().numpy()

                    # predict the noise residual
                    timestep = np.array([t], dtype=timestep_dtype)
                    start_time = time.time()
                    noise_pred = unet(sample=latent_model_input, timestep=timestep, encoder_hidden_states=prompt_embeds)
                    times.append(time.time() - start_time)
                    noise_pred = noise_pred[0]

                    # perform guidance
                    if do_classifier_free_guidance:
                        noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2)
                        noise_pred = noise_pred_uncond + self.cfg_scale * (noise_pred_text - noise_pred_uncond)

                    # compute the previous noisy sample x_t -> x_t-1
                    scheduler_output = scheduler.step(
                        numpy2torch(noise_pred, device), t, numpy2torch(latents, device), **extra_step_kwargs
                    )
                    latents = torch2numpy(scheduler_output.prev_sample)

                    state = PipelineIntermediateState(
                        run_id= "test",
                        step=i,
                        timestep=timestep,
                        latents=scheduler_output.prev_sample
                    )
                    dispatch_progress(
                        self,
                        context=context,
                        source_node_id=source_node_id,
                        intermediate_state=state
                    )

                    # call the callback, if provided
                    #if callback is not None and i % callback_steps == 0:
                    #    callback(i, t, latents)
                print(times)
                unet.release_session()

        torch.cuda.empty_cache()

        name = f'{context.graph_execution_state_id}__{self.id}'
        context.services.latents.save(name, latents)
        return build_latents_output(latents_name=name, latents=torch.from_numpy(latents))
# Latent to image
class ONNXLatentsToImageInvocation(BaseInvocation):
    """Generates an image from latents."""

    type: Literal["l2i_onnx"] = "l2i_onnx"

    # Inputs
    latents: Optional[LatentsField] = Field(description="The latents to generate an image from")
    vae: VaeField = Field(default=None, description="Vae submodel")
    metadata: Optional[CoreMetadata] = Field(default=None, description="Optional core metadata to be written to the image")
    #tiled: bool = Field(default=False, description="Decode latents by overlapping tiles (less memory consumption)")

    # Schema customisation
    class Config(InvocationConfig):
        schema_extra = {
            "ui": {
                "tags": ["latents", "image"],
            },
        }

    def invoke(self, context: InvocationContext) -> ImageOutput:
        latents = context.services.latents.get(self.latents.latents_name)

        if self.vae.vae.submodel != SubModelType.VaeDecoder:
            raise Exception(f"Expected vae_decoder, found: {self.vae.vae.model_type}")

        vae_info = context.services.model_manager.get_model(
            **self.vae.vae.dict(),
        )

        # clear memory as vae decode can request a lot
        torch.cuda.empty_cache()

        with vae_info as vae:
            vae.create_session()

            # copied from
            # https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L427
            latents = 1 / 0.18215 * latents
            # image = self.vae_decoder(latent_sample=latents)[0]
            # it seems like there is a strange result when using a half-precision vae decoder if batch size > 1
            image = np.concatenate(
                [vae(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])]
            )

            image = np.clip(image / 2 + 0.5, 0, 1)
            image = image.transpose((0, 2, 3, 1))
            image = VaeImageProcessor.numpy_to_pil(image)[0]

            vae.release_session()

        torch.cuda.empty_cache()

        image_dto = context.services.images.create(
            image=image,
            image_origin=ResourceOrigin.INTERNAL,
            image_category=ImageCategory.GENERAL,
            node_id=self.id,
            session_id=context.graph_execution_state_id,
            is_intermediate=self.is_intermediate,
            metadata=self.metadata.dict() if self.metadata else None,
        )

        return ImageOutput(
            image=ImageField(image_name=image_dto.image_name),
            width=image_dto.width,
            height=image_dto.height,
        )
class ONNXModelLoaderOutput(BaseInvocationOutput):
    """Model loader output"""

    #fmt: off
    type: Literal["model_loader_output_onnx"] = "model_loader_output_onnx"

    unet: UNetField = Field(default=None, description="UNet submodel")
    clip: ClipField = Field(default=None, description="Tokenizer and text_encoder submodels")
    vae_decoder: VaeField = Field(default=None, description="Vae submodel")
    vae_encoder: VaeField = Field(default=None, description="Vae submodel")
    #fmt: on

class ONNXSD1ModelLoaderInvocation(BaseInvocation):
    """Loading submodels of selected model."""

    type: Literal["sd1_model_loader_onnx"] = "sd1_model_loader_onnx"

    model_name: str = Field(default="", description="Model to load")
    # TODO: precision?

    # Schema customisation
    class Config(InvocationConfig):
        schema_extra = {
            "ui": {
                "tags": ["model", "loader"],
                "type_hints": {
                    "model_name": "model"  # TODO: rename to model_name?
                }
            },
        }

    def invoke(self, context: InvocationContext) -> ONNXModelLoaderOutput:

        model_name = "stable-diffusion-v1-5"
        base_model = BaseModelType.StableDiffusion1

        # TODO: not found exceptions
        if not context.services.model_manager.model_exists(
            model_name=model_name,
            base_model=BaseModelType.StableDiffusion1,
            model_type=ModelType.ONNX,
        ):
            raise Exception(f"Unknown model name: {model_name}!")

        return ONNXModelLoaderOutput(
            unet=UNetField(
                unet=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=ModelType.ONNX,
                    submodel=SubModelType.UNet,
                ),
                scheduler=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=ModelType.ONNX,
                    submodel=SubModelType.Scheduler,
                ),
                loras=[],
            ),
            clip=ClipField(
                tokenizer=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=ModelType.ONNX,
                    submodel=SubModelType.Tokenizer,
                ),
                text_encoder=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=ModelType.ONNX,
                    submodel=SubModelType.TextEncoder,
                ),
                loras=[],
            ),
            vae_decoder=VaeField(
                vae=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=ModelType.ONNX,
                    submodel=SubModelType.VaeDecoder,
                ),
            ),
            vae_encoder=VaeField(
                vae=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=ModelType.ONNX,
                    submodel=SubModelType.VaeEncoder,
                ),
            )
        )
class OnnxModelField(BaseModel):
|
||||
"""Onnx model field"""
|
||||
|
||||
model_name: str = Field(description="Name of the model")
|
||||
base_model: BaseModelType = Field(description="Base model")
|
||||
model_type: ModelType = Field(description="Model Type")
|
||||
|
||||
class OnnxModelLoaderInvocation(BaseInvocation):
|
||||
"""Loads a main model, outputting its submodels."""
|
||||
|
||||
type: Literal["onnx_model_loader"] = "onnx_model_loader"
|
||||
|
||||
model: OnnxModelField = Field(description="The model to load")
|
||||
# TODO: precision?
|
||||
|
||||
# Schema customisation
|
||||
class Config(InvocationConfig):
|
||||
schema_extra = {
|
||||
"ui": {
|
||||
"title": "Onnx Model Loader",
|
||||
"tags": ["model", "loader"],
|
||||
"type_hints": {"model": "model"},
|
||||
},
|
||||
}
|
||||
|
||||
def invoke(self, context: InvocationContext) -> ONNXModelLoaderOutput:
|
||||
base_model = self.model.base_model
|
||||
model_name = self.model.model_name
|
||||
model_type = ModelType.ONNX
|
||||
|
||||
# TODO: not found exceptions
|
||||
if not context.services.model_manager.model_exists(
|
||||
model_name=model_name,
|
||||
base_model=base_model,
|
||||
model_type=model_type,
|
||||
):
|
||||
raise Exception(f"Unknown {base_model} {model_type} model: {model_name}")
|
||||
|
||||
"""
|
||||
if not context.services.model_manager.model_exists(
|
||||
model_name=self.model_name,
|
||||
model_type=SDModelType.Diffusers,
|
||||
submodel=SDModelType.Tokenizer,
|
||||
):
|
||||
raise Exception(
|
||||
f"Failed to find tokenizer submodel in {self.model_name}! Check if model corrupted"
|
||||
)
|
||||
|
||||
if not context.services.model_manager.model_exists(
|
||||
model_name=self.model_name,
|
||||
model_type=SDModelType.Diffusers,
|
||||
submodel=SDModelType.TextEncoder,
|
||||
):
|
||||
raise Exception(
|
||||
f"Failed to find text_encoder submodel in {self.model_name}! Check if model corrupted"
|
||||
)
|
||||
|
||||
if not context.services.model_manager.model_exists(
|
||||
model_name=self.model_name,
|
||||
model_type=SDModelType.Diffusers,
|
||||
submodel=SDModelType.UNet,
|
||||
):
|
||||
raise Exception(
|
||||
f"Failed to find unet submodel from {self.model_name}! Check if model corrupted"
|
||||
)
|
||||
"""
|
||||
|
||||
return ONNXModelLoaderOutput(
|
||||
unet=UNetField(
|
||||
unet=ModelInfo(
|
||||
model_name=model_name,
|
||||
base_model=base_model,
|
||||
model_type=model_type,
|
||||
submodel=SubModelType.UNet,
|
||||
),
|
||||
scheduler=ModelInfo(
|
||||
model_name=model_name,
|
||||
base_model=base_model,
|
||||
model_type=model_type,
|
||||
submodel=SubModelType.Scheduler,
|
||||
),
|
||||
loras=[],
|
||||
),
|
||||
clip=ClipField(
|
||||
tokenizer=ModelInfo(
|
||||
model_name=model_name,
|
||||
base_model=base_model,
|
||||
model_type=model_type,
|
||||
submodel=SubModelType.Tokenizer,
|
||||
),
|
||||
text_encoder=ModelInfo(
|
||||
model_name=model_name,
|
||||
base_model=base_model,
|
||||
model_type=model_type,
|
||||
submodel=SubModelType.TextEncoder,
|
||||
),
|
||||
loras=[],
|
||||
skipped_layers=0,
|
||||
),
|
||||
vae_decoder=VaeField(
|
||||
vae=ModelInfo(
|
||||
model_name=model_name,
|
||||
base_model=base_model,
|
||||
model_type=model_type,
|
||||
submodel=SubModelType.VaeDecoder,
|
||||
),
|
||||
),
|
||||
vae_encoder=VaeField(
|
||||
vae=ModelInfo(
|
||||
model_name=model_name,
|
||||
base_model=base_model,
|
||||
model_type=model_type,
|
||||
submodel=SubModelType.VaeEncoder,
|
||||
),
|
||||
)
|
||||
)
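For orientation, here is a minimal sketch of driving the loader by hand rather than through a graph. The `ctx` object, the `id` value, and the model name are assumptions for illustration; only the field and output names come from the code above.

```python
# Hypothetical harness: invoke the ONNX loader directly.
# `ctx` stands in for a fully populated InvocationContext.
loader = OnnxModelLoaderInvocation(
    id="1",  # node id within a graph; the value is arbitrary here
    model=OnnxModelField(
        model_name="stable-diffusion-v1-5",  # assumed installed ONNX model
        base_model=BaseModelType.StableDiffusion1,
        model_type=ModelType.ONNX,
    ),
)
output = loader.invoke(ctx)
# The output carries handles (ModelInfo), not loaded weights; downstream
# nodes resolve each handle through the model manager cache.
print(output.unet.unet.submodel)  # SubModelType.UNet
```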
@@ -1,6 +1,6 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) & the InvokeAI Team
from pathlib import Path
from typing import Literal, Union
from pathlib import Path, PosixPath
from typing import Literal, Union, cast

import cv2 as cv
import numpy as np
@@ -16,20 +16,19 @@ from .image import ImageOutput

# TODO: Populate this from disk?
# TODO: Use model manager to load?
ESRGAN_MODELS = Literal[
REALESRGAN_MODELS = Literal[
    "RealESRGAN_x4plus.pth",
    "RealESRGAN_x4plus_anime_6B.pth",
    "ESRGAN_SRx4_DF2KOST_official-ff704c30.pth",
    "RealESRGAN_x2plus.pth",
]


class ESRGANInvocation(BaseInvocation):
class RealESRGANInvocation(BaseInvocation):
    """Upscales an image using RealESRGAN."""

    type: Literal["esrgan"] = "esrgan"
    type: Literal["realesrgan"] = "realesrgan"
    image: Union[ImageField, None] = Field(default=None, description="The input image")
    model_name: ESRGAN_MODELS = Field(
    model_name: REALESRGAN_MODELS = Field(
        default="RealESRGAN_x4plus.pth", description="The Real-ESRGAN model to use"
    )

@@ -74,17 +73,19 @@ class ESRGANInvocation(BaseInvocation):
                scale=4,
            )
            netscale = 4
        elif self.model_name in ["RealESRGAN_x2plus.pth"]:
            # x2 RRDBNet model
            rrdbnet_model = RRDBNet(
                num_in_ch=3,
                num_out_ch=3,
                num_feat=64,
                num_block=23,
                num_grow_ch=32,
                scale=2,
            )
            netscale = 2
        # TODO: add x2 models handling?
        # elif self.model_name in ["RealESRGAN_x2plus"]:
        #     # x2 RRDBNet model
        #     model = RRDBNet(
        #         num_in_ch=3,
        #         num_out_ch=3,
        #         num_feat=64,
        #         num_block=23,
        #         num_grow_ch=32,
        #         scale=2,
        #     )
        #     model_path = Path()
        #     netscale = 2
        else:
            msg = f"Invalid RealESRGAN model: {self.model_name}"
            context.services.logger.error(msg)
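The hunk above pairs each checkpoint name with an RRDBNet architecture and a `netscale`. As a sketch only, the same mapping could be table-driven; the block-count and scale values below follow the published Real-ESRGAN releases, and `RRDBNet` is the basicsr implementation this invocation already builds on.

```python
# Illustrative alternative to the if/elif chain: a lookup table of
# RRDBNet parameters per Real-ESRGAN checkpoint (num_feat=64 and
# num_in_ch=num_out_ch=3 throughout).
from basicsr.archs.rrdbnet_arch import RRDBNet

RRDBNET_CONFIGS = {
    "RealESRGAN_x4plus.pth": dict(num_block=23, scale=4),
    "RealESRGAN_x4plus_anime_6B.pth": dict(num_block=6, scale=4),
    "RealESRGAN_x2plus.pth": dict(num_block=23, scale=2),
}

def build_rrdbnet(model_name: str):
    """Return (network, netscale) for a known checkpoint name."""
    try:
        cfg = RRDBNET_CONFIGS[model_name]
    except KeyError:
        raise ValueError(f"Invalid RealESRGAN model: {model_name}") from None
    net = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_grow_ch=32, **cfg)
    return net, cfg["scale"]
```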
@@ -1,9 +1,11 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)

from typing import Any, Optional
from pathlib import Path

from invokeai.app.models.image import ProgressImage
from invokeai.app.util.misc import get_timestamp
from invokeai.app.services.model_manager_service import BaseModelType, ModelType, SubModelType, ModelInfo
from invokeai.app.services.model_manager_service import BaseModelType, ModelType, SubModelType, ModelInfo, AddModelResult

class EventServiceBase:
    session_event: str = "session_event"
@@ -111,7 +113,7 @@ class EventServiceBase:
        submodel: SubModelType,
    ) -> None:
        """Emitted when a model is requested"""
        self.__emit_session_event(
        self.dispatch(
            event_name="model_load_started",
            payload=dict(
                graph_execution_state_id=graph_execution_state_id,
@@ -132,7 +134,7 @@ class EventServiceBase:
        model_info: ModelInfo,
    ) -> None:
        """Emitted when a model is correctly loaded (returns model info)"""
        self.__emit_session_event(
        self.dispatch(
            event_name="model_load_completed",
            payload=dict(
                graph_execution_state_id=graph_execution_state_id,
@@ -145,3 +147,92 @@ class EventServiceBase:
                precision=str(model_info.precision),
            ),
        )

    def emit_model_import_started(
        self,
        import_path: str,  # can be a local path, URL or repo_id
    ) -> None:
        """Emitted when a model import commences"""
        self.dispatch(
            event_name="model_import_started",
            payload=dict(
                import_path=import_path,
            ),
        )

    def emit_model_import_completed(
        self,
        import_path: str,  # can be a local path, URL or repo_id
        import_info: AddModelResult,
        success: bool = True,
        error: Optional[str] = None,
    ) -> None:
        """Emitted when a model import completes"""
        self.dispatch(
            event_name="model_import_completed",
            payload=dict(
                import_path=import_path,
                import_info=import_info,
                success=success,
                error=error,
            ),
        )

    def emit_download_started(
        self,
        url: str,
    ) -> None:
        """Emitted when a download thread starts"""
        self.dispatch(
            event_name="download_started",
            payload=dict(
                url=url,
            ),
        )

    def emit_download_progress(
        self,
        url: str,
        downloaded_size: int,
        total_size: int,
    ) -> None:
        """
        Emitted at intervals during a download process
        :param url: Requested URL
        :param downloaded_size: Bytes downloaded so far
        :param total_size: Total bytes to download
        """
        self.dispatch(
            event_name="download_progress",
            payload=dict(
                url=url,
                downloaded_size=downloaded_size,
                total_size=total_size,
            ),
        )

    def emit_download_completed(
        self,
        url: str,
        status_code: int,
        download_path: Path,
    ) -> None:
        """
        Emitted when a download thread completes.
        :param url: Requested URL
        :param status_code: HTTP status code from request
        :param download_path: Path to downloaded file
        """
        self.dispatch(
            event_name="download_completed",
            payload=dict(
                url=url,
                status_code=status_code,
                download_path=download_path,
            ),
        )
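These download emitters form a simple protocol: one `download_started`, zero or more throttled `download_progress` events, then a terminal `download_completed`. A hedged sketch of a consumer follows; the subscription mechanism and handler name are assumptions, only the payload shapes come from the emitters above.

```python
# Hypothetical consumer of the download event protocol.
def on_event(event_name: str, payload: dict) -> None:
    if event_name == "download_progress":
        done, total = payload["downloaded_size"], payload["total_size"]
        pct = 100.0 * done / total if total else 0.0
        print(f"{payload['url']}: {pct:.1f}% ({done}/{total} bytes)")
    elif event_name == "download_completed":
        if payload["download_path"] is None:
            print(f"{payload['url']} failed (HTTP {payload['status_code']})")
        else:
            print(f"saved to {payload['download_path']}")
```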
@@ -26,6 +26,7 @@ import torch
from invokeai.app.models.exceptions import CanceledException
from ...backend.util import choose_precision, choose_torch_device
from .config import InvokeAIAppConfig
from .events import EventServiceBase

if TYPE_CHECKING:
    from ..invocations.baseinvocation import BaseInvocation, InvocationContext
@@ -542,6 +543,7 @@ class ModelManagerService(ModelManagerServiceBase):
    def heuristic_import(self,
                         items_to_import: set[str],
                         prediction_type_helper: Optional[Callable[[Path], SchedulerPredictionType]] = None,
                         event_bus: Optional[EventServiceBase] = None,
                         ) -> dict[str, AddModelResult]:
        '''Import a list of paths, repo_ids or URLs. Returns the set of
        successfully imported items.
@@ -559,7 +561,7 @@ class ModelManagerService(ModelManagerServiceBase):
        of the set is a dict corresponding to the newly-created OmegaConf stanza for
        that model.
        '''
        return self.mgr.heuristic_import(items_to_import, prediction_type_helper)
        return self.mgr.heuristic_import(items_to_import, prediction_type_helper, event_bus=event_bus)

    def merge_models(
        self,
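Taken together, the service-level change simply threads the event bus down to the backend installer. A sketch of the call from application code; the service handle names are assumptions, and the `AddModelResult` field access is illustrative.

```python
# Illustrative only: driving the new heuristic_import signature.
# `services` stands in for an application services container.
results = services.model_manager.heuristic_import(
    items_to_import={
        "runwayml/stable-diffusion-v1-5",             # HuggingFace repo_id
        "/models/checkpoints/analog-diffusion.ckpt",  # local path
    },
    event_bus=services.events,  # progress surfaces as the events above
)
for source, add_result in results.items():
    print(source, "->", add_result.name if add_result else "failed")
```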
@@ -466,6 +466,7 @@ class Generator:
            dtype=samples.dtype,
            device=samples.device,
        )

        latent_image = samples[0].permute(1, 2, 0) @ v1_5_latent_rgb_factors
        latents_ubyte = (
            ((latent_image + 1) / 2)

@@ -222,7 +222,7 @@ def download_conversion_models():

# ---------------------------------------------
def download_realesrgan():
    logger.info("Installing ESRGAN Upscaling models...")
    logger.info("Installing RealESRGAN models...")
    URLs = [
        dict(
            url="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth",
@@ -239,11 +239,6 @@ def download_realesrgan():
            dest="core/upscaling/realesrgan/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth",
            description="ESRGAN_SRx4_DF2KOST_official.pth",
        ),
        dict(
            url="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth",
            dest="core/upscaling/realesrgan/RealESRGAN_x2plus.pth",
            description="RealESRGAN_x2plus.pth",
        ),
    ]
    for model in URLs:
        download_with_progress_bar(model['url'], config.models_path / model['dest'], model['description'])
@@ -10,7 +10,7 @@ from tempfile import TemporaryDirectory
from typing import List, Dict, Callable, Union, Set

import requests
from diffusers import DiffusionPipeline
from diffusers import StableDiffusionPipeline
from diffusers import logging as dlogging
from huggingface_hub import hf_hub_url, HfFolder, HfApi
from omegaconf import OmegaConf
@@ -89,13 +89,16 @@ class ModelInstall(object):
                 config: InvokeAIAppConfig,
                 prediction_type_helper: Callable[[Path], SchedulerPredictionType] = None,
                 model_manager: ModelManager = None,
                 access_token: str = None):
                 access_token: str = None,
                 event_bus = None,  # EventServiceBase - getting circular import errors
                 ):
        self.config = config
        self.mgr = model_manager or ModelManager(config.model_conf_path)
        self.datasets = OmegaConf.load(Dataset_path)
        self.prediction_helper = prediction_type_helper
        self.access_token = access_token or HfFolder.get_token()
        self.reverse_paths = self._reverse_paths(self.datasets)
        self.event_bus = event_bus

    def all_models(self) -> Dict[str, ModelLoadInfo]:
        '''
@@ -197,39 +200,63 @@ class ModelInstall(object):
        Returns a set of dict objects corresponding to newly-created stanzas in models.yaml.
        '''

        if self.event_bus:
            self.event_bus.emit_model_import_started(str(model_path_id_or_url))

        if not models_installed:
            models_installed = dict()

        # A little hack to allow nested routines to retrieve info on the requested ID
        self.current_id = model_path_id_or_url
        path = Path(model_path_id_or_url)
        # checkpoint file, or similar
        if path.is_file():
            models_installed.update({str(path): self._install_path(path)})

        # folders style or similar
        elif path.is_dir() and any([(path/x).exists() for x in \
                {'config.json', 'model_index.json', 'learned_embeds.bin', 'pytorch_lora_weights.bin'}
                ]
              ):
            models_installed.update({str(model_path_id_or_url): self._install_path(path)})
        try:
            # checkpoint file, or similar
            if path.is_file():
                models_installed.update({str(path): self._install_path(path)})

            # recursive scan
            elif path.is_dir():
                for child in path.iterdir():
                    self.heuristic_import(child, models_installed=models_installed)
            # folders style or similar
            elif path.is_dir() and any([(path/x).exists() for x in \
                    {'config.json', 'model_index.json', 'learned_embeds.bin', 'pytorch_lora_weights.bin'}
                    ]
                  ):
                models_installed.update({str(model_path_id_or_url): self._install_path(path)})

            # huggingface repo
            elif len(str(model_path_id_or_url).split('/')) == 2:
                models_installed.update({str(model_path_id_or_url): self._install_repo(str(model_path_id_or_url))})
            # recursive scan
            elif path.is_dir():
                for child in path.iterdir():
                    self.heuristic_import(child, models_installed=models_installed)

            # a URL
            elif str(model_path_id_or_url).startswith(("http:", "https:", "ftp:")):
                models_installed.update({str(model_path_id_or_url): self._install_url(model_path_id_or_url)})
            # huggingface repo
            elif len(str(model_path_id_or_url).split('/')) == 2:
                models_installed.update({str(model_path_id_or_url): self._install_repo(str(model_path_id_or_url))})

            else:
                raise KeyError(f'{str(model_path_id_or_url)} is not recognized as a local path, repo ID or URL. Skipping')
            # a URL
            elif str(model_path_id_or_url).startswith(("http:", "https:", "ftp:")):
                models_installed.update({str(model_path_id_or_url): self._install_url(model_path_id_or_url)})

            else:
                errmsg = f'{str(model_path_id_or_url)} is not recognized as a local path, repo ID or URL. Skipping'
                raise KeyError(errmsg)

            if self.event_bus:
                for path, add_model_result in models_installed.items():
                    self.event_bus.emit_model_import_completed(
                        str(path),
                        import_info=add_model_result,
                    )
        except Exception as e:
            if self.event_bus:
                self.event_bus.emit_model_import_completed(
                    str(path),
                    import_info=None,
                    success=False,
                    error=str(e),
                )
                return models_installed
            else:
                raise

        return models_installed
    # install a model from a local path. The optional info parameter is there to prevent
@@ -238,10 +265,14 @@ class ModelInstall(object):
        info = info or ModelProbe().heuristic_probe(path, self.prediction_helper)
        if not info:
            logger.warning(f'Unable to parse format of {path}')
            return None
            raise ValueError(f'Unable to parse format of {path}')

        model_name = path.stem if path.is_file() else path.name

        if self.mgr.model_exists(model_name, info.base_type, info.model_type):
            raise ValueError(f'A model named "{model_name}" is already installed.')
            errmsg = f'A model named "{model_name}" is already installed.'
            raise ValueError(errmsg)

        attributes = self._make_attributes(path, info)
        return self.mgr.add_model(model_name=model_name,
                                  base_model=info.base_type,
@@ -251,7 +282,7 @@ class ModelInstall(object):

    def _install_url(self, url: str) -> AddModelResult:
        with TemporaryDirectory(dir=self.config.models_path) as staging:
            location = download_with_resume(url, Path(staging))
            location = download_with_resume(url, Path(staging), event_bus=self.event_bus)
            if not location:
                logger.error(f'Unable to download {url}. Skipping.')
            info = ModelProbe().heuristic_probe(location)
@@ -310,8 +341,6 @@ class ModelInstall(object):
        if key := self.reverse_paths.get(path_name):
            (name, base, mtype) = ModelManager.parse_key(key)
            return name
        elif location.is_dir():
            return location.name
        else:
            return location.stem
@@ -367,7 +396,7 @@ class ModelInstall(object):
        model = None
        for revision in revisions:
            try:
                model = DiffusionPipeline.from_pretrained(repo_id, revision=revision, safety_checker=None)
                model = StableDiffusionPipeline.from_pretrained(repo_id, revision=revision, safety_checker=None)
            except:  # most errors are due to fp16 not being present. Fix this to catch other errors
                pass
            if model:
@@ -386,7 +415,8 @@ class ModelInstall(object):
                p = hf_download_with_resume(repo_id,
                                            model_dir=location,
                                            model_name=filename,
                                            access_token=self.access_token
                                            access_token=self.access_token,
                                            event_bus=self.event_bus,
                                            )
                if p:
                    paths.append(p)
@@ -427,12 +457,15 @@ def hf_download_from_pretrained(
    return destination

# ---------------------------------------------
# TODO: This function is almost identical to invokeai.backend.util.download_with_resume
# and should be merged
def hf_download_with_resume(
    repo_id: str,
    model_dir: str,
    model_name: str,
    model_dest: Path = None,
    access_token: str = None,
    event_bus = None,
) -> Path:
    model_dest = model_dest or Path(os.path.join(model_dir, model_name))
    os.makedirs(model_dir, exist_ok=True)
@@ -449,15 +482,22 @@ def hf_download_with_resume(
        open_mode = "ab"

    resp = requests.get(url, headers=header, stream=True)
    total = int(resp.headers.get("content-length", 0))
    content_length = int(resp.headers.get("content-length", 0))

    if event_bus:
        event_bus.emit_download_started(url)

    if (
        resp.status_code == 416
    ):  # "range not satisfiable", which means nothing to return
        logger.info(f"{model_name}: complete file found. Skipping.")
        if event_bus:
            event_bus.emit_download_completed(url, resp.status_code, model_dest)
        return model_dest
    elif resp.status_code == 404:
        logger.warning("File not found")
        if event_bus:
            event_bus.emit_download_completed(url, resp.status_code, None)
        return None
    elif resp.status_code != 200:
        logger.warning(f"{model_name}: {resp.reason}")
@@ -466,11 +506,15 @@ def hf_download_with_resume(
    else:
        logger.info(f"{model_name}: Downloading...")

    MB10 = 10 * 1048576
    downloaded = exist_size
    previous_interval = 0

    try:
        with open(model_dest, open_mode) as file, tqdm(
            desc=model_name,
            initial=exist_size,
            total=total + exist_size,
            total=content_length + exist_size,
            unit="iB",
            unit_scale=True,
            unit_divisor=1000,
@@ -478,9 +522,20 @@ def hf_download_with_resume(
            for data in resp.iter_content(chunk_size=1024):
                size = file.write(data)
                bar.update(size)
                downloaded += size
                if event_bus and downloaded // MB10 > previous_interval:
                    previous_interval = downloaded // MB10
                    event_bus.emit_download_progress(url, downloaded, content_length)

    except Exception as e:
        logger.error(f"An error occurred while downloading {model_name}: {str(e)}")
        if event_bus:
            event_bus.emit_download_completed(url, 500, None)
        return None

    if event_bus:
        event_bus.emit_download_completed(url, resp.status_code, model_dest)

    return model_dest
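The resume mechanics above rest on standard HTTP semantics: a `Range: bytes=N-` header requests the remainder, 206 means the server accepted the partial request, and 416 means the local file is already complete. A stripped-down sketch of that core, independent of the event plumbing:

```python
# Minimal sketch of HTTP range-resume, assuming a server that honors
# Range requests; error handling and auth headers are omitted.
import os
import requests

def resume_download(url: str, dest: str) -> None:
    exist_size = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {"Range": f"bytes={exist_size}-"} if exist_size else {}
    resp = requests.get(url, headers=headers, stream=True)
    if resp.status_code == 416:   # range not satisfiable: nothing left to fetch
        return
    mode = "ab" if resp.status_code == 206 else "wb"  # append only on partial content
    with open(dest, mode) as f:
        for chunk in resp.iter_content(chunk_size=1024):
            f.write(chunk)
```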
@@ -3,7 +3,6 @@ Initialization file for invokeai.backend.model_management
"""
from .model_manager import ModelManager, ModelInfo, AddModelResult, SchedulerPredictionType
from .model_cache import ModelCache
from .lora import ModelPatcher, ONNXModelPatcher
from .models import BaseModelType, ModelType, SubModelType, ModelVariantType, ModelNotFoundException
from .model_merge import ModelMerger, MergeInterpolationMethod
@@ -6,22 +6,11 @@ from typing import Optional, Dict, Tuple, Any, Union, List
from pathlib import Path

import torch
from safetensors.torch import load_file
from torch.utils.hooks import RemovableHandle

from diffusers.models import UNet2DConditionModel
from transformers import CLIPTextModel
from onnx import numpy_helper
from onnxruntime import OrtValue
import numpy as np

from compel.embeddings_provider import BaseTextualInversionManager
from diffusers.models import UNet2DConditionModel
from safetensors.torch import load_file
from transformers import CLIPTextModel, CLIPTokenizer

# TODO: rename and split this file

class LoRALayerBase:
    #rank: Optional[int]
    #alpha: Optional[float]
@@ -719,185 +708,3 @@ class TextualInversionManager(BaseTextualInversionManager):

        return new_token_ids


class ONNXModelPatcher:

    @classmethod
    @contextmanager
    def apply_lora_unet(
        cls,
        unet: OnnxRuntimeModel,
        loras: List[Tuple[LoRAModel, float]],
    ):
        with cls.apply_lora(unet, loras, "lora_unet_"):
            yield


    @classmethod
    @contextmanager
    def apply_lora_text_encoder(
        cls,
        text_encoder: OnnxRuntimeModel,
        loras: List[Tuple[LoRAModel, float]],
    ):
        with cls.apply_lora(text_encoder, loras, "lora_te_"):
            yield

    # based on
    # https://github.com/ssube/onnx-web/blob/ca2e436f0623e18b4cfe8a0363fcfcf10508acf7/api/onnx_web/convert/diffusion/lora.py#L323
    @classmethod
    @contextmanager
    def apply_lora(
        cls,
        model: IAIOnnxRuntimeModel,
        loras: List[Tuple[LoRAModel, float]],
        prefix: str,
    ):
        from .models.base import IAIOnnxRuntimeModel
        if not isinstance(model, IAIOnnxRuntimeModel):
            raise Exception("Only IAIOnnxRuntimeModel models supported")

        orig_weights = dict()

        try:
            blended_loras = dict()

            for lora, lora_weight in loras:
                for layer_key, layer in lora.layers.items():
                    if not layer_key.startswith(prefix):
                        continue

                    layer_key = layer_key.replace(prefix, "")
                    layer_weight = layer.get_weight().detach().cpu().numpy() * lora_weight
                    if layer_key in blended_loras:
                        blended_loras[layer_key] += layer_weight
                    else:
                        blended_loras[layer_key] = layer_weight

            node_names = dict()
            for node in model.nodes.values():
                node_names[node.name.replace("/", "_").replace(".", "_").lstrip("_")] = node.name

            for layer_key, lora_weight in blended_loras.items():
                conv_key = layer_key + "_Conv"
                gemm_key = layer_key + "_Gemm"
                matmul_key = layer_key + "_MatMul"

                if conv_key in node_names or gemm_key in node_names:
                    if conv_key in node_names:
                        conv_node = model.nodes[node_names[conv_key]]
                    else:
                        conv_node = model.nodes[node_names[gemm_key]]

                    weight_name = [n for n in conv_node.input if ".weight" in n][0]
                    orig_weight = model.tensors[weight_name]

                    if orig_weight.shape[-2:] == (1, 1):
                        if lora_weight.shape[-2:] == (1, 1):
                            new_weight = orig_weight.squeeze((3, 2)) + lora_weight.squeeze((3, 2))
                        else:
                            new_weight = orig_weight.squeeze((3, 2)) + lora_weight

                        new_weight = np.expand_dims(new_weight, (2, 3))
                    else:
                        if orig_weight.shape != lora_weight.shape:
                            new_weight = orig_weight + lora_weight.reshape(orig_weight.shape)
                        else:
                            new_weight = orig_weight + lora_weight

                    orig_weights[weight_name] = orig_weight
                    model.tensors[weight_name] = new_weight.astype(orig_weight.dtype)

                elif matmul_key in node_names:
                    weight_node = model.nodes[node_names[matmul_key]]
                    matmul_name = [n for n in weight_node.input if "MatMul" in n][0]

                    orig_weight = model.tensors[matmul_name]
                    new_weight = orig_weight + lora_weight.transpose()

                    orig_weights[matmul_name] = orig_weight
                    model.tensors[matmul_name] = new_weight.astype(orig_weight.dtype)

                else:
                    # warn? err?
                    pass

            yield

        finally:
            # restore original weights
            for name, orig_weight in orig_weights.items():
                model.tensors[name] = orig_weight


    @classmethod
    @contextmanager
    def apply_ti(
        cls,
        tokenizer: CLIPTokenizer,
        text_encoder: IAIOnnxRuntimeModel,
        ti_list: List[Any],
    ) -> Tuple[CLIPTokenizer, TextualInversionManager]:
        from .models.base import IAIOnnxRuntimeModel
        if not isinstance(text_encoder, IAIOnnxRuntimeModel):
            raise Exception("Only IAIOnnxRuntimeModel models supported")

        orig_embeddings = None

        try:
            ti_tokenizer = copy.deepcopy(tokenizer)
            ti_manager = TextualInversionManager(ti_tokenizer)

            def _get_trigger(ti, index):
                trigger = ti.name
                if index > 0:
                    trigger += f"-!pad-{index}"
                return f"<{trigger}>"

            # modify tokenizer
            new_tokens_added = 0
            for ti in ti_list:
                for i in range(ti.embedding.shape[0]):
                    new_tokens_added += ti_tokenizer.add_tokens(_get_trigger(ti, i))

            # modify text_encoder
            orig_embeddings = text_encoder.tensors["text_model.embeddings.token_embedding.weight"]

            embeddings = np.concatenate(
                (
                    np.copy(orig_embeddings),
                    np.zeros((new_tokens_added, orig_embeddings.shape[1]))
                ),
                axis=0,
            )

            for ti in ti_list:
                ti_tokens = []
                for i in range(ti.embedding.shape[0]):
                    embedding = ti.embedding[i].detach().numpy()
                    trigger = _get_trigger(ti, i)

                    token_id = ti_tokenizer.convert_tokens_to_ids(trigger)
                    if token_id == ti_tokenizer.unk_token_id:
                        raise RuntimeError(f"Unable to find token id for token '{trigger}'")

                    if embeddings[token_id].shape != embedding.shape:
                        raise ValueError(
                            f"Cannot load embedding for {trigger}. It was trained on a model with token dimension {embedding.shape[0]}, but the current model has token dimension {embeddings[token_id].shape[0]}."
                        )

                    embeddings[token_id] = embedding
                    ti_tokens.append(token_id)

                if len(ti_tokens) > 1:
                    ti_manager.pad_tokens[ti_tokens[0]] = ti_tokens[1:]

            text_encoder.tensors["text_model.embeddings.token_embedding.weight"] = embeddings.astype(orig_embeddings.dtype)

            yield ti_tokenizer, ti_manager

        finally:
            # restore
            if orig_embeddings is not None:
                text_encoder.tensors["text_model.embeddings.token_embedding.weight"] = orig_embeddings
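The patching in `apply_lora` reduces to simple tensor arithmetic: each LoRA layer contributes a scaled delta that is added to the original weight, and the MatMul branch transposes the delta because ONNX stores MatMul initializers in (in, out) orientation. A toy numpy illustration; all shapes and the 0.75 scale are made up.

```python
import numpy as np

rank, n_in, n_out = 4, 8, 16
down = np.random.randn(rank, n_in).astype(np.float32)  # LoRA "down" matrix
up = np.random.randn(n_out, rank).astype(np.float32)   # LoRA "up" matrix
scale = 0.75                                            # user-chosen LoRA weight

delta = (up @ down) * scale  # analogue of layer.get_weight() * lora_weight
orig = np.random.randn(n_out, n_in).astype(np.float32)  # original Gemm/Conv weight
patched = orig + delta                                   # Conv/Gemm case above

# MatMul initializers are stored transposed, hence lora_weight.transpose():
orig_matmul = orig.T
patched_matmul = orig_matmul + delta.transpose()
# Un-patching is just reinstating `orig`, exactly what the finally-block does.
```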
@@ -328,25 +328,6 @@ class ModelCache(object):

        refs = sys.getrefcount(cache_entry.model)

        # manually clear local variable references of just-finished function calls,
        # since for some reason Python won't collect them even with an immediate gc.collect()
        if refs > 2:
            while True:
                cleared = False
                for referrer in gc.get_referrers(cache_entry.model):
                    if type(referrer).__name__ == "frame":
                        # RuntimeError: cannot clear an executing frame
                        with suppress(RuntimeError):
                            referrer.clear()
                            cleared = True
                            #break

                # repeat if referrers changed (due to frame clear), else exit loop
                if cleared:
                    gc.collect()
                else:
                    break

        device = cache_entry.model.device if hasattr(cache_entry.model, "device") else None
        self.logger.debug(f"Model: {model_key}, locks: {cache_entry._locks}, device: {device}, loaded: {cache_entry.loaded}, refs: {refs}")

@@ -382,9 +363,6 @@ class ModelCache(object):
            self.logger.debug(f'GPU VRAM freed: {(mem.vram_used/GIG):.2f} GB')
            vram_in_use += mem.vram_used  # note vram_used is negative
        self.logger.debug(f'{(vram_in_use/GIG):.2f}GB VRAM used for models; max allowed={(reserved/GIG):.2f}GB')

        gc.collect()
        torch.cuda.empty_cache()

    def _local_model_hash(self, model_path: Union[str, Path]) -> str:
        sha = hashlib.sha256()
@@ -106,16 +106,16 @@ providing information about a model defined in models.yaml. For example:

>>> models = mgr.list_models()
>>> json.dumps(models[0])
{"path": "/home/lstein/invokeai-main/models/sd-1/controlnet/canny",
 "model_format": "diffusers",
 "name": "canny",
 "base_model": "sd-1",
{"path": "/home/lstein/invokeai-main/models/sd-1/controlnet/canny",
 "model_format": "diffusers",
 "name": "canny",
 "base_model": "sd-1",
 "type": "controlnet"
}

You can filter by model type and base model as shown here:

controlnets = mgr.list_models(model_type=ModelType.ControlNet,
                              base_model=BaseModelType.StableDiffusion1)
for c in controlnets:
@@ -140,14 +140,14 @@ Layout of the `models` directory:

models
├── sd-1
│   ├── controlnet
│   ├── lora
│   ├── main
│   └── embedding
│   ├── controlnet
│   ├── lora
│   ├── main
│   └── embedding
├── sd-2
│   ├── controlnet
│   ├── lora
│   ├── main
│   ├── controlnet
│   ├── lora
│   ├── main
│   └── embedding
└── core
    ├── face_reconstruction
@@ -195,7 +195,7 @@ name, base model, type and a dict of model attributes. See
`invokeai/backend/model_management/models` for the attributes required
by each model type.

A model can be deleted using `del_model()`, providing the same
A model can be deleted using `del_model()`, providing the same
identifying information as `get_model()`

The `heuristic_import()` method will take a set of strings
@@ -304,7 +304,7 @@ class ModelManager(object):
        logger: types.ModuleType = logger,
    ):
        """
        Initialize with the path to the models.yaml config file.
        Initialize with the path to the models.yaml config file.
        Optional parameters are the torch device type, precision, max_models,
        and sequential_offload boolean. Note that the default device
        type and precision are set up for a CUDA system running at half precision.
@@ -323,7 +323,7 @@ class ModelManager(object):
        self.config_meta = ConfigMeta(**config.pop("__metadata__"))
        # TODO: metadata not found
        # TODO: version check

        self.app_config = InvokeAIAppConfig.get_config()
        self.logger = logger
        self.cache = ModelCache(
@@ -431,7 +431,7 @@ class ModelManager(object):
        :param model_name: symbolic name of the model in models.yaml
        :param model_type: ModelType enum indicating the type of model to return
        :param base_model: BaseModelType enum indicating the base model used by this model
        :param submodel_type: a SubModelType enum indicating the portion of
        :param submodel_type: a SubModelType enum indicating the portion of
               the model to retrieve (e.g. SubModelType.Vae)
        """
        model_class = MODEL_CLASSES[base_model][model_type]
@@ -456,7 +456,7 @@ class ModelManager(object):
            raise ModelNotFoundException(f"Model not found - {model_key}")

        # vae/movq override
        # TODO:
        # TODO:
        if submodel_type is not None and hasattr(model_config, submodel_type):
            override_path = getattr(model_config, submodel_type)
            if override_path:
@@ -489,7 +489,7 @@ class ModelManager(object):
        self.cache_keys[model_key].add(model_context.key)

        model_hash = "<NO_HASH>"  # TODO:

        return ModelInfo(
            context=model_context,
            name=model_name,
@@ -518,7 +518,7 @@ class ModelManager(object):

    def model_names(self) -> List[Tuple[str, BaseModelType, ModelType]]:
        """
        Return a list of (str, BaseModelType, ModelType) corresponding to all models
        Return a list of (str, BaseModelType, ModelType) corresponding to all models
        known to the configuration.
        """
        return [(self.parse_key(x)) for x in self.models.keys()]
@@ -692,12 +692,12 @@ class ModelManager(object):
        if new_name is None and new_base is None:
            self.logger.error(f"rename_model() called with neither a new_name nor a new_base. {model_name} unchanged.")
            return

        model_key = self.create_key(model_name, base_model, model_type)
        model_cfg = self.models.get(model_key, None)
        if not model_cfg:
            raise ModelNotFoundException(f"Unknown model: {model_key}")

        old_path = self.app_config.root_path / model_cfg.path
        new_name = new_name or model_name
        new_base = new_base or base_model
@@ -726,7 +726,7 @@ class ModelManager(object):
        self.models.pop(model_key, None)  # delete
        self.models[new_key] = model_cfg
        self.commit()

    def convert_model(
        self,
        model_name: str,
@@ -776,12 +776,12 @@ class ModelManager(object):
            # something went wrong, so don't leave dangling diffusers model in directory or it will cause a duplicate model error!
            rmtree(new_diffusers_path)
            raise

        if checkpoint_path.exists() and checkpoint_path.is_relative_to(self.app_config.models_path):
            checkpoint_path.unlink()

        return result

    def search_models(self, search_folder):
        self.logger.info(f"Finding Models In: {search_folder}")
        models_folder_ckpt = Path(search_folder).glob("**/*.ckpt")
@@ -824,14 +824,10 @@ class ModelManager(object):
        assert config_file_path is not None, 'no config file path to write to'
        config_file_path = self.app_config.root_path / config_file_path
        tmpfile = os.path.join(os.path.dirname(config_file_path), "new_config.tmp")
        try:
            with open(tmpfile, "w", encoding="utf-8") as outfile:
                outfile.write(self.preamble())
                outfile.write(yaml_str)
            os.replace(tmpfile, config_file_path)
        except OSError as err:
            self.logger.warning(f"Could not modify the config file at {config_file_path}")
            self.logger.warning(err)
        with open(tmpfile, "w", encoding="utf-8") as outfile:
            outfile.write(self.preamble())
            outfile.write(yaml_str)
        os.replace(tmpfile, config_file_path)

    def preamble(self) -> str:
        """
@@ -957,6 +953,7 @@ class ModelManager(object):
    def heuristic_import(self,
                         items_to_import: Set[str],
                         prediction_type_helper: Callable[[Path], SchedulerPredictionType] = None,
                         event_bus = None,  # EventServiceBase, with circular dependency issues
                         ) -> Dict[str, AddModelResult]:
        '''Import a list of paths, repo_ids or URLs. Returns the set of
        successfully imported items.
@@ -981,12 +978,16 @@ class ModelManager(object):
        # avoid circular import here
        from invokeai.backend.install.model_install_backend import ModelInstall
        successfully_installed = dict()

        installer = ModelInstall(config=self.app_config,
                                 prediction_type_helper=prediction_type_helper,
                                 model_manager=self)
                                 model_manager=self,
                                 event_bus=event_bus,
                                 )
        for thing in items_to_import:
            installed = installer.heuristic_import(thing)
            successfully_installed.update(installed)
        self.commit()

        self.commit()
        return successfully_installed
@@ -23,7 +23,7 @@ class ModelProbeInfo(object):
    variant_type: ModelVariantType
    prediction_type: SchedulerPredictionType
    upcast_attention: bool
    format: Literal['diffusers','checkpoint', 'lycoris', 'olive']
    format: Literal['diffusers','checkpoint', 'lycoris']
    image_size: int

class ProbeBase(object):

@@ -10,11 +10,8 @@ from .lora import LoRAModel
from .controlnet import ControlNetModel  # TODO:
from .textual_inversion import TextualInversionModel

from .stable_diffusion_onnx import ONNXStableDiffusion1Model, ONNXStableDiffusion2Model

MODEL_CLASSES = {
    BaseModelType.StableDiffusion1: {
        ModelType.ONNX: ONNXStableDiffusion1Model,
        ModelType.Main: StableDiffusion1Model,
        ModelType.Vae: VaeModel,
        ModelType.Lora: LoRAModel,
@@ -22,7 +19,6 @@ MODEL_CLASSES = {
        ModelType.TextualInversion: TextualInversionModel,
    },
    BaseModelType.StableDiffusion2: {
        ModelType.ONNX: ONNXStableDiffusion2Model,
        ModelType.Main: StableDiffusion2Model,
        ModelType.Vae: VaeModel,
        ModelType.Lora: LoRAModel,
@@ -36,7 +32,6 @@ MODEL_CLASSES = {
        ModelType.Lora: LoRAModel,
        ModelType.ControlNet: ControlNetModel,
        ModelType.TextualInversion: TextualInversionModel,
        ModelType.ONNX: ONNXStableDiffusion2Model,
    },
    BaseModelType.StableDiffusionXLRefiner: {
        ModelType.Main: StableDiffusionXLModel,
@@ -45,7 +40,6 @@ MODEL_CLASSES = {
        ModelType.Lora: LoRAModel,
        ModelType.ControlNet: ControlNetModel,
        ModelType.TextualInversion: TextualInversionModel,
        ModelType.ONNX: ONNXStableDiffusion2Model,
    },
    #BaseModelType.Kandinsky2_1: {
    #    ModelType.Main: Kandinsky2_1Model,
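For context, the nested registry is consumed by two enum lookups (cf. `MODEL_CLASSES[base_model][model_type]` in `ModelManager.get_model`), yielding a concrete model class to instantiate. A sketch using an entry that survives this diff; the path is an assumption.

```python
# Two-level lookup: (base model, model type) -> model class.
model_class = MODEL_CLASSES[BaseModelType.StableDiffusion1][ModelType.Main]
assert model_class is StableDiffusion1Model
model = model_class(
    model_path="/path/to/model",  # illustrative path
    base_model=BaseModelType.StableDiffusion1,
    model_type=ModelType.Main,
)
```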
@@ -8,19 +8,13 @@ from abc import ABCMeta, abstractmethod
from pathlib import Path
from picklescan.scanner import scan_file_path
import torch
import numpy as np
import safetensors.torch
from pathlib import Path
from diffusers import DiffusionPipeline, ConfigMixin, OnnxRuntimeModel
from diffusers import DiffusionPipeline, ConfigMixin

from contextlib import suppress
from pydantic import BaseModel, Field
from typing import List, Dict, Optional, Type, Literal, TypeVar, Generic, Callable, Any, Union

import onnx
from onnx import numpy_helper
from onnx.external_data_helper import set_external_data
from onnxruntime import InferenceSession, OrtValue, SessionOptions, ExecutionMode, GraphOptimizationLevel
class InvalidModelException(Exception):
    pass

@@ -35,7 +29,6 @@ class BaseModelType(str, Enum):
    #Kandinsky2_1 = "kandinsky-2.1"

class ModelType(str, Enum):
    ONNX = "onnx"
    Main = "main"
    Vae = "vae"
    Lora = "lora"
@@ -49,8 +42,6 @@ class SubModelType(str, Enum):
    Tokenizer = "tokenizer"
    Tokenizer2 = "tokenizer_2"
    Vae = "vae"
    VaeDecoder = "vae_decoder"
    VaeEncoder = "vae_encoder"
    Scheduler = "scheduler"
    SafetyChecker = "safety_checker"
    #MoVQ = "movq"
@@ -263,18 +254,16 @@ class DiffusersModel(ModelBase):
                try:
                    # TODO: set cache_dir to /dev/null to be sure that cache not used?
                    model = self.child_types[child_type].from_pretrained(
                        os.path.join(self.model_path, child_type.value),
                        #subfolder=child_type.value,
                        self.model_path,
                        subfolder=child_type.value,
                        torch_dtype=torch_dtype,
                        variant=variant,
                        local_files_only=True,
                    )
                    break
                except Exception as e:
                    print("====ERR LOAD====")
                    print(f"{variant}: {e}")
                    import traceback
                    traceback.print_exc()
                    #print("====ERR LOAD====")
                    #print(f"{variant}: {e}")
                    pass
            else:
                raise Exception(f"Failed to load {self.base_model}:{self.model_type}:{child_type} model")
@@ -441,188 +430,3 @@ class SilenceWarnings(object):
        transformers_logging.set_verbosity(self.transformers_verbosity)
        diffusers_logging.set_verbosity(self.diffusers_verbosity)
        warnings.simplefilter('default')

ONNX_WEIGHTS_NAME = "model.onnx"
class IAIOnnxRuntimeModel:
    class _tensor_access:

        def __init__(self, model):
            self.model = model
            self.indexes = dict()
            for idx, obj in enumerate(self.model.proto.graph.initializer):
                self.indexes[obj.name] = idx

        def __getitem__(self, key: str):
            return self.model.data[key].numpy()

        def __setitem__(self, key: str, value: np.ndarray):
            new_node = numpy_helper.from_array(value)
            # set_external_data(new_node, location="in-memory-location")
            new_node.name = key
            # new_node.ClearField("raw_data")
            del self.model.proto.graph.initializer[self.indexes[key]]
            self.model.proto.graph.initializer.insert(self.indexes[key], new_node)
            self.model.data[key] = OrtValue.ortvalue_from_numpy(value)

        # __delitem__

        def __contains__(self, key: str):
            return key in self.model.data

        def items(self):
            raise NotImplementedError("tensor.items")
            #return [(obj.name, obj) for obj in self.raw_proto]

        def keys(self):
            return self.model.data.keys()

        def values(self):
            raise NotImplementedError("tensor.values")
            #return [obj for obj in self.raw_proto]


    class _access_helper:
        def __init__(self, raw_proto):
            self.indexes = dict()
            self.raw_proto = raw_proto
            for idx, obj in enumerate(raw_proto):
                self.indexes[obj.name] = idx

        def __getitem__(self, key: str):
            return self.raw_proto[self.indexes[key]]

        def __setitem__(self, key: str, value):
            index = self.indexes[key]
            del self.raw_proto[index]
            self.raw_proto.insert(index, value)

        # __delitem__

        def __contains__(self, key: str):
            return key in self.indexes

        def items(self):
            return [(obj.name, obj) for obj in self.raw_proto]

        def keys(self):
            return self.indexes.keys()

        def values(self):
            return [obj for obj in self.raw_proto]

    def __init__(self, model_path: str, provider: Optional[str]):
        self.path = model_path
        self.session = None
        self.provider = provider or "CPUExecutionProvider"
        """
        self.data_path = self.path + "_data"
        if not os.path.exists(self.data_path):
            print(f"Moving model tensors to separate file: {self.data_path}")
            tmp_proto = onnx.load(model_path, load_external_data=True)
            onnx.save_model(tmp_proto, self.path, save_as_external_data=True, all_tensors_to_one_file=True, location=os.path.basename(self.data_path), size_threshold=1024, convert_attribute=False)
            del tmp_proto
            gc.collect()

        self.proto = onnx.load(model_path, load_external_data=False)
        """

        self.proto = onnx.load(model_path, load_external_data=True)
        self.data = dict()
        for tensor in self.proto.graph.initializer:
            name = tensor.name

            if tensor.HasField("raw_data"):
                npt = numpy_helper.to_array(tensor)
                orv = OrtValue.ortvalue_from_numpy(npt)
                self.data[name] = orv
                # set_external_data(tensor, location="in-memory-location")
                tensor.name = name
                # tensor.ClearField("raw_data")

        self.nodes = self._access_helper(self.proto.graph.node)
        self.initializers = self._access_helper(self.proto.graph.initializer)
        # print(self.proto.graph.input)
        # print(self.proto.graph.initializer)

        self.tensors = self._tensor_access(self)

    # TODO: integrate with model manager/cache
    def create_session(self):
        if self.session is None:
            #onnx.save(self.proto, "tmp.onnx")
            #onnx.save_model(self.proto, "tmp.onnx", save_as_external_data=True, all_tensors_to_one_file=True, location="tmp.onnx_data", size_threshold=1024, convert_attribute=False)
            # TODO: something to be able to get weight when they already moved outside of model proto
            #(trimmed_model, external_data) = buffer_external_data_tensors(self.proto)
            sess = SessionOptions()
            #self._external_data.update(**external_data)
            # sess.add_external_initializers(list(self.data.keys()), list(self.data.values()))
            # sess.enable_profiling = True

            # sess.intra_op_num_threads = 1
            # sess.inter_op_num_threads = 1
            # sess.execution_mode = ExecutionMode.ORT_SEQUENTIAL
            # sess.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL
            # sess.enable_cpu_mem_arena = True
            # sess.enable_mem_pattern = True
            # sess.add_session_config_entry("session.intra_op.use_xnnpack_threadpool", "1")  ########### It's the key code

            sess.add_free_dimension_override_by_name("unet_sample_batch", 2)
            sess.add_free_dimension_override_by_name("unet_sample_channels", 4)
            sess.add_free_dimension_override_by_name("unet_hidden_batch", 2)
            sess.add_free_dimension_override_by_name("unet_hidden_sequence", 77)
            sess.add_free_dimension_override_by_name("unet_sample_height", 64)
            sess.add_free_dimension_override_by_name("unet_sample_width", 64)
            sess.add_free_dimension_override_by_name("unet_time_batch", 1)
            self.session = InferenceSession(self.proto.SerializeToString(), providers=['CUDAExecutionProvider', 'CPUExecutionProvider'], sess_options=sess)
            #self.session = InferenceSession("tmp.onnx", providers=[self.provider], sess_options=self.sess_options)
            self.io_binding = self.session.io_binding()

    def release_session(self):
        self.session = None
        import gc
        gc.collect()

    def __call__(self, **kwargs):
        if self.session is None:
            raise Exception("You should call create_session before running model")

        inputs = {k: np.array(v) for k, v in kwargs.items()}
        output_names = self.session.get_outputs()
        for k in inputs:
            self.io_binding.bind_cpu_input(k, inputs[k])
        for name in output_names:
            self.io_binding.bind_output(name.name)
        self.session.run_with_iobinding(self.io_binding, None)
        return self.io_binding.copy_outputs_to_cpu()

    # compatibility with diffusers load code
    @classmethod
    def from_pretrained(
        cls,
        model_id: Union[str, Path],
        subfolder: Union[str, Path] = None,
        file_name: Optional[str] = None,
        provider: Optional[str] = None,
        sess_options: Optional["SessionOptions"] = None,
        **kwargs,
    ):
        file_name = file_name or ONNX_WEIGHTS_NAME

        if os.path.isdir(model_id):
            model_path = model_id
            if subfolder is not None:
                model_path = os.path.join(model_path, subfolder)
            model_path = os.path.join(model_path, file_name)

        else:
            model_path = model_id

        # load model from local directory
        if not os.path.isfile(model_path):
            raise Exception(f"Model not found: {model_path}")

        # TODO: session options
        return cls(model_path, provider=provider)
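A hypothetical driver for the class above: load, create a session, run once, then release. The directory, input names, dtypes, and shapes depend entirely on the exported graph and are assumptions here; the shape values merely echo the free-dimension overrides set in `create_session`.

```python
import numpy as np

unet = IAIOnnxRuntimeModel.from_pretrained(
    "/models/sd-1/onnx/some-model",  # assumed local directory
    subfolder="unet",
    provider="CPUExecutionProvider",
)
unet.create_session()  # builds the InferenceSession and an IO binding
outputs = unet(
    sample=np.zeros((2, 4, 64, 64), dtype=np.float32),
    timestep=np.array([1], dtype=np.int64),
    encoder_hidden_states=np.zeros((2, 77, 768), dtype=np.float32),
)
unet.release_session()  # drop the session so the weights can be collected
```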
@@ -1,156 +0,0 @@
import os
import json
from enum import Enum
from pydantic import Field
from pathlib import Path
from typing import Literal, Optional, Union
from .base import (
    ModelBase,
    ModelConfigBase,
    BaseModelType,
    ModelType,
    SubModelType,
    ModelVariantType,
    DiffusersModel,
    SchedulerPredictionType,
    SilenceWarnings,
    read_checkpoint_meta,
    classproperty,
    OnnxRuntimeModel,
    IAIOnnxRuntimeModel,
)
from invokeai.app.services.config import InvokeAIAppConfig

class ONNXStableDiffusion1Model(DiffusersModel):

    class Config(ModelConfigBase):
        model_format: None
        variant: ModelVariantType


    def __init__(self, model_path: str, base_model: BaseModelType, model_type: ModelType):
        assert base_model == BaseModelType.StableDiffusion1
        assert model_type == ModelType.ONNX
        super().__init__(
            model_path=model_path,
            base_model=BaseModelType.StableDiffusion1,
            model_type=ModelType.ONNX,
        )

        for child_name, child_type in self.child_types.items():
            if child_type is OnnxRuntimeModel:
                self.child_types[child_name] = IAIOnnxRuntimeModel

        # TODO: check that no optimum models provided

    @classmethod
    def probe_config(cls, path: str, **kwargs):
        model_format = cls.detect_format(path)
        in_channels = 4  # TODO:

        if in_channels == 9:
            variant = ModelVariantType.Inpaint
        elif in_channels == 4:
            variant = ModelVariantType.Normal
        else:
            raise Exception("Unknown stable diffusion 1.* model format")


        return cls.create_config(
            path=path,
            model_format=model_format,

            variant=variant,
        )

    @classproperty
    def save_to_config(cls) -> bool:
        return True

    @classmethod
    def detect_format(cls, model_path: str):
        return None

    @classmethod
    def convert_if_required(
        cls,
        model_path: str,
        output_path: str,
        config: ModelConfigBase,
        base_model: BaseModelType,
    ) -> str:
        return model_path

class ONNXStableDiffusion2Model(DiffusersModel):

    # TODO: check that configs overwritten properly
    class Config(ModelConfigBase):
        model_format: None
        variant: ModelVariantType
        prediction_type: SchedulerPredictionType
        upcast_attention: bool


    def __init__(self, model_path: str, base_model: BaseModelType, model_type: ModelType):
        assert base_model == BaseModelType.StableDiffusion2
        assert model_type == ModelType.ONNX
        super().__init__(
            model_path=model_path,
            base_model=BaseModelType.StableDiffusion2,
            model_type=ModelType.ONNX,
        )

        for child_name, child_type in self.child_types.items():
            if child_type is OnnxRuntimeModel:
                self.child_types[child_name] = IAIOnnxRuntimeModel
        # TODO: check that no optimum models provided

    @classmethod
    def probe_config(cls, path: str, **kwargs):
        model_format = cls.detect_format(path)
        in_channels = 4  # TODO:

        if in_channels == 9:
            variant = ModelVariantType.Inpaint
        elif in_channels == 5:
            variant = ModelVariantType.Depth
        elif in_channels == 4:
            variant = ModelVariantType.Normal
        else:
            raise Exception("Unknown stable diffusion 2.* model format")

        if variant == ModelVariantType.Normal:
            prediction_type = SchedulerPredictionType.VPrediction
            upcast_attention = True

        else:
            prediction_type = SchedulerPredictionType.Epsilon
            upcast_attention = False

        return cls.create_config(
            path=path,
            model_format=model_format,

            variant=variant,
            prediction_type=prediction_type,
            upcast_attention=upcast_attention,
        )

    @classproperty
    def save_to_config(cls) -> bool:
        return True

    @classmethod
    def detect_format(cls, model_path: str):
        return None

    @classmethod
    def convert_if_required(
        cls,
        model_path: str,
        output_path: str,
        config: ModelConfigBase,
        base_model: BaseModelType,
    ) -> str:
        return model_path
@@ -21,7 +21,6 @@ from tqdm import tqdm
import invokeai.backend.util.logging as logger
from .devices import torch_dtype


def log_txt_as_img(wh, xc, size=10):
    # wh a tuple of (width, height)
    # xc a list of captions to plot
@@ -285,7 +284,11 @@ def ask_user(question: str, answers: list):


# -------------------------------------
def download_with_resume(url: str, dest: Path, access_token: str = None) -> Path:
def download_with_resume(url: str,
                         dest: Path,
                         access_token: str = None,
                         event_bus = None,  # EventServiceBase (circular import issues)
                         ) -> Path:
    """
    Download a model file.
    :param url: https, http or ftp URL
@@ -323,8 +326,13 @@ def download_with_resume(url: str, dest: Path, access_token: str = None) -> Path
        os.remove(dest)
        exist_size = 0

    if event_bus:
        event_bus.emit_download_started(url)

    if resp.status_code == 416 or (content_length > 0 and exist_size == content_length):
        logger.warning(f"{dest}: complete file found. Skipping.")
        if event_bus:
            event_bus.emit_download_completed(url, resp.status_code, dest)
        return dest
    elif resp.status_code == 206 or exist_size > 0:
        logger.warning(f"{dest}: partial file found. Resuming...")
@@ -333,11 +341,20 @@ def download_with_resume(url: str, dest: Path, access_token: str = None) -> Path
    else:
        logger.info(f"{dest}: Downloading...")

    try:
        if content_length < 2000:
            logger.error(f"ERROR DOWNLOADING {url}: {resp.text}")
    # If less than 2K, it's not a model - usually an HTML document of some sort
    if content_length < 2000:
        logger.error(f"ERROR DOWNLOADING {url}: {resp.text}")
        if event_bus:
            event_bus.emit_download_completed(url, 500, None)
        return None

    # these variables are used in progress reporting events
    MB10 = 10 * 1048576
    downloaded = exist_size
    previous_interval = 0

    try:

        with open(dest, open_mode) as file, tqdm(
            desc=str(dest),
            initial=exist_size,
@@ -349,10 +366,20 @@ def download_with_resume(url: str, dest: Path, access_token: str = None) -> Path
            for data in resp.iter_content(chunk_size=1024):
                size = file.write(data)
                bar.update(size)
                downloaded += size
                if event_bus and downloaded // MB10 > previous_interval:
                    previous_interval = downloaded // MB10
                    event_bus.emit_download_progress(url, downloaded, content_length)

    except Exception as e:
        logger.error(f"An error occurred while downloading {dest}: {str(e)}")
        if event_bus:
            event_bus.emit_download_completed(url, 500, None)
        return None

    if event_bus:
        event_bus.emit_download_completed(url, resp.status_code, dest)

    return dest
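The `MB10` bookkeeping that appears in both download helpers implements a simple rate limit: a progress event fires only when the running byte total crosses a new 10 MiB boundary. A self-contained sketch of that arithmetic, with made-up byte counts:

```python
# One event per 10 MiB boundary crossed, never per chunk.
MB10 = 10 * 1048576
previous_interval = 0
for downloaded in (5 * 1048576, 12 * 1048576, 19 * 1048576, 21 * 1048576):
    if downloaded // MB10 > previous_interval:
        previous_interval = downloaded // MB10
        print(f"emit progress at {downloaded} bytes")  # fires at 12 MiB and 21 MiB
```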
@@ -1,4 +1,10 @@
# This file predefines a few models that the user may want to install.
sd-1/main/stable-diffusion-xdl-base:
  description: Stable Diffusion XL base model - NOT YET RELEASED!! (70 GB)
  repo_id: stabilityai/stable-diffusion-xl-base
sd-1/main/stable-diffusion-xdl-refiner:
  description: Stable Diffusion XL refiner model - NOT YET RELEASED!! (60 GB)
  repo_id: stabilityai/stable-diffusion-xl-refiner
sd-1/main/stable-diffusion-v1-5:
  description: Stable Diffusion version 1.5 diffusers model (4.27 GB)
  repo_id: runwayml/stable-diffusion-v1-5
@@ -16,14 +22,6 @@ sd-2/main/stable-diffusion-2-inpainting:
  description: Stable Diffusion version 2.0 inpainting model (5.21 GB)
  repo_id: stabilityai/stable-diffusion-2-inpainting
  recommended: False
sdxl/main/stable-diffusion-xl-base-0-9:
  description: Stable Diffusion XL base model (12 GB; access token required)
  repo_id: stabilityai/stable-diffusion-xl-base-0.9
  recommended: False
sdxl-refiner/main/stable-diffusion-xl-refiner-0-9:
  description: Stable Diffusion XL refiner model (12 GB; access token required)
  repo_id: stabilityai/stable-diffusion-xl-refiner-0.9
  recommended: False
sd-1/main/Analog-Diffusion:
  description: An SD-1.5 model trained on diverse analog photographs (2.13 GB)
  repo_id: wavymulder/Analog-Diffusion
199
invokeai/frontend/web/dist/assets/App-196ba8f8.js
vendored
Normal file
File diff suppressed because one or more lines are too long
169
invokeai/frontend/web/dist/assets/App-3986879c.js
vendored
Normal file
File diff suppressed because one or more lines are too long
169
invokeai/frontend/web/dist/assets/App-879ff07f.js
vendored
File diff suppressed because one or more lines are too long
1
invokeai/frontend/web/dist/assets/MantineProvider-52361224.js
vendored
Normal file
File diff suppressed because one or more lines are too long
1
invokeai/frontend/web/dist/assets/MantineProvider-e5b33be1.js
vendored
Normal file
File diff suppressed because one or more lines are too long
322
invokeai/frontend/web/dist/assets/ThemeLocaleProvider-a0337544.js
vendored
Normal file
File diff suppressed because one or more lines are too long
302
invokeai/frontend/web/dist/assets/ThemeLocaleProvider-fa40c0d9.js
vendored
Normal file
File diff suppressed because one or more lines are too long
125
invokeai/frontend/web/dist/assets/index-15b43c6c.js
vendored
Normal file
File diff suppressed because one or more lines are too long
125
invokeai/frontend/web/dist/assets/index-ba194473.js
vendored
File diff suppressed because one or more lines are too long
125
invokeai/frontend/web/dist/assets/index-f1a5f9cf.js
vendored
Normal file
File diff suppressed because one or more lines are too long
2
invokeai/frontend/web/dist/index.html
vendored
@@ -12,7 +12,7 @@
      margin: 0;
    }
  </style>
  <script type="module" crossorigin src="./assets/index-ba194473.js"></script>
  <script type="module" crossorigin src="./assets/index-f1a5f9cf.js"></script>
</head>

<body dir="ltr">
21 invokeai/frontend/web/dist/locales/en.json vendored
@@ -399,8 +399,6 @@
    "deleteModel": "Delete Model",
    "deleteConfig": "Delete Config",
    "deleteMsg1": "Are you sure you want to delete this model from InvokeAI?",
    "modelDeleted": "Model Deleted",
    "modelDeleteFailed": "Failed to delete model",
    "deleteMsg2": "This WILL delete the model from disk if it is in the InvokeAI root folder. If you are using a custom location, then the model WILL NOT be deleted from disk.",
    "formMessageDiffusersModelLocation": "Diffusers Model Location",
    "formMessageDiffusersModelLocationDesc": "Please enter at least one.",
@@ -410,13 +408,11 @@
    "convertToDiffusers": "Convert To Diffusers",
    "convertToDiffusersHelpText1": "This model will be converted to the 🧨 Diffusers format.",
    "convertToDiffusersHelpText2": "This process will replace your Model Manager entry with the Diffusers version of the same model.",
    "convertToDiffusersHelpText3": "Your checkpoint file on disk WILL be deleted if it is in InvokeAI root folder. If it is in a custom location, then it WILL NOT be deleted.",
    "convertToDiffusersHelpText3": "Your checkpoint file on the disk will NOT be deleted or modified in anyway. You can add your checkpoint to the Model Manager again if you want to.",
    "convertToDiffusersHelpText4": "This is a one time process only. It might take around 30s-60s depending on the specifications of your computer.",
    "convertToDiffusersHelpText5": "Please make sure you have enough disk space. Models generally vary between 2GB-7GB in size.",
    "convertToDiffusersHelpText6": "Do you wish to convert this model?",
    "convertToDiffusersSaveLocation": "Save Location",
    "noCustomLocationProvided": "No Custom Location Provided",
    "convertingModelBegin": "Converting Model. Please wait.",
    "v1": "v1",
    "v2_base": "v2 (512px)",
    "v2_768": "v2 (768px)",
@@ -454,8 +450,7 @@
    "none": "none",
    "addDifference": "Add Difference",
    "pickModelType": "Pick Model Type",
    "selectModel": "Select Model",
    "importModels": "Import Models"
    "selectModel": "Select Model"
  },
  "parameters": {
    "general": "General",
@@ -577,7 +572,6 @@
    "uploadFailedInvalidUploadDesc": "Must be single PNG or JPEG image",
    "downloadImageStarted": "Image Download Started",
    "imageCopied": "Image Copied",
    "problemCopyingImage": "Unable to Copy Image",
    "imageLinkCopied": "Image Link Copied",
    "problemCopyingImageLink": "Unable to Copy Image Link",
    "imageNotLoaded": "No Image Loaded",
@@ -694,15 +688,6 @@
    "reloadSchema": "Reload Schema",
    "saveNodes": "Save Nodes",
    "loadNodes": "Load Nodes",
    "clearNodes": "Clear Nodes",
    "zoomInNodes": "Zoom In",
    "zoomOutNodes": "Zoom Out",
    "fitViewportNodes": "Fit View",
    "hideGraphNodes": "Hide Graph Overlay",
    "showGraphNodes": "Show Graph Overlay",
    "hideLegendNodes": "Hide Field Type Legend",
    "showLegendNodes": "Show Field Type Legend",
    "hideMinimapnodes": "Hide MiniMap",
    "showMinimapnodes": "Show MiniMap"
    "clearNodes": "Clear Nodes"
  }
}
@@ -59,8 +59,15 @@ export const SCHEDULER_LABEL_MAP: Record<SchedulerParam, string> = {

export type Scheduler = (typeof SCHEDULER_NAMES)[number];

// Valid upscaling levels
export const UPSCALING_LEVELS: Array<{ label: string; value: string }> = [
  { label: '2x', value: '2' },
  { label: '4x', value: '4' },
];
export const NUMPY_RAND_MIN = 0;

export const NUMPY_RAND_MAX = 2147483647;

export const FACETOOL_TYPES = ['gfpgan', 'codeformer'] as const;

export const NODE_MIN_WIDTH = 250;
@@ -90,7 +90,6 @@ import { addUserInvokedNodesListener } from './listeners/userInvokedNodes';
import { addUserInvokedTextToImageListener } from './listeners/userInvokedTextToImage';
import { addModelLoadStartedEventListener } from './listeners/socketio/socketModelLoadStarted';
import { addModelLoadCompletedEventListener } from './listeners/socketio/socketModelLoadCompleted';
import { addUpscaleRequestedListener } from './listeners/upscaleRequested';

export const listenerMiddleware = createListenerMiddleware();

@@ -229,5 +228,3 @@ addModelSelectedListener();
addAppStartedListener();
addModelsLoadedListener();
addAppConfigReceivedListener();

addUpscaleRequestedListener();
@@ -1,7 +1,7 @@
import { log } from 'app/logging/useLogger';
import { serializeError } from 'serialize-error';
import { sessionCreated } from 'services/api/thunks/session';
import { startAppListening } from '..';
import { sessionCreated } from 'services/api/thunks/session';
import { serializeError } from 'serialize-error';

const moduleLog = log.child({ namespace: 'session' });
@@ -1,37 +0,0 @@
import { createAction } from '@reduxjs/toolkit';
import { log } from 'app/logging/useLogger';
import { buildAdHocUpscaleGraph } from 'features/nodes/util/graphBuilders/buildAdHocUpscaleGraph';
import { sessionReadyToInvoke } from 'features/system/store/actions';
import { sessionCreated } from 'services/api/thunks/session';
import { startAppListening } from '..';

const moduleLog = log.child({ namespace: 'upscale' });

export const upscaleRequested = createAction<{ image_name: string }>(
  `upscale/upscaleRequested`
);

export const addUpscaleRequestedListener = () => {
  startAppListening({
    actionCreator: upscaleRequested,
    effect: async (
      action,
      { dispatch, getState, take, unsubscribe, subscribe }
    ) => {
      const { image_name } = action.payload;
      const { esrganModelName } = getState().postprocessing;

      const graph = buildAdHocUpscaleGraph({
        image_name,
        esrganModelName,
      });

      // Create a session to run the graph & wait til it's ready to invoke
      dispatch(sessionCreated({ graph }));

      await take(sessionCreated.fulfilled.match);

      dispatch(sessionReadyToInvoke());
    },
  });
};
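The deleted listener above was the glue between the UI and the ad-hoc upscale graph: it dispatched sessionCreated with the graph, waited for the thunk to resolve, then signalled readiness to invoke. A hypothetical call site, with an invented image name, would have been as simple as:

// hypothetical usage of the deleted listener's action ('example.png' is invented)
store.dispatch(upscaleRequested({ image_name: 'example.png' }));
// the listener then built the graph, created a session, and awaited
// sessionCreated.fulfilled before dispatching sessionReadyToInvoke()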
@@ -11,7 +11,7 @@ interface ItemProps extends React.ComponentPropsWithoutRef<'div'> {

const IAIMantineSelectItemWithTooltip = forwardRef<HTMLDivElement, ItemProps>(
  ({ label, tooltip, description, disabled, ...others }: ItemProps, ref) => (
    <Tooltip label={tooltip} placement="top" hasArrow openDelay={500}>
    <Tooltip label={tooltip} placement="top" hasArrow>
      <Box ref={ref} {...others}>
      <Box>
        <Text>{label}</Text>
@@ -5,26 +5,34 @@ import {
  ButtonGroup,
  Flex,
  FlexProps,
  Link,
  Menu,
  MenuButton,
  MenuItem,
  MenuList,
} from '@chakra-ui/react';
// import { runESRGAN, runFacetool } from 'app/socketio/actions';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIButton from 'common/components/IAIButton';
import IAIIconButton from 'common/components/IAIIconButton';
import IAIPopover from 'common/components/IAIPopover';

import { skipToken } from '@reduxjs/toolkit/dist/query';
import { useAppToaster } from 'app/components/Toaster';
import { upscaleRequested } from 'app/store/middleware/listenerMiddleware/listeners/upscaleRequested';
import { stateSelector } from 'app/store/store';
import { setInitialCanvasImage } from 'features/canvas/store/canvasSlice';
import { requestCanvasRescale } from 'features/canvas/store/thunks/requestCanvasScale';
import { DeleteImageButton } from 'features/imageDeletion/components/DeleteImageButton';
import { imageToDeleteSelected } from 'features/imageDeletion/store/imageDeletionSlice';
import ParamUpscalePopover from 'features/parameters/components/Parameters/Upscale/ParamUpscaleSettings';
import FaceRestoreSettings from 'features/parameters/components/Parameters/FaceRestore/FaceRestoreSettings';
import UpscaleSettings from 'features/parameters/components/Parameters/Upscale/UpscaleSettings';
import { useRecallParameters } from 'features/parameters/hooks/useRecallParameters';
import { initialImageSelected } from 'features/parameters/store/actions';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import { useCopyImageToClipboard } from 'features/ui/hooks/useCopyImageToClipboard';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import {
  setActiveTab,
  setShouldShowImageDetails,
  setShouldShowProgressInViewer,
} from 'features/ui/store/uiSlice';
@@ -34,25 +42,38 @@ import { useTranslation } from 'react-i18next';
import {
  FaAsterisk,
  FaCode,
  FaCopy,
  FaDownload,
  FaExpandArrowsAlt,
  FaGrinStars,
  FaHourglassHalf,
  FaQuoteRight,
  FaSeedling,
  FaShare,
  FaShareAlt,
} from 'react-icons/fa';
import {
  useGetImageDTOQuery,
  useGetImageMetadataQuery,
} from 'services/api/endpoints/images';
import { menuListMotionProps } from 'theme/components/menu';
import { useDebounce } from 'use-debounce';
import { sentImageToImg2Img } from '../../store/actions';
import { sentImageToCanvas, sentImageToImg2Img } from '../../store/actions';
import { menuListMotionProps } from 'theme/components/menu';
import SingleSelectionMenuItems from '../ImageContextMenu/SingleSelectionMenuItems';

const currentImageButtonsSelector = createSelector(
  [stateSelector, activeTabNameSelector],
  ({ gallery, system, ui }, activeTabName) => {
    const { isProcessing, isConnected, shouldConfirmOnDelete, progressImage } =
      system;
  ({ gallery, system, postprocessing, ui }, activeTabName) => {
    const {
      isProcessing,
      isConnected,
      isGFPGANAvailable,
      isESRGANAvailable,
      shouldConfirmOnDelete,
      progressImage,
    } = system;

    const { upscalingLevel, facetoolStrength } = postprocessing;

    const {
      shouldShowImageDetails,
@@ -67,6 +88,10 @@ const currentImageButtonsSelector = createSelector(
      shouldConfirmOnDelete,
      isProcessing,
      isConnected,
      isGFPGANAvailable,
      isESRGANAvailable,
      upscalingLevel,
      facetoolStrength,
      shouldDisableToolbarButtons: Boolean(progressImage) || !lastSelectedImage,
      shouldShowImageDetails,
      activeTabName,
@@ -89,17 +114,27 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
  const {
    isProcessing,
    isConnected,
    isGFPGANAvailable,
    isESRGANAvailable,
    upscalingLevel,
    facetoolStrength,
    shouldDisableToolbarButtons,
    shouldShowImageDetails,
    activeTabName,
    lastSelectedImage,
    shouldShowProgressInViewer,
  } = useAppSelector(currentImageButtonsSelector);

  const isCanvasEnabled = useFeatureStatus('unifiedCanvas').isFeatureEnabled;
  const isUpscalingEnabled = useFeatureStatus('upscaling').isFeatureEnabled;
  const isFaceRestoreEnabled = useFeatureStatus('faceRestore').isFeatureEnabled;

  const toaster = useAppToaster();
  const { t } = useTranslation();

  const { isClipboardAPIAvailable, copyImageToClipboard } =
    useCopyImageToClipboard();

  const { recallBothPrompts, recallSeed, recallAllParameters } =
    useRecallParameters();

@@ -120,6 +155,42 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {

  const metadata = metadataData?.metadata;

  const handleCopyImageLink = useCallback(() => {
    const getImageUrl = () => {
      if (!imageDTO) {
        return;
      }

      if (imageDTO.image_url.startsWith('http')) {
        return imageDTO.image_url;
      }

      return window.location.toString() + imageDTO.image_url;
    };

    const url = getImageUrl();

    if (!url) {
      toaster({
        title: t('toast.problemCopyingImageLink'),
        status: 'error',
        duration: 2500,
        isClosable: true,
      });

      return;
    }

    navigator.clipboard.writeText(url).then(() => {
      toaster({
        title: t('toast.imageLinkCopied'),
        status: 'success',
        duration: 2500,
        isClosable: true,
      });
    });
  }, [toaster, t, imageDTO]);

  const handleClickUseAllParameters = useCallback(() => {
    recallAllParameters(metadata);
  }, [metadata, recallAllParameters]);
@@ -152,11 +223,8 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
  useHotkeys('shift+i', handleSendToImageToImage, [imageDTO]);

  const handleClickUpscale = useCallback(() => {
    if (!imageDTO) {
      return;
    }
    dispatch(upscaleRequested({ image_name: imageDTO.image_name }));
  }, [dispatch, imageDTO]);
    // selectedImage && dispatch(runESRGAN(selectedImage));
  }, []);

  const handleDelete = useCallback(() => {
    if (!imageDTO) {
@@ -174,17 +242,53 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
      enabled: () =>
        Boolean(
          isUpscalingEnabled &&
            isESRGANAvailable &&
            !shouldDisableToolbarButtons &&
            isConnected &&
            !isProcessing
            !isProcessing &&
            upscalingLevel
        ),
    },
    [
      isUpscalingEnabled,
      imageDTO,
      isESRGANAvailable,
      shouldDisableToolbarButtons,
      isConnected,
      isProcessing,
      upscalingLevel,
    ]
  );

  const handleClickFixFaces = useCallback(() => {
    // selectedImage && dispatch(runFacetool(selectedImage));
  }, []);

  useHotkeys(
    'Shift+R',
    () => {
      handleClickFixFaces();
    },
    {
      enabled: () =>
        Boolean(
          isFaceRestoreEnabled &&
            isGFPGANAvailable &&
            !shouldDisableToolbarButtons &&
            isConnected &&
            !isProcessing &&
            facetoolStrength
        ),
    },

    [
      isFaceRestoreEnabled,
      imageDTO,
      isGFPGANAvailable,
      shouldDisableToolbarButtons,
      isConnected,
      isProcessing,
      facetoolStrength,
    ]
  );
@@ -193,6 +297,25 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
    [dispatch, shouldShowImageDetails]
  );

  const handleSendToCanvas = useCallback(() => {
    if (!imageDTO) return;
    dispatch(sentImageToCanvas());

    dispatch(setInitialCanvasImage(imageDTO));
    dispatch(requestCanvasRescale());

    if (activeTabName !== 'unifiedCanvas') {
      dispatch(setActiveTab('unifiedCanvas'));
    }

    toaster({
      title: t('toast.sentToUnifiedCanvas'),
      status: 'success',
      duration: 2500,
      isClosable: true,
    });
  }, [imageDTO, dispatch, activeTabName, toaster, t]);

  useHotkeys(
    'i',
    () => {
@@ -214,6 +337,13 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
    dispatch(setShouldShowProgressInViewer(!shouldShowProgressInViewer));
  }, [dispatch, shouldShowProgressInViewer]);

  const handleCopyImage = useCallback(() => {
    if (!imageDTO) {
      return;
    }
    copyImageToClipboard(imageDTO.image_url);
  }, [copyImageToClipboard, imageDTO]);

  return (
    <>
      <Flex
@@ -266,12 +396,72 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
          />
        </ButtonGroup>

        {isUpscalingEnabled && (
        {(isUpscalingEnabled || isFaceRestoreEnabled) && (
          <ButtonGroup
            isAttached={true}
            isDisabled={shouldDisableToolbarButtons}
          >
            {isUpscalingEnabled && <ParamUpscalePopover imageDTO={imageDTO} />}
            {isFaceRestoreEnabled && (
              <IAIPopover
                triggerComponent={
                  <IAIIconButton
                    icon={<FaGrinStars />}
                    aria-label={t('parameters.restoreFaces')}
                  />
                }
              >
                <Flex
                  sx={{
                    flexDirection: 'column',
                    rowGap: 4,
                  }}
                >
                  <FaceRestoreSettings />
                  <IAIButton
                    isDisabled={
                      !isGFPGANAvailable ||
                      !imageDTO ||
                      !(isConnected && !isProcessing) ||
                      !facetoolStrength
                    }
                    onClick={handleClickFixFaces}
                  >
                    {t('parameters.restoreFaces')}
                  </IAIButton>
                </Flex>
              </IAIPopover>
            )}

            {isUpscalingEnabled && (
              <IAIPopover
                triggerComponent={
                  <IAIIconButton
                    icon={<FaExpandArrowsAlt />}
                    aria-label={t('parameters.upscale')}
                  />
                }
              >
                <Flex
                  sx={{
                    flexDirection: 'column',
                    gap: 4,
                  }}
                >
                  <UpscaleSettings />
                  <IAIButton
                    isDisabled={
                      !isESRGANAvailable ||
                      !imageDTO ||
                      !(isConnected && !isProcessing) ||
                      !upscalingLevel
                    }
                    onClick={handleClickUpscale}
                  >
                    {t('parameters.upscaleImage')}
                  </IAIButton>
                </Flex>
              </IAIPopover>
            )}
          </ButtonGroup>
        )}
@@ -12,10 +12,7 @@ import { modelIdToMainModelParam } from 'features/parameters/util/modelIdToMainM
import { forEach } from 'lodash-es';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import {
  useGetMainModelsQuery,
  useGetOnnxModelsQuery,
} from 'services/api/endpoints/models';
import { useGetMainModelsQuery } from 'services/api/endpoints/models';
import { FieldComponentProps } from './types';

const ModelInputFieldComponent = (
@@ -26,7 +23,6 @@ const ModelInputFieldComponent = (
  const dispatch = useAppDispatch();
  const { t } = useTranslation();

  const { data: onnxModels } = useGetOnnxModelsQuery();
  const { data: mainModels, isLoading } = useGetMainModelsQuery();

  const data = useMemo(() => {
@@ -48,39 +44,17 @@ const ModelInputFieldComponent = (
      });
    });

    if (onnxModels) {
      forEach(onnxModels.entities, (model, id) => {
        if (!model) {
          return;
        }

        data.push({
          value: id,
          label: model.model_name,
          group: BASE_MODEL_NAME_MAP[model.base_model],
        });
      });
    }
    return data;
  }, [mainModels, onnxModels]);
  }, [mainModels]);

  // grab the full model entity from the RTK Query cache
  // TODO: maybe we should just store the full model entity in state?
  const selectedModel = useMemo(
    () =>
      (mainModels?.entities[
      mainModels?.entities[
        `${field.value?.base_model}/main/${field.value?.model_name}`
      ] ||
      onnxModels?.entities[
        `${field.value?.base_model}/onnx/${field.value?.model_name}`
      ]) ??
      null,
      [
        field.value?.base_model,
        field.value?.model_name,
        mainModels?.entities,
        onnxModels?.entities,
      ]
      ] ?? null,
    [field.value?.base_model, field.value?.model_name, mainModels?.entities]
  );

  const handleChangeModel = useCallback(
@@ -9,7 +9,6 @@ import {
  CLIP_SKIP,
  LORA_LOADER,
  MAIN_MODEL_LOADER,
  ONNX_MODEL_LOADER,
  METADATA_ACCUMULATOR,
  NEGATIVE_CONDITIONING,
  POSITIVE_CONDITIONING,
@@ -18,8 +17,7 @@ import {
export const addLoRAsToGraph = (
  state: RootState,
  graph: NonNullableGraph,
  baseNodeId: string,
  modelLoader: string = MAIN_MODEL_LOADER
  baseNodeId: string
): void => {
  /**
   * LoRA nodes get the UNet and CLIP models from the main model loader and apply the LoRA to them.
@@ -42,10 +40,6 @@ export const addLoRAsToGraph = (
      !(
        e.source.node_id === MAIN_MODEL_LOADER &&
        ['unet'].includes(e.source.field)
      ) &&
      !(
        e.source.node_id === ONNX_MODEL_LOADER &&
        ['unet'].includes(e.source.field)
      )
  );
  // Remove CLIP_SKIP connections to conditionings to feed it through LoRAs
@@ -80,11 +74,12 @@ export const addLoRAsToGraph = (

    // add to graph
    graph.nodes[currentLoraNodeId] = loraLoaderNode;

    if (currentLoraIndex === 0) {
      // first lora = start the lora chain, attach directly to model loader
      graph.edges.push({
        source: {
          node_id: modelLoader,
          node_id: MAIN_MODEL_LOADER,
          field: 'unet',
        },
        destination: {
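The chaining rule in the hunk above is: the first LoRA takes its 'unet' input directly from the model loader, and each subsequent LoRA takes it from the previous LoRA node. A simplified sketch of that edge-building logic (the Edge shape and the lora_loader_${i} id scheme are assumptions for illustration, not lifted from the diff):

type Edge = {
  source: { node_id: string; field: string };
  destination: { node_id: string; field: string };
};

// build the unet chain: model loader -> lora_0 -> lora_1 -> ...
function buildLoraUnetChain(loraCount: number): Edge[] {
  const edges: Edge[] = [];
  let lastSource = 'main_model_loader'; // MAIN_MODEL_LOADER in the hunk above
  for (let i = 0; i < loraCount; i++) {
    const loraId = `lora_loader_${i}`; // hypothetical id scheme
    edges.push({
      source: { node_id: lastSource, field: 'unet' },
      destination: { node_id: loraId, field: 'unet' },
    });
    lastSource = loraId; // the next LoRA chains off this one
  }
  return edges;
}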
@@ -9,15 +9,13 @@ import {
  LATENTS_TO_IMAGE,
  MAIN_MODEL_LOADER,
  METADATA_ACCUMULATOR,
  ONNX_MODEL_LOADER,
  TEXT_TO_IMAGE_GRAPH,
  VAE_LOADER,
} from './constants';

export const addVAEToGraph = (
  state: RootState,
  graph: NonNullableGraph,
  modelLoader: string = MAIN_MODEL_LOADER
  graph: NonNullableGraph
): void => {
  const { vae } = state.generation;

@@ -33,12 +31,12 @@ export const addVAEToGraph = (
      vae_model: vae,
    };
  }
  const isOnnxModel = modelLoader == ONNX_MODEL_LOADER;

  if (graph.id === TEXT_TO_IMAGE_GRAPH || graph.id === IMAGE_TO_IMAGE_GRAPH) {
    graph.edges.push({
      source: {
        node_id: isAutoVae ? modelLoader : VAE_LOADER,
        field: isAutoVae && isOnnxModel ? 'vae_decoder' : 'vae',
        node_id: isAutoVae ? MAIN_MODEL_LOADER : VAE_LOADER,
        field: 'vae',
      },
      destination: {
        node_id: LATENTS_TO_IMAGE,
@@ -50,8 +48,8 @@ export const addVAEToGraph = (
  if (graph.id === IMAGE_TO_IMAGE_GRAPH) {
    graph.edges.push({
      source: {
        node_id: isAutoVae ? modelLoader : VAE_LOADER,
        field: isAutoVae && isOnnxModel ? 'vae_decoder' : 'vae',
        node_id: isAutoVae ? MAIN_MODEL_LOADER : VAE_LOADER,
        field: 'vae',
      },
      destination: {
        node_id: IMAGE_TO_LATENTS,
@@ -63,8 +61,8 @@ export const addVAEToGraph = (
  if (graph.id === INPAINT_GRAPH) {
    graph.edges.push({
      source: {
        node_id: isAutoVae ? modelLoader : VAE_LOADER,
        field: isAutoVae && isOnnxModel ? 'vae_decoder' : 'vae',
        node_id: isAutoVae ? MAIN_MODEL_LOADER : VAE_LOADER,
        field: 'vae',
      },
      destination: {
        node_id: INPAINT,
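After this change, the VAE edge source is decided by a single rule: when no custom VAE is configured ('auto'), the main model loader supplies the 'vae' field; otherwise a dedicated VAE loader does. A one-function sketch of that rule, using the string values the constants above map to:

// isAutoVae = no explicit VAE chosen in generation state
function vaeSourceNode(isAutoVae: boolean): { node_id: string; field: string } {
  return {
    node_id: isAutoVae ? 'main_model_loader' : 'vae_loader', // MAIN_MODEL_LOADER / VAE_LOADER
    field: 'vae',
  };
}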
@@ -1,32 +0,0 @@
import { NonNullableGraph } from 'features/nodes/types/types';
import { ESRGANModelName } from 'features/parameters/store/postprocessingSlice';
import { Graph, ESRGANInvocation } from 'services/api/types';
import { REALESRGAN as ESRGAN } from './constants';

type Arg = {
  image_name: string;
  esrganModelName: ESRGANModelName;
};

export const buildAdHocUpscaleGraph = ({
  image_name,
  esrganModelName,
}: Arg): Graph => {
  const realesrganNode: ESRGANInvocation = {
    id: ESRGAN,
    type: 'esrgan',
    image: { image_name },
    model_name: esrganModelName,
    is_intermediate: false,
  };

  const graph: NonNullableGraph = {
    id: `adhoc-esrgan-graph`,
    nodes: {
      [ESRGAN]: realesrganNode,
    },
    edges: [],
  };

  return graph;
};
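For orientation, the builder above produced a single-node graph with no edges. With invented argument values, the result would look roughly like this:

// approximate output of buildAdHocUpscaleGraph (argument values invented for illustration)
const exampleGraph = {
  id: 'adhoc-esrgan-graph',
  nodes: {
    esrgan: {
      id: 'esrgan', // the REALESRGAN constant's value
      type: 'esrgan',
      image: { image_name: 'example.png' }, // hypothetical image name
      model_name: 'RealESRGAN_x4plus.pth',
      is_intermediate: false,
    },
  },
  edges: [],
};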
@@ -18,7 +18,6 @@ import {
  LATENTS_TO_IMAGE,
  LATENTS_TO_LATENTS,
  MAIN_MODEL_LOADER,
  ONNX_MODEL_LOADER,
  METADATA_ACCUMULATOR,
  NEGATIVE_CONDITIONING,
  NOISE,
@@ -60,9 +59,6 @@ export const buildCanvasImageToImageGraph = (
    ? shouldUseCpuNoise
    : initialGenerationState.shouldUseCpuNoise;

  const onnx_model_type = model.model_type.includes('onnx');
  const model_loader = onnx_model_type ? ONNX_MODEL_LOADER : MAIN_MODEL_LOADER;

  /**
   * The easiest way to build linear graphs is to do it in the node editor, then copy and paste the
   * full graph here as a template. Then use the parameters from app state and set friendlier node
@@ -73,17 +69,16 @@ export const buildCanvasImageToImageGraph = (
   */

  // copy-pasted graph from node editor, filled in with state values & friendly node ids
  // TODO: Actually create the graph correctly for ONNX
  const graph: NonNullableGraph = {
    id: IMAGE_TO_IMAGE_GRAPH,
    nodes: {
      [POSITIVE_CONDITIONING]: {
        type: onnx_model_type ? 'prompt_onnx' : 'compel',
        type: 'compel',
        id: POSITIVE_CONDITIONING,
        prompt: positivePrompt,
      },
      [NEGATIVE_CONDITIONING]: {
        type: onnx_model_type ? 'prompt_onnx' : 'compel',
        type: 'compel',
        id: NEGATIVE_CONDITIONING,
        prompt: negativePrompt,
      },
@@ -92,9 +87,9 @@ export const buildCanvasImageToImageGraph = (
        id: NOISE,
        use_cpu,
      },
      [model_loader]: {
        type: model_loader,
        id: model_loader,
      [MAIN_MODEL_LOADER]: {
        type: 'main_model_loader',
        id: MAIN_MODEL_LOADER,
        model,
      },
      [CLIP_SKIP]: {
@@ -103,11 +98,11 @@ export const buildCanvasImageToImageGraph = (
        skipped_layers: clipSkip,
      },
      [LATENTS_TO_IMAGE]: {
        type: onnx_model_type ? 'l2i_onnx' : 'l2i',
        type: 'l2i',
        id: LATENTS_TO_IMAGE,
      },
      [LATENTS_TO_LATENTS]: {
        type: onnx_model_type ? 'l2l_onnx' : 'l2l',
        type: 'l2l',
        id: LATENTS_TO_LATENTS,
        cfg_scale,
        scheduler,
@@ -115,7 +110,7 @@ export const buildCanvasImageToImageGraph = (
        strength,
      },
      [IMAGE_TO_LATENTS]: {
        type: onnx_model_type ? 'i2l_onnx' : 'i2l',
        type: 'i2l',
        id: IMAGE_TO_LATENTS,
        // must be set manually later, bc `fit` parameter may require a resize node inserted
        // image: {
@@ -126,7 +121,7 @@ export const buildCanvasImageToImageGraph = (
    edges: [
      {
        source: {
          node_id: model_loader,
          node_id: MAIN_MODEL_LOADER,
          field: 'clip',
        },
        destination: {
@@ -186,7 +181,7 @@ export const buildCanvasImageToImageGraph = (
      },
      {
        source: {
          node_id: model_loader,
          node_id: MAIN_MODEL_LOADER,
          field: 'unet',
        },
        destination: {
@@ -318,10 +313,10 @@ export const buildCanvasImageToImageGraph = (
  });

  // add LoRA support
  addLoRAsToGraph(state, graph, LATENTS_TO_LATENTS, model_loader);
  addLoRAsToGraph(state, graph, LATENTS_TO_LATENTS);

  // optionally add custom VAE
  addVAEToGraph(state, graph, model_loader);
  addVAEToGraph(state, graph);

  // add dynamic prompts - also sets up core iteration and seed
  addDynamicPromptsToGraph(state, graph);
@@ -15,7 +15,6 @@ import {
  INPAINT_GRAPH,
  ITERATE,
  MAIN_MODEL_LOADER,
  ONNX_MODEL_LOADER,
  NEGATIVE_CONDITIONING,
  POSITIVE_CONDITIONING,
  RANDOM_INT,
@@ -64,11 +63,6 @@ export const buildCanvasInpaintGraph = (
  // We may need to set the inpaint width and height to scale the image
  const { scaledBoundingBoxDimensions, boundingBoxScaleMethod } = state.canvas;

  const model_loader = model.model_type.includes('onnx')
    ? ONNX_MODEL_LOADER
    : MAIN_MODEL_LOADER;

  // TODO: Actually create the graph correctly for ONNX
  const graph: NonNullableGraph = {
    id: INPAINT_GRAPH,
    nodes: {
@@ -113,9 +107,9 @@ export const buildCanvasInpaintGraph = (
        id: NEGATIVE_CONDITIONING,
        prompt: negativePrompt,
      },
      [model_loader]: {
        type: model_loader,
        id: model_loader,
      [MAIN_MODEL_LOADER]: {
        type: 'main_model_loader',
        id: MAIN_MODEL_LOADER,
        model,
      },
      [CLIP_SKIP]: {
@@ -139,7 +133,7 @@ export const buildCanvasInpaintGraph = (
    edges: [
      {
        source: {
          node_id: model_loader,
          node_id: MAIN_MODEL_LOADER,
          field: 'unet',
        },
        destination: {
@@ -149,7 +143,7 @@ export const buildCanvasInpaintGraph = (
      },
      {
        source: {
          node_id: model_loader,
          node_id: MAIN_MODEL_LOADER,
          field: 'clip',
        },
        destination: {
@@ -10,7 +10,6 @@ import {
  CLIP_SKIP,
  LATENTS_TO_IMAGE,
  MAIN_MODEL_LOADER,
  ONNX_MODEL_LOADER,
  METADATA_ACCUMULATOR,
  NEGATIVE_CONDITIONING,
  NOISE,
@@ -50,8 +49,7 @@ export const buildCanvasTextToImageGraph = (
  const use_cpu = shouldUseNoiseSettings
    ? shouldUseCpuNoise
    : initialGenerationState.shouldUseCpuNoise;
  const onnx_model_type = model.model_type.includes('onnx');
  const model_loader = onnx_model_type ? ONNX_MODEL_LOADER : MAIN_MODEL_LOADER;

  /**
   * The easiest way to build linear graphs is to do it in the node editor, then copy and paste the
   * full graph here as a template. Then use the parameters from app state and set friendlier node
@@ -62,17 +60,16 @@ export const buildCanvasTextToImageGraph = (
   */

  // copy-pasted graph from node editor, filled in with state values & friendly node ids
  // TODO: Actually create the graph correctly for ONNX
  const graph: NonNullableGraph = {
    id: TEXT_TO_IMAGE_GRAPH,
    nodes: {
      [POSITIVE_CONDITIONING]: {
        type: onnx_model_type ? 'prompt_onnx' : 'compel',
        type: 'compel',
        id: POSITIVE_CONDITIONING,
        prompt: positivePrompt,
      },
      [NEGATIVE_CONDITIONING]: {
        type: onnx_model_type ? 'prompt_onnx' : 'compel',
        type: 'compel',
        id: NEGATIVE_CONDITIONING,
        prompt: negativePrompt,
      },
@@ -84,15 +81,15 @@ export const buildCanvasTextToImageGraph = (
        use_cpu,
      },
      [TEXT_TO_LATENTS]: {
        type: onnx_model_type ? 't2l_onnx' : 't2l',
        type: 't2l',
        id: TEXT_TO_LATENTS,
        cfg_scale,
        scheduler,
        steps,
      },
      [model_loader]: {
        type: model_loader,
        id: model_loader,
      [MAIN_MODEL_LOADER]: {
        type: 'main_model_loader',
        id: MAIN_MODEL_LOADER,
        model,
      },
      [CLIP_SKIP]: {
@@ -101,7 +98,7 @@ export const buildCanvasTextToImageGraph = (
        skipped_layers: clipSkip,
      },
      [LATENTS_TO_IMAGE]: {
        type: onnx_model_type ? 'l2i_onnx' : 'l2i',
        type: 'l2i',
        id: LATENTS_TO_IMAGE,
      },
    },
@@ -128,7 +125,7 @@ export const buildCanvasTextToImageGraph = (
      },
      {
        source: {
          node_id: model_loader,
          node_id: MAIN_MODEL_LOADER,
          field: 'clip',
        },
        destination: {
@@ -158,7 +155,7 @@ export const buildCanvasTextToImageGraph = (
      },
      {
        source: {
          node_id: model_loader,
          node_id: MAIN_MODEL_LOADER,
          field: 'unet',
        },
        destination: {
@@ -222,10 +219,10 @@ export const buildCanvasTextToImageGraph = (
  });

  // add LoRA support
  addLoRAsToGraph(state, graph, TEXT_TO_LATENTS, model_loader);
  addLoRAsToGraph(state, graph, TEXT_TO_LATENTS);

  // optionally add custom VAE
  addVAEToGraph(state, graph, model_loader);
  addVAEToGraph(state, graph);

  // add dynamic prompts - also sets up core iteration and seed
  addDynamicPromptsToGraph(state, graph);
@@ -17,7 +17,6 @@ import {
  LATENTS_TO_IMAGE,
  LATENTS_TO_LATENTS,
  MAIN_MODEL_LOADER,
  ONNX_MODEL_LOADER,
  METADATA_ACCUMULATOR,
  NEGATIVE_CONDITIONING,
  NOISE,
@@ -83,17 +82,13 @@ export const buildLinearImageToImageGraph = (
    ? shouldUseCpuNoise
    : initialGenerationState.shouldUseCpuNoise;

  const onnx_model_type = model.model_type.includes('onnx');
  const model_loader = onnx_model_type ? ONNX_MODEL_LOADER : MAIN_MODEL_LOADER;

  // copy-pasted graph from node editor, filled in with state values & friendly node ids
  // TODO: Actually create the graph correctly for ONNX
  const graph: NonNullableGraph = {
    id: IMAGE_TO_IMAGE_GRAPH,
    nodes: {
      [model_loader]: {
        type: model_loader,
        id: model_loader,
      [MAIN_MODEL_LOADER]: {
        type: 'main_model_loader',
        id: MAIN_MODEL_LOADER,
        model,
      },
      [CLIP_SKIP]: {
@@ -102,12 +97,12 @@ export const buildLinearImageToImageGraph = (
        skipped_layers: clipSkip,
      },
      [POSITIVE_CONDITIONING]: {
        type: onnx_model_type ? 'prompt_onnx' : 'compel',
        type: 'compel',
        id: POSITIVE_CONDITIONING,
        prompt: positivePrompt,
      },
      [NEGATIVE_CONDITIONING]: {
        type: onnx_model_type ? 'prompt_onnx' : 'compel',
        type: 'compel',
        id: NEGATIVE_CONDITIONING,
        prompt: negativePrompt,
      },
@@ -117,11 +112,11 @@ export const buildLinearImageToImageGraph = (
        use_cpu,
      },
      [LATENTS_TO_IMAGE]: {
        type: onnx_model_type ? 'l2i_onnx' : 'l2i',
        type: 'l2i',
        id: LATENTS_TO_IMAGE,
      },
      [LATENTS_TO_LATENTS]: {
        type: onnx_model_type ? 'l2l_onnx' : 'l2l',
        type: 'l2l',
        id: LATENTS_TO_LATENTS,
        cfg_scale,
        scheduler,
@@ -129,7 +124,7 @@ export const buildLinearImageToImageGraph = (
        strength,
      },
      [IMAGE_TO_LATENTS]: {
        type: onnx_model_type ? 'i2l_onnx' : 'i2l',
        type: 'i2l',
        id: IMAGE_TO_LATENTS,
        // must be set manually later, bc `fit` parameter may require a resize node inserted
        // image: {
@@ -140,7 +135,7 @@ export const buildLinearImageToImageGraph = (
    edges: [
      {
        source: {
          node_id: model_loader,
          node_id: MAIN_MODEL_LOADER,
          field: 'unet',
        },
        destination: {
@@ -150,7 +145,7 @@ export const buildLinearImageToImageGraph = (
      },
      {
        source: {
          node_id: model_loader,
          node_id: MAIN_MODEL_LOADER,
          field: 'clip',
        },
        destination: {
@@ -371,10 +366,10 @@ export const buildLinearImageToImageGraph = (
  });

  // add LoRA support
  addLoRAsToGraph(state, graph, LATENTS_TO_LATENTS, model_loader);
  addLoRAsToGraph(state, graph, LATENTS_TO_LATENTS);

  // optionally add custom VAE
  addVAEToGraph(state, graph, model_loader);
  addVAEToGraph(state, graph);

  // add dynamic prompts - also sets up core iteration and seed
  addDynamicPromptsToGraph(state, graph);
@@ -6,13 +6,10 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
import { addDynamicPromptsToGraph } from './addDynamicPromptsToGraph';
import { addLoRAsToGraph } from './addLoRAsToGraph';
import { addVAEToGraph } from './addVAEToGraph';
import { BaseModelType, OnnxModelField } from 'services/api/types';

import {
  CLIP_SKIP,
  LATENTS_TO_IMAGE,
  MAIN_MODEL_LOADER,
  ONNX_MODEL_LOADER,
  METADATA_ACCUMULATOR,
  NEGATIVE_CONDITIONING,
  NOISE,
@@ -49,8 +46,6 @@ export const buildLinearTextToImageGraph = (
    throw new Error('No model found in state');
  }

  const onnx_model_type = model.model_type.includes('onnx');
  const model_loader = onnx_model_type ? ONNX_MODEL_LOADER : MAIN_MODEL_LOADER;
  /**
   * The easiest way to build linear graphs is to do it in the node editor, then copy and paste the
   * full graph here as a template. Then use the parameters from app state and set friendlier node
@@ -61,14 +56,12 @@ export const buildLinearTextToImageGraph = (
   */

  // copy-pasted graph from node editor, filled in with state values & friendly node ids

  // TODO: Actually create the graph correctly for ONNX
  const graph: NonNullableGraph = {
    id: TEXT_TO_IMAGE_GRAPH,
    nodes: {
      [model_loader]: {
        type: model_loader,
        id: model_loader,
      [MAIN_MODEL_LOADER]: {
        type: 'main_model_loader',
        id: MAIN_MODEL_LOADER,
        model,
      },
      [CLIP_SKIP]: {
@@ -77,12 +70,12 @@ export const buildLinearTextToImageGraph = (
        skipped_layers: clipSkip,
      },
      [POSITIVE_CONDITIONING]: {
        type: onnx_model_type ? 'prompt_onnx' : 'compel',
        type: 'compel',
        id: POSITIVE_CONDITIONING,
        prompt: positivePrompt,
      },
      [NEGATIVE_CONDITIONING]: {
        type: onnx_model_type ? 'prompt_onnx' : 'compel',
        type: 'compel',
        id: NEGATIVE_CONDITIONING,
        prompt: negativePrompt,
      },
@@ -94,21 +87,21 @@ export const buildLinearTextToImageGraph = (
        use_cpu,
      },
      [TEXT_TO_LATENTS]: {
        type: onnx_model_type ? 't2l_onnx' : 't2l',
        type: 't2l',
        id: TEXT_TO_LATENTS,
        cfg_scale,
        scheduler,
        steps,
      },
      [LATENTS_TO_IMAGE]: {
        type: onnx_model_type ? 'l2i_onnx' : 'l2i',
        type: 'l2i',
        id: LATENTS_TO_IMAGE,
      },
    },
    edges: [
      {
        source: {
          node_id: model_loader,
          node_id: MAIN_MODEL_LOADER,
          field: 'clip',
        },
        destination: {
@@ -118,7 +111,7 @@ export const buildLinearTextToImageGraph = (
      },
      {
        source: {
          node_id: model_loader,
          node_id: MAIN_MODEL_LOADER,
          field: 'unet',
        },
        destination: {
@@ -222,10 +215,10 @@ export const buildLinearTextToImageGraph = (
  });

  // add LoRA support
  addLoRAsToGraph(state, graph, TEXT_TO_LATENTS, model_loader);
  addLoRAsToGraph(state, graph, TEXT_TO_LATENTS);

  // optionally add custom VAE
  addVAEToGraph(state, graph, model_loader);
  addVAEToGraph(state, graph);

  // add dynamic prompts - also sets up core iteration and seed
  addDynamicPromptsToGraph(state, graph);
@@ -8,7 +8,6 @@ export const RANDOM_INT = 'rand_int';
export const RANGE_OF_SIZE = 'range_of_size';
export const ITERATE = 'iterate';
export const MAIN_MODEL_LOADER = 'main_model_loader';
export const ONNX_MODEL_LOADER = 'onnx_model_loader';
export const VAE_LOADER = 'vae_loader';
export const LORA_LOADER = 'lora_loader';
export const CLIP_SKIP = 'clip_skip';
@@ -21,9 +20,6 @@ export const DYNAMIC_PROMPT = 'dynamic_prompt';
export const IMAGE_COLLECTION = 'image_collection';
export const IMAGE_COLLECTION_ITERATE = 'image_collection_iterate';
export const METADATA_ACCUMULATOR = 'metadata_accumulator';
export const REALESRGAN = 'esrgan';
export const DIVIDE = 'divide';
export const SCALE = 'scale_image';

// friendly graph ids
export const TEXT_TO_IMAGE_GRAPH = 'text_to_image_graph';
@@ -1,17 +0,0 @@
import { BaseModelType, MainModelField, ModelType } from 'services/api/types';

/**
 * Crudely converts a model id to a main model field
 * TODO: Make better
 */
export const modelIdToMainModelField = (modelId: string): MainModelField => {
  const [base_model, model_type, model_name] = modelId.split('/');

  const field: MainModelField = {
    base_model: base_model as BaseModelType,
    model_type: model_type as ModelType,
    model_name,
  };

  return field;
};
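The helper above assumed model ids of the form base/type/name, the same format used in the models config diff earlier. A hypothetical example of the round trip (the id string is invented for illustration):

// 'sd-1/main/stable-diffusion-v1-5' ->
//   { base_model: 'sd-1', model_type: 'main', model_name: 'stable-diffusion-v1-5' }
const field = modelIdToMainModelField('sd-1/main/stable-diffusion-v1-5');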
@@ -1,17 +0,0 @@
import { BaseModelType, OnnxModelField, ModelType } from 'services/api/types';

/**
 * Crudely converts a model id to a main model field
 * TODO: Make better
 */
export const modelIdToOnnxModelField = (modelId: string): OnnxModelField => {
  const [base_model, model_type, model_name] = modelId.split('/');

  const field: OnnxModelField = {
    base_model: base_model as BaseModelType,
    model_name,
    model_type: model_type as ModelType,
  };

  return field;
};
@@ -0,0 +1,34 @@
import type { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAISlider from 'common/components/IAISlider';
import { setCodeformerFidelity } from 'features/parameters/store/postprocessingSlice';
import { useTranslation } from 'react-i18next';

export default function CodeformerFidelity() {
  const isGFPGANAvailable = useAppSelector(
    (state: RootState) => state.system.isGFPGANAvailable
  );

  const codeformerFidelity = useAppSelector(
    (state: RootState) => state.postprocessing.codeformerFidelity
  );

  const { t } = useTranslation();
  const dispatch = useAppDispatch();

  return (
    <IAISlider
      isDisabled={!isGFPGANAvailable}
      label={t('parameters.codeformerFidelity')}
      step={0.05}
      min={0}
      max={1}
      onChange={(v) => dispatch(setCodeformerFidelity(v))}
      handleReset={() => dispatch(setCodeformerFidelity(1))}
      value={codeformerFidelity}
      withReset
      withSliderMarks
      withInput
    />
  );
}
@@ -0,0 +1,25 @@
import { VStack } from '@chakra-ui/react';
import { useAppSelector } from 'app/store/storeHooks';
import type { RootState } from 'app/store/store';
import FaceRestoreType from './FaceRestoreType';
import FaceRestoreStrength from './FaceRestoreStrength';
import CodeformerFidelity from './CodeformerFidelity';

/**
 * Displays face-fixing/GFPGAN options (strength).
 */
const FaceRestoreSettings = () => {
  const facetoolType = useAppSelector(
    (state: RootState) => state.postprocessing.facetoolType
  );

  return (
    <VStack gap={2} alignItems="stretch">
      <FaceRestoreType />
      <FaceRestoreStrength />
      {facetoolType === 'codeformer' && <CodeformerFidelity />}
    </VStack>
  );
};

export default FaceRestoreSettings;
@@ -0,0 +1,34 @@
import { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAISlider from 'common/components/IAISlider';
import { setFacetoolStrength } from 'features/parameters/store/postprocessingSlice';
import { useTranslation } from 'react-i18next';

export default function FaceRestoreStrength() {
  const isGFPGANAvailable = useAppSelector(
    (state: RootState) => state.system.isGFPGANAvailable
  );

  const facetoolStrength = useAppSelector(
    (state: RootState) => state.postprocessing.facetoolStrength
  );

  const { t } = useTranslation();
  const dispatch = useAppDispatch();

  return (
    <IAISlider
      isDisabled={!isGFPGANAvailable}
      label={t('parameters.strength')}
      step={0.05}
      min={0}
      max={1}
      onChange={(v) => dispatch(setFacetoolStrength(v))}
      handleReset={() => dispatch(setFacetoolStrength(0.75))}
      value={facetoolStrength}
      withReset
      withSliderMarks
      withInput
    />
  );
}
@@ -0,0 +1,28 @@
import { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAISwitch from 'common/components/IAISwitch';
import { setShouldRunFacetool } from 'features/parameters/store/postprocessingSlice';
import { ChangeEvent } from 'react';

export default function FaceRestoreToggle() {
  const isGFPGANAvailable = useAppSelector(
    (state: RootState) => state.system.isGFPGANAvailable
  );

  const shouldRunFacetool = useAppSelector(
    (state: RootState) => state.postprocessing.shouldRunFacetool
  );

  const dispatch = useAppDispatch();

  const handleChangeShouldRunFacetool = (e: ChangeEvent<HTMLInputElement>) =>
    dispatch(setShouldRunFacetool(e.target.checked));

  return (
    <IAISwitch
      isDisabled={!isGFPGANAvailable}
      isChecked={shouldRunFacetool}
      onChange={handleChangeShouldRunFacetool}
    />
  );
}
@@ -0,0 +1,30 @@
import { FACETOOL_TYPES } from 'app/constants';
import { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIMantineSearchableSelect from 'common/components/IAIMantineSearchableSelect';
import {
  FacetoolType,
  setFacetoolType,
} from 'features/parameters/store/postprocessingSlice';
import { useTranslation } from 'react-i18next';

export default function FaceRestoreType() {
  const facetoolType = useAppSelector(
    (state: RootState) => state.postprocessing.facetoolType
  );

  const dispatch = useAppDispatch();
  const { t } = useTranslation();

  const handleChangeFacetoolType = (v: string) =>
    dispatch(setFacetoolType(v as FacetoolType));

  return (
    <IAIMantineSearchableSelect
      label={t('parameters.type')}
      data={FACETOOL_TYPES.concat()}
      value={facetoolType}
      onChange={handleChangeFacetoolType}
    />
  );
}
@@ -0,0 +1,43 @@
import { Flex } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAICollapse from 'common/components/IAICollapse';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { ParamHiresStrength } from './ParamHiresStrength';
import { ParamHiresToggle } from './ParamHiresToggle';

const selector = createSelector(
  stateSelector,
  (state) => {
    const activeLabel = state.postprocessing.hiresFix ? 'Enabled' : undefined;

    return { activeLabel };
  },
  defaultSelectorOptions
);

const ParamHiresCollapse = () => {
  const { t } = useTranslation();
  const { activeLabel } = useAppSelector(selector);

  const isHiresEnabled = useFeatureStatus('hires').isFeatureEnabled;

  if (!isHiresEnabled) {
    return null;
  }

  return (
    <IAICollapse label={t('parameters.hiresOptim')} activeLabel={activeLabel}>
      <Flex sx={{ gap: 2, flexDirection: 'column' }}>
        <ParamHiresToggle />
        <ParamHiresStrength />
      </Flex>
    </IAICollapse>
  );
};

export default memo(ParamHiresCollapse);
@@ -0,0 +1,3 @@
// TODO

export default {};
@@ -0,0 +1,3 @@
// TODO

export default {};
@@ -0,0 +1,51 @@
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAISlider from 'common/components/IAISlider';
import { postprocessingSelector } from 'features/parameters/store/postprocessingSelectors';
import { setHiresStrength } from 'features/parameters/store/postprocessingSlice';
import { isEqual } from 'lodash-es';
import { useTranslation } from 'react-i18next';

const hiresStrengthSelector = createSelector(
  [postprocessingSelector],
  ({ hiresFix, hiresStrength }) => ({ hiresFix, hiresStrength }),
  {
    memoizeOptions: {
      resultEqualityCheck: isEqual,
    },
  }
);

export const ParamHiresStrength = () => {
  const { hiresFix, hiresStrength } = useAppSelector(hiresStrengthSelector);

  const dispatch = useAppDispatch();

  const { t } = useTranslation();

  const handleHiresStrength = (v: number) => {
    dispatch(setHiresStrength(v));
  };

  const handleHiResStrengthReset = () => {
    dispatch(setHiresStrength(0.75));
  };

  return (
    <IAISlider
      label={t('parameters.hiresStrength')}
      step={0.01}
      min={0.01}
      max={0.99}
      onChange={handleHiresStrength}
      value={hiresStrength}
      isInteger={false}
      withInput
      withSliderMarks
      // inputWidth={22}
      withReset
      handleReset={handleHiResStrengthReset}
      isDisabled={!hiresFix}
    />
  );
};
@@ -0,0 +1,30 @@
import type { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAISwitch from 'common/components/IAISwitch';
import { setHiresFix } from 'features/parameters/store/postprocessingSlice';
import { ChangeEvent } from 'react';
import { useTranslation } from 'react-i18next';

/**
 * Hires Fix Toggle
 */
export const ParamHiresToggle = () => {
  const dispatch = useAppDispatch();

  const hiresFix = useAppSelector(
    (state: RootState) => state.postprocessing.hiresFix
  );

  const { t } = useTranslation();

  const handleChangeHiresFix = (e: ChangeEvent<HTMLInputElement>) =>
    dispatch(setHiresFix(e.target.checked));

  return (
    <IAISwitch
      label={t('parameters.hiresOptim')}
      isChecked={hiresFix}
      onChange={handleChangeHiresFix}
    />
  );
};
@@ -0,0 +1,3 @@
// TODO

export default {};
@@ -12,11 +12,7 @@ import { modelSelected } from 'features/parameters/store/actions';
import { MODEL_TYPE_MAP } from 'features/parameters/types/constants';
import { modelIdToMainModelParam } from 'features/parameters/util/modelIdToMainModelParam';
import { forEach } from 'lodash-es';
import {
  useGetMainModelsQuery,
  useGetOnnxModelsQuery,
} from 'services/api/endpoints/models';
import { modelIdToOnnxModelField } from 'features/nodes/util/modelIdToOnnxModelField';
import { useGetMainModelsQuery } from 'services/api/endpoints/models';

const selector = createSelector(
  stateSelector,
@@ -31,7 +27,6 @@ const ParamMainModelSelect = () => {
  const { model } = useAppSelector(selector);

  const { data: mainModels, isLoading } = useGetMainModelsQuery();
  const { data: onnxModels, isLoading: onnxLoading } = useGetOnnxModelsQuery();

  const data = useMemo(() => {
    if (!mainModels) {
@@ -51,31 +46,17 @@ const ParamMainModelSelect = () => {
        group: MODEL_TYPE_MAP[model.base_model],
      });
    });
    forEach(onnxModels?.entities, (model, id) => {
      if (!model) {
        return;
      }

      data.push({
        value: id,
        label: model.model_name,
        group: MODEL_TYPE_MAP[model.base_model],
      });
    });

    return data;
  }, [mainModels, onnxModels]);
  }, [mainModels]);

  // grab the full model entity from the RTK Query cache
  // TODO: maybe we should just store the full model entity in state?
  const selectedModel = useMemo(
    () =>
      (mainModels?.entities[`${model?.base_model}/main/${model?.model_name}`] ||
      onnxModels?.entities[
        `${model?.base_model}/onnx/${model?.model_name}`
      ]) ??
      mainModels?.entities[`${model?.base_model}/main/${model?.model_name}`] ??
      null,
      [mainModels?.entities, model, onnxModels?.entities]
    [mainModels?.entities, model]
  );

  const handleChangeModel = useCallback(
@@ -84,11 +65,7 @@ const ParamMainModelSelect = () => {
      return;
    }

    let newModel = modelIdToMainModelParam(v);

    if (v.includes('onnx')) {
      newModel = modelIdToOnnxModelField(v);
    }
    const newModel = modelIdToMainModelParam(v);

    if (!newModel) {
      return;
@@ -99,7 +76,7 @@ const ParamMainModelSelect = () => {
    [dispatch]
  );

  return isLoading || onnxLoading ? (
  return isLoading ? (
    <IAIMantineSearchableSelect
      label={t('modelManager.model')}
      placeholder="Loading..."
@@ -1,58 +0,0 @@
import { SelectItem } from '@mantine/core';
import type { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIMantineSelect from 'common/components/IAIMantineSelect';
import IAIMantineSelectItemWithTooltip from 'common/components/IAIMantineSelectItemWithTooltip';
import {
  ESRGANModelName,
  esrganModelNameChanged,
} from 'features/parameters/store/postprocessingSlice';
import { useTranslation } from 'react-i18next';

export const ESRGAN_MODEL_NAMES: SelectItem[] = [
  {
    label: 'RealESRGAN x2 Plus',
    value: 'RealESRGAN_x2plus.pth',
    tooltip: 'Attempts to retain sharpness, low smoothing',
    group: 'x2 Upscalers',
  },
  {
    label: 'RealESRGAN x4 Plus',
    value: 'RealESRGAN_x4plus.pth',
    tooltip: 'Best for photos and highly detailed images, medium smoothing',
    group: 'x4 Upscalers',
  },
  {
    label: 'RealESRGAN x4 Plus (anime 6B)',
    value: 'RealESRGAN_x4plus_anime_6B.pth',
    tooltip: 'Best for anime/manga, high smoothing',
    group: 'x4 Upscalers',
  },
  {
    label: 'ESRGAN SRx4',
    value: 'ESRGAN_SRx4_DF2KOST_official-ff704c30.pth',
    tooltip: 'Retains sharpness, low smoothing',
    group: 'x4 Upscalers',
  },
];

export default function ParamESRGANModel() {
  const esrganModelName = useAppSelector(
    (state: RootState) => state.postprocessing.esrganModelName
  );

  const dispatch = useAppDispatch();

  const handleChange = (v: string) =>
    dispatch(esrganModelNameChanged(v as ESRGANModelName));

  return (
    <IAIMantineSelect
      label="ESRGAN Model"
      value={esrganModelName}
      itemComponent={IAIMantineSelectItemWithTooltip}
      onChange={handleChange}
      data={ESRGAN_MODEL_NAMES}
    />
  );
}
@@ -1,62 +0,0 @@
import { Flex, useDisclosure } from '@chakra-ui/react';
import { upscaleRequested } from 'app/store/middleware/listenerMiddleware/listeners/upscaleRequested';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIButton from 'common/components/IAIButton';
import IAIIconButton from 'common/components/IAIIconButton';
import IAIPopover from 'common/components/IAIPopover';
import { selectIsBusy } from 'features/system/store/systemSelectors';
import { useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { FaExpandArrowsAlt } from 'react-icons/fa';
import { ImageDTO } from 'services/api/types';
import ParamESRGANModel from './ParamRealESRGANModel';

type Props = { imageDTO?: ImageDTO };

const ParamUpscalePopover = (props: Props) => {
  const { imageDTO } = props;
  const dispatch = useAppDispatch();
  const isBusy = useAppSelector(selectIsBusy);
  const { t } = useTranslation();
  const { isOpen, onOpen, onClose } = useDisclosure();

  const handleClickUpscale = useCallback(() => {
    onClose();
    if (!imageDTO) {
      return;
    }
    dispatch(upscaleRequested({ image_name: imageDTO.image_name }));
  }, [dispatch, imageDTO, onClose]);

  return (
    <IAIPopover
      isOpen={isOpen}
      onClose={onClose}
      triggerComponent={
        <IAIIconButton
          onClick={onOpen}
          icon={<FaExpandArrowsAlt />}
          aria-label={t('parameters.upscale')}
        />
      }
    >
      <Flex
        sx={{
          flexDirection: 'column',
          gap: 4,
        }}
      >
        <ParamESRGANModel />
        <IAIButton
          size="sm"
          isDisabled={!imageDTO || isBusy}
          onClick={handleClickUpscale}
        >
          {t('parameters.upscaleImage')}
        </IAIButton>
      </Flex>
    </IAIPopover>
  );
};

export default ParamUpscalePopover;
@@ -0,0 +1,36 @@
import { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAISlider from 'common/components/IAISlider';
import { setUpscalingDenoising } from 'features/parameters/store/postprocessingSlice';
import { useTranslation } from 'react-i18next';

export default function UpscaleDenoisingStrength() {
  const isESRGANAvailable = useAppSelector(
    (state: RootState) => state.system.isESRGANAvailable
  );

  const upscalingDenoising = useAppSelector(
    (state: RootState) => state.postprocessing.upscalingDenoising
  );

  const { t } = useTranslation();
  const dispatch = useAppDispatch();

  return (
    <IAISlider
      label={t('parameters.denoisingStrength')}
      value={upscalingDenoising}
      min={0}
      max={1}
      step={0.01}
      onChange={(v) => {
        dispatch(setUpscalingDenoising(v));
      }}
      handleReset={() => dispatch(setUpscalingDenoising(0.75))}
      withSliderMarks
      withInput
      withReset
      isDisabled={!isESRGANAvailable}
    />
  );
}
@@ -0,0 +1,35 @@
import { UPSCALING_LEVELS } from 'app/constants';
import type { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIMantineSearchableSelect from 'common/components/IAIMantineSearchableSelect';
import {
  UpscalingLevel,
  setUpscalingLevel,
} from 'features/parameters/store/postprocessingSlice';
import { useTranslation } from 'react-i18next';

export default function UpscaleScale() {
  const isESRGANAvailable = useAppSelector(
    (state: RootState) => state.system.isESRGANAvailable
  );

  const upscalingLevel = useAppSelector(
    (state: RootState) => state.postprocessing.upscalingLevel
  );

  const { t } = useTranslation();
  const dispatch = useAppDispatch();

  const handleChangeLevel = (v: string) =>
    dispatch(setUpscalingLevel(Number(v) as UpscalingLevel));

  return (
    <IAIMantineSearchableSelect
      disabled={!isESRGANAvailable}
      label={t('parameters.scale')}
      value={String(upscalingLevel)}
      onChange={handleChangeLevel}
      data={UPSCALING_LEVELS}
    />
  );
}
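`UpscaleScale` feeds `UPSCALING_LEVELS` straight into the searchable select and round-trips values through `String()`/`Number()`, so the constant is presumably a list of string-valued select items. A plausible shape, not verified against `app/constants`:

```ts
// Sketch only — assumed shape of UPSCALING_LEVELS; string values because the
// component converts with String(upscalingLevel) and Number(v) above.
export const UPSCALING_LEVELS: { value: string; label: string }[] = [
  { value: '2', label: '2x' },
  { value: '4', label: '4x' },
];
```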
@@ -0,0 +1,19 @@
import { VStack } from '@chakra-ui/react';
import UpscaleDenoisingStrength from './UpscaleDenoisingStrength';
import UpscaleStrength from './UpscaleStrength';
import UpscaleScale from './UpscaleScale';

/**
 * Displays upscaling/ESRGAN options (level and strength).
 */
const UpscaleSettings = () => {
  return (
    <VStack gap={2} alignItems="stretch">
      <UpscaleScale />
      <UpscaleDenoisingStrength />
      <UpscaleStrength />
    </VStack>
  );
};

export default UpscaleSettings;
@@ -0,0 +1,33 @@
import type { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAISlider from 'common/components/IAISlider';
import { setUpscalingStrength } from 'features/parameters/store/postprocessingSlice';
import { useTranslation } from 'react-i18next';

export default function UpscaleStrength() {
  const isESRGANAvailable = useAppSelector(
    (state: RootState) => state.system.isESRGANAvailable
  );
  const upscalingStrength = useAppSelector(
    (state: RootState) => state.postprocessing.upscalingStrength
  );

  const { t } = useTranslation();
  const dispatch = useAppDispatch();

  return (
    <IAISlider
      label={`${t('parameters.upscale')} ${t('parameters.strength')}`}
      value={upscalingStrength}
      min={0}
      max={1}
      step={0.05}
      onChange={(v) => dispatch(setUpscalingStrength(v))}
      handleReset={() => dispatch(setUpscalingStrength(0.75))}
      withSliderMarks
      withInput
      withReset
      isDisabled={!isESRGANAvailable}
    />
  );
}
@@ -0,0 +1,26 @@
import { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAISwitch from 'common/components/IAISwitch';
import { setShouldRunESRGAN } from 'features/parameters/store/postprocessingSlice';
import { ChangeEvent } from 'react';

export default function UpscaleToggle() {
  const isESRGANAvailable = useAppSelector(
    (state: RootState) => state.system.isESRGANAvailable
  );

  const shouldRunESRGAN = useAppSelector(
    (state: RootState) => state.postprocessing.shouldRunESRGAN
  );

  const dispatch = useAppDispatch();
  const handleChangeShouldRunESRGAN = (e: ChangeEvent<HTMLInputElement>) =>
    dispatch(setShouldRunESRGAN(e.target.checked));
  return (
    <IAISwitch
      isDisabled={!isESRGANAvailable}
      isChecked={shouldRunESRGAN}
      onChange={handleChangeShouldRunESRGAN}
    />
  );
}
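Taken together, the restored files re-create the pre-refactor upscale controls. A hypothetical parent that wires them up (this component is not part of the changeset):

```tsx
// Sketch only — illustrative composition, not a file in this diff.
import UpscaleToggle from './UpscaleToggle';
import UpscaleSettings from './UpscaleSettings';

const UpscalePanel = () => (
  <>
    <UpscaleToggle />   {/* flips postprocessing.shouldRunESRGAN */}
    <UpscaleSettings /> {/* scale, denoising and strength controls */}
  </>
);

export default UpscalePanel;
```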
@@ -1,10 +1,10 @@
import { createAction } from '@reduxjs/toolkit';
import { ImageDTO, MainModelField, OnnxModelField } from 'services/api/types';
import { ImageDTO, MainModelField } from 'services/api/types';

export const initialImageSelected = createAction<ImageDTO | string | undefined>(
  'generation/initialImageSelected'
);

export const modelSelected = createAction<MainModelField | OnnxModelField>(
export const modelSelected = createAction<MainModelField>(
  'generation/modelSelected'
);
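With `OnnxModelField` gone, `modelSelected` accepts only the main-model payload. A minimal dispatch sketch, assuming `MainModelField` mirrors the two-field `zMainModel` schema later in this diff (the model name is hypothetical):

```ts
// Sketch only — payload shape assumed from the narrowed zMainModel schema;
// `dispatch` and `modelSelected` come from the surrounding app code.
dispatch(
  modelSelected({
    model_name: 'stable-diffusion-v1-5', // hypothetical
    base_model: 'sd-1',
  })
);
```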
@@ -8,7 +8,7 @@ import {
  setShouldShowAdvancedOptions,
} from 'features/ui/store/uiSlice';
import { clamp } from 'lodash-es';
import { ImageDTO, MainModelField, OnnxModelField } from 'services/api/types';
import { ImageDTO, MainModelField } from 'services/api/types';
import { clipSkipMap } from '../components/Parameters/Advanced/ParamClipSkip';
import {
  CfgScaleParam,
@@ -54,7 +54,7 @@ export interface GenerationState {
  shouldUseSymmetry: boolean;
  horizontalSymmetrySteps: number;
  verticalSymmetrySteps: number;
  model: MainModelField | OnnxModelField | null;
  model: MainModelField | null;
  vae: VaeModelParam | null;
  seamlessXAxis: boolean;
  seamlessYAxis: boolean;
@@ -227,10 +227,7 @@ export const generationSlice = createSlice({
      const { image_name, width, height } = action.payload;
      state.initialImage = { imageName: image_name, width, height };
    },
    modelChanged: (
      state,
      action: PayloadAction<MainModelParam | OnnxModelField | null>
    ) => {
    modelChanged: (state, action: PayloadAction<MainModelParam | null>) => {
      state.model = action.payload;

      if (state.model === null) {
@@ -262,7 +259,6 @@ export const generationSlice = createSlice({
      const result = zMainModel.safeParse({
        model_name,
        base_model,
        model_type,
      });

      if (result.success) {

@@ -1,27 +1,98 @@
import type { PayloadAction } from '@reduxjs/toolkit';
import { createSlice } from '@reduxjs/toolkit';
import { ESRGANInvocation } from 'services/api/types';
import { FACETOOL_TYPES } from 'app/constants';

export type ESRGANModelName = NonNullable<ESRGANInvocation['model_name']>;
export type UpscalingLevel = 2 | 4;

export type FacetoolType = (typeof FACETOOL_TYPES)[number];

export interface PostprocessingState {
  esrganModelName: ESRGANModelName;
  codeformerFidelity: number;
  facetoolStrength: number;
  facetoolType: FacetoolType;
  hiresFix: boolean;
  hiresStrength: number;
  shouldLoopback: boolean;
  shouldRunESRGAN: boolean;
  shouldRunFacetool: boolean;
  upscalingLevel: UpscalingLevel;
  upscalingDenoising: number;
  upscalingStrength: number;
}

export const initialPostprocessingState: PostprocessingState = {
  esrganModelName: 'RealESRGAN_x4plus.pth',
  codeformerFidelity: 0.75,
  facetoolStrength: 0.75,
  facetoolType: 'gfpgan',
  hiresFix: false,
  hiresStrength: 0.75,
  shouldLoopback: false,
  shouldRunESRGAN: false,
  shouldRunFacetool: false,
  upscalingLevel: 4,
  upscalingDenoising: 0.75,
  upscalingStrength: 0.75,
};

export const postprocessingSlice = createSlice({
  name: 'postprocessing',
  initialState: initialPostprocessingState,
  reducers: {
    esrganModelNameChanged: (state, action: PayloadAction<ESRGANModelName>) => {
      state.esrganModelName = action.payload;
    setFacetoolStrength: (state, action: PayloadAction<number>) => {
      state.facetoolStrength = action.payload;
    },
    setCodeformerFidelity: (state, action: PayloadAction<number>) => {
      state.codeformerFidelity = action.payload;
    },
    setUpscalingLevel: (state, action: PayloadAction<UpscalingLevel>) => {
      state.upscalingLevel = action.payload;
    },
    setUpscalingDenoising: (state, action: PayloadAction<number>) => {
      state.upscalingDenoising = action.payload;
    },
    setUpscalingStrength: (state, action: PayloadAction<number>) => {
      state.upscalingStrength = action.payload;
    },
    setHiresFix: (state, action: PayloadAction<boolean>) => {
      state.hiresFix = action.payload;
    },
    setHiresStrength: (state, action: PayloadAction<number>) => {
      state.hiresStrength = action.payload;
    },
    resetPostprocessingState: (state) => {
      return {
        ...state,
        ...initialPostprocessingState,
      };
    },
    setShouldRunFacetool: (state, action: PayloadAction<boolean>) => {
      state.shouldRunFacetool = action.payload;
    },
    setFacetoolType: (state, action: PayloadAction<FacetoolType>) => {
      state.facetoolType = action.payload;
    },
    setShouldRunESRGAN: (state, action: PayloadAction<boolean>) => {
      state.shouldRunESRGAN = action.payload;
    },
    setShouldLoopback: (state, action: PayloadAction<boolean>) => {
      state.shouldLoopback = action.payload;
    },
  },
});

export const { esrganModelNameChanged } = postprocessingSlice.actions;
export const {
  resetPostprocessingState,
  setCodeformerFidelity,
  setFacetoolStrength,
  setFacetoolType,
  setHiresFix,
  setHiresStrength,
  setShouldLoopback,
  setShouldRunESRGAN,
  setShouldRunFacetool,
  setUpscalingLevel,
  setUpscalingDenoising,
  setUpscalingStrength,
} = postprocessingSlice.actions;

export default postprocessingSlice.reducer;

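As a quick orientation to the restored slice, a minimal standalone sketch that exercises a few of its reducers (the single-slice store below is for illustration; the app wires this reducer into its combined root state):

```ts
// Sketch only — exercising the restored reducers outside the app.
import { configureStore } from '@reduxjs/toolkit';
import postprocessingReducer, {
  setShouldRunESRGAN,
  setUpscalingLevel,
  resetPostprocessingState,
} from 'features/parameters/store/postprocessingSlice';

const store = configureStore({
  reducer: { postprocessing: postprocessingReducer },
});

store.dispatch(setShouldRunESRGAN(true));
store.dispatch(setUpscalingLevel(2));
console.log(store.getState().postprocessing.upscalingLevel); // 2
store.dispatch(resetPostprocessingState()); // back to initial values
```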
@@ -126,14 +126,6 @@ export type HeightParam = z.infer<typeof zHeight>;
export const isValidHeight = (val: unknown): val is HeightParam =>
  zHeight.safeParse(val).success;

const zModelType = z.enum([
  'vae',
  'lora',
  'onnx',
  'main',
  'controlnet',
  'embedding',
]);
const zBaseModel = z.enum(['sd-1', 'sd-2', 'sdxl', 'sdxl-refiner']);

export type BaseModelParam = z.infer<typeof zBaseModel>;
@@ -145,7 +137,6 @@ export type BaseModelParam = z.infer<typeof zBaseModel>;
export const zMainModel = z.object({
  model_name: z.string().min(1),
  base_model: zBaseModel,
  model_type: zModelType,
});

/**

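With `model_type` dropped, `zMainModel.safeParse` validates a two-field object; unknown keys are stripped by zod's default object behavior rather than rejected. A small sketch of the expected behavior (values hypothetical):

```ts
// Sketch only — exercising the narrowed schema defined above.
const ok = zMainModel.safeParse({
  model_name: 'stable-diffusion-v1-5', // hypothetical
  base_model: 'sd-1',
});
// ok.success === true; a leftover model_type key would simply be stripped

const bad = zMainModel.safeParse({ model_name: '', base_model: 'sd-3' });
// bad.success === false — '' fails min(1), 'sd-3' is not in zBaseModel
```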
@@ -14,7 +14,6 @@ export const modelIdToMainModelParam = (
  const result = zMainModel.safeParse({
    base_model,
    model_name,
    model_type,
  });

  if (!result.success) {

@@ -5,11 +5,7 @@ import { useRef } from 'react';
import { useHoverDirty } from 'react-use';
import { useGetAppVersionQuery } from 'services/api/endpoints/appInfo';

interface Props {
  showVersion?: boolean;
}

const InvokeAILogoComponent = ({ showVersion = true }: Props) => {
const InvokeAILogoComponent = () => {
  const { data: appVersion } = useGetAppVersionQuery();
  const ref = useRef(null);
  const isHovered = useHoverDirty(ref);
@@ -32,7 +28,7 @@ const InvokeAILogoComponent = ({ showVersion = true }: Props) => {
        invoke <strong>ai</strong>
      </Text>
      <AnimatePresence>
        {showVersion && isHovered && appVersion && (
        {isHovered && appVersion && (
          <motion.div
            key="statusText"
            initial={{

@@ -1,119 +0,0 @@
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';

import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIMantineSelect from 'common/components/IAIMantineSelect';

import { SelectItem } from '@mantine/core';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { modelIdToMainModelField } from 'features/nodes/util/modelIdToMainModelField';
import { modelSelected } from 'features/parameters/store/actions';
import { forEach } from 'lodash-es';
import {
  useGetMainModelsQuery,
  useGetOnnxModelsQuery,
} from 'services/api/endpoints/models';
import { modelIdToOnnxModelField } from 'features/nodes/util/modelIdToOnnxModelField';

export const MODEL_TYPE_MAP = {
  'sd-1': 'Stable Diffusion 1.x',
  'sd-2': 'Stable Diffusion 2.x',
};

const selector = createSelector(
  stateSelector,
  (state) => ({ currentModel: state.generation.model }),
  defaultSelectorOptions
);

const ModelSelect = () => {
  const dispatch = useAppDispatch();
  const { t } = useTranslation();

  const { currentModel } = useAppSelector(selector);

  const { data: mainModels, isLoading } = useGetMainModelsQuery();
  const { data: onnxModels, isLoading: onnxLoading } = useGetOnnxModelsQuery();

  const data = useMemo(() => {
    if (!mainModels) {
      return [];
    }

    const data: SelectItem[] = [];

    forEach(mainModels.entities, (model, id) => {
      if (!model) {
        return;
      }

      data.push({
        value: id,
        label: model.model_name,
        group: MODEL_TYPE_MAP[model.base_model],
      });
    });
    forEach(onnxModels?.entities, (model, id) => {
      if (!model) {
        return;
      }

      data.push({
        value: id,
        label: model.model_name,
        group: MODEL_TYPE_MAP[model.base_model],
      });
    });

    return data;
  }, [mainModels, onnxModels]);

  const selectedModel = useMemo(
    () =>
      mainModels?.entities[
        `${currentModel?.base_model}/main/${currentModel?.model_name}`
      ] ||
      onnxModels?.entities[
        `${currentModel?.base_model}/onnx/${currentModel?.model_name}`
      ],
    [mainModels?.entities, onnxModels?.entities, currentModel]
  );

  const handleChangeModel = useCallback(
    (v: string | null) => {
      if (!v) {
        return;
      }
      let modelField = modelIdToMainModelField(v);
      if (v.includes('onnx')) {
        modelField = modelIdToOnnxModelField(v);
      }
      dispatch(modelSelected(modelField));
    },
    [dispatch]
  );

  return isLoading || onnxLoading ? (
    <IAIMantineSelect
      label={t('modelManager.model')}
      placeholder="Loading..."
      disabled={true}
      data={[]}
    />
  ) : (
    <IAIMantineSelect
      tooltip={selectedModel?.description}
      label={t('modelManager.model')}
      value={selectedModel?.id}
      placeholder={data.length > 0 ? 'Select a model' : 'No models available'}
      data={data}
      error={data.length === 0}
      disabled={data.length === 0}
      onChange={handleChangeModel}
    />
  );
};

export default memo(ModelSelect);
@@ -1,56 +0,0 @@
import { createEntityAdapter, createSlice } from '@reduxjs/toolkit';
import { RootState } from 'app/store/store';
import {
  StableDiffusion1ModelCheckpointConfig,
  StableDiffusion1ModelDiffusersConfig,
} from 'services/api';

import { receivedModels } from 'services/thunks/model';

export type SD1PipelineModel = (
  | StableDiffusion1ModelCheckpointConfig
  | StableDiffusion1ModelDiffusersConfig
) & {
  name: string;
};

export const sd1PipelineModelsAdapter = createEntityAdapter<SD1PipelineModel>({
  selectId: (model) => model.name,
  sortComparer: (a, b) => a.name.localeCompare(b.name),
});

export const sd1InitialPipelineModelsState =
  sd1PipelineModelsAdapter.getInitialState();

export type SD1PipelineModelState = typeof sd1InitialPipelineModelsState;

export const sd1PipelineModelsSlice = createSlice({
  name: 'sd1PipelineModels',
  initialState: sd1InitialPipelineModelsState,
  reducers: {
    modelAdded: sd1PipelineModelsAdapter.upsertOne,
  },
  extraReducers(builder) {
    /**
     * Received Models - FULFILLED
     */
    builder.addCase(receivedModels.fulfilled, (state, action) => {
      if (action.meta.arg.baseModel !== 'sd-1') return;
      sd1PipelineModelsAdapter.setAll(state, action.payload);
    });
  },
});

export const {
  selectAll: selectAllSD1PipelineModels,
  selectById: selectByIdSD1PipelineModels,
  selectEntities: selectEntitiesSD1PipelineModels,
  selectIds: selectIdsSD1PipelineModels,
  selectTotal: selectTotalSD1PipelineModels,
} = sd1PipelineModelsAdapter.getSelectors<RootState>(
  (state) => state.sd1pipelinemodels
);

export const { modelAdded } = sd1PipelineModelsSlice.actions;

export default sd1PipelineModelsSlice.reducer;
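The deleted slices follow RTK's entity-adapter pattern; for readers unfamiliar with it, a generic standalone sketch of what the adapter and `getSelectors` provide (names are illustrative, not from this repo):

```ts
// Sketch only — a generic entity adapter, independent of the deleted slices.
import {
  configureStore,
  createEntityAdapter,
  createSlice,
} from '@reduxjs/toolkit';

type Item = { name: string };

const adapter = createEntityAdapter<Item>({ selectId: (m) => m.name });

const slice = createSlice({
  name: 'items',
  initialState: adapter.getInitialState(),
  reducers: { itemAdded: adapter.upsertOne },
});

const store = configureStore({ reducer: { items: slice.reducer } });
store.dispatch(slice.actions.itemAdded({ name: 'a' }));

const { selectAll } = adapter.getSelectors(
  (s: ReturnType<typeof store.getState>) => s.items
);
console.log(selectAll(store.getState())); // [{ name: 'a' }]
```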
@@ -1,56 +0,0 @@
import { createEntityAdapter, createSlice } from '@reduxjs/toolkit';
import { RootState } from 'app/store/store';
import {
  StableDiffusion2ModelCheckpointConfig,
  StableDiffusion2ModelDiffusersConfig,
} from 'services/api';

import { receivedModels } from 'services/thunks/model';

export type SD2PipelineModel = (
  | StableDiffusion2ModelCheckpointConfig
  | StableDiffusion2ModelDiffusersConfig
) & {
  name: string;
};

export const sd2PipelineModelsAdapater = createEntityAdapter<SD2PipelineModel>({
  selectId: (model) => model.name,
  sortComparer: (a, b) => a.name.localeCompare(b.name),
});

export const sd2InitialPipelineModelsState =
  sd2PipelineModelsAdapater.getInitialState();

export type SD2PipelineModelState = typeof sd2InitialPipelineModelsState;

export const sd2PipelineModelsSlice = createSlice({
  name: 'sd2PipelineModels',
  initialState: sd2InitialPipelineModelsState,
  reducers: {
    modelAdded: sd2PipelineModelsAdapater.upsertOne,
  },
  extraReducers(builder) {
    /**
     * Received Models - FULFILLED
     */
    builder.addCase(receivedModels.fulfilled, (state, action) => {
      if (action.meta.arg.baseModel !== 'sd-2') return;
      sd2PipelineModelsAdapater.setAll(state, action.payload);
    });
  },
});

export const {
  selectAll: selectAllSD2PipelineModels,
  selectById: selectByIdSD2PipelineModels,
  selectEntities: selectEntitiesSD2PipelineModels,
  selectIds: selectIdsSD2PipelineModels,
  selectTotal: selectTotalSD2PipelineModels,
} = sd2PipelineModelsAdapater.getSelectors<RootState>(
  (state) => state.sd2pipelinemodels
);

export const { modelAdded } = sd2PipelineModelsSlice.actions;

export default sd2PipelineModelsSlice.reducer;
@@ -85,7 +85,7 @@ export default function SimpleAddModels() {
    <Flex flexDirection="column" width="100%" gap={4}>
      <IAIMantineTextInput
        label="Model Location"
        placeholder="Provide a path to a local Diffusers model, local checkpoint / safetensors model a HuggingFace Repo ID, or a checkpoint/diffusers model URL."
        placeholder="Provide a path to a local Diffusers model, local checkpoint / safetensors model or a HuggingFace Repo ID"
        w="100%"
        {...addModelForm.getInputProps('location')}
      />

@@ -17,7 +17,7 @@ type ModelListProps = {
  setSelectedModelId: (name: string | undefined) => void;
};

type ModelFormat = 'all' | 'checkpoint' | 'diffusers' | 'olive';
type ModelFormat = 'all' | 'checkpoint' | 'diffusers';

const ModelList = (props: ModelListProps) => {
  const { selectedModelId, setSelectedModelId } = props;

@@ -4,6 +4,7 @@ import ParamAdvancedCollapse from 'features/parameters/components/Parameters/Adv
import ParamControlNetCollapse from 'features/parameters/components/Parameters/ControlNet/ParamControlNetCollapse';
import ParamNegativeConditioning from 'features/parameters/components/Parameters/Core/ParamNegativeConditioning';
import ParamPositiveConditioning from 'features/parameters/components/Parameters/Core/ParamPositiveConditioning';
import ParamHiresCollapse from 'features/parameters/components/Parameters/Hires/ParamHiresCollapse';
import ParamNoiseCollapse from 'features/parameters/components/Parameters/Noise/ParamNoiseCollapse';
import ParamSeamlessCollapse from 'features/parameters/components/Parameters/Seamless/ParamSeamlessCollapse';
import ParamSymmetryCollapse from 'features/parameters/components/Parameters/Symmetry/ParamSymmetryCollapse';
@@ -25,6 +26,7 @@ const TextToImageTabParameters = () => {
      <ParamVariationCollapse />
      <ParamNoiseCollapse />
      <ParamSymmetryCollapse />
      <ParamHiresCollapse />
      <ParamSeamlessCollapse />
      <ParamAdvancedCollapse />
    </>

@@ -10,7 +10,6 @@ import {
  ImportModelConfig,
  LoRAModelConfig,
  MainModelConfig,
  OnnxModelConfig,
  MergeModelConfig,
  TextualInversionModelConfig,
  VaeModelConfig,
@@ -28,8 +27,6 @@ export type MainModelConfigEntity =
  | DiffusersModelConfigEntity
  | CheckpointModelConfigEntity;

export type OnnxModelConfigEntity = OnnxModelConfig & { id: string };

export type LoRAModelConfigEntity = LoRAModelConfig & { id: string };

export type ControlNetModelConfigEntity = ControlNetModelConfig & {
@@ -44,7 +41,6 @@ export type VaeModelConfigEntity = VaeModelConfig & { id: string };

type AnyModelConfigEntity =
  | MainModelConfigEntity
  | OnnxModelConfigEntity
  | LoRAModelConfigEntity
  | ControlNetModelConfigEntity
  | TextualInversionModelConfigEntity
@@ -108,10 +104,6 @@ type SearchFolderArg = operations['search_for_models']['parameters']['query'];
const mainModelsAdapter = createEntityAdapter<MainModelConfigEntity>({
  sortComparer: (a, b) => a.model_name.localeCompare(b.model_name),
});

const onnxModelsAdapter = createEntityAdapter<OnnxModelConfigEntity>({
  sortComparer: (a, b) => a.model_name.localeCompare(b.model_name),
});
const loraModelsAdapter = createEntityAdapter<LoRAModelConfigEntity>({
  sortComparer: (a, b) => a.model_name.localeCompare(b.model_name),
});
@@ -149,38 +141,6 @@ const createModelEntities = <T extends AnyModelConfigEntity>(

export const modelsApi = api.injectEndpoints({
  endpoints: (build) => ({
    getOnnxModels: build.query<EntityState<OnnxModelConfigEntity>, void>({
      query: () => ({ url: 'models/', params: { model_type: 'onnx' } }),
      providesTags: (result, error, arg) => {
        const tags: ApiFullTagDescription[] = [
          { id: 'OnnxModel', type: LIST_TAG },
        ];

        if (result) {
          tags.push(
            ...result.ids.map((id) => ({
              type: 'OnnxModel' as const,
              id,
            }))
          );
        }

        return tags;
      },
      transformResponse: (
        response: { models: OnnxModelConfig[] },
        meta,
        arg
      ) => {
        const entities = createModelEntities<OnnxModelConfigEntity>(
          response.models
        );
        return onnxModelsAdapter.setAll(
          onnxModelsAdapter.getInitialState(),
          entities
        );
      },
    }),
    getMainModels: build.query<EntityState<MainModelConfigEntity>, void>({
      query: () => ({ url: 'models/', params: { model_type: 'main' } }),
      providesTags: (result, error, arg) => {
@@ -453,7 +413,6 @@ export const modelsApi = api.injectEndpoints({

export const {
  useGetMainModelsQuery,
  useGetOnnxModelsQuery,
  useGetControlNetModelsQuery,
  useGetLoRAModelsQuery,
  useGetTextualInversionModelsQuery,

@@ -8,132 +8,6 @@ import {
} from '@reduxjs/toolkit/query/react';
import { $authToken, $baseUrl } from 'services/api/client';

export type { AddInvocation } from './models/AddInvocation';
export type { BoardChanges } from './models/BoardChanges';
export type { BoardDTO } from './models/BoardDTO';
export type { Body_create_board_image } from './models/Body_create_board_image';
export type { Body_remove_board_image } from './models/Body_remove_board_image';
export type { Body_upload_image } from './models/Body_upload_image';
export type { CannyImageProcessorInvocation } from './models/CannyImageProcessorInvocation';
export type { CkptModelInfo } from './models/CkptModelInfo';
export type { ClipField } from './models/ClipField';
export type { CollectInvocation } from './models/CollectInvocation';
export type { CollectInvocationOutput } from './models/CollectInvocationOutput';
export type { ColorField } from './models/ColorField';
export type { CompelInvocation } from './models/CompelInvocation';
export type { CompelOutput } from './models/CompelOutput';
export type { ConditioningField } from './models/ConditioningField';
export type { ContentShuffleImageProcessorInvocation } from './models/ContentShuffleImageProcessorInvocation';
export type { ControlField } from './models/ControlField';
export type { ControlNetInvocation } from './models/ControlNetInvocation';
export type { ControlNetModelConfig } from './models/ControlNetModelConfig';
export type { ControlOutput } from './models/ControlOutput';
export type { CreateModelRequest } from './models/CreateModelRequest';
export type { CvInpaintInvocation } from './models/CvInpaintInvocation';
export type { DiffusersModelInfo } from './models/DiffusersModelInfo';
export type { DivideInvocation } from './models/DivideInvocation';
export type { DynamicPromptInvocation } from './models/DynamicPromptInvocation';
export type { Edge } from './models/Edge';
export type { EdgeConnection } from './models/EdgeConnection';
export type { FloatCollectionOutput } from './models/FloatCollectionOutput';
export type { FloatLinearRangeInvocation } from './models/FloatLinearRangeInvocation';
export type { FloatOutput } from './models/FloatOutput';
export type { Graph } from './models/Graph';
export type { GraphExecutionState } from './models/GraphExecutionState';
export type { GraphInvocation } from './models/GraphInvocation';
export type { GraphInvocationOutput } from './models/GraphInvocationOutput';
export type { HTTPValidationError } from './models/HTTPValidationError';
export type { ImageBlurInvocation } from './models/ImageBlurInvocation';
export type { ImageChannelInvocation } from './models/ImageChannelInvocation';
export type { ImageConvertInvocation } from './models/ImageConvertInvocation';
export type { ImageCropInvocation } from './models/ImageCropInvocation';
export type { ImageDTO } from './models/ImageDTO';
export type { ImageField } from './models/ImageField';
export type { ImageInverseLerpInvocation } from './models/ImageInverseLerpInvocation';
export type { ImageLerpInvocation } from './models/ImageLerpInvocation';
export type { ImageMetadata } from './models/ImageMetadata';
export type { ImageMultiplyInvocation } from './models/ImageMultiplyInvocation';
export type { ImageOutput } from './models/ImageOutput';
export type { ImagePasteInvocation } from './models/ImagePasteInvocation';
export type { ImageProcessorInvocation } from './models/ImageProcessorInvocation';
export type { ImageRecordChanges } from './models/ImageRecordChanges';
export type { ImageResizeInvocation } from './models/ImageResizeInvocation';
export type { ImageScaleInvocation } from './models/ImageScaleInvocation';
export type { ImageToLatentsInvocation } from './models/ImageToLatentsInvocation';
export type { ImageUrlsDTO } from './models/ImageUrlsDTO';
export type { InfillColorInvocation } from './models/InfillColorInvocation';
export type { InfillPatchMatchInvocation } from './models/InfillPatchMatchInvocation';
export type { InfillTileInvocation } from './models/InfillTileInvocation';
export type { InpaintInvocation } from './models/InpaintInvocation';
export type { IntCollectionOutput } from './models/IntCollectionOutput';
export type { IntOutput } from './models/IntOutput';
export type { IterateInvocation } from './models/IterateInvocation';
export type { IterateInvocationOutput } from './models/IterateInvocationOutput';
export type { LatentsField } from './models/LatentsField';
export type { LatentsOutput } from './models/LatentsOutput';
export type { LatentsToImageInvocation } from './models/LatentsToImageInvocation';
export type { LatentsToLatentsInvocation } from './models/LatentsToLatentsInvocation';
export type { LineartAnimeImageProcessorInvocation } from './models/LineartAnimeImageProcessorInvocation';
export type { LineartImageProcessorInvocation } from './models/LineartImageProcessorInvocation';
export type { LoadImageInvocation } from './models/LoadImageInvocation';
export type { LoraInfo } from './models/LoraInfo';
export type { LoraLoaderInvocation } from './models/LoraLoaderInvocation';
export type { LoraLoaderOutput } from './models/LoraLoaderOutput';
export type { MaskFromAlphaInvocation } from './models/MaskFromAlphaInvocation';
export type { MaskOutput } from './models/MaskOutput';
export type { MediapipeFaceProcessorInvocation } from './models/MediapipeFaceProcessorInvocation';
export type { MidasDepthImageProcessorInvocation } from './models/MidasDepthImageProcessorInvocation';
export type { MlsdImageProcessorInvocation } from './models/MlsdImageProcessorInvocation';
export type { ModelInfo } from './models/ModelInfo';
export type { ModelLoaderOutput } from './models/ModelLoaderOutput';
export type { ModelsList } from './models/ModelsList';
export type { ModelType } from './models/ModelType';
export type { MultiplyInvocation } from './models/MultiplyInvocation';
export type { NoiseInvocation } from './models/NoiseInvocation';
export type { NoiseOutput } from './models/NoiseOutput';
export type { NormalbaeImageProcessorInvocation } from './models/NormalbaeImageProcessorInvocation';
export type { OffsetPaginatedResults_BoardDTO_ } from './models/OffsetPaginatedResults_BoardDTO_';
export type { OffsetPaginatedResults_ImageDTO_ } from './models/OffsetPaginatedResults_ImageDTO_';
export type { ONNXLatentsToImageInvocation } from './models/ONNXLatentsToImageInvocation';
export type { ONNXModelLoaderOutput } from './models/ONNXModelLoaderOutput';
export type { ONNXPromptInvocation } from './models/ONNXPromptInvocation';
export type { ONNXSD1ModelLoaderInvocation } from './models/ONNXSD1ModelLoaderInvocation';
export type { ONNXStableDiffusion1ModelConfig } from './models/ONNXStableDiffusion1ModelConfig';
export type { ONNXStableDiffusion2ModelConfig } from './models/ONNXStableDiffusion2ModelConfig';
export type { ONNXTextToLatentsInvocation } from './models/ONNXTextToLatentsInvocation';
export type { OpenposeImageProcessorInvocation } from './models/OpenposeImageProcessorInvocation';
export type { PaginatedResults_GraphExecutionState_ } from './models/PaginatedResults_GraphExecutionState_';
export type { ParamFloatInvocation } from './models/ParamFloatInvocation';
export type { ParamIntInvocation } from './models/ParamIntInvocation';
export type { PidiImageProcessorInvocation } from './models/PidiImageProcessorInvocation';
export type { PipelineModelField } from './models/PipelineModelField';
export type { PipelineModelLoaderInvocation } from './models/PipelineModelLoaderInvocation';
export type { PromptCollectionOutput } from './models/PromptCollectionOutput';
export type { PromptOutput } from './models/PromptOutput';
export type { RandomIntInvocation } from './models/RandomIntInvocation';
export type { RandomRangeInvocation } from './models/RandomRangeInvocation';
export type { RangeInvocation } from './models/RangeInvocation';
export type { RangeOfSizeInvocation } from './models/RangeOfSizeInvocation';
export type { ResizeLatentsInvocation } from './models/ResizeLatentsInvocation';
export type { RestoreFaceInvocation } from './models/RestoreFaceInvocation';
export type { ScaleLatentsInvocation } from './models/ScaleLatentsInvocation';
export type { ShowImageInvocation } from './models/ShowImageInvocation';
export type { StableDiffusion1ModelCheckpointConfig } from './models/StableDiffusion1ModelCheckpointConfig';
export type { StableDiffusion1ModelDiffusersConfig } from './models/StableDiffusion1ModelDiffusersConfig';
export type { StableDiffusion2ModelCheckpointConfig } from './models/StableDiffusion2ModelCheckpointConfig';
export type { StableDiffusion2ModelDiffusersConfig } from './models/StableDiffusion2ModelDiffusersConfig';
export type { StepParamEasingInvocation } from './models/StepParamEasingInvocation';
export type { SubModelType } from './models/SubModelType';
export type { SubtractInvocation } from './models/SubtractInvocation';
export type { TextToLatentsInvocation } from './models/TextToLatentsInvocation';
export type { TextualInversionModelConfig } from './models/TextualInversionModelConfig';
export type { UNetField } from './models/UNetField';
export type { UpscaleInvocation } from './models/UpscaleInvocation';
export type { VaeField } from './models/VaeField';
export type { VaeModelConfig } from './models/VaeModelConfig';
export type { VaeRepo } from './models/VaeRepo';
export type { ValidationError } from './models/ValidationError';
export type { ZoeDepthImageProcessorInvocation } from './models/ZoeDepthImageProcessorInvocation';
export const tagTypes = ['Board', 'Image', 'ImageMetadata', 'Model'];
export type ApiFullTagDescription = FullTagDescription<
  (typeof tagTypes)[number]

@@ -1,26 +0,0 @@
/* istanbul ignore file */
/* tslint:disable */
/* eslint-disable */

/**
 * Adds two numbers
 */
export type AddInvocation = {
  /**
   * The id of this node. Must be unique among all nodes.
   */
  id: string;
  /**
   * Whether or not this node is an intermediate node.
   */
  is_intermediate?: boolean;
  type?: 'add';
  /**
   * The first number
   */
  'a'?: number;
  /**
   * The second number
   */
  'b'?: number;
};
@@ -1,14 +0,0 @@
/* istanbul ignore file */
/* tslint:disable */
/* eslint-disable */

export type BoardChanges = {
  /**
   * The board's new name.
   */
  board_name?: string;
  /**
   * The name of the board's new cover image.
   */
  cover_image_name?: string;
};
@@ -1,37 +0,0 @@
/* istanbul ignore file */
/* tslint:disable */
/* eslint-disable */

/**
 * Deserialized board record with cover image URL and image count.
 */
export type BoardDTO = {
  /**
   * The unique ID of the board.
   */
  board_id: string;
  /**
   * The name of the board.
   */
  board_name: string;
  /**
   * The created timestamp of the board.
   */
  created_at: string;
  /**
   * The updated timestamp of the board.
   */
  updated_at: string;
  /**
   * The deleted timestamp of the board.
   */
  deleted_at?: string;
  /**
   * The name of the board's cover image.
   */
  cover_image_name?: string;
  /**
   * The number of images in the board.
   */
  image_count: number;
};
Some files were not shown because too many files have changed in this diff.