Compare commits


6 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Lincoln Stein | d894c86db1 | Merge branch 'main' into lstein/threaded-download | 2023-07-18 12:04:45 -04:00 |
| Lincoln Stein | 399d505801 | add unsuccessful socket listener for model import events | 2023-07-17 22:01:37 -04:00 |
| Lincoln Stein | ad312fa1ec | Merge branch 'main' into lstein/threaded-download | 2023-07-17 21:41:46 -04:00 |
| Lincoln Stein | 80ce014b1e | merged in bugfixes from feat/model-events | 2023-07-17 17:19:52 -04:00 |
| Lincoln Stein | 1fd053b42d | improve swagger documentation | 2023-07-17 16:14:02 -04:00 |
| Lincoln Stein | da187d6a87 | API model downloads are now threaded and generate progress events | 2023-07-17 15:51:56 -04:00 |
198 changed files with 5764 additions and 11631 deletions

View File

@@ -43,7 +43,7 @@ jobs:
--verbose
- name: deploy to gh-pages
if: ${{ github.ref == 'refs/heads/main' }}
if: ${{ github.ref == 'refs/heads/v2.3' }}
run: |
python -m \
mkdocs gh-deploy \

View File

@@ -36,6 +36,15 @@
</div>
_**Note: This is an alpha release. Bugs are expected and not all
features are fully implemented. Please use the GitHub [Issues
pages](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen)
to report unexpected problems. Also note that the InvokeAI root directory,
which contains models, outputs and configuration files, has changed
between the 2.x and 3.x releases. If you wish to use your v2.3 root
directory with v3.0, please follow the directions in [Migrating a 2.3
root directory to 3.0](#migrating-to-3).**_
InvokeAI is a leading creative engine built to empower professionals
and enthusiasts alike. Generate and create stunning visual media using
the latest AI-driven technologies. InvokeAI offers an industry leading
@@ -255,24 +264,19 @@ old models directory (which contains the models selected at install
time) will be renamed `models.orig` and can be deleted once you have
confirmed that the migration was successful.
If you wish, you can pass the 2.3 root directory to both `--from` and
`--to` in order to update in place. Warning: this directory will no
longer be usable with InvokeAI 2.3.
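For example, a minimal sketch of such an in-place update, assuming the 3.0 migration script is invoked as `invokeai-migrate3` (check your release's documentation for the exact command and paths):

```bash
# Point --from and --to at the same v2.3 root to update it in place.
# Warning: the directory will no longer work with InvokeAI 2.3 afterwards.
invokeai-migrate3 --from ~/invokeai-2.3 --to ~/invokeai-2.3
```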
#### Migrating in place
For the adventurous, you may do an in-place upgrade from 2.3 to 3.0
without touching the command line. **This recipe does not work on
Windows platforms due to a bug in the Windows version of the 2.3
upgrade script.** See the next section for a Windows recipe.
##### For Mac and Linux Users:
without touching the command line. The recipe is as follows:
1. Launch the InvokeAI launcher script in your current v2.3 root directory.
2. Select option [9] "Update InvokeAI" to bring up the updater dialog.
3. Select option [1] to upgrade to the latest release.
3a. During the alpha release phase, select option [3] and manually
enter the tag name `v3.0.0+a2`.
3b. Once 3.0 is released, select option [1] to upgrade to the latest release.
4. Once the upgrade is finished you will be returned to the launcher
menu. Select option [7] "Re-run the configure script to fix a broken
@@ -291,33 +295,14 @@ worked, you can safely remove these files. Alternatively you can
restore a working v2.3 directory by removing the new files and
restoring the ".orig" files' original names.
##### For Windows Users:
Windows users can upgrade with the following recipe:
1. Enter the 2.3 root directory you wish to upgrade
2. Launch `invoke.sh` or `invoke.bat`
3. Select the "Developer's console" option [8]
4. Type the following commands
```
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0" --use-pep517 --upgrade
invokeai-configure --root .
```
(Replace `v3.0.0` with the current release number if this document is out of date).
The first command will install and upgrade new software to run
InvokeAI. The second will prepare the 2.3 directory for use with 3.0.
You may now launch the WebUI in the usual way, by selecting option [1]
from the launcher script.
#### Migration Caveats
The migration script will migrate your invokeai settings and models,
including textual inversion models, LoRAs and merges that you may have
installed previously. However it does **not** migrate the generated
images stored in your 2.3-format outputs directory. You will need to
manually import selected images into the 3.0 gallery via drag-and-drop.
images stored in your 2.3-format outputs directory. The released
version of 3.0 is expected to have an interface for importing an
entire directory of image files as a batch.
## Hardware Requirements
@@ -329,12 +314,9 @@ AMD card (using the ROCm driver).
You will need one of the following:
- An NVIDIA-based graphics card with 4 GB or more VRAM memory. 6-8 GB
of VRAM is highly recommended for rendering using the Stable
Diffusion XL models
- An NVIDIA-based graphics card with 4 GB or more VRAM memory.
- An Apple computer with an M1 chip.
- An AMD-based graphics card with 4GB or more VRAM memory (Linux
only), 6-8 GB for XL rendering.
- An AMD-based graphics card with 4GB or more VRAM memory. (Linux only)
We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
@@ -367,12 +349,13 @@ Invoke AI provides an organized gallery system for easily storing, accessing, an
### Other features
- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1, XL support*
- *SD 2.0, 2.1 support*
- *Upscaling Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
- *Node-Based Architecture*
- *Node-Based Plug-&-Play UI (Beta)*
- *SDXL Support* (Coming soon)
### Latest Changes

View File

@@ -617,6 +617,8 @@ sections describe what's new for InvokeAI.
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains for
backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for [inpainting](deprecated/INPAINTING.md) and
[outpainting](features/OUTPAINTING.md)
- img2img runs on all k\* samplers
- Support for
[negative prompts](features/PROMPTS.md#negative-and-unconditioned-prompts)

14 binary image files changed (shown only as before/after sizes); 2 file diffs suppressed because one or more lines are too long.

View File

@@ -1,8 +1,8 @@
---
title: Textual Inversion Embeddings and LoRAs
title: Concepts
---
# :material-library-shelves: Textual Inversions and LoRAs
# :material-library-shelves: The Hugging Face Concepts Library and Importing Textual Inversion files
With the advances in research, many new capabilities are available to customize the knowledge and understanding of novel concepts not originally contained in the base model.
@@ -64,25 +64,21 @@ select the embedding you'd like to use. This UI has type-ahead support, so you c
## Using LoRAs
LoRA files are models that customize the output of Stable Diffusion
image generation. Larger than embeddings, but much smaller than full
models, they augment SD with improved understanding of subjects and
artistic styles.
Unlike TI files, LoRAs do not introduce novel vocabulary into the
model's known tokens. Instead, LoRAs augment the model's weights that
are applied to generate imagery. LoRAs may be supplied with a
"trigger" word that they have been explicitly trained on, or may
simply apply their effect without being triggered.
LoRAs are typically stored in .safetensors files, which are the most
secure way to store and transmit these types of weights. You may
install any number of `.safetensors` LoRA files simply by copying them
into the `autoimport/lora` directory of the corresponding InvokeAI models
directory (usually `invokeai` in your home directory).
LoRAs are typically stored in .safetensors files, which are the most secure way to store and transmit
these types of weights. You may install any number of `.safetensors` LoRA files simply by copying them into
the `lora` directory of the corresponding InvokeAI models directory (usually `invokeai`
in your home directory). For example, you can simply move a Stable Diffusion 1.5 LoRA file to
the `sd-1/lora` folder.
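As a concrete sketch of the copy step on Linux or macOS, assuming a downloaded file named `ink-scenery.safetensors` and an InvokeAI root at `~/invokeai` (both placeholders):

```bash
# Copy the LoRA into the autoimport folder; it is picked up the next
# time the web server starts.
cp ~/Downloads/ink-scenery.safetensors ~/invokeai/autoimport/lora/
```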
To use these when generating, open the LoRA menu item in the options
panel, select the LoRAs you want to apply and ensure that they have
the appropriate weight recommended by the model provider. Typically,
most LoRAs perform best at a weight of .75-1.

View File

@@ -8,64 +8,20 @@ title: ControlNet
ControlNet
ControlNet is a powerful set of features developed by the open-source
community (notably, Stanford researcher
[**@lllyasviel**](https://github.com/lllyasviel)) that allows you to
apply a secondary neural network model to your image generation
process in Invoke.
With ControlNet, you can get more control over the output of your
image generation, providing you with a way to direct the network
towards generating images that better fit your desired style or
outcome.
### How it works
ControlNet works by analyzing an input image, pre-processing that
image to identify relevant information that can be interpreted by each
specific ControlNet model, and then inserting that control information
into the generation process. This can be used to adjust the style,
composition, or other aspects of the image to better achieve a
specific result.
### Models
InvokeAI provides access to a series of ControlNet models that provide
different effects or styles in your generated images. Currently
InvokeAI only supports "diffuser" style ControlNet models. These are
folders that contain the files `config.json` and/or
`diffusion_pytorch_model.safetensors` and
`diffusion_pytorch_model.fp16.safetensors`. The name of the folder is
the name of the model.
As part of the model installation, ControlNet models can be selected, including a variety of pre-trained models that have been added to achieve different effects or styles in your generated images. Further ControlNet models may require additional code functionality to also be incorporated into Invoke's Invocations folder. You should expect to follow any installation instructions for ControlNet models loaded outside the default models provided by Invoke. The default models include:
***InvokeAI does not currently support checkpoint-format
ControlNets. These come in the form of a single file with the
extension `.safetensors`.***
Diffuser-style ControlNet models are available at HuggingFace
(http://huggingface.co) and accessed via their repo IDs (identifiers
in the format "author/modelname"). The easiest way to install them is
to use the InvokeAI model installer application. Use the
`invoke.sh`/`invoke.bat` launcher to select item [5] and then navigate
to the CONTROLNETS section. Select the models you wish to install and
press "APPLY CHANGES". You may also enter additional HuggingFace
repo_ids in the "Additional models" textbox:
![Model Installer - ControlNet](../assets/installing-models/model-installer-controlnet.png){:width="640px"}
Command-line users can launch the model installer using the command
`invokeai-model-install`.
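As a hedged sketch, installing one diffuser-style ControlNet from the command line, assuming the Canny model lives at the HuggingFace repo id `lllyasviel/control_v11p_sd15_canny` (verify the repo id for the model you actually want):

```bash
# Install a diffusers-format ControlNet model by its HuggingFace repo id.
invokeai-model-install --add lllyasviel/control_v11p_sd15_canny
```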
_Be aware that some ControlNet models require additional code
functionality in order to work properly, so just installing a
third-party ControlNet model may not have the desired effect._ Please
read and follow the documentation for installing a third party model
not currently included among InvokeAI's default list.
The models currently supported include:
**Canny**:

View File

@@ -4,19 +4,15 @@ title: InvokeAI Web Server
# :material-web: InvokeAI Web Server
## Quick guided walkthrough of the WebUI's features
As of version 2.0.0, this distribution comes with a full-featured web server
(see screenshot).
While most of the WebUI's features are intuitive, here is a guided walkthrough
of its various components.
### Launching the WebUI
To run the InvokeAI web server, start the `invoke.sh`/`invoke.bat`
script and select option (1). Alternatively, with the InvokeAI
environment active, run `invokeai-web`:
To use it, launch the `invoke.sh`/`invoke.bat` script and select
option (2). Alternatively, with the InvokeAI environment active, run
the `invokeai` script by adding the `--web` option:
```bash
invokeai-web
invokeai --web
```
You can then connect to the server by pointing your web browser at
@@ -32,32 +28,33 @@ invoke.sh --host 0.0.0.0
or
```bash
invokeai-web --host 0.0.0.0
invokeai --web --host 0.0.0.0
```
### The InvokeAI Web Interface
## Quick guided walkthrough of the WebUI's features
While most of the WebUI's features are intuitive, here is a guided walkthrough
of its various components.
![Invoke Web Server - Major Components](../assets/invoke-web-server-1.png){:width="640px"}
The screenshot above shows the Text to Image tab of the WebUI. There are three
main sections:
1. A **control panel** on the left, which contains various settings
for text to image generation. The most important part is the text
field (currently showing `fantasy painting, horned demon`) for
entering the positive text prompt, another text field right below it for an
optional negative text prompt (concepts to exclude), and an _Invoke_ button
to begin the image rendering process.
1. A **control panel** on the left, which contains various settings for text to
image generation. The most important part is the text field (currently
showing `strawberry sushi`) for entering the text prompt, and the camera icon
directly underneath that will render the image. We'll call this the _Invoke_
button from now on.
2. The **current image** section in the middle, which shows a large
format version of the image you are currently working on. A series
of buttons at the top lets you modify and manipulate the image in
various ways.
2. The **current image** section in the middle, which shows a large format
version of the image you are currently working on. A series of buttons at the
top ("image to image", "Use All", "Use Seed", etc) lets you modify the image
in various ways.
3. A **gallery** section on the left that contains a history of the images you
3. A **gallery** section on the left that contains a history of the images you
have generated. These images are read and written to the directory specified
in the `INVOKEAIROOT/invokeai.yaml` initialization file, usually a directory
named `outputs` in `INVOKEAIROOT`.
at launch time in `--outdir`.
In addition to these three elements, there are a series of icons for changing
global settings, reporting bugs, and changing the theme on the upper right.
@@ -79,10 +76,14 @@ From top to bottom, these are:
with outpainting, and modify interior portions of the image with
inpainting, erase portions of a starting image and have the AI fill in
the erased region from a text prompt.
4. Node Editor - (experimental) this panel allows you to create
4. Workflow Management (not yet implemented) - this panel will allow you to create
pipelines of common operations and combine them into workflows.
5. Model Manager - this panel allows you to import and configure new
models using URLs, local paths, or HuggingFace diffusers repo_ids.
5. Training (not yet implemented) - this panel will provide an interface to [textual
inversion training](TEXTUAL_INVERSION.md) and fine tuning.
The inpainting, outpainting and postprocessing tabs are currently in
development. However, limited versions of their features can already be accessed
through the Text to Image and Image to Image tabs.
## Walkthrough
@@ -91,54 +92,43 @@ feature set.
### Text to Image
1. Launch the WebUI using launcher option [1] and connect to it with
your browser by accessing `http://localhost:9090`. If the browser
and server are running on different machines on your LAN, add the
option `--host 0.0.0.0` to the `invoke.sh` launch command line and connect to
the machine hosting the web server using its IP address or domain
name.
1. Launch the WebUI using `python scripts/invoke.py --web` and connect to it
with your browser by accessing `http://localhost:9090`. If the browser and
server are running on different machines on your LAN, add the option
`--host 0.0.0.0` to the launch command line and connect to the machine
hosting the web server using its IP address or domain name.
2. If all goes well, the WebUI should come up and you'll see a green dot
meaning `connected` on the upper right.
![Invoke Web Server - Control Panel](../assets/invoke-control-panel-1.png){ align=right width=300px }
2. If all goes well, the WebUI should come up and you'll see a green
`connected` message on the upper right.
#### Basics
1. Generate an image by typing _bluebird_ into the large prompt field
on the upper left and then clicking on the Invoke button or pressing
the return button.
After a short wait, you'll see a large image of a bluebird in the
1. Generate an image by typing _strawberry sushi_ into the large prompt field
on the upper left and then clicking on the Invoke button (the one with the
Camera icon). After a short wait, you'll see a large image of sushi in the
image panel, and a new thumbnail in the gallery on the right.
If you need more room on the screen, you can turn the gallery off
by typing the **g** hotkey. You can turn it back on later by clicking the
image icon that appears in the gallery's place. The list of hotkeys can
be found by clicking on the keyboard icon above the image gallery.
If you need more room on the screen, you can turn the gallery off by
clicking on the **x** to the right of "Your Invocations". You can turn it
back on later by clicking the image icon that appears in the gallery's
place.
2. Generate a bunch of bluebird images by increasing the number of
requested images by adjusting the Images counter just below the Invoke button.
The images are written into the directory indicated by the `--outdir` option
provided at script launch time. By default, this is `outputs/img-samples`
under the InvokeAI directory.
2. Generate a bunch of strawberry sushi images by increasing the number of
requested images by adjusting the Images counter just below the Camera
button. As each is generated, it will be added to the gallery. You can
switch the active image by clicking on the gallery thumbnails.
If you'd like to watch the image generation progress, click the hourglass
icon above the main image area. As generation progresses, you'll see
increasingly detailed versions of the ultimate image.
3. Try playing with different settings, including changing the main
model, the image width and height, the Scheduler, the Steps and
the CFG scale.
The _Model_ changes the main model. Thousands of custom models are
now available, which generate a variety of image styles and
subjects. While InvokeAI comes with a few starter models, it is
easy to import new models into the application. See [Installing
Models](../installation/050_INSTALLING_MODELS.md) for more details.
3. Try playing with different settings, including image width and height, the
Sampler, the Steps and the CFG scale.
Image _Width_ and _Height_ do what you'd expect. However, be aware that
larger images consume more VRAM memory and take longer to generate.
The _Scheduler_ controls how the AI selects the image to display. Some
The _Sampler_ controls how the AI selects the image to display. Some
samplers are more "creative" than others and will produce a wider range of
variations (see next section). Some samplers run faster than others.
@@ -152,27 +142,17 @@ feature set.
to the input prompt. You can go as high or low as you like, but generally
values greater than 20 won't improve things much, and values lower than 5
will produce unexpected images. There are complex interactions between
_Steps_, _CFG Scale_ and the _Scheduler_, so experiment to find out what works
_Steps_, _CFG Scale_ and the _Sampler_, so experiment to find out what works
for you.
The _Seed_ controls the series of values returned by InvokeAI's
random number generator. Each unique seed value will generate a different
image. To regenerate a previous image, simply use the original image's
seed value. A slider to the right of the _Seed_ field will change the
seed each time an image is generated.
![Invoke Web Server - Control Panel 2](../assets/control-panel-2.png){ align=right width=400px }
4. To regenerate a previously-generated image, select the image you want and
click _Use All_. This loads the text prompt and other original settings into
the control panel. If you then press _Invoke_ it will regenerate the image
exactly. You can also selectively modify the prompt or other settings to
tweak the image.
4. To regenerate a previously-generated image, select the image you
want and click the asterisk ("*") button at the top of the
image. This loads the text prompt and other original settings into
the control panel. If you then press _Invoke_ it will regenerate
the image exactly. You can also selectively modify the prompt or
other settings to tweak the image.
Alternatively, you may click on the "sprouting plant icon" to load
just the image's seed and leave other settings unchanged, or click the
quote icon to load just the positive and negative prompts.
Alternatively, you may click on _Use Seed_ to load just the image's seed,
and leave other settings unchanged.
5. To regenerate a Stable Diffusion image that was generated by another SD
package, you need to know its text prompt and its _Seed_. Copy-paste the
@@ -181,22 +161,62 @@ feature set.
you Invoke, you will get something similar to the original image. It will
not be exact unless you also set the correct values for the original
sampler, CFG, steps and dimensions, but it will (usually) be close.
6. To save an image, right click on it to bring up a menu that will
let you download the image, save it to a named image gallery, and
copy it to the clipboard, among other things.
#### Upscaling
#### Variations on a theme
![Invoke Web Server - Upscaling](../assets/upscaling.png){ align=right width=400px }
1. Let's try generating some variations. Select your favorite sushi image from
the gallery to load it. Then select "Use All" from the list of buttons
above. This will load up all the settings used to generate this image,
including its unique seed.
"Upscaling" is the process of increasing the size of an image while
retaining the sharpness. InvokeAI uses an external library called
"ESRGAN" to do this. To invoke upscaling, simply select an image
and press the "expanding arrows" button above it. You can select
between 2X and 4X upscaling, and adjust the upscaling strength,
which has much the same meaning as in facial reconstruction. Try
running this on one of your previously-generated images.
Go down to the Variations section of the Control Panel and set the button to
On. Set Variation Amount to 0.2 to generate a modest number of variations on
the image, and also set the Image counter to `4`. Press the `invoke` button.
This will generate a series of related images. To obtain smaller variations,
just lower the Variation Amount. You may also experiment with changing the
Sampler. Some samplers generate more variability than others. _k_euler_a_ is
particularly creative, while _ddim_ is pretty conservative.
2. For even more variations, experiment with increasing the setting for
_Perlin_. This adds a bit of noise to the image generation process. Note
that values of Perlin noise greater than 0.15 produce poor images for
several of the samplers.
#### Facial reconstruction and upscaling
Stable Diffusion frequently produces mangled faces, particularly when there are
multiple figures in the same scene. Stable Diffusion has particular issues with
generating realistic eyes. InvokeAI provides the ability to reconstruct faces
using either the GFPGAN or CodeFormer libraries. For more information see
[POSTPROCESS](POSTPROCESS.md).
1. Invoke a prompt that generates a mangled face. A prompt that often gives
this is "portrait of a lawyer, 3/4 shot" (this is not intended as a slur
against lawyers!) Once you have an image that needs some touching up, load
it into the Image panel, and press the button with the face icon
(highlighted in the first screenshot below). A dialog box will appear. Leave
_Strength_ at 0.8 and press *Restore Faces*. If all goes well, the eyes and
other aspects of the face will be improved (see the second screenshot).
![Invoke Web Server - Original Image](../assets/invoke-web-server-3.png)
![Invoke Web Server - Retouched Image](../assets/invoke-web-server-4.png)
The facial reconstruction _Strength_ field adjusts how aggressively the face
library will try to alter the face. It can be as high as 1.0, but be aware
that this often softens the face to an airbrushed style, losing some details. The
default 0.8 is usually sufficient.
2. "Upscaling" is the process of increasing the size of an image while
retaining the sharpness. InvokeAI uses an external library called "ESRGAN"
to do this. To invoke upscaling, simply select an image and press the _HD_
button above it. You can select between 2X and 4X upscaling, and adjust the
upscaling strength, which has much the same meaning as in facial
reconstruction. Try running this on one of your previously-generated images.
3. Finally, you can run facial reconstruction and/or upscaling automatically
after each Invocation. Go to the Advanced Options section of the Control
Panel and turn on _Restore Face_ and/or _Upscale_.
### Image to Image
@@ -204,14 +224,24 @@ InvokeAI lets you take an existing image and use it as the basis for a new
creation. You can use any sort of image, including a photograph, a scanned
sketch, or a digital drawing, as long as it is in PNG or JPEG format.
For this tutorial, we'll use the file named
[Lincoln-and-Parrot-512.png](../assets/Lincoln-and-Parrot-512.png).
For this tutorial, we'll use files named
[Lincoln-and-Parrot-512.png](../assets/Lincoln-and-Parrot-512.png), and
[Lincoln-and-Parrot-512-transparent.png](../assets/Lincoln-and-Parrot-512-transparent.png).
Download these images to your local machine now to continue with the
walkthrough.
1. Click on the _Image to Image_ tab icon, which is the second icon
from the top on the left-hand side of the screen. This will bring
you to a screen similar to the one shown here:
1. Click on the _Image to Image_ tab icon, which is the second icon from the
top on the left-hand side of the screen:
![Invoke Web Server - Image to Image Tab](../assets/invoke-web-server-6.png){ width="640px" }
<figure markdown>
![Invoke Web Server - Image to Image Icon](../assets/invoke-web-server-5.png)
</figure>
This will bring you to a screen similar to the one shown here:
<figure markdown>
![Invoke Web Server - Image to Image Tab](../assets/invoke-web-server-6.png){:width="640px"}
</figure>
2. Drag-and-drop the Lincoln-and-Parrot image into the Image panel, or click
the blank area to get an upload dialog. The image will load into an area
@@ -225,99 +255,120 @@ For this tutorial, we'll use the file named
![Invoke Web Server - Image to Image example](../assets/invoke-web-server-7.png){:width="640px"}
4. Experiment with the different settings. The most influential one in Image to
Image is _Denoising Strength_ located about midway down the control
Image is _Image to Image Strength_ located about midway down the control
panel. By default it is set to 0.75, but can range from 0.0 to 0.99. The
higher the value, the more of the original image the AI will replace. A
value of 0 will leave the initial image completely unchanged, while 0.99
will replace it completely. However, the _Scheduler_ and _CFG Scale_ also
will replace it completely. However, the Sampler and CFG Scale also
influence the final result. You can also generate variations in the same way
as described in Text to Image.
5. What if we only want to change certain part(s) of the image and
leave the rest intact? This is called Inpainting, and you can do
it in the [Unified Canvas](UNIFIED_CANVAS.md). The Unified Canvas
also allows you to extend borders of the image and fill in the
blank areas, a process called outpainting.
5. What if we only want to change certain part(s) of the image and leave the
rest intact? This is called Inpainting, and a future version of the InvokeAI
web server will provide an interactive painting canvas on which you can
directly draw the areas you wish to Inpaint into. For now, you can achieve
this effect by using an external photoeditor tool to make one or more
regions of the image transparent as described in [INPAINTING.md] and
uploading that.
The file
[Lincoln-and-Parrot-512-transparent.png](../assets/Lincoln-and-Parrot-512-transparent.png)
is a version of the earlier image in which the area around the parrot has
been replaced with transparency. Click on the "x" in the upper right of the
Initial Image and upload the transparent version. Using the same prompt "old
sea captain with raven on shoulder" try Invoking an image. This time, only
the parrot will be replaced, leaving the rest of the original image intact:
<figure markdown>
![Invoke Web Server - Inpainting](../assets/invoke-web-server-8.png){:width="640px"}
</figure>
6. Would you like to modify a previously-generated image using the Image to
Image facility? Easy! While in the Image to Image panel, drag and drop any
image in the gallery into the Initial Image area, and it will be ready for
use. You can do the same thing with the main image display. Click on the
_Send to_ icon to get a menu of
commands and choose "Send to Image to Image".
![Send To Icon](../assets/send-to-icon.png)
Image facility? Easy! While in the Image to Image panel, hover over any of
the gallery images to see a little menu of icons pop up. Click the picture
icon to instantly send the selected image to Image to Image as the initial
image.
### Textual Inversion, LoRA and ControlNet
You can do the same from the Text to Image tab by clicking on the picture icon
above the central image panel. The screenshot below shows where the "use as
initial image" icons are located.
InvokeAI supports several different types of model files that
extend the capabilities of the main model by adding artistic
styles, special effects, or subjects. By mixing and matching textual
inversion, LoRA and ControlNet models, you can achieve many
interesting and beautiful effects.
![Invoke Web Server - Use as Image Links](../assets/invoke-web-server-9.png){:width="640px"}
We will give an example using a LoRA model named "Ink Scenery". This
LoRA, which can be downloaded from Civitai (civitai.com), is
specialized to paint landscapes that look like they were made with
dripping india ink. To install this LoRA, we first download it and
put it into the `autoimport/lora` folder located inside the
`invokeai` root directory. After restarting the web server, the
LoRA will now become available for use.
### Unified Canvas
To see this LoRA at work, we'll first generate an image without it
using the standard `stable-diffusion-v1-5` model. Choose this
model and enter the prompt "mountains, ink". Here is a typical
generated image, a mountain range rendered in ink and watercolor
wash:
See the [Unified Canvas Guide](UNIFIED_CANVAS.md)
![Ink Scenery without LoRA](../assets/lora-example-0.png){ width=512px }
## Reference
Now let's install and activate the Ink Scenery LoRA. Go to
https://civitai.com/models/78605/ink-scenery-or and download the LoRA
model file to `invokeai/autoimport/lora` and restart the web
server. (Alternatively, you can use [InvokeAI's Web Model
Manager](../installation/050_INSTALLING_MODELS.md) to download and
install the LoRA directly by typing its URL into the _Import
Models_->_Location_ field).
### Additional Options
Scroll down the control panel until you get to the LoRA accordion
section, and open it:
| parameter | effect |
| --- | --- |
| `--web_develop` | Starts the web server in development mode. |
| `--web_verbose` | Enables verbose logging. |
| `--cors [CORS ...]` | Additional allowed origins, comma-separated. |
| `--host HOST` | Web server: host or IP to listen on. Set to 0.0.0.0 to accept traffic from other devices on your network. |
| `--port PORT` | Web server: port to listen on. |
| `--certfile CERTFILE` | Web server: path to certificate file to use for SSL. Use together with `--keyfile`. |
| `--keyfile KEYFILE` | Web server: path to private key file to use for SSL. Use together with `--certfile`. |
| `--gui` | Start the InvokeAI GUI. This is the "desktop mode" version of the web app; it uses Flask to create a desktop-app experience of the webserver. |
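For example, a sketch of serving the WebUI to other machines over SSL, using only the flags from the table above (the certificate paths are placeholders):

```bash
invokeai --web --host 0.0.0.0 --port 9090 \
  --certfile /etc/ssl/invokeai.crt --keyfile /etc/ssl/invokeai.key
```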
![LoRA Section](../assets/lora-example-1.png){ width=512px }
### Web Specific Features
Click the popup menu and select "Ink scenery". (If it isn't there, then
the model wasn't installed to the right place, or perhaps you forgot
to restart the web server.) The LoRA section will change to look like this:
The web experience offers an incredibly easy-to-use experience for interacting
with the InvokeAI toolkit. For detailed guidance on individual features, see the
Feature-specific help documents available in this directory. Note that the
latest functionality available in the CLI may not always be available in the Web
interface.
![LoRA Section Loaded](../assets/lora-example-2.png){ width=512px }
#### Dark Mode & Light Mode
Note that there is now a slider control for _Ink scenery_. The slider
controls how much influence the LoRA model will have on the generated
image.
The InvokeAI interface is available in a nano-carbon black & purple Dark Mode,
and a "burn your eyes out Nosferatu" Light Mode. These can be toggled by
clicking the Sun/Moon icons at the top right of the interface.
Run the "mountains, ink" prompt again and observe the change in style:
![InvokeAI Web Server - Dark Mode](../assets/invoke_web_dark.png)
![Ink Scenery](../assets/lora-example-3.png){ width=512px }
![InvokeAI Web Server - Light Mode](../assets/invoke_web_light.png)
Try adjusting the weight slider for larger and smaller weights and
generate the image after each adjustment. The higher the weight, the
more influence the LoRA will have.
#### Invocation Toolbar
To remove the LoRA completely, just click on its trash can icon.
The left side of the InvokeAI interface is available for customizing the prompt
and the settings used for invoking your new image. Typing your prompt into the
open text field and clicking the Invoke button will produce the image based on
the settings configured in the toolbar.
Multiple LoRAs can be added simultaneously and combined with textual
inversions and ControlNet models. Please see [Textual Inversions and
LoRAs](CONCEPTS.md) and [Using ControlNet](CONTROLNET.md) for details.
See below for additional documentation related to each feature:
## Summary
- [Variations](./VARIATIONS.md)
- [Upscaling](./POSTPROCESS.md#upscaling)
- [Image to Image](./IMG2IMG.md)
- [Other](./OTHER.md)
This walkthrough just skims the surface of the many things InvokeAI
can do. Please see [Features](index.md) for more detailed reference
guides.
#### Invocation Gallery
The currently selected --outdir (or the default outputs folder) will display all
previously generated files on load. As new invocations are generated, these will
be dynamically added to the gallery, and can be previewed by selecting them.
Each image also has a simple set of actions (e.g., Delete, Use Seed, Use All
Parameters, etc.) that can be accessed by hovering over the image.
#### Image Workspace
When an image from the Invocation Gallery is selected, or is generated, the
image will be displayed within the center of the interface. A quickbar of common
image interactions are displayed along the top of the image, including:
- Use image in the `Image to Image` workflow
- Initialize Face Restoration on the selected file
- Initialize Upscaling on the selected file
- View File metadata and details
- Delete the file
## Acknowledgements
A huge shout-out to the core team working to make the Web GUI a reality,
A huge shout-out to the core team working to make this vision a reality,
including [psychedelicious](https://github.com/psychedelicious),
[Kyle0654](https://github.com/Kyle0654) and
[blessedcoolant](https://github.com/blessedcoolant).

View File

@@ -17,12 +17,8 @@ a single convenient digital artist-optimized user interface.
### * [Prompt Engineering](PROMPTS.md)
Get the images you want with the InvokeAI prompt engineering language.
### * The [LoRA, LyCORIS and Textual Inversion Models](CONCEPTS.md)
Add custom subjects and styles using a variety of fine-tuned models.
### * [ControlNet](CONTROLNET.md)
Learn how to install and use ControlNet models for fine control over
image output.
## * The [Concepts Library](CONCEPTS.md)
Add custom subjects and styles using HuggingFace's repository of embeddings.
### * [Image-to-Image Guide](IMG2IMG.md)
Use a seed image to build new creations in the CLI.
@@ -33,28 +29,26 @@ are the ticket.
## Model Management
### * [Model Installation](../installation/050_INSTALLING_MODELS.md)
## * [Model Installation](../installation/050_INSTALLING_MODELS.md)
Learn how to import third-party models and switch among them. This
guide also covers optimizing models to load quickly.
### * [Merging Models](MODEL_MERGING.md)
## * [Merging Models](MODEL_MERGING.md)
Teach an old model new tricks. Merge 2-3 models together to create a
new model that combines characteristics of the originals.
### * [Textual Inversion](TRAINING.md)
## * [Textual Inversion](TEXTUAL_INVERSION.md)
Personalize models by adding your own style or subjects.
## Other Features
# Other Features
### * [The NSFW Checker](NSFW.md)
## * [The NSFW Checker](NSFW.md)
Prevent InvokeAI from displaying unwanted racy images.
### * [Controlling Logging](LOGGING.md)
## * [Controlling Logging](LOGGING.md)
Control how InvokeAI logs status messages.
<!-- OUT OF DATE
### * [Miscellaneous](OTHER.md)
## * [Miscellaneous](OTHER.md)
Run InvokeAI on Google Colab, generate images with repeating patterns,
batch process a file of prompts, increase the "creativity" of image
generation by adding initial noise, and more!
-->

View File

@@ -145,8 +145,8 @@ This method is recommended for those familiar with running Docker containers
### Model Management
- [Installing](installation/050_INSTALLING_MODELS.md)
- [Model Merging](features/MODEL_MERGING.md)
- [ControlNet Models](features/CONTROLNET.md)
- [Style/Subject Concepts and Embeddings](features/CONCEPTS.md)
- [Textual Inversion](features/TEXTUAL_INVERSION.md)
- [Not Safe for Work (NSFW) Checker](features/NSFW.md)
<!-- separator -->
### Prompt Engineering

View File

@@ -354,8 +354,8 @@ experimental versions later.
12. **InvokeAI Options**: You can launch InvokeAI with several different command-line arguments that
customize its behavior. For example, you can change the location of the
image output directory or balance memory usage vs performance. See
[Configuration](../features/CONFIGURATION.md) for a full list of the options.
image output directory, or select your favorite sampler. See the
[Command-Line Interface](../features/CLI.md) for a full list of the options.
- To set defaults that will take effect every time you launch InvokeAI,
use a text editor (e.g. Notepad) to edit the file

View File

@@ -256,7 +256,7 @@ manager, please follow these steps:
10. Render away!
Browse the [features](../features/index.md) section to learn about all the
Browse the [features](../features/CLI.md) section to learn about all the
things you can do with InvokeAI.
@@ -270,7 +270,7 @@ manager, please follow these steps:
12. Other scripts
The [Textual Inversion](../features/TRAINING.md) script can be launched with the command:
The [Textual Inversion](../features/TEXTUAL_INVERSION.md) script can be launched with the command:
```bash
invokeai-ti --gui

View File

@@ -43,7 +43,24 @@ InvokeAI comes with support for a good set of starter models. You'll
find them listed in the master models file
`configs/INITIAL_MODELS.yaml` in the InvokeAI root directory. The
subset that are currently installed are found in
`configs/models.yaml`.
`configs/models.yaml`. As of v2.3.1, the list of starter models is:
|Model Name | HuggingFace Repo ID | Description | URL |
|---------- | ---------- | ----------- | --- |
|stable-diffusion-1.5|runwayml/stable-diffusion-v1-5|Stable Diffusion version 1.5 diffusers model (4.27 GB)|https://huggingface.co/runwayml/stable-diffusion-v1-5 |
|sd-inpainting-1.5|runwayml/stable-diffusion-inpainting|RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)|https://huggingface.co/runwayml/stable-diffusion-inpainting |
|stable-diffusion-2.1|stabilityai/stable-diffusion-2-1|Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)|https://huggingface.co/stabilityai/stable-diffusion-2-1 |
|sd-inpainting-2.0|stabilityai/stable-diffusion-2-inpainting|Stable Diffusion version 2.0 inpainting model (5.21 GB)|https://huggingface.co/stabilityai/stable-diffusion-2-inpainting |
|analog-diffusion-1.0|wavymulder/Analog-Diffusion|An SD-1.5 model trained on diverse analog photographs (2.13 GB)|https://huggingface.co/wavymulder/Analog-Diffusion |
|deliberate-1.0|XpucT/Deliberate|Versatile model that produces detailed images up to 768px (4.27 GB)|https://huggingface.co/XpucT/Deliberate |
|d&d-diffusion-1.0|0xJustin/Dungeons-and-Diffusion|Dungeons & Dragons characters (2.13 GB)|https://huggingface.co/0xJustin/Dungeons-and-Diffusion |
|dreamlike-photoreal-2.0|dreamlike-art/dreamlike-photoreal-2.0|A photorealistic model trained on 768 pixel images based on SD 1.5 (2.13 GB)|https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0 |
|inkpunk-1.0|Envvi/Inkpunk-Diffusion|Stylized illustrations inspired by Gorillaz, FLCL and Shinkawa; prompt with "nvinkpunk" (4.27 GB)|https://huggingface.co/Envvi/Inkpunk-Diffusion |
|openjourney-4.0|prompthero/openjourney|An SD 1.5 model fine tuned on Midjourney; prompt with "mdjrny-v4 style" (2.13 GB)|https://huggingface.co/prompthero/openjourney |
|portrait-plus-1.0|wavymulder/portraitplus|An SD-1.5 model trained on close range portraits of people; prompt with "portrait+" (2.13 GB)|https://huggingface.co/wavymulder/portraitplus |
|seek-art-mega-1.0|coreco/seek.art_MEGA|A general use SD-1.5 "anything" model that supports multiple styles (2.1 GB)|https://huggingface.co/coreco/seek.art_MEGA |
|trinart-2.0|naclbit/trinart_stable_diffusion_v2|An SD-1.5 model finetuned with ~40K assorted high resolution manga/anime-style images (2.13 GB)|https://huggingface.co/naclbit/trinart_stable_diffusion_v2 |
|waifu-diffusion-1.4|hakurei/waifu-diffusion|An SD-1.5 model trained on 680k anime/manga-style images (2.13 GB)|https://huggingface.co/hakurei/waifu-diffusion |
Note that these files are covered by an "Ethical AI" license which
forbids certain uses. When you initially download them, you are asked
@@ -54,7 +71,8 @@ with the model terms by visiting the URLs in the table above.
## Community-Contributed Models
[HuggingFace](https://huggingface.co/models?library=diffusers)
There are too many to list here and more are being contributed every
day. [HuggingFace](https://huggingface.co/models?library=diffusers)
is a great resource for diffusers models, and is also the home of a
[fast-growing repository](https://huggingface.co/sd-concepts-library)
of embedding (".bin") models that add subjects and/or styles to your
@@ -68,106 +86,310 @@ only `.safetensors` and `.ckpt` models, but they can be easily loaded
into InvokeAI and/or converted into optimized `diffusers` models. Be
aware that CIVITAI hosts many models that generate NSFW content.
!!! note
InvokeAI 2.3.x does not support directly importing and
running Stable Diffusion version 2 checkpoint models. You may instead
convert them into `diffusers` models using the conversion methods
described below.
## Installation
There are two ways to install and manage models:
There are multiple ways to install and manage models:
1. The `invokeai-model-install` script which will download and install
them for you. In addition to supporting main models, you can install
ControlNet, LoRA and Textual Inversion models.
1. The `invokeai-configure` script which will download and install them for you.
2. The web interface (WebUI) has a GUI for importing and managing
2. The command-line tool (CLI) has commands that allows you to import, configure and modify
models files.
3. The web interface (WebUI) has a GUI for importing and managing
models.
3. By placing models (or symbolic links to models) inside one of the
InvokeAI root directory's `autoimport` folders.
### Installation via `invokeai-configure`
### Installation via `invokeai-model-install`
From the `invoke` launcher, choose option (6) "re-run the configure
script to download new models." This will launch the same script that
prompted you to select models at install time. You can use this to add
models that you skipped the first time around. It is all right to
specify a model that was previously downloaded; the script will just
confirm that the files are complete.
From the `invoke` launcher, choose option [5] "Download and install
models." This will launch the same script that prompted you to select
models at install time. You can use this to add models that you
skipped the first time around. It is all right to specify a model that
was previously downloaded; the script will just confirm that the files
are complete.
### Installation via the CLI
The installer has different panels for installing main models from
HuggingFace, models from Civitai and other arbitrary web sites,
ControlNet models, LoRA/LyCORIS models, and Textual Inversion
embeddings. Each section has a text box in which you can enter a new
model to install. You can refer to a model using its:
You can install a new model, including any of the community-supported ones, via
the command-line client's `!import_model` command.
1. Local path to the .ckpt, .safetensors or diffusers folder on your local machine
2. A directory on your machine that contains multiple models
3. A URL that points to a downloadable model
4. A HuggingFace repo id
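As a sketch, the same four reference forms also work with the scriptable installer described below; the paths and URL here are placeholders, and the repo id is taken from the starter-models table later on this page:

```bash
invokeai-model-install --add /path/to/martians.safetensors       # 1. local file
invokeai-model-install --add /path/to/civitai_models/            # 2. directory of models
invokeai-model-install --add https://example.org/sd_models/martians.safetensors  # 3. URL
invokeai-model-install --add runwayml/stable-diffusion-v1-5      # 4. HuggingFace repo id
```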
#### Installing individual `.ckpt` and `.safetensors` models
Previously-installed models are shown with checkboxes. Uncheck a box
to unregister the model from InvokeAI. Models that are physically
installed inside the InvokeAI root directory will be deleted and
purged (after a confirmation warning). Models that are located outside
the InvokeAI root directory will be unregistered but not deleted.
If the model is already downloaded to your local disk, use
`!import_model /path/to/file.ckpt` to load it. For example:
Note: The installer script uses a console-based text interface that requires
significant amounts of horizontal and vertical space. If the display
looks messed up, just enlarge the terminal window and/or relaunch the
script.
If you wish you can script model addition and deletion, as well as
listing installed models. Start the "developer's console" and give the
command `invokeai-model-install --help`. This will give you a series
of command-line parameters that will let you control model
installation. Examples:
```
# (list all controlnet models)
invokeai-model-install --list controlnet
# (install the model at the indicated URL)
invokeai-model-install --add http://civitai.com/2860
# (delete the named model)
invokeai-model-install --delete sd-1/main/analog-diffusion
```

```bash
invoke> !import_model C:/Users/fred/Downloads/martians.safetensors
```
### Installation via the Web GUI
!!! tip "Forward Slashes"
On Windows systems, use forward slashes rather than backslashes
in your file paths.
If you do use backslashes,
you must double them like this:
`C:\\Users\\fred\\Downloads\\martians.safetensors`
To install a new model using the Web GUI, do the following:
Alternatively you can directly import the file using its URL:
1. Open the InvokeAI Model Manager (cube at the bottom of the
left-hand panel) and navigate to *Import Models*
```bash
invoke> !import_model https://example.org/sd_models/martians.safetensors
```
2. In the field labeled *Location* type in the path to the model you
wish to install. You may use a URL, HuggingFace repo id, or a path on
your local disk.
For this to work, the URL must not be password-protected. Otherwise
you will receive a 404 error.
3. Alternatively, the *Scan for Models* button allows you to paste in
the path to a folder somewhere on your machine. It will be scanned for
importable models and prompt you to add the ones of your choice.
When you import a legacy model, the CLI will first ask you what type
of model this is. You can indicate whether it is a model based on
Stable Diffusion 1.x (1.4 or 1.5), one based on Stable Diffusion 2.x,
or a 1.x inpainting model. Be careful to indicate the correct model
type, or it will not load correctly. You can correct the model type
after the fact using the `!edit_model` command.
4. Press *Add Model* and wait for confirmation that the model
was added.
The system will then ask you a few other questions about the model,
including what size image it was trained on (usually 512x512), what
name and description you wish to use for it, and whether you would
like to install a custom VAE (variable autoencoder) file for the
model. For recent models, the answer to the VAE question is usually
"no," but it won't hurt to answer "yes".
To delete a model, Select *Model Manager* to list all the currently
installed models. Press the trash can icons to delete any models you
wish to get rid of. Models whose weights are located inside the
InvokeAI `models` directory will be purged from disk, while those
located outside will be unregistered from InvokeAI, but not deleted.
After importing, the model will load. If this is successful, you will
be asked if you want to keep the model loaded in memory to start
generating immediately. You'll also be asked if you wish to make this
the default model on startup. You can change this later using
`!edit_model`.
You can see where model weights are located by clicking on the model name.
This will bring up an editable info panel showing the model's characteristics,
including the `Model Location` of its files.
#### Importing a batch of `.ckpt` and `.safetensors` models from a directory
### Installation via the `autoimport` function
You may also point `!import_model` to a directory containing a set of
`.ckpt` or `.safetensors` files. They will be imported _en masse_.
In the InvokeAI root directory you will find a series of folders under
`autoimport`, one each for main models, controlnets, embeddings and
LoRAs. Any models that you add to these directories will be scanned
at startup time and registered automatically.
!!! example
You may create symbolic links from these folders to models located
elsewhere on disk and they will be autoimported. You can also create
subfolders and organize them as you wish.
```console
invoke> !import_model C:/Users/fred/Downloads/civitai_models/
```
The location of the autoimport directories are controlled by settings
in `invokeai.yaml`. See [Configuration](../features/CONFIGURATION.md).
You will be given the option to import all models found in the
directory, or select which ones to import. If there are subfolders
within the directory, they will be searched for models to import.
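A minimal sketch of the symbolic-link variant on Linux or macOS, assuming checkpoints stored on an external drive, an InvokeAI root at `~/invokeai`, and a main-models subfolder named `autoimport/main` (all placeholders):

```bash
# Link an externally stored checkpoint into the autoimport folder;
# it is registered automatically during the next startup scan.
ln -s /mnt/storage/sd-models/arabian-nights-1.0.ckpt ~/invokeai/autoimport/main/
```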
#### Installing `diffusers` models
You can install a `diffusers` model from the HuggingFace site using
`!import_model` and the HuggingFace repo_id for the model:
```bash
invoke> !import_model andite/anything-v4.0
```
Alternatively, you can download the model to disk and import it from
there. The model may be distributed as a ZIP file, or as a Git
repository:
```bash
invoke> !import_model C:/Users/fred/Downloads/andite--anything-v4.0
```
!!! tip "The CLI supports file path autocompletion"
Type a bit of the path name and hit ++tab++ in order to get a choice of
possible completions.
!!! tip "On Windows, you can drag model files onto the command-line"
Once you have typed in `!import_model `, you can drag the
model file or directory onto the command-line to insert the model path. This way, you don't need to
type it or copy/paste. However, you will need to reverse or
double backslashes as noted above.
Before installing, the CLI will ask you for a short name and
description for the model, whether to make this the default model that
is loaded at InvokeAI startup time, and whether to replace its
VAE. Generally the answer to the latter question is "no".
### Converting legacy models into `diffusers`
The CLI `!convert_model` will convert a `.safetensors` or `.ckpt`
model file into `diffusers` and install it. This will enable the model
to load and run faster without loss of image quality.
The usage is identical to `!import_model`. You may point the command
to either a downloaded model file on disk, or to a (non-password
protected) URL:
```bash
invoke> !convert_model C:/Users/fred/Downloads/martians.safetensors
```
After a successful conversion, the CLI will offer you the option of
deleting the original `.ckpt` or `.safetensors` file.
### Optimizing a previously-installed model
Lastly, if you have previously installed a `.ckpt` or `.safetensors`
file and wish to convert it into a `diffusers` model, you can do this
without re-downloading and converting the original file using the
`!optimize_model` command. Simply pass the short name of an existing
installed model:
```bash
invoke> !optimize_model martians-v1.0
```
The model will be converted into `diffusers` format and replace the
previously installed version. You will again be offered the
opportunity to delete the original `.ckpt` or `.safetensors` file.
### Related CLI Commands
There are a whole series of additional model management commands in
the CLI that you can read about in [Command-Line
Interface](../features/CLI.md). These include:
* `!models` - List all installed models
* `!switch <model name>` - Switch to the indicated model
* `!edit_model <model name>` - Edit the indicated model to change its name, description or other properties
* `!del_model <model name>` - Delete the indicated model
### Manually editing `configs/models.yaml`
If you are comfortable with a text editor then you may simply edit `models.yaml`
directly.
You will need to download the desired `.ckpt/.safetensors` file and
place it somewhere on your machine's filesystem. Alternatively, for a
`diffusers` model, record the repo_id or download the whole model
directory. Then, using a **text** editor (e.g. the Windows Notepad
application), open the file `configs/models.yaml` and add a new
stanza modeled on the examples below:
#### A legacy model
A legacy `.ckpt` or `.safetensors` entry will look like this:
```yaml
arabian-nights-1.0:
  description: A great fine-tune in Arabian Nights style
  weights: ./path/to/arabian-nights-1.0.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  format: ckpt
  width: 512
  height: 512
  default: false
```
Note that `format` is `ckpt` for both `.ckpt` and `.safetensors` files.
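For instance, the same model distributed as a `.safetensors` file would
use an identical stanza, with only the `weights` path changed:
```yaml
arabian-nights-1.0:
  description: A great fine-tune in Arabian Nights style
  weights: ./path/to/arabian-nights-1.0.safetensors
  config: ./configs/stable-diffusion/v1-inference.yaml
  format: ckpt
  width: 512
  height: 512
  default: false
```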
#### A diffusers model
A stanza for a `diffusers` model will look like this for a HuggingFace
model with a repository ID:
```yaml
arabian-nights-1.1:
  description: An even better fine-tune of the Arabian Nights
  repo_id: captahab/arabian-nights-1.1
  format: diffusers
  default: true
```
And for a downloaded directory:
```yaml
arabian-nights-1.1:
  description: An even better fine-tune of the Arabian Nights
  path: /path/to/captahab-arabian-nights-1.1
  format: diffusers
  default: true
```
There is additional syntax for indicating an external VAE to use with
this model. See `INITIAL_MODELS.yaml` and `models.yaml` for examples.
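As a rough sketch (consult the copies of `INITIAL_MODELS.yaml` and
`models.yaml` on your own system for the authoritative syntax), a
`diffusers` stanza with an external VAE might look like this:
```yaml
arabian-nights-1.1:
  description: An even better fine-tune of the Arabian Nights
  repo_id: captahab/arabian-nights-1.1
  format: diffusers
  vae:
    repo_id: stabilityai/sd-vae-ft-mse  # external VAE, fetched by repo_id
  default: true
```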
After you save the modified `models.yaml` file, relaunch
`invokeai`. The new model will now be available for use.
### Installation via the WebUI
To access the WebUI Model Manager, click on the button that looks like
a cube in the upper right side of the browser screen. This will bring
up a dialogue that lists the models you have already installed, and
allows you to load, delete or edit them:
<figure markdown>
![model-manager](../assets/installing-models/webui-models-1.png)
</figure>
To add a new model, click on **+ Add New** and select either a
checkpoint/safetensors model or a diffusers model:
<figure markdown>
![model-manager-add-new](../assets/installing-models/webui-models-2.png)
</figure>
In this example, we chose **Add Diffusers**. As shown in the figure
below, a new dialogue prompts you to enter the name to use for the
model, its description, and either the location of the `diffusers`
model on disk, or its Repo ID on the HuggingFace web site. If you
choose to enter a path to disk, the system will autocomplete for you
as you type:
<figure markdown>
![model-manager-add-diffusers](../assets/installing-models/webui-models-3.png)
</figure>
Press **Add Model** at the bottom of the dialogue (scrolled out of
sight in the figure), and the model will be downloaded, imported, and
registered in `models.yaml`.
The **Add Checkpoint/Safetensor Model** option is similar, except that
in this case you can choose to scan an entire folder for
checkpoint/safetensors files to import. Simply type in the path of the
directory and press the "Search" icon. This will display the
`.ckpt` and `.safetensors` files found inside the directory and its
subfolders, and allow you to choose which ones to import:
<figure markdown>
![model-manager-add-checkpoint](../assets/installing-models/webui-models-4.png)
</figure>
## Model Management Startup Options
The `invoke` launcher and the `invokeai` script accept a series of
command-line arguments that modify InvokeAI's behavior when loading
models. These can be provided on the command line, or added to the
InvokeAI root directory's `invokeai.init` initialization file.
The arguments are:
* `--model <model name>` -- Start up with the indicated model loaded
* `--ckpt_convert` -- When a checkpoint/safetensors model is loaded, convert it into a `diffusers` model in memory. This does not permanently save the converted model to disk.
* `--autoconvert <path/to/directory>` -- Scan the indicated directory path for new checkpoint/safetensors files, convert them into `diffusers` models, and import them into InvokeAI.
Here is an example of providing an argument on the command line using
the `invoke.sh` launch script:
```bash
invoke.sh --autoconvert /home/fred/stable-diffusion-checkpoints
```
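On Windows, the same argument would be passed to the `invoke.bat`
launcher instead (illustrative; adjust the path for your system):
```bash
invoke.bat --autoconvert C:/Users/fred/stable-diffusion-checkpoints
```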
And here is what the same argument looks like in `invokeai.init`:
```bash
--outdir="/home/fred/invokeai/outputs"
--no-nsfw_checker
--autoconvert /home/fred/stable-diffusion-checkpoints
```
View File
@@ -9,7 +9,6 @@ from fastapi_events.dispatcher import dispatch
from ..services.events import EventServiceBase
class FastAPIEventService(EventServiceBase):
event_handler_id: int
__queue: Queue
@@ -28,6 +27,9 @@ class FastAPIEventService(EventServiceBase):
self.__queue.put(None)
def dispatch(self, event_name: str, payload: Any) -> None:
# TODO: Remove next two debugging lines
from .dependencies import ApiDependencies
ApiDependencies.invoker.services.logger.debug(f'dispatch {event_name} / {payload}')
self.__queue.put(dict(event_name=event_name, payload=payload))
async def __dispatch_from_queue(self, stop_event: threading.Event):
View File
@@ -24,14 +24,11 @@ async def create_board_image(
):
"""Creates a board_image"""
try:
result = ApiDependencies.invoker.services.board_images.add_image_to_board(
board_id=board_id, image_name=image_name
)
result = ApiDependencies.invoker.services.board_images.add_image_to_board(board_id=board_id, image_name=image_name)
return result
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to add to board")
@board_images_router.delete(
"/",
operation_id="remove_board_image",
@@ -46,10 +43,27 @@ async def remove_board_image(
):
"""Deletes a board_image"""
try:
result = ApiDependencies.invoker.services.board_images.remove_image_from_board(
board_id=board_id, image_name=image_name
)
result = ApiDependencies.invoker.services.board_images.remove_image_from_board(board_id=board_id, image_name=image_name)
return result
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to update board")
@board_images_router.get(
"/{board_id}",
operation_id="list_board_images",
response_model=OffsetPaginatedResults[ImageDTO],
)
async def list_board_images(
board_id: str = Path(description="The id of the board"),
offset: int = Query(default=0, description="The page offset"),
limit: int = Query(default=10, description="The number of boards per page"),
) -> OffsetPaginatedResults[ImageDTO]:
"""Gets a list of images for a board"""
results = ApiDependencies.invoker.services.board_images.get_images_for_board(
board_id,
)
return results
View File
@@ -1,28 +1,16 @@
from typing import Optional, Union
from fastapi import Body, HTTPException, Path, Query
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
from invokeai.app.services.board_record_storage import BoardChanges
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.models.board_record import BoardDTO
from ..dependencies import ApiDependencies
boards_router = APIRouter(prefix="/v1/boards", tags=["boards"])
class DeleteBoardResult(BaseModel):
board_id: str = Field(description="The id of the board that was deleted.")
deleted_board_images: list[str] = Field(
description="The image names of the board-images relationships that were deleted."
)
deleted_images: list[str] = Field(
description="The names of the images that were deleted."
)
@boards_router.post(
"/",
operation_id="create_board",
@@ -81,42 +69,25 @@ async def update_board(
raise HTTPException(status_code=500, detail="Failed to update board")
@boards_router.delete(
"/{board_id}", operation_id="delete_board", response_model=DeleteBoardResult
)
@boards_router.delete("/{board_id}", operation_id="delete_board")
async def delete_board(
board_id: str = Path(description="The id of board to delete"),
include_images: Optional[bool] = Query(
description="Permanently delete all images on the board", default=False
),
) -> DeleteBoardResult:
) -> None:
"""Deletes a board"""
try:
if include_images is True:
deleted_images = ApiDependencies.invoker.services.board_images.get_all_board_image_names_for_board(
board_id=board_id
)
ApiDependencies.invoker.services.images.delete_images_on_board(
board_id=board_id
)
ApiDependencies.invoker.services.boards.delete(board_id=board_id)
return DeleteBoardResult(
board_id=board_id,
deleted_board_images=[],
deleted_images=deleted_images,
)
else:
deleted_board_images = ApiDependencies.invoker.services.board_images.get_all_board_image_names_for_board(
board_id=board_id
)
ApiDependencies.invoker.services.boards.delete(board_id=board_id)
return DeleteBoardResult(
board_id=board_id,
deleted_board_images=deleted_board_images,
deleted_images=[],
)
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to delete board")
# TODO: Does this need any exception handling at all?
pass
@boards_router.get(
@@ -144,19 +115,3 @@ async def list_boards(
status_code=400,
detail="Invalid request: Must provide either 'all' or both 'offset' and 'limit'",
)
@boards_router.get(
"/{board_id}/image_names",
operation_id="list_all_board_image_names",
response_model=list[str],
)
async def list_all_board_image_names(
board_id: str = Path(description="The id of the board"),
) -> list[str]:
"""Gets a list of images for a board"""
image_names = ApiDependencies.invoker.services.board_images.get_all_board_image_names_for_board(
board_id,
)
return image_names
View File
@@ -84,17 +84,6 @@ async def delete_image(
# TODO: Does this need any exception handling at all?
pass
@images_router.post("/clear-intermediates", operation_id="clear_intermediates")
async def clear_intermediates() -> int:
"""Clears first 100 intermediates"""
try:
count_deleted = ApiDependencies.invoker.services.images.delete_many(is_intermediate=True)
return count_deleted
except Exception as e:
# TODO: Does this need any exception handling at all?
pass
@images_router.patch(
"/{image_name}",
@@ -245,16 +234,16 @@ async def get_image_urls(
)
async def list_image_dtos(
image_origin: Optional[ResourceOrigin] = Query(
default=None, description="The origin of images to list."
default=None, description="The origin of images to list"
),
categories: Optional[list[ImageCategory]] = Query(
default=None, description="The categories of image to include."
default=None, description="The categories of image to include"
),
is_intermediate: Optional[bool] = Query(
default=None, description="Whether to list intermediate images."
default=None, description="Whether to list intermediate images"
),
board_id: Optional[str] = Query(
default=None, description="The board id to filter by. Use 'none' to find images without a board."
default=None, description="The board id to filter by"
),
offset: int = Query(default=0, description="The page offset"),
limit: int = Query(default=10, description="The number of images per page"),
View File
@@ -2,6 +2,7 @@
import pathlib
import threading
from typing import Literal, List, Optional, Union
from fastapi import Body, Path, Query, Response
@@ -127,54 +128,43 @@ async def update_model(
"/import",
operation_id="import_model",
responses= {
201: {"description" : "The model imported successfully"},
404: {"description" : "The model could not be found"},
415: {"description" : "Unrecognized file/folder format"},
424: {"description" : "The model appeared to import successfully, but could not be found in the model manager"},
409: {"description" : "There is already a model corresponding to this path or repo_id"},
200: {"description" : "The path was queued for import"},
},
status_code=201,
response_model=ImportModelResponse
status_code=200
)
async def import_model(
location: str = Body(description="A model path, repo_id or URL to import"),
prediction_type: Optional[Literal['v_prediction','epsilon','sample']] = \
Body(description='Prediction type for SDv2 checkpoint files', default="v_prediction"),
) -> ImportModelResponse:
""" Add a model using its local path, repo_id, or remote URL. Model characteristics will be probed and configured automatically """
) -> str:
"""
Add a model using its local path, repo_id, or remote URL. Model characteristics will be probed and configured automatically.
This call launches a background thread to process the imported model and always succeeds. Results are reported in the background
as the following events:
- model_import_started(import_path:str)
- model_import_completed(import_path:str, import_info:AddModelResults, success:bool, error:str)
- download_started(url:str)
- download_progress(url:str, downloaded_bytes:int, total_bytes:int)
- download_completed(url:str, status_code:int, download_path:str)
"""
items_to_import = {location}
prediction_types = { x.value: x for x in SchedulerPredictionType }
logger = ApiDependencies.invoker.services.logger
events = ApiDependencies.invoker.services.events
try:
installed_models = ApiDependencies.invoker.services.model_manager.heuristic_import(
items_to_import = items_to_import,
prediction_type_helper = lambda x: prediction_types.get(prediction_type)
)
info = installed_models.get(location)
if not info:
logger.error("Import failed")
raise HTTPException(status_code=415)
logger.info(f'Successfully imported {location}, got {info}')
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name=info.name,
base_model=info.base_model,
model_type=info.model_type
)
return parse_obj_as(ImportModelResponse, model_raw)
except ModelNotFoundException as e:
import_thread = threading.Thread(target = ApiDependencies.invoker.services.model_manager.heuristic_import,
kwargs = dict(items_to_import = items_to_import,
prediction_type_helper = lambda x: prediction_types.get(prediction_type),
event_bus = events,
)
)
import_thread.start()
return 'request queued'
except Exception as e:
logger.error(str(e))
raise HTTPException(status_code=404, detail=str(e))
except InvalidModelException as e:
logger.error(str(e))
raise HTTPException(status_code=415)
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
raise HTTPException(status_code=500, detail=str(e))
@models_router.post(
"/add",
View File
@@ -331,8 +331,8 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
crop_left: int = Field(0, description="")
target_width: int = Field(1024, description="")
target_height: int = Field(1024, description="")
clip: ClipField = Field(None, description="Clip to use")
clip2: ClipField = Field(None, description="Clip2 to use")
clip1: ClipField = Field(None, description="Clip to use")
clip2: ClipField = Field(None, description="Clip to use")
# Schema customisation
class Config(InvocationConfig):
@@ -348,7 +348,7 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
@torch.no_grad()
def invoke(self, context: InvocationContext) -> CompelOutput:
c1, c1_pooled, ec1 = self.run_clip_compel(context, self.clip, self.prompt, False)
c1, c1_pooled, ec1 = self.run_clip_compel(context, self.clip1, self.prompt, False)
if self.style.strip() == "":
c2, c2_pooled, ec2 = self.run_clip_compel(context, self.clip2, self.prompt, True)
else:
@@ -451,8 +451,8 @@ class SDXLRawPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
crop_left: int = Field(0, description="")
target_width: int = Field(1024, description="")
target_height: int = Field(1024, description="")
clip: ClipField = Field(None, description="Clip to use")
clip2: ClipField = Field(None, description="Clip2 to use")
clip1: ClipField = Field(None, description="Clip to use")
clip2: ClipField = Field(None, description="Clip to use")
# Schema customisation
class Config(InvocationConfig):
@@ -468,7 +468,7 @@ class SDXLRawPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
@torch.no_grad()
def invoke(self, context: InvocationContext) -> CompelOutput:
c1, c1_pooled, ec1 = self.run_clip_raw(context, self.clip, self.prompt, False)
c1, c1_pooled, ec1 = self.run_clip_raw(context, self.clip1, self.prompt, False)
if self.style.strip() == "":
c2, c2_pooled, ec2 = self.run_clip_raw(context, self.clip2, self.prompt, True)
else:
View File
@@ -518,8 +518,8 @@ class ImageScaleInvocation(BaseInvocation, PILInvocationConfig):
type: Literal["img_scale"] = "img_scale"
# Inputs
image: Optional[ImageField] = Field(default=None, description="The image to scale")
scale_factor: Optional[float] = Field(default=2.0, gt=0, description="The factor by which to scale the image")
image: Optional[ImageField] = Field(default=None, description="The image to scale")
scale_factor: float = Field(gt=0, description="The factor by which to scale the image")
resample_mode: PIL_RESAMPLING_MODES = Field(default="bicubic", description="The resampling mode")
# fmt: on
View File
@@ -22,7 +22,7 @@ from ...backend.stable_diffusion.diffusers_pipeline import (
from ...backend.stable_diffusion.diffusion.shared_invokeai_diffusion import \
PostprocessingSettings
from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
from ...backend.util.devices import choose_torch_device, torch_dtype, choose_precision
from ...backend.util.devices import torch_dtype
from ..models.image import ImageCategory, ImageField, ResourceOrigin
from .baseinvocation import (BaseInvocation, BaseInvocationOutput,
InvocationConfig, InvocationContext)
@@ -39,9 +39,6 @@ from diffusers.models.attention_processor import (
)
DEFAULT_PRECISION = choose_precision(choose_torch_device())
class LatentsField(BaseModel):
"""A latents field used for passing latents between invocations"""
@@ -496,7 +493,7 @@ class LatentsToImageInvocation(BaseInvocation):
tiled: bool = Field(
default=False,
description="Decode latents by overlaping tiles(less memory consumption)")
fp32: bool = Field(DEFAULT_PRECISION=='float32', description="Decode in full precision")
fp32: bool = Field(False, description="Decode in full precision")
metadata: Optional[CoreMetadata] = Field(default=None, description="Optional core metadata to be written to the image")
# Schema customisation
@@ -690,7 +687,7 @@ class ImageToLatentsInvocation(BaseInvocation):
tiled: bool = Field(
default=False,
description="Encode latents by overlaping tiles(less memory consumption)")
fp32: bool = Field(DEFAULT_PRECISION=='float32', description="Decode in full precision")
fp32: bool = Field(False, description="Decode in full precision")
# Schema customisation
View File
@@ -1,6 +1,6 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) & the InvokeAI Team
from pathlib import Path
from typing import Literal, Union
from pathlib import Path, PosixPath
from typing import Literal, Union, cast
import cv2 as cv
import numpy as np
@@ -16,20 +16,19 @@ from .image import ImageOutput
# TODO: Populate this from disk?
# TODO: Use model manager to load?
ESRGAN_MODELS = Literal[
REALESRGAN_MODELS = Literal[
"RealESRGAN_x4plus.pth",
"RealESRGAN_x4plus_anime_6B.pth",
"ESRGAN_SRx4_DF2KOST_official-ff704c30.pth",
"RealESRGAN_x2plus.pth",
]
class ESRGANInvocation(BaseInvocation):
class RealESRGANInvocation(BaseInvocation):
"""Upscales an image using RealESRGAN."""
type: Literal["esrgan"] = "esrgan"
type: Literal["realesrgan"] = "realesrgan"
image: Union[ImageField, None] = Field(default=None, description="The input image")
model_name: ESRGAN_MODELS = Field(
model_name: REALESRGAN_MODELS = Field(
default="RealESRGAN_x4plus.pth", description="The Real-ESRGAN model to use"
)
@@ -74,17 +73,19 @@ class ESRGANInvocation(BaseInvocation):
scale=4,
)
netscale = 4
elif self.model_name in ["RealESRGAN_x2plus.pth"]:
# x2 RRDBNet model
rrdbnet_model = RRDBNet(
num_in_ch=3,
num_out_ch=3,
num_feat=64,
num_block=23,
num_grow_ch=32,
scale=2,
)
netscale = 2
# TODO: add x2 models handling?
# elif self.model_name in ["RealESRGAN_x2plus"]:
# # x2 RRDBNet model
# model = RRDBNet(
# num_in_ch=3,
# num_out_ch=3,
# num_feat=64,
# num_block=23,
# num_grow_ch=32,
# scale=2,
# )
# model_path = Path()
# netscale = 2
else:
msg = f"Invalid RealESRGAN model: {self.model_name}"
context.services.logger.error(msg)
View File
@@ -32,11 +32,11 @@ class BoardImageRecordStorageBase(ABC):
pass
@abstractmethod
def get_all_board_image_names_for_board(
def get_images_for_board(
self,
board_id: str,
) -> list[str]:
"""Gets all board images for a board, as a list of the image names."""
) -> OffsetPaginatedResults[ImageRecord]:
"""Gets images for a board."""
pass
@abstractmethod
@@ -211,26 +211,6 @@ class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
items=images, offset=offset, limit=limit, total=count
)
def get_all_board_image_names_for_board(self, board_id: str) -> list[str]:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
SELECT image_name
FROM board_images
WHERE board_id = ?;
""",
(board_id,),
)
result = cast(list[sqlite3.Row], self._cursor.fetchall())
image_names = list(map(lambda r: r[0], result))
return image_names
except sqlite3.Error as e:
self._conn.rollback()
raise e
finally:
self._lock.release()
def get_board_for_image(
self,
image_name: str,
View File
@@ -38,11 +38,11 @@ class BoardImagesServiceABC(ABC):
pass
@abstractmethod
def get_all_board_image_names_for_board(
def get_images_for_board(
self,
board_id: str,
) -> list[str]:
"""Gets all board images for a board, as a list of the image names."""
) -> OffsetPaginatedResults[ImageDTO]:
"""Gets images for a board."""
pass
@abstractmethod
@@ -98,13 +98,30 @@ class BoardImagesService(BoardImagesServiceABC):
) -> None:
self._services.board_image_records.remove_image_from_board(board_id, image_name)
def get_all_board_image_names_for_board(
def get_images_for_board(
self,
board_id: str,
) -> list[str]:
return self._services.board_image_records.get_all_board_image_names_for_board(
) -> OffsetPaginatedResults[ImageDTO]:
image_records = self._services.board_image_records.get_images_for_board(
board_id
)
image_dtos = list(
map(
lambda r: image_record_to_dto(
r,
self._services.urls.get_image_url(r.image_name),
self._services.urls.get_image_url(r.image_name, True),
board_id,
),
image_records.items,
)
)
return OffsetPaginatedResults[ImageDTO](
items=image_dtos,
offset=image_records.offset,
limit=image_records.limit,
total=image_records.total,
)
def get_board_for_image(
self,
@@ -119,7 +136,7 @@ def board_record_to_dto(
) -> BoardDTO:
"""Converts a board record to a board DTO."""
return BoardDTO(
**board_record.dict(exclude={"cover_image_name"}),
**board_record.dict(exclude={'cover_image_name'}),
cover_image_name=cover_image_name,
image_count=image_count,
)
View File
@@ -1,9 +1,11 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from typing import Any, Optional
from pathlib import Path
from invokeai.app.models.image import ProgressImage
from invokeai.app.util.misc import get_timestamp
from invokeai.app.services.model_manager_service import BaseModelType, ModelType, SubModelType, ModelInfo
from invokeai.app.services.model_manager_service import BaseModelType, ModelType, SubModelType, ModelInfo, AddModelResult
class EventServiceBase:
session_event: str = "session_event"
@@ -111,7 +113,7 @@ class EventServiceBase:
submodel: SubModelType,
) -> None:
"""Emitted when a model is requested"""
self.__emit_session_event(
self.dispatch(
event_name="model_load_started",
payload=dict(
graph_execution_state_id=graph_execution_state_id,
@@ -132,7 +134,7 @@ class EventServiceBase:
model_info: ModelInfo,
) -> None:
"""Emitted when a model is correctly loaded (returns model info)"""
self.__emit_session_event(
self.dispatch(
event_name="model_load_completed",
payload=dict(
graph_execution_state_id=graph_execution_state_id,
@@ -141,7 +143,96 @@ class EventServiceBase:
model_type=model_type,
submodel=submodel,
hash=model_info.hash,
location=str(model_info.location),
location=model_info.location,
precision=str(model_info.precision),
),
)
def emit_model_import_started (
self,
import_path: str, # can be a local path, URL or repo_id
)->None:
"""Emitted when a model import commences"""
self.dispatch(
event_name="model_import_started",
payload=dict(
import_path = import_path,
)
)
def emit_model_import_completed (
self,
import_path: str, # can be a local path, URL or repo_id
import_info: AddModelResult,
success: bool= True,
error: str = None,
)->None:
"""Emitted when a model import completes"""
self.dispatch(
event_name="model_import_completed",
payload=dict(
import_path = import_path,
import_info = import_info,
success = success,
error = error,
)
)
def emit_download_started (
self,
url: str,
)->None:
"""Emitted when a download thread starts"""
self.dispatch(
event_name="download_started",
payload=dict(
url = url,
)
)
def emit_download_progress (
self,
url: str,
downloaded_size: int,
total_size: int,
)->None:
"""
Emitted at intervals during a download process
:param url: Requested URL
:param downloaded_size: Bytes downloaded so far
:param total_size: Total bytes to download
"""
self.dispatch(
event_name="download_progress",
payload=dict(
url = url,
downloaded_size = downloaded_size,
total_size = total_size,
)
)
def emit_download_completed (
self,
url: str,
status_code: int,
download_path: Path,
)->None:
"""
Emitted when a download thread completes.
:param url: Requested URL
:param status_code: HTTP status code from request
:param download_path: Path to downloaded file
"""
self.dispatch(
event_name="download_completed",
payload=dict(
url = url,
status_code = status_code,
download_path = download_path,
)
)
View File
@@ -10,10 +10,7 @@ from pydantic.generics import GenericModel
from invokeai.app.models.image import ImageCategory, ResourceOrigin
from invokeai.app.services.models.image_record import (
ImageRecord,
ImageRecordChanges,
deserialize_image_record,
)
ImageRecord, ImageRecordChanges, deserialize_image_record)
T = TypeVar("T", bound=BaseModel)
@@ -100,8 +97,8 @@ class ImageRecordStorageBase(ABC):
@abstractmethod
def get_many(
self,
offset: Optional[int] = None,
limit: Optional[int] = None,
offset: int = 0,
limit: int = 10,
image_origin: Optional[ResourceOrigin] = None,
categories: Optional[list[ImageCategory]] = None,
is_intermediate: Optional[bool] = None,
@@ -325,8 +322,8 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
def get_many(
self,
offset: Optional[int] = None,
limit: Optional[int] = None,
offset: int = 0,
limit: int = 10,
image_origin: Optional[ResourceOrigin] = None,
categories: Optional[list[ImageCategory]] = None,
is_intermediate: Optional[bool] = None,
@@ -380,15 +377,11 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
query_params.append(is_intermediate)
# board_id of "none" is reserved for images without a board
if board_id == "none":
query_conditions += """--sql
AND board_images.board_id IS NULL
"""
elif board_id is not None:
if board_id is not None:
query_conditions += """--sql
AND board_images.board_id = ?
"""
query_params.append(board_id)
query_pagination = """--sql
@@ -399,12 +392,8 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
images_query += query_conditions + query_pagination + ";"
# Add all the parameters
images_params = query_params.copy()
if limit is not None:
images_params.append(limit)
if offset is not None:
images_params.append(offset)
images_params.append(limit)
images_params.append(offset)
# Build the list of images, deserializing each row
self._cursor.execute(images_query, images_params)
result = cast(list[sqlite3.Row], self._cursor.fetchall())
View File
@@ -11,6 +11,7 @@ from invokeai.app.models.image import (ImageCategory,
InvalidOriginException, ResourceOrigin)
from invokeai.app.services.board_image_record_storage import \
BoardImageRecordStorageBase
from invokeai.app.services.graph import Graph
from invokeai.app.services.image_file_storage import (
ImageFileDeleteException, ImageFileNotFoundException,
ImageFileSaveException, ImageFileStorageBase)
@@ -108,13 +109,6 @@ class ImageServiceABC(ABC):
"""Deletes an image."""
pass
@abstractmethod
def delete_many(self, is_intermediate: bool) -> int:
"""Deletes many images."""
pass
@abstractmethod
def delete_images_on_board(self, board_id: str):
"""Deletes all images on a board."""
@@ -384,29 +378,7 @@ class ImageService(ImageServiceABC):
def delete_images_on_board(self, board_id: str):
try:
image_names = (
self._services.board_image_records.get_all_board_image_names_for_board(
board_id
)
)
for image_name in image_names:
self._services.image_files.delete(image_name)
self._services.image_records.delete_many(image_names)
except ImageRecordDeleteException:
self._services.logger.error(f"Failed to delete image records")
raise
except ImageFileDeleteException:
self._services.logger.error(f"Failed to delete image files")
raise
except Exception as e:
self._services.logger.error("Problem deleting image records and files")
raise e
def delete_many(self, is_intermediate: bool):
try:
# only clears 100 at a time
images = self._services.image_records.get_many(offset=0, limit=100, is_intermediate=is_intermediate,)
count = len(images.items)
images = self._services.board_image_records.get_images_for_board(board_id)
image_name_list = list(
map(
lambda r: r.image_name,
@@ -416,7 +388,6 @@ class ImageService(ImageServiceABC):
for image_name in image_name_list:
self._services.image_files.delete(image_name)
self._services.image_records.delete_many(image_name_list)
return count
except ImageRecordDeleteException:
self._services.logger.error(f"Failed to delete image records")
raise
View File
@@ -26,6 +26,7 @@ import torch
from invokeai.app.models.exceptions import CanceledException
from ...backend.util import choose_precision, choose_torch_device
from .config import InvokeAIAppConfig
from .events import EventServiceBase
if TYPE_CHECKING:
from ..invocations.baseinvocation import BaseInvocation, InvocationContext
@@ -542,6 +543,7 @@ class ModelManagerService(ModelManagerServiceBase):
def heuristic_import(self,
items_to_import: set[str],
prediction_type_helper: Optional[Callable[[Path],SchedulerPredictionType]]=None,
event_bus: Optional[EventServiceBase]=None,
)->dict[str, AddModelResult]:
'''Import a list of paths, repo_ids or URLs. Returns the set of
successfully imported items.
@@ -559,7 +561,7 @@ class ModelManagerService(ModelManagerServiceBase):
of the set is a dict corresponding to the newly-created OmegaConf stanza for
that model.
'''
return self.mgr.heuristic_import(items_to_import, prediction_type_helper)
return self.mgr.heuristic_import(items_to_import, prediction_type_helper, event_bus=event_bus)
def merge_models(
self,
View File
@@ -222,7 +222,7 @@ def download_conversion_models():
# ---------------------------------------------
def download_realesrgan():
logger.info("Installing ESRGAN Upscaling models...")
logger.info("Installing RealESRGAN models...")
URLs = [
dict(
url = "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth",
@@ -239,11 +239,6 @@ def download_realesrgan():
dest= "core/upscaling/realesrgan/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth",
description = "ESRGAN_SRx4_DF2KOST_official.pth",
),
dict(
url= "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth",
dest= "core/upscaling/realesrgan/RealESRGAN_x2plus.pth",
description = "RealESRGAN_x2plus.pth",
),
]
for model in URLs:
download_with_progress_bar(model['url'], config.models_path / model['dest'], model['description'])
@@ -560,6 +555,7 @@ def edit_opts(program_opts: Namespace, invokeai_opts: Namespace) -> argparse.Nam
editApp.run()
return editApp.new_opts()
def default_startup_options(init_file: Path) -> Namespace:
opts = InvokeAIAppConfig.get_config()
if not init_file.exists():
@@ -663,9 +659,6 @@ def write_opts(opts: Namespace, init_file: Path):
with open(init_file,'w', encoding='utf-8') as file:
file.write(new_config.to_yaml())
if opts.hf_token:
HfLogin(opts.hf_token)
# -------------------------------------
def default_output_dir() -> Path:
return config.root_path / "outputs"
View File
@@ -10,7 +10,7 @@ from tempfile import TemporaryDirectory
from typing import List, Dict, Callable, Union, Set
import requests
from diffusers import DiffusionPipeline
from diffusers import StableDiffusionPipeline
from diffusers import logging as dlogging
from huggingface_hub import hf_hub_url, HfFolder, HfApi
from omegaconf import OmegaConf
@@ -89,13 +89,16 @@ class ModelInstall(object):
config:InvokeAIAppConfig,
prediction_type_helper: Callable[[Path],SchedulerPredictionType]=None,
model_manager: ModelManager = None,
access_token:str = None):
access_token:str = None,
event_bus = None, # EventServicesBase - getting circular import errors
):
self.config = config
self.mgr = model_manager or ModelManager(config.model_conf_path)
self.datasets = OmegaConf.load(Dataset_path)
self.prediction_helper = prediction_type_helper
self.access_token = access_token or HfFolder.get_token()
self.reverse_paths = self._reverse_paths(self.datasets)
self.event_bus = event_bus
def all_models(self)->Dict[str,ModelLoadInfo]:
'''
@@ -197,39 +200,63 @@ class ModelInstall(object):
Returns a set of dict objects corresponding to newly-created stanzas in models.yaml.
'''
if self.event_bus:
self.event_bus.emit_model_import_started(str(model_path_id_or_url))
if not models_installed:
models_installed = dict()
# A little hack to allow nested routines to retrieve info on the requested ID
self.current_id = model_path_id_or_url
path = Path(model_path_id_or_url)
# checkpoint file, or similar
if path.is_file():
models_installed.update({str(path):self._install_path(path)})
# folders style or similar
elif path.is_dir() and any([(path/x).exists() for x in \
{'config.json','model_index.json','learned_embeds.bin','pytorch_lora_weights.bin'}
]
):
models_installed.update({str(model_path_id_or_url): self._install_path(path)})
try:
# checkpoint file, or similar
if path.is_file():
models_installed.update({str(path):self._install_path(path)})
# recursive scan
elif path.is_dir():
for child in path.iterdir():
self.heuristic_import(child, models_installed=models_installed)
# folders style or similar
elif path.is_dir() and any([(path/x).exists() for x in \
{'config.json','model_index.json','learned_embeds.bin','pytorch_lora_weights.bin'}
]
):
models_installed.update({str(model_path_id_or_url): self._install_path(path)})
# huggingface repo
elif len(str(model_path_id_or_url).split('/')) == 2:
models_installed.update({str(model_path_id_or_url): self._install_repo(str(model_path_id_or_url))})
# recursive scan
elif path.is_dir():
for child in path.iterdir():
self.heuristic_import(child, models_installed=models_installed)
# a URL
elif str(model_path_id_or_url).startswith(("http:", "https:", "ftp:")):
models_installed.update({str(model_path_id_or_url): self._install_url(model_path_id_or_url)})
# huggingface repo
elif len(str(model_path_id_or_url).split('/')) == 2:
models_installed.update({str(model_path_id_or_url): self._install_repo(str(model_path_id_or_url))})
else:
raise KeyError(f'{str(model_path_id_or_url)} is not recognized as a local path, repo ID or URL. Skipping')
# a URL
elif str(model_path_id_or_url).startswith(("http:", "https:", "ftp:")):
models_installed.update({str(model_path_id_or_url): self._install_url(model_path_id_or_url)})
else:
errmsg = f'{str(model_path_id_or_url)} is not recognized as a local path, repo ID or URL. Skipping'
raise KeyError(errmsg)
if self.event_bus:
for path, add_model_result in models_installed.items():
self.event_bus.emit_model_import_completed(
str(path),
import_info = add_model_result,
)
except Exception as e:
if self.event_bus:
self.event_bus.emit_model_import_completed(
str(path),
import_info = None,
success = False,
error = str(e),
)
return models_installed
else:
raise
return models_installed
# install a model from a local path. The optional info parameter is there to prevent
@@ -238,10 +265,14 @@ class ModelInstall(object):
info = info or ModelProbe().heuristic_probe(path,self.prediction_helper)
if not info:
logger.warning(f'Unable to parse format of {path}')
return None
raise ValueError(f'Unable to parse format of {path}')
model_name = path.stem if path.is_file() else path.name
if self.mgr.model_exists(model_name, info.base_type, info.model_type):
raise ValueError(f'A model named "{model_name}" is already installed.')
errmsg = f'A model named "{model_name}" is already installed.'
raise ValueError(errmsg)
attributes = self._make_attributes(path,info)
return self.mgr.add_model(model_name = model_name,
base_model = info.base_type,
@@ -251,7 +282,7 @@ class ModelInstall(object):
def _install_url(self, url: str)->AddModelResult:
with TemporaryDirectory(dir=self.config.models_path) as staging:
location = download_with_resume(url,Path(staging))
location = download_with_resume(url,Path(staging),event_bus=self.event_bus)
if not location:
logger.error(f'Unable to download {url}. Skipping.')
info = ModelProbe().heuristic_probe(location)
@@ -310,8 +341,6 @@ class ModelInstall(object):
if key := self.reverse_paths.get(path_name):
(name, base, mtype) = ModelManager.parse_key(key)
return name
elif location.is_dir():
return location.name
else:
return location.stem
@@ -367,7 +396,7 @@ class ModelInstall(object):
model = None
for revision in revisions:
try:
model = DiffusionPipeline.from_pretrained(repo_id,revision=revision,safety_checker=None)
model = StableDiffusionPipeline.from_pretrained(repo_id,revision=revision,safety_checker=None)
except: # most errors are due to fp16 not being present. Fix this to catch other errors
pass
if model:
@@ -386,7 +415,8 @@ class ModelInstall(object):
p = hf_download_with_resume(repo_id,
model_dir=location,
model_name=filename,
access_token = self.access_token
access_token = self.access_token,
event_bus = self.event_bus,
)
if p:
paths.append(p)
@@ -427,12 +457,15 @@ def hf_download_from_pretrained(
return destination
# ---------------------------------------------
# TODO: This function is almost identical to invokeai.backend.util.download_with_resume
# and should be merged
def hf_download_with_resume(
repo_id: str,
model_dir: str,
model_name: str,
model_dest: Path = None,
access_token: str = None,
event_bus = None,
) -> Path:
model_dest = model_dest or Path(os.path.join(model_dir, model_name))
os.makedirs(model_dir, exist_ok=True)
@@ -449,15 +482,22 @@ def hf_download_with_resume(
open_mode = "ab"
resp = requests.get(url, headers=header, stream=True)
total = int(resp.headers.get("content-length", 0))
content_length = int(resp.headers.get("content-length", 0))
if event_bus:
event_bus.emit_download_started(url)
if (
resp.status_code == 416
): # "range not satisfiable", which means nothing to return
logger.info(f"{model_name}: complete file found. Skipping.")
if event_bus:
event_bus.emit_download_completed(url,resp.status_code,model_dest)
return model_dest
elif resp.status_code == 404:
logger.warning("File not found")
if event_bus:
event_bus.emit_download_completed(url,resp.status_code,None)
return None
elif resp.status_code != 200:
logger.warning(f"{model_name}: {resp.reason}")
@@ -466,11 +506,15 @@ def hf_download_with_resume(
else:
logger.info(f"{model_name}: Downloading...")
MB10 = 10 * 1048576
downloaded = exist_size
previous_interval = 0
try:
with open(model_dest, open_mode) as file, tqdm(
desc=model_name,
initial=exist_size,
total=total + exist_size,
total=content_length + exist_size,
unit="iB",
unit_scale=True,
unit_divisor=1000,
@@ -478,9 +522,20 @@ def hf_download_with_resume(
for data in resp.iter_content(chunk_size=1024):
size = file.write(data)
bar.update(size)
downloaded += size
if event_bus and downloaded // MB10 > previous_interval:
previous_interval = downloaded // MB10
event_bus.emit_download_progress(url, downloaded, content_length)
except Exception as e:
logger.error(f"An error occurred while downloading {model_name}: {str(e)}")
if event_bus:
event_bus.emit_download_completed(url,500,None)
return None
if event_bus:
event_bus.emit_download_completed(url,resp.status_code,model_dest)
return model_dest
View File
@@ -21,7 +21,6 @@ import re
import warnings
from pathlib import Path
from typing import Union
from packaging import version
import torch
from safetensors.torch import load_file
@@ -64,7 +63,6 @@ from diffusers.pipelines.stable_diffusion.safety_checker import (
StableDiffusionSafetyChecker,
)
from diffusers.utils import is_safetensors_available
import transformers
from transformers import (
AutoFeatureExtractor,
BertTokenizerFast,
@@ -843,16 +841,7 @@ def convert_ldm_clip_checkpoint(checkpoint):
key
]
# transformers 4.31.0 and higher - this key no longer in state dict
if version.parse(transformers.__version__) >= version.parse("4.31.0"):
position_ids = text_model_dict.pop("text_model.embeddings.position_ids", None)
text_model.load_state_dict(text_model_dict)
if position_ids is not None:
text_model.text_model.embeddings.position_ids.copy_(position_ids)
# transformers 4.30.2 and lower - position_ids is part of state_dict
else:
text_model.load_state_dict(text_model_dict)
text_model.load_state_dict(text_model_dict)
return text_model
@@ -958,16 +947,7 @@ def convert_open_clip_checkpoint(checkpoint):
text_model_dict[new_key] = checkpoint[key]
# transformers 4.31.0 and higher - this key no longer in state dict
if version.parse(transformers.__version__) >= version.parse("4.31.0"):
position_ids = text_model_dict.pop("text_model.embeddings.position_ids", None)
text_model.load_state_dict(text_model_dict)
if position_ids is not None:
text_model.text_model.embeddings.position_ids.copy_(position_ids)
# transformers 4.30.2 and lower - position_ids is part of state_dict
else:
text_model.load_state_dict(text_model_dict)
text_model.load_state_dict(text_model_dict)
return text_model
View File
@@ -328,25 +328,6 @@ class ModelCache(object):
refs = sys.getrefcount(cache_entry.model)
# manualy clear local variable references of just finished function calls
# for some reason python don't want to collect it even by gc.collect() immidiately
if refs > 2:
while True:
cleared = False
for referrer in gc.get_referrers(cache_entry.model):
if type(referrer).__name__ == "frame":
# RuntimeError: cannot clear an executing frame
with suppress(RuntimeError):
referrer.clear()
cleared = True
#break
# repeat if referrers changes(due to frame clear), else exit loop
if cleared:
gc.collect()
else:
break
device = cache_entry.model.device if hasattr(cache_entry.model, "device") else None
self.logger.debug(f"Model: {model_key}, locks: {cache_entry._locks}, device: {device}, loaded: {cache_entry.loaded}, refs: {refs}")
@@ -382,9 +363,6 @@ class ModelCache(object):
self.logger.debug(f'GPU VRAM freed: {(mem.vram_used/GIG):.2f} GB')
vram_in_use += mem.vram_used # note vram_used is negative
self.logger.debug(f'{(vram_in_use/GIG):.2f}GB VRAM used for models; max allowed={(reserved/GIG):.2f}GB')
gc.collect()
torch.cuda.empty_cache()
def _local_model_hash(self, model_path: Union[str, Path]) -> str:
sha = hashlib.sha256()
View File
@@ -106,16 +106,16 @@ providing information about a model defined in models.yaml. For example:
>>> models = mgr.list_models()
>>> json.dumps(models[0])
{"path": "/home/lstein/invokeai-main/models/sd-1/controlnet/canny",
"model_format": "diffusers",
"name": "canny",
"base_model": "sd-1",
{"path": "/home/lstein/invokeai-main/models/sd-1/controlnet/canny",
"model_format": "diffusers",
"name": "canny",
"base_model": "sd-1",
"type": "controlnet"
}
You can filter by model type and base model as shown here:
controlnets = mgr.list_models(model_type=ModelType.ControlNet,
base_model=BaseModelType.StableDiffusion1)
for c in controlnets:
@@ -140,14 +140,14 @@ Layout of the `models` directory:
models
├── sd-1
├── controlnet
├── lora
├── main
└── embedding
   ├── controlnet
   ├── lora
   ├── main
   └── embedding
├── sd-2
├── controlnet
├── lora
├── main
   ├── controlnet
   ├── lora
   ├── main
│ └── embedding
└── core
├── face_reconstruction
@@ -195,7 +195,7 @@ name, base model, type and a dict of model attributes. See
`invokeai/backend/model_management/models` for the attributes required
by each model type.
A model can be deleted using `del_model()`, providing the same
A model can be deleted using `del_model()`, providing the same
identifying information as `get_model()`
The `heuristic_import()` method will take a set of strings
@@ -304,7 +304,7 @@ class ModelManager(object):
logger: types.ModuleType = logger,
):
"""
Initialize with the path to the models.yaml config file.
Initialize with the path to the models.yaml config file.
Optional parameters are the torch device type, precision, max_models,
and sequential_offload boolean. Note that the default device
type and precision are set up for a CUDA system running at half precision.
@@ -323,7 +323,7 @@ class ModelManager(object):
self.config_meta = ConfigMeta(**config.pop("__metadata__"))
# TODO: metadata not found
# TODO: version check
self.app_config = InvokeAIAppConfig.get_config()
self.logger = logger
self.cache = ModelCache(
@@ -431,7 +431,7 @@ class ModelManager(object):
:param model_name: symbolic name of the model in models.yaml
:param model_type: ModelType enum indicating the type of model to return
:param base_model: BaseModelType enum indicating the base model used by this model
:param submode_typel: an ModelType enum indicating the portion of
:param submode_typel: an ModelType enum indicating the portion of
the model to retrieve (e.g. ModelType.Vae)
"""
model_class = MODEL_CLASSES[base_model][model_type]
@@ -456,7 +456,7 @@ class ModelManager(object):
raise ModelNotFoundException(f"Model not found - {model_key}")
# vae/movq override
# TODO:
# TODO:
if submodel_type is not None and hasattr(model_config, submodel_type):
override_path = getattr(model_config, submodel_type)
if override_path:
@@ -489,7 +489,7 @@ class ModelManager(object):
self.cache_keys[model_key].add(model_context.key)
model_hash = "<NO_HASH>" # TODO:
return ModelInfo(
context = model_context,
name = model_name,
@@ -518,7 +518,7 @@ class ModelManager(object):
def model_names(self) -> List[Tuple[str, BaseModelType, ModelType]]:
"""
Return a list of (str, BaseModelType, ModelType) corresponding to all models
Return a list of (str, BaseModelType, ModelType) corresponding to all models
known to the configuration.
"""
return [(self.parse_key(x)) for x in self.models.keys()]
@@ -692,12 +692,12 @@ class ModelManager(object):
if new_name is None and new_base is None:
self.logger.error("rename_model() called with neither a new_name nor a new_base. {model_name} unchanged.")
return
model_key = self.create_key(model_name, base_model, model_type)
model_cfg = self.models.get(model_key, None)
if not model_cfg:
raise ModelNotFoundException(f"Unknown model: {model_key}")
old_path = self.app_config.root_path / model_cfg.path
new_name = new_name or model_name
new_base = new_base or base_model
@@ -726,7 +726,7 @@ class ModelManager(object):
self.models.pop(model_key, None) # delete
self.models[new_key] = model_cfg
self.commit()
def convert_model (
self,
model_name: str,
@@ -776,12 +776,12 @@ class ModelManager(object):
# something went wrong, so don't leave dangling diffusers model in directory or it will cause a duplicate model error!
rmtree(new_diffusers_path)
raise
if checkpoint_path.exists() and checkpoint_path.is_relative_to(self.app_config.models_path):
checkpoint_path.unlink()
return result
def search_models(self, search_folder):
self.logger.info(f"Finding Models In: {search_folder}")
models_folder_ckpt = Path(search_folder).glob("**/*.ckpt")
@@ -824,14 +824,10 @@ class ModelManager(object):
assert config_file_path is not None,'no config file path to write to'
config_file_path = self.app_config.root_path / config_file_path
tmpfile = os.path.join(os.path.dirname(config_file_path), "new_config.tmp")
try:
with open(tmpfile, "w", encoding="utf-8") as outfile:
outfile.write(self.preamble())
outfile.write(yaml_str)
os.replace(tmpfile, config_file_path)
except OSError as err:
self.logger.warning(f"Could not modify the config file at {config_file_path}")
self.logger.warning(err)
with open(tmpfile, "w", encoding="utf-8") as outfile:
outfile.write(self.preamble())
outfile.write(yaml_str)
os.replace(tmpfile, config_file_path)
def preamble(self) -> str:
"""
@@ -938,34 +934,26 @@ class ModelManager(object):
def models_found(self):
return self.new_models_found
config = self.app_config
# LS: hacky
# Patch in the SD VAE from core so that it is available for use by the UI
try:
self.heuristic_import({config.root_path / 'models/core/convert/sd-vae-ft-mse'})
except:
pass
installer = ModelInstall(config = self.app_config,
model_manager = self,
prediction_type_helper = ask_user_for_prediction_type,
)
config = self.app_config
known_paths = {config.root_path / x['path'] for x in self.list_models()}
directories = {config.root_path / x for x in [config.autoimport_dir,
config.lora_dir,
config.embedding_dir,
config.controlnet_dir,
]
config.controlnet_dir]
}
scanner = ScanAndImport(directories, self.logger, ignore=known_paths, installer=installer)
scanner.search()
return scanner.models_found()
def heuristic_import(self,
items_to_import: Set[str],
prediction_type_helper: Callable[[Path],SchedulerPredictionType]=None,
event_bus = None, # EventServiceBase, with circular dependency issues
)->Dict[str, AddModelResult]:
'''Import a list of paths, repo_ids or URLs. Returns the set of
successfully imported items.
@@ -990,12 +978,16 @@ class ModelManager(object):
# avoid circular import here
from invokeai.backend.install.model_install_backend import ModelInstall
successfully_installed = dict()
installer = ModelInstall(config = self.app_config,
prediction_type_helper = prediction_type_helper,
model_manager = self)
model_manager = self,
event_bus = event_bus,
)
for thing in items_to_import:
installed = installer.heuristic_import(thing)
successfully_installed.update(installed)
self.commit()
self.commit()
return successfully_installed
View File
@@ -39,7 +39,6 @@ class ModelProbe(object):
CLASS2TYPE = {
'StableDiffusionPipeline' : ModelType.Main,
'StableDiffusionInpaintPipeline' : ModelType.Main,
'StableDiffusionXLPipeline' : ModelType.Main,
'StableDiffusionXLImg2ImgPipeline' : ModelType.Main,
'AutoencoderKL' : ModelType.Vae,
@@ -402,7 +401,7 @@ class PipelineFolderProbe(FolderProbeBase):
in_channels = conf['in_channels']
if in_channels == 9:
return ModelVariantType.Inpaint
return ModelVariantType.Inpainting
elif in_channels == 5:
return ModelVariantType.Depth
elif in_channels == 4:
View File
@@ -21,7 +21,6 @@ from tqdm import tqdm
import invokeai.backend.util.logging as logger
from .devices import torch_dtype
def log_txt_as_img(wh, xc, size=10):
# wh a tuple of (width, height)
# xc a list of captions to plot
@@ -285,7 +284,11 @@ def ask_user(question: str, answers: list):
# -------------------------------------
def download_with_resume(url: str, dest: Path, access_token: str = None) -> Path:
def download_with_resume(url: str,
dest: Path,
access_token: str = None,
event_bus = None, # EventServiceBase (circular import issues)
) -> Path:
"""
Download a model file.
:param url: https, http or ftp URL
@@ -323,8 +326,13 @@ def download_with_resume(url: str, dest: Path, access_token: str = None) -> Path
os.remove(dest)
exist_size = 0
if event_bus:
event_bus.emit_download_started(url)
if resp.status_code == 416 or (content_length > 0 and exist_size == content_length):
logger.warning(f"{dest}: complete file found. Skipping.")
if event_bus:
event_bus.emit_download_completed(url,resp.status_code,dest)
return dest
elif resp.status_code == 206 or exist_size > 0:
logger.warning(f"{dest}: partial file found. Resuming...")
@@ -333,11 +341,20 @@ def download_with_resume(url: str, dest: Path, access_token: str = None) -> Path
else:
logger.info(f"{dest}: Downloading...")
try:
if content_length < 2000:
logger.error(f"ERROR DOWNLOADING {url}: {resp.text}")
# If less than 2K, it's not a model - usually an HTML document of some sort
if content_length < 2000:
logger.error(f"ERROR DOWNLOADING {url}: {resp.text}")
if event_bus:
event_bus.emit_download_completed(url, 500, None)
return None
# these variables are used in progress reporting events
MB10 = 10 * 1048576
downloaded = exist_size
previous_interval = 0
try:
with open(dest, open_mode) as file, tqdm(
desc=str(dest),
initial=exist_size,
@@ -349,10 +366,20 @@ def download_with_resume(url: str, dest: Path, access_token: str = None) -> Path
for data in resp.iter_content(chunk_size=1024):
size = file.write(data)
bar.update(size)
downloaded += size
if event_bus and downloaded // MB10 > previous_interval:
previous_interval = downloaded // MB10
event_bus.emit_download_progress(url, downloaded, content_length)
except Exception as e:
logger.error(f"An error occurred while downloading {dest}: {str(e)}")
if event_bus:
event_bus.emit_download_completed(url,500,None)
return None
if event_bus:
event_bus.emit_download_completed(url,resp.status_code,dest)
return dest
View File
@@ -1,4 +1,10 @@
# This file predefines a few models that the user may want to install.
sd-1/main/stable-diffusion-xdl-base:
description: Stable Diffusion XL base model - NOT YET RELEASED!! (70 GB)
repo_id: stabilityai/stable-diffusion-xl-base
sd-1/main/stable-diffusion-xdl-refiner:
description: Stable Diffusion XL refiner model - NOT YET RELEASED!! (60 GB)
repo_id: stabilityai/stable-diffusion-xl-refiner
sd-1/main/stable-diffusion-v1-5:
description: Stable Diffusion version 1.5 diffusers model (4.27 GB)
repo_id: runwayml/stable-diffusion-v1-5
@@ -16,14 +22,6 @@ sd-2/main/stable-diffusion-2-inpainting:
description: Stable Diffusion version 2.0 inpainting model (5.21 GB)
repo_id: stabilityai/stable-diffusion-2-inpainting
recommended: False
sdxl/main/stable-diffusion-xl-base-0-9:
description: Stable Diffusion XL base model (12 GB; access token required)
repo_id: stabilityai/stable-diffusion-xl-base-0.9
recommended: False
sdxl-refiner/main/stable-diffusion-xl-refiner-0-9:
description: Stable Diffusion XL refiner model (12 GB; access token required)
repo_id: stabilityai/stable-diffusion-xl-refiner-0.9
recommended: False
sd-1/main/Analog-Diffusion:
description: An SD-1.5 model trained on diverse analog photographs (2.13 GB)
repo_id: wavymulder/Analog-Diffusion
File diff suppressed because one or more lines are too long (14 files)
View File
@@ -12,7 +12,7 @@
margin: 0;
}
</style>
<script type="module" crossorigin src="./assets/index-0ec007dd.js"></script>
<script type="module" crossorigin src="./assets/index-f1a5f9cf.js"></script>
</head>
<body dir="ltr">
View File
@@ -399,8 +399,6 @@
"deleteModel": "Delete Model",
"deleteConfig": "Delete Config",
"deleteMsg1": "Are you sure you want to delete this model from InvokeAI?",
"modelDeleted": "Model Deleted",
"modelDeleteFailed": "Failed to delete model",
"deleteMsg2": "This WILL delete the model from disk if it is in the InvokeAI root folder. If you are using a custom location, then the model WILL NOT be deleted from disk.",
"formMessageDiffusersModelLocation": "Diffusers Model Location",
"formMessageDiffusersModelLocationDesc": "Please enter at least one.",
@@ -410,13 +408,11 @@
"convertToDiffusers": "Convert To Diffusers",
"convertToDiffusersHelpText1": "This model will be converted to the 🧨 Diffusers format.",
"convertToDiffusersHelpText2": "This process will replace your Model Manager entry with the Diffusers version of the same model.",
"convertToDiffusersHelpText3": "Your checkpoint file on disk WILL be deleted if it is in InvokeAI root folder. If it is in a custom location, then it WILL NOT be deleted.",
"convertToDiffusersHelpText3": "Your checkpoint file on the disk will NOT be deleted or modified in anyway. You can add your checkpoint to the Model Manager again if you want to.",
"convertToDiffusersHelpText4": "This is a one time process only. It might take around 30s-60s depending on the specifications of your computer.",
"convertToDiffusersHelpText5": "Please make sure you have enough disk space. Models generally vary between 2GB-7GB in size.",
"convertToDiffusersHelpText6": "Do you wish to convert this model?",
"convertToDiffusersSaveLocation": "Save Location",
"noCustomLocationProvided": "No Custom Location Provided",
"convertingModelBegin": "Converting Model. Please wait.",
"v1": "v1",
"v2_base": "v2 (512px)",
"v2_768": "v2 (768px)",
@@ -454,8 +450,7 @@
"none": "none",
"addDifference": "Add Difference",
"pickModelType": "Pick Model Type",
"selectModel": "Select Model",
"importModels": "Import Models"
"selectModel": "Select Model"
},
"parameters": {
"general": "General",
@@ -577,7 +572,6 @@
"uploadFailedInvalidUploadDesc": "Must be single PNG or JPEG image",
"downloadImageStarted": "Image Download Started",
"imageCopied": "Image Copied",
"problemCopyingImage": "Unable to Copy Image",
"imageLinkCopied": "Image Link Copied",
"problemCopyingImageLink": "Unable to Copy Image Link",
"imageNotLoaded": "No Image Loaded",
@@ -694,15 +688,6 @@
"reloadSchema": "Reload Schema",
"saveNodes": "Save Nodes",
"loadNodes": "Load Nodes",
"clearNodes": "Clear Nodes",
"zoomInNodes": "Zoom In",
"zoomOutNodes": "Zoom Out",
"fitViewportNodes": "Fit View",
"hideGraphNodes": "Hide Graph Overlay",
"showGraphNodes": "Show Graph Overlay",
"hideLegendNodes": "Hide Field Type Legend",
"showLegendNodes": "Show Field Type Legend",
"hideMinimapnodes": "Hide MiniMap",
"showMinimapnodes": "Show MiniMap"
"clearNodes": "Clear Nodes"
}
}

View File

@@ -15,6 +15,7 @@ import InvokeTabs from 'features/ui/components/InvokeTabs';
import ParametersDrawer from 'features/ui/components/ParametersDrawer';
import i18n from 'i18n';
import { ReactNode, memo, useEffect } from 'react';
import DeleteBoardImagesModal from '../../features/gallery/components/Boards/DeleteBoardImagesModal';
import UpdateImageBoardModal from '../../features/gallery/components/Boards/UpdateImageBoardModal';
import GlobalHotkeys from './GlobalHotkeys';
import Toaster from './Toaster';
@@ -83,6 +84,7 @@ const App = ({ config = DEFAULT_CONFIG, headerComponent }: Props) => {
</Grid>
<DeleteImageModal />
<UpdateImageBoardModal />
<DeleteBoardImagesModal />
<Toaster />
<GlobalHotkeys />
</>

View File

@@ -15,7 +15,10 @@ const STYLES: ChakraProps['sx'] = {
maxH: BOX_SIZE,
shadow: 'dark-lg',
borderRadius: 'lg',
opacity: 0.3,
borderWidth: 2,
borderStyle: 'dashed',
borderColor: 'base.100',
opacity: 0.5,
bg: 'base.800',
color: 'base.50',
_dark: {

View File

@@ -28,7 +28,6 @@ const ImageDndContext = (props: ImageDndContextProps) => {
const dispatch = useAppDispatch();
const handleDragStart = useCallback((event: DragStartEvent) => {
console.log('dragStart', event.active.data.current);
const activeData = event.active.data.current;
if (!activeData) {
return;
@@ -38,16 +37,15 @@ const ImageDndContext = (props: ImageDndContextProps) => {
const handleDragEnd = useCallback(
(event: DragEndEvent) => {
console.log('dragEnd', event.active.data.current);
const activeData = event.active.data.current;
const overData = event.over?.data.current;
if (!activeDragData || !overData) {
if (!activeData || !overData) {
return;
}
dispatch(dndDropped({ overData, activeData: activeDragData }));
dispatch(dndDropped({ overData, activeData }));
setActiveDragData(null);
},
[activeDragData, dispatch]
[dispatch]
);
const mouseSensor = useSensor(MouseSensor, {

View File

@@ -11,7 +11,6 @@ import {
useDraggable as useOriginalDraggable,
useDroppable as useOriginalDroppable,
} from '@dnd-kit/core';
import { BoardId } from 'features/gallery/store/gallerySlice';
import { ImageDTO } from 'services/api/types';
type BaseDropData = {
@@ -56,7 +55,7 @@ export type AddToBatchDropData = BaseDropData & {
export type MoveBoardDropData = BaseDropData & {
actionType: 'MOVE_BOARD';
context: { boardId: BoardId };
context: { boardId: string | null };
};
export type TypesafeDroppableData =
@@ -159,36 +158,8 @@ export const isValidDrop = (
return payloadType === 'IMAGE_DTO' || 'IMAGE_NAMES';
case 'ADD_TO_BATCH':
return payloadType === 'IMAGE_DTO' || 'IMAGE_NAMES';
case 'MOVE_BOARD': {
// If the board is the same, don't allow the drop
// Check the payload types
const isPayloadValid = payloadType === 'IMAGE_DTO' || 'IMAGE_NAMES';
if (!isPayloadValid) {
return false;
}
// Check if the image's board is the board we are dragging onto
if (payloadType === 'IMAGE_DTO') {
const { imageDTO } = active.data.current.payload;
const currentBoard = imageDTO.board_id;
const destinationBoard = overData.context.boardId;
const isSameBoard = currentBoard === destinationBoard;
const isDestinationValid = !currentBoard
? destinationBoard !== 'no_board'
: true;
return !isSameBoard && isDestinationValid;
}
if (payloadType === 'IMAGE_NAMES') {
// TODO (multi-select)
return false;
}
return true;
}
case 'MOVE_BOARD':
return payloadType === 'IMAGE_DTO' || 'IMAGE_NAMES';
default:
return false;
}
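
A note on the `isValidDrop` cases above: an expression like `payloadType === 'IMAGE_DTO' || 'IMAGE_NAMES'` is always truthy, because when the equality check fails, the non-empty string literal `'IMAGE_NAMES'` is returned instead of a second comparison. A minimal sketch of the difference (names are illustrative, not from this codebase):

```
// Always truthy: evaluates to `true` or to the string 'IMAGE_NAMES'.
const broken = (payloadType: string) =>
  payloadType === 'IMAGE_DTO' || 'IMAGE_NAMES';

// Compares against each allowed payload type explicitly.
const isValidPayloadType = (payloadType: string): boolean =>
  payloadType === 'IMAGE_DTO' || payloadType === 'IMAGE_NAMES';

console.log(broken('SOMETHING_ELSE'));             // 'IMAGE_NAMES' (truthy)
console.log(isValidPayloadType('SOMETHING_ELSE')); // false
```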

View File

@@ -18,6 +18,7 @@ import { Middleware } from '@reduxjs/toolkit';
import ImageDndContext from './ImageDnd/ImageDndContext';
import { AddImageToBoardContextProvider } from '../contexts/AddImageToBoardContext';
import { $authToken, $baseUrl } from 'services/api/client';
import { DeleteBoardImagesContextProvider } from '../contexts/DeleteBoardImagesContext';
const App = lazy(() => import('./App'));
const ThemeLocaleProvider = lazy(() => import('./ThemeLocaleProvider'));
@@ -77,7 +78,9 @@ const InvokeAIUI = ({
<ThemeLocaleProvider>
<ImageDndContext>
<AddImageToBoardContextProvider>
<App config={config} headerComponent={headerComponent} />
<DeleteBoardImagesContextProvider>
<App config={config} headerComponent={headerComponent} />
</DeleteBoardImagesContextProvider>
</AddImageToBoardContextProvider>
</ImageDndContext>
</ThemeLocaleProvider>

View File

@@ -59,8 +59,15 @@ export const SCHEDULER_LABEL_MAP: Record<SchedulerParam, string> = {
export type Scheduler = (typeof SCHEDULER_NAMES)[number];
// Valid upscaling levels
export const UPSCALING_LEVELS: Array<{ label: string; value: string }> = [
{ label: '2x', value: '2' },
{ label: '4x', value: '4' },
];
export const NUMPY_RAND_MIN = 0;
export const NUMPY_RAND_MAX = 2147483647;
export const FACETOOL_TYPES = ['gfpgan', 'codeformer'] as const;
export const NODE_MIN_WIDTH = 250;

View File

@@ -1,8 +1,7 @@
import { useDisclosure } from '@chakra-ui/react';
import { PropsWithChildren, createContext, useCallback, useState } from 'react';
import { ImageDTO } from 'services/api/types';
import { imagesApi } from 'services/api/endpoints/images';
import { useAppDispatch } from '../store/storeHooks';
import { useAddImageToBoardMutation } from 'services/api/endpoints/boardImages';
export type ImageUsage = {
isInitialImage: boolean;
@@ -41,7 +40,8 @@ type Props = PropsWithChildren;
export const AddImageToBoardContextProvider = (props: Props) => {
const [imageToMove, setImageToMove] = useState<ImageDTO>();
const { isOpen, onOpen, onClose } = useDisclosure();
const dispatch = useAppDispatch();
const [addImageToBoard, result] = useAddImageToBoardMutation();
// Clean up after deleting or dismissing the modal
const closeAndClearImageToDelete = useCallback(() => {
@@ -63,16 +63,14 @@ export const AddImageToBoardContextProvider = (props: Props) => {
const handleAddToBoard = useCallback(
(boardId: string) => {
if (imageToMove) {
dispatch(
imagesApi.endpoints.addImageToBoard.initiate({
imageDTO: imageToMove,
board_id: boardId,
})
);
addImageToBoard({
board_id: boardId,
image_name: imageToMove.image_name,
});
closeAndClearImageToDelete();
}
},
[dispatch, closeAndClearImageToDelete, imageToMove]
[addImageToBoard, closeAndClearImageToDelete, imageToMove]
);
return (
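
The rewrite above replaces a manual `dispatch(imagesApi.endpoints.addImageToBoard.initiate(...))` with the mutation hook that RTK Query generates from the endpoint definition. A minimal sketch of where such a hook comes from, assuming a hypothetical endpoint shape (URL and reducer path are illustrative):

```
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

// Hypothetical api slice; the real endpoint lives in
// services/api/endpoints/boardImages.
const boardImagesApi = createApi({
  reducerPath: 'boardImagesApi',
  baseQuery: fetchBaseQuery({ baseUrl: '/api/v1/' }),
  endpoints: (build) => ({
    addImageToBoard: build.mutation<
      void,
      { board_id: string; image_name: string }
    >({
      query: (body) => ({ url: 'board_images/', method: 'POST', body }),
    }),
  }),
});

// The generated hook returns a trigger plus request state (isLoading,
// isSuccess, error, ...), so the component no longer needs dispatch
// or manual result tracking.
export const { useAddImageToBoardMutation } = boardImagesApi;
```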

View File

@@ -0,0 +1,170 @@
import { useDisclosure } from '@chakra-ui/react';
import { PropsWithChildren, createContext, useCallback, useState } from 'react';
import { BoardDTO } from 'services/api/types';
import { useDeleteBoardMutation } from '../../services/api/endpoints/boards';
import { defaultSelectorOptions } from '../store/util/defaultMemoizeOptions';
import { createSelector } from '@reduxjs/toolkit';
import { some } from 'lodash-es';
import { canvasSelector } from 'features/canvas/store/canvasSelectors';
import { controlNetSelector } from 'features/controlNet/store/controlNetSlice';
import { selectImagesById } from 'features/gallery/store/gallerySlice';
import { nodesSelector } from 'features/nodes/store/nodesSlice';
import { generationSelector } from 'features/parameters/store/generationSelectors';
import { RootState } from '../store/store';
import { useAppDispatch, useAppSelector } from '../store/storeHooks';
import { ImageUsage } from './DeleteImageContext';
import { requestedBoardImagesDeletion } from 'features/gallery/store/actions';
export const selectBoardImagesUsage = createSelector(
[
(state: RootState) => state,
generationSelector,
canvasSelector,
nodesSelector,
controlNetSelector,
(state: RootState, board_id?: string) => board_id,
],
(state, generation, canvas, nodes, controlNet, board_id) => {
const initialImage = generation.initialImage
? selectImagesById(state, generation.initialImage.imageName)
: undefined;
const isInitialImage = initialImage?.board_id === board_id;
const isCanvasImage = canvas.layerState.objects.some((obj) => {
if (obj.kind === 'image') {
const image = selectImagesById(state, obj.imageName);
return image?.board_id === board_id;
}
return false;
});
const isNodesImage = nodes.nodes.some((node) => {
return some(node.data.inputs, (input) => {
if (input.type === 'image' && input.value) {
const image = selectImagesById(state, input.value.image_name);
return image?.board_id === board_id;
}
return false;
});
});
const isControlNetImage = some(controlNet.controlNets, (c) => {
const controlImage = c.controlImage
? selectImagesById(state, c.controlImage)
: undefined;
const processedControlImage = c.processedControlImage
? selectImagesById(state, c.processedControlImage)
: undefined;
return (
controlImage?.board_id === board_id ||
processedControlImage?.board_id === board_id
);
});
const imageUsage: ImageUsage = {
isInitialImage,
isCanvasImage,
isNodesImage,
isControlNetImage,
};
return imageUsage;
},
defaultSelectorOptions
);
type DeleteBoardImagesContextValue = {
/**
* Whether the delete board images dialog is open.
*/
isOpen: boolean;
/**
* Closes the delete board images dialog.
*/
onClose: () => void;
imagesUsage?: ImageUsage;
board?: BoardDTO;
onClickDeleteBoardImages: (board: BoardDTO) => void;
handleDeleteBoardImages: (boardId: string) => void;
handleDeleteBoardOnly: (boardId: string) => void;
};
export const DeleteBoardImagesContext =
createContext<DeleteBoardImagesContextValue>({
isOpen: false,
onClose: () => undefined,
onClickDeleteBoardImages: () => undefined,
handleDeleteBoardImages: () => undefined,
handleDeleteBoardOnly: () => undefined,
});
type Props = PropsWithChildren;
export const DeleteBoardImagesContextProvider = (props: Props) => {
const [boardToDelete, setBoardToDelete] = useState<BoardDTO>();
const { isOpen, onOpen, onClose } = useDisclosure();
const dispatch = useAppDispatch();
// Check where the board images to be deleted are used (eg init image, controlnet, etc.)
const imagesUsage = useAppSelector((state) =>
selectBoardImagesUsage(state, boardToDelete?.board_id)
);
const [deleteBoard] = useDeleteBoardMutation();
// Clean up after deleting or dismissing the modal
const closeAndClearBoardToDelete = useCallback(() => {
setBoardToDelete(undefined);
onClose();
}, [onClose]);
const onClickDeleteBoardImages = useCallback(
(board?: BoardDTO) => {
console.log({ board });
if (!board) {
return;
}
setBoardToDelete(board);
onOpen();
},
[setBoardToDelete, onOpen]
);
const handleDeleteBoardImages = useCallback(
(boardId: string) => {
if (boardToDelete) {
dispatch(
requestedBoardImagesDeletion({ board: boardToDelete, imagesUsage })
);
closeAndClearBoardToDelete();
}
},
[dispatch, closeAndClearBoardToDelete, boardToDelete, imagesUsage]
);
const handleDeleteBoardOnly = useCallback(
(boardId: string) => {
if (boardToDelete) {
deleteBoard(boardId);
closeAndClearBoardToDelete();
}
},
[deleteBoard, closeAndClearBoardToDelete, boardToDelete]
);
return (
<DeleteBoardImagesContext.Provider
value={{
isOpen,
board: boardToDelete,
onClose: closeAndClearBoardToDelete,
onClickDeleteBoardImages,
handleDeleteBoardImages,
handleDeleteBoardOnly,
imagesUsage,
}}
>
{props.children}
</DeleteBoardImagesContext.Provider>
);
};
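
For orientation, a consumer reads this context with `useContext`; a minimal sketch with a hypothetical `BoardDeleteButton` component:

```
import { useContext } from 'react';
import { DeleteBoardImagesContext } from './DeleteBoardImagesContext';
import { BoardDTO } from 'services/api/types';

// Hypothetical consumer: opens the delete-board-images dialog for a board.
const BoardDeleteButton = ({ board }: { board: BoardDTO }) => {
  const { onClickDeleteBoardImages } = useContext(DeleteBoardImagesContext);
  return (
    <button onClick={() => onClickDeleteBoardImages(board)}>
      Delete board
    </button>
  );
};

export default BoardDeleteButton;
```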

View File

@@ -11,7 +11,7 @@ import { addCommitStagingAreaImageListener } from './listeners/addCommitStagingA
import { addAppConfigReceivedListener } from './listeners/appConfigReceived';
import { addAppStartedListener } from './listeners/appStarted';
import { addBoardIdSelectedListener } from './listeners/boardIdSelected';
import { addDeleteBoardAndImagesFulfilledListener } from './listeners/boardAndImagesDeleted';
import { addRequestedBoardImageDeletionListener } from './listeners/boardImagesDeleted';
import { addCanvasCopiedToClipboardListener } from './listeners/canvasCopiedToClipboard';
import { addCanvasDownloadedAsImageListener } from './listeners/canvasDownloadedAsImage';
import { addCanvasMergedListener } from './listeners/canvasMerged';
@@ -29,6 +29,10 @@ import {
addRequestedImageDeletionListener,
} from './listeners/imageDeleted';
import { addImageDroppedListener } from './listeners/imageDropped';
import {
addImageMetadataReceivedFulfilledListener,
addImageMetadataReceivedRejectedListener,
} from './listeners/imageMetadataReceived';
import {
addImageRemovedFromBoardFulfilledListener,
addImageRemovedFromBoardRejectedListener,
@@ -42,10 +46,18 @@ import {
addImageUploadedFulfilledListener,
addImageUploadedRejectedListener,
} from './listeners/imageUploaded';
import {
addImageUrlsReceivedFulfilledListener,
addImageUrlsReceivedRejectedListener,
} from './listeners/imageUrlsReceived';
import { addInitialImageSelectedListener } from './listeners/initialImageSelected';
import { addModelSelectedListener } from './listeners/modelSelected';
import { addModelsLoadedListener } from './listeners/modelsLoaded';
import { addReceivedOpenAPISchemaListener } from './listeners/receivedOpenAPISchema';
import {
addReceivedPageOfImagesFulfilledListener,
addReceivedPageOfImagesRejectedListener,
} from './listeners/receivedPageOfImages';
import {
addSessionCanceledFulfilledListener,
addSessionCanceledPendingListener,
@@ -78,8 +90,6 @@ import { addUserInvokedNodesListener } from './listeners/userInvokedNodes';
import { addUserInvokedTextToImageListener } from './listeners/userInvokedTextToImage';
import { addModelLoadStartedEventListener } from './listeners/socketio/socketModelLoadStarted';
import { addModelLoadCompletedEventListener } from './listeners/socketio/socketModelLoadCompleted';
import { addUpscaleRequestedListener } from './listeners/upscaleRequested';
import { addFirstListImagesListener } from './listeners/addFirstListImagesListener.ts';
export const listenerMiddleware = createListenerMiddleware();
@@ -121,9 +131,17 @@ addRequestedImageDeletionListener();
addImageDeletedPendingListener();
addImageDeletedFulfilledListener();
addImageDeletedRejectedListener();
addDeleteBoardAndImagesFulfilledListener();
addRequestedBoardImageDeletionListener();
addImageToDeleteSelectedListener();
// Image metadata
addImageMetadataReceivedFulfilledListener();
addImageMetadataReceivedRejectedListener();
// Image URLs
addImageUrlsReceivedFulfilledListener();
addImageUrlsReceivedRejectedListener();
// User Invoked
addUserInvokedCanvasListener();
addUserInvokedNodesListener();
@@ -179,10 +197,17 @@ addSessionCanceledPendingListener();
addSessionCanceledFulfilledListener();
addSessionCanceledRejectedListener();
// Fetching images
addReceivedPageOfImagesFulfilledListener();
addReceivedPageOfImagesRejectedListener();
// ControlNet
addControlNetImageProcessedListener();
addControlNetAutoProcessListener();
// Update image URLs on connect
// addUpdateImageUrlsOnConnectListener();
// Boards
addImageAddedToBoardFulfilledListener();
addImageAddedToBoardRejectedListener();
@@ -203,7 +228,3 @@ addModelSelectedListener();
addAppStartedListener();
addModelsLoadedListener();
addAppConfigReceivedListener();
addFirstListImagesListener();
// Ad-hoc upscale workflow
addUpscaleRequestedListener();
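
Every `add*Listener()` call above registers an effect on one shared `createListenerMiddleware` instance. A minimal sketch of the wiring, assuming an illustrative store shape:

```
import {
  configureStore,
  createListenerMiddleware,
  TypedStartListening,
} from '@reduxjs/toolkit';

export const listenerMiddleware = createListenerMiddleware();

// Illustrative single-slice store; the real app composes many reducers.
export const store = configureStore({
  reducer: { counter: (state: number = 0) => state },
  // prepend() so listener effects run ahead of the default middleware.
  middleware: (getDefaultMiddleware) =>
    getDefaultMiddleware().prepend(listenerMiddleware.middleware),
});

export type RootState = ReturnType<typeof store.getState>;
export type AppDispatch = typeof store.dispatch;

// Typed wrapper: every effect gets correctly typed getState/dispatch.
export const startAppListening =
  listenerMiddleware.startListening as TypedStartListening<
    RootState,
    AppDispatch
  >;
```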

View File

@@ -1,43 +0,0 @@
import { createAction } from '@reduxjs/toolkit';
import {
IMAGE_CATEGORIES,
imageSelected,
} from 'features/gallery/store/gallerySlice';
import {
ImageCache,
getListImagesUrl,
imagesApi,
} from 'services/api/endpoints/images';
import { startAppListening } from '..';
export const appStarted = createAction('app/appStarted');
export const addFirstListImagesListener = () => {
startAppListening({
matcher: imagesApi.endpoints.listImages.matchFulfilled,
effect: async (
action,
{ getState, dispatch, unsubscribe, cancelActiveListeners }
) => {
// Only run this listener on the first listImages request for `images` categories
if (
action.meta.arg.queryCacheKey !==
getListImagesUrl({ categories: IMAGE_CATEGORIES })
) {
return;
}
// this should only run once
cancelActiveListeners();
unsubscribe();
// TODO: figure out how to type the predicate
const data = action.payload as ImageCache;
if (data.ids.length > 0) {
// Select the first image
dispatch(imageSelected(data.ids[0] as string));
}
},
});
};

View File

@@ -1,4 +1,11 @@
import { createAction } from '@reduxjs/toolkit';
import {
ASSETS_CATEGORIES,
IMAGE_CATEGORIES,
INITIAL_IMAGE_LIMIT,
isLoadingChanged,
} from 'features/gallery/store/gallerySlice';
import { receivedPageOfImages } from 'services/api/thunks/image';
import { startAppListening } from '..';
export const appStarted = createAction('app/appStarted');
@@ -10,9 +17,29 @@ export const addAppStartedListener = () => {
action,
{ getState, dispatch, unsubscribe, cancelActiveListeners }
) => {
// this should only run once
cancelActiveListeners();
unsubscribe();
// fill up the gallery tab with images
await dispatch(
receivedPageOfImages({
categories: IMAGE_CATEGORIES,
is_intermediate: false,
offset: 0,
limit: INITIAL_IMAGE_LIMIT,
})
);
// fill up the assets tab with images
await dispatch(
receivedPageOfImages({
categories: ASSETS_CATEGORIES,
is_intermediate: false,
offset: 0,
limit: INITIAL_IMAGE_LIMIT,
})
);
dispatch(isLoadingChanged(false));
},
});
};
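
The `cancelActiveListeners()` / `unsubscribe()` pair at the top of the effect is the run-once idiom for listener effects: cancel any concurrent invocations, then deregister the listener so the body executes at most one time. A minimal self-contained sketch:

```
import { createAction, createListenerMiddleware } from '@reduxjs/toolkit';

const appStarted = createAction('app/appStarted');
const listenerMiddleware = createListenerMiddleware();

listenerMiddleware.startListening({
  actionCreator: appStarted,
  effect: async (action, { cancelActiveListeners, unsubscribe }) => {
    // Kill any in-flight runs of this effect, then remove the listener
    // entirely, so the startup work below can never run twice.
    cancelActiveListeners();
    unsubscribe();

    // ...one-time startup work, e.g. the initial gallery/asset fetches...
  },
});
```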

View File

@@ -1,48 +0,0 @@
import { resetCanvas } from 'features/canvas/store/canvasSlice';
import { controlNetReset } from 'features/controlNet/store/controlNetSlice';
import { getImageUsage } from 'features/imageDeletion/store/imageDeletionSlice';
import { nodeEditorReset } from 'features/nodes/store/nodesSlice';
import { clearInitialImage } from 'features/parameters/store/generationSlice';
import { startAppListening } from '..';
import { boardsApi } from '../../../../../services/api/endpoints/boards';
export const addDeleteBoardAndImagesFulfilledListener = () => {
startAppListening({
matcher: boardsApi.endpoints.deleteBoardAndImages.matchFulfilled,
effect: async (action, { dispatch, getState, condition }) => {
const { board_id, deleted_board_images, deleted_images } = action.payload;
// Remove all deleted images from the UI
let wasInitialImageReset = false;
let wasCanvasReset = false;
let wasNodeEditorReset = false;
let wasControlNetReset = false;
const state = getState();
deleted_images.forEach((image_name) => {
const imageUsage = getImageUsage(state, image_name);
if (imageUsage.isInitialImage && !wasInitialImageReset) {
dispatch(clearInitialImage());
wasInitialImageReset = true;
}
if (imageUsage.isCanvasImage && !wasCanvasReset) {
dispatch(resetCanvas());
wasCanvasReset = true;
}
if (imageUsage.isNodesImage && !wasNodeEditorReset) {
dispatch(nodeEditorReset());
wasNodeEditorReset = true;
}
if (imageUsage.isControlNetImage && !wasControlNetReset) {
dispatch(controlNetReset());
wasControlNetReset = true;
}
});
},
});
};

View File

@@ -1,13 +1,17 @@
import { log } from 'app/logging/useLogger';
import { selectFilteredImages } from 'features/gallery/store/gallerySelectors';
import {
ASSETS_CATEGORIES,
IMAGE_CATEGORIES,
boardIdSelected,
imageSelected,
selectImagesAll,
} from 'features/gallery/store/gallerySlice';
import { boardsApi } from 'services/api/endpoints/boards';
import {
getBoardIdQueryParamForBoard,
getCategoriesQueryParamForBoard,
} from 'features/gallery/store/util';
import { imagesApi } from 'services/api/endpoints/images';
import {
IMAGES_PER_PAGE,
receivedPageOfImages,
} from 'services/api/thunks/image';
import { startAppListening } from '..';
const moduleLog = log.child({ namespace: 'boards' });
@@ -15,44 +19,54 @@ const moduleLog = log.child({ namespace: 'boards' });
export const addBoardIdSelectedListener = () => {
startAppListening({
actionCreator: boardIdSelected,
effect: async (
action,
{ getState, dispatch, condition, cancelActiveListeners }
) => {
// Cancel any in-progress instances of this listener, we don't want to select an image from a previous board
cancelActiveListeners();
effect: (action, { getState, dispatch }) => {
const board_id = action.payload;
const _board_id = action.payload;
// when a board is selected, we need to wait until the board has loaded *some* images, then select the first one
// we need to check if we need to fetch more images
const categories = getCategoriesQueryParamForBoard(_board_id);
const board_id = getBoardIdQueryParamForBoard(_board_id);
const queryArgs = { board_id, categories };
const state = getState();
const allImages = selectImagesAll(state);
// wait until the board has some images - maybe it already has some from a previous fetch
// must use getState() to ensure we do not have stale state
const isSuccess = await condition(
() =>
imagesApi.endpoints.listImages.select(queryArgs)(getState())
.isSuccess,
1000
);
if (board_id === 'all') {
// Selected all images
dispatch(imageSelected(allImages[0]?.image_name ?? null));
return;
}
if (isSuccess) {
// the board was just changed - we can select the first image
const { data: boardImagesData } = imagesApi.endpoints.listImages.select(
queryArgs
)(getState());
if (board_id === 'batch') {
// Selected the batch
dispatch(imageSelected(state.gallery.batchImageNames[0] ?? null));
return;
}
if (boardImagesData?.ids.length) {
dispatch(imageSelected((boardImagesData.ids[0] as string) ?? null));
} else {
// board has no images - deselect
dispatch(imageSelected(null));
}
} else {
// fallback - deselect
dispatch(imageSelected(null));
const filteredImages = selectFilteredImages(state);
const categories =
state.gallery.galleryView === 'images'
? IMAGE_CATEGORIES
: ASSETS_CATEGORIES;
// get the board from the cache
const { data: boards } =
boardsApi.endpoints.listAllBoards.select()(state);
const board = boards?.find((b) => b.board_id === board_id);
if (!board) {
// can't find the board in cache...
dispatch(boardIdSelected('all'));
return;
}
dispatch(imageSelected(board.cover_image_name ?? null));
// if we haven't loaded one full page of images from this board, load more
if (
filteredImages.length < board.image_count &&
filteredImages.length < IMAGES_PER_PAGE
) {
dispatch(
receivedPageOfImages({ categories, board_id, is_intermediate: false })
);
}
},
});
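
The left-hand (removed) version of this listener shows the `condition` helper, and its comment makes the key point: the predicate must call `getState()` on every evaluation, because a state snapshot captured before the await would never change. A minimal sketch of waiting on a flag with a one-second timeout (the slice and flag are illustrative):

```
import {
  createAction,
  createListenerMiddleware,
  createSlice,
  PayloadAction,
} from '@reduxjs/toolkit';

const demoSlice = createSlice({
  name: 'demo',
  initialState: { isReady: false },
  reducers: {
    readyChanged: (state, action: PayloadAction<boolean>) => {
      state.isReady = action.payload;
    },
  },
});

type DemoState = { demo: ReturnType<typeof demoSlice.getInitialState> };

const somethingSelected = createAction<string>('demo/somethingSelected');
const listenerMiddleware = createListenerMiddleware();

listenerMiddleware.startListening({
  actionCreator: somethingSelected,
  effect: async (action, { getState, condition, cancelActiveListeners }) => {
    // A new selection supersedes any still-waiting run of this effect.
    cancelActiveListeners();

    // getState() inside the predicate => fresh state on every poll.
    const becameReady = await condition(
      () => (getState() as DemoState).demo.isReady,
      1000 // stop waiting after one second
    );
    if (!becameReady) {
      return; // timed out; fall back (e.g. deselect)
    }
    // ...safe to proceed with fresh state here...
  },
});
```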

View File

@@ -0,0 +1,82 @@
import { requestedBoardImagesDeletion } from 'features/gallery/store/actions';
import { startAppListening } from '..';
import {
imageSelected,
imagesRemoved,
selectImagesAll,
selectImagesById,
} from 'features/gallery/store/gallerySlice';
import { resetCanvas } from 'features/canvas/store/canvasSlice';
import { controlNetReset } from 'features/controlNet/store/controlNetSlice';
import { clearInitialImage } from 'features/parameters/store/generationSlice';
import { nodeEditorReset } from 'features/nodes/store/nodesSlice';
import { LIST_TAG, api } from 'services/api';
import { boardsApi } from '../../../../../services/api/endpoints/boards';
export const addRequestedBoardImageDeletionListener = () => {
startAppListening({
actionCreator: requestedBoardImagesDeletion,
effect: async (action, { dispatch, getState, condition }) => {
const { board, imagesUsage } = action.payload;
const { board_id } = board;
const state = getState();
const selectedImageName =
state.gallery.selection[state.gallery.selection.length - 1];
const selectedImage = selectedImageName
? selectImagesById(state, selectedImageName)
: undefined;
if (selectedImage && selectedImage.board_id === board_id) {
dispatch(imageSelected(null));
}
// We need to reset the features where the board images are in use - none of these work if their image(s) don't exist
if (imagesUsage.isCanvasImage) {
dispatch(resetCanvas());
}
if (imagesUsage.isControlNetImage) {
dispatch(controlNetReset());
}
if (imagesUsage.isInitialImage) {
dispatch(clearInitialImage());
}
if (imagesUsage.isNodesImage) {
dispatch(nodeEditorReset());
}
// Preemptively remove from gallery
const images = selectImagesAll(state).reduce((acc: string[], img) => {
if (img.board_id === board_id) {
acc.push(img.image_name);
}
return acc;
}, []);
dispatch(imagesRemoved(images));
// Delete from server
dispatch(boardsApi.endpoints.deleteBoardAndImages.initiate(board_id));
const result =
boardsApi.endpoints.deleteBoardAndImages.select(board_id)(state);
const { isSuccess } = result;
// Wait for successful deletion, then trigger boards to re-fetch
const wasBoardDeleted = await condition(() => !!isSuccess, 30000);
if (wasBoardDeleted) {
dispatch(
api.util.invalidateTags([
{ type: 'Board', id: board_id },
{ type: 'Image', id: LIST_TAG },
])
);
}
},
});
};
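
One caveat in this new listener: `deleteBoardAndImages.select(board_id)(state)` reads the `state` snapshot captured at the top of the effect, before the delete was dispatched, so `isSuccess` is computed exactly once and `condition(() => !!isSuccess, 30000)` polls a constant that can never flip to true. The predicate needs to re-select from `getState()` on each check; a minimal sketch of the fix, with the dependencies passed in as illustrative types:

```
// Illustrative shapes; the real ones come from the listener middleware
// and the boards api slice.
type WaitDeps = {
  getState: () => unknown;
  condition: (pred: () => boolean, timeoutMs?: number) => Promise<boolean>;
  selectDeleteResult: (state: unknown) => { isSuccess: boolean };
};

// Re-evaluates the RTK Query cache entry on every poll, so the wait
// actually observes the request completing.
export const waitForBoardDeletion = ({
  getState,
  condition,
  selectDeleteResult,
}: WaitDeps): Promise<boolean> =>
  condition(() => selectDeleteResult(getState()).isSuccess, 30000);
```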

View File

@@ -1,11 +1,11 @@
import { log } from 'app/logging/useLogger';
import { canvasMerged } from 'features/canvas/store/actions';
import { setMergedCanvas } from 'features/canvas/store/canvasSlice';
import { getFullBaseLayerBlob } from 'features/canvas/util/getFullBaseLayerBlob';
import { getCanvasBaseLayer } from 'features/canvas/util/konvaInstanceProvider';
import { addToast } from 'features/system/store/systemSlice';
import { imagesApi } from 'services/api/endpoints/images';
import { startAppListening } from '..';
import { log } from 'app/logging/useLogger';
import { addToast } from 'features/system/store/systemSlice';
import { imageUploaded } from 'services/api/thunks/image';
import { setMergedCanvas } from 'features/canvas/store/canvasSlice';
import { getCanvasBaseLayer } from 'features/canvas/util/konvaInstanceProvider';
import { getFullBaseLayerBlob } from 'features/canvas/util/getFullBaseLayerBlob';
const moduleLog = log.child({ namespace: 'canvasCopiedToClipboardListener' });
@@ -46,28 +46,27 @@ export const addCanvasMergedListener = () => {
});
const imageUploadedRequest = dispatch(
imagesApi.endpoints.uploadImage.initiate({
imageUploaded({
file: new File([blob], 'mergedCanvas.png', {
type: 'image/png',
}),
image_category: 'general',
is_intermediate: true,
postUploadAction: {
type: 'TOAST',
toastOptions: { title: 'Canvas Merged' },
type: 'TOAST_CANVAS_MERGED',
},
})
);
const [{ payload }] = await take(
(uploadedImageAction) =>
imagesApi.endpoints.uploadImage.matchFulfilled(uploadedImageAction) &&
(
uploadedImageAction
): uploadedImageAction is ReturnType<typeof imageUploaded.fulfilled> =>
imageUploaded.fulfilled.match(uploadedImageAction) &&
uploadedImageAction.meta.requestId === imageUploadedRequest.requestId
);
// TODO: I can't figure out how to do the type narrowing in the `take()` so just brute forcing it here
const { image_name } =
payload as typeof imagesApi.endpoints.uploadImage.Types.ResultType;
const { image_name } = payload;
dispatch(
setMergedCanvas({
@@ -77,6 +76,13 @@ export const addCanvasMergedListener = () => {
...baseLayerRect,
})
);
dispatch(
addToast({
title: 'Canvas Merged',
status: 'success',
})
);
},
});
};

View File

@@ -1,9 +1,10 @@
import { log } from 'app/logging/useLogger';
import { canvasSavedToGallery } from 'features/canvas/store/actions';
import { startAppListening } from '..';
import { log } from 'app/logging/useLogger';
import { imageUploaded } from 'services/api/thunks/image';
import { getBaseLayerBlob } from 'features/canvas/util/getBaseLayerBlob';
import { addToast } from 'features/system/store/systemSlice';
import { imagesApi } from 'services/api/endpoints/images';
import { startAppListening } from '..';
import { imageUpserted } from 'features/gallery/store/gallerySlice';
const moduleLog = log.child({ namespace: 'canvasSavedToGalleryListener' });
@@ -27,19 +28,28 @@ export const addCanvasSavedToGalleryListener = () => {
return;
}
dispatch(
imagesApi.endpoints.uploadImage.initiate({
const imageUploadedRequest = dispatch(
imageUploaded({
file: new File([blob], 'savedCanvas.png', {
type: 'image/png',
}),
image_category: 'general',
is_intermediate: false,
postUploadAction: {
type: 'TOAST',
toastOptions: { title: 'Canvas Saved to Gallery' },
type: 'TOAST_CANVAS_SAVED_TO_GALLERY',
},
})
);
const [{ payload: uploadedImageDTO }] = await take(
(
uploadedImageAction
): uploadedImageAction is ReturnType<typeof imageUploaded.fulfilled> =>
imageUploaded.fulfilled.match(uploadedImageAction) &&
uploadedImageAction.meta.requestId === imageUploadedRequest.requestId
);
dispatch(imageUpserted(uploadedImageDTO));
},
});
};
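
Both canvas listeners follow the same pattern: dispatch an upload thunk, hold on to its `requestId`, then `take` the first fulfilled action whose `meta.requestId` matches, so two concurrent uploads can never be cross-wired. A minimal self-contained sketch with a hypothetical `uploadFile` thunk:

```
import {
  createAction,
  createAsyncThunk,
  createListenerMiddleware,
} from '@reduxjs/toolkit';

// Hypothetical thunk standing in for the image-upload thunk.
const uploadFile = createAsyncThunk('files/upload', async (name: string) => {
  return { file_name: name };
});

const saveRequested = createAction('files/saveRequested');
const listenerMiddleware = createListenerMiddleware();

listenerMiddleware.startListening({
  actionCreator: saveRequested,
  effect: async (action, { dispatch, take }) => {
    // Keep the requestId so we can pair the eventual fulfilled action
    // with this specific dispatch, not some other in-flight upload.
    const request = dispatch(uploadFile('savedCanvas.png'));

    const [{ payload }] = await take(
      (a): a is ReturnType<typeof uploadFile.fulfilled> =>
        uploadFile.fulfilled.match(a) &&
        a.meta.requestId === request.requestId
    );

    console.log('uploaded', payload.file_name);
  },
});
```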

View File

@@ -2,10 +2,10 @@ import { log } from 'app/logging/useLogger';
import { controlNetImageProcessed } from 'features/controlNet/store/actions';
import { controlNetProcessedImageChanged } from 'features/controlNet/store/controlNetSlice';
import { sessionReadyToInvoke } from 'features/system/store/actions';
import { imagesApi } from 'services/api/endpoints/images';
import { isImageOutput } from 'services/api/guards';
import { imageDTOReceived } from 'services/api/thunks/image';
import { sessionCreated } from 'services/api/thunks/session';
import { Graph, ImageDTO } from 'services/api/types';
import { Graph } from 'services/api/types';
import { socketInvocationComplete } from 'services/events/actions';
import { startAppListening } from '..';
@@ -62,13 +62,12 @@ export const addControlNetImageProcessedListener = () => {
invocationCompleteAction.payload.data.result.image;
// Wait for the ImageDTO to be received
const [{ payload }] = await take(
(action) =>
imagesApi.endpoints.getImageDTO.matchFulfilled(action) &&
const [imageMetadataReceivedAction] = await take(
(action): action is ReturnType<typeof imageDTOReceived.fulfilled> =>
imageDTOReceived.fulfilled.match(action) &&
action.payload.image_name === image_name
);
const processedControlImage = payload as ImageDTO;
const processedControlImage = imageMetadataReceivedAction.payload;
moduleLog.debug(
{ data: { arg: action.payload, processedControlImage } },

View File

@@ -1,30 +1,31 @@
import { log } from 'app/logging/useLogger';
import { imagesApi } from 'services/api/endpoints/images';
import { boardImagesApi } from 'services/api/endpoints/boardImages';
import { startAppListening } from '..';
const moduleLog = log.child({ namespace: 'boards' });
export const addImageAddedToBoardFulfilledListener = () => {
startAppListening({
matcher: imagesApi.endpoints.addImageToBoard.matchFulfilled,
matcher: boardImagesApi.endpoints.addImageToBoard.matchFulfilled,
effect: (action, { getState, dispatch }) => {
const { board_id, imageDTO } = action.meta.arg.originalArgs;
const { board_id, image_name } = action.meta.arg.originalArgs;
// TODO: update listImages cache for this board
moduleLog.debug({ data: { board_id, imageDTO } }, 'Image added to board');
moduleLog.debug(
{ data: { board_id, image_name } },
'Image added to board'
);
},
});
};
export const addImageAddedToBoardRejectedListener = () => {
startAppListening({
matcher: imagesApi.endpoints.addImageToBoard.matchRejected,
matcher: boardImagesApi.endpoints.addImageToBoard.matchRejected,
effect: (action, { getState, dispatch }) => {
const { board_id, imageDTO } = action.meta.arg.originalArgs;
const { board_id, image_name } = action.meta.arg.originalArgs;
moduleLog.debug(
{ data: { board_id, imageDTO } },
{ data: { board_id, image_name } },
'Problem adding image to board'
);
},

View File

@@ -1,17 +1,19 @@
import { log } from 'app/logging/useLogger';
import { resetCanvas } from 'features/canvas/store/canvasSlice';
import { controlNetReset } from 'features/controlNet/store/controlNetSlice';
import { selectListImagesBaseQueryArgs } from 'features/gallery/store/gallerySelectors';
import { imageSelected } from 'features/gallery/store/gallerySlice';
import { selectNextImageToSelect } from 'features/gallery/store/gallerySelectors';
import {
imageRemoved,
imageSelected,
} from 'features/gallery/store/gallerySlice';
import {
imageDeletionConfirmed,
isModalOpenChanged,
} from 'features/imageDeletion/store/imageDeletionSlice';
import { nodeEditorReset } from 'features/nodes/store/nodesSlice';
import { clearInitialImage } from 'features/parameters/store/generationSlice';
import { clamp } from 'lodash-es';
import { api } from 'services/api';
import { imagesApi } from 'services/api/endpoints/images';
import { imageDeleted } from 'services/api/thunks/image';
import { startAppListening } from '..';
const moduleLog = log.child({ namespace: 'image' });
@@ -34,28 +36,10 @@ export const addRequestedImageDeletionListener = () => {
state.gallery.selection[state.gallery.selection.length - 1];
if (lastSelectedImage === image_name) {
const baseQueryArgs = selectListImagesBaseQueryArgs(state);
const { data } =
imagesApi.endpoints.listImages.select(baseQueryArgs)(state);
const ids = data?.ids ?? [];
const deletedImageIndex = ids.findIndex(
(result) => result.toString() === image_name
);
const filteredIds = ids.filter((id) => id.toString() !== image_name);
const newSelectedImageIndex = clamp(
deletedImageIndex,
0,
filteredIds.length - 1
);
const newSelectedImageId = filteredIds[newSelectedImageIndex];
const newSelectedImageId = selectNextImageToSelect(state, image_name);
if (newSelectedImageId) {
dispatch(imageSelected(newSelectedImageId as string));
dispatch(imageSelected(newSelectedImageId));
} else {
dispatch(imageSelected(null));
}
@@ -79,15 +63,16 @@ export const addRequestedImageDeletionListener = () => {
dispatch(nodeEditorReset());
}
// Preemptively remove from gallery
dispatch(imageRemoved(image_name));
// Delete from server
const { requestId } = dispatch(
imagesApi.endpoints.deleteImage.initiate(imageDTO)
);
const { requestId } = dispatch(imageDeleted({ image_name }));
// Wait for successful deletion, then trigger boards to re-fetch
const wasImageDeleted = await condition(
(action) =>
imagesApi.endpoints.deleteImage.matchFulfilled(action) &&
(action): action is ReturnType<typeof imageDeleted.fulfilled> =>
imageDeleted.fulfilled.match(action) &&
action.meta.requestId === requestId,
30000
);
@@ -106,7 +91,7 @@ export const addRequestedImageDeletionListener = () => {
*/
export const addImageDeletedPendingListener = () => {
startAppListening({
matcher: imagesApi.endpoints.deleteImage.matchPending,
actionCreator: imageDeleted.pending,
effect: (action, { dispatch, getState }) => {
//
},
@@ -118,12 +103,9 @@ export const addImageDeletedPendingListener = () => {
*/
export const addImageDeletedFulfilledListener = () => {
startAppListening({
matcher: imagesApi.endpoints.deleteImage.matchFulfilled,
actionCreator: imageDeleted.fulfilled,
effect: (action, { dispatch, getState }) => {
moduleLog.debug(
{ data: { image: action.meta.arg.originalArgs } },
'Image deleted'
);
moduleLog.debug({ data: { image: action.meta.arg } }, 'Image deleted');
},
});
};
@@ -133,10 +115,10 @@ export const addImageDeletedFulfilledListener = () => {
*/
export const addImageDeletedRejectedListener = () => {
startAppListening({
matcher: imagesApi.endpoints.deleteImage.matchRejected,
actionCreator: imageDeleted.rejected,
effect: (action, { dispatch, getState }) => {
moduleLog.debug(
{ data: { image: action.meta.arg.originalArgs } },
{ data: { image: action.meta.arg } },
'Unable to delete image'
);
},

View File

@@ -10,9 +10,12 @@ import {
imageSelected,
imagesAddedToBatch,
} from 'features/gallery/store/gallerySlice';
import { fieldValueChanged } from 'features/nodes/store/nodesSlice';
import {
fieldValueChanged,
imageCollectionFieldValueChanged,
} from 'features/nodes/store/nodesSlice';
import { initialImageChanged } from 'features/parameters/store/generationSlice';
import { imagesApi } from 'services/api/endpoints/images';
import { boardImagesApi } from 'services/api/endpoints/boardImages';
import { startAppListening } from '../';
const moduleLog = log.child({ namespace: 'dnd' });
@@ -134,23 +137,23 @@ export const addImageDroppedListener = () => {
return;
}
// // set multiple nodes images (multiple images handler)
// if (
// overData.actionType === 'SET_MULTI_NODES_IMAGE' &&
// activeData.payloadType === 'IMAGE_NAMES'
// ) {
// const { fieldName, nodeId } = overData.context;
// dispatch(
// imageCollectionFieldValueChanged({
// nodeId,
// fieldName,
// value: activeData.payload.image_names.map((image_name) => ({
// image_name,
// })),
// })
// );
// return;
// }
// set multiple nodes images (multiple images handler)
if (
overData.actionType === 'SET_MULTI_NODES_IMAGE' &&
activeData.payloadType === 'IMAGE_NAMES'
) {
const { fieldName, nodeId } = overData.context;
dispatch(
imageCollectionFieldValueChanged({
nodeId,
fieldName,
value: activeData.payload.image_names.map((image_name) => ({
image_name,
})),
})
);
return;
}
// add image to board
if (
@@ -159,95 +162,97 @@ export const addImageDroppedListener = () => {
activeData.payload.imageDTO &&
overData.context.boardId
) {
const { imageDTO } = activeData.payload;
const { image_name } = activeData.payload.imageDTO;
const { boardId } = overData.context;
// if the board is "No Board", this is a remove action
if (boardId === 'no_board') {
dispatch(
imagesApi.endpoints.removeImageFromBoard.initiate({
imageDTO,
})
);
return;
}
// Handle adding image to batch
if (boardId === 'batch') {
// TODO
}
// Otherwise, add the image to the board
dispatch(
imagesApi.endpoints.addImageToBoard.initiate({
imageDTO,
boardImagesApi.endpoints.addImageToBoard.initiate({
image_name,
board_id: boardId,
})
);
return;
}
// // add gallery selection to board
// if (
// overData.actionType === 'MOVE_BOARD' &&
// activeData.payloadType === 'IMAGE_NAMES' &&
// overData.context.boardId
// ) {
// console.log('adding gallery selection to board');
// const board_id = overData.context.boardId;
// dispatch(
// boardImagesApi.endpoints.addManyBoardImages.initiate({
// board_id,
// image_names: activeData.payload.image_names,
// })
// );
// return;
// }
// remove image from board
if (
overData.actionType === 'MOVE_BOARD' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO &&
overData.context.boardId === null
) {
const { image_name, board_id } = activeData.payload.imageDTO;
if (board_id) {
dispatch(
boardImagesApi.endpoints.removeImageFromBoard.initiate({
image_name,
board_id,
})
);
}
return;
}
// // remove gallery selection from board
// if (
// overData.actionType === 'MOVE_BOARD' &&
// activeData.payloadType === 'IMAGE_NAMES' &&
// overData.context.boardId === null
// ) {
// console.log('removing gallery selection from board');
// dispatch(
// boardImagesApi.endpoints.deleteManyBoardImages.initiate({
// image_names: activeData.payload.image_names,
// })
// );
// return;
// }
// add gallery selection to board
if (
overData.actionType === 'MOVE_BOARD' &&
activeData.payloadType === 'IMAGE_NAMES' &&
overData.context.boardId
) {
console.log('adding gallery selection to board');
const board_id = overData.context.boardId;
dispatch(
boardImagesApi.endpoints.addManyBoardImages.initiate({
board_id,
image_names: activeData.payload.image_names,
})
);
return;
}
// // add batch selection to board
// if (
// overData.actionType === 'MOVE_BOARD' &&
// activeData.payloadType === 'IMAGE_NAMES' &&
// overData.context.boardId
// ) {
// const board_id = overData.context.boardId;
// dispatch(
// boardImagesApi.endpoints.addManyBoardImages.initiate({
// board_id,
// image_names: activeData.payload.image_names,
// })
// );
// return;
// }
// remove gallery selection from board
if (
overData.actionType === 'MOVE_BOARD' &&
activeData.payloadType === 'IMAGE_NAMES' &&
overData.context.boardId === null
) {
console.log('removing gallery selection from board');
dispatch(
boardImagesApi.endpoints.deleteManyBoardImages.initiate({
image_names: activeData.payload.image_names,
})
);
return;
}
// // remove batch selection from board
// if (
// overData.actionType === 'MOVE_BOARD' &&
// activeData.payloadType === 'IMAGE_NAMES' &&
// overData.context.boardId === null
// ) {
// dispatch(
// boardImagesApi.endpoints.deleteManyBoardImages.initiate({
// image_names: activeData.payload.image_names,
// })
// );
// return;
// }
// add batch selection to board
if (
overData.actionType === 'MOVE_BOARD' &&
activeData.payloadType === 'IMAGE_NAMES' &&
overData.context.boardId
) {
const board_id = overData.context.boardId;
dispatch(
boardImagesApi.endpoints.addManyBoardImages.initiate({
board_id,
image_names: activeData.payload.image_names,
})
);
return;
}
// remove batch selection from board
if (
overData.actionType === 'MOVE_BOARD' &&
activeData.payloadType === 'IMAGE_NAMES' &&
overData.context.boardId === null
) {
dispatch(
boardImagesApi.endpoints.deleteManyBoardImages.initiate({
image_names: activeData.payload.image_names,
})
);
return;
}
},
});
};

View File

@@ -0,0 +1,51 @@
import { log } from 'app/logging/useLogger';
import { imageUpserted } from 'features/gallery/store/gallerySlice';
import { imageDTOReceived, imageUpdated } from 'services/api/thunks/image';
import { startAppListening } from '..';
const moduleLog = log.child({ namespace: 'image' });
export const addImageMetadataReceivedFulfilledListener = () => {
startAppListening({
actionCreator: imageDTOReceived.fulfilled,
effect: (action, { getState, dispatch }) => {
const image = action.payload;
const state = getState();
if (
image.session_id === state.canvas.layerState.stagingArea.sessionId &&
state.canvas.shouldAutoSave
) {
dispatch(
imageUpdated({
image_name: image.image_name,
is_intermediate: image.is_intermediate,
})
);
} else if (image.is_intermediate) {
// No further actions needed for intermediate images
moduleLog.trace(
{ data: { image } },
'Image metadata received (intermediate), skipping'
);
return;
}
moduleLog.debug({ data: { image } }, 'Image metadata received');
dispatch(imageUpserted(image));
},
});
};
export const addImageMetadataReceivedRejectedListener = () => {
startAppListening({
actionCreator: imageDTOReceived.rejected,
effect: (action, { getState, dispatch }) => {
moduleLog.debug(
{ data: { image: action.meta.arg } },
'Problem receiving image metadata'
);
},
});
};

View File

@@ -1,12 +1,12 @@
import { log } from 'app/logging/useLogger';
import { imagesApi } from 'services/api/endpoints/images';
import { boardImagesApi } from 'services/api/endpoints/boardImages';
import { startAppListening } from '..';
const moduleLog = log.child({ namespace: 'boards' });
export const addImageRemovedFromBoardFulfilledListener = () => {
startAppListening({
matcher: imagesApi.endpoints.removeImageFromBoard.matchFulfilled,
matcher: boardImagesApi.endpoints.removeImageFromBoard.matchFulfilled,
effect: (action, { getState, dispatch }) => {
const { board_id, image_name } = action.meta.arg.originalArgs;
@@ -20,7 +20,7 @@ export const addImageRemovedFromBoardFulfilledListener = () => {
export const addImageRemovedFromBoardRejectedListener = () => {
startAppListening({
matcher: imagesApi.endpoints.removeImageFromBoard.matchRejected,
matcher: boardImagesApi.endpoints.removeImageFromBoard.matchRejected,
effect: (action, { getState, dispatch }) => {
const { board_id, image_name } = action.meta.arg.originalArgs;

View File

@@ -1,20 +1,15 @@
import { log } from 'app/logging/useLogger';
import { imagesApi } from 'services/api/endpoints/images';
import { startAppListening } from '..';
import { imageUpdated } from 'services/api/thunks/image';
import { log } from 'app/logging/useLogger';
const moduleLog = log.child({ namespace: 'image' });
export const addImageUpdatedFulfilledListener = () => {
startAppListening({
matcher: imagesApi.endpoints.updateImage.matchFulfilled,
actionCreator: imageUpdated.fulfilled,
effect: (action, { dispatch, getState }) => {
moduleLog.debug(
{
data: {
oldImage: action.meta.arg.originalArgs,
updatedImage: action.payload,
},
},
{ oldImage: action.meta.arg, updatedImage: action.payload },
'Image updated'
);
},
@@ -23,12 +18,9 @@ export const addImageUpdatedFulfilledListener = () => {
export const addImageUpdatedRejectedListener = () => {
startAppListening({
matcher: imagesApi.endpoints.updateImage.matchRejected,
actionCreator: imageUpdated.rejected,
effect: (action, { dispatch }) => {
moduleLog.debug(
{ data: action.meta.arg.originalArgs },
'Image update failed'
);
moduleLog.debug({ oldImage: action.meta.arg }, 'Image update failed');
},
});
};

View File

@@ -1,87 +1,49 @@
import { UseToastOptions } from '@chakra-ui/react';
import { log } from 'app/logging/useLogger';
import { setInitialCanvasImage } from 'features/canvas/store/canvasSlice';
import { controlNetImageChanged } from 'features/controlNet/store/controlNetSlice';
import { imagesAddedToBatch } from 'features/gallery/store/gallerySlice';
import {
imageUpserted,
imagesAddedToBatch,
} from 'features/gallery/store/gallerySlice';
import { fieldValueChanged } from 'features/nodes/store/nodesSlice';
import { initialImageChanged } from 'features/parameters/store/generationSlice';
import { addToast } from 'features/system/store/systemSlice';
import { boardsApi } from 'services/api/endpoints/boards';
import { imageUploaded } from 'services/api/thunks/image';
import { startAppListening } from '..';
import {
SYSTEM_BOARDS,
imagesApi,
} from '../../../../../services/api/endpoints/images';
const moduleLog = log.child({ namespace: 'image' });
const DEFAULT_UPLOADED_TOAST: UseToastOptions = {
title: 'Image Uploaded',
status: 'success',
};
export const addImageUploadedFulfilledListener = () => {
startAppListening({
matcher: imagesApi.endpoints.uploadImage.matchFulfilled,
actionCreator: imageUploaded.fulfilled,
effect: (action, { dispatch, getState }) => {
const imageDTO = action.payload;
const state = getState();
const { selectedBoardId } = state.gallery;
const image = action.payload;
moduleLog.debug({ arg: '<Blob>', imageDTO }, 'Image uploaded');
moduleLog.debug({ arg: '<Blob>', image }, 'Image uploaded');
const { postUploadAction } = action.meta.arg.originalArgs;
if (
// No further actions needed for intermediate images,
action.payload.is_intermediate &&
// unless they have an explicit post-upload action
!postUploadAction
) {
if (action.payload.is_intermediate) {
// No further actions needed for intermediate images
return;
}
// default action - just upload and alert user
if (postUploadAction?.type === 'TOAST') {
const { toastOptions } = postUploadAction;
if (SYSTEM_BOARDS.includes(selectedBoardId)) {
dispatch(addToast({ ...DEFAULT_UPLOADED_TOAST, ...toastOptions }));
} else {
// Add this image to the board
dispatch(
imagesApi.endpoints.addImageToBoard.initiate({
board_id: selectedBoardId,
imageDTO,
})
);
dispatch(imageUpserted(image));
// Attempt to get the board's name for the toast
const { data } = boardsApi.endpoints.listAllBoards.select()(state);
const { postUploadAction } = action.meta.arg;
// Fall back to just the board id if we can't find the board for some reason
const board = data?.find((b) => b.board_id === selectedBoardId);
const description = board
? `Added to board ${board.board_name}`
: `Added to board ${selectedBoardId}`;
if (postUploadAction?.type === 'TOAST_CANVAS_SAVED_TO_GALLERY') {
dispatch(
addToast({ title: 'Canvas Saved to Gallery', status: 'success' })
);
return;
}
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description,
})
);
}
if (postUploadAction?.type === 'TOAST_CANVAS_MERGED') {
dispatch(addToast({ title: 'Canvas Merged', status: 'success' }));
return;
}
if (postUploadAction?.type === 'SET_CANVAS_INITIAL_IMAGE') {
dispatch(setInitialCanvasImage(imageDTO));
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description: 'Set as canvas initial image',
})
);
dispatch(setInitialCanvasImage(image));
return;
}
@@ -90,49 +52,30 @@ export const addImageUploadedFulfilledListener = () => {
dispatch(
controlNetImageChanged({
controlNetId,
controlImage: imageDTO.image_name,
})
);
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description: 'Set as control image',
controlImage: image.image_name,
})
);
return;
}
if (postUploadAction?.type === 'SET_INITIAL_IMAGE') {
dispatch(initialImageChanged(imageDTO));
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description: 'Set as initial image',
})
);
dispatch(initialImageChanged(image));
return;
}
if (postUploadAction?.type === 'SET_NODES_IMAGE') {
const { nodeId, fieldName } = postUploadAction;
dispatch(fieldValueChanged({ nodeId, fieldName, value: imageDTO }));
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description: `Set as node field ${fieldName}`,
})
);
dispatch(fieldValueChanged({ nodeId, fieldName, value: image }));
return;
}
if (postUploadAction?.type === 'TOAST_UPLOADED') {
dispatch(addToast({ title: 'Image Uploaded', status: 'success' }));
return;
}
if (postUploadAction?.type === 'ADD_TO_BATCH') {
dispatch(imagesAddedToBatch([imageDTO.image_name]));
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description: 'Added to batch',
})
);
dispatch(imagesAddedToBatch([image.image_name]));
return;
}
},
@@ -141,10 +84,10 @@ export const addImageUploadedFulfilledListener = () => {
export const addImageUploadedRejectedListener = () => {
startAppListening({
matcher: imagesApi.endpoints.uploadImage.matchRejected,
actionCreator: imageUploaded.rejected,
effect: (action, { dispatch }) => {
const { file, postUploadAction, ...rest } = action.meta.arg.originalArgs;
const sanitizedData = { arg: { ...rest, file: '<Blob>' } };
const { formData, ...rest } = action.meta.arg;
const sanitizedData = { arg: { ...rest, formData: { file: '<Blob>' } } };
moduleLog.error({ data: sanitizedData }, 'Image upload failed');
dispatch(
addToast({

View File

@@ -0,0 +1,37 @@
import { log } from 'app/logging/useLogger';
import { startAppListening } from '..';
import { imageUrlsReceived } from 'services/api/thunks/image';
import { imageUpdatedOne } from 'features/gallery/store/gallerySlice';
const moduleLog = log.child({ namespace: 'image' });
export const addImageUrlsReceivedFulfilledListener = () => {
startAppListening({
actionCreator: imageUrlsReceived.fulfilled,
effect: (action, { getState, dispatch }) => {
const image = action.payload;
moduleLog.debug({ data: { image } }, 'Image URLs received');
const { image_name, image_url, thumbnail_url } = image;
dispatch(
imageUpdatedOne({
id: image_name,
changes: { image_url, thumbnail_url },
})
);
},
});
};
export const addImageUrlsReceivedRejectedListener = () => {
startAppListening({
actionCreator: imageUrlsReceived.rejected,
effect: (action, { getState, dispatch }) => {
moduleLog.debug(
{ data: { image: action.meta.arg } },
'Problem getting image URLs'
);
},
});
};
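
`imageUpdatedOne` above is the entity-adapter "update one" pattern: patch a subset of fields on an entity addressed by its id. A minimal sketch of the slice side, with an illustrative `Image` type:

```
import {
  createEntityAdapter,
  createSlice,
  PayloadAction,
  Update,
} from '@reduxjs/toolkit';

type Image = {
  image_name: string;
  image_url: string;
  thumbnail_url: string;
};

// Entities are keyed by image_name rather than a numeric id.
const imagesAdapter = createEntityAdapter<Image>({
  selectId: (image) => image.image_name,
});

const gallerySlice = createSlice({
  name: 'gallery',
  initialState: imagesAdapter.getInitialState(),
  reducers: {
    // Accepts { id, changes } and shallow-merges `changes` into the entity.
    imageUpdatedOne: (state, action: PayloadAction<Update<Image>>) => {
      imagesAdapter.updateOne(state, action.payload);
    },
  },
});

export const { imageUpdatedOne } = gallerySlice.actions;
```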

View File

@@ -1,9 +1,11 @@
import { makeToast } from 'app/components/Toaster';
import { initialImageSelected } from 'features/parameters/store/actions';
import { initialImageChanged } from 'features/parameters/store/generationSlice';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { addToast } from 'features/system/store/systemSlice';
import { startAppListening } from '..';
import { initialImageSelected } from 'features/parameters/store/actions';
import { makeToast } from 'app/components/Toaster';
import { selectImagesById } from 'features/gallery/store/gallerySlice';
import { isImageDTO } from 'services/api/guards';
export const addInitialImageSelectedListener = () => {
startAppListening({
@@ -18,7 +20,25 @@ export const addInitialImageSelectedListener = () => {
return;
}
dispatch(initialImageChanged(action.payload));
if (isImageDTO(action.payload)) {
dispatch(initialImageChanged(action.payload));
dispatch(addToast(makeToast(t('toast.sentToImageToImage'))));
return;
}
const imageName = action.payload;
const image = selectImagesById(getState(), imageName);
if (!image) {
dispatch(
addToast(
makeToast({ title: t('toast.imageNotLoadedDesc'), status: 'error' })
)
);
return;
}
dispatch(initialImageChanged(image));
dispatch(addToast(makeToast(t('toast.sentToImageToImage'))));
},
});

View File

@@ -0,0 +1,40 @@
import { log } from 'app/logging/useLogger';
import { startAppListening } from '..';
import { serializeError } from 'serialize-error';
import { receivedPageOfImages } from 'services/api/thunks/image';
import { imagesApi } from 'services/api/endpoints/images';
const moduleLog = log.child({ namespace: 'gallery' });
export const addReceivedPageOfImagesFulfilledListener = () => {
startAppListening({
actionCreator: receivedPageOfImages.fulfilled,
effect: (action, { getState, dispatch }) => {
const { items } = action.payload;
moduleLog.debug(
{ data: { payload: action.payload } },
`Received ${items.length} images`
);
items.forEach((image) => {
dispatch(
imagesApi.util.upsertQueryData('getImageDTO', image.image_name, image)
);
});
},
});
};
export const addReceivedPageOfImagesRejectedListener = () => {
startAppListening({
actionCreator: receivedPageOfImages.rejected,
effect: (action, { getState, dispatch }) => {
if (action.payload) {
moduleLog.debug(
{ data: { error: serializeError(action.payload) } },
'Problem receiving images'
);
}
},
});
};
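
The fulfilled listener above seeds the RTK Query cache: each image in the page response is written into the `getImageDTO` cache entry for its name, so later queries for that image resolve without a network round trip. A minimal sketch, assuming an api slice with a `getImageDTO` query endpoint (slice shape is illustrative):

```
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query';

type ImageDTO = { image_name: string; image_url: string };

// Hypothetical api slice; the real one lives in services/api/endpoints/images.
const imagesApi = createApi({
  reducerPath: 'imagesApi',
  baseQuery: fetchBaseQuery({ baseUrl: '/api/v1/' }),
  endpoints: (build) => ({
    getImageDTO: build.query<ImageDTO, string>({
      query: (imageName) => `images/${imageName}`,
    }),
  }),
});

// Inside a listener or thunk with access to dispatch and a fetched page:
const seedCache = (
  dispatch: (action: unknown) => unknown,
  items: ImageDTO[]
) => {
  items.forEach((image) => {
    // Writes (or overwrites) the cache entry keyed by this query arg.
    dispatch(
      imagesApi.util.upsertQueryData('getImageDTO', image.image_name, image)
    );
  });
};
```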

View File

@@ -1,7 +1,7 @@
import { log } from 'app/logging/useLogger';
import { serializeError } from 'serialize-error';
import { sessionCreated } from 'services/api/thunks/session';
import { startAppListening } from '..';
import { sessionCreated } from 'services/api/thunks/session';
import { serializeError } from 'serialize-error';
const moduleLog = log.child({ namespace: 'session' });

View File

@@ -1,17 +1,9 @@
import { log } from 'app/logging/useLogger';
import { addImageToStagingArea } from 'features/canvas/store/canvasSlice';
import {
IMAGE_CATEGORIES,
boardIdSelected,
imageSelected,
} from 'features/gallery/store/gallerySlice';
import { progressImageSet } from 'features/system/store/systemSlice';
import {
SYSTEM_BOARDS,
imagesAdapter,
imagesApi,
} from 'services/api/endpoints/images';
import { boardImagesApi } from 'services/api/endpoints/boardImages';
import { isImageOutput } from 'services/api/guards';
import { imageDTOReceived } from 'services/api/thunks/image';
import { sessionCanceled } from 'services/api/thunks/session';
import {
appSocketInvocationComplete,
@@ -30,6 +22,7 @@ export const addInvocationCompleteEventListener = () => {
{ data: action.payload },
`Invocation complete (${action.payload.data.node.type})`
);
const session_id = action.payload.data.graph_execution_state_id;
const { cancelType, isCancelScheduled, boardIdToAddTo } =
@@ -46,70 +39,33 @@ export const addInvocationCompleteEventListener = () => {
// This complete event has an associated image output
if (isImageOutput(result) && !nodeDenylist.includes(node.type)) {
const { image_name } = result.image;
const { canvas, gallery } = getState();
const imageDTO = await dispatch(
imagesApi.endpoints.getImageDTO.initiate(image_name)
).unwrap();
// Get its metadata
dispatch(
imageDTOReceived({
image_name,
})
);
// Add canvas images to the staging area
const [{ payload: imageDTO }] = await take(
imageDTOReceived.fulfilled.match
);
// Handle canvas image
if (
graph_execution_state_id === canvas.layerState.stagingArea.sessionId
graph_execution_state_id ===
getState().canvas.layerState.stagingArea.sessionId
) {
dispatch(addImageToStagingArea(imageDTO));
}
if (!imageDTO.is_intermediate) {
// update the cache for 'All Images'
if (boardIdToAddTo && !imageDTO.is_intermediate) {
dispatch(
imagesApi.util.updateQueryData(
'listImages',
{
categories: IMAGE_CATEGORIES,
},
(draft) => {
imagesAdapter.addOne(draft, imageDTO);
draft.total = draft.total + 1;
}
)
boardImagesApi.endpoints.addImageToBoard.initiate({
board_id: boardIdToAddTo,
image_name,
})
);
// update the cache for 'No Board'
dispatch(
imagesApi.util.updateQueryData(
'listImages',
{
board_id: 'none',
},
(draft) => {
imagesAdapter.addOne(draft, imageDTO);
draft.total = draft.total + 1;
}
)
);
// add image to the board if we had one selected
if (boardIdToAddTo && !SYSTEM_BOARDS.includes(boardIdToAddTo)) {
dispatch(
imagesApi.endpoints.addImageToBoard.initiate({
board_id: boardIdToAddTo,
imageDTO,
})
);
}
const { selectedBoardId } = gallery;
if (boardIdToAddTo && boardIdToAddTo !== selectedBoardId) {
dispatch(boardIdSelected(boardIdToAddTo));
} else if (!boardIdToAddTo) {
dispatch(boardIdSelected('all'));
}
// If auto-switch is enabled, select the new image
if (getState().gallery.shouldAutoSwitch) {
dispatch(imageSelected(imageDTO.image_name));
}
}
dispatch(progressImageSet(null));

View File

@@ -1,8 +1,9 @@
import { log } from 'app/logging/useLogger';
import { stagingAreaImageSaved } from 'features/canvas/store/actions';
import { addToast } from 'features/system/store/systemSlice';
import { imagesApi } from 'services/api/endpoints/images';
import { startAppListening } from '..';
import { log } from 'app/logging/useLogger';
import { imageUpdated } from 'services/api/thunks/image';
import { imageUpserted } from 'features/gallery/store/gallerySlice';
import { addToast } from 'features/system/store/systemSlice';
const moduleLog = log.child({ namespace: 'canvas' });
@@ -10,27 +11,41 @@ export const addStagingAreaImageSavedListener = () => {
startAppListening({
actionCreator: stagingAreaImageSaved,
effect: async (action, { dispatch, getState, take }) => {
const { imageDTO } = action.payload;
const { imageName } = action.payload;
dispatch(
imagesApi.endpoints.updateImage.initiate({
imageDTO,
changes: { is_intermediate: false },
imageUpdated({
image_name: imageName,
is_intermediate: false,
})
)
.unwrap()
.then((image) => {
dispatch(addToast({ title: 'Image Saved', status: 'success' }));
})
.catch((error) => {
dispatch(
addToast({
title: 'Image Saving Failed',
description: error.message,
status: 'error',
})
);
});
);
const [imageUpdatedAction] = await take(
(action) =>
(imageUpdated.fulfilled.match(action) ||
imageUpdated.rejected.match(action)) &&
action.meta.arg.image_name === imageName
);
if (imageUpdated.rejected.match(imageUpdatedAction)) {
moduleLog.error(
{ data: { arg: imageUpdatedAction.meta.arg } },
'Image saving failed'
);
dispatch(
addToast({
title: 'Image Saving Failed',
description: imageUpdatedAction.error.message,
status: 'error',
})
);
return;
}
if (imageUpdated.fulfilled.match(imageUpdatedAction)) {
dispatch(imageUpserted(imageUpdatedAction.payload));
dispatch(addToast({ title: 'Image Saved', status: 'success' }));
}
},
});
};
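
The thunk-based side of this listener dispatches `imageUpdated` and then suspends on `take` until that exact request settles, branching on fulfilled versus rejected. A minimal sketch of the pattern, assuming a hypothetical `imageUpdated` thunk and endpoint:

```
import {
  createAction,
  createAsyncThunk,
  createListenerMiddleware,
  isAnyOf,
} from '@reduxjs/toolkit';

// Hypothetical stand-in for the app's imageUpdated thunk.
const imageUpdated = createAsyncThunk(
  'image/updated',
  async (arg: { image_name: string; is_intermediate: boolean }) => {
    const res = await fetch(`/api/v1/images/${arg.image_name}`, {
      method: 'PATCH',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ is_intermediate: arg.is_intermediate }),
    });
    if (!res.ok) throw new Error(res.statusText);
    return (await res.json()) as { image_name: string };
  }
);

const stagingAreaImageSaved = createAction<{ imageName: string }>(
  'canvas/stagingAreaImageSaved'
);

export const listenerMiddleware = createListenerMiddleware();

listenerMiddleware.startListening({
  actionCreator: stagingAreaImageSaved,
  effect: async (action, { dispatch, take }) => {
    const { imageName } = action.payload;
    dispatch(imageUpdated({ image_name: imageName, is_intermediate: false }));

    // Suspend until *this* image's request settles; matching on the
    // thunk arg keeps concurrent saves of other images from waking us.
    const [settled] = await take(
      (a) =>
        isAnyOf(imageUpdated.fulfilled, imageUpdated.rejected)(a) &&
        a.meta.arg.image_name === imageName
    );

    if (imageUpdated.rejected.match(settled)) {
      console.error('Image saving failed', settled.error.message);
      return;
    }
    // settled is narrowed to the fulfilled action here.
  },
});
```

The take-with-predicate move replaces the `.unwrap().then().catch()` chain on the other side of the hunk; both are valid, but `take` keeps the effect body linear.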

View File

@@ -0,0 +1,91 @@
import { socketConnected } from 'services/events/actions';
import { startAppListening } from '..';
import { createSelector } from '@reduxjs/toolkit';
import { generationSelector } from 'features/parameters/store/generationSelectors';
import { canvasSelector } from 'features/canvas/store/canvasSelectors';
import { nodesSelector } from 'features/nodes/store/nodesSlice';
import { controlNetSelector } from 'features/controlNet/store/controlNetSlice';
import { forEach, uniq } from 'lodash-es';
import { imageUrlsReceived } from 'services/api/thunks/image';
import { log } from 'app/logging/useLogger';
import { selectImagesEntities } from 'features/gallery/store/gallerySlice';
const moduleLog = log.child({ namespace: 'images' });
const selectAllUsedImages = createSelector(
[
generationSelector,
canvasSelector,
nodesSelector,
controlNetSelector,
selectImagesEntities,
],
(generation, canvas, nodes, controlNet, imageEntities) => {
const allUsedImages: string[] = [];
if (generation.initialImage) {
allUsedImages.push(generation.initialImage.imageName);
}
canvas.layerState.objects.forEach((obj) => {
if (obj.kind === 'image') {
allUsedImages.push(obj.imageName);
}
});
nodes.nodes.forEach((node) => {
forEach(node.data.inputs, (input) => {
if (input.type === 'image' && input.value) {
allUsedImages.push(input.value.image_name);
}
});
});
forEach(controlNet.controlNets, (c) => {
if (c.controlImage) {
allUsedImages.push(c.controlImage);
}
if (c.processedControlImage) {
allUsedImages.push(c.processedControlImage);
}
});
forEach(imageEntities, (image) => {
if (image) {
allUsedImages.push(image.image_name);
}
});
// allUsedImages holds plain name strings, so dedupe with uniq;
// uniqBy keyed on 'image_name' would collapse everything to one entry
return uniq(allUsedImages);
}
);
export const addUpdateImageUrlsOnConnectListener = () => {
startAppListening({
actionCreator: socketConnected,
effect: async (action, { dispatch, getState, take }) => {
const state = getState();
if (!state.config.shouldUpdateImagesOnConnect) {
return;
}
const allUsedImages = selectAllUsedImages(state);
moduleLog.trace(
{ data: allUsedImages },
`Fetching new image URLs for ${allUsedImages.length} images`
);
allUsedImages.forEach((image_name) => {
dispatch(
imageUrlsReceived({
image_name,
})
);
});
},
});
};
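
The selector above is standard reselect composition: each slice contributes its image names, and the memoized combiner dedupes them (with `uniq`, since the collection is plain strings). A stripped-down sketch with hypothetical slice shapes:

```
import { createSelector } from '@reduxjs/toolkit';
import { uniq } from 'lodash-es';

// Hypothetical root-state fragments standing in for the app's slices.
type RootState = {
  generation: { initialImage?: { imageName: string } };
  canvas: {
    layerState: {
      objects: Array<{ kind: 'image' | 'shape'; imageName?: string }>;
    };
  };
};

const selectAllUsedImages = createSelector(
  [(s: RootState) => s.generation, (s: RootState) => s.canvas],
  (generation, canvas) => {
    const names: string[] = [];
    if (generation.initialImage) {
      names.push(generation.initialImage.imageName);
    }
    canvas.layerState.objects.forEach((obj) => {
      if (obj.kind === 'image' && obj.imageName) {
        names.push(obj.imageName);
      }
    });
    // names are plain strings, so uniq is the right dedup here.
    return uniq(names);
  }
);
```

Because the input selectors return stable slice references, the combiner only reruns when one of those slices is actually replaced — which matters for a selector consulted on every socket reconnect.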

View File

@@ -1,37 +0,0 @@
import { createAction } from '@reduxjs/toolkit';
import { log } from 'app/logging/useLogger';
import { buildAdHocUpscaleGraph } from 'features/nodes/util/graphBuilders/buildAdHocUpscaleGraph';
import { sessionReadyToInvoke } from 'features/system/store/actions';
import { sessionCreated } from 'services/api/thunks/session';
import { startAppListening } from '..';
const moduleLog = log.child({ namespace: 'upscale' });
export const upscaleRequested = createAction<{ image_name: string }>(
`upscale/upscaleRequested`
);
export const addUpscaleRequestedListener = () => {
startAppListening({
actionCreator: upscaleRequested,
effect: async (
action,
{ dispatch, getState, take, unsubscribe, subscribe }
) => {
const { image_name } = action.payload;
const { esrganModelName } = getState().postprocessing;
const graph = buildAdHocUpscaleGraph({
image_name,
esrganModelName,
});
// Create a session to run the graph & wait until it's ready to invoke
dispatch(sessionCreated({ graph }));
await take(sessionCreated.fulfilled.match);
dispatch(sessionReadyToInvoke());
},
});
};
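
The removed listener's shape — dispatch a session-creating thunk, park on `take` until it fulfills, then signal readiness — serializes two server round trips without any component involvement. A minimal sketch, with hypothetical session thunks standing in:

```
import {
  createAction,
  createAsyncThunk,
  createListenerMiddleware,
} from '@reduxjs/toolkit';

type Graph = Record<string, unknown>;

// Hypothetical stand-ins for the app's session thunk and ready action.
const sessionCreated = createAsyncThunk(
  'session/created',
  async (arg: { graph: Graph }) => {
    const res = await fetch('/api/v1/sessions', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(arg.graph),
    });
    return (await res.json()) as { id: string };
  }
);
const sessionReadyToInvoke = createAction('system/sessionReadyToInvoke');

export const upscaleRequested = createAction<{ image_name: string }>(
  'upscale/upscaleRequested'
);

export const listenerMiddleware = createListenerMiddleware();

listenerMiddleware.startListening({
  actionCreator: upscaleRequested,
  effect: async (action, { dispatch, take }) => {
    // Build a one-off graph for the requested image (builder elided).
    const graph: Graph = { image_name: action.payload.image_name };
    dispatch(sessionCreated({ graph }));
    // Don't signal readiness until the session exists server-side.
    await take(sessionCreated.fulfilled.match);
    dispatch(sessionReadyToInvoke());
  },
});
```

Note the loose matching: any fulfilled `sessionCreated` satisfies `take` here. The canvas listener below correlates by `requestId` instead, which is the stricter version of the same move.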

View File

@@ -1,20 +1,20 @@
import { startAppListening } from '..';
import { sessionCreated } from 'services/api/thunks/session';
import { buildCanvasGraph } from 'features/nodes/util/graphBuilders/buildCanvasGraph';
import { log } from 'app/logging/useLogger';
import { userInvoked } from 'app/store/actions';
import openBase64ImageInTab from 'common/util/openBase64ImageInTab';
import { canvasGraphBuilt } from 'features/nodes/store/actions';
import { imageUpdated, imageUploaded } from 'services/api/thunks/image';
import { ImageDTO } from 'services/api/types';
import {
canvasSessionIdChanged,
stagingAreaInitialized,
} from 'features/canvas/store/canvasSlice';
import { blobToDataURL } from 'features/canvas/util/blobToDataURL';
import { userInvoked } from 'app/store/actions';
import { getCanvasData } from 'features/canvas/util/getCanvasData';
import { getCanvasGenerationMode } from 'features/canvas/util/getCanvasGenerationMode';
import { canvasGraphBuilt } from 'features/nodes/store/actions';
import { buildCanvasGraph } from 'features/nodes/util/graphBuilders/buildCanvasGraph';
import { blobToDataURL } from 'features/canvas/util/blobToDataURL';
import openBase64ImageInTab from 'common/util/openBase64ImageInTab';
import { sessionReadyToInvoke } from 'features/system/store/actions';
import { imagesApi } from 'services/api/endpoints/images';
import { sessionCreated } from 'services/api/thunks/session';
import { ImageDTO } from 'services/api/types';
import { startAppListening } from '..';
const moduleLog = log.child({ namespace: 'invoke' });
@@ -74,7 +74,7 @@ export const addUserInvokedCanvasListener = () => {
if (['img2img', 'inpaint', 'outpaint'].includes(generationMode)) {
// upload the image, saving the request id
const { requestId: initImageUploadedRequestId } = dispatch(
imagesApi.endpoints.uploadImage.initiate({
imageUploaded({
file: new File([baseBlob], 'canvasInitImage.png', {
type: 'image/png',
}),
@@ -85,20 +85,19 @@ export const addUserInvokedCanvasListener = () => {
// Wait for the image to be uploaded, matching by request id
const [{ payload }] = await take(
// TODO: figure out how to narrow this action's type
(action) =>
imagesApi.endpoints.uploadImage.matchFulfilled(action) &&
(action): action is ReturnType<typeof imageUploaded.fulfilled> =>
imageUploaded.fulfilled.match(action) &&
action.meta.requestId === initImageUploadedRequestId
);
canvasInitImage = payload as ImageDTO;
canvasInitImage = payload;
}
// For inpaint/outpaint, we also need to upload the mask layer
if (['inpaint', 'outpaint'].includes(generationMode)) {
// upload the image, saving the request id
const { requestId: maskImageUploadedRequestId } = dispatch(
imagesApi.endpoints.uploadImage.initiate({
imageUploaded({
file: new File([maskBlob], 'canvasMaskImage.png', {
type: 'image/png',
}),
@@ -109,13 +108,12 @@ export const addUserInvokedCanvasListener = () => {
// Wait for the image to be uploaded, matching by request id
const [{ payload }] = await take(
// TODO: figure out how to narrow this action's type
(action) =>
imagesApi.endpoints.uploadImage.matchFulfilled(action) &&
(action): action is ReturnType<typeof imageUploaded.fulfilled> =>
imageUploaded.fulfilled.match(action) &&
action.meta.requestId === maskImageUploadedRequestId
);
canvasMaskImage = payload as ImageDTO;
canvasMaskImage = payload;
}
const graph = buildCanvasGraph(
@@ -146,9 +144,9 @@ export const addUserInvokedCanvasListener = () => {
// Associate the init image with the session, now that we have the session ID
if (['img2img', 'inpaint'].includes(generationMode) && canvasInitImage) {
dispatch(
imagesApi.endpoints.updateImage.initiate({
imageDTO: canvasInitImage,
changes: { session_id: sessionId },
imageUpdated({
image_name: canvasInitImage.image_name,
session_id: sessionId,
})
);
}
@@ -156,9 +154,9 @@ export const addUserInvokedCanvasListener = () => {
// Associate the mask image with the session, now that we have the session ID
if (['inpaint'].includes(generationMode) && canvasMaskImage) {
dispatch(
imagesApi.endpoints.updateImage.initiate({
imageDTO: canvasMaskImage,
changes: { session_id: sessionId },
imageUpdated({
image_name: canvasMaskImage.image_name,
session_id: sessionId,
})
);
}
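
Both upload branches above use the same correlation trick: `dispatch()` of a thunk returns its `requestId`, and a type-guard predicate handed to `take` filters on that id while also narrowing the awaited action, so `payload` comes out as a typed DTO. A self-contained sketch, with a hypothetical upload thunk and trigger action:

```
import {
  createAction,
  createAsyncThunk,
  createListenerMiddleware,
} from '@reduxjs/toolkit';

type ImageDTO = { image_name: string; image_url: string };

// Hypothetical stand-in for the app's imageUploaded thunk.
const imageUploaded = createAsyncThunk(
  'image/uploaded',
  async (arg: { file: File }) => {
    const body = new FormData();
    body.append('file', arg.file);
    const res = await fetch('/api/v1/images/upload', { method: 'POST', body });
    return (await res.json()) as ImageDTO;
  }
);

const canvasInvoked = createAction<{ blob: Blob }>('app/canvasInvoked');

export const listenerMiddleware = createListenerMiddleware();

listenerMiddleware.startListening({
  actionCreator: canvasInvoked,
  effect: async (action, { dispatch, take }) => {
    const file = new File([action.payload.blob], 'canvasInitImage.png', {
      type: 'image/png',
    });
    // Dispatching a thunk returns a promise that carries the requestId.
    const { requestId } = dispatch(imageUploaded({ file }));

    // The type-guard predicate filters by requestId *and* narrows the
    // awaited action, so payload is an ImageDTO with no cast needed.
    const [{ payload }] = await take(
      (a): a is ReturnType<typeof imageUploaded.fulfilled> =>
        imageUploaded.fulfilled.match(a) && a.meta.requestId === requestId
    );
    const uploaded: ImageDTO = payload;
    console.log('uploaded', uploaded.image_name);
  },
});
```

This is what removes the `payload as ImageDTO` cast that the unnarrowed predicate needed on the other side of the hunk.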

View File

@@ -11,15 +11,13 @@ import {
TypesafeDroppableData,
} from 'app/components/ImageDnd/typesafeDnd';
import IAIIconButton from 'common/components/IAIIconButton';
import {
IAILoadingImageFallback,
IAINoContentFallback,
} from 'common/components/IAIImageFallback';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import ImageMetadataOverlay from 'common/components/ImageMetadataOverlay';
import { useImageUploadButton } from 'common/hooks/useImageUploadButton';
import { MouseEvent, ReactElement, SyntheticEvent, memo } from 'react';
import { FaImage, FaUndo, FaUpload } from 'react-icons/fa';
import { ImageDTO, PostUploadAction } from 'services/api/types';
import { PostUploadAction } from 'services/api/thunks/image';
import { ImageDTO } from 'services/api/types';
import { mode } from 'theme/util/mode';
import IAIDraggable from './IAIDraggable';
import IAIDroppable from './IAIDroppable';
@@ -48,7 +46,6 @@ type IAIDndImageProps = {
isSelected?: boolean;
thumbnail?: boolean;
noContentFallback?: ReactElement;
useThumbailFallback?: boolean;
};
const IAIDndImage = (props: IAIDndImageProps) => {
@@ -74,7 +71,6 @@ const IAIDndImage = (props: IAIDndImageProps) => {
resetTooltip = 'Reset',
resetIcon = <FaUndo />,
noContentFallback = <IAINoContentFallback icon={FaImage} />,
useThumbailFallback,
} = props;
const { colorMode } = useColorMode();
@@ -130,14 +126,9 @@ const IAIDndImage = (props: IAIDndImageProps) => {
<Image
src={thumbnail ? imageDTO.thumbnail_url : imageDTO.image_url}
fallbackStrategy="beforeLoadOrError"
fallbackSrc={
useThumbailFallback ? imageDTO.thumbnail_url : undefined
}
fallback={
useThumbailFallback ? undefined : (
<IAILoadingImageFallback image={imageDTO} />
)
}
// If we fall back to thumbnail, it feels much snappier than the skeleton...
fallbackSrc={imageDTO.thumbnail_url}
// fallback={<IAILoadingImageFallback image={imageDTO} />}
width={imageDTO.width}
height={imageDTO.height}
onError={onError}
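
The swap above trades the skeleton fallback for the thumbnail: Chakra's `Image` shows `fallbackSrc` until the full-resolution `src` loads, which reads as snappier because the thumbnail is usually already in the browser cache. A minimal sketch — the DTO shape is a hypothetical stand-in:

```
import { Image } from '@chakra-ui/react';

type ImageDTO = {
  image_url: string;
  thumbnail_url: string;
  width: number;
  height: number;
};

export const ThumbnailFirstImage = ({ imageDTO }: { imageDTO: ImageDTO }) => (
  <Image
    src={imageDTO.image_url}
    // Show the (likely cached) thumbnail both while loading and on error.
    fallbackSrc={imageDTO.thumbnail_url}
    fallbackStrategy="beforeLoadOrError"
    width={imageDTO.width}
    height={imageDTO.height}
  />
);
```

Passing the intrinsic width and height keeps the layout from shifting when the full image replaces the thumbnail.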

Some files were not shown because too many files have changed in this diff.