Mirror of https://github.com/invoke-ai/InvokeAI.git, synced 2026-01-21 17:47:54 -05:00
Compare commits: test/xform...v3.4.0 (31 commits)
| SHA1 |
|---|
| 62da69b3e8 |
| d2852c767b |
| 47f33f1ed1 |
| 1896c6fb44 |
| 47f3515745 |
| 950021a61e |
| 5ee55cf46f |
| 91ef24e15c |
| 230dfdb9ad |
| 6f719b2c7a |
| 02ce3bd303 |
| 4599517c6c |
| cc747c066c |
| 3ba547a41a |
| 1a37827bdf |
| 16e990b6e6 |
| be4f3fa5c6 |
| d0375ec234 |
| 1bf8625b10 |
| 5d6040b636 |
| ead1b14ee7 |
| 92a9355ddb |
| 7fcf475aec |
| 3f6e8e9d6b |
| c9655236cc |
| 5cb3fdb64c |
| ae749ada6e |
| 36b8549f3a |
| b6f356f067 |
| a4f1db7c02 |
| 21206bafcf |
2 .github/workflows/style-checks.yml (vendored)
@@ -6,7 +6,7 @@ on:
     branches: main

 jobs:
-  black:
+  ruff:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
@@ -1,6 +1,6 @@
-# Invocations
+# Nodes

-Features in InvokeAI are added in the form of modular node-like systems called
+Features in InvokeAI are added in the form of modular node systems called
 **Invocations**.

 An Invocation is simply a single operation that takes in some inputs and gives
@@ -9,13 +9,34 @@ complex functionality.

 ## Invocations Directory

-InvokeAI Invocations can be found in the `invokeai/app/invocations` directory.
+InvokeAI Nodes can be found in the `invokeai/app/invocations` directory. These can be used as examples to create your own nodes.

-You can add your new functionality to one of the existing Invocations in this
-directory or create a new file in this directory as per your needs.
+New nodes should be added to a subfolder in the `nodes` directory found at the root level of the InvokeAI installation location. Nodes added to this folder will be available for use upon application startup.
+
+Example `nodes` subfolder structure:
+```py
+├── __init__.py # Invoke-managed custom node loader
+│
+├── cool_node
+│   ├── __init__.py # see example below
+│   └── cool_node.py
+│
+└── my_node_pack
+    ├── __init__.py # see example below
+    ├── tasty_node.py
+    ├── bodacious_node.py
+    ├── utils.py
+    └── extra_nodes
+        └── fancy_node.py
+```
+
+Each node folder must have an `__init__.py` file that imports its nodes. Only nodes imported in the `__init__.py` file are loaded. See the README in the nodes folder for more examples:
+
+```py
+from .cool_node import CoolInvocation
+```

 **Note:** _All Invocations must be inside this directory for InvokeAI to
 recognize them as valid Invocations._

 ## Creating A New Invocation
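Before the page moves on, a minimal sketch of what such a custom node can look like may help. It follows the `@invocation` decorator and `BaseInvocation` conventions visible in the Python hunks further down this page; the `IntegerOutput` import and every field name here are illustrative assumptions, not code from these commits.

```py
# Minimal custom-node sketch. The decorator and base class match the patterns
# shown in the invocation diffs below; IntegerOutput and the field names are
# assumptions for illustration only.
from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    InputField,
    InvocationContext,
    invocation,
)
from invokeai.app.invocations.primitives import IntegerOutput


@invocation("cool_node", title="Cool Node", tags=["example"], category="example", version="1.0.0")
class CoolInvocation(BaseInvocation):
    """Adds two integers (illustrative only)."""

    a: int = InputField(default=0, description="First operand")
    b: int = InputField(default=0, description="Second operand")

    def invoke(self, context: InvocationContext) -> IntegerOutput:
        return IntegerOutput(value=self.a + self.b)
```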
53 docs/features/LORAS.md (new file)
@@ -0,0 +1,53 @@
---
title: LoRAs & LCM-LoRAs
---

# :material-library-shelves: LoRAs & LCM-LoRAs

With the advances in research, many new capabilities are available to customize the knowledge and understanding of novel concepts not originally contained in the base model.

## LoRAs

Low-Rank Adaptation (LoRA) files are models that customize the output of Stable Diffusion image generation. Larger than embeddings, but much smaller than full models, they augment SD with improved understanding of subjects and artistic styles.

Unlike TI files, LoRAs do not introduce novel vocabulary into the model's known tokens. Instead, LoRAs augment the model's weights that are applied to generate imagery. LoRAs may be supplied with a "trigger" word that they have been explicitly trained on, or may simply apply their effect without being triggered.

LoRAs are typically stored in `.safetensors` files, which are the most secure way to store and transmit these types of weights. You may install any number of `.safetensors` LoRA files simply by copying them into the `autoimport/lora` directory of the corresponding InvokeAI models directory (usually `invokeai` in your home directory).

To use these when generating, open the LoRA menu item in the options panel, select the LoRAs you want to apply and ensure that they have the appropriate weight recommended by the model provider. Typically, most LoRAs perform best at a weight of 0.75-1.

## LCM-LoRAs

Latent Consistency Models (LCMs) allow a reduced number of steps to be used to generate images with Stable Diffusion. These are created by distilling base models, producing models that only require a small number of steps to generate images. However, LCMs require that any fine-tune of a base model be distilled to be used as an LCM.

LCM-LoRAs are models that provide the benefit of LCMs but are able to be used as LoRAs and applied to any fine-tune of a base model. LCM-LoRAs are created by training a small number of adapters, rather than distilling the entire fine-tuned base model. The resulting LoRA can be used the same way as a standard LoRA, but with a greatly reduced step count. This enables SDXL images to be generated up to 10x faster than without the use of LCM-LoRAs.

**Using LCM-LoRAs**

LCM-LoRAs are natively supported in InvokeAI throughout the application. To get started, install any diffusers-format LCM-LoRA using the model manager and select it in the LoRA field.

There are a number of parameter differences when using LCM-LoRAs compared to standard generation:

- When using LCM-LoRAs, the LoRA strength should be lower than with a standard LoRA, with 0.35 recommended as a starting point.
- The LCM scheduler should be used for generation.
- CFG-Scale should be reduced to ~1.
- Steps should be reduced in the range of 4-8.

Standard LoRAs can also be used alongside LCM-LoRAs, but will also require a lower strength, with 0.45 recommended as a starting point.

More information can be found here: https://huggingface.co/blog/lcm_lora#fast-inference-with-sdxl-lcm-loras
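The linked blog post reproduces this recipe with Hugging Face diffusers. A minimal sketch, assuming the public SDXL base model and the `latent-consistency/lcm-lora-sdxl` adapter; the 0.35 LoRA scale, CFG of 1, and 4 steps mirror the recommendations above:

```py
# Hedged sketch of the LCM-LoRA recipe described above, using diffusers
# outside of InvokeAI; the model IDs are assumptions based on the linked post.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler and load the LCM-LoRA adapter.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")
pipe.fuse_lora(lora_scale=0.35)  # low LoRA strength, per the guidance above

# Few steps and CFG ~1, per the parameter guidance above.
image = pipe("a photo of a cat", num_inference_steps=4, guidance_scale=1.0).images[0]
image.save("cat.png")
```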
@@ -1,12 +1,3 @@
----
-title: Textual Inversion Embeddings and LoRAs
----
-
-# :material-library-shelves: Textual Inversions and LoRAs
-
-With the advances in research, many new capabilities are available to customize the knowledge and understanding of novel concepts not originally contained in the base model.
-
-
 ## Using Textual Inversion Files

 Textual inversion (TI) files are small models that customize the output of
@@ -61,29 +52,4 @@ files it finds there for compatible models. At startup you will see a message si
 >> Current embedding manager terms: <HOI4-Leader>, <princess-knight>
 ```
 To use these when generating, simply type the `<` key in your prompt to open the Textual Inversion WebUI and
 select the embedding you'd like to use. This UI has type-ahead support, so you can easily find supported embeddings.
-
-## Using LoRAs
-
-LoRA files are models that customize the output of Stable Diffusion
-image generation. Larger than embeddings, but much smaller than full
-models, they augment SD with improved understanding of subjects and
-artistic styles.
-
-Unlike TI files, LoRAs do not introduce novel vocabulary into the
-model's known tokens. Instead, LoRAs augment the model's weights that
-are applied to generate imagery. LoRAs may be supplied with a
-"trigger" word that they have been explicitly trained on, or may
-simply apply their effect without being triggered.
-
-LoRAs are typically stored in .safetensors files, which are the most
-secure way to store and transmit these types of weights. You may
-install any number of `.safetensors` LoRA files simply by copying them
-into the `autoimport/lora` directory of the corresponding InvokeAI models
-directory (usually `invokeai` in your home directory).
-
-To use these when generating, open the LoRA menu item in the options
-panel, select the LoRAs you want to apply and ensure that they have
-the appropriate weight recommended by the model provider. Typically,
-most LoRAs perform best at a weight of .75-1.
@@ -20,7 +20,7 @@ a single convenient digital artist-optimized user interface.
 ### * [Prompt Engineering](PROMPTS.md)
 Get the images you want with the InvokeAI prompt engineering language.

-### * The [LoRA, LyCORIS and Textual Inversion Models](CONCEPTS.md)
+### * The [LoRA, LyCORIS, LCM-LoRA Models](CONCEPTS.md)
 Add custom subjects and styles using a variety of fine-tuned models.

 ### * [ControlNet](CONTROLNET.md)
@@ -40,7 +40,7 @@ guide also covers optimizing models to load quickly.
 Teach an old model new tricks. Merge 2-3 models together to create a
 new model that combines characteristics of the originals.

-### * [Textual Inversion](TRAINING.md)
+### * [Textual Inversion](TEXTUAL_INVERSIONS.md)
 Personalize models by adding your own style or subjects.

 ## Other Features
43 docs/help/FAQ.md (new file)
@@ -0,0 +1,43 @@
# FAQs

**Where do I get started? How can I install Invoke?**

- You can download the latest installers [here](https://github.com/invoke-ai/InvokeAI/releases) - Note that any releases marked as *pre-release* are in a beta state. You may experience some issues, but we appreciate your help testing those! For stable/reliable installations, please install the **[Latest Release](https://github.com/invoke-ai/InvokeAI/releases/latest)**.

**How can I download models? Can I use models I already have downloaded?**

- Models can be downloaded through the model manager, or through option [4] in the invoke.bat/invoke.sh launcher script. To download a model through the Model Manager, use the HuggingFace Repo ID by pressing the “Copy” button next to the repository name. Alternatively, to download a model from CivitAI, use the download link in the Model Manager.
- Models that are already downloaded can be used by creating a symlink to the model location in the `autoimport` folder or by using the Model Manager’s “Scan for Models” function.

**My images are taking a long time to generate. How can I speed up generation?**

- A common solution is to reduce the size of your RAM & VRAM cache to 0.25. This ensures your system has enough memory to generate images.
- Additionally, check the [hardware requirements](https://invoke-ai.github.io/InvokeAI/#hardware-requirements) to ensure that your system is capable of generating images.
- Lastly, double check your generations are happening on your GPU (if you have one). InvokeAI will log what is being used for generation upon startup.
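For the last point, a quick check from the developer console can confirm the GPU is visible to PyTorch. This is a minimal sketch assuming a CUDA build of PyTorch such as the one in an InvokeAI virtual environment, not an official Invoke diagnostic:

```py
# Minimal GPU sanity check; assumes a PyTorch install such as InvokeAI's venv.
import torch

if torch.cuda.is_available():
    print("Generating on GPU:", torch.cuda.get_device_name(0))
else:
    print("No CUDA GPU detected; generation will fall back to CPU.")
```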
**I’ve installed Python on Windows but the installer says it can’t find it?**

- Ensure that you checked **'Add python.exe to PATH'** when installing Python. This can be found at the bottom of the Python Installer window. If you already have Python installed, this can be done with the modify/repair feature of the installer.

**I’ve installed everything successfully but I still get an error about Triton when starting Invoke?**

- This can be safely ignored. InvokeAI doesn't use Triton, but if you are on Linux and wish to dismiss the error, you can install Triton.

**I updated to 3.4.0 and now xFormers can’t load C++/CUDA?**

- An issue occurred with your PyTorch update. Follow these steps to fix it:
  1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
  2. Run: `pip install ".[xformers]" --upgrade --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu121`
- If you run into an error with `typing_extensions`, re-open the developer console and run: `pip install -U typing-extensions`

**It says my pip is out of date - is that why my install isn't working?**

- An out-of-date pip won't cause an installation to fail. The cause of the error can likely be found above the message that says pip is out of date.
- If you saw that warning but the install went well, don't worry about it (but you can update pip afterwards if you'd like).

**How can I generate the exact same image that I found on the internet?**

Most example images with prompts that you'll find on the internet have been generated using different software, so you can't expect to get identical results. In order to reproduce an image, you need to replicate the exact settings and processing steps, including (but not limited to) the model, the positive and negative prompts, the seed, the sampler, the exact image size, any upscaling steps, etc.

**Where can I get more help?**

- Create an issue on [GitHub](https://github.com/invoke-ai/InvokeAI/issues) or post in the [#help channel](https://discord.com/channels/1020123559063990373/1149510134058471514) of the InvokeAI Discord
@@ -101,16 +101,13 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.

 <div align="center"><img src="assets/invoke-web-server-1.png" width=640></div>

-!!! Note
-
-    This project is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates as it will help aid response time.
-
 ## :octicons-link-24: Quick Links

 <div class="button-container">
   <a href="installation/INSTALLATION"> <button class="button">Installation</button> </a>
   <a href="features/"> <button class="button">Features</button> </a>
   <a href="help/gettingStartedWithAI/"> <button class="button">Getting Started</button> </a>
+  <a href="help/FAQ/"> <button class="button">FAQ</button> </a>
   <a href="contributing/CONTRIBUTING/"> <button class="button">Contributing</button> </a>
   <a href="https://github.com/invoke-ai/InvokeAI/"> <button class="button">Code and Downloads</button> </a>
   <a href="https://github.com/invoke-ai/InvokeAI/issues"> <button class="button">Bug Reports </button> </a>
@@ -32,6 +32,7 @@ To use a community workflow, download the the `.json` node graph file and load i
     + [Size Stepper Nodes](#size-stepper-nodes)
     + [Text font to Image](#text-font-to-image)
     + [Thresholding](#thresholding)
+    + [Unsharp Mask](#unsharp-mask)
     + [XY Image to Grid and Images to Grids nodes](#xy-image-to-grid-and-images-to-grids-nodes)
 - [Example Node Template](#example-node-template)
 - [Disclaimer](#disclaimer)
@@ -316,6 +317,13 @@ Highlights/Midtones/Shadows (with LUT blur enabled):
 <img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/0a440e43-697f-4d17-82ee-f287467df0a5" width="300" />
 <img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/0701fd0f-2ca7-4fe2-8613-2b52547bafce" width="300" />

+--------------------------------
+### Unsharp Mask
+
+**Description:** Applies an unsharp mask filter to an image, preserving its alpha channel in the process.
+
+**Node Link:** https://github.com/JPPhoto/unsharp-mask-node
+
 --------------------------------
 ### XY Image to Grid and Images to Grids nodes
@@ -96,7 +96,7 @@ class ControlOutput(BaseInvocationOutput):
     control: ControlField = OutputField(description=FieldDescriptions.control)


-@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.0.0")
+@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.0")
 class ControlNetInvocation(BaseInvocation):
     """Collects ControlNet info to pass to other nodes"""

@@ -173,7 +173,7 @@ class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithWorkflow):
     title="Canny Processor",
     tags=["controlnet", "canny"],
     category="controlnet",
-    version="1.0.0",
+    version="1.1.0",
 )
 class CannyImageProcessorInvocation(ImageProcessorInvocation):
     """Canny edge detection for ControlNet"""

@@ -196,7 +196,7 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
     title="HED (softedge) Processor",
     tags=["controlnet", "hed", "softedge"],
     category="controlnet",
-    version="1.0.0",
+    version="1.1.0",
 )
 class HedImageProcessorInvocation(ImageProcessorInvocation):
     """Applies HED edge detection to image"""

@@ -225,7 +225,7 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
     title="Lineart Processor",
     tags=["controlnet", "lineart"],
     category="controlnet",
-    version="1.0.0",
+    version="1.1.0",
 )
 class LineartImageProcessorInvocation(ImageProcessorInvocation):
     """Applies line art processing to image"""

@@ -247,7 +247,7 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
     title="Lineart Anime Processor",
     tags=["controlnet", "lineart", "anime"],
     category="controlnet",
-    version="1.0.0",
+    version="1.1.0",
 )
 class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
     """Applies line art anime processing to image"""

@@ -270,7 +270,7 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
     title="Openpose Processor",
     tags=["controlnet", "openpose", "pose"],
     category="controlnet",
-    version="1.0.0",
+    version="1.1.0",
 )
 class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
     """Applies Openpose processing to image"""

@@ -295,7 +295,7 @@ class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
     title="Midas Depth Processor",
     tags=["controlnet", "midas"],
     category="controlnet",
-    version="1.0.0",
+    version="1.1.0",
 )
 class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
     """Applies Midas depth processing to image"""

@@ -322,7 +322,7 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
     title="Normal BAE Processor",
     tags=["controlnet"],
     category="controlnet",
-    version="1.0.0",
+    version="1.1.0",
 )
 class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
     """Applies NormalBae processing to image"""

@@ -339,7 +339,7 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):


 @invocation(
-    "mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.0.0"
+    "mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.1.0"
 )
 class MlsdImageProcessorInvocation(ImageProcessorInvocation):
     """Applies MLSD processing to image"""

@@ -362,7 +362,7 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):


 @invocation(
-    "pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.0.0"
+    "pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.1.0"
 )
 class PidiImageProcessorInvocation(ImageProcessorInvocation):
     """Applies PIDI processing to image"""

@@ -389,7 +389,7 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
     title="Content Shuffle Processor",
     tags=["controlnet", "contentshuffle"],
     category="controlnet",
-    version="1.0.0",
+    version="1.1.0",
 )
 class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
     """Applies content shuffle processing to image"""

@@ -419,7 +419,7 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
     title="Zoe (Depth) Processor",
     tags=["controlnet", "zoe", "depth"],
     category="controlnet",
-    version="1.0.0",
+    version="1.1.0",
 )
 class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
     """Applies Zoe depth processing to image"""

@@ -435,7 +435,7 @@ class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
     title="Mediapipe Face Processor",
     tags=["controlnet", "mediapipe", "face"],
     category="controlnet",
-    version="1.0.0",
+    version="1.1.0",
 )
 class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
     """Applies mediapipe face processing to image"""

@@ -458,7 +458,7 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
     title="Leres (Depth) Processor",
     tags=["controlnet", "leres", "depth"],
     category="controlnet",
-    version="1.0.0",
+    version="1.1.0",
 )
 class LeresImageProcessorInvocation(ImageProcessorInvocation):
     """Applies leres processing to image"""

@@ -487,7 +487,7 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
     title="Tile Resample Processor",
     tags=["controlnet", "tile"],
     category="controlnet",
-    version="1.0.0",
+    version="1.1.0",
 )
 class TileResamplerProcessorInvocation(ImageProcessorInvocation):
     """Tile resampler processor"""

@@ -527,7 +527,7 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
     title="Segment Anything Processor",
     tags=["controlnet", "segmentanything"],
     category="controlnet",
-    version="1.0.0",
+    version="1.1.0",
 )
 class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
     """Applies segment anything processing to image"""

@@ -569,7 +569,7 @@ class SamDetectorReproducibleColors(SamDetector):
     title="Color Map Processor",
     tags=["controlnet"],
     category="controlnet",
-    version="1.0.0",
+    version="1.1.0",
 )
 class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
     """Generates a color map from the provided image"""
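Every hunk above follows the same pattern: the `version` argument of `@invocation` is the node's schema version, bumped here from 1.0.0 to 1.1.0, apparently because these processors now carry the `WithMetadata`/`WithWorkflow` mixins shown in the hunk headers. As a hedged illustration of what a version bump enables (the `packaging` import and the comparison framing are assumptions, not InvokeAI's actual update logic; the "Update Node" strings such a check would drive appear in the en.json diff near the end of this page):

```py
# Hypothetical sketch of a node-version check; not InvokeAI's actual logic.
from packaging.version import Version

saved_in_workflow = Version("1.0.0")   # version recorded when the graph was built
current_definition = Version("1.1.0")  # version in the updated @invocation decorator

# A newer definition is what lets a UI offer something like "Update Node".
if current_definition > saved_in_workflow:
    print("node definition changed; an update is available")
```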
@@ -11,7 +11,7 @@ from invokeai.app.services.image_records.image_records_common import ImageCatego
 from .baseinvocation import BaseInvocation, InputField, InvocationContext, WithMetadata, WithWorkflow, invocation


-@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.0.0")
+@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.1.0")
 class CvInpaintInvocation(BaseInvocation, WithMetadata, WithWorkflow):
     """Simple inpaint using opencv."""
@@ -438,7 +438,7 @@ def get_faces_list(
     return all_faces


-@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.0.2")
+@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.1.0")
 class FaceOffInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Bound, extract, and mask a face from an image using MediaPipe detection"""

@@ -532,7 +532,7 @@ class FaceOffInvocation(BaseInvocation, WithWorkflow, WithMetadata):
         return output


-@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.0.2")
+@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.1.0")
 class FaceMaskInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Face mask creation using mediapipe face detection"""

@@ -650,7 +650,7 @@ class FaceMaskInvocation(BaseInvocation, WithWorkflow, WithMetadata):


 @invocation(
-    "face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.0.2"
+    "face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.1.0"
 )
 class FaceIdentifierInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Outputs an image with detected face IDs printed on each face. For use with other FaceTools."""
@@ -8,7 +8,7 @@ import numpy
 from PIL import Image, ImageChops, ImageFilter, ImageOps

 from invokeai.app.invocations.primitives import BoardField, ColorField, ImageField, ImageOutput
-from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
+from invokeai.app.services.image_records.image_records_common import ImageCategory, ImageRecordChanges, ResourceOrigin
 from invokeai.app.shared.fields import FieldDescriptions
 from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
 from invokeai.backend.image_util.safety_checker import SafetyChecker

@@ -36,7 +36,7 @@ class ShowImageInvocation(BaseInvocation):
     )


-@invocation("blank_image", title="Blank Image", tags=["image"], category="image", version="1.0.0")
+@invocation("blank_image", title="Blank Image", tags=["image"], category="image", version="1.1.0")
 class BlankImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
     """Creates a blank image and forwards it to the pipeline"""

@@ -66,7 +66,7 @@ class BlankImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
     )


-@invocation("img_crop", title="Crop Image", tags=["image", "crop"], category="image", version="1.0.0")
+@invocation("img_crop", title="Crop Image", tags=["image", "crop"], category="image", version="1.1.0")
 class ImageCropInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Crops an image to a specified box. The box can be outside of the image."""

@@ -100,7 +100,7 @@ class ImageCropInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     )


-@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.0.1")
+@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.1.0")
 class ImagePasteInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Pastes an image into another image."""

@@ -154,7 +154,7 @@ class ImagePasteInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     )


-@invocation("tomask", title="Mask from Alpha", tags=["image", "mask"], category="image", version="1.0.0")
+@invocation("tomask", title="Mask from Alpha", tags=["image", "mask"], category="image", version="1.1.0")
 class MaskFromAlphaInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Extracts the alpha channel of an image as a mask."""

@@ -186,7 +186,7 @@ class MaskFromAlphaInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     )


-@invocation("img_mul", title="Multiply Images", tags=["image", "multiply"], category="image", version="1.0.0")
+@invocation("img_mul", title="Multiply Images", tags=["image", "multiply"], category="image", version="1.1.0")
 class ImageMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Multiplies two images together using `PIL.ImageChops.multiply()`."""

@@ -220,7 +220,7 @@ class ImageMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata):
 IMAGE_CHANNELS = Literal["A", "R", "G", "B"]


-@invocation("img_chan", title="Extract Image Channel", tags=["image", "channel"], category="image", version="1.0.0")
+@invocation("img_chan", title="Extract Image Channel", tags=["image", "channel"], category="image", version="1.1.0")
 class ImageChannelInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Gets a channel from an image."""

@@ -253,7 +253,7 @@ class ImageChannelInvocation(BaseInvocation, WithWorkflow, WithMetadata):
 IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]


-@invocation("img_conv", title="Convert Image Mode", tags=["image", "convert"], category="image", version="1.0.0")
+@invocation("img_conv", title="Convert Image Mode", tags=["image", "convert"], category="image", version="1.1.0")
 class ImageConvertInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Converts an image to a different mode."""

@@ -283,7 +283,7 @@ class ImageConvertInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     )


-@invocation("img_blur", title="Blur Image", tags=["image", "blur"], category="image", version="1.0.0")
+@invocation("img_blur", title="Blur Image", tags=["image", "blur"], category="image", version="1.1.0")
 class ImageBlurInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Blurs an image"""

@@ -338,7 +338,7 @@ PIL_RESAMPLING_MAP = {
 }


-@invocation("img_resize", title="Resize Image", tags=["image", "resize"], category="image", version="1.0.0")
+@invocation("img_resize", title="Resize Image", tags=["image", "resize"], category="image", version="1.1.0")
 class ImageResizeInvocation(BaseInvocation, WithMetadata, WithWorkflow):
     """Resizes an image to specific dimensions"""

@@ -375,7 +375,7 @@ class ImageResizeInvocation(BaseInvocation, WithMetadata, WithWorkflow):
     )


-@invocation("img_scale", title="Scale Image", tags=["image", "scale"], category="image", version="1.0.0")
+@invocation("img_scale", title="Scale Image", tags=["image", "scale"], category="image", version="1.1.0")
 class ImageScaleInvocation(BaseInvocation, WithMetadata, WithWorkflow):
     """Scales an image by a factor"""

@@ -417,7 +417,7 @@ class ImageScaleInvocation(BaseInvocation, WithMetadata, WithWorkflow):
     )


-@invocation("img_lerp", title="Lerp Image", tags=["image", "lerp"], category="image", version="1.0.0")
+@invocation("img_lerp", title="Lerp Image", tags=["image", "lerp"], category="image", version="1.1.0")
 class ImageLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Linear interpolation of all pixels of an image"""

@@ -451,7 +451,7 @@ class ImageLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     )


-@invocation("img_ilerp", title="Inverse Lerp Image", tags=["image", "ilerp"], category="image", version="1.0.0")
+@invocation("img_ilerp", title="Inverse Lerp Image", tags=["image", "ilerp"], category="image", version="1.1.0")
 class ImageInverseLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Inverse linear interpolation of all pixels of an image"""

@@ -485,7 +485,7 @@ class ImageInverseLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     )


-@invocation("img_nsfw", title="Blur NSFW Image", tags=["image", "nsfw"], category="image", version="1.0.0")
+@invocation("img_nsfw", title="Blur NSFW Image", tags=["image", "nsfw"], category="image", version="1.1.0")
 class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithWorkflow):
     """Add blur to NSFW-flagged images"""

@@ -532,7 +532,7 @@ class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithWorkflow):
     title="Add Invisible Watermark",
     tags=["image", "watermark"],
     category="image",
-    version="1.0.0",
+    version="1.1.0",
 )
 class ImageWatermarkInvocation(BaseInvocation, WithMetadata, WithWorkflow):
     """Add an invisible watermark to an image"""

@@ -561,7 +561,7 @@ class ImageWatermarkInvocation(BaseInvocation, WithMetadata, WithWorkflow):
     )


-@invocation("mask_edge", title="Mask Edge", tags=["image", "mask", "inpaint"], category="image", version="1.0.0")
+@invocation("mask_edge", title="Mask Edge", tags=["image", "mask", "inpaint"], category="image", version="1.1.0")
 class MaskEdgeInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Applies an edge mask to an image"""

@@ -612,7 +612,7 @@ class MaskEdgeInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     title="Combine Masks",
     tags=["image", "mask", "multiply"],
     category="image",
-    version="1.0.0",
+    version="1.1.0",
 )
 class MaskCombineInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`."""

@@ -644,7 +644,7 @@ class MaskCombineInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     )


-@invocation("color_correct", title="Color Correct", tags=["image", "color"], category="image", version="1.0.0")
+@invocation("color_correct", title="Color Correct", tags=["image", "color"], category="image", version="1.1.0")
 class ColorCorrectInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """
     Shifts the colors of a target image to match the reference image, optionally

@@ -755,7 +755,7 @@ class ColorCorrectInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     )


-@invocation("img_hue_adjust", title="Adjust Image Hue", tags=["image", "hue"], category="image", version="1.0.0")
+@invocation("img_hue_adjust", title="Adjust Image Hue", tags=["image", "hue"], category="image", version="1.1.0")
 class ImageHueAdjustmentInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Adjusts the Hue of an image."""

@@ -858,7 +858,7 @@ CHANNEL_FORMATS = {
         "value",
     ],
     category="image",
-    version="1.0.0",
+    version="1.1.0",
 )
 class ImageChannelOffsetInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Add or subtract a value from a specific color channel of an image."""

@@ -929,7 +929,7 @@ class ImageChannelOffsetInvocation(BaseInvocation, WithWorkflow, WithMetadata):
         "value",
     ],
     category="image",
-    version="1.0.0",
+    version="1.1.0",
 )
 class ImageChannelMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Scale a specific color channel of an image."""

@@ -988,7 +988,7 @@ class ImageChannelMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata)
     title="Save Image",
     tags=["primitives", "image"],
     category="primitives",
-    version="1.0.1",
+    version="1.1.0",
     use_cache=False,
 )
 class SaveImageInvocation(BaseInvocation, WithWorkflow, WithMetadata):

@@ -1017,3 +1017,35 @@ class SaveImageInvocation(BaseInvocation, WithWorkflow, WithMetadata):
         width=image_dto.width,
         height=image_dto.height,
     )
+
+
+@invocation(
+    "linear_ui_output",
+    title="Linear UI Image Output",
+    tags=["primitives", "image"],
+    category="primitives",
+    version="1.0.1",
+    use_cache=False,
+)
+class LinearUIOutputInvocation(BaseInvocation, WithWorkflow, WithMetadata):
+    """Handles Linear UI Image Outputting tasks."""
+
+    image: ImageField = InputField(description=FieldDescriptions.image)
+    board: Optional[BoardField] = InputField(default=None, description=FieldDescriptions.board, input=Input.Direct)
+
+    def invoke(self, context: InvocationContext) -> ImageOutput:
+        image_dto = context.services.images.get_dto(self.image.image_name)
+
+        if self.board:
+            context.services.board_images.add_image_to_board(self.board.board_id, self.image.image_name)
+
+        if image_dto.is_intermediate != self.is_intermediate:
+            context.services.images.update(
+                self.image.image_name, changes=ImageRecordChanges(is_intermediate=self.is_intermediate)
+            )
+
+        return ImageOutput(
+            image=ImageField(image_name=self.image.image_name),
+            width=image_dto.width,
+            height=image_dto.height,
+        )
@@ -118,7 +118,7 @@ def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int]
     return si


-@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
+@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
 class InfillColorInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Infills transparent areas of an image with a solid color"""

@@ -154,7 +154,7 @@ class InfillColorInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     )


-@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
+@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
 class InfillTileInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Infills transparent areas of an image with tiles of the image"""

@@ -192,7 +192,7 @@ class InfillTileInvocation(BaseInvocation, WithWorkflow, WithMetadata):


 @invocation(
-    "infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0"
+    "infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0"
 )
 class InfillPatchMatchInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Infills transparent areas of an image using the PatchMatch algorithm"""
@@ -245,7 +245,7 @@ class InfillPatchMatchInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     )


-@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
+@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
 class LaMaInfillInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Infills transparent areas of an image using the LaMa model"""

@@ -274,7 +274,7 @@ class LaMaInfillInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     )


-@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint")
+@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
 class CV2InfillInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Infills transparent areas of an image using OpenCV Inpainting"""
@@ -790,7 +790,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
     title="Latents to Image",
     tags=["latents", "image", "vae", "l2i"],
     category="latents",
-    version="1.0.0",
+    version="1.1.0",
 )
 class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
     """Generates an image from latents."""
@@ -112,7 +112,7 @@ GENERATION_MODES = Literal[
 ]


-@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="1.0.0")
+@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="1.0.1")
 class CoreMetadataInvocation(BaseInvocation):
     """Collects core generation metadata into a MetadataField"""

@@ -160,7 +160,7 @@ class CoreMetadataInvocation(BaseInvocation):
     )

     # High resolution fix metadata.
-    hrf_enabled: Optional[float] = InputField(
+    hrf_enabled: Optional[bool] = InputField(
         default=None,
         description="Whether or not high resolution fix was enabled.",
     )
@@ -326,7 +326,7 @@ class ONNXTextToLatentsInvocation(BaseInvocation):
     title="ONNX Latents to Image",
     tags=["latents", "image", "vae", "onnx"],
     category="image",
-    version="1.0.0",
+    version="1.1.0",
 )
 class ONNXLatentsToImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
     """Generates an image from latents."""
@@ -29,7 +29,7 @@ if choose_torch_device() == torch.device("mps"):
     from torch import mps


-@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.1.0")
+@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.2.0")
 class ESRGANInvocation(BaseInvocation, WithWorkflow, WithMetadata):
     """Upscales an image using RealESRGAN."""
@@ -90,14 +90,6 @@ def get_extras():
             pass
     return extras

-def get_extra_index() -> str:
-    # parsed_version.local for torch is the platform + version, eg 'cu121' or 'rocm5.6'
-    local = pkg_resources.get_distribution("torch").parsed_version.local
-    if local and 'cu' in local:
-        return "--extra-index-url https://download.pytorch.org/whl/cu121"
-    if local and 'rocm' in local:
-        return "--extra-index-url https://download.pytorch.org/whl/rocm5.6"
-    return ""

 def main():
     versions = get_versions()
@@ -130,15 +122,14 @@ def main():
         branch = Prompt.ask("Enter an InvokeAI branch name")

     extras = get_extras()
-    extra_index_url = get_extra_index()

     print(f":crossed_fingers: Upgrading to [yellow]{tag or release or branch}[/yellow]")
     if release:
-        cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_SRC}/{release}.zip" --use-pep517 --upgrade {extra_index_url}'
+        cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_SRC}/{release}.zip" --use-pep517 --upgrade'
     elif tag:
-        cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_TAG}/{tag}.zip" --use-pep517 --upgrade {extra_index_url}'
+        cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_TAG}/{tag}.zip" --use-pep517 --upgrade'
     else:
-        cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_BRANCH}/{branch}.zip" --use-pep517 --upgrade {extra_index_url}'
+        cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_BRANCH}/{branch}.zip" --use-pep517 --upgrade'
     print("")
     print("")
     if os.system(cmd) == 0:
171 invokeai/frontend/web/dist/assets/App-66b7b6ff.js (vendored)
File diff suppressed because one or more lines are too long

171 invokeai/frontend/web/dist/assets/App-ecb61195.js (vendored, new file)
File diff suppressed because one or more lines are too long

1 invokeai/frontend/web/dist/assets/MantineProvider-1fe3e660.js (vendored, new file)
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -1,4 +1,4 @@
-import{w as s,ic as T,v as l,a1 as I,id as R,ad as V,ie as z,ig as j,ih as D,ii as F,ij as G,ik as W,il as K,aG as H,im as U,io as Y}from"./index-ae455ad2.js";import{M as Z}from"./MantineProvider-8ba2c088.js";var P=String.raw,E=P`
+import{I as s,ie as T,v as l,$ as A,ig as R,aa as V,ih as z,ii as j,ij as D,ik as F,il as G,im as W,io as K,az as H,ip as U,iq as Y}from"./index-2ad466e0.js";import{M as Z}from"./MantineProvider-1fe3e660.js";var P=String.raw,E=P`
 :root,
 :host {
   --chakra-vh: 100vh;
@@ -277,4 +277,4 @@ import{w as s,ic as T,v as l,a1 as I,id as R,ad as V,ie as z,ig as j,ih as D,ii
 }

 ${E}
File diff suppressed because one or more lines are too long
9 invokeai/frontend/web/dist/assets/ThemeLocaleProvider-f5f9aabf.css (vendored, new file)
File diff suppressed because one or more lines are too long

157 invokeai/frontend/web/dist/assets/index-2ad466e0.js (vendored, new file)
File diff suppressed because one or more lines are too long

156 invokeai/frontend/web/dist/assets/index-ae455ad2.js (vendored)
File diff suppressed because one or more lines are too long

BIN invokeai/frontend/web/dist/assets/inter-cyrillic-ext-wght-normal-1c3007b8.woff2 (vendored, new file)
Binary file not shown.

BIN invokeai/frontend/web/dist/assets/inter-cyrillic-wght-normal-eba94878.woff2 (vendored, new file)
Binary file not shown.

BIN invokeai/frontend/web/dist/assets/inter-greek-ext-wght-normal-81f77e51.woff2 (vendored, new file)
Binary file not shown.

BIN invokeai/frontend/web/dist/assets/inter-greek-wght-normal-d92c6cbc.woff2 (vendored, new file)
Binary file not shown.

BIN invokeai/frontend/web/dist/assets/inter-latin-ext-wght-normal-a2bfd9fe.woff2 (vendored, new file)
Binary file not shown.

BIN invokeai/frontend/web/dist/assets/inter-latin-wght-normal-88df0b5a.woff2 (vendored, new file)
Binary file not shown.

BIN invokeai/frontend/web/dist/assets/inter-vietnamese-wght-normal-15df7612.woff2 (vendored, new file)
Binary file not shown.
2 invokeai/frontend/web/dist/index.html vendored
@@ -15,7 +15,7 @@
       margin: 0;
     }
   </style>
-  <script type="module" crossorigin src="./assets/index-ae455ad2.js"></script>
+  <script type="module" crossorigin src="./assets/index-2ad466e0.js"></script>
 </head>

 <body dir="ltr">
3 invokeai/frontend/web/dist/locales/en.json vendored
@@ -920,7 +920,10 @@
     "unknownTemplate": "Unknown Template",
     "unkownInvocation": "Unknown Invocation type",
     "updateNode": "Update Node",
+    "updateAllNodes": "Update All Nodes",
     "updateApp": "Update App",
+    "unableToUpdateNodes_one": "Unable to update {{count}} node",
+    "unableToUpdateNodes_other": "Unable to update {{count}} nodes",
     "vaeField": "Vae",
     "vaeFieldDescription": "Vae submodel.",
     "vaeModelField": "VAE",
16 invokeai/frontend/web/dist/locales/zh_CN.json vendored
@@ -1222,7 +1222,8 @@
     "seamless": "无缝",
     "fit": "图生图匹配",
     "recallParameters": "召回参数",
-    "noRecallParameters": "未找到要召回的参数"
+    "noRecallParameters": "未找到要召回的参数",
+    "vae": "VAE"
   },
   "models": {
     "noMatchingModels": "无相匹配的模型",
@@ -1501,5 +1502,18 @@
     "clear": "清除",
     "maxCacheSize": "最大缓存大小",
     "cacheSize": "缓存大小"
   },
+  "hrf": {
+    "enableHrf": "启用高分辨率修复",
+    "upscaleMethod": "放大方法",
+    "enableHrfTooltip": "使用较低的分辨率进行初始生成,放大到基础分辨率后进行图生图。",
+    "metadata": {
+      "strength": "高分辨率修复强度",
+      "enabled": "高分辨率修复已启用",
+      "method": "高分辨率修复方法"
+    },
+    "hrf": "高分辨率修复",
+    "hrfStrength": "高分辨率修复强度",
+    "strengthTooltip": "值越低细节越少,但可以减少部分潜在的伪影。"
+  }
 }
@@ -72,6 +72,7 @@ import { addStagingAreaImageSavedListener } from './listeners/stagingAreaImageSa
 import { addTabChangedListener } from './listeners/tabChanged';
 import { addUpscaleRequestedListener } from './listeners/upscaleRequested';
 import { addWorkflowLoadedListener } from './listeners/workflowLoaded';
+import { addUpdateAllNodesRequestedListener } from './listeners/updateAllNodesRequested';

 export const listenerMiddleware = createListenerMiddleware();

@@ -178,6 +179,7 @@ addReceivedOpenAPISchemaListener();

 // Workflows
 addWorkflowLoadedListener();
+addUpdateAllNodesRequestedListener();

 // DND
 addImageDroppedListener();
@@ -8,7 +8,6 @@ import {
   selectControlAdapterById,
 } from 'features/controlAdapters/store/controlAdaptersSlice';
 import { isControlNetOrT2IAdapter } from 'features/controlAdapters/store/types';
-import { SAVE_IMAGE } from 'features/nodes/util/graphBuilders/constants';
 import { addToast } from 'features/system/store/systemSlice';
 import { t } from 'i18next';
 import { imagesApi } from 'services/api/endpoints/images';
@@ -38,6 +37,7 @@ export const addControlNetImageProcessedListener = () => {
       // ControlNet one-off procressing graph is just the processor node, no edges.
       // Also we need to grab the image.

+      const nodeId = ca.processorNode.id;
       const enqueueBatchArg: BatchConfig = {
         prepend: true,
         batch: {
@@ -46,27 +46,10 @@ export const addControlNetImageProcessedListener = () => {
           [ca.processorNode.id]: {
             ...ca.processorNode,
             is_intermediate: true,
+            use_cache: false,
             image: { image_name: ca.controlImage },
           },
-          [SAVE_IMAGE]: {
-            id: SAVE_IMAGE,
-            type: 'save_image',
-            is_intermediate: true,
-            use_cache: false,
-          },
         },
-        edges: [
-          {
-            source: {
-              node_id: ca.processorNode.id,
-              field: 'image',
-            },
-            destination: {
-              node_id: SAVE_IMAGE,
-              field: 'image',
-            },
-          },
-        ],
       },
       runs: 1,
     },
@@ -90,7 +73,7 @@ export const addControlNetImageProcessedListener = () => {
           socketInvocationComplete.match(action) &&
           action.payload.data.queue_batch_id ===
             enqueueResult.batch.batch_id &&
-          action.payload.data.source_node_id === SAVE_IMAGE
+          action.payload.data.source_node_id === nodeId
         );

         // We still have to check the output type
@@ -7,7 +7,10 @@ import {
   imageSelected,
 } from 'features/gallery/store/gallerySlice';
 import { IMAGE_CATEGORIES } from 'features/gallery/store/types';
-import { CANVAS_OUTPUT } from 'features/nodes/util/graphBuilders/constants';
+import {
+  LINEAR_UI_OUTPUT,
+  nodeIDDenyList,
+} from 'features/nodes/util/graphBuilders/constants';
 import { boardsApi } from 'services/api/endpoints/boards';
 import { imagesApi } from 'services/api/endpoints/images';
 import { isImageOutput } from 'services/api/guards';
@@ -19,7 +22,7 @@ import {
 import { startAppListening } from '../..';

 // These nodes output an image, but do not actually *save* an image, so we don't want to handle the gallery logic on them
-const nodeDenylist = ['load_image', 'image'];
+const nodeTypeDenylist = ['load_image', 'image'];

 export const addInvocationCompleteEventListener = () => {
   startAppListening({
@@ -32,22 +35,31 @@ export const addInvocationCompleteEventListener = () => {
         `Invocation complete (${action.payload.data.node.type})`
       );

-      const { result, node, queue_batch_id } = data;
+      const { result, node, queue_batch_id, source_node_id } = data;

       // This complete event has an associated image output
-      if (isImageOutput(result) && !nodeDenylist.includes(node.type)) {
+      if (
+        isImageOutput(result) &&
+        !nodeTypeDenylist.includes(node.type) &&
+        !nodeIDDenyList.includes(source_node_id)
+      ) {
         const { image_name } = result.image;
         const { canvas, gallery } = getState();

         // This populates the `getImageDTO` cache
-        const imageDTO = await dispatch(
-          imagesApi.endpoints.getImageDTO.initiate(image_name)
-        ).unwrap();
+        const imageDTORequest = dispatch(
+          imagesApi.endpoints.getImageDTO.initiate(image_name, {
+            forceRefetch: true,
+          })
+        );
+
+        const imageDTO = await imageDTORequest.unwrap();
+        imageDTORequest.unsubscribe();

         // Add canvas images to the staging area
         if (
           canvas.batchIds.includes(queue_batch_id) &&
-          [CANVAS_OUTPUT].includes(data.source_node_id)
+          [LINEAR_UI_OUTPUT].includes(data.source_node_id)
         ) {
           dispatch(addImageToStagingArea(imageDTO));
         }
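The cache-population change in the hunk above swaps a one-shot `await dispatch(...).unwrap()` for a held request handle so the RTK Query cache subscription can be explicitly released. This is the standard RTK Query pattern; a minimal sketch, with `api.endpoints.getThing` standing in for `imagesApi.endpoints.getImageDTO`:

```ts
// Hold the subscription handle returned by initiate()...
const request = dispatch(
  api.endpoints.getThing.initiate(arg, { forceRefetch: true })
);
// ...read the result...
const result = await request.unwrap();
// ...then release the cache subscription so the entry can be evicted later.
request.unsubscribe();
```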
@@ -0,0 +1,52 @@
import {
  getNeedsUpdate,
  updateNode,
} from 'features/nodes/hooks/useNodeVersion';
import { updateAllNodesRequested } from 'features/nodes/store/actions';
import { nodeReplaced } from 'features/nodes/store/nodesSlice';
import { startAppListening } from '..';
import { logger } from 'app/logging/logger';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { t } from 'i18next';

export const addUpdateAllNodesRequestedListener = () => {
  startAppListening({
    actionCreator: updateAllNodesRequested,
    effect: (action, { dispatch, getState }) => {
      const log = logger('nodes');
      const nodes = getState().nodes.nodes;
      const templates = getState().nodes.nodeTemplates;

      let unableToUpdateCount = 0;

      nodes.forEach((node) => {
        const template = templates[node.data.type];
        const needsUpdate = getNeedsUpdate(node, template);
        const updatedNode = updateNode(node, template);
        if (!updatedNode) {
          if (needsUpdate) {
            unableToUpdateCount++;
          }
          return;
        }
        dispatch(nodeReplaced({ nodeId: updatedNode.id, node: updatedNode }));
      });

      if (unableToUpdateCount) {
        log.warn(
          `Unable to update ${unableToUpdateCount} nodes. Please report this issue.`
        );
        dispatch(
          addToast(
            makeToast({
              title: t('nodes.unableToUpdateNodes', {
                count: unableToUpdateCount,
              }),
            })
          )
        );
      }
    },
  });
};
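For context, the new listener is driven entirely by the `updateAllNodesRequested` action; a hedged usage sketch (assuming a configured app store named `store`):

```ts
// Dispatching the action anywhere (e.g. from the "Update All Nodes" button)
// runs the effect above, replacing every updatable node in one pass.
store.dispatch(updateAllNodesRequested());
```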
@@ -157,6 +157,8 @@ const ImageMetadataActions = (props: Props) => {
     return null;
   }

+  console.log(metadata);
+
   return (
     <>
       {metadata.created_by && (
@@ -3,7 +3,7 @@ import { memo } from 'react';
 import NodeCollapseButton from '../common/NodeCollapseButton';
 import NodeTitle from '../common/NodeTitle';
 import InvocationNodeCollapsedHandles from './InvocationNodeCollapsedHandles';
-import InvocationNodeNotes from './InvocationNodeNotes';
+import InvocationNodeInfoIcon from './InvocationNodeInfoIcon';
 import InvocationNodeStatusIndicator from './InvocationNodeStatusIndicator';

 type Props = {
@@ -34,7 +34,7 @@ const InvocationNodeHeader = ({ nodeId, isOpen }: Props) => {
       <NodeTitle nodeId={nodeId} />
       <Flex alignItems="center">
         <InvocationNodeStatusIndicator nodeId={nodeId} />
-        <InvocationNodeNotes nodeId={nodeId} />
+        <InvocationNodeInfoIcon nodeId={nodeId} />
       </Flex>
       {!isOpen && <InvocationNodeCollapsedHandles nodeId={nodeId} />}
     </Flex>
@@ -1,85 +1,39 @@
-import {
-  Flex,
-  Icon,
-  Modal,
-  ModalBody,
-  ModalCloseButton,
-  ModalContent,
-  ModalFooter,
-  ModalHeader,
-  ModalOverlay,
-  Text,
-  Tooltip,
-  useDisclosure,
-} from '@chakra-ui/react';
+import { Flex, Icon, Text, Tooltip } from '@chakra-ui/react';
 import { compare } from 'compare-versions';
 import { useNodeData } from 'features/nodes/hooks/useNodeData';
-import { useNodeLabel } from 'features/nodes/hooks/useNodeLabel';
 import { useNodeTemplate } from 'features/nodes/hooks/useNodeTemplate';
-import { useNodeTemplateTitle } from 'features/nodes/hooks/useNodeTemplateTitle';
+import { useNodeVersion } from 'features/nodes/hooks/useNodeVersion';
 import { isInvocationNodeData } from 'features/nodes/types/types';
 import { memo, useMemo } from 'react';
-import { FaInfoCircle } from 'react-icons/fa';
-import NotesTextarea from './NotesTextarea';
-import { useDoNodeVersionsMatch } from 'features/nodes/hooks/useDoNodeVersionsMatch';
 import { useTranslation } from 'react-i18next';
+import { FaInfoCircle } from 'react-icons/fa';

 interface Props {
   nodeId: string;
 }

-const InvocationNodeNotes = ({ nodeId }: Props) => {
-  const { isOpen, onOpen, onClose } = useDisclosure();
-  const label = useNodeLabel(nodeId);
-  const title = useNodeTemplateTitle(nodeId);
-  const doVersionsMatch = useDoNodeVersionsMatch(nodeId);
-  const { t } = useTranslation();
+const InvocationNodeInfoIcon = ({ nodeId }: Props) => {
+  const { needsUpdate } = useNodeVersion(nodeId);

   return (
-    <>
-      <Tooltip
-        label={<TooltipContent nodeId={nodeId} />}
-        placement="top"
-        shouldWrapChildren
-      >
-        <Flex
-          className="nodrag"
-          onClick={onOpen}
-          sx={{
-            alignItems: 'center',
-            justifyContent: 'center',
-            w: 8,
-            h: 8,
-            cursor: 'pointer',
-          }}
-        >
-          <Icon
-            as={FaInfoCircle}
-            sx={{
-              boxSize: 4,
-              w: 8,
-              color: doVersionsMatch ? 'base.400' : 'error.400',
-            }}
-          />
-        </Flex>
-      </Tooltip>
-
-      <Modal isOpen={isOpen} onClose={onClose} isCentered>
-        <ModalOverlay />
-        <ModalContent>
-          <ModalHeader>{label || title || t('nodes.unknownNode')}</ModalHeader>
-          <ModalCloseButton />
-          <ModalBody>
-            <NotesTextarea nodeId={nodeId} />
-          </ModalBody>
-          <ModalFooter />
-        </ModalContent>
-      </Modal>
-    </>
+    <Tooltip
+      label={<TooltipContent nodeId={nodeId} />}
+      placement="top"
+      shouldWrapChildren
+    >
+      <Icon
+        as={FaInfoCircle}
+        sx={{
+          boxSize: 4,
+          w: 8,
+          color: needsUpdate ? 'error.400' : 'base.400',
+        }}
+      />
+    </Tooltip>
   );
 };

-export default memo(InvocationNodeNotes);
+export default memo(InvocationNodeInfoIcon);

 const TooltipContent = memo(({ nodeId }: { nodeId: string }) => {
   const data = useNodeData(nodeId);
@@ -3,15 +3,22 @@ import { useAppDispatch } from 'app/store/storeHooks';
 import IAIIconButton from 'common/components/IAIIconButton';
 import { addNodePopoverOpened } from 'features/nodes/store/nodesSlice';
 import { memo, useCallback } from 'react';
-import { FaPlus } from 'react-icons/fa';
+import { FaPlus, FaSync } from 'react-icons/fa';
+import { useTranslation } from 'react-i18next';
+import IAIButton from 'common/components/IAIButton';
+import { useGetNodesNeedUpdate } from 'features/nodes/hooks/useGetNodesNeedUpdate';
+import { updateAllNodesRequested } from 'features/nodes/store/actions';

 const TopLeftPanel = () => {
   const dispatch = useAppDispatch();
+  const { t } = useTranslation();
+  const nodesNeedUpdate = useGetNodesNeedUpdate();
   const handleOpenAddNodePopover = useCallback(() => {
     dispatch(addNodePopoverOpened());
   }, [dispatch]);
+  const handleClickUpdateNodes = useCallback(() => {
+    dispatch(updateAllNodesRequested());
+  }, [dispatch]);

   return (
     <Flex sx={{ gap: 2, position: 'absolute', top: 2, insetInlineStart: 2 }}>
@@ -21,6 +28,11 @@ const TopLeftPanel = () => {
         icon={<FaPlus />}
         onClick={handleOpenAddNodePopover}
       />
+      {nodesNeedUpdate && (
+        <IAIButton leftIcon={<FaSync />} onClick={handleClickUpdateNodes}>
+          {t('nodes.updateAllNodes')}
+        </IAIButton>
+      )}
     </Flex>
   );
 };
@@ -0,0 +1,125 @@
import {
  Box,
  Flex,
  FormControl,
  FormLabel,
  HStack,
  Text,
} from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIIconButton from 'common/components/IAIIconButton';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import { useNodeVersion } from 'features/nodes/hooks/useNodeVersion';
import {
  InvocationNodeData,
  InvocationTemplate,
  isInvocationNode,
} from 'features/nodes/types/types';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { FaSync } from 'react-icons/fa';
import { Node } from 'reactflow';
import NotesTextarea from '../../flow/nodes/Invocation/NotesTextarea';
import ScrollableContent from '../ScrollableContent';
import EditableNodeTitle from './details/EditableNodeTitle';

const selector = createSelector(
  stateSelector,
  ({ nodes }) => {
    const lastSelectedNodeId =
      nodes.selectedNodes[nodes.selectedNodes.length - 1];

    const lastSelectedNode = nodes.nodes.find(
      (node) => node.id === lastSelectedNodeId
    );

    const lastSelectedNodeTemplate = lastSelectedNode
      ? nodes.nodeTemplates[lastSelectedNode.data.type]
      : undefined;

    return {
      node: lastSelectedNode,
      template: lastSelectedNodeTemplate,
    };
  },
  defaultSelectorOptions
);

const InspectorDetailsTab = () => {
  const { node, template } = useAppSelector(selector);
  const { t } = useTranslation();

  if (!template || !isInvocationNode(node)) {
    return (
      <IAINoContentFallback label={t('nodes.noNodeSelected')} icon={null} />
    );
  }

  return <Content node={node} template={template} />;
};

export default memo(InspectorDetailsTab);

const Content = (props: {
  node: Node<InvocationNodeData>;
  template: InvocationTemplate;
}) => {
  const { t } = useTranslation();
  const { needsUpdate, updateNode } = useNodeVersion(props.node.id);
  return (
    <Box
      sx={{
        position: 'relative',
        w: 'full',
        h: 'full',
      }}
    >
      <ScrollableContent>
        <Flex
          sx={{
            flexDir: 'column',
            position: 'relative',
            p: 1,
            gap: 2,
            w: 'full',
          }}
        >
          <EditableNodeTitle nodeId={props.node.data.id} />
          <HStack>
            <FormControl>
              <FormLabel>Node Type</FormLabel>
              <Text fontSize="sm" fontWeight={600}>
                {props.template.title}
              </Text>
            </FormControl>
            <Flex
              flexDir="row"
              alignItems="center"
              justifyContent="space-between"
              w="full"
            >
              <FormControl isInvalid={needsUpdate}>
                <FormLabel>Node Version</FormLabel>
                <Text fontSize="sm" fontWeight={600}>
                  {props.node.data.version}
                </Text>
              </FormControl>
              {needsUpdate && (
                <IAIIconButton
                  aria-label={t('nodes.updateNode')}
                  tooltip={t('nodes.updateNode')}
                  icon={<FaSync />}
                  onClick={updateNode}
                />
              )}
            </Flex>
          </HStack>
          <NotesTextarea nodeId={props.node.data.id} />
        </Flex>
      </ScrollableContent>
    </Box>
  );
};
@@ -10,7 +10,7 @@ import { memo } from 'react';
 import InspectorDataTab from './InspectorDataTab';
 import InspectorOutputsTab from './InspectorOutputsTab';
 import InspectorTemplateTab from './InspectorTemplateTab';
-// import InspectorDetailsTab from './InspectorDetailsTab';
+import InspectorDetailsTab from './InspectorDetailsTab';

 const InspectorPanel = () => {
   return (
@@ -30,16 +30,16 @@ const InspectorPanel = () => {
       sx={{ display: 'flex', flexDir: 'column', w: 'full', h: 'full' }}
     >
       <TabList>
-        {/* <Tab>Details</Tab> */}
+        <Tab>Details</Tab>
         <Tab>Outputs</Tab>
         <Tab>Data</Tab>
         <Tab>Template</Tab>
       </TabList>

       <TabPanels>
-        {/* <TabPanel>
+        <TabPanel>
           <InspectorDetailsTab />
-        </TabPanel> */}
+        </TabPanel>
         <TabPanel>
           <InspectorOutputsTab />
         </TabPanel>
@@ -0,0 +1,74 @@
import {
  Editable,
  EditableInput,
  EditablePreview,
  Flex,
} from '@chakra-ui/react';
import { useAppDispatch } from 'app/store/storeHooks';
import { useNodeLabel } from 'features/nodes/hooks/useNodeLabel';
import { useNodeTemplateTitle } from 'features/nodes/hooks/useNodeTemplateTitle';
import { nodeLabelChanged } from 'features/nodes/store/nodesSlice';
import { memo, useCallback, useEffect, useState } from 'react';
import { useTranslation } from 'react-i18next';

type Props = {
  nodeId: string;
  title?: string;
};

const EditableNodeTitle = ({ nodeId, title }: Props) => {
  const dispatch = useAppDispatch();
  const label = useNodeLabel(nodeId);
  const templateTitle = useNodeTemplateTitle(nodeId);
  const { t } = useTranslation();

  const [localTitle, setLocalTitle] = useState('');
  const handleSubmit = useCallback(
    async (newTitle: string) => {
      dispatch(nodeLabelChanged({ nodeId, label: newTitle }));
      setLocalTitle(
        label || title || templateTitle || t('nodes.problemSettingTitle')
      );
    },
    [dispatch, nodeId, title, templateTitle, label, t]
  );

  const handleChange = useCallback((newTitle: string) => {
    setLocalTitle(newTitle);
  }, []);

  useEffect(() => {
    // Another component may change the title; sync local title with global state
    setLocalTitle(
      label || title || templateTitle || t('nodes.problemSettingTitle')
    );
  }, [label, templateTitle, title, t]);

  return (
    <Flex
      sx={{
        w: 'full',
        h: 'full',
        alignItems: 'center',
        justifyContent: 'center',
      }}
    >
      <Editable
        as={Flex}
        value={localTitle}
        onChange={handleChange}
        onSubmit={handleSubmit}
        w="full"
        fontWeight={600}
      >
        <EditablePreview noOfLines={1} />
        <EditableInput
          className="nodrag"
          _focusVisible={{ boxShadow: 'none' }}
        />
      </Editable>
    </Flex>
  );
};

export default memo(EditableNodeTitle);
@@ -1,19 +1,10 @@
 import { createSelector } from '@reduxjs/toolkit';
 import { RootState } from 'app/store/store';
 import { useAppSelector } from 'app/store/storeHooks';
-import { reduce } from 'lodash-es';
 import { useCallback } from 'react';
 import { Node, useReactFlow } from 'reactflow';
 import { AnyInvocationType } from 'services/events/types';
-import { v4 as uuidv4 } from 'uuid';
-import {
-  CurrentImageNodeData,
-  InputFieldValue,
-  InvocationNodeData,
-  NotesNodeData,
-  OutputFieldValue,
-} from '../types/types';
-import { buildInputFieldValue } from '../util/fieldValueBuilders';
+import { buildNodeData } from '../store/util/buildNodeData';
 import { DRAG_HANDLE_CLASSNAME, NODE_WIDTH } from '../types/constants';

 const templatesSelector = createSelector(
@@ -26,14 +17,12 @@ export const SHARED_NODE_PROPERTIES: Partial<Node> = {
 };

 export const useBuildNodeData = () => {
-  const invocationTemplates = useAppSelector(templatesSelector);
+  const nodeTemplates = useAppSelector(templatesSelector);

   const flow = useReactFlow();

   return useCallback(
     (type: AnyInvocationType | 'current_image' | 'notes') => {
-      const nodeId = uuidv4();
-
       let _x = window.innerWidth / 2;
       let _y = window.innerHeight / 2;

@@ -47,111 +36,15 @@ export const useBuildNodeData = () => {
         _y = rect.height / 2 - NODE_WIDTH / 2;
       }

-      const { x, y } = flow.project({
+      const position = flow.project({
         x: _x,
         y: _y,
       });

-      if (type === 'current_image') {
-        const node: Node<CurrentImageNodeData> = {
-          ...SHARED_NODE_PROPERTIES,
-          id: nodeId,
-          type: 'current_image',
-          position: { x: x, y: y },
-          data: {
-            id: nodeId,
-            type: 'current_image',
-            isOpen: true,
-            label: 'Current Image',
-          },
-        };
+      const template = nodeTemplates[type];
+
-
-        return node;
-      }
-
-      if (type === 'notes') {
-        const node: Node<NotesNodeData> = {
-          ...SHARED_NODE_PROPERTIES,
-          id: nodeId,
-          type: 'notes',
-          position: { x: x, y: y },
-          data: {
-            id: nodeId,
-            isOpen: true,
-            label: 'Notes',
-            notes: '',
-            type: 'notes',
-          },
-        };
-
-        return node;
-      }
-
-      const template = invocationTemplates[type];
-
-      if (template === undefined) {
-        console.error(`Unable to find template ${type}.`);
-        return;
-      }
-
-      const inputs = reduce(
-        template.inputs,
-        (inputsAccumulator, inputTemplate, inputName) => {
-          const fieldId = uuidv4();
-
-          const inputFieldValue: InputFieldValue = buildInputFieldValue(
-            fieldId,
-            inputTemplate
-          );
-
-          inputsAccumulator[inputName] = inputFieldValue;
-
-          return inputsAccumulator;
-        },
-        {} as Record<string, InputFieldValue>
-      );
-
-      const outputs = reduce(
-        template.outputs,
-        (outputsAccumulator, outputTemplate, outputName) => {
-          const fieldId = uuidv4();
-
-          const outputFieldValue: OutputFieldValue = {
-            id: fieldId,
-            name: outputName,
-            type: outputTemplate.type,
-            fieldKind: 'output',
-          };
-
-          outputsAccumulator[outputName] = outputFieldValue;
-
-          return outputsAccumulator;
-        },
-        {} as Record<string, OutputFieldValue>
-      );
-
-      const invocation: Node<InvocationNodeData> = {
-        ...SHARED_NODE_PROPERTIES,
-        id: nodeId,
-        type: 'invocation',
-        position: { x: x, y: y },
-        data: {
-          id: nodeId,
-          type,
-          version: template.version,
-          label: '',
-          notes: '',
-          isOpen: true,
-          embedWorkflow: false,
-          isIntermediate: type === 'save_image' ? false : true,
-          inputs,
-          outputs,
-          useCache: template.useCache,
-        },
-      };
-
-      return invocation;
+      return buildNodeData(type, position, template);
     },
-    [invocationTemplates, flow]
+    [nodeTemplates, flow]
   );
 };
@@ -0,0 +1,25 @@
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { getNeedsUpdate } from './useNodeVersion';

const selector = createSelector(
  stateSelector,
  (state) => {
    const nodes = state.nodes.nodes;
    const templates = state.nodes.nodeTemplates;

    const needsUpdate = nodes.some((node) => {
      const template = templates[node.data.type];
      return getNeedsUpdate(node, template);
    });
    return needsUpdate;
  },
  defaultSelectorOptions
);

export const useGetNodesNeedUpdate = () => {
  const getNeedsUpdate = useAppSelector(selector);
  return getNeedsUpdate;
};
@@ -0,0 +1,27 @@
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { useMemo } from 'react';
import { AnyInvocationType } from 'services/events/types';

export const useNodeTemplateByType = (
  type: AnyInvocationType | 'current_image' | 'notes'
) => {
  const selector = useMemo(
    () =>
      createSelector(
        stateSelector,
        ({ nodes }) => {
          const nodeTemplate = nodes.nodeTemplates[type];
          return nodeTemplate;
        },
        defaultSelectorOptions
      ),
    [type]
  );

  const nodeTemplate = useAppSelector(selector);

  return nodeTemplate;
};
119 invokeai/frontend/web/src/features/nodes/hooks/useNodeVersion.ts Normal file
@@ -0,0 +1,119 @@
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { satisfies } from 'compare-versions';
import { cloneDeep, defaultsDeep } from 'lodash-es';
import { useCallback, useMemo } from 'react';
import { Node } from 'reactflow';
import { AnyInvocationType } from 'services/events/types';
import { nodeReplaced } from '../store/nodesSlice';
import { buildNodeData } from '../store/util/buildNodeData';
import {
  InvocationNodeData,
  InvocationTemplate,
  NodeData,
  isInvocationNode,
  zParsedSemver,
} from '../types/types';
import { useAppToaster } from 'app/components/Toaster';
import { useTranslation } from 'react-i18next';

export const getNeedsUpdate = (
  node?: Node<NodeData>,
  template?: InvocationTemplate
) => {
  if (!isInvocationNode(node) || !template) {
    return false;
  }
  return node.data.version !== template.version;
};

export const getMayUpdateNode = (
  node?: Node<NodeData>,
  template?: InvocationTemplate
) => {
  const needsUpdate = getNeedsUpdate(node, template);
  if (
    !needsUpdate ||
    !isInvocationNode(node) ||
    !template ||
    !node.data.version
  ) {
    return false;
  }
  const templateMajor = zParsedSemver.parse(template.version).major;

  return satisfies(node.data.version, `^${templateMajor}`);
};

export const updateNode = (
  node?: Node<NodeData>,
  template?: InvocationTemplate
) => {
  const mayUpdate = getMayUpdateNode(node, template);
  if (
    !mayUpdate ||
    !isInvocationNode(node) ||
    !template ||
    !node.data.version
  ) {
    return;
  }

  const defaults = buildNodeData(
    node.data.type as AnyInvocationType,
    node.position,
    template
  ) as Node<InvocationNodeData>;

  const clone = cloneDeep(node);
  clone.data.version = template.version;
  defaultsDeep(clone, defaults);
  return clone;
};

export const useNodeVersion = (nodeId: string) => {
  const dispatch = useAppDispatch();
  const toast = useAppToaster();
  const { t } = useTranslation();
  const selector = useMemo(
    () =>
      createSelector(
        stateSelector,
        ({ nodes }) => {
          const node = nodes.nodes.find((node) => node.id === nodeId);
          const nodeTemplate = nodes.nodeTemplates[node?.data.type ?? ''];
          return { node, nodeTemplate };
        },
        defaultSelectorOptions
      ),
    [nodeId]
  );

  const { node, nodeTemplate } = useAppSelector(selector);

  const needsUpdate = useMemo(
    () => getNeedsUpdate(node, nodeTemplate),
    [node, nodeTemplate]
  );

  const mayUpdate = useMemo(
    () => getMayUpdateNode(node, nodeTemplate),
    [node, nodeTemplate]
  );

  const _updateNode = useCallback(() => {
    const needsUpdate = getNeedsUpdate(node, nodeTemplate);
    const updatedNode = updateNode(node, nodeTemplate);
    if (!updatedNode) {
      if (needsUpdate) {
        toast({ title: t('nodes.unableToUpdateNodes', { count: 1 }) });
      }
      return;
    }
    dispatch(nodeReplaced({ nodeId: updatedNode.id, node: updatedNode }));
  }, [dispatch, node, nodeTemplate, t, toast]);

  return { needsUpdate, mayUpdate, updateNode: _updateNode };
};
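The `getMayUpdateNode` helper above gates automatic updates with compare-versions' `satisfies` against a caret range built from the template's major version. A small sketch of the semantics (version values are illustrative):

```ts
import { satisfies } from 'compare-versions';

satisfies('1.0.0', '^1'); // true  -> same major, updateNode() may migrate it
satisfies('1.2.3', '^1'); // true
satisfies('1.0.0', '^2'); // false -> major mismatch, counted as "unable to update"
```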
@@ -21,3 +21,7 @@ export const isAnyGraphBuilt = isAnyOf(
 export const workflowLoadRequested = createAction<Workflow>(
   'nodes/workflowLoadRequested'
 );
+
+export const updateAllNodesRequested = createAction(
+  'nodes/updateAllNodesRequested'
+);
@@ -149,6 +149,18 @@ const nodesSlice = createSlice({
     nodesChanged: (state, action: PayloadAction<NodeChange[]>) => {
       state.nodes = applyNodeChanges(action.payload, state.nodes);
     },
+    nodeReplaced: (
+      state,
+      action: PayloadAction<{ nodeId: string; node: Node }>
+    ) => {
+      const nodeIndex = state.nodes.findIndex(
+        (n) => n.id === action.payload.nodeId
+      );
+      if (nodeIndex < 0) {
+        return;
+      }
+      state.nodes[nodeIndex] = action.payload.node;
+    },
     nodeAdded: (
       state,
       action: PayloadAction<
@@ -1029,6 +1041,7 @@ export const {
   mouseOverFieldChanged,
   mouseOverNodeChanged,
   nodeAdded,
+  nodeReplaced,
   nodeEditorReset,
   nodeEmbedWorkflowChanged,
   nodeExclusivelySelected,
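The new `nodeReplaced` reducer swaps a node wholesale and silently no-ops when the id is absent; under `createSlice`'s Immer integration, the direct index assignment still produces an immutable update. A hedged usage sketch (ids illustrative):

```ts
// Replace a migrated node in place; does nothing if 'abc-123' is not in state.
dispatch(nodeReplaced({ nodeId: 'abc-123', node: updatedNode }));
```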
@@ -0,0 +1,127 @@
import { DRAG_HANDLE_CLASSNAME } from 'features/nodes/types/constants';
import {
  CurrentImageNodeData,
  InputFieldValue,
  InvocationNodeData,
  InvocationTemplate,
  NotesNodeData,
  OutputFieldValue,
} from 'features/nodes/types/types';
import { buildInputFieldValue } from 'features/nodes/util/fieldValueBuilders';
import { reduce } from 'lodash-es';
import { Node, XYPosition } from 'reactflow';
import { AnyInvocationType } from 'services/events/types';
import { v4 as uuidv4 } from 'uuid';

export const SHARED_NODE_PROPERTIES: Partial<Node> = {
  dragHandle: `.${DRAG_HANDLE_CLASSNAME}`,
};

export const buildNodeData = (
  type: AnyInvocationType | 'current_image' | 'notes',
  position: XYPosition,
  template?: InvocationTemplate
):
  | Node<CurrentImageNodeData>
  | Node<NotesNodeData>
  | Node<InvocationNodeData>
  | undefined => {
  const nodeId = uuidv4();

  if (type === 'current_image') {
    const node: Node<CurrentImageNodeData> = {
      ...SHARED_NODE_PROPERTIES,
      id: nodeId,
      type: 'current_image',
      position,
      data: {
        id: nodeId,
        type: 'current_image',
        isOpen: true,
        label: 'Current Image',
      },
    };

    return node;
  }

  if (type === 'notes') {
    const node: Node<NotesNodeData> = {
      ...SHARED_NODE_PROPERTIES,
      id: nodeId,
      type: 'notes',
      position,
      data: {
        id: nodeId,
        isOpen: true,
        label: 'Notes',
        notes: '',
        type: 'notes',
      },
    };

    return node;
  }

  if (template === undefined) {
    console.error(`Unable to find template ${type}.`);
    return;
  }

  const inputs = reduce(
    template.inputs,
    (inputsAccumulator, inputTemplate, inputName) => {
      const fieldId = uuidv4();

      const inputFieldValue: InputFieldValue = buildInputFieldValue(
        fieldId,
        inputTemplate
      );

      inputsAccumulator[inputName] = inputFieldValue;

      return inputsAccumulator;
    },
    {} as Record<string, InputFieldValue>
  );

  const outputs = reduce(
    template.outputs,
    (outputsAccumulator, outputTemplate, outputName) => {
      const fieldId = uuidv4();

      const outputFieldValue: OutputFieldValue = {
        id: fieldId,
        name: outputName,
        type: outputTemplate.type,
        fieldKind: 'output',
      };

      outputsAccumulator[outputName] = outputFieldValue;

      return outputsAccumulator;
    },
    {} as Record<string, OutputFieldValue>
  );

  const invocation: Node<InvocationNodeData> = {
    ...SHARED_NODE_PROPERTIES,
    id: nodeId,
    type: 'invocation',
    position,
    data: {
      id: nodeId,
      type,
      version: template.version,
      label: '',
      notes: '',
      isOpen: true,
      embedWorkflow: false,
      isIntermediate: type === 'save_image' ? false : true,
      inputs,
      outputs,
      useCache: template.useCache,
    },
  };

  return invocation;
};
@@ -23,7 +23,7 @@ import {
   RESIZE_HRF,
   VAE_LOADER,
 } from './constants';
-import { upsertMetadata } from './metadata';
+import { setMetadataReceivingNode, upsertMetadata } from './metadata';

 // Copy certain connections from previous DENOISE_LATENTS to new DENOISE_LATENTS_HRF.
 function copyConnectionsToDenoiseLatentsHrf(graph: NonNullableGraph): void {
@@ -369,4 +369,5 @@ export const addHrfToGraph = (
     hrf_enabled: hrfEnabled,
     hrf_method: hrfMethod,
   });
+  setMetadataReceivingNode(graph, LATENTS_TO_IMAGE_HRF_HR);
 };
@@ -1,20 +1,20 @@
 import { RootState } from 'app/store/store';
 import { NonNullableGraph } from 'features/nodes/types/types';
 import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
-import { SaveImageInvocation } from 'services/api/types';
+import { LinearUIOutputInvocation } from 'services/api/types';
 import {
   CANVAS_OUTPUT,
   LATENTS_TO_IMAGE,
   LATENTS_TO_IMAGE_HRF_HR,
+  LINEAR_UI_OUTPUT,
   NSFW_CHECKER,
-  SAVE_IMAGE,
   WATERMARKER,
 } from './constants';

 /**
  * Set the `use_cache` field on the linear/canvas graph's final image output node to False.
  */
-export const addSaveImageNode = (
+export const addLinearUIOutputNode = (
   state: RootState,
   graph: NonNullableGraph
 ): void => {
@@ -23,18 +23,18 @@ export const addLinearUIOutputNode = (
     activeTabName === 'unifiedCanvas' ? !state.canvas.shouldAutoSave : false;
   const { autoAddBoardId } = state.gallery;

-  const saveImageNode: SaveImageInvocation = {
-    id: SAVE_IMAGE,
-    type: 'save_image',
+  const linearUIOutputNode: LinearUIOutputInvocation = {
+    id: LINEAR_UI_OUTPUT,
+    type: 'linear_ui_output',
     is_intermediate,
     use_cache: false,
     board: autoAddBoardId === 'none' ? undefined : { board_id: autoAddBoardId },
   };

-  graph.nodes[SAVE_IMAGE] = saveImageNode;
+  graph.nodes[LINEAR_UI_OUTPUT] = linearUIOutputNode;

   const destination = {
-    node_id: SAVE_IMAGE,
+    node_id: LINEAR_UI_OUTPUT,
     field: 'image',
   };
@@ -4,9 +4,9 @@ import { ESRGANModelName } from 'features/parameters/store/postprocessingSlice';
 import {
   ESRGANInvocation,
   Graph,
-  SaveImageInvocation,
+  LinearUIOutputInvocation,
 } from 'services/api/types';
-import { REALESRGAN as ESRGAN, SAVE_IMAGE } from './constants';
+import { ESRGAN, LINEAR_UI_OUTPUT } from './constants';
 import { addCoreMetadataNode, upsertMetadata } from './metadata';

 type Arg = {
@@ -28,9 +28,9 @@ export const buildAdHocUpscaleGraph = ({
     is_intermediate: true,
   };

-  const saveImageNode: SaveImageInvocation = {
-    id: SAVE_IMAGE,
-    type: 'save_image',
+  const linearUIOutputNode: LinearUIOutputInvocation = {
+    id: LINEAR_UI_OUTPUT,
+    type: 'linear_ui_output',
     use_cache: false,
     is_intermediate: false,
     board: autoAddBoardId === 'none' ? undefined : { board_id: autoAddBoardId },
@@ -40,7 +40,7 @@ export const buildAdHocUpscaleGraph = ({
     id: `adhoc-esrgan-graph`,
     nodes: {
       [ESRGAN]: realesrganNode,
-      [SAVE_IMAGE]: saveImageNode,
+      [LINEAR_UI_OUTPUT]: linearUIOutputNode,
     },
     edges: [
       {
@@ -49,14 +49,14 @@ export const buildAdHocUpscaleGraph = ({
         field: 'image',
       },
       destination: {
-        node_id: SAVE_IMAGE,
+        node_id: LINEAR_UI_OUTPUT,
         field: 'image',
       },
     },
   ],
 };

-  addCoreMetadataNode(graph, {});
+  addCoreMetadataNode(graph, {}, ESRGAN);
   upsertMetadata(graph, {
     esrgan_model: esrganModelName,
   });
@@ -6,7 +6,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
 import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
 import { addLoRAsToGraph } from './addLoRAsToGraph';
 import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
-import { addSaveImageNode } from './addSaveImageNode';
+import { addLinearUIOutputNode } from './addLinearUIOutputNode';
 import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
 import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
 import { addVAEToGraph } from './addVAEToGraph';
@@ -308,24 +308,30 @@ export const buildCanvasImageToImageGraph = (
     });
   }

-  addCoreMetadataNode(graph, {
-    generation_mode: 'img2img',
-    cfg_scale,
-    width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
-    height: !isUsingScaledDimensions
-      ? height
-      : scaledBoundingBoxDimensions.height,
-    positive_prompt: positivePrompt,
-    negative_prompt: negativePrompt,
-    model,
-    seed,
-    steps,
-    rand_device: use_cpu ? 'cpu' : 'cuda',
-    scheduler,
-    clip_skip: clipSkip,
-    strength,
-    init_image: initialImage.image_name,
-  });
+  addCoreMetadataNode(
+    graph,
+    {
+      generation_mode: 'img2img',
+      cfg_scale,
+      width: !isUsingScaledDimensions
+        ? width
+        : scaledBoundingBoxDimensions.width,
+      height: !isUsingScaledDimensions
+        ? height
+        : scaledBoundingBoxDimensions.height,
+      positive_prompt: positivePrompt,
+      negative_prompt: negativePrompt,
+      model,
+      seed,
+      steps,
+      rand_device: use_cpu ? 'cpu' : 'cuda',
+      scheduler,
+      clip_skip: clipSkip,
+      strength,
+      init_image: initialImage.image_name,
+    },
+    CANVAS_OUTPUT
+  );

   // Add Seamless To Graph
   if (seamlessXAxis || seamlessYAxis) {
@@ -357,7 +363,7 @@ export const buildCanvasImageToImageGraph = (
     addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
   }

-  addSaveImageNode(state, graph);
+  addLinearUIOutputNode(state, graph);

   return graph;
 };
@@ -13,7 +13,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
 import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
 import { addLoRAsToGraph } from './addLoRAsToGraph';
 import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
-import { addSaveImageNode } from './addSaveImageNode';
+import { addLinearUIOutputNode } from './addLinearUIOutputNode';
 import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
 import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
 import { addVAEToGraph } from './addVAEToGraph';
@@ -666,7 +666,7 @@ export const buildCanvasInpaintGraph = (
     addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
   }

-  addSaveImageNode(state, graph);
+  addLinearUIOutputNode(state, graph);

   return graph;
 };
@@ -12,7 +12,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
 import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
 import { addLoRAsToGraph } from './addLoRAsToGraph';
 import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
-import { addSaveImageNode } from './addSaveImageNode';
+import { addLinearUIOutputNode } from './addLinearUIOutputNode';
 import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
 import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
 import { addVAEToGraph } from './addVAEToGraph';
@@ -770,7 +770,7 @@ export const buildCanvasOutpaintGraph = (
     addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
   }

-  addSaveImageNode(state, graph);
+  addLinearUIOutputNode(state, graph);

   return graph;
 };
@@ -7,7 +7,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
 import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
 import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
 import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
-import { addSaveImageNode } from './addSaveImageNode';
+import { addLinearUIOutputNode } from './addLinearUIOutputNode';
 import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
 import { addVAEToGraph } from './addVAEToGraph';
 import { addWatermarkerToGraph } from './addWatermarkerToGraph';
@@ -319,23 +319,29 @@ export const buildCanvasSDXLImageToImageGraph = (
     });
   }

-  addCoreMetadataNode(graph, {
-    generation_mode: 'img2img',
-    cfg_scale,
-    width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
-    height: !isUsingScaledDimensions
-      ? height
-      : scaledBoundingBoxDimensions.height,
-    positive_prompt: positivePrompt,
-    negative_prompt: negativePrompt,
-    model,
-    seed,
-    steps,
-    rand_device: use_cpu ? 'cpu' : 'cuda',
-    scheduler,
-    strength,
-    init_image: initialImage.image_name,
-  });
+  addCoreMetadataNode(
+    graph,
+    {
+      generation_mode: 'img2img',
+      cfg_scale,
+      width: !isUsingScaledDimensions
+        ? width
+        : scaledBoundingBoxDimensions.width,
+      height: !isUsingScaledDimensions
+        ? height
+        : scaledBoundingBoxDimensions.height,
+      positive_prompt: positivePrompt,
+      negative_prompt: negativePrompt,
+      model,
+      seed,
+      steps,
+      rand_device: use_cpu ? 'cpu' : 'cuda',
+      scheduler,
+      strength,
+      init_image: initialImage.image_name,
+    },
+    CANVAS_OUTPUT
+  );

   // Add Seamless To Graph
   if (seamlessXAxis || seamlessYAxis) {
@@ -380,7 +386,7 @@ export const buildCanvasSDXLImageToImageGraph = (
     addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
   }

-  addSaveImageNode(state, graph);
+  addLinearUIOutputNode(state, graph);

   return graph;
 };
@@ -14,7 +14,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
 import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
 import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
 import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
-import { addSaveImageNode } from './addSaveImageNode';
+import { addLinearUIOutputNode } from './addLinearUIOutputNode';
 import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
 import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
 import { addVAEToGraph } from './addVAEToGraph';
@@ -696,7 +696,7 @@ export const buildCanvasSDXLInpaintGraph = (
     addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
   }

-  addSaveImageNode(state, graph);
+  addLinearUIOutputNode(state, graph);

   return graph;
 };
@@ -13,7 +13,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
 import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
 import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
 import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
-import { addSaveImageNode } from './addSaveImageNode';
+import { addLinearUIOutputNode } from './addLinearUIOutputNode';
 import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
 import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
 import { addVAEToGraph } from './addVAEToGraph';
@@ -799,7 +799,7 @@ export const buildCanvasSDXLOutpaintGraph = (
     addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
   }

-  addSaveImageNode(state, graph);
+  addLinearUIOutputNode(state, graph);

   return graph;
 };
@@ -10,7 +10,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
 import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
 import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
 import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
-import { addSaveImageNode } from './addSaveImageNode';
+import { addLinearUIOutputNode } from './addLinearUIOutputNode';
 import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
 import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
 import { addVAEToGraph } from './addVAEToGraph';
@@ -301,21 +301,27 @@ export const buildCanvasSDXLTextToImageGraph = (
     });
   }

-  addCoreMetadataNode(graph, {
-    generation_mode: 'txt2img',
-    cfg_scale,
-    width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
-    height: !isUsingScaledDimensions
-      ? height
-      : scaledBoundingBoxDimensions.height,
-    positive_prompt: positivePrompt,
-    negative_prompt: negativePrompt,
-    model,
-    seed,
-    steps,
-    rand_device: use_cpu ? 'cpu' : 'cuda',
-    scheduler,
-  });
+  addCoreMetadataNode(
+    graph,
+    {
+      generation_mode: 'txt2img',
+      cfg_scale,
+      width: !isUsingScaledDimensions
+        ? width
+        : scaledBoundingBoxDimensions.width,
+      height: !isUsingScaledDimensions
+        ? height
+        : scaledBoundingBoxDimensions.height,
+      positive_prompt: positivePrompt,
+      negative_prompt: negativePrompt,
+      model,
+      seed,
+      steps,
+      rand_device: use_cpu ? 'cpu' : 'cuda',
+      scheduler,
+    },
+    CANVAS_OUTPUT
+  );

   // Add Seamless To Graph
   if (seamlessXAxis || seamlessYAxis) {
@@ -360,7 +366,7 @@ export const buildCanvasSDXLTextToImageGraph = (
     addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
   }

-  addSaveImageNode(state, graph);
+  addLinearUIOutputNode(state, graph);

   return graph;
 };
@@ -9,7 +9,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
 import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
 import { addLoRAsToGraph } from './addLoRAsToGraph';
 import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
-import { addSaveImageNode } from './addSaveImageNode';
+import { addLinearUIOutputNode } from './addLinearUIOutputNode';
 import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
 import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
 import { addVAEToGraph } from './addVAEToGraph';
@@ -289,22 +289,28 @@ export const buildCanvasTextToImageGraph = (
     });
   }

-  addCoreMetadataNode(graph, {
-    generation_mode: 'txt2img',
-    cfg_scale,
-    width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
-    height: !isUsingScaledDimensions
-      ? height
-      : scaledBoundingBoxDimensions.height,
-    positive_prompt: positivePrompt,
-    negative_prompt: negativePrompt,
-    model,
-    seed,
-    steps,
-    rand_device: use_cpu ? 'cpu' : 'cuda',
-    scheduler,
-    clip_skip: clipSkip,
-  });
+  addCoreMetadataNode(
+    graph,
+    {
+      generation_mode: 'txt2img',
+      cfg_scale,
+      width: !isUsingScaledDimensions
+        ? width
+        : scaledBoundingBoxDimensions.width,
+      height: !isUsingScaledDimensions
+        ? height
+        : scaledBoundingBoxDimensions.height,
+      positive_prompt: positivePrompt,
+      negative_prompt: negativePrompt,
+      model,
+      seed,
+      steps,
+      rand_device: use_cpu ? 'cpu' : 'cuda',
+      scheduler,
+      clip_skip: clipSkip,
+    },
+    CANVAS_OUTPUT
+  );

   // Add Seamless To Graph
   if (seamlessXAxis || seamlessYAxis) {
@@ -336,7 +342,7 @@ export const buildCanvasTextToImageGraph = (
     addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
   }

-  addSaveImageNode(state, graph);
+  addLinearUIOutputNode(state, graph);

   return graph;
 };
@@ -9,7 +9,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
 import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
 import { addLoRAsToGraph } from './addLoRAsToGraph';
 import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
-import { addSaveImageNode } from './addSaveImageNode';
+import { addLinearUIOutputNode } from './addLinearUIOutputNode';
 import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
 import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
 import { addVAEToGraph } from './addVAEToGraph';
@@ -311,22 +311,26 @@ export const buildLinearImageToImageGraph = (
     });
   }

-  addCoreMetadataNode(graph, {
-    generation_mode: 'img2img',
-    cfg_scale,
-    height,
-    width,
-    positive_prompt: positivePrompt,
-    negative_prompt: negativePrompt,
-    model,
-    seed,
-    steps,
-    rand_device: use_cpu ? 'cpu' : 'cuda',
-    scheduler,
-    clip_skip: clipSkip,
-    strength,
-    init_image: initialImage.imageName,
-  });
+  addCoreMetadataNode(
+    graph,
+    {
+      generation_mode: 'img2img',
+      cfg_scale,
+      height,
+      width,
+      positive_prompt: positivePrompt,
+      negative_prompt: negativePrompt,
+      model,
+      seed,
+      steps,
+      rand_device: use_cpu ? 'cpu' : 'cuda',
+      scheduler,
+      clip_skip: clipSkip,
+      strength,
+      init_image: initialImage.imageName,
+    },
+    IMAGE_TO_LATENTS
+  );

   // Add Seamless To Graph
   if (seamlessXAxis || seamlessYAxis) {
@@ -358,7 +362,7 @@ export const buildLinearImageToImageGraph = (
     addWatermarkerToGraph(state, graph);
   }

-  addSaveImageNode(state, graph);
+  addLinearUIOutputNode(state, graph);

   return graph;
 };
@@ -10,7 +10,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
|
||||
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
|
||||
import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
|
||||
import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
|
||||
import { addSaveImageNode } from './addSaveImageNode';
|
||||
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
|
||||
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
|
||||
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
|
||||
import { addVAEToGraph } from './addVAEToGraph';
|
||||
@@ -331,23 +331,27 @@ export const buildLinearSDXLImageToImageGraph = (
|
||||
});
|
||||
}
|
||||
|
||||
addCoreMetadataNode(graph, {
|
||||
generation_mode: 'sdxl_img2img',
|
||||
cfg_scale,
|
||||
height,
|
||||
width,
|
||||
positive_prompt: positivePrompt,
|
||||
negative_prompt: negativePrompt,
|
||||
model,
|
||||
seed,
|
||||
steps,
|
||||
rand_device: use_cpu ? 'cpu' : 'cuda',
|
||||
scheduler,
|
||||
strength,
|
||||
init_image: initialImage.imageName,
|
||||
positive_style_prompt: positiveStylePrompt,
|
||||
negative_style_prompt: negativeStylePrompt,
|
||||
});
|
||||
addCoreMetadataNode(
|
||||
graph,
|
||||
{
|
||||
generation_mode: 'sdxl_img2img',
|
||||
cfg_scale,
|
||||
height,
|
||||
width,
|
||||
positive_prompt: positivePrompt,
|
||||
negative_prompt: negativePrompt,
|
||||
model,
|
||||
seed,
|
||||
steps,
|
||||
rand_device: use_cpu ? 'cpu' : 'cuda',
|
||||
scheduler,
|
||||
strength,
|
||||
init_image: initialImage.imageName,
|
||||
positive_style_prompt: positiveStylePrompt,
|
||||
negative_style_prompt: negativeStylePrompt,
|
||||
},
|
||||
IMAGE_TO_LATENTS
|
||||
);
|
||||
|
||||
// Add Seamless To Graph
|
||||
if (seamlessXAxis || seamlessYAxis) {
|
||||
@@ -388,7 +392,7 @@ export const buildLinearSDXLImageToImageGraph = (
|
||||
addWatermarkerToGraph(state, graph);
|
||||
}
|
||||
|
||||
addSaveImageNode(state, graph);
|
||||
addLinearUIOutputNode(state, graph);
|
||||
|
||||
return graph;
|
||||
};
|
||||

@@ -6,7 +6,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
 import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
 import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
 import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
-import { addSaveImageNode } from './addSaveImageNode';
+import { addLinearUIOutputNode } from './addLinearUIOutputNode';
 import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
 import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
 import { addVAEToGraph } from './addVAEToGraph';
@@ -225,21 +225,25 @@ export const buildLinearSDXLTextToImageGraph = (
     ],
   };

-  addCoreMetadataNode(graph, {
-    generation_mode: 'sdxl_txt2img',
-    cfg_scale,
-    height,
-    width,
-    positive_prompt: positivePrompt,
-    negative_prompt: negativePrompt,
-    model,
-    seed,
-    steps,
-    rand_device: use_cpu ? 'cpu' : 'cuda',
-    scheduler,
-    positive_style_prompt: positiveStylePrompt,
-    negative_style_prompt: negativeStylePrompt,
-  });
+  addCoreMetadataNode(
+    graph,
+    {
+      generation_mode: 'sdxl_txt2img',
+      cfg_scale,
+      height,
+      width,
+      positive_prompt: positivePrompt,
+      negative_prompt: negativePrompt,
+      model,
+      seed,
+      steps,
+      rand_device: use_cpu ? 'cpu' : 'cuda',
+      scheduler,
+      positive_style_prompt: positiveStylePrompt,
+      negative_style_prompt: negativeStylePrompt,
+    },
+    LATENTS_TO_IMAGE
+  );

   // Add Seamless To Graph
   if (seamlessXAxis || seamlessYAxis) {
@@ -280,7 +284,7 @@ export const buildLinearSDXLTextToImageGraph = (
     addWatermarkerToGraph(state, graph);
   }

-  addSaveImageNode(state, graph);
+  addLinearUIOutputNode(state, graph);

   return graph;
 };

@@ -10,7 +10,7 @@ import { addHrfToGraph } from './addHrfToGraph';
 import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
 import { addLoRAsToGraph } from './addLoRAsToGraph';
 import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
-import { addSaveImageNode } from './addSaveImageNode';
+import { addLinearUIOutputNode } from './addLinearUIOutputNode';
 import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
 import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
 import { addVAEToGraph } from './addVAEToGraph';
@@ -234,20 +234,24 @@ export const buildLinearTextToImageGraph = (
     ],
   };

-  addCoreMetadataNode(graph, {
-    generation_mode: 'txt2img',
-    cfg_scale,
-    height,
-    width,
-    positive_prompt: positivePrompt,
-    negative_prompt: negativePrompt,
-    model,
-    seed,
-    steps,
-    rand_device: use_cpu ? 'cpu' : 'cuda',
-    scheduler,
-    clip_skip: clipSkip,
-  });
+  addCoreMetadataNode(
+    graph,
+    {
+      generation_mode: 'txt2img',
+      cfg_scale,
+      height,
+      width,
+      positive_prompt: positivePrompt,
+      negative_prompt: negativePrompt,
+      model,
+      seed,
+      steps,
+      rand_device: use_cpu ? 'cpu' : 'cuda',
+      scheduler,
+      clip_skip: clipSkip,
+    },
+    LATENTS_TO_IMAGE
+  );

   // Add Seamless To Graph
   if (seamlessXAxis || seamlessYAxis) {
@@ -286,7 +290,7 @@ export const buildLinearTextToImageGraph = (
     addWatermarkerToGraph(state, graph);
   }

-  addSaveImageNode(state, graph);
+  addLinearUIOutputNode(state, graph);

   return graph;
 };

@@ -9,7 +9,7 @@ export const LATENTS_TO_IMAGE_HRF_LR = 'latents_to_image_hrf_lr';
 export const IMAGE_TO_LATENTS_HRF = 'image_to_latents_hrf';
 export const RESIZE_HRF = 'resize_hrf';
 export const ESRGAN_HRF = 'esrgan_hrf';
-export const SAVE_IMAGE = 'save_image';
+export const LINEAR_UI_OUTPUT = 'linear_ui_output';
 export const NSFW_CHECKER = 'nsfw_checker';
 export const WATERMARKER = 'invisible_watermark';
 export const NOISE = 'noise';
@@ -67,7 +67,7 @@ export const BATCH_PROMPT = 'batch_prompt';
 export const BATCH_STYLE_PROMPT = 'batch_style_prompt';
 export const METADATA_COLLECT = 'metadata_collect';
 export const MERGE_METADATA = 'merge_metadata';
-export const REALESRGAN = 'esrgan';
+export const ESRGAN = 'esrgan';
 export const DIVIDE = 'divide';
 export const SCALE = 'scale_image';
 export const SDXL_MODEL_LOADER = 'sdxl_model_loader';
@@ -82,6 +82,32 @@ export const SDXL_REFINER_INPAINT_CREATE_MASK = 'refiner_inpaint_create_mask';
 export const SEAMLESS = 'seamless';
 export const SDXL_REFINER_SEAMLESS = 'refiner_seamless';

+// these image-outputting nodes are from the linear UI and we should not handle the gallery logic on them
+// instead, we wait for LINEAR_UI_OUTPUT node, and handle it like any other image-outputting node
+export const nodeIDDenyList = [
+  CANVAS_OUTPUT,
+  LATENTS_TO_IMAGE,
+  LATENTS_TO_IMAGE_HRF_HR,
+  NSFW_CHECKER,
+  WATERMARKER,
+  ESRGAN,
+  ESRGAN_HRF,
+  RESIZE_HRF,
+  LATENTS_TO_IMAGE_HRF_LR,
+  IMG2IMG_RESIZE,
+  INPAINT_IMAGE,
+  SCALED_INPAINT_IMAGE,
+  INPAINT_IMAGE_RESIZE_UP,
+  INPAINT_IMAGE_RESIZE_DOWN,
+  INPAINT_INFILL,
+  INPAINT_INFILL_RESIZE_DOWN,
+  INPAINT_FINAL_IMAGE,
+  INPAINT_CREATE_MASK,
+  INPAINT_MASK,
+  PASTE_IMAGE,
+  SCALE,
+];
+
 // friendly graph ids
 export const TEXT_TO_IMAGE_GRAPH = 'text_to_image_graph';
 export const IMAGE_TO_IMAGE_GRAPH = 'image_to_image_graph';
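Per the comment above the list, gallery handling is deferred for these internal output nodes until LINEAR_UI_OUTPUT emits its result. A hedged sketch of the kind of guard a socket listener could apply — the handler name and event shape here are hypothetical, not the actual listener code:

```ts
import { nodeIDDenyList } from './constants';

// Hypothetical invocation-complete payload; the real socket event carries more.
type InvocationCompleteEvent = { source_node_id: string; image_name?: string };

const onInvocationComplete = (event: InvocationCompleteEvent) => {
  // Skip linear-UI internals; their images surface later via LINEAR_UI_OUTPUT.
  if (nodeIDDenyList.includes(event.source_node_id)) {
    return;
  }
  // ...gallery handling for every other image-outputting node...
};
```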

@@ -1,11 +1,12 @@
 import { NonNullableGraph } from 'features/nodes/types/types';
 import { CoreMetadataInvocation } from 'services/api/types';
 import { JsonObject } from 'type-fest';
-import { METADATA, SAVE_IMAGE } from './constants';
+import { METADATA } from './constants';

 export const addCoreMetadataNode = (
   graph: NonNullableGraph,
-  metadata: Partial<CoreMetadataInvocation>
+  metadata: Partial<CoreMetadataInvocation>,
+  nodeId: string
 ): void => {
   graph.nodes[METADATA] = {
     id: METADATA,
@@ -19,7 +20,7 @@ export const addCoreMetadataNode = (
       field: 'metadata',
     },
     destination: {
-      node_id: SAVE_IMAGE,
+      node_id: nodeId,
       field: 'metadata',
     },
   });
@@ -64,3 +65,21 @@ export const getHasMetadata = (graph: NonNullableGraph): boolean => {

   return Boolean(metadataNode);
 };
+
+export const setMetadataReceivingNode = (
+  graph: NonNullableGraph,
+  nodeId: string
+) => {
+  graph.edges = graph.edges.filter((edge) => edge.source.node_id !== METADATA);
+
+  graph.edges.push({
+    source: {
+      node_id: METADATA,
+      field: 'metadata',
+    },
+    destination: {
+      node_id: nodeId,
+      field: 'metadata',
+    },
+  });
+};
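The new `setMetadataReceivingNode` helper drops any existing edge out of the METADATA node before adding one to the given target, so a builder can retarget metadata after a later step (HRF, for example) swaps the output node. A small usage sketch, assuming the shapes above; the node ids are illustrative:

```ts
import { NonNullableGraph } from 'features/nodes/types/types';
import { addCoreMetadataNode, setMetadataReceivingNode } from './metadata';

const graph: NonNullableGraph = { id: 'metadata_sketch', nodes: {}, edges: [] };

// Wire metadata to the initial output node...
addCoreMetadataNode(graph, { generation_mode: 'txt2img' }, 'latents_to_image');

// ...then re-point the single METADATA edge when another node becomes the output.
setMetadataReceivingNode(graph, 'latents_to_image_hrf_hr');
```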

@@ -19,7 +19,7 @@ const RESERVED_INPUT_FIELD_NAMES = ['id', 'type', 'use_cache'];
 const RESERVED_OUTPUT_FIELD_NAMES = ['type'];
 const RESERVED_FIELD_TYPES = ['IsIntermediate'];

-const invocationDenylist: AnyInvocationType[] = ['graph'];
+const invocationDenylist: AnyInvocationType[] = ['graph', 'linear_ui_output'];

 const isReservedInputField = (nodeType: string, fieldName: string) => {
   if (RESERVED_INPUT_FIELD_NAMES.includes(fieldName)) {
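Adding `'linear_ui_output'` here keeps the new node out of the workflow editor's node menu. A one-line sketch of how such a denylist is typically consumed when invocation templates are parsed — the filter itself is an assumption, not a quote of the schema parser:

```ts
// AnyInvocationType is imported from the app's types in the real file;
// a string stand-in is enough for this sketch.
type AnyInvocationType = string;

const invocationDenylist: AnyInvocationType[] = ['graph', 'linear_ui_output'];

// Hypothetical filter over the invocation names found in the OpenAPI schema.
const allowedInvocations = (types: AnyInvocationType[]): AnyInvocationType[] =>
  types.filter((t) => !invocationDenylist.includes(t));
```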

invokeai/frontend/web/src/services/api/schema.d.ts (vendored, 1166 lines changed)
File diff suppressed because one or more lines are too long

@@ -48,9 +48,11 @@ export type OffsetPaginatedResults_ImageDTO_ =
   s['OffsetPaginatedResults_ImageDTO_'];

 // Models
-export type ModelType = s['ModelType'];
+export type ModelType =
+  s['invokeai__backend__model_management__models__base__ModelType'];
 export type SubModelType = s['SubModelType'];
-export type BaseModelType = s['BaseModelType'];
+export type BaseModelType =
+  s['invokeai__backend__model_management__models__base__BaseModelType'];
 export type MainModelField = s['MainModelField'];
 export type OnnxModelField = s['OnnxModelField'];
 export type VAEModelField = s['VAEModelField'];
@@ -58,7 +60,7 @@ export type LoRAModelField = s['LoRAModelField'];
 export type ControlNetModelField = s['ControlNetModelField'];
 export type IPAdapterModelField = s['IPAdapterModelField'];
 export type T2IAdapterModelField = s['T2IAdapterModelField'];
-export type ModelsList = s['ModelsList'];
+export type ModelsList = s['invokeai__app__api__routers__models__ModelsList'];
 export type ControlField = s['ControlField'];
 export type IPAdapterField = s['IPAdapterField'];

@@ -140,7 +142,7 @@ export type DivideInvocation = s['DivideInvocation'];
 export type ImageNSFWBlurInvocation = s['ImageNSFWBlurInvocation'];
 export type ImageWatermarkInvocation = s['ImageWatermarkInvocation'];
 export type SeamlessModeInvocation = s['SeamlessModeInvocation'];
-export type SaveImageInvocation = s['SaveImageInvocation'];
+export type LinearUIOutputInvocation = s['LinearUIOutputInvocation'];
 export type MetadataInvocation = s['MetadataInvocation'];
 export type CoreMetadataInvocation = s['CoreMetadataInvocation'];
 export type MetadataItemInvocation = s['MetadataItemInvocation'];

@@ -1 +1 @@
-__version__ = "3.4.0dev1"
+__version__ = "3.4.0"

@@ -139,17 +139,19 @@ nav:
   - Features:
       - Overview: 'features/index.md'
-      - New to InvokeAI?: 'help/gettingStartedWithAI.md'
       - Concepts: 'features/CONCEPTS.md'
       - Configuration: 'features/CONFIGURATION.md'
       - Control Adapters: 'features/CONTROLNET.md'
       - Image-to-Image: 'features/IMG2IMG.md'
      - Controlling Logging: 'features/LOGGING.md'
+      - LoRAs & LCM-LoRAs: 'features/LORAS.md'
       - Model Merging: 'features/MODEL_MERGING.md'
-      - Using Nodes : 'nodes/overview.md'
+      - Nodes & Workflows: 'nodes/overview.md'
       - NSFW Checker: 'features/WATERMARK+NSFW.md'
       - Postprocessing: 'features/POSTPROCESS.md'
       - Prompting Features: 'features/PROMPTS.md'
-      - Textual Inversion Training: 'features/TRAINING.md'
+      - Textual Inversions:
+          - Textual Inversions: 'features/TEXTUAL_INVERSIONS.md'
+          - Textual Inversion Training: 'features/TRAINING.md'
       - Unified Canvas: 'features/UNIFIED_CANVAS.md'
       - InvokeAI Web Server: 'features/WEB.md'
       - WebUI Hotkeys: "features/WEBUIHOTKEYS.md"
@@ -180,6 +182,7 @@ nav:
       - Troubleshooting: 'help/deprecated/TROUBLESHOOT.md'
   - Help:
+      - Getting Started: 'help/gettingStartedWithAI.md'
       - FAQs: 'help/FAQ.md'
       - Diffusion Overview: 'help/diffusion.md'
       - Sampler Convergence: 'help/SAMPLER_CONVERGENCE.md'
   - Other:

@@ -46,7 +46,7 @@ dependencies = [
     "easing-functions",
     "einops",
     "facexlib",
-    "fastapi~=0.103.2",
+    "fastapi~=0.104.1",
     "fastapi-events~=0.9.1",
     "huggingface-hub~=0.16.4",
     "imohash",
@@ -59,7 +59,7 @@ dependencies = [
     "onnx",
     "onnxruntime",
     "opencv-python",
-    "pydantic~=2.4.2",
+    "pydantic~=2.5.0",
     "pydantic-settings~=2.0.3",
     "picklescan",
     "pillow",
@@ -80,10 +80,10 @@ dependencies = [
     "send2trash",
     "test-tube~=0.7.5",
     "torch==2.1.0",
-    "torchvision~=0.16",
+    "torchvision==0.16.0",
     "torchmetrics~=0.11.0",
     "torchsde~=0.2.5",
-    "transformers==4.35.0",
+    "transformers~=4.35.0",
     "uvicorn[standard]~=0.21.1",
     "windows-curses; sys_platform=='win32'",
 ]