Compare commits


4 Commits

Author           SHA1        Message                                             Date
psychedelicious  b3384bbc09  fix(installer): use extra index url when updating   2023-11-15 20:34:42 +11:00
                             If we don't include this, on updating, we will always get the CPU torch/torchvision/xformers.
psychedelicious  1ce8615745  fix: pin torch and torchvision exactly              2023-11-15 17:16:54 +11:00
psychedelicious  e7019b6e98  Update invokeai_version.py                          2023-11-15 15:19:57 +11:00
Millun Atluri    e96a404530  Updated build to test cu121 updates                 2023-11-14 14:01:05 +11:00
87 changed files with 963 additions and 2558 deletions

View File

@@ -6,7 +6,7 @@ on:
branches: main
jobs:
ruff:
black:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3

View File

@@ -1,6 +1,6 @@
# Nodes
# Invocations
Features in InvokeAI are added in the form of modular nodes systems called
Features in InvokeAI are added in the form of modular node-like systems called
**Invocations**.
An Invocation is simply a single operation that takes in some inputs and gives
@@ -9,34 +9,13 @@ complex functionality.
## Invocations Directory
InvokeAI Nodes can be found in the `invokeai/app/invocations` directory. These can be used as examples to create your own nodes.
InvokeAI Invocations can be found in the `invokeai/app/invocations` directory.
New nodes should be added to a subfolder of the `nodes` directory found at the root level of the InvokeAI installation location. Nodes added to this folder will be available on application startup.
Example `nodes` subfolder structure:
```
__init__.py # Invoke-managed custom node loader
cool_node
__init__.py # see example below
cool_node.py
my_node_pack
__init__.py # see example below
tasty_node.py
bodacious_node.py
utils.py
extra_nodes
fancy_node.py
```
Each node folder must have an `__init__.py` file that imports its nodes. Only nodes imported in the `__init__.py` file are loaded.
See the README in the nodes folder for more examples:
```py
from .cool_node import CoolInvocation
```
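For packs with several node files, like the `my_node_pack` folder above, the `__init__.py` imports every node the pack provides. A minimal sketch (the invocation class names here are hypothetical, chosen only to match the example layout):
```py
# my_node_pack/__init__.py -- hypothetical class names matching the layout above
from .tasty_node import TastyInvocation
from .bodacious_node import BodaciousInvocation

# utils.py holds shared helpers; it is not imported here because only
# invocations imported in __init__.py are loaded.
```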
You can add your new functionality to one of the existing Invocations in this
directory or create a new file in this directory as per your needs.
**Note:** _All Invocations must be inside this directory for InvokeAI to
recognize them as valid Invocations._
## Creating A New Invocation

View File

@@ -1,3 +1,12 @@
---
title: Textual Inversion Embeddings and LoRAs
---
# :material-library-shelves: Textual Inversions and LoRAs
With advances in research, many new capabilities are available for customizing a model's knowledge and understanding of novel concepts not originally contained in the base model.
## Using Textual Inversion Files
Textual inversion (TI) files are small models that customize the output of
@@ -52,4 +61,29 @@ files it finds there for compatible models. At startup you will see a message si
>> Current embedding manager terms: <HOI4-Leader>, <princess-knight>
```
To use these when generating, simply type the `<` key in your prompt to open the Textual Inversion WebUI and
select the embedding you'd like to use. This UI has type-ahead support, so you can easily find supported embeddings.
## Using LoRAs
LoRA files are models that customize the output of Stable Diffusion
image generation. Larger than embeddings, but much smaller than full
models, they augment SD with improved understanding of subjects and
artistic styles.
Unlike TI files, LoRAs do not introduce novel vocabulary into the
model's known tokens. Instead, LoRAs augment the model's weights that
are applied to generate imagery. LoRAs may be supplied with a
"trigger" word that they have been explicitly trained on, or may
simply apply their effect without being triggered.
LoRAs are typically stored in .safetensors files, which are the most
secure way to store and transmit these types of weights. You may
install any number of `.safetensors` LoRA files simply by copying them
into the `autoimport/lora` directory of the corresponding InvokeAI models
directory (usually `invokeai` in your home directory).
To use these when generating, open the LoRA menu item in the options
panel, select the LoRAs you want to apply and ensure that they have
the appropriate weight recommended by the model provider. Typically,
most LoRAs perform best at a weight of .75-1.

View File

@@ -1,53 +0,0 @@
---
title: LoRAs & LCM-LoRAs
---
# :material-library-shelves: LoRAs & LCM-LoRAs
With advances in research, many new capabilities are available for customizing a model's knowledge and understanding of novel concepts not originally contained in the base model.
## LoRAs
Low-Rank Adaptation (LoRA) files are models that customize the output of Stable Diffusion
image generation. Larger than embeddings, but much smaller than full
models, they augment SD with improved understanding of subjects and
artistic styles.
Unlike TI files, LoRAs do not introduce novel vocabulary into the
model's known tokens. Instead, LoRAs augment the model's weights that
are applied to generate imagery. LoRAs may be supplied with a
"trigger" word that they have been explicitly trained on, or may
simply apply their effect without being triggered.
LoRAs are typically stored in .safetensors files, which are the most
secure way to store and transmit these types of weights. You may
install any number of `.safetensors` LoRA files simply by copying them
into the `autoimport/lora` directory of the corresponding InvokeAI models
directory (usually `invokeai` in your home directory).
To use these when generating, open the LoRA menu item in the options
panel, select the LoRAs you want to apply and ensure that they have
the appropriate weight recommended by the model provider. Typically,
most LoRAs perform best at a weight of .75-1.
## LCM-LoRAs
Latent Consistency Models (LCMs) allow images to be generated with Stable Diffusion in a greatly reduced number of steps. They are created by distilling base models into models that need only a few steps to generate images. However, LCMs require that any fine-tune of a base model be distilled before it can be used as an LCM.
LCM-LoRAs are models that provide the benefit of LCMs but can be used as LoRAs and applied to any fine-tune of a base model. LCM-LoRAs are created by training a small number of adapters rather than distilling the entire fine-tuned base model. The resulting LoRA can be used the same way as a standard LoRA, but with a greatly reduced step count. This enables SDXL images to be generated up to 10x faster than without the use of LCM-LoRAs.
**Using LCM-LoRAs**
LCM-LoRAs are natively supported in InvokeAI throughout the application. To get started, install any diffusers-format LCM-LoRA using the model manager and select it in the LoRA field.
There are a number of parameter differences between LCM-LoRA and standard generation:
- When using LCM-LoRAs, the LoRA strength should be lower than if using a standard LoRA, with 0.35 recommended as a starting point.
- The LCM scheduler should be used for generation
- CFG-Scale should be reduced to ~1
- Steps should be reduced in the range of 4-8
Standard LoRAs can also be used alongside LCM-LoRAs, but will also require a lower strength, with 0.45 being recommended as a starting point.
More information can be found here: https://huggingface.co/blog/lcm_lora#fast-inference-with-sdxl-lcm-loras
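The same workflow can be sketched outside InvokeAI with the `diffusers` library, following the blog post linked above; the model and LoRA identifiers below are the ones used there and are illustrative, not InvokeAI APIs:
```py
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Load an SDXL pipeline and swap in the LCM scheduler.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Apply the LCM-LoRA like any other LoRA.
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

# Few steps and a CFG scale of ~1, per the guidance above.
image = pipe(
    "a photo of a cat in a space suit",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("lcm_lora_example.png")
```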

View File

@@ -20,7 +20,7 @@ a single convenient digital artist-optimized user interface.
### * [Prompt Engineering](PROMPTS.md)
Get the images you want with the InvokeAI prompt engineering language.
### * The [LoRA, LyCORIS, LCM-LoRA Models](CONCEPTS.md)
### * The [LoRA, LyCORIS and Textual Inversion Models](CONCEPTS.md)
Add custom subjects and styles using a variety of fine-tuned models.
### * [ControlNet](CONTROLNET.md)
@@ -40,7 +40,7 @@ guide also covers optimizing models to load quickly.
Teach an old model new tricks. Merge 2-3 models together to create a
new model that combines characteristics of the originals.
### * [Textual Inversion](TEXTUAL_INVERSIONS.md)
### * [Textual Inversion](TRAINING.md)
Personalize models by adding your own style or subjects.
## Other Features

View File

@@ -1,43 +0,0 @@
# FAQs
**Where do I get started? How can I install Invoke?**
- You can download the latest installers [here](https://github.com/invoke-ai/InvokeAI/releases) - Note that any releases marked as *pre-release* are in a beta state. You may experience some issues, but we appreciate your help testing those! For stable/reliable installations, please install the **[Latest Release](https://github.com/invoke-ai/InvokeAI/releases/latest)**
**How can I download models? Can I use models I already have downloaded?**
- Models can be downloaded through the model manager, or through option [4] in the invoke.bat/invoke.sh launcher script. To download a model through the Model Manager, use the HuggingFace Repo ID by pressing the “Copy” button next to the repository name. Alternatively, to download a model from CivitAi, use the download link in the Model Manager.
- Models that are already downloaded can be used by creating a symlink to the model location in the `autoimport` folder or by using the Model Manager's “Scan for Models” function.
**My images are taking a long time to generate. How can I speed up generation?**
- A common solution is to reduce the size of your RAM & VRAM cache to 0.25. This ensures your system has enough memory to generate images.
- Additionally, check the [hardware requirements](https://invoke-ai.github.io/InvokeAI/#hardware-requirements) to ensure that your system is capable of generating images.
- Lastly, double check your generations are happening on your GPU (if you have one). InvokeAI will log what is being used for generation upon startup.
**I've installed Python on Windows but the installer says it can't find it?**
- Ensure that you checked **'Add python.exe to PATH'** when installing Python. This option can be found at the bottom of the Python installer window. If you already have Python installed, it can be enabled with the modify/repair feature of the installer.
**I've installed everything successfully but I still get an error about Triton when starting Invoke?**
- This can be safely ignored. InvokeAI doesn't use Triton, but if you are on Linux and wish to dismiss the error, you can install Triton.
**I updated to 3.4.0 and now xFormers can't load C++/CUDA?**
- An issue occurred with your PyTorch update. Follow these steps to fix it:
1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
2. Run:`pip install ".[xformers]" --upgrade --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu121`
- If you run into an error with `typing_extensions`, re-open the developer console and run: `pip install -U typing-extensions`
**It says my pip is out of date - is that why my install isn't working?**
- An out-of-date pip won't cause an installation to fail. The cause of the error can likely be found above the message that says pip is out of date.
- If you saw that warning but the install went well, don't worry about it (but you can update pip afterwards if you'd like).
**How can I generate the exact same image that I found on the internet?**
- Most example images with prompts that you'll find on the internet have been generated using different software, so you can't expect to get identical results. In order to reproduce an image, you need to replicate the exact settings and processing steps, including (but not limited to) the model, the positive and negative prompts, the seed, the sampler, the exact image size, any upscaling steps, etc.
**Where can I get more help?**
- Create an issue on [GitHub](https://github.com/invoke-ai/InvokeAI/issues) or post in the [#help channel](https://discord.com/channels/1020123559063990373/1149510134058471514) of the InvokeAI Discord

View File

@@ -101,13 +101,16 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
<div align="center"><img src="assets/invoke-web-server-1.png" width=640></div>
!!! Note
This project is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates, as they will help improve response time.
## :octicons-link-24: Quick Links
<div class="button-container">
<a href="installation/INSTALLATION"> <button class="button">Installation</button> </a>
<a href="features/"> <button class="button">Features</button> </a>
<a href="help/gettingStartedWithAI/"> <button class="button">Getting Started</button> </a>
<a href="help/FAQ/"> <button class="button">FAQ</button> </a>
<a href="contributing/CONTRIBUTING/"> <button class="button">Contributing</button> </a>
<a href="https://github.com/invoke-ai/InvokeAI/"> <button class="button">Code and Downloads</button> </a>
<a href="https://github.com/invoke-ai/InvokeAI/issues"> <button class="button">Bug Reports </button> </a>

View File

@@ -32,7 +32,6 @@ To use a community workflow, download the `.json` node graph file and load i
+ [Size Stepper Nodes](#size-stepper-nodes)
+ [Text font to Image](#text-font-to-image)
+ [Thresholding](#thresholding)
+ [Unsharp Mask](#unsharp-mask)
+ [XY Image to Grid and Images to Grids nodes](#xy-image-to-grid-and-images-to-grids-nodes)
- [Example Node Template](#example-node-template)
- [Disclaimer](#disclaimer)
@@ -317,13 +316,6 @@ Highlights/Midtones/Shadows (with LUT blur enabled):
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/0a440e43-697f-4d17-82ee-f287467df0a5" width="300" />
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/0701fd0f-2ca7-4fe2-8613-2b52547bafce" width="300" />
--------------------------------
### Unsharp Mask
**Description:** Applies an unsharp mask filter to an image, preserving its alpha channel in the process.
**Node Link:** https://github.com/JPPhoto/unsharp-mask-node
--------------------------------
### XY Image to Grid and Images to Grids nodes

View File

@@ -96,7 +96,7 @@ class ControlOutput(BaseInvocationOutput):
control: ControlField = OutputField(description=FieldDescriptions.control)
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.0")
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.0.0")
class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
@@ -173,7 +173,7 @@ class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithWorkflow):
title="Canny Processor",
tags=["controlnet", "canny"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class CannyImageProcessorInvocation(ImageProcessorInvocation):
"""Canny edge detection for ControlNet"""
@@ -196,7 +196,7 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
title="HED (softedge) Processor",
tags=["controlnet", "hed", "softedge"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class HedImageProcessorInvocation(ImageProcessorInvocation):
"""Applies HED edge detection to image"""
@@ -225,7 +225,7 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Processor",
tags=["controlnet", "lineart"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class LineartImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art processing to image"""
@@ -247,7 +247,7 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Anime Processor",
tags=["controlnet", "lineart", "anime"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art anime processing to image"""
@@ -270,7 +270,7 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
title="Openpose Processor",
tags=["controlnet", "openpose", "pose"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Openpose processing to image"""
@@ -295,7 +295,7 @@ class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
title="Midas Depth Processor",
tags=["controlnet", "midas"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Midas depth processing to image"""
@@ -322,7 +322,7 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Normal BAE Processor",
tags=["controlnet"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies NormalBae processing to image"""
@@ -339,7 +339,7 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.1.0"
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.0.0"
)
class MlsdImageProcessorInvocation(ImageProcessorInvocation):
"""Applies MLSD processing to image"""
@@ -362,7 +362,7 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.1.0"
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.0.0"
)
class PidiImageProcessorInvocation(ImageProcessorInvocation):
"""Applies PIDI processing to image"""
@@ -389,7 +389,7 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
title="Content Shuffle Processor",
tags=["controlnet", "contentshuffle"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
"""Applies content shuffle processing to image"""
@@ -419,7 +419,7 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
title="Zoe (Depth) Processor",
tags=["controlnet", "zoe", "depth"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Zoe depth processing to image"""
@@ -435,7 +435,7 @@ class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Mediapipe Face Processor",
tags=["controlnet", "mediapipe", "face"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
"""Applies mediapipe face processing to image"""
@@ -458,7 +458,7 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
title="Leres (Depth) Processor",
tags=["controlnet", "leres", "depth"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class LeresImageProcessorInvocation(ImageProcessorInvocation):
"""Applies leres processing to image"""
@@ -487,7 +487,7 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
title="Tile Resample Processor",
tags=["controlnet", "tile"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class TileResamplerProcessorInvocation(ImageProcessorInvocation):
"""Tile resampler processor"""
@@ -527,7 +527,7 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
title="Segment Anything Processor",
tags=["controlnet", "segmentanything"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
"""Applies segment anything processing to image"""
@@ -569,7 +569,7 @@ class SamDetectorReproducibleColors(SamDetector):
title="Color Map Processor",
tags=["controlnet"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a color map from the provided image"""

View File

@@ -11,7 +11,7 @@ from invokeai.app.services.image_records.image_records_common import ImageCatego
from .baseinvocation import BaseInvocation, InputField, InvocationContext, WithMetadata, WithWorkflow, invocation
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.1.0")
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.0.0")
class CvInpaintInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Simple inpaint using opencv."""

View File

@@ -438,7 +438,7 @@ def get_faces_list(
return all_faces
@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.1.0")
@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.0.2")
class FaceOffInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Bound, extract, and mask a face from an image using MediaPipe detection"""
@@ -532,7 +532,7 @@ class FaceOffInvocation(BaseInvocation, WithWorkflow, WithMetadata):
return output
@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.1.0")
@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.0.2")
class FaceMaskInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Face mask creation using mediapipe face detection"""
@@ -650,7 +650,7 @@ class FaceMaskInvocation(BaseInvocation, WithWorkflow, WithMetadata):
@invocation(
"face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.1.0"
"face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.0.2"
)
class FaceIdentifierInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Outputs an image with detected face IDs printed on each face. For use with other FaceTools."""

View File

@@ -8,7 +8,7 @@ import numpy
from PIL import Image, ImageChops, ImageFilter, ImageOps
from invokeai.app.invocations.primitives import BoardField, ColorField, ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ImageRecordChanges, ResourceOrigin
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.safety_checker import SafetyChecker
@@ -36,7 +36,7 @@ class ShowImageInvocation(BaseInvocation):
)
@invocation("blank_image", title="Blank Image", tags=["image"], category="image", version="1.1.0")
@invocation("blank_image", title="Blank Image", tags=["image"], category="image", version="1.0.0")
class BlankImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Creates a blank image and forwards it to the pipeline"""
@@ -66,7 +66,7 @@ class BlankImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
)
@invocation("img_crop", title="Crop Image", tags=["image", "crop"], category="image", version="1.1.0")
@invocation("img_crop", title="Crop Image", tags=["image", "crop"], category="image", version="1.0.0")
class ImageCropInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Crops an image to a specified box. The box can be outside of the image."""
@@ -100,7 +100,7 @@ class ImageCropInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.1.0")
@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.0.1")
class ImagePasteInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Pastes an image into another image."""
@@ -154,7 +154,7 @@ class ImagePasteInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("tomask", title="Mask from Alpha", tags=["image", "mask"], category="image", version="1.1.0")
@invocation("tomask", title="Mask from Alpha", tags=["image", "mask"], category="image", version="1.0.0")
class MaskFromAlphaInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Extracts the alpha channel of an image as a mask."""
@@ -186,7 +186,7 @@ class MaskFromAlphaInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_mul", title="Multiply Images", tags=["image", "multiply"], category="image", version="1.1.0")
@invocation("img_mul", title="Multiply Images", tags=["image", "multiply"], category="image", version="1.0.0")
class ImageMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Multiplies two images together using `PIL.ImageChops.multiply()`."""
@@ -220,7 +220,7 @@ class ImageMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata):
IMAGE_CHANNELS = Literal["A", "R", "G", "B"]
@invocation("img_chan", title="Extract Image Channel", tags=["image", "channel"], category="image", version="1.1.0")
@invocation("img_chan", title="Extract Image Channel", tags=["image", "channel"], category="image", version="1.0.0")
class ImageChannelInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Gets a channel from an image."""
@@ -253,7 +253,7 @@ class ImageChannelInvocation(BaseInvocation, WithWorkflow, WithMetadata):
IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]
@invocation("img_conv", title="Convert Image Mode", tags=["image", "convert"], category="image", version="1.1.0")
@invocation("img_conv", title="Convert Image Mode", tags=["image", "convert"], category="image", version="1.0.0")
class ImageConvertInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Converts an image to a different mode."""
@@ -283,7 +283,7 @@ class ImageConvertInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_blur", title="Blur Image", tags=["image", "blur"], category="image", version="1.1.0")
@invocation("img_blur", title="Blur Image", tags=["image", "blur"], category="image", version="1.0.0")
class ImageBlurInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Blurs an image"""
@@ -338,7 +338,7 @@ PIL_RESAMPLING_MAP = {
}
@invocation("img_resize", title="Resize Image", tags=["image", "resize"], category="image", version="1.1.0")
@invocation("img_resize", title="Resize Image", tags=["image", "resize"], category="image", version="1.0.0")
class ImageResizeInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Resizes an image to specific dimensions"""
@@ -375,7 +375,7 @@ class ImageResizeInvocation(BaseInvocation, WithMetadata, WithWorkflow):
)
@invocation("img_scale", title="Scale Image", tags=["image", "scale"], category="image", version="1.1.0")
@invocation("img_scale", title="Scale Image", tags=["image", "scale"], category="image", version="1.0.0")
class ImageScaleInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Scales an image by a factor"""
@@ -417,7 +417,7 @@ class ImageScaleInvocation(BaseInvocation, WithMetadata, WithWorkflow):
)
@invocation("img_lerp", title="Lerp Image", tags=["image", "lerp"], category="image", version="1.1.0")
@invocation("img_lerp", title="Lerp Image", tags=["image", "lerp"], category="image", version="1.0.0")
class ImageLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Linear interpolation of all pixels of an image"""
@@ -451,7 +451,7 @@ class ImageLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_ilerp", title="Inverse Lerp Image", tags=["image", "ilerp"], category="image", version="1.1.0")
@invocation("img_ilerp", title="Inverse Lerp Image", tags=["image", "ilerp"], category="image", version="1.0.0")
class ImageInverseLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Inverse linear interpolation of all pixels of an image"""
@@ -485,7 +485,7 @@ class ImageInverseLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_nsfw", title="Blur NSFW Image", tags=["image", "nsfw"], category="image", version="1.1.0")
@invocation("img_nsfw", title="Blur NSFW Image", tags=["image", "nsfw"], category="image", version="1.0.0")
class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Add blur to NSFW-flagged images"""
@@ -532,7 +532,7 @@ class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithWorkflow):
title="Add Invisible Watermark",
tags=["image", "watermark"],
category="image",
version="1.1.0",
version="1.0.0",
)
class ImageWatermarkInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Add an invisible watermark to an image"""
@@ -561,7 +561,7 @@ class ImageWatermarkInvocation(BaseInvocation, WithMetadata, WithWorkflow):
)
@invocation("mask_edge", title="Mask Edge", tags=["image", "mask", "inpaint"], category="image", version="1.1.0")
@invocation("mask_edge", title="Mask Edge", tags=["image", "mask", "inpaint"], category="image", version="1.0.0")
class MaskEdgeInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Applies an edge mask to an image"""
@@ -612,7 +612,7 @@ class MaskEdgeInvocation(BaseInvocation, WithWorkflow, WithMetadata):
title="Combine Masks",
tags=["image", "mask", "multiply"],
category="image",
version="1.1.0",
version="1.0.0",
)
class MaskCombineInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`."""
@@ -644,7 +644,7 @@ class MaskCombineInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("color_correct", title="Color Correct", tags=["image", "color"], category="image", version="1.1.0")
@invocation("color_correct", title="Color Correct", tags=["image", "color"], category="image", version="1.0.0")
class ColorCorrectInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""
Shifts the colors of a target image to match the reference image, optionally
@@ -755,7 +755,7 @@ class ColorCorrectInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_hue_adjust", title="Adjust Image Hue", tags=["image", "hue"], category="image", version="1.1.0")
@invocation("img_hue_adjust", title="Adjust Image Hue", tags=["image", "hue"], category="image", version="1.0.0")
class ImageHueAdjustmentInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Adjusts the Hue of an image."""
@@ -858,7 +858,7 @@ CHANNEL_FORMATS = {
"value",
],
category="image",
version="1.1.0",
version="1.0.0",
)
class ImageChannelOffsetInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Add or subtract a value from a specific color channel of an image."""
@@ -929,7 +929,7 @@ class ImageChannelOffsetInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"value",
],
category="image",
version="1.1.0",
version="1.0.0",
)
class ImageChannelMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Scale a specific color channel of an image."""
@@ -988,7 +988,7 @@ class ImageChannelMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata)
title="Save Image",
tags=["primitives", "image"],
category="primitives",
version="1.1.0",
version="1.0.1",
use_cache=False,
)
class SaveImageInvocation(BaseInvocation, WithWorkflow, WithMetadata):
@@ -1017,35 +1017,3 @@ class SaveImageInvocation(BaseInvocation, WithWorkflow, WithMetadata):
width=image_dto.width,
height=image_dto.height,
)
@invocation(
"linear_ui_output",
title="Linear UI Image Output",
tags=["primitives", "image"],
category="primitives",
version="1.0.1",
use_cache=False,
)
class LinearUIOutputInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Handles Linear UI Image Outputting tasks."""
image: ImageField = InputField(description=FieldDescriptions.image)
board: Optional[BoardField] = InputField(default=None, description=FieldDescriptions.board, input=Input.Direct)
def invoke(self, context: InvocationContext) -> ImageOutput:
image_dto = context.services.images.get_dto(self.image.image_name)
if self.board:
context.services.board_images.add_image_to_board(self.board.board_id, self.image.image_name)
if image_dto.is_intermediate != self.is_intermediate:
context.services.images.update(
self.image.image_name, changes=ImageRecordChanges(is_intermediate=self.is_intermediate)
)
return ImageOutput(
image=ImageField(image_name=self.image.image_name),
width=image_dto.width,
height=image_dto.height,
)

View File

@@ -118,7 +118,7 @@ def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int]
return si
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class InfillColorInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image with a solid color"""
@@ -154,7 +154,7 @@ class InfillColorInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class InfillTileInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image with tiles of the image"""
@@ -192,7 +192,7 @@ class InfillTileInvocation(BaseInvocation, WithWorkflow, WithMetadata):
@invocation(
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0"
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0"
)
class InfillPatchMatchInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image using the PatchMatch algorithm"""
@@ -245,7 +245,7 @@ class InfillPatchMatchInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class LaMaInfillInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image using the LaMa model"""
@@ -274,7 +274,7 @@ class LaMaInfillInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint")
class CV2InfillInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image using OpenCV Inpainting"""

View File

@@ -790,7 +790,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
title="Latents to Image",
tags=["latents", "image", "vae", "l2i"],
category="latents",
version="1.1.0",
version="1.0.0",
)
class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Generates an image from latents."""

View File

@@ -112,7 +112,7 @@ GENERATION_MODES = Literal[
]
@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="1.0.1")
@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="1.0.0")
class CoreMetadataInvocation(BaseInvocation):
"""Collects core generation metadata into a MetadataField"""
@@ -160,7 +160,7 @@ class CoreMetadataInvocation(BaseInvocation):
)
# High resolution fix metadata.
hrf_enabled: Optional[bool] = InputField(
hrf_enabled: Optional[float] = InputField(
default=None,
description="Whether or not high resolution fix was enabled.",
)

View File

@@ -326,7 +326,7 @@ class ONNXTextToLatentsInvocation(BaseInvocation):
title="ONNX Latents to Image",
tags=["latents", "image", "vae", "onnx"],
category="image",
version="1.1.0",
version="1.0.0",
)
class ONNXLatentsToImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Generates an image from latents."""

View File

@@ -29,7 +29,7 @@ if choose_torch_device() == torch.device("mps"):
from torch import mps
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.2.0")
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.1.0")
class ESRGANInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Upscales an image using RealESRGAN."""

View File

@@ -90,6 +90,14 @@ def get_extras():
pass
return extras
def get_extra_index() -> str:
# parsed_version.local for torch is the platform + version, eg 'cu121' or 'rocm5.6'
local = pkg_resources.get_distribution("torch").parsed_version.local
if local and 'cu' in local:
return "--extra-index-url https://download.pytorch.org/whl/cu121"
if local and 'rocm' in local:
return "--extra-index-url https://download.pytorch.org/whl/rocm5.6"
return ""
def main():
versions = get_versions()
@@ -122,14 +130,15 @@ def main():
branch = Prompt.ask("Enter an InvokeAI branch name")
extras = get_extras()
extra_index_url = get_extra_index()
print(f":crossed_fingers: Upgrading to [yellow]{tag or release or branch}[/yellow]")
if release:
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_SRC}/{release}.zip" --use-pep517 --upgrade'
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_SRC}/{release}.zip" --use-pep517 --upgrade {extra_index_url}'
elif tag:
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_TAG}/{tag}.zip" --use-pep517 --upgrade'
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_TAG}/{tag}.zip" --use-pep517 --upgrade {extra_index_url}'
else:
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_BRANCH}/{branch}.zip" --use-pep517 --upgrade'
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_BRANCH}/{branch}.zip" --use-pep517 --upgrade {extra_index_url}'
print("")
print("")
if os.system(cmd) == 0:

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1,4 +1,4 @@
import{I as s,ie as T,v as l,$ as A,ig as R,aa as V,ih as z,ii as j,ij as D,ik as F,il as G,im as W,io as K,az as H,ip as U,iq as Y}from"./index-2ad466e0.js";import{M as Z}from"./MantineProvider-1fe3e660.js";var P=String.raw,E=P`
import{w as s,ic as T,v as l,a1 as I,id as R,ad as V,ie as z,ig as j,ih as D,ii as F,ij as G,ik as W,il as K,aG as H,im as U,io as Y}from"./index-ae455ad2.js";import{M as Z}from"./MantineProvider-8ba2c088.js";var P=String.raw,E=P`
:root,
:host {
--chakra-vh: 100vh;
@@ -277,4 +277,4 @@ import{I as s,ie as T,v as l,$ as A,ig as R,aa as V,ih as z,ii as j,ij as D,ik a
}
${E}
`}),g={light:"chakra-ui-light",dark:"chakra-ui-dark"};function Q(e={}){const{preventTransition:o=!0}=e,n={setDataset:r=>{const t=o?n.preventTransition():void 0;document.documentElement.dataset.theme=r,document.documentElement.style.colorScheme=r,t==null||t()},setClassName(r){document.body.classList.add(r?g.dark:g.light),document.body.classList.remove(r?g.light:g.dark)},query(){return window.matchMedia("(prefers-color-scheme: dark)")},getSystemTheme(r){var t;return((t=n.query().matches)!=null?t:r==="dark")?"dark":"light"},addListener(r){const t=n.query(),i=a=>{r(a.matches?"dark":"light")};return typeof t.addListener=="function"?t.addListener(i):t.addEventListener("change",i),()=>{typeof t.removeListener=="function"?t.removeListener(i):t.removeEventListener("change",i)}},preventTransition(){const r=document.createElement("style");return r.appendChild(document.createTextNode("*{-webkit-transition:none!important;-moz-transition:none!important;-o-transition:none!important;-ms-transition:none!important;transition:none!important}")),document.head.appendChild(r),()=>{window.getComputedStyle(document.body),requestAnimationFrame(()=>{requestAnimationFrame(()=>{document.head.removeChild(r)})})}}};return n}var X="chakra-ui-color-mode";function L(e){return{ssr:!1,type:"localStorage",get(o){if(!(globalThis!=null&&globalThis.document))return o;let n;try{n=localStorage.getItem(e)||o}catch{}return n||o},set(o){try{localStorage.setItem(e,o)}catch{}}}}var ee=L(X),M=()=>{};function S(e,o){return e.type==="cookie"&&e.ssr?e.get(o):o}function O(e){const{value:o,children:n,options:{useSystemColorMode:r,initialColorMode:t,disableTransitionOnChange:i}={},colorModeManager:a=ee}=e,d=t==="dark"?"dark":"light",[u,p]=l.useState(()=>S(a,d)),[y,b]=l.useState(()=>S(a)),{getSystemTheme:w,setClassName:k,setDataset:x,addListener:$}=l.useMemo(()=>Q({preventTransition:i}),[i]),v=t==="system"&&!u?y:u,c=l.useCallback(m=>{const f=m==="system"?w():m;p(f),k(f==="dark"),x(f),a.set(f)},[a,w,k,x]);A(()=>{t==="system"&&b(w())},[]),l.useEffect(()=>{const m=a.get();if(m){c(m);return}if(t==="system"){c("system");return}c(d)},[a,d,t,c]);const C=l.useCallback(()=>{c(v==="dark"?"light":"dark")},[v,c]);l.useEffect(()=>{if(r)return $(c)},[r,$,c]);const N=l.useMemo(()=>({colorMode:o??v,toggleColorMode:o?M:C,setColorMode:o?M:c,forced:o!==void 0}),[v,C,c,o]);return s.jsx(R.Provider,{value:N,children:n})}O.displayName="ColorModeProvider";var te=["borders","breakpoints","colors","components","config","direction","fonts","fontSizes","fontWeights","letterSpacings","lineHeights","radii","shadows","sizes","space","styles","transition","zIndices"];function re(e){return V(e)?te.every(o=>Object.prototype.hasOwnProperty.call(e,o)):!1}function h(e){return typeof e=="function"}function oe(...e){return o=>e.reduce((n,r)=>r(n),o)}var ne=e=>function(...n){let r=[...n],t=n[n.length-1];return re(t)&&r.length>1?r=r.slice(0,r.length-1):t=e,oe(...r.map(i=>a=>h(i)?i(a):ae(a,i)))(t)},ie=ne(j);function ae(...e){return z({},...e,_)}function _(e,o,n,r){if((h(e)||h(o))&&Object.prototype.hasOwnProperty.call(r,n))return(...t)=>{const i=h(e)?e(...t):e,a=h(o)?o(...t):o;return z({},i,a,_)}}var q=l.createContext({getDocument(){return document},getWindow(){return window}});q.displayName="EnvironmentContext";function I(e){const{children:o,environment:n,disabled:r}=e,t=l.useRef(null),i=l.useMemo(()=>n||{getDocument:()=>{var d,u;return(u=(d=t.current)==null?void 0:d.ownerDocument)!=null?u:document},getWindow:()=>{var d,u;return(u=(d=t.current)==null?void 
0:d.ownerDocument.defaultView)!=null?u:window}},[n]),a=!r||!n;return s.jsxs(q.Provider,{value:i,children:[o,a&&s.jsx("span",{id:"__chakra_env",hidden:!0,ref:t})]})}I.displayName="EnvironmentProvider";var se=e=>{const{children:o,colorModeManager:n,portalZIndex:r,resetScope:t,resetCSS:i=!0,theme:a={},environment:d,cssVarsRoot:u,disableEnvironment:p,disableGlobalStyle:y}=e,b=s.jsx(I,{environment:d,disabled:p,children:o});return s.jsx(D,{theme:a,cssVarsRoot:u,children:s.jsxs(O,{colorModeManager:n,options:a.config,children:[i?s.jsx(J,{scope:t}):s.jsx(B,{}),!y&&s.jsx(F,{}),r?s.jsx(G,{zIndex:r,children:b}):b]})})},le=e=>function({children:n,theme:r=e,toastOptions:t,...i}){return s.jsxs(se,{theme:r,...i,children:[s.jsx(W,{value:t==null?void 0:t.defaultOptions,children:n}),s.jsx(K,{...t})]})},de=le(j);const ue=()=>l.useMemo(()=>({colorScheme:"dark",fontFamily:"'Inter Variable', sans-serif",components:{ScrollArea:{defaultProps:{scrollbarSize:10},styles:{scrollbar:{"&:hover":{backgroundColor:"var(--invokeai-colors-baseAlpha-300)"}},thumb:{backgroundColor:"var(--invokeai-colors-baseAlpha-300)"}}}}}),[]),ce=L("@@invokeai-color-mode");function me({children:e}){const{i18n:o}=H(),n=o.dir(),r=l.useMemo(()=>ie({...U,direction:n}),[n]);l.useEffect(()=>{document.body.dir=n},[n]);const t=ue();return s.jsx(Z,{theme:t,children:s.jsx(de,{theme:r,colorModeManager:ce,toastOptions:Y,children:e})})}const ve=l.memo(me);export{ve as default};
`}),g={light:"chakra-ui-light",dark:"chakra-ui-dark"};function Q(e={}){const{preventTransition:o=!0}=e,n={setDataset:r=>{const t=o?n.preventTransition():void 0;document.documentElement.dataset.theme=r,document.documentElement.style.colorScheme=r,t==null||t()},setClassName(r){document.body.classList.add(r?g.dark:g.light),document.body.classList.remove(r?g.light:g.dark)},query(){return window.matchMedia("(prefers-color-scheme: dark)")},getSystemTheme(r){var t;return((t=n.query().matches)!=null?t:r==="dark")?"dark":"light"},addListener(r){const t=n.query(),i=a=>{r(a.matches?"dark":"light")};return typeof t.addListener=="function"?t.addListener(i):t.addEventListener("change",i),()=>{typeof t.removeListener=="function"?t.removeListener(i):t.removeEventListener("change",i)}},preventTransition(){const r=document.createElement("style");return r.appendChild(document.createTextNode("*{-webkit-transition:none!important;-moz-transition:none!important;-o-transition:none!important;-ms-transition:none!important;transition:none!important}")),document.head.appendChild(r),()=>{window.getComputedStyle(document.body),requestAnimationFrame(()=>{requestAnimationFrame(()=>{document.head.removeChild(r)})})}}};return n}var X="chakra-ui-color-mode";function L(e){return{ssr:!1,type:"localStorage",get(o){if(!(globalThis!=null&&globalThis.document))return o;let n;try{n=localStorage.getItem(e)||o}catch{}return n||o},set(o){try{localStorage.setItem(e,o)}catch{}}}}var ee=L(X),M=()=>{};function S(e,o){return e.type==="cookie"&&e.ssr?e.get(o):o}function O(e){const{value:o,children:n,options:{useSystemColorMode:r,initialColorMode:t,disableTransitionOnChange:i}={},colorModeManager:a=ee}=e,d=t==="dark"?"dark":"light",[u,p]=l.useState(()=>S(a,d)),[y,b]=l.useState(()=>S(a)),{getSystemTheme:w,setClassName:k,setDataset:x,addListener:$}=l.useMemo(()=>Q({preventTransition:i}),[i]),v=t==="system"&&!u?y:u,c=l.useCallback(m=>{const f=m==="system"?w():m;p(f),k(f==="dark"),x(f),a.set(f)},[a,w,k,x]);I(()=>{t==="system"&&b(w())},[]),l.useEffect(()=>{const m=a.get();if(m){c(m);return}if(t==="system"){c("system");return}c(d)},[a,d,t,c]);const C=l.useCallback(()=>{c(v==="dark"?"light":"dark")},[v,c]);l.useEffect(()=>{if(r)return $(c)},[r,$,c]);const A=l.useMemo(()=>({colorMode:o??v,toggleColorMode:o?M:C,setColorMode:o?M:c,forced:o!==void 0}),[v,C,c,o]);return s.jsx(R.Provider,{value:A,children:n})}O.displayName="ColorModeProvider";var te=["borders","breakpoints","colors","components","config","direction","fonts","fontSizes","fontWeights","letterSpacings","lineHeights","radii","shadows","sizes","space","styles","transition","zIndices"];function re(e){return V(e)?te.every(o=>Object.prototype.hasOwnProperty.call(e,o)):!1}function h(e){return typeof e=="function"}function oe(...e){return o=>e.reduce((n,r)=>r(n),o)}var ne=e=>function(...n){let r=[...n],t=n[n.length-1];return re(t)&&r.length>1?r=r.slice(0,r.length-1):t=e,oe(...r.map(i=>a=>h(i)?i(a):ae(a,i)))(t)},ie=ne(j);function ae(...e){return z({},...e,_)}function _(e,o,n,r){if((h(e)||h(o))&&Object.prototype.hasOwnProperty.call(r,n))return(...t)=>{const i=h(e)?e(...t):e,a=h(o)?o(...t):o;return z({},i,a,_)}}var q=l.createContext({getDocument(){return document},getWindow(){return window}});q.displayName="EnvironmentContext";function N(e){const{children:o,environment:n,disabled:r}=e,t=l.useRef(null),i=l.useMemo(()=>n||{getDocument:()=>{var d,u;return(u=(d=t.current)==null?void 0:d.ownerDocument)!=null?u:document},getWindow:()=>{var d,u;return(u=(d=t.current)==null?void 
0:d.ownerDocument.defaultView)!=null?u:window}},[n]),a=!r||!n;return s.jsxs(q.Provider,{value:i,children:[o,a&&s.jsx("span",{id:"__chakra_env",hidden:!0,ref:t})]})}N.displayName="EnvironmentProvider";var se=e=>{const{children:o,colorModeManager:n,portalZIndex:r,resetScope:t,resetCSS:i=!0,theme:a={},environment:d,cssVarsRoot:u,disableEnvironment:p,disableGlobalStyle:y}=e,b=s.jsx(N,{environment:d,disabled:p,children:o});return s.jsx(D,{theme:a,cssVarsRoot:u,children:s.jsxs(O,{colorModeManager:n,options:a.config,children:[i?s.jsx(J,{scope:t}):s.jsx(B,{}),!y&&s.jsx(F,{}),r?s.jsx(G,{zIndex:r,children:b}):b]})})},le=e=>function({children:n,theme:r=e,toastOptions:t,...i}){return s.jsxs(se,{theme:r,...i,children:[s.jsx(W,{value:t==null?void 0:t.defaultOptions,children:n}),s.jsx(K,{...t})]})},de=le(j);const ue=()=>l.useMemo(()=>({colorScheme:"dark",fontFamily:"'Inter Variable', sans-serif",components:{ScrollArea:{defaultProps:{scrollbarSize:10},styles:{scrollbar:{"&:hover":{backgroundColor:"var(--invokeai-colors-baseAlpha-300)"}},thumb:{backgroundColor:"var(--invokeai-colors-baseAlpha-300)"}}}}}),[]),ce=L("@@invokeai-color-mode");function me({children:e}){const{i18n:o}=H(),n=o.dir(),r=l.useMemo(()=>ie({...U,direction:n}),[n]);l.useEffect(()=>{document.body.dir=n},[n]);const t=ue();return s.jsx(Z,{theme:t,children:s.jsx(de,{theme:r,colorModeManager:ce,toastOptions:Y,children:e})})}const ve=l.memo(me);export{ve as default};

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -15,7 +15,7 @@
margin: 0;
}
</style>
<script type="module" crossorigin src="./assets/index-2ad466e0.js"></script>
<script type="module" crossorigin src="./assets/index-ae455ad2.js"></script>
</head>
<body dir="ltr">

View File

@@ -920,10 +920,7 @@
"unknownTemplate": "Unknown Template",
"unkownInvocation": "Unknown Invocation type",
"updateNode": "Update Node",
"updateAllNodes": "Update All Nodes",
"updateApp": "Update App",
"unableToUpdateNodes_one": "Unable to update {{count}} node",
"unableToUpdateNodes_other": "Unable to update {{count}} nodes",
"vaeField": "Vae",
"vaeFieldDescription": "Vae submodel.",
"vaeModelField": "VAE",

View File

@@ -1222,8 +1222,7 @@
"seamless": "无缝",
"fit": "图生图匹配",
"recallParameters": "召回参数",
"noRecallParameters": "未找到要召回的参数",
"vae": "VAE"
"noRecallParameters": "未找到要召回的参数"
},
"models": {
"noMatchingModels": "无相匹配的模型",
@@ -1502,18 +1501,5 @@
"clear": "清除",
"maxCacheSize": "最大缓存大小",
"cacheSize": "缓存大小"
},
"hrf": {
"enableHrf": "启用高分辨率修复",
"upscaleMethod": "放大方法",
"enableHrfTooltip": "使用较低的分辨率进行初始生成,放大到基础分辨率后进行图生图。",
"metadata": {
"strength": "高分辨率修复强度",
"enabled": "高分辨率修复已启用",
"method": "高分辨率修复方法"
},
"hrf": "高分辨率修复",
"hrfStrength": "高分辨率修复强度",
"strengthTooltip": "值越低细节越少,但可以减少部分潜在的伪影。"
}
}

View File

@@ -920,10 +920,7 @@
"unknownTemplate": "Unknown Template",
"unkownInvocation": "Unknown Invocation type",
"updateNode": "Update Node",
"updateAllNodes": "Update All Nodes",
"updateApp": "Update App",
"unableToUpdateNodes_one": "Unable to update {{count}} node",
"unableToUpdateNodes_other": "Unable to update {{count}} nodes",
"vaeField": "Vae",
"vaeFieldDescription": "Vae submodel.",
"vaeModelField": "VAE",

View File

@@ -1222,8 +1222,7 @@
"seamless": "无缝",
"fit": "图生图匹配",
"recallParameters": "召回参数",
"noRecallParameters": "未找到要召回的参数",
"vae": "VAE"
"noRecallParameters": "未找到要召回的参数"
},
"models": {
"noMatchingModels": "无相匹配的模型",
@@ -1502,18 +1501,5 @@
"clear": "清除",
"maxCacheSize": "最大缓存大小",
"cacheSize": "缓存大小"
},
"hrf": {
"enableHrf": "启用高分辨率修复",
"upscaleMethod": "放大方法",
"enableHrfTooltip": "使用较低的分辨率进行初始生成,放大到基础分辨率后进行图生图。",
"metadata": {
"strength": "高分辨率修复强度",
"enabled": "高分辨率修复已启用",
"method": "高分辨率修复方法"
},
"hrf": "高分辨率修复",
"hrfStrength": "高分辨率修复强度",
"strengthTooltip": "值越低细节越少,但可以减少部分潜在的伪影。"
}
}

View File

@@ -72,7 +72,6 @@ import { addStagingAreaImageSavedListener } from './listeners/stagingAreaImageSa
import { addTabChangedListener } from './listeners/tabChanged';
import { addUpscaleRequestedListener } from './listeners/upscaleRequested';
import { addWorkflowLoadedListener } from './listeners/workflowLoaded';
import { addUpdateAllNodesRequestedListener } from './listeners/updateAllNodesRequested';
export const listenerMiddleware = createListenerMiddleware();
@@ -179,7 +178,6 @@ addReceivedOpenAPISchemaListener();
// Workflows
addWorkflowLoadedListener();
addUpdateAllNodesRequestedListener();
// DND
addImageDroppedListener();

View File

@@ -8,6 +8,7 @@ import {
selectControlAdapterById,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { isControlNetOrT2IAdapter } from 'features/controlAdapters/store/types';
import { SAVE_IMAGE } from 'features/nodes/util/graphBuilders/constants';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { imagesApi } from 'services/api/endpoints/images';
@@ -37,7 +38,6 @@ export const addControlNetImageProcessedListener = () => {
// ControlNet one-off processing graph is just the processor node, no edges.
// Also we need to grab the image.
const nodeId = ca.processorNode.id;
const enqueueBatchArg: BatchConfig = {
prepend: true,
batch: {
@@ -46,10 +46,27 @@ export const addControlNetImageProcessedListener = () => {
[ca.processorNode.id]: {
...ca.processorNode,
is_intermediate: true,
use_cache: false,
image: { image_name: ca.controlImage },
},
[SAVE_IMAGE]: {
id: SAVE_IMAGE,
type: 'save_image',
is_intermediate: true,
use_cache: false,
},
},
edges: [
{
source: {
node_id: ca.processorNode.id,
field: 'image',
},
destination: {
node_id: SAVE_IMAGE,
field: 'image',
},
},
],
},
runs: 1,
},
@@ -73,7 +90,7 @@ export const addControlNetImageProcessedListener = () => {
socketInvocationComplete.match(action) &&
action.payload.data.queue_batch_id ===
enqueueResult.batch.batch_id &&
action.payload.data.source_node_id === nodeId
action.payload.data.source_node_id === SAVE_IMAGE
);
// We still have to check the output type

View File

@@ -7,10 +7,7 @@ import {
imageSelected,
} from 'features/gallery/store/gallerySlice';
import { IMAGE_CATEGORIES } from 'features/gallery/store/types';
import {
LINEAR_UI_OUTPUT,
nodeIDDenyList,
} from 'features/nodes/util/graphBuilders/constants';
import { CANVAS_OUTPUT } from 'features/nodes/util/graphBuilders/constants';
import { boardsApi } from 'services/api/endpoints/boards';
import { imagesApi } from 'services/api/endpoints/images';
import { isImageOutput } from 'services/api/guards';
@@ -22,7 +19,7 @@ import {
import { startAppListening } from '../..';
// These nodes output an image, but do not actually *save* an image, so we don't want to handle the gallery logic on them
const nodeTypeDenylist = ['load_image', 'image'];
const nodeDenylist = ['load_image', 'image'];
export const addInvocationCompleteEventListener = () => {
startAppListening({
@@ -35,31 +32,22 @@ export const addInvocationCompleteEventListener = () => {
`Invocation complete (${action.payload.data.node.type})`
);
const { result, node, queue_batch_id, source_node_id } = data;
const { result, node, queue_batch_id } = data;
// This complete event has an associated image output
if (
isImageOutput(result) &&
!nodeTypeDenylist.includes(node.type) &&
!nodeIDDenyList.includes(source_node_id)
) {
if (isImageOutput(result) && !nodeDenylist.includes(node.type)) {
const { image_name } = result.image;
const { canvas, gallery } = getState();
// This populates the `getImageDTO` cache
const imageDTORequest = dispatch(
imagesApi.endpoints.getImageDTO.initiate(image_name, {
forceRefetch: true,
})
);
const imageDTO = await imageDTORequest.unwrap();
imageDTORequest.unsubscribe();
const imageDTO = await dispatch(
imagesApi.endpoints.getImageDTO.initiate(image_name)
).unwrap();
// Add canvas images to the staging area
if (
canvas.batchIds.includes(queue_batch_id) &&
[LINEAR_UI_OUTPUT].includes(data.source_node_id)
[CANVAS_OUTPUT].includes(data.source_node_id)
) {
dispatch(addImageToStagingArea(imageDTO));
}

View File

@@ -1,52 +0,0 @@
import {
getNeedsUpdate,
updateNode,
} from 'features/nodes/hooks/useNodeVersion';
import { updateAllNodesRequested } from 'features/nodes/store/actions';
import { nodeReplaced } from 'features/nodes/store/nodesSlice';
import { startAppListening } from '..';
import { logger } from 'app/logging/logger';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { t } from 'i18next';
export const addUpdateAllNodesRequestedListener = () => {
startAppListening({
actionCreator: updateAllNodesRequested,
effect: (action, { dispatch, getState }) => {
const log = logger('nodes');
const nodes = getState().nodes.nodes;
const templates = getState().nodes.nodeTemplates;
let unableToUpdateCount = 0;
nodes.forEach((node) => {
const template = templates[node.data.type];
const needsUpdate = getNeedsUpdate(node, template);
const updatedNode = updateNode(node, template);
if (!updatedNode) {
if (needsUpdate) {
unableToUpdateCount++;
}
return;
}
dispatch(nodeReplaced({ nodeId: updatedNode.id, node: updatedNode }));
});
if (unableToUpdateCount) {
log.warn(
`Unable to update ${unableToUpdateCount} nodes. Please report this issue.`
);
dispatch(
addToast(
makeToast({
title: t('nodes.unableToUpdateNodes', {
count: unableToUpdateCount,
}),
})
)
);
}
},
});
};

View File

@@ -157,8 +157,6 @@ const ImageMetadataActions = (props: Props) => {
return null;
}
console.log(metadata);
return (
<>
{metadata.created_by && (

View File

@@ -3,7 +3,7 @@ import { memo } from 'react';
import NodeCollapseButton from '../common/NodeCollapseButton';
import NodeTitle from '../common/NodeTitle';
import InvocationNodeCollapsedHandles from './InvocationNodeCollapsedHandles';
import InvocationNodeInfoIcon from './InvocationNodeInfoIcon';
import InvocationNodeNotes from './InvocationNodeNotes';
import InvocationNodeStatusIndicator from './InvocationNodeStatusIndicator';
type Props = {
@@ -34,7 +34,7 @@ const InvocationNodeHeader = ({ nodeId, isOpen }: Props) => {
<NodeTitle nodeId={nodeId} />
<Flex alignItems="center">
<InvocationNodeStatusIndicator nodeId={nodeId} />
<InvocationNodeInfoIcon nodeId={nodeId} />
<InvocationNodeNotes nodeId={nodeId} />
</Flex>
{!isOpen && <InvocationNodeCollapsedHandles nodeId={nodeId} />}
</Flex>


@@ -1,39 +1,85 @@
import { Flex, Icon, Text, Tooltip } from '@chakra-ui/react';
import {
Flex,
Icon,
Modal,
ModalBody,
ModalCloseButton,
ModalContent,
ModalFooter,
ModalHeader,
ModalOverlay,
Text,
Tooltip,
useDisclosure,
} from '@chakra-ui/react';
import { compare } from 'compare-versions';
import { useNodeData } from 'features/nodes/hooks/useNodeData';
import { useNodeLabel } from 'features/nodes/hooks/useNodeLabel';
import { useNodeTemplate } from 'features/nodes/hooks/useNodeTemplate';
import { useNodeVersion } from 'features/nodes/hooks/useNodeVersion';
import { useNodeTemplateTitle } from 'features/nodes/hooks/useNodeTemplateTitle';
import { isInvocationNodeData } from 'features/nodes/types/types';
import { memo, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { FaInfoCircle } from 'react-icons/fa';
import NotesTextarea from './NotesTextarea';
import { useDoNodeVersionsMatch } from 'features/nodes/hooks/useDoNodeVersionsMatch';
import { useTranslation } from 'react-i18next';
interface Props {
nodeId: string;
}
const InvocationNodeInfoIcon = ({ nodeId }: Props) => {
const { needsUpdate } = useNodeVersion(nodeId);
const InvocationNodeNotes = ({ nodeId }: Props) => {
const { isOpen, onOpen, onClose } = useDisclosure();
const label = useNodeLabel(nodeId);
const title = useNodeTemplateTitle(nodeId);
const doVersionsMatch = useDoNodeVersionsMatch(nodeId);
const { t } = useTranslation();
return (
<Tooltip
label={<TooltipContent nodeId={nodeId} />}
placement="top"
shouldWrapChildren
>
<Icon
as={FaInfoCircle}
sx={{
boxSize: 4,
w: 8,
color: needsUpdate ? 'error.400' : 'base.400',
}}
/>
</Tooltip>
<>
<Tooltip
label={<TooltipContent nodeId={nodeId} />}
placement="top"
shouldWrapChildren
>
<Flex
className="nodrag"
onClick={onOpen}
sx={{
alignItems: 'center',
justifyContent: 'center',
w: 8,
h: 8,
cursor: 'pointer',
}}
>
<Icon
as={FaInfoCircle}
sx={{
boxSize: 4,
w: 8,
color: doVersionsMatch ? 'base.400' : 'error.400',
}}
/>
</Flex>
</Tooltip>
<Modal isOpen={isOpen} onClose={onClose} isCentered>
<ModalOverlay />
<ModalContent>
<ModalHeader>{label || title || t('nodes.unknownNode')}</ModalHeader>
<ModalCloseButton />
<ModalBody>
<NotesTextarea nodeId={nodeId} />
</ModalBody>
<ModalFooter />
</ModalContent>
</Modal>
</>
);
};
export default memo(InvocationNodeInfoIcon);
export default memo(InvocationNodeNotes);
const TooltipContent = memo(({ nodeId }: { nodeId: string }) => {
const data = useNodeData(nodeId);


@@ -3,22 +3,15 @@ import { useAppDispatch } from 'app/store/storeHooks';
import IAIIconButton from 'common/components/IAIIconButton';
import { addNodePopoverOpened } from 'features/nodes/store/nodesSlice';
import { memo, useCallback } from 'react';
import { FaPlus, FaSync } from 'react-icons/fa';
import { FaPlus } from 'react-icons/fa';
import { useTranslation } from 'react-i18next';
import IAIButton from 'common/components/IAIButton';
import { useGetNodesNeedUpdate } from 'features/nodes/hooks/useGetNodesNeedUpdate';
import { updateAllNodesRequested } from 'features/nodes/store/actions';
const TopLeftPanel = () => {
const dispatch = useAppDispatch();
const { t } = useTranslation();
const nodesNeedUpdate = useGetNodesNeedUpdate();
const handleOpenAddNodePopover = useCallback(() => {
dispatch(addNodePopoverOpened());
}, [dispatch]);
const handleClickUpdateNodes = useCallback(() => {
dispatch(updateAllNodesRequested());
}, [dispatch]);
return (
<Flex sx={{ gap: 2, position: 'absolute', top: 2, insetInlineStart: 2 }}>
@@ -28,11 +21,6 @@ const TopLeftPanel = () => {
icon={<FaPlus />}
onClick={handleOpenAddNodePopover}
/>
{nodesNeedUpdate && (
<IAIButton leftIcon={<FaSync />} onClick={handleClickUpdateNodes}>
{t('nodes.updateAllNodes')}
</IAIButton>
)}
</Flex>
);
};


@@ -1,125 +0,0 @@
import {
Box,
Flex,
FormControl,
FormLabel,
HStack,
Text,
} from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIIconButton from 'common/components/IAIIconButton';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import { useNodeVersion } from 'features/nodes/hooks/useNodeVersion';
import {
InvocationNodeData,
InvocationTemplate,
isInvocationNode,
} from 'features/nodes/types/types';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { FaSync } from 'react-icons/fa';
import { Node } from 'reactflow';
import NotesTextarea from '../../flow/nodes/Invocation/NotesTextarea';
import ScrollableContent from '../ScrollableContent';
import EditableNodeTitle from './details/EditableNodeTitle';
const selector = createSelector(
stateSelector,
({ nodes }) => {
const lastSelectedNodeId =
nodes.selectedNodes[nodes.selectedNodes.length - 1];
const lastSelectedNode = nodes.nodes.find(
(node) => node.id === lastSelectedNodeId
);
const lastSelectedNodeTemplate = lastSelectedNode
? nodes.nodeTemplates[lastSelectedNode.data.type]
: undefined;
return {
node: lastSelectedNode,
template: lastSelectedNodeTemplate,
};
},
defaultSelectorOptions
);
const InspectorDetailsTab = () => {
const { node, template } = useAppSelector(selector);
const { t } = useTranslation();
if (!template || !isInvocationNode(node)) {
return (
<IAINoContentFallback label={t('nodes.noNodeSelected')} icon={null} />
);
}
return <Content node={node} template={template} />;
};
export default memo(InspectorDetailsTab);
const Content = (props: {
node: Node<InvocationNodeData>;
template: InvocationTemplate;
}) => {
const { t } = useTranslation();
const { needsUpdate, updateNode } = useNodeVersion(props.node.id);
return (
<Box
sx={{
position: 'relative',
w: 'full',
h: 'full',
}}
>
<ScrollableContent>
<Flex
sx={{
flexDir: 'column',
position: 'relative',
p: 1,
gap: 2,
w: 'full',
}}
>
<EditableNodeTitle nodeId={props.node.data.id} />
<HStack>
<FormControl>
<FormLabel>Node Type</FormLabel>
<Text fontSize="sm" fontWeight={600}>
{props.template.title}
</Text>
</FormControl>
<Flex
flexDir="row"
alignItems="center"
justifyContent="space-between"
w="full"
>
<FormControl isInvalid={needsUpdate}>
<FormLabel>Node Version</FormLabel>
<Text fontSize="sm" fontWeight={600}>
{props.node.data.version}
</Text>
</FormControl>
{needsUpdate && (
<IAIIconButton
aria-label={t('nodes.updateNode')}
tooltip={t('nodes.updateNode')}
icon={<FaSync />}
onClick={updateNode}
/>
)}
</Flex>
</HStack>
<NotesTextarea nodeId={props.node.data.id} />
</Flex>
</ScrollableContent>
</Box>
);
};


@@ -10,7 +10,7 @@ import { memo } from 'react';
import InspectorDataTab from './InspectorDataTab';
import InspectorOutputsTab from './InspectorOutputsTab';
import InspectorTemplateTab from './InspectorTemplateTab';
import InspectorDetailsTab from './InspectorDetailsTab';
// import InspectorDetailsTab from './InspectorDetailsTab';
const InspectorPanel = () => {
return (
@@ -30,16 +30,16 @@ const InspectorPanel = () => {
sx={{ display: 'flex', flexDir: 'column', w: 'full', h: 'full' }}
>
<TabList>
<Tab>Details</Tab>
{/* <Tab>Details</Tab> */}
<Tab>Outputs</Tab>
<Tab>Data</Tab>
<Tab>Template</Tab>
</TabList>
<TabPanels>
<TabPanel>
{/* <TabPanel>
<InspectorDetailsTab />
</TabPanel>
</TabPanel> */}
<TabPanel>
<InspectorOutputsTab />
</TabPanel>


@@ -1,74 +0,0 @@
import {
Editable,
EditableInput,
EditablePreview,
Flex,
} from '@chakra-ui/react';
import { useAppDispatch } from 'app/store/storeHooks';
import { useNodeLabel } from 'features/nodes/hooks/useNodeLabel';
import { useNodeTemplateTitle } from 'features/nodes/hooks/useNodeTemplateTitle';
import { nodeLabelChanged } from 'features/nodes/store/nodesSlice';
import { memo, useCallback, useEffect, useState } from 'react';
import { useTranslation } from 'react-i18next';
type Props = {
nodeId: string;
title?: string;
};
const EditableNodeTitle = ({ nodeId, title }: Props) => {
const dispatch = useAppDispatch();
const label = useNodeLabel(nodeId);
const templateTitle = useNodeTemplateTitle(nodeId);
const { t } = useTranslation();
const [localTitle, setLocalTitle] = useState('');
const handleSubmit = useCallback(
async (newTitle: string) => {
dispatch(nodeLabelChanged({ nodeId, label: newTitle }));
setLocalTitle(
label || title || templateTitle || t('nodes.problemSettingTitle')
);
},
[dispatch, nodeId, title, templateTitle, label, t]
);
const handleChange = useCallback((newTitle: string) => {
setLocalTitle(newTitle);
}, []);
useEffect(() => {
// Another component may change the title; sync local title with global state
setLocalTitle(
label || title || templateTitle || t('nodes.problemSettingTitle')
);
}, [label, templateTitle, title, t]);
return (
<Flex
sx={{
w: 'full',
h: 'full',
alignItems: 'center',
justifyContent: 'center',
}}
>
<Editable
as={Flex}
value={localTitle}
onChange={handleChange}
onSubmit={handleSubmit}
w="full"
fontWeight={600}
>
<EditablePreview noOfLines={1} />
<EditableInput
className="nodrag"
_focusVisible={{ boxShadow: 'none' }}
/>
</Editable>
</Flex>
);
};
export default memo(EditableNodeTitle);


@@ -1,10 +1,19 @@
import { createSelector } from '@reduxjs/toolkit';
import { RootState } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { reduce } from 'lodash-es';
import { useCallback } from 'react';
import { Node, useReactFlow } from 'reactflow';
import { AnyInvocationType } from 'services/events/types';
import { buildNodeData } from '../store/util/buildNodeData';
import { v4 as uuidv4 } from 'uuid';
import {
CurrentImageNodeData,
InputFieldValue,
InvocationNodeData,
NotesNodeData,
OutputFieldValue,
} from '../types/types';
import { buildInputFieldValue } from '../util/fieldValueBuilders';
import { DRAG_HANDLE_CLASSNAME, NODE_WIDTH } from '../types/constants';
const templatesSelector = createSelector(
@@ -17,12 +26,14 @@ export const SHARED_NODE_PROPERTIES: Partial<Node> = {
};
export const useBuildNodeData = () => {
const nodeTemplates = useAppSelector(templatesSelector);
const invocationTemplates = useAppSelector(templatesSelector);
const flow = useReactFlow();
return useCallback(
(type: AnyInvocationType | 'current_image' | 'notes') => {
const nodeId = uuidv4();
let _x = window.innerWidth / 2;
let _y = window.innerHeight / 2;
@@ -36,15 +47,111 @@ export const useBuildNodeData = () => {
_y = rect.height / 2 - NODE_WIDTH / 2;
}
const position = flow.project({
const { x, y } = flow.project({
x: _x,
y: _y,
});
const template = nodeTemplates[type];
if (type === 'current_image') {
const node: Node<CurrentImageNodeData> = {
...SHARED_NODE_PROPERTIES,
id: nodeId,
type: 'current_image',
position: { x: x, y: y },
data: {
id: nodeId,
type: 'current_image',
isOpen: true,
label: 'Current Image',
},
};
return buildNodeData(type, position, template);
return node;
}
if (type === 'notes') {
const node: Node<NotesNodeData> = {
...SHARED_NODE_PROPERTIES,
id: nodeId,
type: 'notes',
position: { x: x, y: y },
data: {
id: nodeId,
isOpen: true,
label: 'Notes',
notes: '',
type: 'notes',
},
};
return node;
}
const template = invocationTemplates[type];
if (template === undefined) {
console.error(`Unable to find template ${type}.`);
return;
}
const inputs = reduce(
template.inputs,
(inputsAccumulator, inputTemplate, inputName) => {
const fieldId = uuidv4();
const inputFieldValue: InputFieldValue = buildInputFieldValue(
fieldId,
inputTemplate
);
inputsAccumulator[inputName] = inputFieldValue;
return inputsAccumulator;
},
{} as Record<string, InputFieldValue>
);
const outputs = reduce(
template.outputs,
(outputsAccumulator, outputTemplate, outputName) => {
const fieldId = uuidv4();
const outputFieldValue: OutputFieldValue = {
id: fieldId,
name: outputName,
type: outputTemplate.type,
fieldKind: 'output',
};
outputsAccumulator[outputName] = outputFieldValue;
return outputsAccumulator;
},
{} as Record<string, OutputFieldValue>
);
const invocation: Node<InvocationNodeData> = {
...SHARED_NODE_PROPERTIES,
id: nodeId,
type: 'invocation',
position: { x: x, y: y },
data: {
id: nodeId,
type,
version: template.version,
label: '',
notes: '',
isOpen: true,
embedWorkflow: false,
isIntermediate: type === 'save_image' ? false : true,
inputs,
outputs,
useCache: template.useCache,
},
};
return invocation;
},
[nodeTemplates, flow]
[invocationTemplates, flow]
);
};


@@ -1,25 +0,0 @@
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { getNeedsUpdate } from './useNodeVersion';
const selector = createSelector(
stateSelector,
(state) => {
const nodes = state.nodes.nodes;
const templates = state.nodes.nodeTemplates;
const needsUpdate = nodes.some((node) => {
const template = templates[node.data.type];
return getNeedsUpdate(node, template);
});
return needsUpdate;
},
defaultSelectorOptions
);
export const useGetNodesNeedUpdate = () => {
const getNeedsUpdate = useAppSelector(selector);
return getNeedsUpdate;
};


@@ -1,27 +0,0 @@
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { useMemo } from 'react';
import { AnyInvocationType } from 'services/events/types';
export const useNodeTemplateByType = (
type: AnyInvocationType | 'current_image' | 'notes'
) => {
const selector = useMemo(
() =>
createSelector(
stateSelector,
({ nodes }) => {
const nodeTemplate = nodes.nodeTemplates[type];
return nodeTemplate;
},
defaultSelectorOptions
),
[type]
);
const nodeTemplate = useAppSelector(selector);
return nodeTemplate;
};


@@ -1,119 +0,0 @@
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { satisfies } from 'compare-versions';
import { cloneDeep, defaultsDeep } from 'lodash-es';
import { useCallback, useMemo } from 'react';
import { Node } from 'reactflow';
import { AnyInvocationType } from 'services/events/types';
import { nodeReplaced } from '../store/nodesSlice';
import { buildNodeData } from '../store/util/buildNodeData';
import {
InvocationNodeData,
InvocationTemplate,
NodeData,
isInvocationNode,
zParsedSemver,
} from '../types/types';
import { useAppToaster } from 'app/components/Toaster';
import { useTranslation } from 'react-i18next';
export const getNeedsUpdate = (
node?: Node<NodeData>,
template?: InvocationTemplate
) => {
if (!isInvocationNode(node) || !template) {
return false;
}
return node.data.version !== template.version;
};
export const getMayUpdateNode = (
node?: Node<NodeData>,
template?: InvocationTemplate
) => {
const needsUpdate = getNeedsUpdate(node, template);
if (
!needsUpdate ||
!isInvocationNode(node) ||
!template ||
!node.data.version
) {
return false;
}
const templateMajor = zParsedSemver.parse(template.version).major;
return satisfies(node.data.version, `^${templateMajor}`);
};
export const updateNode = (
node?: Node<NodeData>,
template?: InvocationTemplate
) => {
const mayUpdate = getMayUpdateNode(node, template);
if (
!mayUpdate ||
!isInvocationNode(node) ||
!template ||
!node.data.version
) {
return;
}
const defaults = buildNodeData(
node.data.type as AnyInvocationType,
node.position,
template
) as Node<InvocationNodeData>;
const clone = cloneDeep(node);
clone.data.version = template.version;
defaultsDeep(clone, defaults);
return clone;
};
export const useNodeVersion = (nodeId: string) => {
const dispatch = useAppDispatch();
const toast = useAppToaster();
const { t } = useTranslation();
const selector = useMemo(
() =>
createSelector(
stateSelector,
({ nodes }) => {
const node = nodes.nodes.find((node) => node.id === nodeId);
const nodeTemplate = nodes.nodeTemplates[node?.data.type ?? ''];
return { node, nodeTemplate };
},
defaultSelectorOptions
),
[nodeId]
);
const { node, nodeTemplate } = useAppSelector(selector);
const needsUpdate = useMemo(
() => getNeedsUpdate(node, nodeTemplate),
[node, nodeTemplate]
);
const mayUpdate = useMemo(
() => getMayUpdateNode(node, nodeTemplate),
[node, nodeTemplate]
);
const _updateNode = useCallback(() => {
const needsUpdate = getNeedsUpdate(node, nodeTemplate);
const updatedNode = updateNode(node, nodeTemplate);
if (!updatedNode) {
if (needsUpdate) {
toast({ title: t('nodes.unableToUpdateNodes', { count: 1 }) });
}
return;
}
dispatch(nodeReplaced({ nodeId: updatedNode.id, node: updatedNode }));
}, [dispatch, node, nodeTemplate, t, toast]);
return { needsUpdate, mayUpdate, updateNode: _updateNode };
};


@@ -21,7 +21,3 @@ export const isAnyGraphBuilt = isAnyOf(
export const workflowLoadRequested = createAction<Workflow>(
'nodes/workflowLoadRequested'
);
export const updateAllNodesRequested = createAction(
'nodes/updateAllNodesRequested'
);


@@ -149,18 +149,6 @@ const nodesSlice = createSlice({
nodesChanged: (state, action: PayloadAction<NodeChange[]>) => {
state.nodes = applyNodeChanges(action.payload, state.nodes);
},
nodeReplaced: (
state,
action: PayloadAction<{ nodeId: string; node: Node }>
) => {
const nodeIndex = state.nodes.findIndex(
(n) => n.id === action.payload.nodeId
);
if (nodeIndex < 0) {
return;
}
state.nodes[nodeIndex] = action.payload.node;
},
nodeAdded: (
state,
action: PayloadAction<
@@ -1041,7 +1029,6 @@ export const {
mouseOverFieldChanged,
mouseOverNodeChanged,
nodeAdded,
nodeReplaced,
nodeEditorReset,
nodeEmbedWorkflowChanged,
nodeExclusivelySelected,


@@ -1,127 +0,0 @@
import { DRAG_HANDLE_CLASSNAME } from 'features/nodes/types/constants';
import {
CurrentImageNodeData,
InputFieldValue,
InvocationNodeData,
InvocationTemplate,
NotesNodeData,
OutputFieldValue,
} from 'features/nodes/types/types';
import { buildInputFieldValue } from 'features/nodes/util/fieldValueBuilders';
import { reduce } from 'lodash-es';
import { Node, XYPosition } from 'reactflow';
import { AnyInvocationType } from 'services/events/types';
import { v4 as uuidv4 } from 'uuid';
export const SHARED_NODE_PROPERTIES: Partial<Node> = {
dragHandle: `.${DRAG_HANDLE_CLASSNAME}`,
};
export const buildNodeData = (
type: AnyInvocationType | 'current_image' | 'notes',
position: XYPosition,
template?: InvocationTemplate
):
| Node<CurrentImageNodeData>
| Node<NotesNodeData>
| Node<InvocationNodeData>
| undefined => {
const nodeId = uuidv4();
if (type === 'current_image') {
const node: Node<CurrentImageNodeData> = {
...SHARED_NODE_PROPERTIES,
id: nodeId,
type: 'current_image',
position,
data: {
id: nodeId,
type: 'current_image',
isOpen: true,
label: 'Current Image',
},
};
return node;
}
if (type === 'notes') {
const node: Node<NotesNodeData> = {
...SHARED_NODE_PROPERTIES,
id: nodeId,
type: 'notes',
position,
data: {
id: nodeId,
isOpen: true,
label: 'Notes',
notes: '',
type: 'notes',
},
};
return node;
}
if (template === undefined) {
console.error(`Unable to find template ${type}.`);
return;
}
const inputs = reduce(
template.inputs,
(inputsAccumulator, inputTemplate, inputName) => {
const fieldId = uuidv4();
const inputFieldValue: InputFieldValue = buildInputFieldValue(
fieldId,
inputTemplate
);
inputsAccumulator[inputName] = inputFieldValue;
return inputsAccumulator;
},
{} as Record<string, InputFieldValue>
);
const outputs = reduce(
template.outputs,
(outputsAccumulator, outputTemplate, outputName) => {
const fieldId = uuidv4();
const outputFieldValue: OutputFieldValue = {
id: fieldId,
name: outputName,
type: outputTemplate.type,
fieldKind: 'output',
};
outputsAccumulator[outputName] = outputFieldValue;
return outputsAccumulator;
},
{} as Record<string, OutputFieldValue>
);
const invocation: Node<InvocationNodeData> = {
...SHARED_NODE_PROPERTIES,
id: nodeId,
type: 'invocation',
position,
data: {
id: nodeId,
type,
version: template.version,
label: '',
notes: '',
isOpen: true,
embedWorkflow: false,
isIntermediate: type === 'save_image' ? false : true,
inputs,
outputs,
useCache: template.useCache,
},
};
return invocation;
};


@@ -23,7 +23,7 @@ import {
RESIZE_HRF,
VAE_LOADER,
} from './constants';
import { setMetadataReceivingNode, upsertMetadata } from './metadata';
import { upsertMetadata } from './metadata';
// Copy certain connections from previous DENOISE_LATENTS to new DENOISE_LATENTS_HRF.
function copyConnectionsToDenoiseLatentsHrf(graph: NonNullableGraph): void {
@@ -369,5 +369,4 @@ export const addHrfToGraph = (
hrf_enabled: hrfEnabled,
hrf_method: hrfMethod,
});
setMetadataReceivingNode(graph, LATENTS_TO_IMAGE_HRF_HR);
};


@@ -1,20 +1,20 @@
import { RootState } from 'app/store/store';
import { NonNullableGraph } from 'features/nodes/types/types';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { LinearUIOutputInvocation } from 'services/api/types';
import { SaveImageInvocation } from 'services/api/types';
import {
CANVAS_OUTPUT,
LATENTS_TO_IMAGE,
LATENTS_TO_IMAGE_HRF_HR,
LINEAR_UI_OUTPUT,
NSFW_CHECKER,
SAVE_IMAGE,
WATERMARKER,
} from './constants';
/**
* Set the `use_cache` field on the linear/canvas graph's final image output node to False.
*/
export const addLinearUIOutputNode = (
export const addSaveImageNode = (
state: RootState,
graph: NonNullableGraph
): void => {
@@ -23,18 +23,18 @@ export const addLinearUIOutputNode = (
activeTabName === 'unifiedCanvas' ? !state.canvas.shouldAutoSave : false;
const { autoAddBoardId } = state.gallery;
const linearUIOutputNode: LinearUIOutputInvocation = {
id: LINEAR_UI_OUTPUT,
type: 'linear_ui_output',
const saveImageNode: SaveImageInvocation = {
id: SAVE_IMAGE,
type: 'save_image',
is_intermediate,
use_cache: false,
board: autoAddBoardId === 'none' ? undefined : { board_id: autoAddBoardId },
};
graph.nodes[LINEAR_UI_OUTPUT] = linearUIOutputNode;
graph.nodes[SAVE_IMAGE] = saveImageNode;
const destination = {
node_id: LINEAR_UI_OUTPUT,
node_id: SAVE_IMAGE,
field: 'image',
};


@@ -4,9 +4,9 @@ import { ESRGANModelName } from 'features/parameters/store/postprocessingSlice';
import {
ESRGANInvocation,
Graph,
LinearUIOutputInvocation,
SaveImageInvocation,
} from 'services/api/types';
import { ESRGAN, LINEAR_UI_OUTPUT } from './constants';
import { REALESRGAN as ESRGAN, SAVE_IMAGE } from './constants';
import { addCoreMetadataNode, upsertMetadata } from './metadata';
type Arg = {
@@ -28,9 +28,9 @@ export const buildAdHocUpscaleGraph = ({
is_intermediate: true,
};
const linearUIOutputNode: LinearUIOutputInvocation = {
id: LINEAR_UI_OUTPUT,
type: 'linear_ui_output',
const saveImageNode: SaveImageInvocation = {
id: SAVE_IMAGE,
type: 'save_image',
use_cache: false,
is_intermediate: false,
board: autoAddBoardId === 'none' ? undefined : { board_id: autoAddBoardId },
@@ -40,7 +40,7 @@ export const buildAdHocUpscaleGraph = ({
id: `adhoc-esrgan-graph`,
nodes: {
[ESRGAN]: realesrganNode,
[LINEAR_UI_OUTPUT]: linearUIOutputNode,
[SAVE_IMAGE]: saveImageNode,
},
edges: [
{
@@ -49,14 +49,14 @@ export const buildAdHocUpscaleGraph = ({
field: 'image',
},
destination: {
node_id: LINEAR_UI_OUTPUT,
node_id: SAVE_IMAGE,
field: 'image',
},
},
],
};
addCoreMetadataNode(graph, {}, ESRGAN);
addCoreMetadataNode(graph, {});
upsertMetadata(graph, {
esrgan_model: esrganModelName,
});


@@ -6,7 +6,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addLoRAsToGraph } from './addLoRAsToGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -308,30 +308,24 @@ export const buildCanvasImageToImageGraph = (
});
}
addCoreMetadataNode(
graph,
{
generation_mode: 'img2img',
cfg_scale,
width: !isUsingScaledDimensions
? width
: scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
strength,
init_image: initialImage.image_name,
},
CANVAS_OUTPUT
);
addCoreMetadataNode(graph, {
generation_mode: 'img2img',
cfg_scale,
width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
strength,
init_image: initialImage.image_name,
});
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -363,7 +357,7 @@ export const buildCanvasImageToImageGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addLinearUIOutputNode(state, graph);
addSaveImageNode(state, graph);
return graph;
};


@@ -13,7 +13,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addLoRAsToGraph } from './addLoRAsToGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -666,7 +666,7 @@ export const buildCanvasInpaintGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addLinearUIOutputNode(state, graph);
addSaveImageNode(state, graph);
return graph;
};


@@ -12,7 +12,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addLoRAsToGraph } from './addLoRAsToGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -770,7 +770,7 @@ export const buildCanvasOutpaintGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addLinearUIOutputNode(state, graph);
addSaveImageNode(state, graph);
return graph;
};


@@ -7,7 +7,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
import { addWatermarkerToGraph } from './addWatermarkerToGraph';
@@ -319,29 +319,23 @@ export const buildCanvasSDXLImageToImageGraph = (
});
}
addCoreMetadataNode(
graph,
{
generation_mode: 'img2img',
cfg_scale,
width: !isUsingScaledDimensions
? width
: scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
strength,
init_image: initialImage.image_name,
},
CANVAS_OUTPUT
);
addCoreMetadataNode(graph, {
generation_mode: 'img2img',
cfg_scale,
width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
strength,
init_image: initialImage.image_name,
});
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -386,7 +380,7 @@ export const buildCanvasSDXLImageToImageGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addLinearUIOutputNode(state, graph);
addSaveImageNode(state, graph);
return graph;
};


@@ -14,7 +14,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -696,7 +696,7 @@ export const buildCanvasSDXLInpaintGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addLinearUIOutputNode(state, graph);
addSaveImageNode(state, graph);
return graph;
};


@@ -13,7 +13,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -799,7 +799,7 @@ export const buildCanvasSDXLOutpaintGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addLinearUIOutputNode(state, graph);
addSaveImageNode(state, graph);
return graph;
};


@@ -10,7 +10,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -301,27 +301,21 @@ export const buildCanvasSDXLTextToImageGraph = (
});
}
addCoreMetadataNode(
graph,
{
generation_mode: 'txt2img',
cfg_scale,
width: !isUsingScaledDimensions
? width
: scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
},
CANVAS_OUTPUT
);
addCoreMetadataNode(graph, {
generation_mode: 'txt2img',
cfg_scale,
width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
});
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -366,7 +360,7 @@ export const buildCanvasSDXLTextToImageGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addLinearUIOutputNode(state, graph);
addSaveImageNode(state, graph);
return graph;
};


@@ -9,7 +9,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addLoRAsToGraph } from './addLoRAsToGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -289,28 +289,22 @@ export const buildCanvasTextToImageGraph = (
});
}
addCoreMetadataNode(
graph,
{
generation_mode: 'txt2img',
cfg_scale,
width: !isUsingScaledDimensions
? width
: scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
},
CANVAS_OUTPUT
);
addCoreMetadataNode(graph, {
generation_mode: 'txt2img',
cfg_scale,
width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
});
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -342,7 +336,7 @@ export const buildCanvasTextToImageGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addLinearUIOutputNode(state, graph);
addSaveImageNode(state, graph);
return graph;
};


@@ -9,7 +9,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addLoRAsToGraph } from './addLoRAsToGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -311,26 +311,22 @@ export const buildLinearImageToImageGraph = (
});
}
addCoreMetadataNode(
graph,
{
generation_mode: 'img2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
strength,
init_image: initialImage.imageName,
},
IMAGE_TO_LATENTS
);
addCoreMetadataNode(graph, {
generation_mode: 'img2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
strength,
init_image: initialImage.imageName,
});
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -362,7 +358,7 @@ export const buildLinearImageToImageGraph = (
addWatermarkerToGraph(state, graph);
}
addLinearUIOutputNode(state, graph);
addSaveImageNode(state, graph);
return graph;
};


@@ -10,7 +10,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -331,27 +331,23 @@ export const buildLinearSDXLImageToImageGraph = (
});
}
addCoreMetadataNode(
graph,
{
generation_mode: 'sdxl_img2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
strength,
init_image: initialImage.imageName,
positive_style_prompt: positiveStylePrompt,
negative_style_prompt: negativeStylePrompt,
},
IMAGE_TO_LATENTS
);
addCoreMetadataNode(graph, {
generation_mode: 'sdxl_img2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
strength,
init_image: initialImage.imageName,
positive_style_prompt: positiveStylePrompt,
negative_style_prompt: negativeStylePrompt,
});
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -392,7 +388,7 @@ export const buildLinearSDXLImageToImageGraph = (
addWatermarkerToGraph(state, graph);
}
addLinearUIOutputNode(state, graph);
addSaveImageNode(state, graph);
return graph;
};


@@ -6,7 +6,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -225,25 +225,21 @@ export const buildLinearSDXLTextToImageGraph = (
],
};
addCoreMetadataNode(
graph,
{
generation_mode: 'sdxl_txt2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
positive_style_prompt: positiveStylePrompt,
negative_style_prompt: negativeStylePrompt,
},
LATENTS_TO_IMAGE
);
addCoreMetadataNode(graph, {
generation_mode: 'sdxl_txt2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
positive_style_prompt: positiveStylePrompt,
negative_style_prompt: negativeStylePrompt,
});
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -284,7 +280,7 @@ export const buildLinearSDXLTextToImageGraph = (
addWatermarkerToGraph(state, graph);
}
addLinearUIOutputNode(state, graph);
addSaveImageNode(state, graph);
return graph;
};


@@ -10,7 +10,7 @@ import { addHrfToGraph } from './addHrfToGraph';
import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addLoRAsToGraph } from './addLoRAsToGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -234,24 +234,20 @@ export const buildLinearTextToImageGraph = (
],
};
addCoreMetadataNode(
graph,
{
generation_mode: 'txt2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
},
LATENTS_TO_IMAGE
);
addCoreMetadataNode(graph, {
generation_mode: 'txt2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
});
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -290,7 +286,7 @@ export const buildLinearTextToImageGraph = (
addWatermarkerToGraph(state, graph);
}
addLinearUIOutputNode(state, graph);
addSaveImageNode(state, graph);
return graph;
};


@@ -9,7 +9,7 @@ export const LATENTS_TO_IMAGE_HRF_LR = 'latents_to_image_hrf_lr';
export const IMAGE_TO_LATENTS_HRF = 'image_to_latents_hrf';
export const RESIZE_HRF = 'resize_hrf';
export const ESRGAN_HRF = 'esrgan_hrf';
export const LINEAR_UI_OUTPUT = 'linear_ui_output';
export const SAVE_IMAGE = 'save_image';
export const NSFW_CHECKER = 'nsfw_checker';
export const WATERMARKER = 'invisible_watermark';
export const NOISE = 'noise';
@@ -67,7 +67,7 @@ export const BATCH_PROMPT = 'batch_prompt';
export const BATCH_STYLE_PROMPT = 'batch_style_prompt';
export const METADATA_COLLECT = 'metadata_collect';
export const MERGE_METADATA = 'merge_metadata';
export const ESRGAN = 'esrgan';
export const REALESRGAN = 'esrgan';
export const DIVIDE = 'divide';
export const SCALE = 'scale_image';
export const SDXL_MODEL_LOADER = 'sdxl_model_loader';
@@ -82,32 +82,6 @@ export const SDXL_REFINER_INPAINT_CREATE_MASK = 'refiner_inpaint_create_mask';
export const SEAMLESS = 'seamless';
export const SDXL_REFINER_SEAMLESS = 'refiner_seamless';
// these image-outputting nodes are from the linear UI and we should not handle the gallery logic on them
// instead, we wait for LINEAR_UI_OUTPUT node, and handle it like any other image-outputting node
export const nodeIDDenyList = [
CANVAS_OUTPUT,
LATENTS_TO_IMAGE,
LATENTS_TO_IMAGE_HRF_HR,
NSFW_CHECKER,
WATERMARKER,
ESRGAN,
ESRGAN_HRF,
RESIZE_HRF,
LATENTS_TO_IMAGE_HRF_LR,
IMG2IMG_RESIZE,
INPAINT_IMAGE,
SCALED_INPAINT_IMAGE,
INPAINT_IMAGE_RESIZE_UP,
INPAINT_IMAGE_RESIZE_DOWN,
INPAINT_INFILL,
INPAINT_INFILL_RESIZE_DOWN,
INPAINT_FINAL_IMAGE,
INPAINT_CREATE_MASK,
INPAINT_MASK,
PASTE_IMAGE,
SCALE,
];
// friendly graph ids
export const TEXT_TO_IMAGE_GRAPH = 'text_to_image_graph';
export const IMAGE_TO_IMAGE_GRAPH = 'image_to_image_graph';


@@ -1,12 +1,11 @@
import { NonNullableGraph } from 'features/nodes/types/types';
import { CoreMetadataInvocation } from 'services/api/types';
import { JsonObject } from 'type-fest';
import { METADATA } from './constants';
import { METADATA, SAVE_IMAGE } from './constants';
export const addCoreMetadataNode = (
graph: NonNullableGraph,
metadata: Partial<CoreMetadataInvocation>,
nodeId: string
metadata: Partial<CoreMetadataInvocation>
): void => {
graph.nodes[METADATA] = {
id: METADATA,
@@ -20,7 +19,7 @@ export const addCoreMetadataNode = (
field: 'metadata',
},
destination: {
node_id: nodeId,
node_id: SAVE_IMAGE,
field: 'metadata',
},
});
@@ -65,21 +64,3 @@ export const getHasMetadata = (graph: NonNullableGraph): boolean => {
return Boolean(metadataNode);
};
export const setMetadataReceivingNode = (
graph: NonNullableGraph,
nodeId: string
) => {
graph.edges = graph.edges.filter((edge) => edge.source.node_id !== METADATA);
graph.edges.push({
source: {
node_id: METADATA,
field: 'metadata',
},
destination: {
node_id: nodeId,
field: 'metadata',
},
});
};


@@ -19,7 +19,7 @@ const RESERVED_INPUT_FIELD_NAMES = ['id', 'type', 'use_cache'];
const RESERVED_OUTPUT_FIELD_NAMES = ['type'];
const RESERVED_FIELD_TYPES = ['IsIntermediate'];
const invocationDenylist: AnyInvocationType[] = ['graph', 'linear_ui_output'];
const invocationDenylist: AnyInvocationType[] = ['graph'];
const isReservedInputField = (nodeType: string, fieldName: string) => {
if (RESERVED_INPUT_FIELD_NAMES.includes(fieldName)) {

File diff suppressed because one or more lines are too long


@@ -48,11 +48,9 @@ export type OffsetPaginatedResults_ImageDTO_ =
s['OffsetPaginatedResults_ImageDTO_'];
// Models
export type ModelType =
s['invokeai__backend__model_management__models__base__ModelType'];
export type ModelType = s['ModelType'];
export type SubModelType = s['SubModelType'];
export type BaseModelType =
s['invokeai__backend__model_management__models__base__BaseModelType'];
export type BaseModelType = s['BaseModelType'];
export type MainModelField = s['MainModelField'];
export type OnnxModelField = s['OnnxModelField'];
export type VAEModelField = s['VAEModelField'];
@@ -60,7 +58,7 @@ export type LoRAModelField = s['LoRAModelField'];
export type ControlNetModelField = s['ControlNetModelField'];
export type IPAdapterModelField = s['IPAdapterModelField'];
export type T2IAdapterModelField = s['T2IAdapterModelField'];
export type ModelsList = s['invokeai__app__api__routers__models__ModelsList'];
export type ModelsList = s['ModelsList'];
export type ControlField = s['ControlField'];
export type IPAdapterField = s['IPAdapterField'];
@@ -142,7 +140,7 @@ export type DivideInvocation = s['DivideInvocation'];
export type ImageNSFWBlurInvocation = s['ImageNSFWBlurInvocation'];
export type ImageWatermarkInvocation = s['ImageWatermarkInvocation'];
export type SeamlessModeInvocation = s['SeamlessModeInvocation'];
export type LinearUIOutputInvocation = s['LinearUIOutputInvocation'];
export type SaveImageInvocation = s['SaveImageInvocation'];
export type MetadataInvocation = s['MetadataInvocation'];
export type CoreMetadataInvocation = s['CoreMetadataInvocation'];
export type MetadataItemInvocation = s['MetadataItemInvocation'];


@@ -1 +1 @@
__version__ = "3.4.0"
__version__ = "3.4.0dev1"


@@ -139,19 +139,17 @@ nav:
- Features:
- Overview: 'features/index.md'
- New to InvokeAI?: 'help/gettingStartedWithAI.md'
- Concepts: 'features/CONCEPTS.md'
- Configuration: 'features/CONFIGURATION.md'
- Control Adapters: 'features/CONTROLNET.md'
- Image-to-Image: 'features/IMG2IMG.md'
- Controlling Logging: 'features/LOGGING.md'
- LoRAs & LCM-LoRAs: 'features/LORAS.md'
- Model Merging: 'features/MODEL_MERGING.md'
- Nodes & Workflows: 'nodes/overview.md'
- Using Nodes : 'nodes/overview.md'
- NSFW Checker: 'features/WATERMARK+NSFW.md'
- Postprocessing: 'features/POSTPROCESS.md'
- Prompting Features: 'features/PROMPTS.md'
- Textual Inversions:
- Textual Inversions: 'features/TEXTUAL_INVERSIONS.md'
- Textual Inversion Training: 'features/TRAINING.md'
- Textual Inversion Training: 'features/TRAINING.md'
- Unified Canvas: 'features/UNIFIED_CANVAS.md'
- InvokeAI Web Server: 'features/WEB.md'
- WebUI Hotkeys: "features/WEBUIHOTKEYS.md"
@@ -182,7 +180,6 @@ nav:
- Troubleshooting: 'help/deprecated/TROUBLESHOOT.md'
- Help:
- Getting Started: 'help/gettingStartedWithAI.md'
- FAQs: 'help/FAQ.md'
- Diffusion Overview: 'help/diffusion.md'
- Sampler Convergence: 'help/SAMPLER_CONVERGENCE.md'
- Other:


@@ -46,7 +46,7 @@ dependencies = [
"easing-functions",
"einops",
"facexlib",
"fastapi~=0.104.1",
"fastapi~=0.103.2",
"fastapi-events~=0.9.1",
"huggingface-hub~=0.16.4",
"imohash",
@@ -59,7 +59,7 @@ dependencies = [
"onnx",
"onnxruntime",
"opencv-python",
"pydantic~=2.5.0",
"pydantic~=2.4.2",
"pydantic-settings~=2.0.3",
"picklescan",
"pillow",
@@ -80,10 +80,10 @@ dependencies = [
"send2trash",
"test-tube~=0.7.5",
"torch==2.1.0",
"torchvision==0.16.0",
"torchvision~=0.16",
"torchmetrics~=0.11.0",
"torchsde~=0.2.5",
"transformers~=4.35.0",
"transformers==4.35.0",
"uvicorn[standard]~=0.21.1",
"windows-curses; sys_platform=='win32'",
]