Compare commits


2 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Ryan Dick | 3f2dedceec | Remove unused check_internet() function. | 2024-04-25 17:46:39 -04:00 |
| Ryan Dick | 0bdf7f5726 | Do less stuff on import of api_app.py. Instead, call functions imperatively. | 2024-04-25 17:45:54 -04:00 |
419 changed files with 11723 additions and 20992 deletions

View File

@@ -12,7 +12,7 @@
Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry leading web-based UI, and serves as the foundation for multiple commercial products.
[Installation and Updates][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs]
[Installation][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs]
<div align="center">

View File

@@ -117,13 +117,13 @@ Stateless fields do not store their value in the node, so their field instances
"Custom" fields will always be treated as stateless fields.
##### Single and Collection Fields
##### Collection and Scalar Fields
Field types have a name and cardinality property which may identify it as a **SINGLE**, **COLLECTION** or **SINGLE_OR_COLLECTION** field.
Field types have a name and two flags which may identify it as a **collection** or **collection or scalar** field.
- If a field is annotated in python as a singular value or class, its field type is parsed as a **SINGLE** type (e.g. `int`, `ImageField`, `str`).
- If a field is annotated in python as a list, its field type is parsed as a **COLLECTION** type (e.g. `list[int]`).
- If it is annotated as a union of a type and list, the type will be parsed as a **SINGLE_OR_COLLECTION** type (e.g. `Union[int, list[int]]`). Fields may not be unions of different types (e.g. `Union[int, list[str]]` and `Union[int, str]` are not allowed).
If a field is annotated in python as a list, its field type is parsed and flagged as a **collection** type (e.g. `list[int]`).
If it is annotated as a union of a type and list, the type will be flagged as a **collection or scalar** type (e.g. `Union[int, list[int]]`). Fields may not be unions of different types (e.g. `Union[int, list[str]]` and `Union[int, str]` are not allowed).
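As a rough illustration of the three cases above, the corresponding Python annotations might look like the following. This sketch uses a plain pydantic model rather than InvokeAI's actual invocation classes, so the class and field names here are made up:
```python
from typing import Union

from pydantic import BaseModel, Field


class ExampleNode(BaseModel):
    # Singular value or class -> SINGLE
    scale: int = Field(default=1)
    # List of a single type -> COLLECTION
    values: list[int] = Field(default_factory=list)
    # Union of a type and a list of that type -> SINGLE_OR_COLLECTION
    weight: Union[float, list[float]] = Field(default=1.0)
    # Not allowed: unions of different types, e.g. Union[int, str] or Union[int, list[str]]
```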
## Implementation
@@ -173,7 +173,8 @@ Field types are represented as structured objects:
```ts
type FieldType = {
name: string;
cardinality: 'SINGLE' | 'COLLECTION' | 'SINGLE_OR_COLLECTION';
isCollection: boolean;
isCollectionOrScalar: boolean;
};
```
@@ -185,7 +186,7 @@ There are 4 general cases for field type parsing.
When a field is annotated as a primitive value (e.g. `int`, `str`, `float`), field type parsing is fairly straightforward. The field is represented by a simple OpenAPI **schema object**, which has a `type` property.
We create a field type name from this `type` string (e.g. `string` -> `StringField`). The cardinality is `"SINGLE"`.
We create a field type name from this `type` string (e.g. `string` -> `StringField`).
##### Complex Types
@@ -199,13 +200,13 @@ We need to **dereference** the schema to pull these out. Dereferencing may requi
When a field is annotated as a list of a single type, the schema object has an `items` property. This may be a schema object or a reference object and must be parsed to determine the item type.
We use the item type for field type name. The cardinality is `"COLLECTION"`.
We use the item type for field type name, adding `isCollection: true` to the field type.
##### Single or Collection Types
##### Collection or Scalar Types
When a field is annotated as a union of a type and list of that type, the schema object has an `anyOf` property, which holds a list of valid types for the union.
After verifying that the union has two members (a type and list of the same type), we use the type for field type name, with cardinality `"SINGLE_OR_COLLECTION"`.
After verifying that the union has two members (a type and list of the same type), we use the type for field type name, adding `isCollectionOrScalar: true` to the field type.
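A sketch of the union check described above, written in Python for illustration only (the actual parser lives in the TypeScript frontend, and the schema shapes here are simplified):
```python
from typing import Optional

# Illustrative sketch only; the real parser is TypeScript in the UI code.
# Given the `anyOf` list from an OpenAPI schema object, verify it is exactly
# "a type plus a list of that same type" and return the scalar member.
def parse_single_or_collection(any_of: list[dict]) -> Optional[dict]:
    if len(any_of) != 2:
        return None  # unions must have exactly two members
    arrays = [s for s in any_of if s.get("type") == "array"]
    scalars = [s for s in any_of if s.get("type") != "array"]
    if len(arrays) != 1 or len(scalars) != 1:
        return None  # must be one scalar schema and one list schema
    if arrays[0].get("items") != scalars[0]:
        return None  # the list's item type must match the scalar type
    return scalars[0]  # use this member's type for the field type name
```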
##### Optional Fields

View File

@@ -51,11 +51,13 @@ The settings in this file will override the defaults. You only need
to change this file if the default for a particular setting doesn't
work for you.
You'll find an example file next to `invokeai.yaml` that shows the default values.
Some settings, like [Model Marketplace API Keys], require the YAML
to be formatted correctly. Here is a [basic guide to YAML files].
You can fix a broken `invokeai.yaml` by deleting it and running the
configuration script again -- option [6] in the launcher, "Re-run the
configure script".
#### Custom Config File Location
You can use any config file with the `--config` CLI arg. Pass in the path to the `invokeai.yaml` file you want to use.

View File

@@ -4,6 +4,278 @@ title: Training
# :material-file-document: Training
Invoke Training has moved to its own repository, with a dedicated UI for accessing common scripts like Textual Inversion and LoRA training.
# Textual Inversion Training
## **Personalizing Text-to-Image Generation**
You can learn more by visiting the repo at https://github.com/invoke-ai/invoke-training
You may personalize the generated images to provide your own styles or objects
by training a new LDM checkpoint and introducing a new vocabulary to the fixed
model as a (.pt) embeddings file. Alternatively, you may use or train
HuggingFace Concepts embeddings files (.bin) from
<https://huggingface.co/sd-concepts-library> and its associated
notebooks.
## **Hardware and Software Requirements**
You will need a GPU to perform training in a reasonable length of
time, and at least 12 GB of VRAM. We recommend using the [`xformers`
library](../installation/070_INSTALL_XFORMERS.md) to accelerate the
training process further. During training, about 8 GB is temporarily
needed in order to store intermediate models, checkpoints and logs.
## **Preparing for Training**
To train, prepare a folder that contains 3-5 images that illustrate
the object or concept. It is good to provide a variety of examples or
poses to avoid overtraining the system. Format these images as PNG
(preferred) or JPG. You do not need to resize or crop the images in
advance, but for more control you may wish to do so.
Place the training images in a directory on the machine InvokeAI runs
on. We recommend placing them in a subdirectory of the
`text-inversion-training-data` folder located in the InvokeAI root
directory, ordinarily `~/invokeai` (Linux/Mac), or
`C:\Users\your_name\invokeai` (Windows). For example, to create an
embedding for the "psychedelic" style, you'd place the training images
into the directory
`~/invokeai/text-inversion-training-data/psychedelic`.
## **Launching Training Using the Console Front End**
InvokeAI 2.3 and higher comes with a text console-based training front
end. From within the `invoke.sh`/`invoke.bat` Invoke launcher script,
start the training tool by selecting choice (3):
```sh
1 "Generate images with a browser-based interface"
2 "Explore InvokeAI nodes using a command-line interface"
3 "Textual inversion training"
4 "Merge models (diffusers type only)"
5 "Download and install models"
6 "Change InvokeAI startup options"
7 "Re-run the configure script to fix a broken install or to complete a major upgrade"
8 "Open the developer console"
9 "Update InvokeAI"
```
Alternatively, you can select option (8) to open the developer console, or, from the command line with the InvokeAI virtual environment active, launch the front end with the command `invokeai-ti --gui`.
This will launch a text-based front end that will look like this:
<figure markdown>
![ti-frontend](../assets/textual-inversion/ti-frontend.png)
</figure>
The interface is keyboard-based. Use control-N (^N) to move to the
next field and control-P (^P) to move to the previous one. <Tab> and
<shift-TAB> work as well. Once a field is active, use the cursor keys.
In a checkbox group, use the up and down cursor keys to move from
choice to choice, and <space> to select a choice. In a scrollbar, use
the left and right cursor keys to increase and decrease the value. In
text fields, type the desired values.
The number of parameters may look intimidating, but in most cases the
predefined defaults work fine. The red circled fields in the above
illustration are the ones you will adjust most frequently.
### Model Name
This will list all the diffusers models that are currently
installed. Select the one you wish to use as the basis for your
embedding. Be aware that if you use a SD-1.X-based model for your
training, you will only be able to use this embedding with other
SD-1.X-based models. Similarly, if you train on SD-2.X, you will only
be able to use the embeddings with models based on SD-2.X.
### Trigger Term
This is the prompt term you will use to trigger the embedding. Type a
single word or phrase you wish to use as the trigger, for example
"psychedelic" (without angle brackets). Within InvokeAI, you will then
be able to activate the trigger using the syntax `<psychedelic>`.
### Initializer
This is a single character that is used internally during the training
process as a placeholder for the trigger term. It defaults to "*" and
can usually be left alone.
### Resume from last saved checkpoint
As training proceeds, textual inversion will write a series of
intermediate files that can be used to resume training from where it
was left off in the case of an interruption. This checkbox will be
automatically selected if you provide a previously used trigger term
and at least one checkpoint file is found on disk.
Note that as of 20 January 2023, resume does not seem to be working
properly due to an issue with the upstream code.
### Data Training Directory
This is the location of the images to be used for training. When you
select a trigger term like "my-trigger", the frontend will prepopulate
this field with `~/invokeai/text-inversion-training-data/my-trigger`,
but you can change the path to wherever you want.
### Output Destination Directory
This is the location of the logs, checkpoint files, and embedding
files created during training. When you select a trigger term like
"my-trigger", the frontend will prepopulate this field with
`~/invokeai/text-inversion-output/my-trigger`, but you can change the
path to wherever you want.
### Image resolution
The images in the training directory will be automatically scaled to
the value you use here. For best results, you will want to use the
same default resolution as the underlying model (512 pixels for
SD-1.5, 768 for the larger version of SD-2.1).
### Center crop images
If this is selected, your images will be center cropped to make them
square before resizing them to the desired resolution. Center cropping
can indiscriminately cut off the top of subjects' heads for portrait
aspect images, so if you have images like this, you may wish to use a
photoeditor to manually crop them to a square aspect ratio.
### Mixed precision
Select the floating point precision for the embedding. "no" will
result in a full 32-bit precision, "fp16" will provide 16-bit
precision, and "bf16" will provide mixed precision (only available
when XFormers is used).
### Max training steps
How many steps the training will take before the model converges. Most
training sets will converge with 2000-3000 steps.
### Batch size
This adjusts how many training images are processed simultaneously in
each step. Higher values will cause the training process to run more
quickly, but use more memory. The default size will run on GPUs with
as little as 12 GB of VRAM.
### Learning rate
The rate at which the system adjusts its internal weights during
training. Higher values risk overtraining (getting the same image each
time), and lower values will take more steps to train a good
model. The default of 0.0005 is conservative; you may wish to increase
it to 0.005 to speed up training.
### Scale learning rate by number of GPUs, steps and batch size
If this is selected (the default), the system will adjust the provided
learning rate to improve performance.
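For reference, the scaling applied when this option is enabled is roughly the following, assuming the behaviour of the standard diffusers textual inversion script (a sketch; the exact formula may differ between versions):
```python
# Sketch of learning-rate scaling when "scale learning rate" is enabled,
# assuming the standard diffusers textual inversion behaviour.
# Values are taken from the command-line example later in this guide;
# num_gpus = 1 is an assumption for a single-GPU machine.
learning_rate = 0.0005
gradient_accumulation_steps = 4
train_batch_size = 8
num_gpus = 1
effective_lr = learning_rate * gradient_accumulation_steps * train_batch_size * num_gpus  # 0.016
```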
### Use xformers acceleration
This will activate XFormers memory-efficient attention. You need to
have XFormers installed for this to have an effect.
### Learning rate scheduler
This adjusts how the learning rate changes over the course of
training. The default "constant" means to use a constant learning rate
for the entire training session. The other values scale the learning
rate according to various formulas.
Only "constant" is supported by the XFormers library.
### Gradient accumulation steps
This is a parameter that allows you to use bigger batch sizes than
your GPU's VRAM would ordinarily accommodate, at the cost of some
performance.
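Concretely, using the values from the command-line example later in this guide:
```python
# Effective batch size with gradient accumulation
# (values from the example invokeai-ti command below)
train_batch_size = 8
gradient_accumulation_steps = 4
effective_batch_size = train_batch_size * gradient_accumulation_steps  # 32 images per optimizer step
```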
### Warmup steps
If "constant_with_warmup" is selected in the learning rate scheduler,
then this provides the number of warmup steps. Warmup steps have a
very low learning rate, and are one way of preventing early
overtraining.
## The training run
Start the training run by advancing to the OK button (bottom right)
and pressing <enter>. A series of progress messages will be displayed
as the training process proceeds. This may take an hour or two,
depending on settings and the speed of your system. Various log and
checkpoint files will be written into the output directory (ordinarily
`~/invokeai/text-inversion-output/my-model/`).
At the end of successful training, the system will copy the file
`learned_embeds.bin` into the InvokeAI root directory's `embeddings`
directory, using a subdirectory named after the trigger token. For
example, if the trigger token was `psychedelic`, then look for the
embeddings file in
`~/invokeai/embeddings/psychedelic/learned_embeds.bin`.
You may now launch InvokeAI and try out a prompt that uses the trigger
term. For example `a plate of banana sushi in <psychedelic> style`.
## **Training with the Command-Line Script**
Training can also be done using a traditional command-line script. It
can be launched from within the "developer's console", or from the
command line after activating InvokeAI's virtual environment.
It accepts a large number of arguments, which can be summarized by
passing the `--help` argument:
```sh
invokeai-ti --help
```
Typical usage is shown here:
```sh
invokeai-ti \
--model=stable-diffusion-1.5 \
--resolution=512 \
--learnable_property=style \
--initializer_token='*' \
--placeholder_token='<psychedelic>' \
--train_data_dir=/home/lstein/invokeai/training-data/psychedelic \
--output_dir=/home/lstein/invokeai/text-inversion-training/psychedelic \
--scale_lr \
--train_batch_size=8 \
--gradient_accumulation_steps=4 \
--max_train_steps=3000 \
--learning_rate=0.0005 \
--resume_from_checkpoint=latest \
--lr_scheduler=constant \
--mixed_precision=fp16 \
--only_save_embeds
```
## Troubleshooting
### `Cannot load embedding for <trigger>. It was trained on a model with token dimension 1024, but the current model has token dimension 768`
Messages like this indicate you trained the embedding on a different base model than the currently selected one.
For example, in the error above, the training was done on SD2.1 (768x768) but it was used on SD1.5 (512x512).
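If you are unsure which base model an embedding was trained on, you can inspect its token dimension directly. This sketch assumes the standard diffusers `learned_embeds.bin` format, a dict mapping the trigger token to its embedding tensor:
```python
import torch

# Inspect a trained embedding's token dimension to see which base model it matches.
# Assumes the standard diffusers learned_embeds.bin format: {token: tensor}.
embeds = torch.load("learned_embeds.bin", map_location="cpu")
for token, tensor in embeds.items():
    dim = tensor.shape[-1]
    print(f"{token}: token dimension {dim}")  # 768 -> SD-1.x, 1024 -> SD-2.x
```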
## Reading
For more information on textual inversion, please see the following
resources:
* The [textual inversion repository](https://github.com/rinongal/textual_inversion) and
associated paper for details and limitations.
* [HuggingFace's textual inversion training
page](https://huggingface.co/docs/diffusers/training/text_inversion)
* [HuggingFace example script
documentation](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion)
(Note that this script is similar to, but not identical to,
`textual_inversion`, and produces embed files that are completely compatible.)
---
copyright (c) 2023, Lincoln Stein and the InvokeAI Development Team

View File

@@ -1,10 +1,8 @@
# Automatic Install & Updates
# Automatic Install
**The same packaged installer file can be used for both new installs and updates.**
Using the installer for updates will leave everything you've added since installation in place, and just update the core libraries used to run Invoke.
Simply use the same path you installed to originally.
The installer is used for both new installs and updates.
Both release and pre-release versions can be installed using the installer. It also supports installation through a wheel if needed.
Both release and pre-release versions can be installed using it. It also supports installing from a wheel if needed.
Be sure to review the [installation requirements] and ensure your system has everything it needs to install Invoke.
@@ -98,7 +96,7 @@ Updating is exactly the same as installing - download the latest installer, choo
If you have installation issues, please review the [FAQ]. You can also [create an issue] or ask for help on [discord].
[installation requirements]: INSTALL_REQUIREMENTS.md
[installation requirements]: INSTALLATION.md#installation-requirements
[FAQ]: ../help/FAQ.md
[install some models]: 050_INSTALLING_MODELS.md
[configuration docs]: ../features/CONFIGURATION.md

View File

@@ -10,7 +10,7 @@ InvokeAI is distributed as a python package on PyPI, installable with `pip`. The
### Requirements
Before you start, go through the [installation requirements](./INSTALL_REQUIREMENTS.md).
Before you start, go through the [installation requirements].
### Installation Walkthrough
@@ -79,7 +79,7 @@ Before you start, go through the [installation requirements](./INSTALL_REQUIREME
1. Install the InvokeAI Package. The base command is `pip install InvokeAI --use-pep517`, but you may need to change this depending on your system and the desired features.
- You may need to provide an [extra index URL](https://pip.pypa.io/en/stable/cli/pip_install/#cmdoption-extra-index-url). Select your platform configuration using [this tool on the PyTorch website](https://pytorch.org/get-started/locally/). Copy the `--extra-index-url` string from this and append it to your install command.
- You may need to provide an [extra index URL]. Select your platform configuration using [this tool on the PyTorch website]. Copy the `--extra-index-url` string from this and append it to your install command.
!!! example "Install with an extra index URL"
@@ -116,4 +116,4 @@ Before you start, go through the [installation requirements](./INSTALL_REQUIREME
!!! warning
If the virtual environment is _not_ inside the root directory, then you _must_ specify the path to the root directory with `--root \path\to\invokeai` or the `INVOKEAI_ROOT` environment variable.
If the virtual environment is _not_ inside the root directory, then you _must_ specify the path to the root directory with `--root_dir \path\to\invokeai` or the `INVOKEAI_ROOT` environment variable.

View File

@@ -1,4 +1,4 @@
# Installation and Updating Overview
# Installation Overview
Before installing, review the [installation requirements] to ensure your system is set up properly.
@@ -6,21 +6,14 @@ See the [FAQ] for frequently-encountered installation issues.
If you need more help, join our [discord] or [create an issue].
<h2>Automatic Install & Updates </h2>
<h2>Automatic Install</h2>
✅ The automatic install is the best way to run InvokeAI. Check out the [installation guide] to get started.
⬆️ The same installer is also the best way to update InvokeAI - Simply rerun it for the same folder you installed to.
The installation process simply manages installation of the core libraries & application dependencies that run Invoke.
Any models, images, or other assets in the Invoke root folder won't be affected by the installation process.
<h2>Manual Install</h2>
If you are familiar with python and want more control over the packages that are installed, you can [install InvokeAI manually via PyPI].
Updates are managed by reinstalling the latest version through PyPI.
<h2>Developer Install</h2>
If you want to contribute to InvokeAI, consult the [developer install guide].

View File

@@ -37,13 +37,13 @@ Invoke runs best with a dedicated GPU, but will fall back to running on CPU, alb
=== "Nvidia"
```
Any GPU with at least 8GB VRAM.
Any GPU with at least 8GB VRAM. Linux only.
```
=== "AMD"
```
Any GPU with at least 16GB VRAM. Linux only.
Any GPU with at least 16GB VRAM.
```
=== "Mac"

View File

@@ -35,23 +35,6 @@ from ..services.urls.urls_default import LocalUrlService
from ..services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
from .events import FastAPIEventService
# TODO: is there a better way to achieve this?
def check_internet() -> bool:
"""
Return true if the internet is reachable.
It does this by pinging huggingface.co.
"""
import urllib.request
host = "http://huggingface.co"
try:
urllib.request.urlopen(host, timeout=1)
return True
except Exception:
return False
logger = InvokeAILogger.get_logger()

View File

@@ -13,6 +13,7 @@ from pydantic import BaseModel, Field
from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch
from invokeai.backend.image_util.safety_checker import SafetyChecker
from invokeai.backend.util.logging import logging
from invokeai.version import __version__
@@ -108,7 +109,9 @@ async def get_config() -> AppConfig:
upscaling_models.append(str(Path(model).stem))
upscaler = Upscaler(upscaling_method="esrgan", upscaling_models=upscaling_models)
nsfw_methods = ["nsfw_checker"]
nsfw_methods = []
if SafetyChecker.safety_checker_available():
nsfw_methods.append("nsfw_checker")
watermarking_methods = ["invisible_watermark"]

View File

@@ -6,12 +6,13 @@ from fastapi import BackgroundTasks, Body, HTTPException, Path, Query, Request,
from fastapi.responses import FileResponse
from fastapi.routing import APIRouter
from PIL import Image
from pydantic import BaseModel, Field, JsonValue
from pydantic import BaseModel, Field, ValidationError
from invokeai.app.invocations.fields import MetadataField
from invokeai.app.invocations.fields import MetadataField, MetadataFieldValidator
from invokeai.app.services.image_records.image_records_common import ImageCategory, ImageRecordChanges, ResourceOrigin
from invokeai.app.services.images.images_common import ImageDTO, ImageUrlsDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.workflow_records.workflow_records_common import WorkflowWithoutID, WorkflowWithoutIDValidator
from ..dependencies import ApiDependencies
@@ -41,17 +42,13 @@ async def upload_image(
board_id: Optional[str] = Query(default=None, description="The board to add this image to, if any"),
session_id: Optional[str] = Query(default=None, description="The session ID associated with this upload, if any"),
crop_visible: Optional[bool] = Query(default=False, description="Whether to crop the image"),
metadata: Optional[JsonValue] = Body(
default=None, description="The metadata to associate with the image", embed=True
),
) -> ImageDTO:
"""Uploads an image"""
if not file.content_type or not file.content_type.startswith("image"):
raise HTTPException(status_code=415, detail="Not an image")
_metadata = None
_workflow = None
_graph = None
metadata = None
workflow = None
contents = await file.read()
try:
@@ -65,28 +62,22 @@ async def upload_image(
# TODO: retain non-invokeai metadata on upload?
# attempt to parse metadata from image
metadata_raw = metadata if isinstance(metadata, str) else pil_image.info.get("invokeai_metadata", None)
if isinstance(metadata_raw, str):
_metadata = metadata_raw
else:
ApiDependencies.invoker.services.logger.warn("Failed to parse metadata for uploaded image")
pass
metadata_raw = pil_image.info.get("invokeai_metadata", None)
if metadata_raw:
try:
metadata = MetadataFieldValidator.validate_json(metadata_raw)
except ValidationError:
ApiDependencies.invoker.services.logger.warn("Failed to parse metadata for uploaded image")
pass
# attempt to parse workflow from image
workflow_raw = pil_image.info.get("invokeai_workflow", None)
if isinstance(workflow_raw, str):
_workflow = workflow_raw
else:
ApiDependencies.invoker.services.logger.warn("Failed to parse workflow for uploaded image")
pass
# attempt to extract graph from image
graph_raw = pil_image.info.get("invokeai_graph", None)
if isinstance(graph_raw, str):
_graph = graph_raw
else:
ApiDependencies.invoker.services.logger.warn("Failed to parse graph for uploaded image")
pass
if workflow_raw is not None:
try:
workflow = WorkflowWithoutIDValidator.validate_json(workflow_raw)
except ValidationError:
ApiDependencies.invoker.services.logger.warn("Failed to parse metadata for uploaded image")
pass
try:
image_dto = ApiDependencies.invoker.services.images.create(
@@ -95,9 +86,8 @@ async def upload_image(
image_category=image_category,
session_id=session_id,
board_id=board_id,
metadata=_metadata,
workflow=_workflow,
graph=_graph,
metadata=metadata,
workflow=workflow,
is_intermediate=is_intermediate,
)
@@ -195,21 +185,14 @@ async def get_image_metadata(
raise HTTPException(status_code=404)
class WorkflowAndGraphResponse(BaseModel):
workflow: Optional[str] = Field(description="The workflow used to generate the image, as stringified JSON")
graph: Optional[str] = Field(description="The graph used to generate the image, as stringified JSON")
@images_router.get(
"/i/{image_name}/workflow", operation_id="get_image_workflow", response_model=WorkflowAndGraphResponse
"/i/{image_name}/workflow", operation_id="get_image_workflow", response_model=Optional[WorkflowWithoutID]
)
async def get_image_workflow(
image_name: str = Path(description="The name of image whose workflow to get"),
) -> WorkflowAndGraphResponse:
) -> Optional[WorkflowWithoutID]:
try:
workflow = ApiDependencies.invoker.services.images.get_workflow(image_name)
graph = ApiDependencies.invoker.services.images.get_graph(image_name)
return WorkflowAndGraphResponse(workflow=workflow, graph=graph)
return ApiDependencies.invoker.services.images.get_workflow(image_name)
except Exception:
raise HTTPException(status_code=404)

View File

@@ -6,7 +6,7 @@ import pathlib
import shutil
import traceback
from copy import deepcopy
from typing import Any, Dict, List, Optional, Type
from typing import Any, Dict, List, Optional
from fastapi import Body, Path, Query, Response, UploadFile
from fastapi.responses import FileResponse
@@ -16,7 +16,6 @@ from pydantic import AnyHttpUrl, BaseModel, ConfigDict, Field
from starlette.exceptions import HTTPException
from typing_extensions import Annotated
from invokeai.app.services.model_images.model_images_common import ModelImageFileNotFoundException
from invokeai.app.services.model_install import ModelInstallJob
from invokeai.app.services.model_records import (
DuplicateModelException,
@@ -53,13 +52,6 @@ class ModelsList(BaseModel):
model_config = ConfigDict(use_enum_values=True)
def add_cover_image_to_model_config(config: AnyModelConfig, dependencies: Type[ApiDependencies]) -> AnyModelConfig:
"""Add a cover image URL to a model configuration."""
cover_image = dependencies.invoker.services.model_images.get_url(config.key)
config.cover_image = cover_image
return config
##############################################################################
# These are example inputs and outputs that are used in places where Swagger
# is unable to generate a correct example.
@@ -126,7 +118,8 @@ async def list_model_records(
record_store.search_by_attr(model_type=model_type, model_name=model_name, model_format=model_format)
)
for model in found_models:
model = add_cover_image_to_model_config(model, ApiDependencies)
cover_image = ApiDependencies.invoker.services.model_images.get_url(model.key)
model.cover_image = cover_image
return ModelsList(models=found_models)
@@ -167,9 +160,12 @@ async def get_model_record(
key: str = Path(description="Key of the model record to fetch."),
) -> AnyModelConfig:
"""Get a model record"""
record_store = ApiDependencies.invoker.services.model_manager.store
try:
config = ApiDependencies.invoker.services.model_manager.store.get_model(key)
return add_cover_image_to_model_config(config, ApiDependencies)
config: AnyModelConfig = record_store.get_model(key)
cover_image = ApiDependencies.invoker.services.model_images.get_url(key)
config.cover_image = cover_image
return config
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
@@ -298,15 +294,14 @@ async def update_model_record(
installer = ApiDependencies.invoker.services.model_manager.install
try:
record_store.update_model(key, changes=changes)
config = installer.sync_model_path(key)
config = add_cover_image_to_model_config(config, ApiDependencies)
model_response: AnyModelConfig = installer.sync_model_path(key)
logger.info(f"Updated model: {key}")
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
return config
return model_response
@model_manager_router.get(
@@ -653,14 +648,6 @@ async def convert_model(
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
# Update the model image if the model had one
try:
model_image = ApiDependencies.invoker.services.model_images.get(key)
ApiDependencies.invoker.services.model_images.save(model_image, new_key)
ApiDependencies.invoker.services.model_images.delete(key)
except ModelImageFileNotFoundException:
pass
# delete the original safetensors file
installer.delete(key)
@@ -668,8 +655,7 @@ async def convert_model(
shutil.rmtree(cache_path)
# return the config record for the new diffusers directory
new_config = store.get_model(new_key)
new_config = add_cover_image_to_model_config(new_config, ApiDependencies)
new_config: AnyModelConfig = store.get_model(new_key)
return new_config

View File

@@ -2,6 +2,7 @@ import asyncio
import logging
import mimetypes
import socket
import time
from contextlib import asynccontextmanager
from inspect import signature
from pathlib import Path
@@ -20,13 +21,10 @@ from fastapi_events.middleware import EventHandlerASGIMiddleware
from pydantic.json_schema import models_json_schema
from torch.backends.mps import is_available as is_mps_available
# for PyCharm:
# noinspection PyUnresolvedReferences
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
import invokeai.frontend.web as web_dir
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.config.config_default import get_config
from invokeai.app.services.config.config_default import InvokeAIAppConfig, get_config
from invokeai.app.services.session_processor.session_processor_common import ProgressImage
from invokeai.backend.util.devices import TorchDevice
@@ -44,196 +42,189 @@ from .api.routers import (
workflows,
)
from .api.sockets import SocketIO
from .invocations.baseinvocation import (
BaseInvocation,
UIConfigBase,
)
from .invocations.baseinvocation import BaseInvocation, UIConfigBase
from .invocations.fields import InputFieldJSONSchemaExtra, OutputFieldJSONSchemaExtra
app_config = get_config()
# TODO(ryand): Search for imports from api_app.py in the rest of the codebase and make sure I didn't break any of them.
def build_app(app_config: InvokeAIAppConfig, logger: logging.Logger) -> FastAPI:
@asynccontextmanager
async def lifespan(app: FastAPI):
# Add startup event to load dependencies
ApiDependencies.initialize(config=app_config, event_handler_id=event_handler_id, logger=logger)
yield
# Shut down threads
ApiDependencies.shutdown()
app = FastAPI(
title="Invoke - Community Edition",
docs_url=None,
redoc_url=None,
separate_input_output_schemas=False,
lifespan=lifespan,
)
# Add event handler
event_handler_id: int = id(app)
app.add_middleware(
EventHandlerASGIMiddleware,
handlers=[local_handler], # TODO: consider doing this in services to support different configurations
middleware_id=event_handler_id,
)
socket_io = SocketIO(app)
app.add_middleware(
CORSMiddleware,
allow_origins=app_config.allow_origins,
allow_credentials=app_config.allow_credentials,
allow_methods=app_config.allow_methods,
allow_headers=app_config.allow_headers,
)
app.add_middleware(GZipMiddleware, minimum_size=1000)
# Include all routers
app.include_router(utilities.utilities_router, prefix="/api")
app.include_router(model_manager.model_manager_router, prefix="/api")
app.include_router(download_queue.download_queue_router, prefix="/api")
app.include_router(images.images_router, prefix="/api")
app.include_router(boards.boards_router, prefix="/api")
app.include_router(board_images.board_images_router, prefix="/api")
app.include_router(app_info.app_router, prefix="/api")
app.include_router(session_queue.session_queue_router, prefix="/api")
app.include_router(workflows.workflows_router, prefix="/api")
add_custom_openapi(app)
@app.get("/docs", include_in_schema=False)
def overridden_swagger() -> HTMLResponse:
return get_swagger_ui_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=f"{app.title} - Swagger UI",
swagger_favicon_url="static/docs/invoke-favicon-docs.svg",
)
@app.get("/redoc", include_in_schema=False)
def overridden_redoc() -> HTMLResponse:
return get_redoc_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=f"{app.title} - Redoc",
redoc_favicon_url="static/docs/invoke-favicon-docs.svg",
)
web_root_path = Path(list(web_dir.__path__)[0])
try:
app.mount("/", NoCacheStaticFiles(directory=Path(web_root_path, "dist"), html=True), name="ui")
except RuntimeError:
logger.warn(f"No UI found at {web_root_path}/dist, skipping UI mount")
app.mount(
"/static", NoCacheStaticFiles(directory=Path(web_root_path, "static/")), name="static"
) # docs favicon is in here
return app
if is_mps_available():
import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)
def apply_monkeypatches() -> None:
# TODO(ryand): Don't monkeypatch on import!
if is_mps_available():
import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
logger = InvokeAILogger.get_logger(config=app_config)
# fix for windows mimetypes registry entries being borked
# see https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
mimetypes.add_type("application/javascript", ".js")
mimetypes.add_type("text/css", ".css")
torch_device_name = TorchDevice.get_torch_device_name()
logger.info(f"Using torch device: {torch_device_name}")
def fix_mimetypes():
# fix for windows mimetypes registry entries being borked
# see https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
mimetypes.add_type("application/javascript", ".js")
mimetypes.add_type("text/css", ".css")
@asynccontextmanager
async def lifespan(app: FastAPI):
# Add startup event to load dependencies
ApiDependencies.initialize(config=app_config, event_handler_id=event_handler_id, logger=logger)
yield
# Shut down threads
ApiDependencies.shutdown()
def add_custom_openapi(app: FastAPI) -> None:
"""Add a custom .openapi() method to the FastAPI app.
This is done based on the guidance here:
https://fastapi.tiangolo.com/how-to/extending-openapi/#normal-fastapi
"""
# Create the app
# TODO: create this all in a method so configuration/etc. can be passed in?
app = FastAPI(
title="Invoke - Community Edition",
docs_url=None,
redoc_url=None,
separate_input_output_schemas=False,
lifespan=lifespan,
)
# Build a custom OpenAPI to include all outputs
# TODO: can outputs be included on metadata of invocation schemas somehow?
def custom_openapi() -> dict[str, Any]:
if app.openapi_schema:
return app.openapi_schema
openapi_schema = get_openapi(
title=app.title,
description="An API for invoking AI image operations",
version="1.0.0",
routes=app.routes,
separate_input_output_schemas=False, # https://fastapi.tiangolo.com/how-to/separate-openapi-schemas/
)
# Add event handler
event_handler_id: int = id(app)
app.add_middleware(
EventHandlerASGIMiddleware,
handlers=[local_handler], # TODO: consider doing this in services to support different configurations
middleware_id=event_handler_id,
)
# Add all outputs
all_invocations = BaseInvocation.get_invocations()
output_types = set()
output_type_titles = {}
for invoker in all_invocations:
output_type = signature(invoker.invoke).return_annotation
output_types.add(output_type)
socket_io = SocketIO(app)
output_schemas = models_json_schema(
models=[(o, "serialization") for o in output_types], ref_template="#/components/schemas/{model}"
)
for schema_key, output_schema in output_schemas[1]["$defs"].items():
# TODO: note that we assume the schema_key here is the TYPE.__name__
# This could break in some cases, figure out a better way to do it
output_type_titles[schema_key] = output_schema["title"]
openapi_schema["components"]["schemas"][schema_key] = output_schema
openapi_schema["components"]["schemas"][schema_key]["class"] = "output"
app.add_middleware(
CORSMiddleware,
allow_origins=app_config.allow_origins,
allow_credentials=app_config.allow_credentials,
allow_methods=app_config.allow_methods,
allow_headers=app_config.allow_headers,
)
# Some models don't end up in the schemas as standalone definitions
additional_schemas = models_json_schema(
[
(UIConfigBase, "serialization"),
(InputFieldJSONSchemaExtra, "serialization"),
(OutputFieldJSONSchemaExtra, "serialization"),
(ModelIdentifierField, "serialization"),
(ProgressImage, "serialization"),
],
ref_template="#/components/schemas/{model}",
)
for schema_key, schema_json in additional_schemas[1]["$defs"].items():
openapi_schema["components"]["schemas"][schema_key] = schema_json
app.add_middleware(GZipMiddleware, minimum_size=1000)
# Add a reference to the output type to additionalProperties of the invoker schema
for invoker in all_invocations:
invoker_name = invoker.__name__ # type: ignore [attr-defined] # this is a valid attribute
output_type = signature(obj=invoker.invoke).return_annotation
output_type_title = output_type_titles[output_type.__name__]
invoker_schema = openapi_schema["components"]["schemas"][f"{invoker_name}"]
outputs_ref = {"$ref": f"#/components/schemas/{output_type_title}"}
invoker_schema["output"] = outputs_ref
invoker_schema["class"] = "invocation"
# This code no longer seems to be necessary?
# Leave it here just in case
#
# from invokeai.backend.model_manager import get_model_config_formats
# formats = get_model_config_formats()
# for model_config_name, enum_set in formats.items():
# Include all routers
app.include_router(utilities.utilities_router, prefix="/api")
app.include_router(model_manager.model_manager_router, prefix="/api")
app.include_router(download_queue.download_queue_router, prefix="/api")
app.include_router(images.images_router, prefix="/api")
app.include_router(boards.boards_router, prefix="/api")
app.include_router(board_images.board_images_router, prefix="/api")
app.include_router(app_info.app_router, prefix="/api")
app.include_router(session_queue.session_queue_router, prefix="/api")
app.include_router(workflows.workflows_router, prefix="/api")
# if model_config_name in openapi_schema["components"]["schemas"]:
# # print(f"Config with name {name} already defined")
# continue
# openapi_schema["components"]["schemas"][model_config_name] = {
# "title": model_config_name,
# "description": "An enumeration.",
# "type": "string",
# "enum": [v.value for v in enum_set],
# }
# Build a custom OpenAPI to include all outputs
# TODO: can outputs be included on metadata of invocation schemas somehow?
def custom_openapi() -> dict[str, Any]:
if app.openapi_schema:
app.openapi_schema = openapi_schema
return app.openapi_schema
openapi_schema = get_openapi(
title=app.title,
description="An API for invoking AI image operations",
version="1.0.0",
routes=app.routes,
separate_input_output_schemas=False, # https://fastapi.tiangolo.com/how-to/separate-openapi-schemas/
)
# Add all outputs
all_invocations = BaseInvocation.get_invocations()
output_types = set()
output_type_titles = {}
for invoker in all_invocations:
output_type = signature(invoker.invoke).return_annotation
output_types.add(output_type)
output_schemas = models_json_schema(
models=[(o, "serialization") for o in output_types], ref_template="#/components/schemas/{model}"
)
for schema_key, output_schema in output_schemas[1]["$defs"].items():
# TODO: note that we assume the schema_key here is the TYPE.__name__
# This could break in some cases, figure out a better way to do it
output_type_titles[schema_key] = output_schema["title"]
openapi_schema["components"]["schemas"][schema_key] = output_schema
openapi_schema["components"]["schemas"][schema_key]["class"] = "output"
# Some models don't end up in the schemas as standalone definitions
additional_schemas = models_json_schema(
[
(UIConfigBase, "serialization"),
(InputFieldJSONSchemaExtra, "serialization"),
(OutputFieldJSONSchemaExtra, "serialization"),
(ModelIdentifierField, "serialization"),
(ProgressImage, "serialization"),
],
ref_template="#/components/schemas/{model}",
)
for schema_key, schema_json in additional_schemas[1]["$defs"].items():
openapi_schema["components"]["schemas"][schema_key] = schema_json
openapi_schema["components"]["schemas"]["InvocationOutputMap"] = {
"type": "object",
"properties": {},
"required": [],
}
# Add a reference to the output type to additionalProperties of the invoker schema
for invoker in all_invocations:
invoker_name = invoker.__name__ # type: ignore [attr-defined] # this is a valid attribute
output_type = signature(obj=invoker.invoke).return_annotation
output_type_title = output_type_titles[output_type.__name__]
invoker_schema = openapi_schema["components"]["schemas"][f"{invoker_name}"]
outputs_ref = {"$ref": f"#/components/schemas/{output_type_title}"}
invoker_schema["output"] = outputs_ref
openapi_schema["components"]["schemas"]["InvocationOutputMap"]["properties"][invoker.get_type()] = outputs_ref
openapi_schema["components"]["schemas"]["InvocationOutputMap"]["required"].append(invoker.get_type())
invoker_schema["class"] = "invocation"
# This code no longer seems to be necessary?
# Leave it here just in case
#
# from invokeai.backend.model_manager import get_model_config_formats
# formats = get_model_config_formats()
# for model_config_name, enum_set in formats.items():
# if model_config_name in openapi_schema["components"]["schemas"]:
# # print(f"Config with name {name} already defined")
# continue
# openapi_schema["components"]["schemas"][model_config_name] = {
# "title": model_config_name,
# "description": "An enumeration.",
# "type": "string",
# "enum": [v.value for v in enum_set],
# }
app.openapi_schema = openapi_schema
return app.openapi_schema
app.openapi = custom_openapi # type: ignore [method-assign] # this is a valid assignment
@app.get("/docs", include_in_schema=False)
def overridden_swagger() -> HTMLResponse:
return get_swagger_ui_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=f"{app.title} - Swagger UI",
swagger_favicon_url="static/docs/invoke-favicon-docs.svg",
)
@app.get("/redoc", include_in_schema=False)
def overridden_redoc() -> HTMLResponse:
return get_redoc_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=f"{app.title} - Redoc",
redoc_favicon_url="static/docs/invoke-favicon-docs.svg",
)
web_root_path = Path(list(web_dir.__path__)[0])
try:
app.mount("/", NoCacheStaticFiles(directory=Path(web_root_path, "dist"), html=True), name="ui")
except RuntimeError:
logger.warn(f"No UI found at {web_root_path}/dist, skipping UI mount")
app.mount(
"/static", NoCacheStaticFiles(directory=Path(web_root_path, "static/")), name="static"
) # docs favicon is in here
app.openapi = custom_openapi # type: ignore [method-assign] # this is a valid assignment
def check_cudnn(logger: logging.Logger) -> None:
@@ -252,27 +243,39 @@ def check_cudnn(logger: logging.Logger) -> None:
)
def find_port(port: int) -> int:
"""Find a port not in use starting at given port"""
# Taken from https://waylonwalker.com/python-find-available-port/, thanks Waylon!
# https://github.com/WaylonWalker
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
if s.connect_ex(("localhost", port)) == 0:
return find_port(port=port + 1)
else:
return port
def init_dev_reload(logger: logging.Logger) -> None:
try:
import jurigged
except ImportError as e:
logger.error(
'Can\'t start `--dev_reload` because jurigged is not found; `pip install -e ".[dev]"` to include development dependencies.',
exc_info=e,
)
else:
jurigged.watch(logger=InvokeAILogger.get_logger(name="jurigged").info)
def invoke_api() -> None:
def find_port(port: int) -> int:
"""Find a port not in use starting at given port"""
# Taken from https://waylonwalker.com/python-find-available-port/, thanks Waylon!
# https://github.com/WaylonWalker
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
if s.connect_ex(("localhost", port)) == 0:
return find_port(port=port + 1)
else:
return port
start = time.time()
app_config = get_config()
logger = InvokeAILogger.get_logger(config=app_config)
apply_monkeypatches()
fix_mimetypes()
if app_config.dev_reload:
try:
import jurigged
except ImportError as e:
logger.error(
'Can\'t start `--dev_reload` because jurigged is not found; `pip install -e ".[dev]"` to include development dependencies.',
exc_info=e,
)
else:
jurigged.watch(logger=InvokeAILogger.get_logger(name="jurigged").info)
init_dev_reload(logger)
port = find_port(app_config.port)
if port != app_config.port:
@@ -280,6 +283,11 @@ def invoke_api() -> None:
check_cudnn(logger)
torch_device_name = TorchDevice.get_torch_device_name()
logger.info(f"Using torch device: {torch_device_name}")
app = build_app(app_config, logger)
# Start our own event loop for eventing usage
loop = asyncio.new_event_loop()
config = uvicorn.Config(
@@ -299,6 +307,7 @@ def invoke_api() -> None:
log.handlers.clear()
for ch in logger.handlers:
log.addHandler(ch)
logger.info(f"API started in {time.time() - start:.2f} seconds")
loop.run_until_complete(server.serve())

View File

@@ -24,6 +24,7 @@ from pydantic import BaseModel, Field, field_validator, model_validator
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
Input,
InputField,
OutputField,
UIType,
@@ -34,16 +35,15 @@ from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES, heuristic_resize
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES
from invokeai.backend.image_util.canny import get_canny_edges
from invokeai.backend.image_util.depth_anything import DepthAnythingDetector
from invokeai.backend.image_util.dw_openpose import DWOpenposeDetector
from invokeai.backend.image_util.hed import HEDProcessor
from invokeai.backend.image_util.lineart import LineartProcessor
from invokeai.backend.image_util.lineart_anime import LineartAnimeProcessor
from invokeai.backend.image_util.util import np_to_pil, pil_to_np
from .baseinvocation import BaseInvocation, BaseInvocationOutput, Classification, invocation, invocation_output
from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
class ControlField(BaseModel):
@@ -79,13 +79,13 @@ class ControlOutput(BaseInvocationOutput):
control: ControlField = OutputField(description=FieldDescriptions.control)
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.2")
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.1")
class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
image: ImageField = InputField(description="The control image")
control_model: ModelIdentifierField = InputField(
description=FieldDescriptions.controlnet_model, ui_type=UIType.ControlNetModel
description=FieldDescriptions.controlnet_model, input=Input.Direct, ui_type=UIType.ControlNetModel
)
control_weight: Union[float, List[float]] = InputField(
default=1.0, ge=-1, le=2, description="The weight given to the ControlNet"
@@ -164,13 +164,13 @@ class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Canny Processor",
tags=["controlnet", "canny"],
category="controlnet",
version="1.3.3",
version="1.3.2",
)
class CannyImageProcessorInvocation(ImageProcessorInvocation):
"""Canny edge detection for ControlNet"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
low_threshold: int = InputField(
default=100, ge=0, le=255, description="The low threshold of the Canny pixel gradient (0-255)"
)
@@ -198,13 +198,13 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
title="HED (softedge) Processor",
tags=["controlnet", "hed", "softedge"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class HedImageProcessorInvocation(ImageProcessorInvocation):
"""Applies HED edge detection to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
# safe not supported in controlnet_aux v0.0.3
# safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
scribble: bool = InputField(default=False, description=FieldDescriptions.scribble_mode)
@@ -227,13 +227,13 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Processor",
tags=["controlnet", "lineart"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class LineartImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
coarse: bool = InputField(default=False, description="Whether to use coarse mode")
def run_processor(self, image: Image.Image) -> Image.Image:
@@ -249,13 +249,13 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Anime Processor",
tags=["controlnet", "lineart", "anime"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art anime processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image) -> Image.Image:
processor = LineartAnimeProcessor()
@@ -272,15 +272,15 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
title="Midas Depth Processor",
tags=["controlnet", "midas"],
category="controlnet",
version="1.2.4",
version="1.2.3",
)
class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Midas depth processing to image"""
a_mult: float = InputField(default=2.0, ge=0, description="Midas parameter `a_mult` (a = a_mult * PI)")
bg_th: float = InputField(default=0.1, ge=0, description="Midas parameter `bg_th`")
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
# depth_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal: bool = InputField(default=False, description="whether to use depth and normal mode")
@@ -303,13 +303,13 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Normal BAE Processor",
tags=["controlnet"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies NormalBae processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
normalbae_processor = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")
@@ -320,13 +320,13 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.2.3"
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.2.2"
)
class MlsdImageProcessorInvocation(ImageProcessorInvocation):
"""Applies MLSD processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
thr_v: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_v`")
thr_d: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_d`")
@@ -343,13 +343,13 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.2.3"
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.2.2"
)
class PidiImageProcessorInvocation(ImageProcessorInvocation):
"""Applies PIDI processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
scribble: bool = InputField(default=False, description=FieldDescriptions.scribble_mode)
@@ -370,13 +370,13 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
title="Content Shuffle Processor",
tags=["controlnet", "contentshuffle"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
"""Applies content shuffle processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
h: int = InputField(default=512, ge=0, description="Content shuffle `h` parameter")
w: int = InputField(default=512, ge=0, description="Content shuffle `w` parameter")
f: int = InputField(default=256, ge=0, description="Content shuffle `f` parameter")
@@ -400,7 +400,7 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
title="Zoe (Depth) Processor",
tags=["controlnet", "zoe", "depth"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Zoe depth processing to image"""
@@ -416,15 +416,15 @@ class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Mediapipe Face Processor",
tags=["controlnet", "mediapipe", "face"],
category="controlnet",
version="1.2.4",
version="1.2.3",
)
class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
"""Applies mediapipe face processing to image"""
max_faces: int = InputField(default=1, ge=1, description="Maximum number of faces to detect")
min_confidence: float = InputField(default=0.5, ge=0, le=1, description="Minimum confidence for face detection")
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
mediapipe_face_processor = MediapipeFaceDetector()
@@ -443,7 +443,7 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
title="Leres (Depth) Processor",
tags=["controlnet", "leres", "depth"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class LeresImageProcessorInvocation(ImageProcessorInvocation):
"""Applies leres processing to image"""
@@ -451,8 +451,8 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
thr_a: float = InputField(default=0, description="Leres parameter `thr_a`")
thr_b: float = InputField(default=0, description="Leres parameter `thr_b`")
boost: bool = InputField(default=False, description="Whether to use boost mode")
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
leres_processor = LeresDetector.from_pretrained("lllyasviel/Annotators")
@@ -472,7 +472,7 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
title="Tile Resample Processor",
tags=["controlnet", "tile"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class TileResamplerProcessorInvocation(ImageProcessorInvocation):
"""Tile resampler processor"""
@@ -512,13 +512,13 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
title="Segment Anything Processor",
tags=["controlnet", "segmentanything"],
category="controlnet",
version="1.2.4",
version="1.2.3",
)
class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
"""Applies segment anything processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
# segment_anything_processor = SamDetector.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
@@ -559,12 +559,12 @@ class SamDetectorReproducibleColors(SamDetector):
title="Color Map Processor",
tags=["controlnet"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a color map from the provided image"""
color_map_tile_size: int = InputField(default=64, ge=1, description=FieldDescriptions.tile_size)
color_map_tile_size: int = InputField(default=64, ge=0, description=FieldDescriptions.tile_size)
def run_processor(self, image: Image.Image):
np_image = np.array(image, dtype=np.uint8)
@@ -591,7 +591,7 @@ DEPTH_ANYTHING_MODEL_SIZES = Literal["large", "base", "small"]
title="Depth Anything Processor",
tags=["controlnet", "depth", "depth anything"],
category="controlnet",
version="1.1.2",
version="1.1.1",
)
class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a depth map based on the Depth Anything algorithm"""
@@ -599,7 +599,7 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
model_size: DEPTH_ANYTHING_MODEL_SIZES = InputField(
default="small", description="The size of the depth model to use"
)
resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
resolution: int = InputField(default=512, ge=64, multiple_of=64, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image):
depth_anything_detector = DepthAnythingDetector()
@@ -614,7 +614,7 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
title="DW Openpose Image Processor",
tags=["controlnet", "dwpose", "openpose"],
category="controlnet",
version="1.1.1",
version="1.1.0",
)
class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Generates an openpose pose from an image using DWPose"""
@@ -622,7 +622,7 @@ class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
draw_body: bool = InputField(default=True)
draw_face: bool = InputField(default=False)
draw_hands: bool = InputField(default=False)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image):
dw_openpose = DWOpenposeDetector()
@@ -634,27 +634,3 @@ class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
resolution=self.image_resolution,
)
return processed_image
@invocation(
"heuristic_resize",
title="Heuristic Resize",
tags=["image, controlnet"],
category="image",
version="1.0.1",
classification=Classification.Prototype,
)
class HeuristicResizeInvocation(BaseInvocation):
"""Resize an image using a heuristic method. Preserves edge maps."""
image: ImageField = InputField(description="The image to resize")
width: int = InputField(default=512, ge=1, description="The width to resize to (px)")
height: int = InputField(default=512, ge=1, description="The height to resize to (px)")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name, "RGB")
np_img = pil_to_np(image)
np_resized = heuristic_resize(np_img, (self.width, self.height))
resized = np_to_pil(np_resized)
image_dto = context.images.save(image=resized)
return ImageOutput.build(image_dto)

View File

@@ -1,5 +1,6 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from pathlib import Path
from typing import Literal, Optional
import cv2
@@ -503,7 +504,7 @@ class ImageInverseLerpInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Blur NSFW Image",
tags=["image", "nsfw"],
category="image",
version="1.2.3",
version="1.2.2",
)
class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Add blur to NSFW-flagged images"""
@@ -515,12 +516,23 @@ class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
logger = context.logger
logger.debug("Running NSFW checker")
image = SafetyChecker.blur_if_nsfw(image)
if SafetyChecker.has_nsfw_concept(image):
logger.info("A potentially NSFW image has been detected. Image will be blurred.")
blurry_image = image.filter(filter=ImageFilter.GaussianBlur(radius=32))
caution = self._get_caution_img()
blurry_image.paste(caution, (0, 0), caution)
image = blurry_image
image_dto = context.images.save(image=image)
return ImageOutput.build(image_dto)
def _get_caution_img(self) -> Image.Image:
import invokeai.app.assets.images as image_assets
caution = Image.open(Path(image_assets.__path__[0]) / "caution.png")
return caution.resize((caution.width // 2, caution.height // 2))
@invocation(
"img_watermark",

View File

@@ -5,7 +5,7 @@ from pydantic import BaseModel, Field, field_validator, model_validator
from typing_extensions import Self
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField, TensorField, UIType
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, TensorField, UIType
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
@@ -58,7 +58,7 @@ class IPAdapterOutput(BaseInvocationOutput):
CLIP_VISION_MODEL_MAP = {"ViT-H": "ip_adapter_sd_image_encoder", "ViT-G": "ip_adapter_sdxl_image_encoder"}
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.4.1")
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.4.0")
class IPAdapterInvocation(BaseInvocation):
"""Collects IP-Adapter info to pass to other nodes."""
@@ -67,6 +67,7 @@ class IPAdapterInvocation(BaseInvocation):
ip_adapter_model: ModelIdentifierField = InputField(
description="The IP-Adapter model.",
title="IP-Adapter Model",
input=Input.Direct,
ui_order=-1,
ui_type=UIType.IPAdapterModel,
)

View File

@@ -3,7 +3,7 @@ import inspect
import math
from contextlib import ExitStack
from functools import singledispatchmethod
from typing import Any, Dict, Iterator, List, Literal, Optional, Tuple, Union
from typing import Any, Iterator, List, Literal, Optional, Tuple, Union
import einops
import numpy as np
@@ -11,6 +11,7 @@ import numpy.typing as npt
import torch
import torchvision
import torchvision.transforms as T
from diffusers import AutoencoderKL, AutoencoderTiny
from diffusers.configuration_utils import ConfigMixin
from diffusers.image_processor import VaeImageProcessor
from diffusers.models.adapter import T2IAdapter
@@ -20,12 +21,9 @@ from diffusers.models.attention_processor import (
LoRAXFormersAttnProcessor,
XFormersAttnProcessor,
)
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
from diffusers.schedulers.scheduling_dpmsolver_sde import DPMSolverSDEScheduler
from diffusers.schedulers.scheduling_tcd import TCDScheduler
from diffusers.schedulers.scheduling_utils import SchedulerMixin as Scheduler
from diffusers.schedulers import DPMSolverSDEScheduler
from diffusers.schedulers import SchedulerMixin as Scheduler
from PIL import Image, ImageFilter
from pydantic import field_validator
from torchvision.transforms.functional import resize as tv_resize
@@ -343,7 +341,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
)
steps: int = InputField(default=10, gt=0, description=FieldDescriptions.steps)
cfg_scale: Union[float, List[float]] = InputField(
default=7.5, description=FieldDescriptions.cfg_scale, title="CFG Scale"
default=7.5, ge=1, description=FieldDescriptions.cfg_scale, title="CFG Scale"
)
denoising_start: float = InputField(
default=0.0,
@@ -523,10 +521,9 @@ class DenoiseLatentsInvocation(BaseInvocation):
)
if is_sdxl:
return (
SDXLConditioningInfo(embeds=text_embedding, pooled_embeds=pooled_embedding, add_time_ids=add_time_ids),
regions,
)
return SDXLConditioningInfo(
embeds=text_embedding, pooled_embeds=pooled_embedding, add_time_ids=add_time_ids
), regions
return BasicConditioningInfo(embeds=text_embedding), regions
def get_conditioning_data(
@@ -566,11 +563,6 @@ class DenoiseLatentsInvocation(BaseInvocation):
dtype=unet.dtype,
)
if isinstance(self.cfg_scale, list):
assert (
len(self.cfg_scale) == self.steps
), "cfg_scale (list) must have the same length as the number of steps"
conditioning_data = TextConditioningData(
uncond_text=uncond_text_embedding,
cond_text=cond_text_embedding,
@@ -586,6 +578,13 @@ class DenoiseLatentsInvocation(BaseInvocation):
unet: UNet2DConditionModel,
scheduler: Scheduler,
) -> StableDiffusionGeneratorPipeline:
# TODO:
# configure_model_padding(
# unet,
# self.seamless,
# self.seamless_axes,
# )
class FakeVae:
class FakeVaeConfig:
def __init__(self) -> None:
@@ -821,7 +820,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
denoising_start: float,
denoising_end: float,
seed: int,
) -> Tuple[int, List[int], int, Dict[str, Any]]:
) -> Tuple[int, List[int], int]:
assert isinstance(scheduler, ConfigMixin)
if scheduler.config.get("cpu_only", False):
scheduler.set_timesteps(steps, device="cpu")
@@ -849,15 +848,13 @@ class DenoiseLatentsInvocation(BaseInvocation):
timesteps = timesteps[t_start_idx : t_start_idx + t_end_idx]
num_inference_steps = len(timesteps) // scheduler.order
scheduler_step_kwargs: Dict[str, Any] = {}
scheduler_step_kwargs = {}
scheduler_step_signature = inspect.signature(scheduler.step)
if "generator" in scheduler_step_signature.parameters:
# At some point, someone decided that schedulers that accept a generator should use the original seed with
# all bits flipped. I don't know the original rationale for this, but now we must keep it like this for
# reproducibility.
scheduler_step_kwargs.update({"generator": torch.Generator(device=device).manual_seed(seed ^ 0xFFFFFFFF)})
if isinstance(scheduler, TCDScheduler):
scheduler_step_kwargs.update({"eta": 1.0})
scheduler_step_kwargs = {"generator": torch.Generator(device=device).manual_seed(seed ^ 0xFFFFFFFF)}
return num_inference_steps, timesteps, init_timestep, scheduler_step_kwargs
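For reference, the generator seeding described in the comment above boils down to the following standalone sketch (the seed value and device are placeholders, not taken from the diff):

import torch

seed = 1234  # placeholder seed
# Flip all 32 bits of the seed before seeding the generator; kept this way for
# reproducibility with historical behaviour, as the comment above explains.
generator = torch.Generator(device="cpu").manual_seed(seed ^ 0xFFFFFFFF)
scheduler_step_kwargs = {"generator": generator}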

View File

@@ -11,7 +11,6 @@ from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType,
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
@@ -94,46 +93,19 @@ class ModelLoaderOutput(UNetOutput, CLIPOutput, VAEOutput):
pass
@invocation_output("model_identifier_output")
class ModelIdentifierOutput(BaseInvocationOutput):
"""Model identifier output"""
model: ModelIdentifierField = OutputField(description="Model identifier", title="Model")
@invocation(
"model_identifier",
title="Model identifier",
tags=["model"],
category="model",
version="1.0.0",
classification=Classification.Prototype,
)
class ModelIdentifierInvocation(BaseInvocation):
"""Selects any model, outputting it its identifier. Be careful with this one! The identifier will be accepted as
input for any model, even if the model types don't match. If you connect this to a mismatched input, you'll get an
error."""
model: ModelIdentifierField = InputField(description="The model to select", title="Model")
def invoke(self, context: InvocationContext) -> ModelIdentifierOutput:
if not context.models.exists(self.model.key):
raise Exception(f"Unknown model {self.model.key}")
return ModelIdentifierOutput(model=self.model)
@invocation(
"main_model_loader",
title="Main Model",
tags=["model"],
category="model",
version="1.0.3",
version="1.0.2",
)
class MainModelLoaderInvocation(BaseInvocation):
"""Loads a main model, outputting its submodels."""
model: ModelIdentifierField = InputField(description=FieldDescriptions.main_model, ui_type=UIType.MainModel)
model: ModelIdentifierField = InputField(
description=FieldDescriptions.main_model, input=Input.Direct, ui_type=UIType.MainModel
)
# TODO: precision?
def invoke(self, context: InvocationContext) -> ModelLoaderOutput:
@@ -162,12 +134,12 @@ class LoRALoaderOutput(BaseInvocationOutput):
clip: Optional[CLIPField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
@invocation("lora_loader", title="LoRA", tags=["model"], category="model", version="1.0.3")
@invocation("lora_loader", title="LoRA", tags=["model"], category="model", version="1.0.2")
class LoRALoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
lora: ModelIdentifierField = InputField(
description=FieldDescriptions.lora_model, title="LoRA", ui_type=UIType.LoRAModel
description=FieldDescriptions.lora_model, input=Input.Direct, title="LoRA", ui_type=UIType.LoRAModel
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
unet: Optional[UNetField] = InputField(
@@ -218,75 +190,6 @@ class LoRALoaderInvocation(BaseInvocation):
return output
@invocation_output("lora_selector_output")
class LoRASelectorOutput(BaseInvocationOutput):
"""Model loader output"""
lora: LoRAField = OutputField(description="LoRA model and weight", title="LoRA")
@invocation("lora_selector", title="LoRA Selector", tags=["model"], category="model", version="1.0.1")
class LoRASelectorInvocation(BaseInvocation):
"""Selects a LoRA model and weight."""
lora: ModelIdentifierField = InputField(
description=FieldDescriptions.lora_model, title="LoRA", ui_type=UIType.LoRAModel
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
def invoke(self, context: InvocationContext) -> LoRASelectorOutput:
return LoRASelectorOutput(lora=LoRAField(lora=self.lora, weight=self.weight))
@invocation("lora_collection_loader", title="LoRA Collection Loader", tags=["model"], category="model", version="1.0.0")
class LoRACollectionLoader(BaseInvocation):
"""Applies a collection of LoRAs to the provided UNet and CLIP models."""
loras: LoRAField | list[LoRAField] = InputField(
description="LoRA models and weights. May be a single LoRA or collection.", title="LoRAs"
)
unet: Optional[UNetField] = InputField(
default=None,
description=FieldDescriptions.unet,
input=Input.Connection,
title="UNet",
)
clip: Optional[CLIPField] = InputField(
default=None,
description=FieldDescriptions.clip,
input=Input.Connection,
title="CLIP",
)
def invoke(self, context: InvocationContext) -> LoRALoaderOutput:
output = LoRALoaderOutput()
loras = self.loras if isinstance(self.loras, list) else [self.loras]
added_loras: list[str] = []
for lora in loras:
if lora.lora.key in added_loras:
continue
if not context.models.exists(lora.lora.key):
raise Exception(f"Unknown lora: {lora.lora.key}!")
assert lora.lora.base in (BaseModelType.StableDiffusion1, BaseModelType.StableDiffusion2)
added_loras.append(lora.lora.key)
if self.unet is not None:
if output.unet is None:
output.unet = self.unet.model_copy(deep=True)
output.unet.loras.append(lora)
if self.clip is not None:
if output.clip is None:
output.clip = self.clip.model_copy(deep=True)
output.clip.loras.append(lora)
return output
@invocation_output("sdxl_lora_loader_output")
class SDXLLoRALoaderOutput(BaseInvocationOutput):
"""SDXL LoRA Loader Output"""
@@ -301,13 +204,13 @@ class SDXLLoRALoaderOutput(BaseInvocationOutput):
title="SDXL LoRA",
tags=["lora", "model"],
category="model",
version="1.0.3",
version="1.0.2",
)
class SDXLLoRALoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
lora: ModelIdentifierField = InputField(
description=FieldDescriptions.lora_model, title="LoRA", ui_type=UIType.LoRAModel
description=FieldDescriptions.lora_model, input=Input.Direct, title="LoRA", ui_type=UIType.LoRAModel
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
unet: Optional[UNetField] = InputField(
@@ -376,78 +279,12 @@ class SDXLLoRALoaderInvocation(BaseInvocation):
return output
@invocation(
"sdxl_lora_collection_loader",
title="SDXL LoRA Collection Loader",
tags=["model"],
category="model",
version="1.0.0",
)
class SDXLLoRACollectionLoader(BaseInvocation):
"""Applies a collection of SDXL LoRAs to the provided UNet and CLIP models."""
loras: LoRAField | list[LoRAField] = InputField(
description="LoRA models and weights. May be a single LoRA or collection.", title="LoRAs"
)
unet: Optional[UNetField] = InputField(
default=None,
description=FieldDescriptions.unet,
input=Input.Connection,
title="UNet",
)
clip: Optional[CLIPField] = InputField(
default=None,
description=FieldDescriptions.clip,
input=Input.Connection,
title="CLIP",
)
clip2: Optional[CLIPField] = InputField(
default=None,
description=FieldDescriptions.clip,
input=Input.Connection,
title="CLIP 2",
)
def invoke(self, context: InvocationContext) -> SDXLLoRALoaderOutput:
output = SDXLLoRALoaderOutput()
loras = self.loras if isinstance(self.loras, list) else [self.loras]
added_loras: list[str] = []
for lora in loras:
if lora.lora.key in added_loras:
continue
if not context.models.exists(lora.lora.key):
raise Exception(f"Unknown lora: {lora.lora.key}!")
assert lora.lora.base is BaseModelType.StableDiffusionXL
added_loras.append(lora.lora.key)
if self.unet is not None:
if output.unet is None:
output.unet = self.unet.model_copy(deep=True)
output.unet.loras.append(lora)
if self.clip is not None:
if output.clip is None:
output.clip = self.clip.model_copy(deep=True)
output.clip.loras.append(lora)
if self.clip2 is not None:
if output.clip2 is None:
output.clip2 = self.clip2.model_copy(deep=True)
output.clip2.loras.append(lora)
return output
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model", version="1.0.3")
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model", version="1.0.2")
class VAELoaderInvocation(BaseInvocation):
"""Loads a VAE model, outputting a VaeLoaderOutput"""
vae_model: ModelIdentifierField = InputField(
description=FieldDescriptions.vae_model, title="VAE", ui_type=UIType.VAEModel
description=FieldDescriptions.vae_model, input=Input.Direct, title="VAE", ui_type=UIType.VAEModel
)
def invoke(self, context: InvocationContext) -> VAEOutput:

View File

@@ -1,4 +1,4 @@
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField, UIType
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import SubModelType
@@ -30,12 +30,12 @@ class SDXLRefinerModelLoaderOutput(BaseInvocationOutput):
vae: VAEField = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation("sdxl_model_loader", title="SDXL Main Model", tags=["model", "sdxl"], category="model", version="1.0.3")
@invocation("sdxl_model_loader", title="SDXL Main Model", tags=["model", "sdxl"], category="model", version="1.0.2")
class SDXLModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl base model, outputting its submodels."""
model: ModelIdentifierField = InputField(
description=FieldDescriptions.sdxl_main_model, ui_type=UIType.SDXLMainModel
description=FieldDescriptions.sdxl_main_model, input=Input.Direct, ui_type=UIType.SDXLMainModel
)
# TODO: precision?
@@ -67,13 +67,13 @@ class SDXLModelLoaderInvocation(BaseInvocation):
title="SDXL Refiner Model",
tags=["model", "sdxl", "refiner"],
category="model",
version="1.0.3",
version="1.0.2",
)
class SDXLRefinerModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl refiner model, outputting its submodels."""
model: ModelIdentifierField = InputField(
description=FieldDescriptions.sdxl_refiner_model, ui_type=UIType.SDXLRefinerModel
description=FieldDescriptions.sdxl_refiner_model, input=Input.Direct, ui_type=UIType.SDXLRefinerModel
)
# TODO: precision?

View File

@@ -8,7 +8,7 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, InputField, OutputField, UIType
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, Input, InputField, OutputField, UIType
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
@@ -45,7 +45,7 @@ class T2IAdapterOutput(BaseInvocationOutput):
@invocation(
"t2i_adapter", title="T2I-Adapter", tags=["t2i_adapter", "control"], category="t2i_adapter", version="1.0.3"
"t2i_adapter", title="T2I-Adapter", tags=["t2i_adapter", "control"], category="t2i_adapter", version="1.0.2"
)
class T2IAdapterInvocation(BaseInvocation):
"""Collects T2I-Adapter info to pass to other nodes."""
@@ -55,6 +55,7 @@ class T2IAdapterInvocation(BaseInvocation):
t2i_adapter_model: ModelIdentifierField = InputField(
description="The T2I-Adapter model.",
title="T2I-Adapter Model",
input=Input.Direct,
ui_order=-1,
ui_type=UIType.T2IAdapterModel,
)

View File

@@ -318,8 +318,10 @@ class DownloadQueueService(DownloadQueueServiceBase):
in_progress_path.rename(job.download_path)
def _validate_filename(self, directory: str, filename: str) -> bool:
pc_name_max = get_pc_name_max(directory)
pc_path_max = get_pc_path_max(directory)
pc_name_max = os.pathconf(directory, "PC_NAME_MAX") if hasattr(os, "pathconf") else 260 # hardcoded for windows
pc_path_max = (
os.pathconf(directory, "PC_PATH_MAX") if hasattr(os, "pathconf") else 32767
) # hardcoded for windows with long names enabled
if "/" in filename:
return False
if filename.startswith(".."):
@@ -417,26 +419,6 @@ class DownloadQueueService(DownloadQueueServiceBase):
self._logger.warning(excp)
def get_pc_name_max(directory: str) -> int:
if hasattr(os, "pathconf"):
try:
return os.pathconf(directory, "PC_NAME_MAX")
except OSError:
# macOS w/ external drives raise OSError
pass
return 260 # hardcoded for windows
def get_pc_path_max(directory: str) -> int:
if hasattr(os, "pathconf"):
try:
return os.pathconf(directory, "PC_PATH_MAX")
except OSError:
# some platforms may not have this value
pass
return 32767 # hardcoded for windows with long names enabled
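A minimal sketch of how these helpers could back a filename check, assuming both functions above are in scope (the exact comparisons are illustrative, not the service's implementation):

import os

def filename_fits(directory: str, filename: str) -> bool:
    # Longest single path component the filesystem allows (falls back to 260 on Windows).
    name_max = get_pc_name_max(directory)
    # Longest total path the filesystem allows (falls back to 32767 on Windows with long names enabled).
    path_max = get_pc_path_max(directory)
    return len(filename) <= name_max and len(os.path.join(directory, filename)) <= path_max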
# Example on_progress event handler to display a TQDM status bar
# Activate with:
# download_service.download(DownloadJob('http://foo.bar/baz', '/tmp', on_progress=TqdmProgress().update))

View File

@@ -122,8 +122,6 @@ class EventServiceBase:
source_node_id: str,
error_type: str,
error: str,
user_id: str | None,
project_id: str | None,
) -> None:
"""Emitted when an invocation has completed"""
self.__emit_queue_event(
@@ -137,8 +135,6 @@ class EventServiceBase:
"source_node_id": source_node_id,
"error_type": error_type,
"error": error,
"user_id": user_id,
"project_id": project_id,
},
)

View File

@@ -4,6 +4,9 @@ from typing import Optional
from PIL.Image import Image as PILImageType
from invokeai.app.invocations.fields import MetadataField
from invokeai.app.services.workflow_records.workflow_records_common import WorkflowWithoutID
class ImageFileStorageBase(ABC):
"""Low-level service responsible for storing and retrieving image files."""
@@ -30,9 +33,8 @@ class ImageFileStorageBase(ABC):
self,
image: PILImageType,
image_name: str,
metadata: Optional[str] = None,
workflow: Optional[str] = None,
graph: Optional[str] = None,
metadata: Optional[MetadataField] = None,
workflow: Optional[WorkflowWithoutID] = None,
thumbnail_size: int = 256,
) -> None:
"""Saves an image and a 256x256 WEBP thumbnail. Returns a tuple of the image name, thumbnail name, and created timestamp."""
@@ -44,11 +46,6 @@ class ImageFileStorageBase(ABC):
pass
@abstractmethod
def get_workflow(self, image_name: str) -> Optional[str]:
def get_workflow(self, image_name: str) -> Optional[WorkflowWithoutID]:
"""Gets the workflow of an image."""
pass
@abstractmethod
def get_graph(self, image_name: str) -> Optional[str]:
"""Gets the graph of an image."""
pass

View File

@@ -7,7 +7,9 @@ from PIL import Image, PngImagePlugin
from PIL.Image import Image as PILImageType
from send2trash import send2trash
from invokeai.app.invocations.fields import MetadataField
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.workflow_records.workflow_records_common import WorkflowWithoutID
from invokeai.app.util.thumbnails import get_thumbnail_name, make_thumbnail
from .image_files_base import ImageFileStorageBase
@@ -54,9 +56,8 @@ class DiskImageFileStorage(ImageFileStorageBase):
self,
image: PILImageType,
image_name: str,
metadata: Optional[str] = None,
workflow: Optional[str] = None,
graph: Optional[str] = None,
metadata: Optional[MetadataField] = None,
workflow: Optional[WorkflowWithoutID] = None,
thumbnail_size: int = 256,
) -> None:
try:
@@ -67,14 +68,13 @@ class DiskImageFileStorage(ImageFileStorageBase):
info_dict = {}
if metadata is not None:
info_dict["invokeai_metadata"] = metadata
pnginfo.add_text("invokeai_metadata", metadata)
metadata_json = metadata.model_dump_json()
info_dict["invokeai_metadata"] = metadata_json
pnginfo.add_text("invokeai_metadata", metadata_json)
if workflow is not None:
info_dict["invokeai_workflow"] = workflow
pnginfo.add_text("invokeai_workflow", workflow)
if graph is not None:
info_dict["invokeai_graph"] = graph
pnginfo.add_text("invokeai_graph", graph)
workflow_json = workflow.model_dump_json()
info_dict["invokeai_workflow"] = workflow_json
pnginfo.add_text("invokeai_workflow", workflow_json)
# When saving the image, the image object's info field is not populated. We need to set it
image.info = info_dict
@@ -129,18 +129,11 @@ class DiskImageFileStorage(ImageFileStorageBase):
path = path if isinstance(path, Path) else Path(path)
return path.exists()
def get_workflow(self, image_name: str) -> str | None:
def get_workflow(self, image_name: str) -> WorkflowWithoutID | None:
image = self.get(image_name)
workflow = image.info.get("invokeai_workflow", None)
if isinstance(workflow, str):
return workflow
return None
def get_graph(self, image_name: str) -> str | None:
image = self.get(image_name)
graph = image.info.get("invokeai_graph", None)
if isinstance(graph, str):
return graph
if workflow is not None:
return WorkflowWithoutID.model_validate_json(workflow)
return None
def __validate_storage_folders(self) -> None:

View File

@@ -80,7 +80,7 @@ class ImageRecordStorageBase(ABC):
starred: Optional[bool] = False,
session_id: Optional[str] = None,
node_id: Optional[str] = None,
metadata: Optional[str] = None,
metadata: Optional[MetadataField] = None,
) -> datetime:
"""Saves an image record."""
pass

View File

@@ -328,9 +328,10 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
starred: Optional[bool] = False,
session_id: Optional[str] = None,
node_id: Optional[str] = None,
metadata: Optional[str] = None,
metadata: Optional[MetadataField] = None,
) -> datetime:
try:
metadata_json = metadata.model_dump_json() if metadata is not None else None
self._lock.acquire()
self._cursor.execute(
"""--sql
@@ -357,7 +358,7 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
height,
node_id,
session_id,
metadata,
metadata_json,
is_intermediate,
starred,
has_workflow,

View File

@@ -12,6 +12,7 @@ from invokeai.app.services.image_records.image_records_common import (
)
from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.workflow_records.workflow_records_common import WorkflowWithoutID
class ImageServiceABC(ABC):
@@ -50,9 +51,8 @@ class ImageServiceABC(ABC):
session_id: Optional[str] = None,
board_id: Optional[str] = None,
is_intermediate: Optional[bool] = False,
metadata: Optional[str] = None,
workflow: Optional[str] = None,
graph: Optional[str] = None,
metadata: Optional[MetadataField] = None,
workflow: Optional[WorkflowWithoutID] = None,
) -> ImageDTO:
"""Creates an image, storing the file and its metadata."""
pass
@@ -87,12 +87,7 @@ class ImageServiceABC(ABC):
pass
@abstractmethod
def get_workflow(self, image_name: str) -> Optional[str]:
"""Gets an image's workflow."""
pass
@abstractmethod
def get_graph(self, image_name: str) -> Optional[str]:
def get_workflow(self, image_name: str) -> Optional[WorkflowWithoutID]:
"""Gets an image's workflow."""
pass

View File

@@ -5,6 +5,7 @@ from PIL.Image import Image as PILImageType
from invokeai.app.invocations.fields import MetadataField
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.workflow_records.workflow_records_common import WorkflowWithoutID
from ..image_files.image_files_common import (
ImageFileDeleteException,
@@ -41,9 +42,8 @@ class ImageService(ImageServiceABC):
session_id: Optional[str] = None,
board_id: Optional[str] = None,
is_intermediate: Optional[bool] = False,
metadata: Optional[str] = None,
workflow: Optional[str] = None,
graph: Optional[str] = None,
metadata: Optional[MetadataField] = None,
workflow: Optional[WorkflowWithoutID] = None,
) -> ImageDTO:
if image_origin not in ResourceOrigin:
raise InvalidOriginException
@@ -64,7 +64,7 @@ class ImageService(ImageServiceABC):
image_category=image_category,
width=width,
height=height,
has_workflow=workflow is not None or graph is not None,
has_workflow=workflow is not None,
# Meta fields
is_intermediate=is_intermediate,
# Nullable fields
@@ -75,7 +75,7 @@ class ImageService(ImageServiceABC):
if board_id is not None:
self.__invoker.services.board_image_records.add_image_to_board(board_id=board_id, image_name=image_name)
self.__invoker.services.image_files.save(
image_name=image_name, image=image, metadata=metadata, workflow=workflow, graph=graph
image_name=image_name, image=image, metadata=metadata, workflow=workflow
)
image_dto = self.get_dto(image_name)
@@ -157,7 +157,7 @@ class ImageService(ImageServiceABC):
self.__invoker.services.logger.error("Problem getting image metadata")
raise e
def get_workflow(self, image_name: str) -> Optional[str]:
def get_workflow(self, image_name: str) -> Optional[WorkflowWithoutID]:
try:
return self.__invoker.services.image_files.get_workflow(image_name)
except ImageFileNotFoundException:
@@ -167,16 +167,6 @@ class ImageService(ImageServiceABC):
self.__invoker.services.logger.error("Problem getting image workflow")
raise
def get_graph(self, image_name: str) -> Optional[str]:
try:
return self.__invoker.services.image_files.get_graph(image_name)
except ImageFileNotFoundException:
self.__invoker.services.logger.error("Image file not found")
raise
except Exception:
self.__invoker.services.logger.error("Problem getting image graph")
raise
def get_path(self, image_name: str, thumbnail: bool = False) -> str:
try:
return str(self.__invoker.services.image_files.get_path(image_name, thumbnail))

View File

@@ -237,8 +237,6 @@ class DefaultSessionProcessor(SessionProcessorBase):
source_node_id=source_invocation_id,
error_type=e.__class__.__name__,
error=error,
user_id=None,
project_id=None,
)
pass

View File

@@ -180,9 +180,9 @@ class ImagesInterface(InvocationContextInterface):
# If `metadata` is provided directly, use that. Else, use the metadata provided by `WithMetadata`, falling back to None.
metadata_ = None
if metadata:
metadata_ = metadata.model_dump_json()
elif isinstance(self._data.invocation, WithMetadata) and self._data.invocation.metadata:
metadata_ = self._data.invocation.metadata.model_dump_json()
metadata_ = metadata
elif isinstance(self._data.invocation, WithMetadata):
metadata_ = self._data.invocation.metadata
# If `board_id` is provided directly, use that. Else, use the board provided by `WithBoard`, falling back to None.
board_id_ = None
@@ -191,14 +191,6 @@ class ImagesInterface(InvocationContextInterface):
elif isinstance(self._data.invocation, WithBoard) and self._data.invocation.board:
board_id_ = self._data.invocation.board.board_id
workflow_ = None
if self._data.queue_item.workflow:
workflow_ = self._data.queue_item.workflow.model_dump_json()
graph_ = None
if self._data.queue_item.session.graph:
graph_ = self._data.queue_item.session.graph.model_dump_json()
return self._services.images.create(
image=image,
is_intermediate=self._data.invocation.is_intermediate,
@@ -206,8 +198,7 @@ class ImagesInterface(InvocationContextInterface):
board_id=board_id_,
metadata=metadata_,
image_origin=ResourceOrigin.INTERNAL,
workflow=workflow_,
graph=graph_,
workflow=self._data.queue_item.workflow,
session_id=self._data.queue_item.session_id,
node_id=self._data.invocation.id,
)

View File

@@ -4,4 +4,5 @@ Initialization file for invokeai.backend.image_util methods.
from .infill_methods.patchmatch import PatchMatch # noqa: F401
from .pngwriter import PngWriter, PromptFormatter, retrieve_metadata, write_metadata # noqa: F401
from .seamless import configure_model_padding # noqa: F401
from .util import InitImageResizer, make_grid # noqa: F401

View File

@@ -8,7 +8,7 @@ from pathlib import Path
import numpy as np
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from PIL import Image, ImageFilter
from PIL import Image
from transformers import AutoFeatureExtractor
import invokeai.backend.util.logging as logger
@@ -16,7 +16,6 @@ from invokeai.app.services.config.config_default import get_config
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.silence_warnings import SilenceWarnings
repo_id = "CompVis/stable-diffusion-safety-checker"
CHECKER_PATH = "core/convert/stable-diffusion-safety-checker"
@@ -25,30 +24,30 @@ class SafetyChecker:
Wrapper around SafetyChecker model.
"""
feature_extractor = None
safety_checker = None
feature_extractor = None
tried_load: bool = False
@classmethod
def _load_safety_checker(cls):
if cls.safety_checker is not None and cls.feature_extractor is not None:
if cls.tried_load:
return
try:
model_path = get_config().models_path / CHECKER_PATH
if model_path.exists():
cls.feature_extractor = AutoFeatureExtractor.from_pretrained(model_path)
cls.safety_checker = StableDiffusionSafetyChecker.from_pretrained(model_path)
else:
model_path.mkdir(parents=True, exist_ok=True)
cls.feature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)
cls.feature_extractor.save_pretrained(model_path, safe_serialization=True)
cls.safety_checker = StableDiffusionSafetyChecker.from_pretrained(repo_id)
cls.safety_checker.save_pretrained(model_path, safe_serialization=True)
cls.safety_checker = StableDiffusionSafetyChecker.from_pretrained(get_config().models_path / CHECKER_PATH)
cls.feature_extractor = AutoFeatureExtractor.from_pretrained(get_config().models_path / CHECKER_PATH)
except Exception as e:
logger.warning(f"Could not load NSFW checker: {str(e)}")
cls.tried_load = True
@classmethod
def safety_checker_available(cls) -> bool:
return Path(get_config().models_path, CHECKER_PATH).exists()
@classmethod
def has_nsfw_concept(cls, image: Image.Image) -> bool:
if not cls.safety_checker_available() and cls.tried_load:
return False
cls._load_safety_checker()
if cls.safety_checker is None or cls.feature_extractor is None:
return False
@@ -61,24 +60,3 @@ class SafetyChecker:
with SilenceWarnings():
checked_image, has_nsfw_concept = cls.safety_checker(images=x_image, clip_input=features.pixel_values)
return has_nsfw_concept[0]
@classmethod
def blur_if_nsfw(cls, image: Image.Image) -> Image.Image:
if cls.has_nsfw_concept(image):
logger.warning("A potentially NSFW image has been detected. Image will be blurred.")
blurry_image = image.filter(filter=ImageFilter.GaussianBlur(radius=32))
caution = cls._get_caution_img()
# Center the caution image on the blurred image
x = (blurry_image.width - caution.width) // 2
y = (blurry_image.height - caution.height) // 2
blurry_image.paste(caution, (x, y), caution)
image = blurry_image
return image
@classmethod
def _get_caution_img(cls) -> Image.Image:
import invokeai.app.assets.images as image_assets
caution = Image.open(Path(image_assets.__path__[0]) / "caution.png")
return caution.resize((caution.width // 2, caution.height // 2))
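A minimal usage sketch of the checker methods shown above (the image path is a placeholder):

from PIL import Image

image = Image.open("output.png")  # placeholder path
# blur_if_nsfw runs the NSFW check internally; if the image is flagged it returns
# a blurred copy with the caution image pasted in the centre, otherwise the original.
image = SafetyChecker.blur_if_nsfw(image)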

View File

@@ -0,0 +1,52 @@
import torch.nn as nn
def _conv_forward_asymmetric(self, input, weight, bias):
"""
Patch for Conv2d._conv_forward that supports asymmetric padding
"""
working = nn.functional.pad(input, self.asymmetric_padding["x"], mode=self.asymmetric_padding_mode["x"])
working = nn.functional.pad(working, self.asymmetric_padding["y"], mode=self.asymmetric_padding_mode["y"])
return nn.functional.conv2d(
working,
weight,
bias,
self.stride,
nn.modules.utils._pair(0),
self.dilation,
self.groups,
)
def configure_model_padding(model, seamless, seamless_axes):
"""
Modifies the 2D convolution layers to use a circular padding mode based on
the `seamless` and `seamless_axes` options.
"""
# TODO: get an explicit interface for this in diffusers: https://github.com/huggingface/diffusers/issues/556
for m in model.modules():
if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
if seamless:
m.asymmetric_padding_mode = {}
m.asymmetric_padding = {}
m.asymmetric_padding_mode["x"] = "circular" if ("x" in seamless_axes) else "constant"
m.asymmetric_padding["x"] = (
m._reversed_padding_repeated_twice[0],
m._reversed_padding_repeated_twice[1],
0,
0,
)
m.asymmetric_padding_mode["y"] = "circular" if ("y" in seamless_axes) else "constant"
m.asymmetric_padding["y"] = (
0,
0,
m._reversed_padding_repeated_twice[2],
m._reversed_padding_repeated_twice[3],
)
m._conv_forward = _conv_forward_asymmetric.__get__(m, nn.Conv2d)
else:
m._conv_forward = nn.Conv2d._conv_forward.__get__(m, nn.Conv2d)
if hasattr(m, "asymmetric_padding_mode"):
del m.asymmetric_padding_mode
if hasattr(m, "asymmetric_padding"):
del m.asymmetric_padding
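A usage sketch, assuming `model` is any torch module containing Conv2d layers (for example a UNet):

# Patch Conv2d layers for circular padding along the x axis only, run generation,
# then restore the default padding behaviour.
configure_model_padding(model, seamless=True, seamless_axes={"x"})
# ... run tiled/seamless generation ...
configure_model_padding(model, seamless=False, seamless_axes=set())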

View File

@@ -144,8 +144,10 @@ def resize_image_to_resolution(input_image: np.ndarray, resolution: int) -> np.n
h = float(input_image.shape[0])
w = float(input_image.shape[1])
scaling_factor = float(resolution) / min(h, w)
h = int(h * scaling_factor)
w = int(w * scaling_factor)
h *= scaling_factor
w *= scaling_factor
h = int(np.round(h / 64.0)) * 64
w = int(np.round(w / 64.0)) * 64
if scaling_factor > 1:
return cv2.resize(input_image, (w, h), interpolation=cv2.INTER_LANCZOS4)
else:
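Worked through with illustrative numbers, the rounded-to-64 variant above gives:

import numpy as np

h, w, resolution = 768.0, 1024.0, 512
scaling_factor = resolution / min(h, w)        # 512 / 768 = ~0.667
h, w = h * scaling_factor, w * scaling_factor  # 512.0, ~682.7
h = int(np.round(h / 64.0)) * 64               # 512
w = int(np.round(w / 64.0)) * 64               # 704 (682.7 / 64 = ~10.7 -> 11 -> 704)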

View File

@@ -51,7 +51,6 @@ LEGACY_CONFIGS: Dict[BaseModelType, Dict[ModelVariantType, Union[str, Dict[Sched
},
BaseModelType.StableDiffusionXL: {
ModelVariantType.Normal: "sd_xl_base.yaml",
ModelVariantType.Inpaint: "sd_xl_inpaint.yaml",
},
BaseModelType.StableDiffusionXLRefiner: {
ModelVariantType.Normal: "sd_xl_refiner.yaml",

View File

@@ -13,7 +13,6 @@ from diffusers import (
LCMScheduler,
LMSDiscreteScheduler,
PNDMScheduler,
TCDScheduler,
UniPCMultistepScheduler,
)
@@ -41,5 +40,4 @@ SCHEDULER_MAP = {
"dpmpp_sde_k": (DPMSolverSDEScheduler, {"use_karras_sigmas": True, "noise_sampler_seed": 0}),
"unipc": (UniPCMultistepScheduler, {"cpu_only": True}),
"lcm": (LCMScheduler, {}),
"tcd": (TCDScheduler, {}),
}

View File

@@ -1,51 +1,89 @@
from contextlib import contextmanager
from typing import Callable, List, Optional, Tuple, Union
from __future__ import annotations
from contextlib import contextmanager
from typing import Callable, List, Union
import torch
import torch.nn as nn
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
from diffusers.models.lora import LoRACompatibleConv
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
def _conv_forward_asymmetric(self, input, weight, bias):
"""
Patch for Conv2d._conv_forward that supports asymmetric padding
"""
working = nn.functional.pad(input, self.asymmetric_padding["x"], mode=self.asymmetric_padding_mode["x"])
working = nn.functional.pad(working, self.asymmetric_padding["y"], mode=self.asymmetric_padding_mode["y"])
return nn.functional.conv2d(
working,
weight,
bias,
self.stride,
nn.modules.utils._pair(0),
self.dilation,
self.groups,
)
@contextmanager
def set_seamless(model: Union[UNet2DConditionModel, AutoencoderKL, AutoencoderTiny], seamless_axes: List[str]):
if not seamless_axes:
yield
return
# override conv_forward
# https://github.com/huggingface/diffusers/issues/556#issuecomment-1993287019
def _conv_forward_asymmetric(self, input: torch.Tensor, weight: torch.Tensor, bias: Optional[torch.Tensor] = None):
self.paddingX = (self._reversed_padding_repeated_twice[0], self._reversed_padding_repeated_twice[1], 0, 0)
self.paddingY = (0, 0, self._reversed_padding_repeated_twice[2], self._reversed_padding_repeated_twice[3])
working = torch.nn.functional.pad(input, self.paddingX, mode=x_mode)
working = torch.nn.functional.pad(working, self.paddingY, mode=y_mode)
return torch.nn.functional.conv2d(
working, weight, bias, self.stride, torch.nn.modules.utils._pair(0), self.dilation, self.groups
)
original_layers: List[Tuple[nn.Conv2d, Callable]] = []
# Callable: (input: Tensor, weight: Tensor, bias: Optional[Tensor]) -> Tensor
to_restore: list[tuple[nn.Conv2d | nn.ConvTranspose2d, Callable]] = []
try:
x_mode = "circular" if "x" in seamless_axes else "constant"
y_mode = "circular" if "y" in seamless_axes else "constant"
# Hard coded to skip down block layers, allowing for seamless tiling at the expense of prompt adherence
skipped_layers = 1
for m_name, m in model.named_modules():
if not isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
continue
conv_layers: List[torch.nn.Conv2d] = []
if isinstance(model, UNet2DConditionModel) and m_name.startswith("down_blocks.") and ".resnets." in m_name:
# down_blocks.1.resnets.1.conv1
_, block_num, _, resnet_num, submodule_name = m_name.split(".")
block_num = int(block_num)
resnet_num = int(resnet_num)
for module in model.modules():
if isinstance(module, torch.nn.Conv2d):
conv_layers.append(module)
if block_num >= len(model.down_blocks) - skipped_layers:
continue
for layer in conv_layers:
if isinstance(layer, LoRACompatibleConv) and layer.lora_layer is None:
layer.lora_layer = lambda *x: 0
original_layers.append((layer, layer._conv_forward))
layer._conv_forward = _conv_forward_asymmetric.__get__(layer, torch.nn.Conv2d)
# Skip the second resnet (could be configurable)
if resnet_num > 0:
continue
# Skip Conv2d layers (could be configurable)
if submodule_name == "conv2":
continue
m.asymmetric_padding_mode = {}
m.asymmetric_padding = {}
m.asymmetric_padding_mode["x"] = "circular" if ("x" in seamless_axes) else "constant"
m.asymmetric_padding["x"] = (
m._reversed_padding_repeated_twice[0],
m._reversed_padding_repeated_twice[1],
0,
0,
)
m.asymmetric_padding_mode["y"] = "circular" if ("y" in seamless_axes) else "constant"
m.asymmetric_padding["y"] = (
0,
0,
m._reversed_padding_repeated_twice[2],
m._reversed_padding_repeated_twice[3],
)
to_restore.append((m, m._conv_forward))
m._conv_forward = _conv_forward_asymmetric.__get__(m, nn.Conv2d)
yield
finally:
for layer, orig_conv_forward in original_layers:
layer._conv_forward = orig_conv_forward
for module, orig_conv_forward in to_restore:
module._conv_forward = orig_conv_forward
if hasattr(module, "asymmetric_padding_mode"):
del module.asymmetric_padding_mode
if hasattr(module, "asymmetric_padding"):
del module.asymmetric_padding
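A minimal sketch of the context manager in use; `unet`, `latents`, `timestep` and `embeds` are assumed to be already prepared:

# Conv2d layers are patched for circular padding only inside the `with` block,
# and every patched layer is restored in the `finally` clause afterwards.
with set_seamless(unet, ["x", "y"]):
    noise_pred = unet(latents, timestep, encoder_hidden_states=embeds).sample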

View File

@@ -1,98 +0,0 @@
model:
target: sgm.models.diffusion.DiffusionEngine
params:
scale_factor: 0.13025
disable_first_stage_autocast: True
denoiser_config:
target: sgm.modules.diffusionmodules.denoiser.DiscreteDenoiser
params:
num_idx: 1000
weighting_config:
target: sgm.modules.diffusionmodules.denoiser_weighting.EpsWeighting
scaling_config:
target: sgm.modules.diffusionmodules.denoiser_scaling.EpsScaling
discretization_config:
target: sgm.modules.diffusionmodules.discretizer.LegacyDDPMDiscretization
network_config:
target: sgm.modules.diffusionmodules.openaimodel.UNetModel
params:
adm_in_channels: 2816
num_classes: sequential
use_checkpoint: True
in_channels: 9
out_channels: 4
model_channels: 320
attention_resolutions: [4, 2]
num_res_blocks: 2
channel_mult: [1, 2, 4]
num_head_channels: 64
use_spatial_transformer: True
use_linear_in_transformer: True
transformer_depth: [1, 2, 10] # note: the first is unused (due to attn_res starting at 2) 32, 16, 8 --> 64, 32, 16
context_dim: 2048
spatial_transformer_attn_type: softmax-xformers
legacy: False
conditioner_config:
target: sgm.modules.GeneralConditioner
params:
emb_models:
# crossattn cond
- is_trainable: False
input_key: txt
target: sgm.modules.encoders.modules.FrozenCLIPEmbedder
params:
layer: hidden
layer_idx: 11
# crossattn and vector cond
- is_trainable: False
input_key: txt
target: sgm.modules.encoders.modules.FrozenOpenCLIPEmbedder2
params:
arch: ViT-bigG-14
version: laion2b_s39b_b160k
freeze: True
layer: penultimate
always_return_pooled: True
legacy: False
# vector cond
- is_trainable: False
input_key: original_size_as_tuple
target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
params:
outdim: 256 # multiplied by two
# vector cond
- is_trainable: False
input_key: crop_coords_top_left
target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
params:
outdim: 256 # multiplied by two
# vector cond
- is_trainable: False
input_key: target_size_as_tuple
target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
params:
outdim: 256 # multiplied by two
first_stage_config:
target: sgm.models.autoencoder.AutoencoderKLInferenceWrapper
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
attn_type: vanilla-xformers
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult: [1, 2, 4, 4]
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity

View File

@@ -10,8 +10,6 @@ module.exports = {
'path/no-relative-imports': ['error', { maxDepth: 0 }],
// https://github.com/edvardchen/eslint-plugin-i18next/blob/HEAD/docs/rules/no-literal-string.md
'i18next/no-literal-string': 'error',
// https://eslint.org/docs/latest/rules/no-console
'no-console': 'error',
},
overrides: [
/**

View File

@@ -43,5 +43,4 @@ stats.html
yalc.lock
# vitest
tsconfig.vitest-temp.json
coverage/
tsconfig.vitest-temp.json

View File

@@ -25,7 +25,7 @@
"typegen": "node scripts/typegen.js",
"preview": "vite preview",
"lint:knip": "knip",
"lint:dpdm": "dpdm --no-warning --no-tree --transform --exit-code circular:1 src/main.tsx",
"lint:dpdm": "dpdm --no-warning --no-tree --transform --exit-code circular:0 src/main.tsx",
"lint:eslint": "eslint --max-warnings=0 .",
"lint:prettier": "prettier --check .",
"lint:tsc": "tsc --noEmit",
@@ -35,7 +35,6 @@
"storybook": "storybook dev -p 6006",
"build-storybook": "storybook build",
"test": "vitest",
"test:ui": "vitest --coverage --ui",
"test:no-watch": "vitest --no-watch"
},
"madge": {
@@ -53,48 +52,47 @@
},
"dependencies": {
"@chakra-ui/react-use-size": "^2.1.0",
"@dagrejs/dagre": "^1.1.2",
"@dagrejs/graphlib": "^2.2.2",
"@dagrejs/dagre": "^1.1.1",
"@dagrejs/graphlib": "^2.2.1",
"@dnd-kit/core": "^6.1.0",
"@dnd-kit/sortable": "^8.0.0",
"@dnd-kit/utilities": "^3.2.2",
"@fontsource-variable/inter": "^5.0.18",
"@invoke-ai/ui-library": "^0.0.25",
"@fontsource-variable/inter": "^5.0.17",
"@invoke-ai/ui-library": "^0.0.21",
"@nanostores/react": "^0.7.2",
"@reduxjs/toolkit": "2.2.3",
"@reduxjs/toolkit": "2.2.2",
"@roarr/browser-log-writer": "^1.3.0",
"chakra-react-select": "^4.7.6",
"compare-versions": "^6.1.0",
"dateformat": "^5.0.3",
"fracturedjsonjs": "^4.0.1",
"framer-motion": "^11.1.8",
"i18next": "^23.11.3",
"i18next-http-backend": "^2.5.1",
"framer-motion": "^11.0.22",
"i18next": "^23.10.1",
"i18next-http-backend": "^2.5.0",
"idb-keyval": "^6.2.1",
"jsondiffpatch": "^0.6.0",
"konva": "^9.3.6",
"lodash-es": "^4.17.21",
"nanostores": "^0.10.3",
"nanostores": "^0.10.0",
"new-github-issue-url": "^1.0.0",
"overlayscrollbars": "^2.7.3",
"overlayscrollbars-react": "^0.5.6",
"overlayscrollbars": "^2.6.1",
"overlayscrollbars-react": "^0.5.5",
"query-string": "^9.0.0",
"react": "^18.3.1",
"react": "^18.2.0",
"react-colorful": "^5.6.1",
"react-dom": "^18.3.1",
"react-dom": "^18.2.0",
"react-dropzone": "^14.2.3",
"react-error-boundary": "^4.0.13",
"react-hook-form": "^7.51.4",
"react-hook-form": "^7.51.2",
"react-hotkeys-hook": "4.5.0",
"react-i18next": "^14.1.1",
"react-icons": "^5.2.0",
"react-i18next": "^14.1.0",
"react-icons": "^5.0.1",
"react-konva": "^18.2.10",
"react-redux": "9.1.2",
"react-resizable-panels": "^2.0.19",
"react-redux": "9.1.0",
"react-resizable-panels": "^2.0.16",
"react-select": "5.8.0",
"react-use": "^17.5.0",
"react-virtuoso": "^4.7.10",
"reactflow": "^11.11.3",
"react-virtuoso": "^4.7.5",
"reactflow": "^11.10.4",
"redux-dynamic-middlewares": "^2.2.0",
"redux-remember": "^5.1.0",
"redux-undo": "^1.1.0",
@@ -103,11 +101,10 @@
"serialize-error": "^11.0.3",
"socket.io-client": "^4.7.5",
"use-debounce": "^10.0.0",
"use-device-pixel-ratio": "^1.1.2",
"use-image": "^1.1.1",
"uuid": "^9.0.1",
"zod": "^3.23.6",
"zod-validation-error": "^3.2.0"
"zod": "^3.22.4",
"zod-validation-error": "^3.0.3"
},
"peerDependencies": {
"@chakra-ui/react": "^2.8.2",
@@ -118,42 +115,40 @@
"devDependencies": {
"@invoke-ai/eslint-config-react": "^0.0.14",
"@invoke-ai/prettier-config-react": "^0.0.7",
"@storybook/addon-essentials": "^8.0.10",
"@storybook/addon-interactions": "^8.0.10",
"@storybook/addon-links": "^8.0.10",
"@storybook/addon-storysource": "^8.0.10",
"@storybook/manager-api": "^8.0.10",
"@storybook/react": "^8.0.10",
"@storybook/react-vite": "^8.0.10",
"@storybook/theming": "^8.0.10",
"@storybook/addon-essentials": "^8.0.4",
"@storybook/addon-interactions": "^8.0.4",
"@storybook/addon-links": "^8.0.4",
"@storybook/addon-storysource": "^8.0.4",
"@storybook/manager-api": "^8.0.4",
"@storybook/react": "^8.0.4",
"@storybook/react-vite": "^8.0.4",
"@storybook/theming": "^8.0.4",
"@types/dateformat": "^5.0.2",
"@types/lodash-es": "^4.17.12",
"@types/node": "^20.12.10",
"@types/react": "^18.3.1",
"@types/react-dom": "^18.3.0",
"@types/node": "^20.11.30",
"@types/react": "^18.2.73",
"@types/react-dom": "^18.2.22",
"@types/uuid": "^9.0.8",
"@vitejs/plugin-react-swc": "^3.6.0",
"@vitest/coverage-v8": "^1.5.0",
"@vitest/ui": "^1.5.0",
"concurrently": "^8.2.2",
"dpdm": "^3.14.0",
"eslint": "^8.57.0",
"eslint-plugin-i18next": "^6.0.3",
"eslint-plugin-path": "^1.3.0",
"knip": "^5.12.3",
"knip": "^5.6.1",
"openapi-types": "^12.1.3",
"openapi-typescript": "^6.7.5",
"prettier": "^3.2.5",
"rollup-plugin-visualizer": "^5.12.0",
"storybook": "^8.0.10",
"storybook": "^8.0.4",
"ts-toolbelt": "^9.6.0",
"tsafe": "^1.6.6",
"typescript": "^5.4.5",
"vite": "^5.2.11",
"vite-plugin-css-injected-by-js": "^3.5.1",
"vite-plugin-dts": "^3.9.1",
"typescript": "^5.4.3",
"vite": "^5.2.6",
"vite-plugin-css-injected-by-js": "^3.5.0",
"vite-plugin-dts": "^3.8.0",
"vite-plugin-eslint": "^1.8.1",
"vite-tsconfig-paths": "^4.3.2",
"vitest": "^1.6.0"
"vitest": "^1.4.0"
}
}

File diff suppressed because it is too large

View File

@@ -76,9 +76,7 @@
"aboutHeading": "Nutzen Sie Ihre kreative Energie",
"toResolve": "Lösen",
"add": "Hinzufügen",
"loglevel": "Protokoll Stufe",
"selected": "Ausgewählt",
"beta": "Beta"
"loglevel": "Protokoll Stufe"
},
"gallery": {
"galleryImageSize": "Bildgröße",
@@ -88,7 +86,7 @@
"noImagesInGallery": "Keine Bilder in der Galerie",
"loading": "Lade",
"deleteImage_one": "Lösche Bild",
"deleteImage_other": "Lösche {{count}} Bilder",
"deleteImage_other": "",
"copy": "Kopieren",
"download": "Runterladen",
"setCurrentImage": "Setze aktuelle Bild",
@@ -399,14 +397,7 @@
"cancel": "Stornieren",
"defaultSettingsSaved": "Standardeinstellungen gespeichert",
"addModels": "Model hinzufügen",
"deleteModelImage": "Lösche Model Bild",
"hfTokenInvalidErrorMessage": "Falscher oder fehlender HuggingFace Schlüssel.",
"huggingFaceRepoID": "HuggingFace Repo ID",
"hfToken": "HuggingFace Schlüssel",
"hfTokenInvalid": "Falscher oder fehlender HF Schlüssel",
"huggingFacePlaceholder": "besitzer/model-name",
"hfTokenSaved": "HF Schlüssel gespeichert",
"hfTokenUnableToVerify": "Konnte den HF Schlüssel nicht validieren"
"deleteModelImage": "Lösche Model Bild"
},
"parameters": {
"images": "Bilder",
@@ -695,11 +686,7 @@
"hands": "Hände",
"dwOpenpose": "DW Openpose",
"dwOpenposeDescription": "Posenschätzung mit DW Openpose",
"selectCLIPVisionModel": "Wähle ein CLIP Vision Model aus",
"ipAdapterMethod": "Methode",
"composition": "Nur Komposition",
"full": "Voll",
"style": "Nur Style"
"selectCLIPVisionModel": "Wähle ein CLIP Vision Model aus"
},
"queue": {
"status": "Status",
@@ -730,6 +717,7 @@
"resume": "Wieder aufnehmen",
"item": "Auftrag",
"notReady": "Warteschlange noch nicht bereit",
"queueCountPrediction": "{{promptsCount}} Prompts × {{iterations}} Iterationen -> {{count}} Generationen",
"clearQueueAlertDialog": "\"Die Warteschlange leeren\" stoppt den aktuellen Prozess und leert die Warteschlange komplett.",
"completedIn": "Fertig in",
"cancelBatchSucceeded": "Stapel abgebrochen",

View File

@@ -88,13 +88,11 @@
"negativePrompt": "Negative Prompt",
"discordLabel": "Discord",
"dontAskMeAgain": "Don't ask me again",
"editor": "Editor",
"error": "Error",
"file": "File",
"folder": "Folder",
"format": "format",
"githubLabel": "Github",
"goTo": "Go to",
"hotkeysLabel": "Hotkeys",
"imageFailedToLoad": "Unable to Load Image",
"img2img": "Image To Image",
@@ -142,11 +140,7 @@
"blue": "Blue",
"alpha": "Alpha",
"selected": "Selected",
"tab": "Tab",
"viewing": "Viewing",
"viewingDesc": "Review images in a large gallery view",
"editing": "Editing",
"editingDesc": "Edit on the Control Layers canvas"
"viewer": "Viewer"
},
"controlnet": {
"controlAdapter_one": "Control Adapter",
@@ -162,7 +156,6 @@
"balanced": "Balanced",
"base": "Base",
"beginEndStepPercent": "Begin / End Step Percentage",
"beginEndStepPercentShort": "Begin/End %",
"bgth": "bg_th",
"canny": "Canny",
"cannyDescription": "Canny edge detection",
@@ -231,11 +224,10 @@
"composition": "Composition Only",
"safe": "Safe",
"saveControlImage": "Save Control Image",
"scribble": "Scribble",
"scribble": "scribble",
"selectModel": "Select a model",
"selectCLIPVisionModel": "Select a CLIP Vision model",
"setControlImageDimensions": "Copy size to W/H (optimize for model)",
"setControlImageDimensionsForce": "Copy size to W/H (ignore model)",
"setControlImageDimensions": "Set Control Image Dimensions To W/H",
"showAdvanced": "Show Advanced",
"small": "Small",
"toggleControlNet": "Toggle this ControlNet",
@@ -261,6 +253,7 @@
"queue": "Queue",
"queueFront": "Add to Front of Queue",
"queueBack": "Add to Queue",
"queueCountPrediction": "{{promptsCount}} prompts \u00d7 {{iterations}} iterations -> {{count}} generations",
"queueEmpty": "Queue Empty",
"enqueueing": "Queueing Batch",
"resume": "Resume",
@@ -313,13 +306,7 @@
"batchFailedToQueue": "Failed to Queue Batch",
"graphQueued": "Graph queued",
"graphFailedToQueue": "Failed to queue graph",
"openQueue": "Open Queue",
"prompts_one": "Prompt",
"prompts_other": "Prompts",
"iterations_one": "Iteration",
"iterations_other": "Iterations",
"generations_one": "Generation",
"generations_other": "Generations"
"openQueue": "Open Queue"
},
"invocationCache": {
"invocationCache": "Invocation Cache",
@@ -595,10 +582,6 @@
"upscale": {
"desc": "Upscale the current image",
"title": "Upscale"
},
"toggleViewer": {
"desc": "Switches between the Image Viewer and workspace for the current tab.",
"title": "Toggle Image Viewer"
}
},
"metadata": {
@@ -774,15 +757,10 @@
"cannotConnectOutputToOutput": "Cannot connect output to output",
"cannotConnectToSelf": "Cannot connect to self",
"cannotDuplicateConnection": "Cannot create duplicate connections",
"cannotMixAndMatchCollectionItemTypes": "Cannot mix and match collection item types",
"missingNode": "Missing invocation node",
"missingInvocationTemplate": "Missing invocation template",
"missingFieldTemplate": "Missing field template",
"nodePack": "Node pack",
"collection": "Collection",
"singleFieldType": "{{name}} (Single)",
"collectionFieldType": "{{name}} (Collection)",
"collectionOrScalarFieldType": "{{name}} (Single or Collection)",
"collectionFieldType": "{{name}} Collection",
"collectionOrScalarFieldType": "{{name}} Collection|Scalar",
"colorCodeEdges": "Color-Code Edges",
"colorCodeEdgesHelp": "Color-code edges according to their connected fields",
"connectionWouldCreateCycle": "Connection would create a cycle",
@@ -884,7 +862,6 @@
"versionUnknown": " Version Unknown",
"workflow": "Workflow",
"graph": "Graph",
"noGraph": "No Graph",
"workflowAuthor": "Author",
"workflowContact": "Contact",
"workflowDescription": "Short Description",
@@ -938,37 +915,23 @@
"missingInputForField": "{{nodeLabel}} -> {{fieldLabel}} missing input",
"missingNodeTemplate": "Missing node template",
"noControlImageForControlAdapter": "Control Adapter #{{number}} has no control image",
"imageNotProcessedForControlAdapter": "Control Adapter #{{number}}'s image is not processed",
"noInitialImageSelected": "No initial image selected",
"noModelForControlAdapter": "Control Adapter #{{number}} has no model selected.",
"incompatibleBaseModelForControlAdapter": "Control Adapter #{{number}} model is incompatible with main model.",
"noModelSelected": "No model selected",
"noPrompts": "No prompts generated",
"noNodesInGraph": "No nodes in graph",
"systemDisconnected": "System disconnected",
"layer": {
"initialImageNoImageSelected": "no initial image selected",
"controlAdapterNoModelSelected": "no Control Adapter model selected",
"controlAdapterIncompatibleBaseModel": "incompatible Control Adapter base model",
"controlAdapterNoImageSelected": "no Control Adapter image selected",
"controlAdapterImageNotProcessed": "Control Adapter image not processed",
"t2iAdapterIncompatibleDimensions": "T2I Adapter requires image dimension to be multiples of {{multiple}}",
"ipAdapterNoModelSelected": "no IP adapter selected",
"ipAdapterIncompatibleBaseModel": "incompatible IP Adapter base model",
"ipAdapterNoImageSelected": "no IP Adapter image selected",
"rgNoPromptsOrIPAdapters": "no text prompts or IP Adapters",
"rgNoRegion": "no region selected"
}
"systemDisconnected": "System disconnected"
},
"maskBlur": "Mask Blur",
"negativePromptPlaceholder": "Negative Prompt",
"globalNegativePromptPlaceholder": "Global Negative Prompt",
"noiseThreshold": "Noise Threshold",
"patchmatchDownScaleSize": "Downscale",
"perlinNoise": "Perlin Noise",
"positivePromptPlaceholder": "Positive Prompt",
"globalPositivePromptPlaceholder": "Global Positive Prompt",
"iterations": "Iterations",
"iterationsWithCount_one": "{{count}} Iteration",
"iterationsWithCount_other": "{{count}} Iterations",
"scale": "Scale",
"scaleBeforeProcessing": "Scale Before Processing",
"scaledHeight": "Scaled H",
@@ -1548,7 +1511,7 @@
"app": {
"storeNotInitialized": "Store is not initialized"
},
"controlLayers": {
"regionalPrompts": {
"deleteAll": "Delete All",
"addLayer": "Add Layer",
"moveToFront": "Move to Front",
@@ -1556,7 +1519,8 @@
"moveForward": "Move Forward",
"moveBackward": "Move Backward",
"brushSize": "Brush Size",
"controlLayers": "Control Layers",
"regionalControl": "Regional Control (ALPHA)",
"enableRegionalPrompts": "Enable $t(regionalPrompts.regionalPrompts)",
"globalMaskOpacity": "Global Mask Opacity",
"autoNegative": "Auto Negative",
"toggleVisibility": "Toggle Layer Visibility",
@@ -1567,35 +1531,6 @@
"maskPreviewColor": "Mask Preview Color",
"addPositivePrompt": "Add $t(common.positivePrompt)",
"addNegativePrompt": "Add $t(common.negativePrompt)",
"addIPAdapter": "Add $t(common.ipAdapter)",
"regionalGuidance": "Regional Guidance",
"regionalGuidanceLayer": "$t(controlLayers.regionalGuidance) $t(unifiedCanvas.layer)",
"opacity": "Opacity",
"globalControlAdapter": "Global $t(controlnet.controlAdapter_one)",
"globalControlAdapterLayer": "Global $t(controlnet.controlAdapter_one) $t(unifiedCanvas.layer)",
"globalIPAdapter": "Global $t(common.ipAdapter)",
"globalIPAdapterLayer": "Global $t(common.ipAdapter) $t(unifiedCanvas.layer)",
"globalInitialImage": "Global Initial Image",
"globalInitialImageLayer": "$t(controlLayers.globalInitialImage) $t(unifiedCanvas.layer)",
"opacityFilter": "Opacity Filter",
"clearProcessor": "Clear Processor",
"resetProcessor": "Reset Processor to Defaults",
"noLayersAdded": "No Layers Added",
"layers_one": "Layer",
"layers_other": "Layers"
},
"ui": {
"tabs": {
"generation": "Generation",
"generationTab": "$t(ui.tabs.generation) $t(common.tab)",
"canvas": "Canvas",
"canvasTab": "$t(ui.tabs.canvas) $t(common.tab)",
"workflows": "Workflows",
"workflowsTab": "$t(ui.tabs.workflows) $t(common.tab)",
"models": "Models",
"modelsTab": "$t(ui.tabs.models) $t(common.tab)",
"queue": "Queue",
"queueTab": "$t(ui.tabs.queue) $t(common.tab)"
}
"addIPAdapter": "Add $t(common.ipAdapter)"
}
}

View File

@@ -25,24 +25,7 @@
"areYouSure": "¿Estas seguro?",
"batch": "Administrador de lotes",
"modelManager": "Administrador de modelos",
"communityLabel": "Comunidad",
"direction": "Dirección",
"ai": "Ia",
"add": "Añadir",
"auto": "Automático",
"copyError": "Error $t(gallery.copy)",
"details": "Detalles",
"or": "o",
"checkpoint": "Punto de control",
"controlNet": "ControlNet",
"aboutHeading": "Sea dueño de su poder creativo",
"advanced": "Avanzado",
"data": "Fecha",
"delete": "Borrar",
"copy": "Copiar",
"beta": "Beta",
"on": "En",
"aboutDesc": "¿Utilizas Invoke para trabajar? Mira aquí:"
"communityLabel": "Comunidad"
},
"gallery": {
"galleryImageSize": "Tamaño de la imagen",
@@ -460,13 +443,7 @@
"previousImage": "Imagen anterior",
"nextImage": "Siguiente imagen",
"showOptionsPanel": "Mostrar el panel lateral",
"menu": "Menú",
"showGalleryPanel": "Mostrar panel de galería",
"loadMore": "Cargar más",
"about": "Acerca de",
"createIssue": "Crear un problema",
"resetUI": "Interfaz de usuario $t(accessibility.reset)",
"mode": "Modo"
"menu": "Menú"
},
"nodes": {
"zoomInNodes": "Acercar",
@@ -479,68 +456,5 @@
"reloadNodeTemplates": "Recargar las plantillas de nodos",
"loadWorkflow": "Cargar el flujo de trabajo",
"downloadWorkflow": "Descargar el flujo de trabajo en un archivo JSON"
},
"boards": {
"autoAddBoard": "Agregar panel automáticamente",
"changeBoard": "Cambiar el panel",
"clearSearch": "Borrar la búsqueda",
"deleteBoard": "Borrar el panel",
"selectBoard": "Seleccionar un panel",
"uncategorized": "Sin categoría",
"cancel": "Cancelar",
"addBoard": "Agregar un panel",
"movingImagesToBoard_one": "Moviendo {{count}} imagen al panel:",
"movingImagesToBoard_many": "Moviendo {{count}} imágenes al panel:",
"movingImagesToBoard_other": "Moviendo {{count}} imágenes al panel:",
"bottomMessage": "Al eliminar este panel y las imágenes que contiene, se restablecerán las funciones que los estén utilizando actualmente.",
"deleteBoardAndImages": "Borrar el panel y las imágenes",
"loading": "Cargando...",
"deletedBoardsCannotbeRestored": "Los paneles eliminados no se pueden restaurar",
"move": "Mover",
"menuItemAutoAdd": "Agregar automáticamente a este panel",
"searchBoard": "Buscando paneles…",
"topMessage": "Este panel contiene imágenes utilizadas en las siguientes funciones:",
"downloadBoard": "Descargar panel",
"deleteBoardOnly": "Borrar solo el panel",
"myBoard": "Mi panel",
"noMatching": "No hay paneles que coincidan"
},
"accordions": {
"compositing": {
"title": "Composición",
"infillTab": "Relleno"
},
"generation": {
"title": "Generación"
},
"image": {
"title": "Imagen"
},
"control": {
"title": "Control"
},
"advanced": {
"options": "$t(accordions.advanced.title) opciones",
"title": "Avanzado"
}
},
"ui": {
"tabs": {
"generationTab": "$t(ui.tabs.generation) $t(common.tab)",
"canvas": "Lienzo",
"generation": "Generación",
"queue": "Cola",
"queueTab": "$t(ui.tabs.queue) $t(common.tab)",
"workflows": "Flujos de trabajo",
"models": "Modelos",
"modelsTab": "$t(ui.tabs.models) $t(common.tab)",
"canvasTab": "$t(ui.tabs.canvas) $t(common.tab)",
"workflowsTab": "$t(ui.tabs.workflows) $t(common.tab)"
}
},
"controlLayers": {
"layers_one": "Capa",
"layers_many": "Capas",
"layers_other": "Capas"
}
}

View File

@@ -5,7 +5,7 @@
"reportBugLabel": "Segnala un errore",
"settingsLabel": "Impostazioni",
"img2img": "Immagine a Immagine",
"unifiedCanvas": "Tela",
"unifiedCanvas": "Tela unificata",
"nodes": "Flussi di lavoro",
"upload": "Caricamento",
"load": "Carica",
@@ -74,18 +74,7 @@
"file": "File",
"toResolve": "Da risolvere",
"add": "Aggiungi",
"loglevel": "Livello di log",
"beta": "Beta",
"positivePrompt": "Prompt positivo",
"negativePrompt": "Prompt negativo",
"selected": "Selezionato",
"goTo": "Vai a",
"editor": "Editor",
"tab": "Scheda",
"viewing": "Visualizza",
"viewingDesc": "Rivedi le immagini in un'ampia vista della galleria",
"editing": "Modifica",
"editingDesc": "Modifica nell'area Livelli di controllo"
"loglevel": "Livello di log"
},
"gallery": {
"galleryImageSize": "Dimensione dell'immagine",
@@ -191,8 +180,8 @@
"desc": "Mostra le informazioni sui metadati dell'immagine corrente"
},
"sendToImageToImage": {
"title": "Invia a Generazione da immagine",
"desc": "Invia l'immagine corrente a Generazione da immagine"
"title": "Invia a Immagine a Immagine",
"desc": "Invia l'immagine corrente a da Immagine a Immagine"
},
"deleteImage": {
"title": "Elimina immagine",
@@ -345,10 +334,6 @@
"remixImage": {
"desc": "Utilizza tutti i parametri tranne il seme dell'immagine corrente",
"title": "Remixa l'immagine"
},
"toggleViewer": {
"title": "Attiva/disattiva il visualizzatore di immagini",
"desc": "Passa dal Visualizzatore immagini all'area di lavoro per la scheda corrente."
}
},
"modelManager": {
@@ -486,8 +471,8 @@
"scaledHeight": "Altezza ridimensionata",
"infillMethod": "Metodo di riempimento",
"tileSize": "Dimensione piastrella",
"sendToImg2Img": "Invia a Generazione da immagine",
"sendToUnifiedCanvas": "Invia alla Tela",
"sendToImg2Img": "Invia a Immagine a Immagine",
"sendToUnifiedCanvas": "Invia a Tela Unificata",
"downloadImage": "Scarica l'immagine",
"usePrompt": "Usa Prompt",
"useSeed": "Usa Seme",
@@ -523,11 +508,13 @@
"incompatibleBaseModelForControlAdapter": "Il modello dell'adattatore di controllo #{{number}} non è compatibile con il modello principale.",
"missingNodeTemplate": "Modello di nodo mancante",
"missingInputForField": "{{nodeLabel}} -> {{fieldLabel}} ingresso mancante",
"missingFieldTemplate": "Modello di campo mancante",
"imageNotProcessedForControlAdapter": "L'immagine dell'adattatore di controllo #{{number}} non è stata elaborata"
"missingFieldTemplate": "Modello di campo mancante"
},
"useCpuNoise": "Usa la CPU per generare rumore",
"iterations": "Iterazioni",
"iterationsWithCount_one": "{{count}} Iterazione",
"iterationsWithCount_many": "{{count}} Iterazioni",
"iterationsWithCount_other": "{{count}} Iterazioni",
"isAllowedToUpscale": {
"useX2Model": "L'immagine è troppo grande per l'ampliamento con il modello x4, utilizza il modello x2",
"tooLarge": "L'immagine è troppo grande per l'ampliamento, seleziona un'immagine più piccola"
@@ -547,10 +534,7 @@
"infillMosaicMinColor": "Colore minimo",
"infillMosaicMaxColor": "Colore massimo",
"infillMosaicTileHeight": "Altezza piastrella",
"infillColorValue": "Colore di riempimento",
"globalSettings": "Impostazioni globali",
"globalPositivePromptPlaceholder": "Prompt positivo globale",
"globalNegativePromptPlaceholder": "Prompt negativo globale"
"infillColorValue": "Colore di riempimento"
},
"settings": {
"models": "Modelli",
@@ -575,7 +559,7 @@
"intermediatesCleared_one": "Cancellata {{count}} immagine intermedia",
"intermediatesCleared_many": "Cancellate {{count}} immagini intermedie",
"intermediatesCleared_other": "Cancellate {{count}} immagini intermedie",
"clearIntermediatesDesc1": "La cancellazione delle immagini intermedie ripristinerà lo stato della Tela e degli Adattatori di Controllo.",
"clearIntermediatesDesc1": "La cancellazione delle immagini intermedie ripristinerà lo stato di Tela Unificata e ControlNet.",
"intermediatesClearedFailed": "Problema con la cancellazione delle immagini intermedie",
"clearIntermediatesWithCount_one": "Cancella {{count}} immagine intermedia",
"clearIntermediatesWithCount_many": "Cancella {{count}} immagini intermedie",
@@ -591,8 +575,8 @@
"imageCopied": "Immagine copiata",
"imageNotLoadedDesc": "Impossibile trovare l'immagine",
"canvasMerged": "Tela unita",
"sentToImageToImage": "Inviato a Generazione da immagine",
"sentToUnifiedCanvas": "Inviato alla Tela",
"sentToImageToImage": "Inviato a Immagine a Immagine",
"sentToUnifiedCanvas": "Inviato a Tela Unificata",
"parametersNotSet": "Parametri non impostati",
"metadataLoadFailed": "Impossibile caricare i metadati",
"serverError": "Errore del Server",
@@ -811,7 +795,7 @@
"float": "In virgola mobile",
"currentImageDescription": "Visualizza l'immagine corrente nell'editor dei nodi",
"fieldTypesMustMatch": "I tipi di campo devono corrispondere",
"edge": "Collegamento",
"edge": "Bordo",
"currentImage": "Immagine corrente",
"integer": "Numero Intero",
"inputMayOnlyHaveOneConnection": "L'ingresso può avere solo una connessione",
@@ -861,9 +845,7 @@
"resetToDefaultValue": "Ripristina il valore predefinito",
"noFieldsViewMode": "Questo flusso di lavoro non ha campi selezionati da visualizzare. Visualizza il flusso di lavoro completo per configurare i valori.",
"edit": "Modifica",
"graph": "Grafico",
"showEdgeLabelsHelp": "Mostra etichette sui collegamenti, che indicano i nodi collegati",
"showEdgeLabels": "Mostra le etichette del collegamento"
"graph": "Grafico"
},
"boards": {
"autoAddBoard": "Aggiungi automaticamente bacheca",
@@ -940,7 +922,7 @@
"colorMapTileSize": "Dimensione piastrella",
"mediapipeFaceDescription": "Rilevamento dei volti tramite Mediapipe",
"hedDescription": "Rilevamento dei bordi nidificati olisticamente",
"setControlImageDimensions": "Copia le dimensioni in L/A (ottimizza per il modello)",
"setControlImageDimensions": "Imposta le dimensioni dell'immagine di controllo su L/A",
"maxFaces": "Numero massimo di volti",
"addT2IAdapter": "Aggiungi $t(common.t2iAdapter)",
"addControlNet": "Aggiungi $t(common.controlNet)",
@@ -969,17 +951,12 @@
"mediapipeFace": "Mediapipe Volto",
"ip_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.ipAdapter))",
"t2i_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.t2iAdapter))",
"selectCLIPVisionModel": "Seleziona un modello CLIP Vision",
"ipAdapterMethod": "Metodo",
"full": "Completo",
"composition": "Solo la composizione",
"style": "Solo lo stile",
"beginEndStepPercentShort": "Inizio/Fine %",
"setControlImageDimensionsForce": "Copia le dimensioni in L/A (ignora il modello)"
"selectCLIPVisionModel": "Seleziona un modello CLIP Vision"
},
"queue": {
"queueFront": "Aggiungi all'inizio della coda",
"queueBack": "Aggiungi alla coda",
"queueCountPrediction": "{{promptsCount}} prompt × {{iterations}} iterazioni -> {{count}} generazioni",
"queue": "Coda",
"status": "Stato",
"pruneSucceeded": "Rimossi {{item_count}} elementi completati dalla coda",
@@ -1016,7 +993,7 @@
"cancelBatchSucceeded": "Lotto annullato",
"clearTooltip": "Annulla e cancella tutti gli elementi",
"current": "Attuale",
"pauseTooltip": "Sospendi l'elaborazione",
"pauseTooltip": "Sospende l'elaborazione",
"failed": "Falliti",
"cancelItem": "Annulla l'elemento",
"next": "Prossimo",
@@ -1417,12 +1394,6 @@
"paragraphs": [
"La dimensione del bordo del passaggio di coerenza."
]
},
"ipAdapterMethod": {
"heading": "Metodo",
"paragraphs": [
"Metodo con cui applicare l'adattatore IP corrente."
]
}
},
"sdxl": {
@@ -1551,56 +1522,5 @@
"compatibleEmbeddings": "Incorporamenti compatibili",
"addPromptTrigger": "Aggiungi Trigger nel prompt",
"noMatchingTriggers": "Nessun Trigger corrispondente"
},
"controlLayers": {
"opacityFilter": "Filtro opacità",
"deleteAll": "Cancella tutto",
"addLayer": "Aggiungi Livello",
"moveToFront": "Sposta in primo piano",
"moveToBack": "Sposta in fondo",
"moveForward": "Sposta avanti",
"moveBackward": "Sposta indietro",
"brushSize": "Dimensioni del pennello",
"globalMaskOpacity": "Opacità globale della maschera",
"autoNegative": "Auto Negativo",
"toggleVisibility": "Attiva/disattiva la visibilità dei livelli",
"deletePrompt": "Cancella il prompt",
"debugLayers": "Debug dei Livelli",
"rectangle": "Rettangolo",
"maskPreviewColor": "Colore anteprima maschera",
"addPositivePrompt": "Aggiungi $t(common.positivePrompt)",
"addNegativePrompt": "Aggiungi $t(common.negativePrompt)",
"addIPAdapter": "Aggiungi $t(common.ipAdapter)",
"regionalGuidance": "Guida regionale",
"regionalGuidanceLayer": "$t(unifiedCanvas.layer) $t(controlLayers.regionalGuidance)",
"opacity": "Opacità",
"globalControlAdapter": "$t(controlnet.controlAdapter_one) Globale",
"globalControlAdapterLayer": "$t(controlnet.controlAdapter_one) - $t(unifiedCanvas.layer) Globale",
"globalIPAdapter": "$t(common.ipAdapter) Globale",
"globalIPAdapterLayer": "$t(common.ipAdapter) - $t(unifiedCanvas.layer) Globale",
"globalInitialImage": "Immagine iniziale",
"globalInitialImageLayer": "$t(controlLayers.globalInitialImage) - $t(unifiedCanvas.layer) Globale",
"clearProcessor": "Cancella processore",
"resetProcessor": "Ripristina il processore alle impostazioni predefinite",
"noLayersAdded": "Nessun livello aggiunto",
"resetRegion": "Reimposta la regione",
"controlLayers": "Livelli di controllo",
"layers_one": "Livello",
"layers_many": "Livelli",
"layers_other": "Livelli"
},
"ui": {
"tabs": {
"generation": "Generazione",
"generationTab": "$t(ui.tabs.generation) $t(common.tab)",
"canvas": "Tela",
"canvasTab": "$t(ui.tabs.canvas) $t(common.tab)",
"workflows": "Flussi di lavoro",
"workflowsTab": "$t(ui.tabs.workflows) $t(common.tab)",
"models": "Modelli",
"modelsTab": "$t(ui.tabs.models) $t(common.tab)",
"queue": "Coda",
"queueTab": "$t(ui.tabs.queue) $t(common.tab)"
}
}
}

View File

@@ -570,6 +570,7 @@
"pauseSucceeded": "処理が一時停止されました",
"queueFront": "キューの先頭へ追加",
"queueBack": "キューに追加",
"queueCountPrediction": "{{promptsCount}} プロンプト × {{iterations}} イテレーション -> {{count}} 枚生成",
"pause": "一時停止",
"queue": "キュー",
"pauseTooltip": "処理を一時停止",

View File

@@ -505,6 +505,7 @@
"completed": "완성된",
"queueBack": "Queue에 추가",
"cancelFailed": "항목 취소 중 발생한 문제",
"queueCountPrediction": "Queue에 {{predicted}} 추가",
"batchQueued": "Batch Queued",
"pauseFailed": "프로세서 중지 중 발생한 문제",
"clearFailed": "Queue 제거 중 발생한 문제",

View File

@@ -383,6 +383,8 @@
"useCpuNoise": "Gebruik CPU-ruis",
"imageActions": "Afbeeldingshandeling",
"iterations": "Iteraties",
"iterationsWithCount_one": "{{count}} iteratie",
"iterationsWithCount_other": "{{count}} iteraties",
"coherenceMode": "Modus"
},
"settings": {
@@ -938,6 +940,7 @@
"completed": "Voltooid",
"queueBack": "Voeg toe aan wachtrij",
"cancelFailed": "Fout bij annuleren onderdeel",
"queueCountPrediction": "Voeg {{predicted}} toe aan wachtrij",
"batchQueued": "Reeks in wachtrij geplaatst",
"pauseFailed": "Fout bij onderbreken verwerker",
"clearFailed": "Fout bij wissen van wachtrij",

View File

@@ -76,18 +76,7 @@
"localSystem": "Локальная система",
"aboutDesc": "Используя Invoke для работы? Проверьте это:",
"add": "Добавить",
"loglevel": "Уровень логов",
"beta": "Бета",
"selected": "Выбрано",
"positivePrompt": "Позитивный запрос",
"negativePrompt": "Негативный запрос",
"editor": "Редактор",
"goTo": "Перейти к",
"tab": "Вкладка",
"viewing": "Просмотр",
"editing": "Редактирование",
"viewingDesc": "Просмотр изображений в режиме большой галереи",
"editingDesc": "Редактировать на холсте слоёв управления"
"loglevel": "Уровень логов"
},
"gallery": {
"galleryImageSize": "Размер изображений",
@@ -98,8 +87,8 @@
"deleteImagePermanent": "Удаленные изображения невозможно восстановить.",
"deleteImageBin": "Удаленные изображения будут отправлены в корзину вашей операционной системы.",
"deleteImage_one": "Удалить изображение",
"deleteImage_few": "Удалить {{count}} изображения",
"deleteImage_many": "Удалить {{count}} изображений",
"deleteImage_few": "",
"deleteImage_many": "",
"assets": "Ресурсы",
"autoAssignBoardOnClick": "Авто-назначение доски по клику",
"deleteSelection": "Удалить выделенное",
@@ -347,10 +336,6 @@
"remixImage": {
"desc": "Используйте все параметры, кроме сида из текущего изображения",
"title": "Ремикс изображения"
},
"toggleViewer": {
"title": "Переключить просмотр изображений",
"desc": "Переключение между средством просмотра изображений и рабочей областью для текущей вкладки."
}
},
"modelManager": {
@@ -527,8 +512,7 @@
"missingNodeTemplate": "Отсутствует шаблон узла",
"missingFieldTemplate": "Отсутствует шаблон поля",
"addingImagesTo": "Добавление изображений в",
"invoke": "Создать",
"imageNotProcessedForControlAdapter": "Изображение адаптера контроля №{{number}} не обрабатывается"
"invoke": "Создать"
},
"isAllowedToUpscale": {
"useX2Model": "Изображение слишком велико для увеличения с помощью модели x4. Используйте модель x2",
@@ -539,6 +523,9 @@
"useCpuNoise": "Использовать шум CPU",
"imageActions": "Действия с изображениями",
"iterations": "Кол-во",
"iterationsWithCount_one": "{{count}} Интеграция",
"iterationsWithCount_few": "{{count}} Итерации",
"iterationsWithCount_many": "{{count}} Итераций",
"useSize": "Использовать размер",
"coherenceMode": "Режим",
"aspect": "Соотношение",
@@ -554,10 +541,7 @@
"infillMosaicTileHeight": "Высота плиток",
"infillMosaicMinColor": "Мин цвет",
"infillMosaicMaxColor": "Макс цвет",
"infillColorValue": "Цвет заливки",
"globalSettings": "Глобальные настройки",
"globalNegativePromptPlaceholder": "Глобальный негативный запрос",
"globalPositivePromptPlaceholder": "Глобальный запрос"
"infillColorValue": "Цвет заливки"
},
"settings": {
"models": "Модели",
@@ -722,9 +706,7 @@
"coherenceModeBoxBlur": "коробчатое размытие",
"discardCurrent": "Отбросить текущее",
"invertBrushSizeScrollDirection": "Инвертировать прокрутку для размера кисти",
"initialFitImageSize": "Подогнать размер изображения при перебросе",
"hideBoundingBox": "Скрыть ограничительную рамку",
"showBoundingBox": "Показать ограничительную рамку"
"initialFitImageSize": "Подогнать размер изображения при перебросе"
},
"accessibility": {
"uploadImage": "Загрузить изображение",
@@ -867,10 +849,7 @@
"editMode": "Открыть в редакторе узлов",
"resetToDefaultValue": "Сбросить к стандартному значкнию",
"edit": "Редактировать",
"noFieldsViewMode": "В этом рабочем процессе нет выбранных полей для отображения. Просмотрите полный рабочий процесс для настройки значений.",
"graph": "График",
"showEdgeLabels": "Показать метки на ребрах",
"showEdgeLabelsHelp": "Показать метки на ребрах, указывающие на соединенные узлы"
"noFieldsViewMode": "В этом рабочем процессе нет выбранных полей для отображения. Просмотрите полный рабочий процесс для настройки значений."
},
"controlnet": {
"amult": "a_mult",
@@ -938,8 +917,8 @@
"lineartAnime": "Контурный рисунок в стиле аниме",
"mediapipeFaceDescription": "Обнаружение лиц с помощью Mediapipe",
"hedDescription": "Целостное обнаружение границ",
"setControlImageDimensions": "Скопируйте размер в Ш/В (оптимизируйте для модели)",
"scribble": "Штрихи",
"setControlImageDimensions": "Установите размеры контрольного изображения на Ш/В",
"scribble": "каракули",
"maxFaces": "Макс Лица",
"mlsdDescription": "Минималистичный детектор отрезков линии",
"resizeSimple": "Изменить размер (простой)",
@@ -954,18 +933,7 @@
"small": "Маленький",
"body": "Тело",
"hands": "Руки",
"selectCLIPVisionModel": "Выбрать модель CLIP Vision",
"ipAdapterMethod": "Метод",
"full": "Всё",
"mlsd": "M-LSD",
"h": "H",
"style": "Только стиль",
"dwOpenpose": "DW Openpose",
"pidi": "PIDI",
"composition": "Только композиция",
"hed": "HED",
"beginEndStepPercentShort": "Начало/конец %",
"setControlImageDimensionsForce": "Скопируйте размер в Ш/В (игнорируйте модель)"
"selectCLIPVisionModel": "Выбрать модель CLIP Vision"
},
"boards": {
"autoAddBoard": "Авто добавление Доски",
@@ -1344,12 +1312,6 @@
"paragraphs": [
"Плавно укладывайте изображение вдоль вертикальной оси."
]
},
"ipAdapterMethod": {
"heading": "Метод",
"paragraphs": [
"Метод, с помощью которого применяется текущий IP-адаптер."
]
}
},
"metadata": {
@@ -1397,6 +1359,7 @@
"completed": "Выполнено",
"queueBack": "Добавить в очередь",
"cancelFailed": "Проблема с отменой элемента",
"queueCountPrediction": "{{promptsCount}} запросов × {{iterations}} изображений -> {{count}} генераций",
"batchQueued": "Пакетная очередь",
"pauseFailed": "Проблема с приостановкой рендеринга",
"clearFailed": "Проблема с очисткой очереди",
@@ -1512,11 +1475,7 @@
"projectWorkflows": "Рабочие процессы проекта",
"defaultWorkflows": "Стандартные рабочие процессы",
"name": "Имя",
"noRecentWorkflows": "Нет последних рабочих процессов",
"loadWorkflow": "Рабочий процесс $t(common.load)",
"convertGraph": "Конвертировать график",
"loadFromGraph": "Загрузка рабочего процесса из графика",
"autoLayout": "Автоматическое расположение"
"noRecentWorkflows": "Нет последних рабочих процессов"
},
"hrf": {
"enableHrf": "Включить исправление высокого разрешения",
@@ -1569,56 +1528,5 @@
"addPromptTrigger": "Добавить триггер запроса",
"compatibleEmbeddings": "Совместимые встраивания",
"noMatchingTriggers": "Нет соответствующих триггеров"
},
"controlLayers": {
"moveToBack": "На задний план",
"moveForward": "Переместить вперёд",
"moveBackward": "Переместить назад",
"brushSize": "Размер кисти",
"controlLayers": "Слои управления",
"globalMaskOpacity": "Глобальная непрозрачность маски",
"autoNegative": "Авто негатив",
"deletePrompt": "Удалить запрос",
"resetRegion": "Сбросить регион",
"debugLayers": "Слои отладки",
"rectangle": "Прямоугольник",
"maskPreviewColor": "Цвет предпросмотра маски",
"addNegativePrompt": "Добавить $t(common.negativePrompt)",
"regionalGuidance": "Региональная точность",
"opacity": "Непрозрачность",
"globalControlAdapter": "Глобальный $t(controlnet.controlAdapter_one)",
"globalControlAdapterLayer": "Глобальный $t(controlnet.controlAdapter_one) $t(unifiedCanvas.layer)",
"globalIPAdapter": "Глобальный $t(common.ipAdapter)",
"globalIPAdapterLayer": "Глобальный $t(common.ipAdapter) $t(unifiedCanvas.layer)",
"opacityFilter": "Фильтр непрозрачности",
"deleteAll": "Удалить всё",
"addLayer": "Добавить слой",
"moveToFront": "На передний план",
"toggleVisibility": "Переключить видимость слоя",
"addPositivePrompt": "Добавить $t(common.positivePrompt)",
"addIPAdapter": "Добавить $t(common.ipAdapter)",
"regionalGuidanceLayer": "$t(controlLayers.regionalGuidance) $t(unifiedCanvas.layer)",
"resetProcessor": "Сброс процессора по умолчанию",
"clearProcessor": "Чистый процессор",
"globalInitialImage": "Глобальное исходное изображение",
"globalInitialImageLayer": "$t(controlLayers.globalInitialImage) $t(unifiedCanvas.layer)",
"noLayersAdded": "Без слоев",
"layers_one": "Слой",
"layers_few": "Слоя",
"layers_many": "Слоев"
},
"ui": {
"tabs": {
"generation": "Генерация",
"canvas": "Холст",
"workflowsTab": "$t(ui.tabs.workflows) $t(common.tab)",
"models": "Модели",
"generationTab": "$t(ui.tabs.generation) $t(common.tab)",
"workflows": "Рабочие процессы",
"canvasTab": "$t(ui.tabs.canvas) $t(common.tab)",
"queueTab": "$t(ui.tabs.queue) $t(common.tab)",
"modelsTab": "$t(ui.tabs.models) $t(common.tab)",
"queue": "Очередь"
}
}
}

View File

@@ -66,7 +66,7 @@
"saveAs": "保存为",
"ai": "ai",
"or": "或",
"aboutDesc": "使用 Invoke 工作?来看看:",
"aboutDesc": "使用 Invoke 工作?看:",
"add": "添加",
"loglevel": "日志级别",
"copy": "复制",
@@ -445,6 +445,7 @@
"useX2Model": "图像太大,无法使用 x4 模型,使用 x2 模型作为替代",
"tooLarge": "图像太大无法进行放大,请选择更小的图像"
},
"iterationsWithCount_other": "{{count}} 次迭代生成",
"cfgRescaleMultiplier": "CFG 重缩放倍数",
"useSize": "使用尺寸",
"setToOptimalSize": "优化模型大小",
@@ -852,6 +853,7 @@
"pruneSucceeded": "从队列修剪 {{item_count}} 个已完成的项目",
"notReady": "无法排队",
"batchFailedToQueue": "批次加入队列失败",
"queueCountPrediction": "{{promptsCount}} 提示词 × {{iterations}} 迭代次数 -> {{count}} 次生成",
"batchQueued": "加入队列的批次",
"front": "前",
"pruneTooltip": "修剪 {{item_count}} 个已完成的项目",

View File

@@ -1,4 +1,3 @@
/* eslint-disable no-console */
import fs from 'node:fs';
import openapiTS from 'openapi-typescript';

View File

@@ -21,7 +21,6 @@ import i18n from 'i18n';
import { size } from 'lodash-es';
import { memo, useCallback, useEffect } from 'react';
import { ErrorBoundary } from 'react-error-boundary';
import { useGetOpenAPISchemaQuery } from 'services/api/endpoints/appInfo';
import AppErrorBoundaryFallback from './AppErrorBoundaryFallback';
import PreselectedImage from './PreselectedImage';
@@ -47,7 +46,6 @@ const App = ({ config = DEFAULT_CONFIG, selectedImage }: Props) => {
useSocketIO();
useGlobalModifiersInit();
useGlobalHotkeys();
useGetOpenAPISchemaQuery();
const { dropzone, isHandlingUpload, setIsHandlingUpload } = useFullscreenDropzone();

View File

@@ -67,8 +67,6 @@ export const useSocketIO = () => {
if ($isDebugging.get() || import.meta.env.MODE === 'development') {
window.$socketOptions = $socketOptions;
// This is only enabled manually for debugging, console is allowed.
/* eslint-disable-next-line no-console */
console.log('Socket initialized', socket);
}
@@ -77,8 +75,6 @@ export const useSocketIO = () => {
return () => {
if ($isDebugging.get() || import.meta.env.MODE === 'development') {
window.$socketOptions = undefined;
// This is only enabled manually for debugging, console is allowed.
/* eslint-disable-next-line no-console */
console.log('Socket teardown', socket);
}
socket.disconnect();

View File

@@ -20,14 +20,15 @@ export type LoggerNamespace =
| 'models'
| 'config'
| 'canvas'
| 'generation'
| 'txt2img'
| 'img2img'
| 'nodes'
| 'system'
| 'socketio'
| 'session'
| 'queue'
| 'dnd'
| 'controlLayers';
| 'regionalPrompts';
export const logger = (namespace: LoggerNamespace) => $logger.get().child({ namespace });

View File

@@ -1,6 +1,3 @@
/* eslint-disable no-console */
// This is only enabled manually for debugging, console is allowed.
import type { Middleware, MiddlewareAPI } from '@reduxjs/toolkit';
import { diff } from 'jsondiffpatch';

View File

@@ -1,6 +1,7 @@
import type { UnknownAction } from '@reduxjs/toolkit';
import { deepClone } from 'common/util/deepClone';
import { isAnyGraphBuilt } from 'features/nodes/store/actions';
import { nodeTemplatesBuilt } from 'features/nodes/store/nodesSlice';
import { appInfoApi } from 'services/api/endpoints/appInfo';
import type { Graph } from 'services/api/types';
import { socketGeneratorProgress } from 'services/events/actions';
@@ -24,6 +25,13 @@ export const actionSanitizer = <A extends UnknownAction>(action: A): A => {
};
}
if (nodeTemplatesBuilt.match(action)) {
return {
...action,
payload: '<Node templates omitted>',
};
}
if (socketGeneratorProgress.match(action)) {
const sanitized = deepClone(action);
if (sanitized.payload.data.progress_image) {

View File

@@ -16,7 +16,6 @@ import { addCanvasMaskSavedToGalleryListener } from 'app/store/middleware/listen
import { addCanvasMaskToControlNetListener } from 'app/store/middleware/listenerMiddleware/listeners/canvasMaskToControlNet';
import { addCanvasMergedListener } from 'app/store/middleware/listenerMiddleware/listeners/canvasMerged';
import { addCanvasSavedToGalleryListener } from 'app/store/middleware/listenerMiddleware/listeners/canvasSavedToGallery';
import { addControlAdapterPreprocessor } from 'app/store/middleware/listenerMiddleware/listeners/controlAdapterPreprocessor';
import { addControlNetAutoProcessListener } from 'app/store/middleware/listenerMiddleware/listeners/controlNetAutoProcess';
import { addControlNetImageProcessedListener } from 'app/store/middleware/listenerMiddleware/listeners/controlNetImageProcessed';
import { addEnqueueRequestedCanvasListener } from 'app/store/middleware/listenerMiddleware/listeners/enqueueRequestedCanvas';
@@ -32,6 +31,7 @@ import { addImagesStarredListener } from 'app/store/middleware/listenerMiddlewar
import { addImagesUnstarredListener } from 'app/store/middleware/listenerMiddleware/listeners/imagesUnstarred';
import { addImageToDeleteSelectedListener } from 'app/store/middleware/listenerMiddleware/listeners/imageToDeleteSelected';
import { addImageUploadedFulfilledListener } from 'app/store/middleware/listenerMiddleware/listeners/imageUploaded';
import { addInitialImageSelectedListener } from 'app/store/middleware/listenerMiddleware/listeners/initialImageSelected';
import { addModelSelectedListener } from 'app/store/middleware/listenerMiddleware/listeners/modelSelected';
import { addModelsLoadedListener } from 'app/store/middleware/listenerMiddleware/listeners/modelsLoaded';
import { addDynamicPromptsListener } from 'app/store/middleware/listenerMiddleware/listeners/promptChanged';
@@ -72,6 +72,9 @@ const startAppListening = listenerMiddleware.startListening as AppStartListening
// Image uploaded
addImageUploadedFulfilledListener(startAppListening);
// Image selected
addInitialImageSelectedListener(startAppListening);
// Image deleted
addRequestedSingleImageDeletionListener(startAppListening);
addDeleteBoardAndImagesFulfilledListener(startAppListening);
@@ -154,4 +157,3 @@ addUpscaleRequestedListener(startAppListening);
addDynamicPromptsListener(startAppListening);
addSetDefaultSettingsListener(startAppListening);
addControlAdapterPreprocessor(startAppListening);

View File

@@ -1,9 +1,9 @@
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { resetCanvas } from 'features/canvas/store/canvasSlice';
import { controlAdaptersReset } from 'features/controlAdapters/store/controlAdaptersSlice';
import { allLayersDeleted } from 'features/controlLayers/store/controlLayersSlice';
import { getImageUsage } from 'features/deleteImageModal/store/selectors';
import { nodeEditorReset } from 'features/nodes/store/nodesSlice';
import { clearInitialImage } from 'features/parameters/store/generationSlice';
import { imagesApi } from 'services/api/endpoints/images';
export const addDeleteBoardAndImagesFulfilledListener = (startAppListening: AppStartListening) => {
@@ -14,14 +14,19 @@ export const addDeleteBoardAndImagesFulfilledListener = (startAppListening: AppS
// Remove all deleted images from the UI
let wasInitialImageReset = false;
let wasCanvasReset = false;
let wasNodeEditorReset = false;
let wereControlAdaptersReset = false;
let wereControlLayersReset = false;
const { canvas, nodes, controlAdapters, controlLayers } = getState();
const { generation, canvas, nodes, controlAdapters } = getState();
deleted_images.forEach((image_name) => {
const imageUsage = getImageUsage(canvas, nodes.present, controlAdapters, controlLayers.present, image_name);
const imageUsage = getImageUsage(generation, canvas, nodes, controlAdapters, image_name);
if (imageUsage.isInitialImage && !wasInitialImageReset) {
dispatch(clearInitialImage());
wasInitialImageReset = true;
}
if (imageUsage.isCanvasImage && !wasCanvasReset) {
dispatch(resetCanvas());
@@ -37,11 +42,6 @@ export const addDeleteBoardAndImagesFulfilledListener = (startAppListening: AppS
dispatch(controlAdaptersReset());
wereControlAdaptersReset = true;
}
if (imageUsage.isControlLayerImage && !wereControlLayersReset) {
dispatch(allLayersDeleted());
wereControlLayersReset = true;
}
});
},
});

View File

@@ -1,6 +1,5 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { parseify } from 'common/util/serialize';
import { canvasSavedToGallery } from 'features/canvas/store/actions';
import { getBaseLayerBlob } from 'features/canvas/util/getBaseLayerBlob';
import { addToast } from 'features/system/store/systemSlice';
@@ -44,9 +43,6 @@ export const addCanvasSavedToGalleryListener = (startAppListening: AppStartListe
type: 'TOAST',
toastOptions: { title: t('toast.canvasSavedGallery') },
},
metadata: {
_canvas_objects: parseify(state.canvas.layerState.objects),
},
})
);
},

View File

@@ -1,189 +0,0 @@
import { isAnyOf } from '@reduxjs/toolkit';
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import type { AppDispatch } from 'app/store/store';
import { parseify } from 'common/util/serialize';
import {
caLayerImageChanged,
caLayerModelChanged,
caLayerProcessedImageChanged,
caLayerProcessorConfigChanged,
caLayerProcessorPendingBatchIdChanged,
caLayerRecalled,
isControlAdapterLayer,
} from 'features/controlLayers/store/controlLayersSlice';
import { CA_PROCESSOR_DATA } from 'features/controlLayers/util/controlAdapters';
import { isImageOutput } from 'features/nodes/types/common';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { isEqual } from 'lodash-es';
import { getImageDTO } from 'services/api/endpoints/images';
import { queueApi } from 'services/api/endpoints/queue';
import type { BatchConfig } from 'services/api/types';
import { socketInvocationComplete } from 'services/events/actions';
import { assert } from 'tsafe';
const matcher = isAnyOf(caLayerImageChanged, caLayerProcessorConfigChanged, caLayerModelChanged, caLayerRecalled);
const DEBOUNCE_MS = 300;
const log = logger('session');
/**
* Simple helper to cancel a batch and reset the pending batch ID
*/
const cancelProcessorBatch = async (dispatch: AppDispatch, layerId: string, batchId: string) => {
const req = dispatch(queueApi.endpoints.cancelByBatchIds.initiate({ batch_ids: [batchId] }));
log.trace({ batchId }, 'Cancelling existing preprocessor batch');
try {
await req.unwrap();
} catch {
// no-op
} finally {
req.reset();
// Always reset the pending batch ID - the cancel req could fail if the batch doesn't exist
dispatch(caLayerProcessorPendingBatchIdChanged({ layerId, batchId: null }));
}
};
export const addControlAdapterPreprocessor = (startAppListening: AppStartListening) => {
startAppListening({
matcher,
effect: async (action, { dispatch, getState, getOriginalState, cancelActiveListeners, delay, take, signal }) => {
const layerId = caLayerRecalled.match(action) ? action.payload.id : action.payload.layerId;
const state = getState();
const originalState = getOriginalState();
// Cancel any in-progress instances of this listener
cancelActiveListeners();
log.trace('Control Layer CA auto-process triggered');
// Delay before starting actual work
await delay(DEBOUNCE_MS);
const layer = state.controlLayers.present.layers.filter(isControlAdapterLayer).find((l) => l.id === layerId);
if (!layer) {
return;
}
// We should only process if the processor settings or image have changed
const originalLayer = originalState.controlLayers.present.layers
.filter(isControlAdapterLayer)
.find((l) => l.id === layerId);
const originalImage = originalLayer?.controlAdapter.image;
const originalConfig = originalLayer?.controlAdapter.processorConfig;
const image = layer.controlAdapter.image;
const config = layer.controlAdapter.processorConfig;
if (isEqual(config, originalConfig) && isEqual(image, originalImage)) {
// Neither config nor image have changed, we can bail
return;
}
if (!image || !config) {
// - If we have no image, we have nothing to process
// - If we have no processor config, we have nothing to process
// Clear the processed image and bail
dispatch(caLayerProcessedImageChanged({ layerId, imageDTO: null }));
return;
}
// At this point, the user has stopped fiddling with the processor settings and there is a processor selected.
// If there is a pending processor batch, cancel it.
if (layer.controlAdapter.processorPendingBatchId) {
cancelProcessorBatch(dispatch, layerId, layer.controlAdapter.processorPendingBatchId);
}
// TODO(psyche): I can't get TS to be happy, it thinks `config` is `never` but it should be inferred from the generic... I'll just cast it for now
const processorNode = CA_PROCESSOR_DATA[config.type].buildNode(image, config as never);
const enqueueBatchArg: BatchConfig = {
prepend: true,
batch: {
graph: {
nodes: {
[processorNode.id]: {
...processorNode,
// Control images are always intermediate - do not save to gallery
is_intermediate: true,
},
},
edges: [],
},
runs: 1,
},
};
// Kick off the processor batch
const req = dispatch(
queueApi.endpoints.enqueueBatch.initiate(enqueueBatchArg, {
fixedCacheKey: 'enqueueBatch',
})
);
try {
const enqueueResult = await req.unwrap();
// TODO(psyche): Update the pydantic models, pretty sure we will _always_ have a batch_id here, but the model says it's optional
assert(enqueueResult.batch.batch_id, 'Batch ID not returned from queue');
dispatch(caLayerProcessorPendingBatchIdChanged({ layerId, batchId: enqueueResult.batch.batch_id }));
log.debug({ enqueueResult: parseify(enqueueResult) }, t('queue.graphQueued'));
// Wait for the processor node to complete
const [invocationCompleteAction] = await take(
(action): action is ReturnType<typeof socketInvocationComplete> =>
socketInvocationComplete.match(action) &&
action.payload.data.queue_batch_id === enqueueResult.batch.batch_id &&
action.payload.data.source_node_id === processorNode.id
);
// We still have to check the output type
assert(
isImageOutput(invocationCompleteAction.payload.data.result),
`Processor did not return an image output, got: ${invocationCompleteAction.payload.data.result}`
);
const { image_name } = invocationCompleteAction.payload.data.result.image;
const imageDTO = await getImageDTO(image_name);
assert(imageDTO, "Failed to fetch processor output's image DTO");
// Whew! We made it. Update the layer with the processed image
log.debug({ layerId, imageDTO }, 'ControlNet image processed');
dispatch(caLayerProcessedImageChanged({ layerId, imageDTO }));
dispatch(caLayerProcessorPendingBatchIdChanged({ layerId, batchId: null }));
} catch (error) {
if (signal.aborted) {
// The listener was canceled - we need to cancel the pending processor batch, if there is one (could have changed by now).
const pendingBatchId = getState()
.controlLayers.present.layers.filter(isControlAdapterLayer)
.find((l) => l.id === layerId)?.controlAdapter.processorPendingBatchId;
if (pendingBatchId) {
cancelProcessorBatch(dispatch, layerId, pendingBatchId);
}
log.trace('Control Adapter preprocessor cancelled');
} else {
// Some other error condition...
log.error({ enqueueBatchArg: parseify(enqueueBatchArg) }, t('queue.graphFailedToQueue'));
if (error instanceof Object) {
if ('data' in error && 'status' in error) {
if (error.status === 403) {
dispatch(caLayerImageChanged({ layerId, imageDTO: null }));
return;
}
}
}
dispatch(
addToast({
title: t('queue.graphFailedToQueue'),
status: 'error',
})
);
}
} finally {
req.reset();
}
},
});
};

View File

@@ -8,8 +8,8 @@ import { blobToDataURL } from 'features/canvas/util/blobToDataURL';
import { getCanvasData } from 'features/canvas/util/getCanvasData';
import { getCanvasGenerationMode } from 'features/canvas/util/getCanvasGenerationMode';
import { canvasGraphBuilt } from 'features/nodes/store/actions';
import { buildCanvasGraph } from 'features/nodes/util/graph/buildCanvasGraph';
import { prepareLinearUIBatch } from 'features/nodes/util/graph/buildLinearBatchConfig';
import { buildCanvasGraph } from 'features/nodes/util/graph/canvas/buildCanvasGraph';
import { imagesApi } from 'services/api/endpoints/images';
import { queueApi } from 'services/api/endpoints/queue';
import type { ImageDTO } from 'services/api/types';
@@ -30,7 +30,7 @@ import type { ImageDTO } from 'services/api/types';
export const addEnqueueRequestedCanvasListener = (startAppListening: AppStartListening) => {
startAppListening({
predicate: (action): action is ReturnType<typeof enqueueRequested> =>
enqueueRequested.match(action) && action.payload.tabName === 'canvas',
enqueueRequested.match(action) && action.payload.tabName === 'unifiedCanvas',
effect: async (action, { getState, dispatch }) => {
const log = logger('queue');
const { prepend } = action.payload;

View File

@@ -1,27 +1,35 @@
import { enqueueRequested } from 'app/store/actions';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { isImageViewerOpenChanged } from 'features/gallery/store/gallerySlice';
import { prepareLinearUIBatch } from 'features/nodes/util/graph/buildLinearBatchConfig';
import { buildGenerationTabGraph } from 'features/nodes/util/graph/generation/buildGenerationTabGraph';
import { buildGenerationTabSDXLGraph } from 'features/nodes/util/graph/generation/buildGenerationTabSDXLGraph';
import { buildLinearImageToImageGraph } from 'features/nodes/util/graph/buildLinearImageToImageGraph';
import { buildLinearSDXLImageToImageGraph } from 'features/nodes/util/graph/buildLinearSDXLImageToImageGraph';
import { buildLinearSDXLTextToImageGraph } from 'features/nodes/util/graph/buildLinearSDXLTextToImageGraph';
import { buildLinearTextToImageGraph } from 'features/nodes/util/graph/buildLinearTextToImageGraph';
import { queueApi } from 'services/api/endpoints/queue';
export const addEnqueueRequestedLinear = (startAppListening: AppStartListening) => {
startAppListening({
predicate: (action): action is ReturnType<typeof enqueueRequested> =>
enqueueRequested.match(action) && action.payload.tabName === 'generation',
enqueueRequested.match(action) && (action.payload.tabName === 'txt2img' || action.payload.tabName === 'img2img'),
effect: async (action, { getState, dispatch }) => {
const state = getState();
const { shouldShowProgressInViewer } = state.ui;
const model = state.generation.model;
const { prepend } = action.payload;
let graph;
if (model?.base === 'sdxl') {
graph = await buildGenerationTabSDXLGraph(state);
if (model && model.base === 'sdxl') {
if (action.payload.tabName === 'txt2img') {
graph = await buildLinearSDXLTextToImageGraph(state);
} else {
graph = await buildLinearSDXLImageToImageGraph(state);
}
} else {
graph = await buildGenerationTabGraph(state);
if (action.payload.tabName === 'txt2img') {
graph = await buildLinearTextToImageGraph(state);
} else {
graph = await buildLinearImageToImageGraph(state);
}
}
const batchConfig = prepareLinearUIBatch(state, graph, prepend);
@@ -31,14 +39,7 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
fixedCacheKey: 'enqueueBatch',
})
);
try {
await req.unwrap();
if (shouldShowProgressInViewer) {
dispatch(isImageViewerOpenChanged(true));
}
} finally {
req.reset();
}
req.reset();
},
});
};

View File

@@ -8,12 +8,12 @@ import type { BatchConfig } from 'services/api/types';
export const addEnqueueRequestedNodes = (startAppListening: AppStartListening) => {
startAppListening({
predicate: (action): action is ReturnType<typeof enqueueRequested> =>
enqueueRequested.match(action) && action.payload.tabName === 'workflows',
enqueueRequested.match(action) && action.payload.tabName === 'nodes',
effect: async (action, { getState, dispatch }) => {
const state = getState();
const { nodes, edges } = state.nodes.present;
const { nodes, edges } = state.nodes;
const workflow = state.workflow;
const graph = buildNodesGraph(state.nodes.present);
const graph = buildNodesGraph(state.nodes);
const builtWorkflow = buildWorkflowWithValidation({
nodes,
edges,
@@ -39,11 +39,7 @@ export const addEnqueueRequestedNodes = (startAppListening: AppStartListening) =
fixedCacheKey: 'enqueueBatch',
})
);
try {
await req.unwrap();
} finally {
req.reset();
}
req.reset();
},
});
};

View File

@@ -1,7 +1,7 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { parseify } from 'common/util/serialize';
import { $templates } from 'features/nodes/store/nodesSlice';
import { nodeTemplatesBuilt } from 'features/nodes/store/nodesSlice';
import { parseSchema } from 'features/nodes/util/schema/parseSchema';
import { size } from 'lodash-es';
import { appInfoApi } from 'services/api/endpoints/appInfo';
@@ -9,7 +9,7 @@ import { appInfoApi } from 'services/api/endpoints/appInfo';
export const addGetOpenAPISchemaListener = (startAppListening: AppStartListening) => {
startAppListening({
matcher: appInfoApi.endpoints.getOpenAPISchema.matchFulfilled,
effect: (action, { getState }) => {
effect: (action, { dispatch, getState }) => {
const log = logger('system');
const schemaJSON = action.payload;
@@ -20,7 +20,7 @@ export const addGetOpenAPISchemaListener = (startAppListening: AppStartListening
log.debug({ nodeTemplates: parseify(nodeTemplates) }, `Built ${size(nodeTemplates)} node templates`);
$templates.set(nodeTemplates);
dispatch(nodeTemplatesBuilt(nodeTemplates));
},
});

View File

@@ -1,6 +1,5 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import type { AppDispatch, RootState } from 'app/store/store';
import { resetCanvas } from 'features/canvas/store/canvasSlice';
import {
controlAdapterImageChanged,
@@ -8,13 +7,6 @@ import {
selectControlAdapterAll,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { isControlNetOrT2IAdapter } from 'features/controlAdapters/store/types';
import {
isControlAdapterLayer,
isInitialImageLayer,
isIPAdapterLayer,
isRegionalGuidanceLayer,
layerDeleted,
} from 'features/controlLayers/store/controlLayersSlice';
import { imageDeletionConfirmed } from 'features/deleteImageModal/store/actions';
import { isModalOpenChanged } from 'features/deleteImageModal/store/slice';
import { selectListImagesQueryArgs } from 'features/gallery/store/gallerySelectors';
@@ -22,82 +14,12 @@ import { imageSelected } from 'features/gallery/store/gallerySlice';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { isImageFieldInputInstance } from 'features/nodes/types/field';
import { isInvocationNode } from 'features/nodes/types/invocation';
import { clearInitialImage } from 'features/parameters/store/generationSlice';
import { clamp, forEach } from 'lodash-es';
import { api } from 'services/api';
import { imagesApi } from 'services/api/endpoints/images';
import type { ImageDTO } from 'services/api/types';
import { imagesSelectors } from 'services/api/util';
const deleteNodesImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
state.nodes.present.nodes.forEach((node) => {
if (!isInvocationNode(node)) {
return;
}
forEach(node.data.inputs, (input) => {
if (isImageFieldInputInstance(input) && input.value?.image_name === imageDTO.image_name) {
dispatch(
fieldImageValueChanged({
nodeId: node.data.id,
fieldName: input.name,
value: undefined,
})
);
}
});
});
};
const deleteControlAdapterImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
forEach(selectControlAdapterAll(state.controlAdapters), (ca) => {
if (
ca.controlImage === imageDTO.image_name ||
(isControlNetOrT2IAdapter(ca) && ca.processedControlImage === imageDTO.image_name)
) {
dispatch(
controlAdapterImageChanged({
id: ca.id,
controlImage: null,
})
);
dispatch(
controlAdapterProcessedImageChanged({
id: ca.id,
processedControlImage: null,
})
);
}
});
};
const deleteControlLayerImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
state.controlLayers.present.layers.forEach((l) => {
if (isRegionalGuidanceLayer(l)) {
if (l.ipAdapters.some((ipa) => ipa.image?.name === imageDTO.image_name)) {
dispatch(layerDeleted(l.id));
}
}
if (isControlAdapterLayer(l)) {
if (
l.controlAdapter.image?.name === imageDTO.image_name ||
l.controlAdapter.processedImage?.name === imageDTO.image_name
) {
dispatch(layerDeleted(l.id));
}
}
if (isIPAdapterLayer(l)) {
if (l.ipAdapter.image?.name === imageDTO.image_name) {
dispatch(layerDeleted(l.id));
}
}
if (isInitialImageLayer(l)) {
if (l.image?.name === imageDTO.image_name) {
dispatch(layerDeleted(l.id));
}
}
});
};
export const addRequestedSingleImageDeletionListener = (startAppListening: AppStartListening) => {
startAppListening({
actionCreator: imageDeletionConfirmed,
@@ -151,9 +73,50 @@ export const addRequestedSingleImageDeletionListener = (startAppListening: AppSt
}
imageDTOs.forEach((imageDTO) => {
deleteControlAdapterImages(state, dispatch, imageDTO);
deleteNodesImages(state, dispatch, imageDTO);
deleteControlLayerImages(state, dispatch, imageDTO);
// reset init image if we deleted it
if (getState().generation.initialImage?.imageName === imageDTO.image_name) {
dispatch(clearInitialImage());
}
// reset control adapters that use the deleted images
forEach(selectControlAdapterAll(getState().controlAdapters), (ca) => {
if (
ca.controlImage === imageDTO.image_name ||
(isControlNetOrT2IAdapter(ca) && ca.processedControlImage === imageDTO.image_name)
) {
dispatch(
controlAdapterImageChanged({
id: ca.id,
controlImage: null,
})
);
dispatch(
controlAdapterProcessedImageChanged({
id: ca.id,
processedControlImage: null,
})
);
}
});
// reset nodes that use the deleted images
getState().nodes.nodes.forEach((node) => {
if (!isInvocationNode(node)) {
return;
}
forEach(node.data.inputs, (input) => {
if (isImageFieldInputInstance(input) && input.value?.image_name === imageDTO.image_name) {
dispatch(
fieldImageValueChanged({
nodeId: node.data.id,
fieldName: input.name,
value: undefined,
})
);
}
});
});
});
// Delete from server
@@ -205,9 +168,50 @@ export const addRequestedSingleImageDeletionListener = (startAppListening: AppSt
}
imageDTOs.forEach((imageDTO) => {
deleteControlAdapterImages(state, dispatch, imageDTO);
deleteNodesImages(state, dispatch, imageDTO);
deleteControlLayerImages(state, dispatch, imageDTO);
// reset init image if we deleted it
if (getState().generation.initialImage?.imageName === imageDTO.image_name) {
dispatch(clearInitialImage());
}
// reset control adapters that use the deleted images
forEach(selectControlAdapterAll(getState().controlAdapters), (ca) => {
if (
ca.controlImage === imageDTO.image_name ||
(isControlNetOrT2IAdapter(ca) && ca.processedControlImage === imageDTO.image_name)
) {
dispatch(
controlAdapterImageChanged({
id: ca.id,
controlImage: null,
})
);
dispatch(
controlAdapterProcessedImageChanged({
id: ca.id,
processedControlImage: null,
})
);
}
});
// reset nodes that use the deleted images
getState().nodes.nodes.forEach((node) => {
if (!isInvocationNode(node)) {
return;
}
forEach(node.data.inputs, (input) => {
if (isImageFieldInputInstance(input) && input.value?.image_name === imageDTO.image_name) {
dispatch(
fieldImageValueChanged({
nodeId: node.data.id,
fieldName: input.name,
value: undefined,
})
);
}
});
});
});
} catch {
// no-op

View File

@@ -7,16 +7,10 @@ import {
controlAdapterImageChanged,
controlAdapterIsEnabledChanged,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import {
caLayerImageChanged,
iiLayerImageChanged,
ipaLayerImageChanged,
rgLayerIPAdapterImageChanged,
} from 'features/controlLayers/store/controlLayersSlice';
import type { TypesafeDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import { imageSelected } from 'features/gallery/store/gallerySlice';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { initialImageChanged, selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { imagesApi } from 'services/api/endpoints/images';
export const dndDropped = createAction<{
@@ -53,6 +47,18 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
return;
}
/**
* Image dropped on initial image
*/
if (
overData.actionType === 'SET_INITIAL_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
dispatch(initialImageChanged(activeData.payload.imageDTO));
return;
}
/**
* Image dropped on ControlNet
*/
@@ -77,79 +83,6 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
return;
}
/**
* Image dropped on Control Adapter Layer
*/
if (
overData.actionType === 'SET_CA_LAYER_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { layerId } = overData.context;
dispatch(
caLayerImageChanged({
layerId,
imageDTO: activeData.payload.imageDTO,
})
);
return;
}
/**
* Image dropped on IP Adapter Layer
*/
if (
overData.actionType === 'SET_IPA_LAYER_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { layerId } = overData.context;
dispatch(
ipaLayerImageChanged({
layerId,
imageDTO: activeData.payload.imageDTO,
})
);
return;
}
/**
* Image dropped on RG Layer IP Adapter
*/
if (
overData.actionType === 'SET_RG_LAYER_IP_ADAPTER_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { layerId, ipAdapterId } = overData.context;
dispatch(
rgLayerIPAdapterImageChanged({
layerId,
ipAdapterId,
imageDTO: activeData.payload.imageDTO,
})
);
return;
}
/**
* Image dropped on II Layer Image
*/
if (
overData.actionType === 'SET_II_LAYER_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { layerId } = overData.context;
dispatch(
iiLayerImageChanged({
layerId,
imageDTO: activeData.payload.imageDTO,
})
);
return;
}
/**
* Image dropped on Canvas
*/

View File

@@ -14,6 +14,7 @@ export const addImageToDeleteSelectedListener = (startAppListening: AppStartList
const isImageInUse =
imagesUsage.some((i) => i.isCanvasImage) ||
imagesUsage.some((i) => i.isInitialImage) ||
imagesUsage.some((i) => i.isControlImage) ||
imagesUsage.some((i) => i.isNodesImage);

View File

@@ -6,14 +6,8 @@ import {
controlAdapterImageChanged,
controlAdapterIsEnabledChanged,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import {
caLayerImageChanged,
iiLayerImageChanged,
ipaLayerImageChanged,
rgLayerIPAdapterImageChanged,
} from 'features/controlLayers/store/controlLayersSlice';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { initialImageChanged, selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { omit } from 'lodash-es';
@@ -114,48 +108,15 @@ export const addImageUploadedFulfilledListener = (startAppListening: AppStartLis
return;
}
if (postUploadAction?.type === 'SET_CA_LAYER_IMAGE') {
const { layerId } = postUploadAction;
dispatch(caLayerImageChanged({ layerId, imageDTO }));
if (postUploadAction?.type === 'SET_INITIAL_IMAGE') {
dispatch(initialImageChanged(imageDTO));
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description: t('toast.setControlImage'),
})
);
}
if (postUploadAction?.type === 'SET_IPA_LAYER_IMAGE') {
const { layerId } = postUploadAction;
dispatch(ipaLayerImageChanged({ layerId, imageDTO }));
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description: t('toast.setControlImage'),
})
);
}
if (postUploadAction?.type === 'SET_RG_LAYER_IP_ADAPTER_IMAGE') {
const { layerId, ipAdapterId } = postUploadAction;
dispatch(rgLayerIPAdapterImageChanged({ layerId, ipAdapterId, imageDTO }));
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description: t('toast.setControlImage'),
})
);
}
if (postUploadAction?.type === 'SET_II_LAYER_IMAGE') {
const { layerId } = postUploadAction;
dispatch(iiLayerImageChanged({ layerId, imageDTO }));
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description: t('toast.setControlImage'),
description: t('toast.setInitialImage'),
})
);
return;
}
if (postUploadAction?.type === 'SET_NODES_IMAGE') {

View File

@@ -0,0 +1,21 @@
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { initialImageSelected } from 'features/parameters/store/actions';
import { initialImageChanged } from 'features/parameters/store/generationSlice';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { t } from 'i18next';
export const addInitialImageSelectedListener = (startAppListening: AppStartListening) => {
startAppListening({
actionCreator: initialImageSelected,
effect: (action, { dispatch }) => {
if (!action.payload) {
dispatch(addToast(makeToast({ title: t('toast.imageNotLoadedDesc'), status: 'error' })));
return;
}
dispatch(initialImageChanged(action.payload));
dispatch(addToast(makeToast(t('toast.sentToImageToImage'))));
},
});
};
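
A listener module like the one above only does something once it is handed the shared `startAppListening`. How that function is derived is not part of this compare; a hedged sketch of the usual RTK setup (the `AppDispatch` export and the registration call are assumptions):

```ts
import { createListenerMiddleware } from '@reduxjs/toolkit';
import type { TypedStartListening } from '@reduxjs/toolkit';
import type { AppDispatch, RootState } from 'app/store/store'; // AppDispatch export is assumed

// Hypothetical sketch of the shared middleware module the add*Listener functions plug into.
export const listenerMiddleware = createListenerMiddleware();

export type AppStartListening = TypedStartListening<RootState, AppDispatch>;
export const startAppListening = listenerMiddleware.startListening as AppStartListening;

// Each module is then registered once against the shared instance, e.g.:
// addInitialImageSelectedListener(startAppListening);
```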

View File

@@ -6,10 +6,9 @@ import {
controlAdapterModelCleared,
selectControlAdapterAll,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { heightChanged, widthChanged } from 'features/controlLayers/store/controlLayersSlice';
import { loraRemoved } from 'features/lora/store/loraSlice';
import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
import { modelChanged, vaeSelected } from 'features/parameters/store/generationSlice';
import { heightChanged, modelChanged, vaeSelected, widthChanged } from 'features/parameters/store/generationSlice';
import { zParameterModel, zParameterVAEModel } from 'features/parameters/types/parameterSchemas';
import { getIsSizeOptimal, getOptimalDimension } from 'features/parameters/util/optimalDimension';
import { refinerModelChanged } from 'features/sdxl/store/sdxlSlice';
@@ -70,22 +69,16 @@ const handleMainModels: ModelHandler = (models, state, dispatch, log) => {
dispatch(modelChanged(defaultModelInList, currentModel));
const optimalDimension = getOptimalDimension(defaultModelInList);
if (
getIsSizeOptimal(
state.controlLayers.present.size.width,
state.controlLayers.present.size.height,
optimalDimension
)
) {
if (getIsSizeOptimal(state.generation.width, state.generation.height, optimalDimension)) {
return;
}
const { width, height } = calculateNewSize(
state.controlLayers.present.size.aspectRatio.value,
state.generation.aspectRatio.value,
optimalDimension * optimalDimension
);
dispatch(widthChanged({ width }));
dispatch(heightChanged({ height }));
dispatch(widthChanged(width));
dispatch(heightChanged(height));
return;
}
}
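
`calculateNewSize` is called here with an aspect ratio and a target pixel area (`optimalDimension * optimalDimension`) and is expected to return `{ width, height }`. Its exact rounding is not shown in this compare; a sketch of the arithmetic it implies, assuming results are snapped to an 8 px grid (the grid size and the `calculateNewSizeSketch` name are assumptions):

```ts
// Sketch of the width/height math implied by calculateNewSize(aspectRatio, targetArea).
// The 8 px grid is an assumption; the real helper may round differently.
const roundToMultiple = (value: number, multiple: number) => Math.round(value / multiple) * multiple;

export const calculateNewSizeSketch = (aspectRatio: number, targetArea: number) => {
  // width / height = aspectRatio and width * height = targetArea
  // => width = sqrt(targetArea * aspectRatio), height = width / aspectRatio
  const width = roundToMultiple(Math.sqrt(targetArea * aspectRatio), 8);
  const height = roundToMultiple(width / aspectRatio, 8);
  return { width, height };
};

// e.g. calculateNewSizeSketch(1, 1024 * 1024) -> { width: 1024, height: 1024 }
```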

View File

@@ -1,6 +1,5 @@
import { isAnyOf } from '@reduxjs/toolkit';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { positivePromptChanged } from 'features/controlLayers/store/controlLayersSlice';
import {
combinatorialToggled,
isErrorChanged,
@@ -11,16 +10,11 @@ import {
promptsChanged,
} from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { getShouldProcessPrompt } from 'features/dynamicPrompts/util/getShouldProcessPrompt';
import { setPositivePrompt } from 'features/parameters/store/generationSlice';
import { utilitiesApi } from 'services/api/endpoints/utilities';
import { socketConnected } from 'services/events/actions';
const matcher = isAnyOf(
positivePromptChanged,
combinatorialToggled,
maxPromptsChanged,
maxPromptsReset,
socketConnected
);
const matcher = isAnyOf(setPositivePrompt, combinatorialToggled, maxPromptsChanged, maxPromptsReset, socketConnected);
export const addDynamicPromptsListener = (startAppListening: AppStartListening) => {
startAppListening({
@@ -28,7 +22,7 @@ export const addDynamicPromptsListener = (startAppListening: AppStartListening)
effect: async (action, { dispatch, getState, cancelActiveListeners, delay }) => {
cancelActiveListeners();
const state = getState();
const { positivePrompt } = state.controlLayers.present;
const { positivePrompt } = state.generation;
const { maxPrompts } = state.dynamicPrompts;
if (state.config.disabledFeatures.includes('dynamicPrompting')) {
@@ -38,7 +32,7 @@ export const addDynamicPromptsListener = (startAppListening: AppStartListening)
const cachedPrompts = utilitiesApi.endpoints.dynamicPrompts.select({
prompt: positivePrompt,
max_prompts: maxPrompts,
})(state).data;
})(getState()).data;
if (cachedPrompts) {
dispatch(promptsChanged(cachedPrompts.prompts));
@@ -46,8 +40,8 @@ export const addDynamicPromptsListener = (startAppListening: AppStartListening)
return;
}
if (!getShouldProcessPrompt(positivePrompt)) {
dispatch(promptsChanged([positivePrompt]));
if (!getShouldProcessPrompt(state.generation.positivePrompt)) {
dispatch(promptsChanged([state.generation.positivePrompt]));
dispatch(parsingErrorChanged(undefined));
dispatch(isErrorChanged(false));
return;
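
The effect destructures `cancelActiveListeners` and `delay`, which is RTK's standard way to debounce a matcher-driven listener, and it reads cached query results synchronously via `utilitiesApi.endpoints.dynamicPrompts.select(...)(state)`. A condensed sketch of that combination — the matcher and the 1000 ms interval are illustrative:

```ts
import { isAnyOf } from '@reduxjs/toolkit';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { promptsChanged } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { setPositivePrompt } from 'features/parameters/store/generationSlice';
import { utilitiesApi } from 'services/api/endpoints/utilities';

// Illustrative matcher; the real one also matches the dynamic-prompts settings actions.
const matcher = isAnyOf(setPositivePrompt);

export const addDebouncedPromptsListenerSketch = (startAppListening: AppStartListening) => {
  startAppListening({
    matcher,
    effect: async (_action, { dispatch, getState, cancelActiveListeners, delay }) => {
      // Cancel any still-running instance started by an earlier matching action...
      cancelActiveListeners();
      // ...then wait; a newer matching action will cancel this instance during the delay.
      await delay(1000); // the real debounce interval is not shown in this hunk

      const { positivePrompt } = getState().generation;
      const { maxPrompts } = getState().dynamicPrompts;

      // RTK Query's select() reads any cached result synchronously from the store.
      const cached = utilitiesApi.endpoints.dynamicPrompts.select({
        prompt: positivePrompt,
        max_prompts: maxPrompts,
      })(getState()).data;

      if (cached) {
        dispatch(promptsChanged(cached.prompts));
      }
    },
  });
};
```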

View File

@@ -1,13 +1,14 @@
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { heightChanged, widthChanged } from 'features/controlLayers/store/controlLayersSlice';
import { setDefaultSettings } from 'features/parameters/store/actions';
import {
heightRecalled,
setCfgRescaleMultiplier,
setCfgScale,
setScheduler,
setSteps,
vaePrecisionChanged,
vaeSelected,
widthRecalled,
} from 'features/parameters/store/generationSlice';
import {
isParameterCFGRescaleMultiplier,
@@ -96,16 +97,16 @@ export const addSetDefaultSettingsListener = (startAppListening: AppStartListeni
dispatch(setScheduler(scheduler));
}
}
const setSizeOptions = { updateAspectRatio: true, clamp: true };
if (width) {
if (isParameterWidth(width)) {
dispatch(widthChanged({ width, ...setSizeOptions }));
dispatch(widthRecalled(width));
}
}
if (height) {
if (isParameterHeight(height)) {
dispatch(heightChanged({ height, ...setSizeOptions }));
dispatch(heightRecalled(height));
}
}

View File

@@ -1,8 +1,5 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { deepClone } from 'common/util/deepClone';
import { $nodeExecutionStates, upsertExecutionState } from 'features/nodes/hooks/useExecutionState';
import { zNodeStatus } from 'features/nodes/types/invocation';
import { socketGeneratorProgress } from 'services/events/actions';
const log = logger('socketio');
@@ -12,14 +9,6 @@ export const addGeneratorProgressEventListener = (startAppListening: AppStartLis
actionCreator: socketGeneratorProgress,
effect: (action) => {
log.trace(action.payload, `Generator progress`);
const { source_node_id, step, total_steps, progress_image } = action.payload.data;
const nes = deepClone($nodeExecutionStates.get()[source_node_id]);
if (nes) {
nes.status = zNodeStatus.enum.IN_PROGRESS;
nes.progress = (step + 1) / total_steps;
nes.progressImage = progress_image ?? null;
upsertExecutionState(nes.nodeId, nes);
}
},
});
};

View File

@@ -1,18 +1,10 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { deepClone } from 'common/util/deepClone';
import { parseify } from 'common/util/serialize';
import { addImageToStagingArea } from 'features/canvas/store/canvasSlice';
import {
boardIdSelected,
galleryViewChanged,
imageSelected,
isImageViewerOpenChanged,
} from 'features/gallery/store/gallerySlice';
import { boardIdSelected, galleryViewChanged, imageSelected } from 'features/gallery/store/gallerySlice';
import { IMAGE_CATEGORIES } from 'features/gallery/store/types';
import { $nodeExecutionStates, upsertExecutionState } from 'features/nodes/hooks/useExecutionState';
import { isImageOutput } from 'features/nodes/types/common';
import { zNodeStatus } from 'features/nodes/types/invocation';
import { CANVAS_OUTPUT } from 'features/nodes/util/graph/constants';
import { boardsApi } from 'services/api/endpoints/boards';
import { imagesApi } from 'services/api/endpoints/images';
@@ -31,7 +23,7 @@ export const addInvocationCompleteEventListener = (startAppListening: AppStartLi
const { data } = action.payload;
log.debug({ data: parseify(data) }, `Invocation complete (${action.payload.data.node.type})`);
const { result, node, queue_batch_id, source_node_id } = data;
const { result, node, queue_batch_id } = data;
// This complete event has an associated image output
if (isImageOutput(result) && !nodeTypeDenylist.includes(node.type)) {
const { image_name } = result.image;
@@ -109,20 +101,9 @@ export const addInvocationCompleteEventListener = (startAppListening: AppStartLi
}
dispatch(imageSelected(imageDTO));
dispatch(isImageViewerOpenChanged(true));
}
}
}
const nes = deepClone($nodeExecutionStates.get()[source_node_id]);
if (nes) {
nes.status = zNodeStatus.enum.COMPLETED;
if (nes.progress !== null) {
nes.progress = 1;
}
nes.outputs.push(result);
upsertExecutionState(nes.nodeId, nes);
}
},
});
};

View File

@@ -1,8 +1,5 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { deepClone } from 'common/util/deepClone';
import { $nodeExecutionStates, upsertExecutionState } from 'features/nodes/hooks/useExecutionState';
import { zNodeStatus } from 'features/nodes/types/invocation';
import { socketInvocationError } from 'services/events/actions';
const log = logger('socketio');
@@ -12,15 +9,6 @@ export const addInvocationErrorEventListener = (startAppListening: AppStartListe
actionCreator: socketInvocationError,
effect: (action) => {
log.error(action.payload, `Invocation error (${action.payload.data.node.type})`);
const { source_node_id } = action.payload.data;
const nes = deepClone($nodeExecutionStates.get()[source_node_id]);
if (nes) {
nes.status = zNodeStatus.enum.FAILED;
nes.error = action.payload.data.error;
nes.progress = null;
nes.progressImage = null;
upsertExecutionState(nes.nodeId, nes);
}
},
});
};

View File

@@ -1,8 +1,5 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { deepClone } from 'common/util/deepClone';
import { $nodeExecutionStates, upsertExecutionState } from 'features/nodes/hooks/useExecutionState';
import { zNodeStatus } from 'features/nodes/types/invocation';
import { socketInvocationStarted } from 'services/events/actions';
const log = logger('socketio');
@@ -12,12 +9,6 @@ export const addInvocationStartedEventListener = (startAppListening: AppStartLis
actionCreator: socketInvocationStarted,
effect: (action) => {
log.debug(action.payload, `Invocation started (${action.payload.data.node.type})`);
const { source_node_id } = action.payload.data;
const nes = deepClone($nodeExecutionStates.get()[source_node_id]);
if (nes) {
nes.status = zNodeStatus.enum.IN_PROGRESS;
upsertExecutionState(nes.nodeId, nes);
}
},
});
};

View File

@@ -1,9 +1,5 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { deepClone } from 'common/util/deepClone';
import { $nodeExecutionStates } from 'features/nodes/hooks/useExecutionState';
import { zNodeStatus } from 'features/nodes/types/invocation';
import { forEach } from 'lodash-es';
import { queueApi, queueItemsAdapter } from 'services/api/endpoints/queue';
import { socketQueueItemStatusChanged } from 'services/events/actions';
@@ -58,21 +54,6 @@ export const addSocketQueueItemStatusChangedEventListener = (startAppListening:
dispatch(
queueApi.util.invalidateTags(['CurrentSessionQueueItem', 'NextSessionQueueItem', 'InvocationCacheStatus'])
);
if (['in_progress'].includes(action.payload.data.queue_item.status)) {
forEach($nodeExecutionStates.get(), (nes) => {
if (!nes) {
return;
}
const clone = deepClone(nes);
clone.status = zNodeStatus.enum.PENDING;
clone.error = null;
clone.progress = null;
clone.progressImage = null;
clone.outputs = [];
$nodeExecutionStates.setKey(clone.nodeId, clone);
});
}
},
});
};
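
The node-execution-state handling removed across these socket listeners always follows the same nanostores pattern: read the node's entry from the `$nodeExecutionStates` map store, deep-clone it, mutate the clone, and write it back (via `upsertExecutionState`, or `setKey` as in the hunk above). A sketch of that pattern — the `markInProgress` helper name is not from the codebase:

```ts
import { deepClone } from 'common/util/deepClone';
import { $nodeExecutionStates, upsertExecutionState } from 'features/nodes/hooks/useExecutionState';
import { zNodeStatus } from 'features/nodes/types/invocation';

// Hypothetical helper; the listeners above inline this logic per socket event.
const markInProgress = (nodeId: string, step: number, totalSteps: number) => {
  const nes = deepClone($nodeExecutionStates.get()[nodeId]);
  if (!nes) {
    return; // no execution state tracked for this node
  }
  nes.status = zNodeStatus.enum.IN_PROGRESS;
  nes.progress = (step + 1) / totalSteps;
  // Write the mutated clone back into the map store (the queue listener above uses setKey directly).
  upsertExecutionState(nes.nodeId, nes);
};
```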

View File

@@ -1,7 +1,7 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { updateAllNodesRequested } from 'features/nodes/store/actions';
import { $templates, nodesChanged } from 'features/nodes/store/nodesSlice';
import { nodeReplaced } from 'features/nodes/store/nodesSlice';
import { NodeUpdateError } from 'features/nodes/types/error';
import { isInvocationNode } from 'features/nodes/types/invocation';
import { getNeedsUpdate, updateNode } from 'features/nodes/util/node/nodeUpdate';
@@ -14,8 +14,7 @@ export const addUpdateAllNodesRequestedListener = (startAppListening: AppStartLi
actionCreator: updateAllNodesRequested,
effect: (action, { dispatch, getState }) => {
const log = logger('nodes');
const { nodes } = getState().nodes.present;
const templates = $templates.get();
const { nodes, templates } = getState().nodes;
let unableToUpdateCount = 0;
@@ -25,18 +24,13 @@ export const addUpdateAllNodesRequestedListener = (startAppListening: AppStartLi
unableToUpdateCount++;
return;
}
if (!getNeedsUpdate(node.data, template)) {
if (!getNeedsUpdate(node, template)) {
// No need to increment the count here, since we're not actually updating
return;
}
try {
const updatedNode = updateNode(node, template);
dispatch(
nodesChanged([
{ type: 'remove', id: updatedNode.id },
{ type: 'add', item: updatedNode },
])
);
dispatch(nodeReplaced({ nodeId: updatedNode.id, node: updatedNode }));
} catch (e) {
if (e instanceof NodeUpdateError) {
unableToUpdateCount++;

View File

@@ -2,51 +2,32 @@ import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { parseify } from 'common/util/serialize';
import { workflowLoaded, workflowLoadRequested } from 'features/nodes/store/actions';
import { $templates } from 'features/nodes/store/nodesSlice';
import { $flow } from 'features/nodes/store/reactFlowInstance';
import type { Templates } from 'features/nodes/store/types';
import { WorkflowMigrationError, WorkflowVersionError } from 'features/nodes/types/error';
import { graphToWorkflow } from 'features/nodes/util/workflow/graphToWorkflow';
import { validateWorkflow } from 'features/nodes/util/workflow/validateWorkflow';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { t } from 'i18next';
import type { GraphAndWorkflowResponse, NonNullableGraph } from 'services/api/types';
import { z } from 'zod';
import { fromZodError } from 'zod-validation-error';
const getWorkflow = (data: GraphAndWorkflowResponse, templates: Templates) => {
if (data.workflow) {
// Prefer to load the workflow if it's available - it has more information
const parsed = JSON.parse(data.workflow);
return validateWorkflow(parsed, templates);
} else if (data.graph) {
// Else we fall back on the graph, using the graphToWorkflow function to convert and do layout
const parsed = JSON.parse(data.graph);
const workflow = graphToWorkflow(parsed as NonNullableGraph, true);
return validateWorkflow(workflow, templates);
} else {
throw new Error('No workflow or graph provided');
}
};
export const addWorkflowLoadRequestedListener = (startAppListening: AppStartListening) => {
startAppListening({
actionCreator: workflowLoadRequested,
effect: (action, { dispatch }) => {
effect: (action, { dispatch, getState }) => {
const log = logger('nodes');
const { data, asCopy } = action.payload;
const nodeTemplates = $templates.get();
const { workflow, asCopy } = action.payload;
const nodeTemplates = getState().nodes.templates;
try {
const { workflow, warnings } = getWorkflow(data, nodeTemplates);
const { workflow: validatedWorkflow, warnings } = validateWorkflow(workflow, nodeTemplates);
if (asCopy) {
// If we're loading a copy, we need to remove the ID so that the backend will create a new workflow
delete workflow.id;
delete validatedWorkflow.id;
}
dispatch(workflowLoaded(workflow));
dispatch(workflowLoaded(validatedWorkflow));
if (!warnings.length) {
dispatch(
addToast(

View File

@@ -10,23 +10,22 @@ import {
controlAdaptersPersistConfig,
controlAdaptersSlice,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import {
controlLayersPersistConfig,
controlLayersSlice,
controlLayersUndoableConfig,
} from 'features/controlLayers/store/controlLayersSlice';
import { deleteImageModalSlice } from 'features/deleteImageModal/store/slice';
import { dynamicPromptsPersistConfig, dynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { galleryPersistConfig, gallerySlice } from 'features/gallery/store/gallerySlice';
import { hrfPersistConfig, hrfSlice } from 'features/hrf/store/hrfSlice';
import { loraPersistConfig, loraSlice } from 'features/lora/store/loraSlice';
import { modelManagerV2PersistConfig, modelManagerV2Slice } from 'features/modelManagerV2/store/modelManagerV2Slice';
import { nodesPersistConfig, nodesSlice, nodesUndoableConfig } from 'features/nodes/store/nodesSlice';
import { workflowSettingsPersistConfig, workflowSettingsSlice } from 'features/nodes/store/workflowSettingsSlice';
import { nodesPersistConfig, nodesSlice } from 'features/nodes/store/nodesSlice';
import { workflowPersistConfig, workflowSlice } from 'features/nodes/store/workflowSlice';
import { generationPersistConfig, generationSlice } from 'features/parameters/store/generationSlice';
import { postprocessingPersistConfig, postprocessingSlice } from 'features/parameters/store/postprocessingSlice';
import { queueSlice } from 'features/queue/store/queueSlice';
import {
regionalPromptsPersistConfig,
regionalPromptsSlice,
regionalPromptsUndoableConfig,
} from 'features/regionalPrompts/store/regionalPromptsSlice';
import { sdxlPersistConfig, sdxlSlice } from 'features/sdxl/store/sdxlSlice';
import { configSlice } from 'features/system/store/configSlice';
import { systemPersistConfig, systemSlice } from 'features/system/store/systemSlice';
@@ -51,7 +50,7 @@ const allReducers = {
[canvasSlice.name]: canvasSlice.reducer,
[gallerySlice.name]: gallerySlice.reducer,
[generationSlice.name]: generationSlice.reducer,
[nodesSlice.name]: undoable(nodesSlice.reducer, nodesUndoableConfig),
[nodesSlice.name]: nodesSlice.reducer,
[postprocessingSlice.name]: postprocessingSlice.reducer,
[systemSlice.name]: systemSlice.reducer,
[configSlice.name]: configSlice.reducer,
@@ -66,8 +65,7 @@ const allReducers = {
[queueSlice.name]: queueSlice.reducer,
[workflowSlice.name]: workflowSlice.reducer,
[hrfSlice.name]: hrfSlice.reducer,
[controlLayersSlice.name]: undoable(controlLayersSlice.reducer, controlLayersUndoableConfig),
[workflowSettingsSlice.name]: workflowSettingsSlice.reducer,
[regionalPromptsSlice.name]: undoable(regionalPromptsSlice.reducer, regionalPromptsUndoableConfig),
[api.reducerPath]: api.reducer,
};
@@ -112,8 +110,7 @@ const persistConfigs: { [key in keyof typeof allReducers]?: PersistConfig } = {
[loraPersistConfig.name]: loraPersistConfig,
[modelManagerV2PersistConfig.name]: modelManagerV2PersistConfig,
[hrfPersistConfig.name]: hrfPersistConfig,
[controlLayersPersistConfig.name]: controlLayersPersistConfig,
[workflowSettingsPersistConfig.name]: workflowSettingsPersistConfig,
[regionalPromptsPersistConfig.name]: regionalPromptsPersistConfig,
};
const unserialize: UnserializeFunction = (data, key) => {
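
Wrapping a slice reducer in `undoable(...)` is what nests its state under `present` (alongside `past` and `future`), which is why other hunks in this compare read `state.controlLayers.present` and `getState().nodes.present`. A minimal sketch, assuming `undoable` here is redux-undo's higher-order reducer and using an illustrative history limit:

```ts
import { combineReducers } from '@reduxjs/toolkit';
import undoable from 'redux-undo';
import { nodesSlice } from 'features/nodes/store/nodesSlice';

// Sketch only: wrapping a slice reducer moves its state under past/present/future.
const rootReducerSketch = combineReducers({
  [nodesSlice.name]: undoable(nodesSlice.reducer, { limit: 64 }), // limit is illustrative
});

type RootStateSketch = ReturnType<typeof rootReducerSketch>;

// Reads now go through `.present`:
const selectNodes = (state: RootStateSketch) => state.nodes.present.nodes;
```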

View File

@@ -70,7 +70,6 @@ const IAIDndImage = (props: IAIDndImageProps) => {
onMouseOver,
onMouseOut,
dataTestId,
...rest
} = props;
const [isHovered, setIsHovered] = useState(false);
@@ -139,7 +138,6 @@ const IAIDndImage = (props: IAIDndImageProps) => {
minH={minSize ? minSize : undefined}
userSelect="none"
cursor={isDragDisabled || !imageDTO ? 'default' : 'pointer'}
{...rest}
>
{imageDTO && (
<Flex

View File

@@ -17,10 +17,14 @@ const accept: Accept = {
const selectPostUploadAction = createMemoizedSelector(activeTabNameSelector, (activeTabName) => {
let postUploadAction: PostUploadAction = { type: 'TOAST' };
if (activeTabName === 'canvas') {
if (activeTabName === 'unifiedCanvas') {
postUploadAction = { type: 'SET_CANVAS_INITIAL_IMAGE' };
}
if (activeTabName === 'img2img') {
postUploadAction = { type: 'SET_INITIAL_IMAGE' };
}
return postUploadAction;
});

View File

@@ -67,7 +67,7 @@ export const useGlobalHotkeys = () => {
useHotkeys(
'1',
() => {
dispatch(setActiveTab('generation'));
dispatch(setActiveTab('txt2img'));
},
[dispatch]
);
@@ -75,7 +75,7 @@ export const useGlobalHotkeys = () => {
useHotkeys(
'2',
() => {
dispatch(setActiveTab('canvas'));
dispatch(setActiveTab('img2img'));
},
[dispatch]
);
@@ -83,23 +83,31 @@ export const useGlobalHotkeys = () => {
useHotkeys(
'3',
() => {
dispatch(setActiveTab('workflows'));
dispatch(setActiveTab('unifiedCanvas'));
},
[dispatch]
);
useHotkeys(
'4',
() => {
dispatch(setActiveTab('nodes'));
},
[dispatch]
);
useHotkeys(
'5',
() => {
if (isModelManagerEnabled) {
dispatch(setActiveTab('models'));
dispatch(setActiveTab('modelManager'));
}
},
[dispatch, isModelManagerEnabled]
);
useHotkeys(
isModelManagerEnabled ? '5' : '4',
isModelManagerEnabled ? '6' : '5',
() => {
dispatch(setActiveTab('queue'));
},

View File

@@ -13,7 +13,6 @@ type UseGroupedModelComboboxArg<T extends AnyModelConfig> = {
onChange: (value: T | null) => void;
getIsDisabled?: (model: T) => boolean;
isLoading?: boolean;
groupByType?: boolean;
};
type UseGroupedModelComboboxReturn = {
@@ -24,21 +23,17 @@ type UseGroupedModelComboboxReturn = {
noOptionsMessage: () => string;
};
const groupByBaseFunc = <T extends AnyModelConfig>(model: T) => model.base.toUpperCase();
const groupByBaseAndTypeFunc = <T extends AnyModelConfig>(model: T) =>
`${model.base.toUpperCase()} / ${model.type.replaceAll('_', ' ').toUpperCase()}`;
export const useGroupedModelCombobox = <T extends AnyModelConfig>(
arg: UseGroupedModelComboboxArg<T>
): UseGroupedModelComboboxReturn => {
const { t } = useTranslation();
const base_model = useAppSelector((s) => s.generation.model?.base ?? 'sdxl');
const { modelConfigs, selectedModel, getIsDisabled, onChange, isLoading, groupByType = false } = arg;
const { modelConfigs, selectedModel, getIsDisabled, onChange, isLoading } = arg;
const options = useMemo<GroupBase<ComboboxOption>[]>(() => {
if (!modelConfigs) {
return [];
}
const groupedModels = groupBy(modelConfigs, groupByType ? groupByBaseAndTypeFunc : groupByBaseFunc);
const groupedModels = groupBy(modelConfigs, 'base');
const _options = reduce(
groupedModels,
(acc, val, label) => {
@@ -54,9 +49,9 @@ export const useGroupedModelCombobox = <T extends AnyModelConfig>(
},
[] as GroupBase<ComboboxOption>[]
);
_options.sort((a) => (a.label?.split('/')[0]?.toLowerCase().includes(base_model) ? -1 : 1));
_options.sort((a) => (a.label === base_model ? -1 : 1));
return _options;
}, [modelConfigs, groupByType, getIsDisabled, base_model]);
}, [getIsDisabled, modelConfigs, base_model]);
const value = useMemo(
() =>

View File

@@ -1,4 +1,3 @@
import { useStore } from '@nanostores/react';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import {
@@ -6,231 +5,133 @@ import {
selectControlAdaptersSlice,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { isControlNetOrT2IAdapter } from 'features/controlAdapters/store/types';
import { selectControlLayersSlice } from 'features/controlLayers/store/controlLayersSlice';
import type { Layer } from 'features/controlLayers/store/types';
import { selectDynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { getShouldProcessPrompt } from 'features/dynamicPrompts/util/getShouldProcessPrompt';
import { $templates, selectNodesSlice } from 'features/nodes/store/nodesSlice';
import type { Templates } from 'features/nodes/store/types';
import { selectWorkflowSettingsSlice } from 'features/nodes/store/workflowSettingsSlice';
import { selectNodesSlice } from 'features/nodes/store/nodesSlice';
import { isInvocationNode } from 'features/nodes/types/invocation';
import { selectGenerationSlice } from 'features/parameters/store/generationSlice';
import { selectSystemSlice } from 'features/system/store/systemSlice';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import i18n from 'i18next';
import { forEach, upperFirst } from 'lodash-es';
import { useMemo } from 'react';
import { forEach } from 'lodash-es';
import { getConnectedEdges } from 'reactflow';
const LAYER_TYPE_TO_TKEY: Record<Layer['type'], string> = {
initial_image_layer: 'controlLayers.globalInitialImage',
control_adapter_layer: 'controlLayers.globalControlAdapter',
ip_adapter_layer: 'controlLayers.globalIPAdapter',
regional_guidance_layer: 'controlLayers.regionalGuidance',
};
const selector = createMemoizedSelector(
[
selectControlAdaptersSlice,
selectGenerationSlice,
selectSystemSlice,
selectNodesSlice,
selectDynamicPromptsSlice,
activeTabNameSelector,
],
(controlAdapters, generation, system, nodes, dynamicPrompts, activeTabName) => {
const { initialImage, model, positivePrompt } = generation;
const createSelector = (templates: Templates) =>
createMemoizedSelector(
[
selectControlAdaptersSlice,
selectGenerationSlice,
selectSystemSlice,
selectNodesSlice,
selectWorkflowSettingsSlice,
selectDynamicPromptsSlice,
selectControlLayersSlice,
activeTabNameSelector,
],
(controlAdapters, generation, system, nodes, workflowSettings, dynamicPrompts, controlLayers, activeTabName) => {
const { model } = generation;
const { size } = controlLayers.present;
const { positivePrompt } = controlLayers.present;
const { isConnected } = system;
const { isConnected } = system;
const reasons: string[] = [];
const reasons: { prefix?: string; content: string }[] = [];
// Cannot generate if not connected
if (!isConnected) {
reasons.push(i18n.t('parameters.invoke.systemDisconnected'));
}
// Cannot generate if not connected
if (!isConnected) {
reasons.push({ content: i18n.t('parameters.invoke.systemDisconnected') });
}
if (activeTabName === 'img2img' && !initialImage) {
reasons.push(i18n.t('parameters.invoke.noInitialImageSelected'));
}
if (activeTabName === 'workflows') {
if (workflowSettings.shouldValidateGraph) {
if (!nodes.nodes.length) {
reasons.push({ content: i18n.t('parameters.invoke.noNodesInGraph') });
if (activeTabName === 'nodes') {
if (nodes.shouldValidateGraph) {
if (!nodes.nodes.length) {
reasons.push(i18n.t('parameters.invoke.noNodesInGraph'));
}
nodes.nodes.forEach((node) => {
if (!isInvocationNode(node)) {
return;
}
nodes.nodes.forEach((node) => {
if (!isInvocationNode(node)) {
const nodeTemplate = nodes.templates[node.data.type];
if (!nodeTemplate) {
// Node type not found
reasons.push(i18n.t('parameters.invoke.missingNodeTemplate'));
return;
}
const connectedEdges = getConnectedEdges([node], nodes.edges);
forEach(node.data.inputs, (field) => {
const fieldTemplate = nodeTemplate.inputs[field.name];
const hasConnection = connectedEdges.some(
(edge) => edge.target === node.id && edge.targetHandle === field.name
);
if (!fieldTemplate) {
reasons.push(i18n.t('parameters.invoke.missingFieldTemplate'));
return;
}
const nodeTemplate = templates[node.data.type];
if (!nodeTemplate) {
// Node type not found
reasons.push({ content: i18n.t('parameters.invoke.missingNodeTemplate') });
return;
}
const connectedEdges = getConnectedEdges([node], nodes.edges);
forEach(node.data.inputs, (field) => {
const fieldTemplate = nodeTemplate.inputs[field.name];
const hasConnection = connectedEdges.some(
(edge) => edge.target === node.id && edge.targetHandle === field.name
if (fieldTemplate.required && field.value === undefined && !hasConnection) {
reasons.push(
i18n.t('parameters.invoke.missingInputForField', {
nodeLabel: node.data.label || nodeTemplate.title,
fieldLabel: field.label || fieldTemplate.title,
})
);
if (!fieldTemplate) {
reasons.push({ content: i18n.t('parameters.invoke.missingFieldTemplate') });
return;
}
if (fieldTemplate.required && field.value === undefined && !hasConnection) {
reasons.push({
content: i18n.t('parameters.invoke.missingInputForField', {
nodeLabel: node.data.label || nodeTemplate.title,
fieldLabel: field.label || fieldTemplate.title,
}),
});
return;
}
});
return;
}
});
}
} else {
if (dynamicPrompts.prompts.length === 0 && getShouldProcessPrompt(positivePrompt)) {
reasons.push({ content: i18n.t('parameters.invoke.noPrompts') });
}
if (!model) {
reasons.push({ content: i18n.t('parameters.invoke.noModelSelected') });
}
if (activeTabName === 'generation') {
// Handling for generation tab
controlLayers.present.layers
.filter((l) => l.isEnabled)
.forEach((l, i) => {
const layerLiteral = i18n.t('controlLayers.layers_one');
const layerNumber = i + 1;
const layerType = i18n.t(LAYER_TYPE_TO_TKEY[l.type]);
const prefix = `${layerLiteral} #${layerNumber} (${layerType})`;
const problems: string[] = [];
if (l.type === 'control_adapter_layer') {
// Must have model
if (!l.controlAdapter.model) {
problems.push(i18n.t('parameters.invoke.layer.controlAdapterNoModelSelected'));
}
// Model base must match
if (l.controlAdapter.model?.base !== model?.base) {
problems.push(i18n.t('parameters.invoke.layer.controlAdapterIncompatibleBaseModel'));
}
// Must have a control image OR, if it has a processor, it must have a processed image
if (!l.controlAdapter.image) {
problems.push(i18n.t('parameters.invoke.layer.controlAdapterNoImageSelected'));
} else if (l.controlAdapter.processorConfig && !l.controlAdapter.processedImage) {
problems.push(i18n.t('parameters.invoke.layer.controlAdapterImageNotProcessed'));
}
                // T2I Adapters require image dimensions that are multiples of 64 (SD1.5) or 32 (SDXL)
if (l.controlAdapter.type === 't2i_adapter') {
const multiple = model?.base === 'sdxl' ? 32 : 64;
if (size.width % multiple !== 0 || size.height % multiple !== 0) {
problems.push(i18n.t('parameters.invoke.layer.t2iAdapterIncompatibleDimensions', { multiple }));
}
}
}
if (l.type === 'ip_adapter_layer') {
// Must have model
if (!l.ipAdapter.model) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterNoModelSelected'));
}
// Model base must match
if (l.ipAdapter.model?.base !== model?.base) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterIncompatibleBaseModel'));
}
// Must have an image
if (!l.ipAdapter.image) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterNoImageSelected'));
}
}
if (l.type === 'initial_image_layer') {
// Must have an image
if (!l.image) {
problems.push(i18n.t('parameters.invoke.layer.initialImageNoImageSelected'));
}
}
if (l.type === 'regional_guidance_layer') {
// Must have a region
if (l.maskObjects.length === 0) {
problems.push(i18n.t('parameters.invoke.layer.rgNoRegion'));
}
// Must have at least 1 prompt or IP Adapter
if (l.positivePrompt === null && l.negativePrompt === null && l.ipAdapters.length === 0) {
problems.push(i18n.t('parameters.invoke.layer.rgNoPromptsOrIPAdapters'));
}
l.ipAdapters.forEach((ipAdapter) => {
// Must have model
if (!ipAdapter.model) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterNoModelSelected'));
}
// Model base must match
if (ipAdapter.model?.base !== model?.base) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterIncompatibleBaseModel'));
}
// Must have an image
if (!ipAdapter.image) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterNoImageSelected'));
}
});
}
if (problems.length) {
const content = upperFirst(problems.join(', '));
reasons.push({ prefix, content });
}
});
} else {
// Handling for all other tabs
selectControlAdapterAll(controlAdapters)
.filter((ca) => ca.isEnabled)
.forEach((ca, i) => {
if (!ca.isEnabled) {
return;
}
if (!ca.model) {
reasons.push({ content: i18n.t('parameters.invoke.noModelForControlAdapter', { number: i + 1 }) });
} else if (ca.model.base !== model?.base) {
// This should never happen, just a sanity check
reasons.push({
content: i18n.t('parameters.invoke.incompatibleBaseModelForControlAdapter', { number: i + 1 }),
});
}
if (
!ca.controlImage ||
(isControlNetOrT2IAdapter(ca) && !ca.processedControlImage && ca.processorType !== 'none')
) {
reasons.push({
content: i18n.t('parameters.invoke.noControlImageForControlAdapter', { number: i + 1 }),
});
}
});
}
});
}
} else {
if (dynamicPrompts.prompts.length === 0 && getShouldProcessPrompt(positivePrompt)) {
reasons.push(i18n.t('parameters.invoke.noPrompts'));
}
return { isReady: !reasons.length, reasons };
if (!model) {
reasons.push(i18n.t('parameters.invoke.noModelSelected'));
}
selectControlAdapterAll(controlAdapters).forEach((ca, i) => {
if (!ca.isEnabled) {
return;
}
if (!ca.model) {
reasons.push(
i18n.t('parameters.invoke.noModelForControlAdapter', {
number: i + 1,
})
);
} else if (ca.model.base !== model?.base) {
// This should never happen, just a sanity check
reasons.push(
i18n.t('parameters.invoke.incompatibleBaseModelForControlAdapter', {
number: i + 1,
})
);
}
if (
!ca.controlImage ||
(isControlNetOrT2IAdapter(ca) && !ca.processedControlImage && ca.processorType !== 'none')
) {
reasons.push(
i18n.t('parameters.invoke.noControlImageForControlAdapter', {
number: i + 1,
})
);
}
});
}
);
return { isReady: !reasons.length, reasons };
}
);
export const useIsReadyToEnqueue = () => {
const templates = useStore($templates);
const selector = useMemo(() => createSelector(templates), [templates]);
const value = useAppSelector(selector);
return value;
const { isReady, reasons } = useAppSelector(selector);
return { isReady, reasons };
};
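
Because `templates` now lives in the `$templates` nanostore rather than in a Redux slice, the selector becomes a factory: the hook rebuilds the memoized selector whenever the store value changes and keeps it stable with `useMemo`. A stripped-down sketch of that pattern with an illustrative combiner (`createReadinessSelector` and `useMissingTemplateCount` are made-up names):

```ts
import { useStore } from '@nanostores/react';
import { useMemo } from 'react';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import { $templates, selectNodesSlice } from 'features/nodes/store/nodesSlice';
import type { Templates } from 'features/nodes/store/types';
import { isInvocationNode } from 'features/nodes/types/invocation';

// Factory: the external templates value is baked into the selector via closure.
const createReadinessSelector = (templates: Templates) =>
  createMemoizedSelector([selectNodesSlice], (nodes) =>
    // Illustrative combiner: count invocation nodes whose template is missing.
    nodes.nodes.filter((n) => isInvocationNode(n) && !templates[n.data.type]).length
  );

export const useMissingTemplateCount = () => {
  const templates = useStore($templates);
  // useMemo keeps the selector reference stable until templates actually changes,
  // so the memoized selector's cache is not discarded on every render.
  const selector = useMemo(() => createReadinessSelector(templates), [templates]);
  return useAppSelector(selector);
};
```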

View File

@@ -1,3 +0,0 @@
export const stopPropagation = (e: React.MouseEvent) => {
e.stopPropagation();
};

View File

@@ -21,6 +21,7 @@ import {
setShouldShowBoundingBox,
} from 'features/canvas/store/canvasSlice';
import type { CanvasLayer } from 'features/canvas/store/canvasTypes';
import { LAYER_NAMES_DICT } from 'features/canvas/store/canvasTypes';
import { memo, useCallback, useMemo } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { useTranslation } from 'react-i18next';
@@ -215,20 +216,13 @@ const IAICanvasToolbar = () => {
[dispatch, isMaskEnabled]
);
const layerOptions = useMemo<{ label: string; value: CanvasLayer }[]>(
() => [
{ label: t('unifiedCanvas.base'), value: 'base' },
{ label: t('unifiedCanvas.mask'), value: 'mask' },
],
[t]
);
const layerValue = useMemo(() => layerOptions.filter((o) => o.value === layer)[0] ?? null, [layer, layerOptions]);
const value = useMemo(() => LAYER_NAMES_DICT.filter((o) => o.value === layer)[0], [layer]);
return (
<Flex alignItems="center" gap={2} flexWrap="wrap">
<Tooltip label={`${t('unifiedCanvas.layer')} (Q)`}>
<FormControl isDisabled={isStaging} w="5rem">
<Combobox value={layerValue} options={layerOptions} onChange={handleChangeLayer} />
<Combobox value={value} options={LAYER_NAMES_DICT} onChange={handleChangeLayer} />
</FormControl>
</Tooltip>

View File

@@ -75,7 +75,7 @@ const useInpaintingCanvasHotkeys = () => {
const onKeyDown = useCallback(
(e: KeyboardEvent) => {
if (e.repeat || e.key !== ' ' || isInteractiveTarget(e.target) || activeTabName !== 'canvas') {
if (e.repeat || e.key !== ' ' || isInteractiveTarget(e.target) || activeTabName !== 'unifiedCanvas') {
return;
}
if ($toolStash.get() || $tool.get() === 'move') {
@@ -90,7 +90,7 @@ const useInpaintingCanvasHotkeys = () => {
);
const onKeyUp = useCallback(
(e: KeyboardEvent) => {
if (e.repeat || e.key !== ' ' || isInteractiveTarget(e.target) || activeTabName !== 'canvas') {
if (e.repeat || e.key !== ' ' || isInteractiveTarget(e.target) || activeTabName !== 'unifiedCanvas') {
return;
}
if (!$toolStash.get() || $tool.get() !== 'move') {

View File

@@ -8,7 +8,6 @@ import calculateScale from 'features/canvas/util/calculateScale';
import { STAGE_PADDING_PERCENTAGE } from 'features/canvas/util/constants';
import floorCoordinates from 'features/canvas/util/floorCoordinates';
import getScaledBoundingBoxDimensions from 'features/canvas/util/getScaledBoundingBoxDimensions';
import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
import { initialAspectRatioState } from 'features/parameters/components/ImageSize/constants';
import type { AspectRatioState } from 'features/parameters/components/ImageSize/types';
import { modelChanged } from 'features/parameters/store/generationSlice';
@@ -589,9 +588,8 @@ export const canvasSlice = createSlice({
},
extraReducers: (builder) => {
builder.addCase(modelChanged, (state, action) => {
const newModel = action.payload;
if (!newModel || action.meta.previousModel?.base === newModel.base) {
// Model was cleared or the base didn't change
if (action.meta.previousModel?.base === action.payload?.base) {
// The base model hasn't changed, we don't need to optimize the size
return;
}
const optimalDimension = getOptimalDimension(action.payload);
@@ -599,8 +597,14 @@ export const canvasSlice = createSlice({
if (getIsSizeOptimal(width, height, optimalDimension)) {
return;
}
const newSize = calculateNewSize(state.aspectRatio.value, optimalDimension * optimalDimension);
setBoundingBoxDimensionsReducer(state, newSize, optimalDimension);
setBoundingBoxDimensionsReducer(
state,
{
width,
height,
},
optimalDimension
);
});
builder.addCase(socketQueueItemStatusChanged, (state, action) => {
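
The case above reads `action.meta.previousModel`, and elsewhere in this compare `modelChanged` is dispatched with two arguments (`modelChanged(defaultModelInList, currentModel)`); together these imply a `prepare` callback on the action creator. The slice that defines it is not shown here; a simplified sketch of that shape:

```ts
import { createSlice } from '@reduxjs/toolkit';
import type { PayloadAction } from '@reduxjs/toolkit';

// Simplified stand-in for the real model identifier type.
type ModelIdentifier = { key: string; base: string } | null;

const generationSliceSketch = createSlice({
  name: 'generation',
  initialState: { model: null as ModelIdentifier },
  reducers: {
    modelChanged: {
      reducer: (state, action: PayloadAction<ModelIdentifier, string, { previousModel: ModelIdentifier }>) => {
        state.model = action.payload;
      },
      // prepare lets the creator accept (model, previousModel) and stash the latter in meta,
      // which is what lets canvasSlice's extraReducer compare old and new base models.
      prepare: (model: ModelIdentifier, previousModel: ModelIdentifier) => ({
        payload: model,
        meta: { previousModel },
      }),
    },
  },
});

export const { modelChanged } = generationSliceSketch.actions;
```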

View File

@@ -5,6 +5,11 @@ import { z } from 'zod';
export type CanvasLayer = 'base' | 'mask';
export const LAYER_NAMES_DICT: { label: string; value: CanvasLayer }[] = [
{ label: 'Base', value: 'base' },
{ label: 'Mask', value: 'mask' },
];
const zBoundingBoxScaleMethod = z.enum(['none', 'auto', 'manual']);
export type BoundingBoxScaleMethod = z.infer<typeof zBoundingBoxScaleMethod>;
export const isBoundingBoxScaleMethod = (v: unknown): v is BoundingBoxScaleMethod =>
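
The hunk cuts off before the guard's body; zod-enum guards like this are conventionally a one-liner over `safeParse`. A sketch of the usual shape (the actual body is not shown in this compare):

```ts
import { z } from 'zod';

const zBoundingBoxScaleMethod = z.enum(['none', 'auto', 'manual']);
export type BoundingBoxScaleMethod = z.infer<typeof zBoundingBoxScaleMethod>;

// Conventional zod type guard: parsing succeeds iff v is one of the enum members.
export const isBoundingBoxScaleMethod = (v: unknown): v is BoundingBoxScaleMethod =>
  zBoundingBoxScaleMethod.safeParse(v).success;

// e.g. isBoundingBoxScaleMethod('auto') === true, isBoundingBoxScaleMethod('foo') === false
```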

View File

@@ -76,7 +76,7 @@ const ControlAdapterConfig = (props: { id: string; number: number }) => {
<Box minW={0} w="full" transitionProperty="common" transitionDuration="0.1s">
<ParamControlAdapterModel id={id} />
</Box>
{activeTabName === 'canvas' && <ControlNetCanvasImageImports id={id} />}
{activeTabName === 'unifiedCanvas' && <ControlNetCanvasImageImports id={id} />}
<IconButton
size="sm"
tooltip={t('controlnet.duplicate')}

View File

@@ -13,10 +13,9 @@ import {
controlAdapterImageChanged,
selectControlAdaptersSlice,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { heightChanged, widthChanged } from 'features/controlLayers/store/controlLayersSlice';
import type { TypesafeDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
import { selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { heightChanged, selectOptimalDimension, widthChanged } from 'features/parameters/store/generationSlice';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { memo, useCallback, useEffect, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
@@ -93,16 +92,15 @@ const ControlAdapterImagePreview = ({ isSmall, id }: Props) => {
return;
}
if (activeTabName === 'canvas') {
if (activeTabName === 'unifiedCanvas') {
dispatch(setBoundingBoxDimensions({ width: controlImage.width, height: controlImage.height }, optimalDimension));
} else {
const options = { updateAspectRatio: true, clamp: true };
const { width, height } = calculateNewSize(
controlImage.width / controlImage.height,
optimalDimension * optimalDimension
);
dispatch(widthChanged({ width, ...options }));
dispatch(heightChanged({ height, ...options }));
dispatch(widthChanged(width));
dispatch(heightChanged(height));
}
}, [controlImage, activeTabName, dispatch, optimalDimension]);

View File

@@ -52,7 +52,7 @@ const ParamControlAdapterIPMethod = ({ id }: Props) => {
return (
<FormControl>
<InformationalPopover feature="controlNetResizeMode">
<InformationalPopover feature="ipAdapterMethod">
<FormLabel>{t('controlnet.ipAdapterMethod')}</FormLabel>
</InformationalPopover>
<Combobox value={value} options={options} isDisabled={!isEnabled} onChange={handleIPMethodChanged} />

View File

@@ -1,11 +1,12 @@
import type { PayloadAction, Update } from '@reduxjs/toolkit';
import { createEntityAdapter, createSlice } from '@reduxjs/toolkit';
import { createEntityAdapter, createSlice, isAnyOf } from '@reduxjs/toolkit';
import { getSelectorsOptions } from 'app/store/createMemoizedSelector';
import type { PersistConfig, RootState } from 'app/store/store';
import { deepClone } from 'common/util/deepClone';
import { buildControlAdapter } from 'features/controlAdapters/util/buildControlAdapter';
import { buildControlAdapterProcessor } from 'features/controlAdapters/util/buildControlAdapterProcessor';
import { zModelIdentifierField } from 'features/nodes/types/common';
import { maskLayerIPAdapterAdded } from 'features/regionalPrompts/store/regionalPromptsSlice';
import { merge, uniq } from 'lodash-es';
import type { ControlNetModelConfig, IPAdapterModelConfig, T2IAdapterModelConfig } from 'services/api/types';
import { socketInvocationError } from 'services/events/actions';
@@ -382,6 +383,10 @@ export const controlAdaptersSlice = createSlice({
builder.addCase(socketInvocationError, (state) => {
state.pendingControlImages = [];
});
builder.addCase(maskLayerIPAdapterAdded, (state, action) => {
caAdapter.addOne(state, buildControlAdapter(action.meta.uuid, 'ip_adapter'));
});
},
});
@@ -412,6 +417,8 @@ export const {
t2iAdaptersReset,
} = controlAdaptersSlice.actions;
export const isAnyControlAdapterAdded = isAnyOf(controlAdapterAdded, controlAdapterRecalled);
export const selectControlAdaptersSlice = (state: RootState) => state.controlAdapters;
/* eslint-disable-next-line @typescript-eslint/no-explicit-any */

View File

@@ -5,7 +5,22 @@ import type {
ParameterT2IAdapterModel,
} from 'features/parameters/types/parameterSchemas';
import type { components } from 'services/api/schema';
import type { Invocation } from 'services/api/types';
import type {
CannyImageProcessorInvocation,
ColorMapImageProcessorInvocation,
ContentShuffleImageProcessorInvocation,
DepthAnythingImageProcessorInvocation,
DWOpenposeImageProcessorInvocation,
HedImageProcessorInvocation,
LineartAnimeImageProcessorInvocation,
LineartImageProcessorInvocation,
MediapipeFaceProcessorInvocation,
MidasDepthImageProcessorInvocation,
MlsdImageProcessorInvocation,
NormalbaeImageProcessorInvocation,
PidiImageProcessorInvocation,
ZoeDepthImageProcessorInvocation,
} from 'services/api/types';
import type { O } from 'ts-toolbelt';
import { z } from 'zod';
@@ -13,20 +28,20 @@ import { z } from 'zod';
* Any ControlNet processor node
*/
export type ControlAdapterProcessorNode =
| Invocation<'canny_image_processor'>
| Invocation<'color_map_image_processor'>
| Invocation<'content_shuffle_image_processor'>
| Invocation<'depth_anything_image_processor'>
| Invocation<'hed_image_processor'>
| Invocation<'lineart_anime_image_processor'>
| Invocation<'lineart_image_processor'>
| Invocation<'mediapipe_face_processor'>
| Invocation<'midas_depth_image_processor'>
| Invocation<'mlsd_image_processor'>
| Invocation<'normalbae_image_processor'>
| Invocation<'dw_openpose_image_processor'>
| Invocation<'pidi_image_processor'>
| Invocation<'zoe_depth_image_processor'>;
| CannyImageProcessorInvocation
| ColorMapImageProcessorInvocation
| ContentShuffleImageProcessorInvocation
| DepthAnythingImageProcessorInvocation
| HedImageProcessorInvocation
| LineartAnimeImageProcessorInvocation
| LineartImageProcessorInvocation
| MediapipeFaceProcessorInvocation
| MidasDepthImageProcessorInvocation
| MlsdImageProcessorInvocation
| NormalbaeImageProcessorInvocation
| DWOpenposeImageProcessorInvocation
| PidiImageProcessorInvocation
| ZoeDepthImageProcessorInvocation;
/**
* Any ControlNet processor type
@@ -56,7 +71,7 @@ export const isControlAdapterProcessorType = (v: unknown): v is ControlAdapterPr
* The Canny processor node, with parameters flagged as required
*/
export type RequiredCannyImageProcessorInvocation = O.Required<
Invocation<'canny_image_processor'>,
CannyImageProcessorInvocation,
'type' | 'low_threshold' | 'high_threshold' | 'image_resolution' | 'detect_resolution'
>;
@@ -64,7 +79,7 @@ export type RequiredCannyImageProcessorInvocation = O.Required<
* The Color Map processor node, with parameters flagged as required
*/
export type RequiredColorMapImageProcessorInvocation = O.Required<
Invocation<'color_map_image_processor'>,
ColorMapImageProcessorInvocation,
'type' | 'color_map_tile_size'
>;
@@ -72,7 +87,7 @@ export type RequiredColorMapImageProcessorInvocation = O.Required<
* The ContentShuffle processor node, with parameters flagged as required
*/
export type RequiredContentShuffleImageProcessorInvocation = O.Required<
Invocation<'content_shuffle_image_processor'>,
ContentShuffleImageProcessorInvocation,
'type' | 'detect_resolution' | 'image_resolution' | 'w' | 'h' | 'f'
>;
@@ -80,7 +95,7 @@ export type RequiredContentShuffleImageProcessorInvocation = O.Required<
* The DepthAnything processor node, with parameters flagged as required
*/
export type RequiredDepthAnythingImageProcessorInvocation = O.Required<
Invocation<'depth_anything_image_processor'>,
DepthAnythingImageProcessorInvocation,
'type' | 'model_size' | 'resolution' | 'offload'
>;
@@ -93,7 +108,7 @@ export const isDepthAnythingModelSize = (v: unknown): v is DepthAnythingModelSiz
* The HED processor node, with parameters flagged as required
*/
export type RequiredHedImageProcessorInvocation = O.Required<
Invocation<'hed_image_processor'>,
HedImageProcessorInvocation,
'type' | 'detect_resolution' | 'image_resolution' | 'scribble'
>;
@@ -101,7 +116,7 @@ export type RequiredHedImageProcessorInvocation = O.Required<
* The Lineart Anime processor node, with parameters flagged as required
*/
export type RequiredLineartAnimeImageProcessorInvocation = O.Required<
Invocation<'lineart_anime_image_processor'>,
LineartAnimeImageProcessorInvocation,
'type' | 'detect_resolution' | 'image_resolution'
>;
@@ -109,7 +124,7 @@ export type RequiredLineartAnimeImageProcessorInvocation = O.Required<
* The Lineart processor node, with parameters flagged as required
*/
export type RequiredLineartImageProcessorInvocation = O.Required<
Invocation<'lineart_image_processor'>,
LineartImageProcessorInvocation,
'type' | 'detect_resolution' | 'image_resolution' | 'coarse'
>;
@@ -117,7 +132,7 @@ export type RequiredLineartImageProcessorInvocation = O.Required<
* The MediapipeFace processor node, with parameters flagged as required
*/
export type RequiredMediapipeFaceProcessorInvocation = O.Required<
Invocation<'mediapipe_face_processor'>,
MediapipeFaceProcessorInvocation,
'type' | 'max_faces' | 'min_confidence' | 'image_resolution' | 'detect_resolution'
>;
@@ -125,7 +140,7 @@ export type RequiredMediapipeFaceProcessorInvocation = O.Required<
* The MidasDepth processor node, with parameters flagged as required
*/
export type RequiredMidasDepthImageProcessorInvocation = O.Required<
Invocation<'midas_depth_image_processor'>,
MidasDepthImageProcessorInvocation,
'type' | 'a_mult' | 'bg_th' | 'image_resolution' | 'detect_resolution'
>;
@@ -133,7 +148,7 @@ export type RequiredMidasDepthImageProcessorInvocation = O.Required<
* The MLSD processor node, with parameters flagged as required
*/
export type RequiredMlsdImageProcessorInvocation = O.Required<
Invocation<'mlsd_image_processor'>,
MlsdImageProcessorInvocation,
'type' | 'detect_resolution' | 'image_resolution' | 'thr_v' | 'thr_d'
>;
@@ -141,7 +156,7 @@ export type RequiredMlsdImageProcessorInvocation = O.Required<
* The NormalBae processor node, with parameters flagged as required
*/
export type RequiredNormalbaeImageProcessorInvocation = O.Required<
Invocation<'normalbae_image_processor'>,
NormalbaeImageProcessorInvocation,
'type' | 'detect_resolution' | 'image_resolution'
>;
@@ -149,7 +164,7 @@ export type RequiredNormalbaeImageProcessorInvocation = O.Required<
* The DW Openpose processor node, with parameters flagged as required
*/
export type RequiredDWOpenposeImageProcessorInvocation = O.Required<
Invocation<'dw_openpose_image_processor'>,
DWOpenposeImageProcessorInvocation,
'type' | 'image_resolution' | 'draw_body' | 'draw_face' | 'draw_hands'
>;
@@ -157,14 +172,14 @@ export type RequiredDWOpenposeImageProcessorInvocation = O.Required<
* The Pidi processor node, with parameters flagged as required
*/
export type RequiredPidiImageProcessorInvocation = O.Required<
Invocation<'pidi_image_processor'>,
PidiImageProcessorInvocation,
'type' | 'detect_resolution' | 'image_resolution' | 'safe' | 'scribble'
>;
/**
* The ZoeDepth processor node, with parameters flagged as required
*/
export type RequiredZoeDepthImageProcessorInvocation = O.Required<Invocation<'zoe_depth_image_processor'>, 'type'>;
export type RequiredZoeDepthImageProcessorInvocation = O.Required<ZoeDepthImageProcessorInvocation, 'type'>;
/**
* Any ControlNet Processor node, with its parameters flagged as required

View File

@@ -1,47 +0,0 @@
import { Button, Menu, MenuButton, MenuItem, MenuList } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { useAddCALayer, useAddIILayer, useAddIPALayer } from 'features/controlLayers/hooks/addLayerHooks';
import { rgLayerAdded } from 'features/controlLayers/store/controlLayersSlice';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiPlusBold } from 'react-icons/pi';
export const AddLayerButton = memo(() => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const [addCALayer, isAddCALayerDisabled] = useAddCALayer();
const [addIPALayer, isAddIPALayerDisabled] = useAddIPALayer();
const [addIILayer, isAddIILayerDisabled] = useAddIILayer();
const addRGLayer = useCallback(() => {
dispatch(rgLayerAdded());
}, [dispatch]);
return (
<Menu>
<MenuButton
as={Button}
leftIcon={<PiPlusBold />}
variant="ghost"
data-testid="control-layers-add-layer-menu-button"
>
{t('controlLayers.addLayer')}
</MenuButton>
<MenuList>
<MenuItem icon={<PiPlusBold />} onClick={addRGLayer}>
{t('controlLayers.regionalGuidanceLayer')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addCALayer} isDisabled={isAddCALayerDisabled}>
{t('controlLayers.globalControlAdapterLayer')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addIPALayer} isDisabled={isAddIPALayerDisabled}>
{t('controlLayers.globalIPAdapterLayer')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addIILayer} isDisabled={isAddIILayerDisabled}>
{t('controlLayers.globalInitialImageLayer')}
</MenuItem>
</MenuList>
</Menu>
);
});
AddLayerButton.displayName = 'AddLayerButton';

View File

@@ -1,45 +0,0 @@
import { Flex, Spacer, useDisclosure } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { CALayerControlAdapterWrapper } from 'features/controlLayers/components/CALayer/CALayerControlAdapterWrapper';
import { LayerDeleteButton } from 'features/controlLayers/components/LayerCommon/LayerDeleteButton';
import { LayerMenu } from 'features/controlLayers/components/LayerCommon/LayerMenu';
import { LayerTitle } from 'features/controlLayers/components/LayerCommon/LayerTitle';
import { LayerVisibilityToggle } from 'features/controlLayers/components/LayerCommon/LayerVisibilityToggle';
import { LayerWrapper } from 'features/controlLayers/components/LayerCommon/LayerWrapper';
import { layerSelected, selectCALayerOrThrow } from 'features/controlLayers/store/controlLayersSlice';
import { memo, useCallback } from 'react';
import CALayerOpacity from './CALayerOpacity';
type Props = {
layerId: string;
};
export const CALayer = memo(({ layerId }: Props) => {
const dispatch = useAppDispatch();
const isSelected = useAppSelector((s) => selectCALayerOrThrow(s.controlLayers.present, layerId).isSelected);
const onClick = useCallback(() => {
dispatch(layerSelected(layerId));
}, [dispatch, layerId]);
const { isOpen, onToggle } = useDisclosure({ defaultIsOpen: true });
return (
<LayerWrapper onClick={onClick} borderColor={isSelected ? 'base.400' : 'base.800'}>
<Flex gap={3} alignItems="center" p={3} cursor="pointer" onDoubleClick={onToggle}>
<LayerVisibilityToggle layerId={layerId} />
<LayerTitle type="control_adapter_layer" />
<Spacer />
<CALayerOpacity layerId={layerId} />
<LayerMenu layerId={layerId} />
<LayerDeleteButton layerId={layerId} />
</Flex>
{isOpen && (
<Flex flexDir="column" gap={3} px={3} pb={3}>
<CALayerControlAdapterWrapper layerId={layerId} />
</Flex>
)}
</LayerWrapper>
);
});
CALayer.displayName = 'CALayer';

View File

@@ -1,121 +0,0 @@
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { ControlAdapter } from 'features/controlLayers/components/ControlAndIPAdapter/ControlAdapter';
import {
caLayerControlModeChanged,
caLayerImageChanged,
caLayerModelChanged,
caLayerProcessorConfigChanged,
caOrIPALayerBeginEndStepPctChanged,
caOrIPALayerWeightChanged,
selectCALayerOrThrow,
} from 'features/controlLayers/store/controlLayersSlice';
import type { ControlModeV2, ProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import type { CALayerImageDropData } from 'features/dnd/types';
import { memo, useCallback, useMemo } from 'react';
import type {
CALayerImagePostUploadAction,
ControlNetModelConfig,
ImageDTO,
T2IAdapterModelConfig,
} from 'services/api/types';
type Props = {
layerId: string;
};
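// Connects the generic ControlAdapter UI to the control layers slice, mapping each change callback to its layer-scoped action and providing the drop/post-upload metadata for this layer's image.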
export const CALayerControlAdapterWrapper = memo(({ layerId }: Props) => {
const dispatch = useAppDispatch();
const controlAdapter = useAppSelector((s) => selectCALayerOrThrow(s.controlLayers.present, layerId).controlAdapter);
const onChangeBeginEndStepPct = useCallback(
(beginEndStepPct: [number, number]) => {
dispatch(
caOrIPALayerBeginEndStepPctChanged({
layerId,
beginEndStepPct,
})
);
},
[dispatch, layerId]
);
const onChangeControlMode = useCallback(
(controlMode: ControlModeV2) => {
dispatch(
caLayerControlModeChanged({
layerId,
controlMode,
})
);
},
[dispatch, layerId]
);
const onChangeWeight = useCallback(
(weight: number) => {
dispatch(caOrIPALayerWeightChanged({ layerId, weight }));
},
[dispatch, layerId]
);
const onChangeProcessorConfig = useCallback(
(processorConfig: ProcessorConfig | null) => {
dispatch(caLayerProcessorConfigChanged({ layerId, processorConfig }));
},
[dispatch, layerId]
);
const onChangeModel = useCallback(
(modelConfig: ControlNetModelConfig | T2IAdapterModelConfig) => {
dispatch(
caLayerModelChanged({
layerId,
modelConfig,
})
);
},
[dispatch, layerId]
);
const onChangeImage = useCallback(
(imageDTO: ImageDTO | null) => {
dispatch(caLayerImageChanged({ layerId, imageDTO }));
},
[dispatch, layerId]
);
const droppableData = useMemo<CALayerImageDropData>(
() => ({
actionType: 'SET_CA_LAYER_IMAGE',
context: {
layerId,
},
id: layerId,
}),
[layerId]
);
const postUploadAction = useMemo<CALayerImagePostUploadAction>(
() => ({
layerId,
type: 'SET_CA_LAYER_IMAGE',
}),
[layerId]
);
return (
<ControlAdapter
controlAdapter={controlAdapter}
onChangeBeginEndStepPct={onChangeBeginEndStepPct}
onChangeControlMode={onChangeControlMode}
onChangeWeight={onChangeWeight}
onChangeProcessorConfig={onChangeProcessorConfig}
onChangeModel={onChangeModel}
onChangeImage={onChangeImage}
droppableData={droppableData}
postUploadAction={postUploadAction}
/>
);
});
CALayerControlAdapterWrapper.displayName = 'CALayerControlAdapterWrapper';

View File

@@ -1,98 +0,0 @@
import {
CompositeNumberInput,
CompositeSlider,
Flex,
FormControl,
FormLabel,
IconButton,
Popover,
PopoverArrow,
PopoverBody,
PopoverContent,
PopoverTrigger,
Switch,
} from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { stopPropagation } from 'common/util/stopPropagation';
import { useLayerOpacity } from 'features/controlLayers/hooks/layerStateHooks';
import { caLayerIsFilterEnabledChanged, caLayerOpacityChanged } from 'features/controlLayers/store/controlLayersSlice';
import type { ChangeEvent } from 'react';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiDropHalfFill } from 'react-icons/pi';
type Props = {
layerId: string;
};
const marks = [0, 25, 50, 75, 100];
const formatPct = (v: number | string) => `${v} %`;
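// Opacity popover for a control adapter layer: toggles the opacity filter and edits opacity as a percentage (the slider value is stored in state as a 0-1 fraction).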
const CALayerOpacity = ({ layerId }: Props) => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const { opacity, isFilterEnabled } = useLayerOpacity(layerId);
const onChangeOpacity = useCallback(
(v: number) => {
dispatch(caLayerOpacityChanged({ layerId, opacity: v / 100 }));
},
[dispatch, layerId]
);
const onChangeFilter = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
dispatch(caLayerIsFilterEnabledChanged({ layerId, isFilterEnabled: e.target.checked }));
},
[dispatch, layerId]
);
return (
<Popover isLazy>
<PopoverTrigger>
<IconButton
aria-label={t('controlLayers.opacity')}
size="sm"
icon={<PiDropHalfFill size={16} />}
variant="ghost"
onDoubleClick={stopPropagation}
/>
</PopoverTrigger>
<PopoverContent onDoubleClick={stopPropagation}>
<PopoverArrow />
<PopoverBody>
<Flex direction="column" gap={2}>
<FormControl orientation="horizontal" w="full">
<FormLabel m={0} flexGrow={1} cursor="pointer">
{t('controlLayers.opacityFilter')}
</FormLabel>
<Switch isChecked={isFilterEnabled} onChange={onChangeFilter} />
</FormControl>
<FormControl orientation="horizontal">
<FormLabel m={0}>{t('controlLayers.opacity')}</FormLabel>
<CompositeSlider
min={0}
max={100}
step={1}
value={opacity}
defaultValue={100}
onChange={onChangeOpacity}
marks={marks}
w={48}
/>
<CompositeNumberInput
min={0}
max={100}
step={1}
value={opacity}
defaultValue={100}
onChange={onChangeOpacity}
w={24}
format={formatPct}
/>
</FormControl>
</Flex>
</PopoverBody>
</PopoverContent>
</Popover>
);
};
export default memo(CALayerOpacity);

Some files were not shown because too many files have changed in this diff.