Compare commits


2 Commits

Author · SHA1 · Message · Date
Ryan Dick · 3f2dedceec · Remove unused check_internet() function. · 2024-04-25 17:46:39 -04:00
Ryan Dick · 0bdf7f5726 · Do less stuff on import of api_app.py. Instead, call functions imperatively. · 2024-04-25 17:45:54 -04:00
241 changed files with 7829 additions and 12195 deletions

View File

@@ -12,7 +12,7 @@
Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry leading web-based UI, and serves as the foundation for multiple commercial products.
[Installation and Updates][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs]
[Installation][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs]
<div align="center">

View File

@@ -51,11 +51,13 @@ The settings in this file will override the defaults. You only need
to change this file if the default for a particular setting doesn't
work for you.
You'll find an example file next to `invokeai.yaml` that shows the default values.
Some settings, like [Model Marketplace API Keys], require the YAML
to be formatted correctly. Here is a [basic guide to YAML files].
You can fix a broken `invokeai.yaml` by deleting it and running the
configuration script again -- option [6] in the launcher, "Re-run the
configure script".
#### Custom Config File Location
You can use any config file with the `--config` CLI arg. Pass in the path to the `invokeai.yaml` file you want to use.

View File

@@ -4,6 +4,278 @@ title: Training
# :material-file-document: Training
Invoke Training has moved to its own repository, with a dedicated UI for accessing common scripts like Textual Inversion and LoRA training.
# Textual Inversion Training
## **Personalizing Text-to-Image Generation**
You can find more by visiting the repo at https://github.com/invoke-ai/invoke-training
You may personalize the generated images with your own styles or objects
by training a new LDM checkpoint and introducing a new vocabulary to the fixed
model as a (.pt) embeddings file. Alternatively, you may use or train
HuggingFace Concepts embeddings files (.bin) from
<https://huggingface.co/sd-concepts-library> and its associated
notebooks.
## **Hardware and Software Requirements**
You will need a GPU to perform training in a reasonable length of
time, and at least 12 GB of VRAM. We recommend using the [`xformers`
library](../installation/070_INSTALL_XFORMERS.md) to accelerate the
training process further. During training, about 8 GB is temporarily
needed in order to store intermediate models, checkpoints and logs.
## **Preparing for Training**
To train, prepare a folder that contains 3-5 images that illustrate
the object or concept. It is good to provide a variety of examples or
poses to avoid overtraining the system. Format these images as PNG
(preferred) or JPG. You do not need to resize or crop the images in
advance, but for more control you may wish to do so.
Place the training images in a directory on the machine InvokeAI runs
on. We recommend placing them in a subdirectory of the
`text-inversion-training-data` folder located in the InvokeAI root
directory, ordinarily `~/invokeai` (Linux/Mac), or
`C:\Users\your_name\invokeai` (Windows). For example, to create an
embedding for the "psychedelic" style, you'd place the training images
into the directory
`~/invokeai/text-inversion-training-data/psychedelic`.
## **Launching Training Using the Console Front End**
InvokeAI 2.3 and higher comes with a text console-based training front
end. From within the `invoke.sh`/`invoke.bat` Invoke launcher script,
start the training tool by selecting choice (3):
```sh
1 "Generate images with a browser-based interface"
2 "Explore InvokeAI nodes using a command-line interface"
3 "Textual inversion training"
4 "Merge models (diffusers type only)"
5 "Download and install models"
6 "Change InvokeAI startup options"
7 "Re-run the configure script to fix a broken install or to complete a major upgrade"
8 "Open the developer console"
9 "Update InvokeAI"
```
Alternatively, you can select option (8) to open the developer console, or activate the InvokeAI virtual environment from the command line;
you can then launch the front end with the command `invokeai-ti --gui`.
This will launch a text-based front end that will look like this:
<figure markdown>
![ti-frontend](../assets/textual-inversion/ti-frontend.png)
</figure>
The interface is keyboard-based. Move from field to field using
control-N (^N) to move to the next field and control-P (^P) to the
previous one. <Tab> and <shift-TAB> work as well. Once a field is
active, use the cursor keys. In a checkbox group, use the up and down
cursor keys to move from choice to choice, and <space> to select a
choice. In a scrollbar, use the left and right cursor keys to increase
and decrease the value of the scroll. In textfields, type the desired
values.
The number of parameters may look intimidating, but in most cases the
predefined defaults work fine. The red circled fields in the above
illustration are the ones you will adjust most frequently.
### Model Name
This will list all the diffusers models that are currently
installed. Select the one you wish to use as the basis for your
embedding. Be aware that if you use an SD-1.X-based model for your
training, you will only be able to use this embedding with other
SD-1.X-based models. Similarly, if you train on SD-2.X, you will only
be able to use the embeddings with models based on SD-2.X.
### Trigger Term
This is the prompt term you will use to trigger the embedding. Type a
single word or phrase you wish to use as the trigger, for example
"psychedelic" (without angle brackets). Within InvokeAI, you will then
be able to activate the trigger using the syntax `<psychedelic>`.
### Initializer
This is a single character that is used internally during the training
process as a placeholder for the trigger term. It defaults to "*" and
can usually be left alone.
### Resume from last saved checkpoint
As training proceeds, textual inversion will write a series of
intermediate files that can be used to resume training from where it
was left off in the case of an interruption. This checkbox will be
automatically selected if you provide a previously used trigger term
and at least one checkpoint file is found on disk.
Note that as of 20 January 2023, resume does not seem to be working
properly due to an issue with the upstream code.
### Data Training Directory
This is the location of the images to be used for training. When you
select a trigger term like "my-trigger", the frontend will prepopulate
this field with `~/invokeai/text-inversion-training-data/my-trigger`,
but you can change the path to wherever you want.
### Output Destination Directory
This is the location of the logs, checkpoint files, and embedding
files created during training. When you select a trigger term like
"my-trigger", the frontend will prepopulate this field with
`~/invokeai/text-inversion-output/my-trigger`, but you can change the
path to wherever you want.
### Image resolution
The images in the training directory will be automatically scaled to
the value you use here. For best results, you will want to use the
same default resolution as the underlying model (512 pixels for
SD-1.5, 768 for the larger version of SD-2.1).
### Center crop images
If this is selected, your images will be center cropped to make them
square before resizing them to the desired resolution. Center cropping
can indiscriminately cut off the top of subjects' heads for portrait
aspect images, so if you have images like this, you may wish to use a
photoeditor to manually crop them to a square aspect ratio.
### Mixed precision
Select the floating point precision for the embedding. "no" will
result in full 32-bit precision, "fp16" will provide 16-bit
precision, and "bf16" will provide mixed precision (only available
when XFormers is used).
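As a rough illustration, these choices typically map to torch dtypes as in the sketch below; the exact mapping inside the training script is an assumption based on common diffusers/accelerate conventions, not InvokeAI-specific documentation.
```python
import torch

# Hedged sketch: typical mapping from the "Mixed precision" choice to a
# torch dtype in diffusers/accelerate-style training scripts (assumed).
PRECISION_TO_DTYPE = {
    "no": torch.float32,     # full 32-bit precision
    "fp16": torch.float16,   # 16-bit half precision
    "bf16": torch.bfloat16,  # bfloat16 (requires hardware/XFormers support)
}

weight_dtype = PRECISION_TO_DTYPE["fp16"]
```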
### Max training steps
The maximum number of steps the training run will take. Most
training sets will converge within 2000-3000 steps.
### Batch size
This adjusts how many training images are processed simultaneously in
each step. Higher values will cause the training process to run more
quickly, but use more memory. The default size will run on GPUs with
as little as 12 GB of VRAM.
### Learning rate
The rate at which the system adjusts its internal weights during
training. Higher values risk overtraining (getting the same image each
time), and lower values will take more steps to train a good
model. The default of 0.0005 is conservative; you may wish to increase
it to 0.005 to speed up training.
### Scale learning rate by number of GPUs, steps and batch size
If this is selected (the default) the system will adjust the provided
learning rate to improve performance.
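In diffusers-style training scripts this scaling is conventionally a simple multiplication, as in the minimal sketch below; whether InvokeAI's frontend applies exactly this formula is an assumption.
```python
# Hedged sketch of the conventional --scale_lr adjustment in diffusers-style
# textual inversion scripts (assumed, not verified against InvokeAI's frontend).
def scaled_lr(base_lr: float, batch_size: int, grad_accum_steps: int, num_gpus: int) -> float:
    return base_lr * batch_size * grad_accum_steps * num_gpus

# With the defaults used elsewhere in this document:
print(scaled_lr(0.0005, batch_size=8, grad_accum_steps=4, num_gpus=1))  # 0.016
```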
### Use xformers acceleration
This will activate XFormers memory-efficient attention. You need to
have XFormers installed for this to have an effect.
### Learning rate scheduler
This adjusts how the learning rate changes over the course of
training. The default "constant" means to use a constant learning rate
for the entire training session. The other values scale the learning
rate according to various formulas.
Only "constant" is supported by the XFormers library.
### Gradient accumulation steps
This parameter allows you to use a larger effective batch size than
your GPU's VRAM would ordinarily accommodate, at the cost of some
performance.
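The effective batch size is simply the per-step batch size multiplied by the number of accumulation steps; a minimal worked example:
```python
# Minimal worked example: gradient accumulation trades extra forward/backward
# passes for a larger effective batch size without using more VRAM.
train_batch_size = 8             # images per forward/backward pass
gradient_accumulation_steps = 4  # passes accumulated before each optimizer step

effective_batch_size = train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 32 images contribute to each weight update
```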
### Warmup steps
If "constant_with_warmup" is selected in the learning rate scheduler,
then this provides the number of warmup steps. Warmup steps have a
very low learning rate, and are one way of preventing early
overtraining.
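A constant-with-warmup schedule typically ramps the learning rate linearly from zero before holding it constant. Below is a minimal sketch; the linear ramp is an assumption based on the diffusers scheduler of the same name.
```python
# Hedged sketch of a "constant_with_warmup" learning rate schedule
# (linear ramp assumed, following diffusers' scheduler of that name).
def lr_at_step(step: int, base_lr: float = 0.0005, warmup_steps: int = 100) -> float:
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # very low LR early in training
    return base_lr  # constant for the rest of the run

print(lr_at_step(10))   # 5e-05, still warming up
print(lr_at_step(500))  # 0.0005, full learning rate
```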
## The training run
Start the training run by advancing to the OK button (bottom right)
and pressing <enter>. A series of progress messages will be displayed
as the training process proceeds. This may take an hour or two,
depending on settings and the speed of your system. Various log and
checkpoint files will be written into the output directory (ordinarily
`~/invokeai/text-inversion-output/my-model/`).
At the end of successful training, the system will copy the file
`learned_embeds.bin` into the InvokeAI root directory's `embeddings`
directory, using a subdirectory named after the trigger token. For
example, if the trigger token was `psychedelic`, then look for the
embeddings file in
`~/invokeai/embeddings/psychedelic/learned_embeds.bin`.
You may now launch InvokeAI and try out a prompt that uses the trigger
term, for example `a plate of banana sushi in <psychedelic> style`.
## **Training with the Command-Line Script**
Training can also be done using a traditional command-line script. It
can be launched from within the "developer's console", or from the
command line after activating InvokeAI's virtual environment.
It accepts a large number of arguments, which can be summarized by
passing the `--help` argument:
```sh
invokeai-ti --help
```
Typical usage is shown here:
```sh
invokeai-ti \
--model=stable-diffusion-1.5 \
--resolution=512 \
--learnable_property=style \
--initializer_token='*' \
--placeholder_token='<psychedelic>' \
--train_data_dir=/home/lstein/invokeai/training-data/psychedelic \
--output_dir=/home/lstein/invokeai/text-inversion-training/psychedelic \
--scale_lr \
--train_batch_size=8 \
--gradient_accumulation_steps=4 \
--max_train_steps=3000 \
--learning_rate=0.0005 \
--resume_from_checkpoint=latest \
--lr_scheduler=constant \
--mixed_precision=fp16 \
--only_save_embeds
```
## Troubleshooting
### `Cannot load embedding for <trigger>. It was trained on a model with token dimension 1024, but the current model has token dimension 768`
Messages like this indicate you trained the embedding on a different base model than the currently selected one.
For example, in the error above, the training was done on SD2.1 (768x768) but it was used on SD1.5 (512x512).
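If you are unsure which base model an embedding was trained on, you can inspect its token dimension directly. A minimal sketch, assuming the file is a standard diffusers-style textual inversion checkpoint that maps trigger tokens to tensors:
```python
import torch

# Hedged sketch: inspect an embedding file's token dimension.
# 768 suggests an SD-1.x base model; 1024 suggests SD-2.x.
embeds = torch.load("learned_embeds.bin", map_location="cpu")
for trigger, tensor in embeds.items():
    print(f"{trigger}: token dimension {tensor.shape[-1]}")
```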
## Reading
For more information on textual inversion, please see the following
resources:
* The [textual inversion repository](https://github.com/rinongal/textual_inversion) and
associated paper for details and limitations.
* [HuggingFace's textual inversion training
page](https://huggingface.co/docs/diffusers/training/text_inversion)
* [HuggingFace example script
documentation](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion)
(Note that this script is similar to, but not identical to,
`textual_inversion`, and produces embedding files that are completely compatible.)
---
Copyright (c) 2023, Lincoln Stein and the InvokeAI Development Team

View File

@@ -1,10 +1,8 @@
# Automatic Install & Updates
# Automatic Install
**The same packaged installer file can be used for both new installs and updates.**
Using the installer for updates will leave everything you've added since installation, and just update the core libraries used to run Invoke.
Simply use the same path you installed to originally.
The installer is used for both new installs and updates.
Both release and pre-release versions can be installed using the installer. It also supports install through a wheel if needed.
Both release and pre-release versions can be installed using it. It also supports installing from a wheel if needed.
Be sure to review the [installation requirements] and ensure your system has everything it needs to install Invoke.

View File

@@ -1,4 +1,4 @@
# Installation and Updating Overview
# Installation Overview
Before installing, review the [installation requirements] to ensure your system is set up properly.
@@ -6,21 +6,14 @@ See the [FAQ] for frequently-encountered installation issues.
If you need more help, join our [discord] or [create an issue].
<h2>Automatic Install & Updates</h2>
<h2>Automatic Install</h2>
✅ The automatic install is the best way to run InvokeAI. Check out the [installation guide] to get started.
⬆️ The same installer is also the best way to update InvokeAI: simply rerun it for the same folder you installed to.
The installation process simply manages installation for the core libraries & application dependencies that run Invoke.
Any models, images, or other assets in the Invoke root folder won't be affected by the installation process.
<h2>Manual Install</h2>
If you are familiar with python and want more control over the packages that are installed, you can [install InvokeAI manually via PyPI].
Updates are managed by reinstalling the latest version through PyPI.
<h2>Developer Install</h2>
If you want to contribute to InvokeAI, consult the [developer install guide].

View File

@@ -35,23 +35,6 @@ from ..services.urls.urls_default import LocalUrlService
from ..services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
from .events import FastAPIEventService
# TODO: is there a better way to achieve this?
def check_internet() -> bool:
"""
Return true if the internet is reachable.
It does this by pinging huggingface.co.
"""
import urllib.request
host = "http://huggingface.co"
try:
urllib.request.urlopen(host, timeout=1)
return True
except Exception:
return False
logger = InvokeAILogger.get_logger()

View File

@@ -2,6 +2,7 @@ import asyncio
import logging
import mimetypes
import socket
import time
from contextlib import asynccontextmanager
from inspect import signature
from pathlib import Path
@@ -20,13 +21,10 @@ from fastapi_events.middleware import EventHandlerASGIMiddleware
from pydantic.json_schema import models_json_schema
from torch.backends.mps import is_available as is_mps_available
# for PyCharm:
# noinspection PyUnresolvedReferences
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
import invokeai.frontend.web as web_dir
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.config.config_default import get_config
from invokeai.app.services.config.config_default import InvokeAIAppConfig, get_config
from invokeai.app.services.session_processor.session_processor_common import ProgressImage
from invokeai.backend.util.devices import TorchDevice
@@ -44,188 +42,189 @@ from .api.routers import (
workflows,
)
from .api.sockets import SocketIO
from .invocations.baseinvocation import (
BaseInvocation,
UIConfigBase,
)
from .invocations.baseinvocation import BaseInvocation, UIConfigBase
from .invocations.fields import InputFieldJSONSchemaExtra, OutputFieldJSONSchemaExtra
app_config = get_config()
# TODO(ryand): Search for imports from api_app.py in the rest of the codebase and make sure I didn't break any of them.
def build_app(app_config: InvokeAIAppConfig, logger: logging.Logger) -> FastAPI:
@asynccontextmanager
async def lifespan(app: FastAPI):
# Add startup event to load dependencies
ApiDependencies.initialize(config=app_config, event_handler_id=event_handler_id, logger=logger)
yield
# Shut down threads
ApiDependencies.shutdown()
app = FastAPI(
title="Invoke - Community Edition",
docs_url=None,
redoc_url=None,
separate_input_output_schemas=False,
lifespan=lifespan,
)
# Add event handler
event_handler_id: int = id(app)
app.add_middleware(
EventHandlerASGIMiddleware,
handlers=[local_handler], # TODO: consider doing this in services to support different configurations
middleware_id=event_handler_id,
)
socket_io = SocketIO(app)
app.add_middleware(
CORSMiddleware,
allow_origins=app_config.allow_origins,
allow_credentials=app_config.allow_credentials,
allow_methods=app_config.allow_methods,
allow_headers=app_config.allow_headers,
)
app.add_middleware(GZipMiddleware, minimum_size=1000)
# Include all routers
app.include_router(utilities.utilities_router, prefix="/api")
app.include_router(model_manager.model_manager_router, prefix="/api")
app.include_router(download_queue.download_queue_router, prefix="/api")
app.include_router(images.images_router, prefix="/api")
app.include_router(boards.boards_router, prefix="/api")
app.include_router(board_images.board_images_router, prefix="/api")
app.include_router(app_info.app_router, prefix="/api")
app.include_router(session_queue.session_queue_router, prefix="/api")
app.include_router(workflows.workflows_router, prefix="/api")
add_custom_openapi(app)
@app.get("/docs", include_in_schema=False)
def overridden_swagger() -> HTMLResponse:
return get_swagger_ui_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=f"{app.title} - Swagger UI",
swagger_favicon_url="static/docs/invoke-favicon-docs.svg",
)
@app.get("/redoc", include_in_schema=False)
def overridden_redoc() -> HTMLResponse:
return get_redoc_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=f"{app.title} - Redoc",
redoc_favicon_url="static/docs/invoke-favicon-docs.svg",
)
web_root_path = Path(list(web_dir.__path__)[0])
try:
app.mount("/", NoCacheStaticFiles(directory=Path(web_root_path, "dist"), html=True), name="ui")
except RuntimeError:
logger.warn(f"No UI found at {web_root_path}/dist, skipping UI mount")
app.mount(
"/static", NoCacheStaticFiles(directory=Path(web_root_path, "static/")), name="static"
) # docs favicon is in here
return app
if is_mps_available():
import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)
def apply_monkeypatches() -> None:
# TODO(ryand): Don't monkeypatch on import!
if is_mps_available():
import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
logger = InvokeAILogger.get_logger(config=app_config)
# fix for windows mimetypes registry entries being borked
# see https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
mimetypes.add_type("application/javascript", ".js")
mimetypes.add_type("text/css", ".css")
torch_device_name = TorchDevice.get_torch_device_name()
logger.info(f"Using torch device: {torch_device_name}")
def fix_mimetypes():
# fix for windows mimetypes registry entries being borked
# see https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
mimetypes.add_type("application/javascript", ".js")
mimetypes.add_type("text/css", ".css")
@asynccontextmanager
async def lifespan(app: FastAPI):
# Add startup event to load dependencies
ApiDependencies.initialize(config=app_config, event_handler_id=event_handler_id, logger=logger)
yield
# Shut down threads
ApiDependencies.shutdown()
def add_custom_openapi(app: FastAPI) -> None:
"""Add a custom .openapi() method to the FastAPI app.
This is done based on the guidance here:
https://fastapi.tiangolo.com/how-to/extending-openapi/#normal-fastapi
"""
# Create the app
# TODO: create this all in a method so configuration/etc. can be passed in?
app = FastAPI(
title="Invoke - Community Edition",
docs_url=None,
redoc_url=None,
separate_input_output_schemas=False,
lifespan=lifespan,
)
# Build a custom OpenAPI to include all outputs
# TODO: can outputs be included on metadata of invocation schemas somehow?
def custom_openapi() -> dict[str, Any]:
if app.openapi_schema:
return app.openapi_schema
openapi_schema = get_openapi(
title=app.title,
description="An API for invoking AI image operations",
version="1.0.0",
routes=app.routes,
separate_input_output_schemas=False, # https://fastapi.tiangolo.com/how-to/separate-openapi-schemas/
)
# Add event handler
event_handler_id: int = id(app)
app.add_middleware(
EventHandlerASGIMiddleware,
handlers=[local_handler], # TODO: consider doing this in services to support different configurations
middleware_id=event_handler_id,
)
# Add all outputs
all_invocations = BaseInvocation.get_invocations()
output_types = set()
output_type_titles = {}
for invoker in all_invocations:
output_type = signature(invoker.invoke).return_annotation
output_types.add(output_type)
socket_io = SocketIO(app)
output_schemas = models_json_schema(
models=[(o, "serialization") for o in output_types], ref_template="#/components/schemas/{model}"
)
for schema_key, output_schema in output_schemas[1]["$defs"].items():
# TODO: note that we assume the schema_key here is the TYPE.__name__
# This could break in some cases, figure out a better way to do it
output_type_titles[schema_key] = output_schema["title"]
openapi_schema["components"]["schemas"][schema_key] = output_schema
openapi_schema["components"]["schemas"][schema_key]["class"] = "output"
app.add_middleware(
CORSMiddleware,
allow_origins=app_config.allow_origins,
allow_credentials=app_config.allow_credentials,
allow_methods=app_config.allow_methods,
allow_headers=app_config.allow_headers,
)
# Some models don't end up in the schemas as standalone definitions
additional_schemas = models_json_schema(
[
(UIConfigBase, "serialization"),
(InputFieldJSONSchemaExtra, "serialization"),
(OutputFieldJSONSchemaExtra, "serialization"),
(ModelIdentifierField, "serialization"),
(ProgressImage, "serialization"),
],
ref_template="#/components/schemas/{model}",
)
for schema_key, schema_json in additional_schemas[1]["$defs"].items():
openapi_schema["components"]["schemas"][schema_key] = schema_json
app.add_middleware(GZipMiddleware, minimum_size=1000)
# Add a reference to the output type to additionalProperties of the invoker schema
for invoker in all_invocations:
invoker_name = invoker.__name__ # type: ignore [attr-defined] # this is a valid attribute
output_type = signature(obj=invoker.invoke).return_annotation
output_type_title = output_type_titles[output_type.__name__]
invoker_schema = openapi_schema["components"]["schemas"][f"{invoker_name}"]
outputs_ref = {"$ref": f"#/components/schemas/{output_type_title}"}
invoker_schema["output"] = outputs_ref
invoker_schema["class"] = "invocation"
# This code no longer seems to be necessary?
# Leave it here just in case
#
# from invokeai.backend.model_manager import get_model_config_formats
# formats = get_model_config_formats()
# for model_config_name, enum_set in formats.items():
# Include all routers
app.include_router(utilities.utilities_router, prefix="/api")
app.include_router(model_manager.model_manager_router, prefix="/api")
app.include_router(download_queue.download_queue_router, prefix="/api")
app.include_router(images.images_router, prefix="/api")
app.include_router(boards.boards_router, prefix="/api")
app.include_router(board_images.board_images_router, prefix="/api")
app.include_router(app_info.app_router, prefix="/api")
app.include_router(session_queue.session_queue_router, prefix="/api")
app.include_router(workflows.workflows_router, prefix="/api")
# if model_config_name in openapi_schema["components"]["schemas"]:
# # print(f"Config with name {name} already defined")
# continue
# openapi_schema["components"]["schemas"][model_config_name] = {
# "title": model_config_name,
# "description": "An enumeration.",
# "type": "string",
# "enum": [v.value for v in enum_set],
# }
# Build a custom OpenAPI to include all outputs
# TODO: can outputs be included on metadata of invocation schemas somehow?
def custom_openapi() -> dict[str, Any]:
if app.openapi_schema:
app.openapi_schema = openapi_schema
return app.openapi_schema
openapi_schema = get_openapi(
title=app.title,
description="An API for invoking AI image operations",
version="1.0.0",
routes=app.routes,
separate_input_output_schemas=False, # https://fastapi.tiangolo.com/how-to/separate-openapi-schemas/
)
# Add all outputs
all_invocations = BaseInvocation.get_invocations()
output_types = set()
output_type_titles = {}
for invoker in all_invocations:
output_type = signature(invoker.invoke).return_annotation
output_types.add(output_type)
output_schemas = models_json_schema(
models=[(o, "serialization") for o in output_types], ref_template="#/components/schemas/{model}"
)
for schema_key, output_schema in output_schemas[1]["$defs"].items():
# TODO: note that we assume the schema_key here is the TYPE.__name__
# This could break in some cases, figure out a better way to do it
output_type_titles[schema_key] = output_schema["title"]
openapi_schema["components"]["schemas"][schema_key] = output_schema
openapi_schema["components"]["schemas"][schema_key]["class"] = "output"
# Some models don't end up in the schemas as standalone definitions
additional_schemas = models_json_schema(
[
(UIConfigBase, "serialization"),
(InputFieldJSONSchemaExtra, "serialization"),
(OutputFieldJSONSchemaExtra, "serialization"),
(ModelIdentifierField, "serialization"),
(ProgressImage, "serialization"),
],
ref_template="#/components/schemas/{model}",
)
for schema_key, schema_json in additional_schemas[1]["$defs"].items():
openapi_schema["components"]["schemas"][schema_key] = schema_json
# Add a reference to the output type to additionalProperties of the invoker schema
for invoker in all_invocations:
invoker_name = invoker.__name__ # type: ignore [attr-defined] # this is a valid attribute
output_type = signature(obj=invoker.invoke).return_annotation
output_type_title = output_type_titles[output_type.__name__]
invoker_schema = openapi_schema["components"]["schemas"][f"{invoker_name}"]
outputs_ref = {"$ref": f"#/components/schemas/{output_type_title}"}
invoker_schema["output"] = outputs_ref
invoker_schema["class"] = "invocation"
# This code no longer seems to be necessary?
# Leave it here just in case
#
# from invokeai.backend.model_manager import get_model_config_formats
# formats = get_model_config_formats()
# for model_config_name, enum_set in formats.items():
# if model_config_name in openapi_schema["components"]["schemas"]:
# # print(f"Config with name {name} already defined")
# continue
# openapi_schema["components"]["schemas"][model_config_name] = {
# "title": model_config_name,
# "description": "An enumeration.",
# "type": "string",
# "enum": [v.value for v in enum_set],
# }
app.openapi_schema = openapi_schema
return app.openapi_schema
app.openapi = custom_openapi # type: ignore [method-assign] # this is a valid assignment
@app.get("/docs", include_in_schema=False)
def overridden_swagger() -> HTMLResponse:
return get_swagger_ui_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=f"{app.title} - Swagger UI",
swagger_favicon_url="static/docs/invoke-favicon-docs.svg",
)
@app.get("/redoc", include_in_schema=False)
def overridden_redoc() -> HTMLResponse:
return get_redoc_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=f"{app.title} - Redoc",
redoc_favicon_url="static/docs/invoke-favicon-docs.svg",
)
web_root_path = Path(list(web_dir.__path__)[0])
try:
app.mount("/", NoCacheStaticFiles(directory=Path(web_root_path, "dist"), html=True), name="ui")
except RuntimeError:
logger.warn(f"No UI found at {web_root_path}/dist, skipping UI mount")
app.mount(
"/static", NoCacheStaticFiles(directory=Path(web_root_path, "static/")), name="static"
) # docs favicon is in here
app.openapi = custom_openapi # type: ignore [method-assign] # this is a valid assignment
def check_cudnn(logger: logging.Logger) -> None:
@@ -244,27 +243,39 @@ def check_cudnn(logger: logging.Logger) -> None:
)
def find_port(port: int) -> int:
"""Find a port not in use starting at given port"""
# Taken from https://waylonwalker.com/python-find-available-port/, thanks Waylon!
# https://github.com/WaylonWalker
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
if s.connect_ex(("localhost", port)) == 0:
return find_port(port=port + 1)
else:
return port
def init_dev_reload(logger: logging.Logger) -> None:
try:
import jurigged
except ImportError as e:
logger.error(
'Can\'t start `--dev_reload` because jurigged is not found; `pip install -e ".[dev]"` to include development dependencies.',
exc_info=e,
)
else:
jurigged.watch(logger=InvokeAILogger.get_logger(name="jurigged").info)
def invoke_api() -> None:
def find_port(port: int) -> int:
"""Find a port not in use starting at given port"""
# Taken from https://waylonwalker.com/python-find-available-port/, thanks Waylon!
# https://github.com/WaylonWalker
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
if s.connect_ex(("localhost", port)) == 0:
return find_port(port=port + 1)
else:
return port
start = time.time()
app_config = get_config()
logger = InvokeAILogger.get_logger(config=app_config)
apply_monkeypatches()
fix_mimetypes()
if app_config.dev_reload:
try:
import jurigged
except ImportError as e:
logger.error(
'Can\'t start `--dev_reload` because jurigged is not found; `pip install -e ".[dev]"` to include development dependencies.',
exc_info=e,
)
else:
jurigged.watch(logger=InvokeAILogger.get_logger(name="jurigged").info)
init_dev_reload(logger)
port = find_port(app_config.port)
if port != app_config.port:
@@ -272,6 +283,11 @@ def invoke_api() -> None:
check_cudnn(logger)
torch_device_name = TorchDevice.get_torch_device_name()
logger.info(f"Using torch device: {torch_device_name}")
app = build_app(app_config, logger)
# Start our own event loop for eventing usage
loop = asyncio.new_event_loop()
config = uvicorn.Config(
@@ -291,6 +307,7 @@ def invoke_api() -> None:
log.handlers.clear()
for ch in logger.handlers:
log.addHandler(ch)
logger.info(f"API started in {time.time() - start:.2f} seconds")
loop.run_until_complete(server.serve())

View File

@@ -35,16 +35,15 @@ from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES, heuristic_resize
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES
from invokeai.backend.image_util.canny import get_canny_edges
from invokeai.backend.image_util.depth_anything import DepthAnythingDetector
from invokeai.backend.image_util.dw_openpose import DWOpenposeDetector
from invokeai.backend.image_util.hed import HEDProcessor
from invokeai.backend.image_util.lineart import LineartProcessor
from invokeai.backend.image_util.lineart_anime import LineartAnimeProcessor
from invokeai.backend.image_util.util import np_to_pil, pil_to_np
from .baseinvocation import BaseInvocation, BaseInvocationOutput, Classification, invocation, invocation_output
from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
class ControlField(BaseModel):
@@ -165,13 +164,13 @@ class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Canny Processor",
tags=["controlnet", "canny"],
category="controlnet",
version="1.3.3",
version="1.3.2",
)
class CannyImageProcessorInvocation(ImageProcessorInvocation):
"""Canny edge detection for ControlNet"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
low_threshold: int = InputField(
default=100, ge=0, le=255, description="The low threshold of the Canny pixel gradient (0-255)"
)
@@ -199,13 +198,13 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
title="HED (softedge) Processor",
tags=["controlnet", "hed", "softedge"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class HedImageProcessorInvocation(ImageProcessorInvocation):
"""Applies HED edge detection to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
# safe not supported in controlnet_aux v0.0.3
# safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
scribble: bool = InputField(default=False, description=FieldDescriptions.scribble_mode)
@@ -228,13 +227,13 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Processor",
tags=["controlnet", "lineart"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class LineartImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
coarse: bool = InputField(default=False, description="Whether to use coarse mode")
def run_processor(self, image: Image.Image) -> Image.Image:
@@ -250,13 +249,13 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Anime Processor",
tags=["controlnet", "lineart", "anime"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art anime processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image) -> Image.Image:
processor = LineartAnimeProcessor()
@@ -273,15 +272,15 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
title="Midas Depth Processor",
tags=["controlnet", "midas"],
category="controlnet",
version="1.2.4",
version="1.2.3",
)
class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Midas depth processing to image"""
a_mult: float = InputField(default=2.0, ge=0, description="Midas parameter `a_mult` (a = a_mult * PI)")
bg_th: float = InputField(default=0.1, ge=0, description="Midas parameter `bg_th`")
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
# depth_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal: bool = InputField(default=False, description="whether to use depth and normal mode")
@@ -304,13 +303,13 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Normal BAE Processor",
tags=["controlnet"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies NormalBae processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
normalbae_processor = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")
@@ -321,13 +320,13 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.2.3"
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.2.2"
)
class MlsdImageProcessorInvocation(ImageProcessorInvocation):
"""Applies MLSD processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
thr_v: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_v`")
thr_d: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_d`")
@@ -344,13 +343,13 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.2.3"
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.2.2"
)
class PidiImageProcessorInvocation(ImageProcessorInvocation):
"""Applies PIDI processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
scribble: bool = InputField(default=False, description=FieldDescriptions.scribble_mode)
@@ -371,13 +370,13 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
title="Content Shuffle Processor",
tags=["controlnet", "contentshuffle"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
"""Applies content shuffle processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
h: int = InputField(default=512, ge=0, description="Content shuffle `h` parameter")
w: int = InputField(default=512, ge=0, description="Content shuffle `w` parameter")
f: int = InputField(default=256, ge=0, description="Content shuffle `f` parameter")
@@ -401,7 +400,7 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
title="Zoe (Depth) Processor",
tags=["controlnet", "zoe", "depth"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Zoe depth processing to image"""
@@ -417,15 +416,15 @@ class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Mediapipe Face Processor",
tags=["controlnet", "mediapipe", "face"],
category="controlnet",
version="1.2.4",
version="1.2.3",
)
class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
"""Applies mediapipe face processing to image"""
max_faces: int = InputField(default=1, ge=1, description="Maximum number of faces to detect")
min_confidence: float = InputField(default=0.5, ge=0, le=1, description="Minimum confidence for face detection")
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
mediapipe_face_processor = MediapipeFaceDetector()
@@ -444,7 +443,7 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
title="Leres (Depth) Processor",
tags=["controlnet", "leres", "depth"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class LeresImageProcessorInvocation(ImageProcessorInvocation):
"""Applies leres processing to image"""
@@ -452,8 +451,8 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
thr_a: float = InputField(default=0, description="Leres parameter `thr_a`")
thr_b: float = InputField(default=0, description="Leres parameter `thr_b`")
boost: bool = InputField(default=False, description="Whether to use boost mode")
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
leres_processor = LeresDetector.from_pretrained("lllyasviel/Annotators")
@@ -473,7 +472,7 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
title="Tile Resample Processor",
tags=["controlnet", "tile"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class TileResamplerProcessorInvocation(ImageProcessorInvocation):
"""Tile resampler processor"""
@@ -513,13 +512,13 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
title="Segment Anything Processor",
tags=["controlnet", "segmentanything"],
category="controlnet",
version="1.2.4",
version="1.2.3",
)
class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
"""Applies segment anything processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
# segment_anything_processor = SamDetector.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
@@ -560,12 +559,12 @@ class SamDetectorReproducibleColors(SamDetector):
title="Color Map Processor",
tags=["controlnet"],
category="controlnet",
version="1.2.3",
version="1.2.2",
)
class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a color map from the provided image"""
color_map_tile_size: int = InputField(default=64, ge=1, description=FieldDescriptions.tile_size)
color_map_tile_size: int = InputField(default=64, ge=0, description=FieldDescriptions.tile_size)
def run_processor(self, image: Image.Image):
np_image = np.array(image, dtype=np.uint8)
@@ -592,7 +591,7 @@ DEPTH_ANYTHING_MODEL_SIZES = Literal["large", "base", "small"]
title="Depth Anything Processor",
tags=["controlnet", "depth", "depth anything"],
category="controlnet",
version="1.1.2",
version="1.1.1",
)
class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a depth map based on the Depth Anything algorithm"""
@@ -600,7 +599,7 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
model_size: DEPTH_ANYTHING_MODEL_SIZES = InputField(
default="small", description="The size of the depth model to use"
)
resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
resolution: int = InputField(default=512, ge=64, multiple_of=64, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image):
depth_anything_detector = DepthAnythingDetector()
@@ -615,7 +614,7 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
title="DW Openpose Image Processor",
tags=["controlnet", "dwpose", "openpose"],
category="controlnet",
version="1.1.1",
version="1.1.0",
)
class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Generates an openpose pose from an image using DWPose"""
@@ -623,7 +622,7 @@ class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
draw_body: bool = InputField(default=True)
draw_face: bool = InputField(default=False)
draw_hands: bool = InputField(default=False)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image):
dw_openpose = DWOpenposeDetector()
@@ -635,27 +634,3 @@ class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
resolution=self.image_resolution,
)
return processed_image
@invocation(
"heuristic_resize",
title="Heuristic Resize",
tags=["image, controlnet"],
category="image",
version="1.0.1",
classification=Classification.Prototype,
)
class HeuristicResizeInvocation(BaseInvocation):
"""Resize an image using a heuristic method. Preserves edge maps."""
image: ImageField = InputField(description="The image to resize")
width: int = InputField(default=512, ge=1, description="The width to resize to (px)")
height: int = InputField(default=512, ge=1, description="The height to resize to (px)")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name, "RGB")
np_img = pil_to_np(image)
np_resized = heuristic_resize(np_img, (self.width, self.height))
resized = np_to_pil(np_resized)
image_dto = context.images.save(image=resized)
return ImageOutput.build(image_dto)

View File

@@ -3,7 +3,7 @@ import inspect
import math
from contextlib import ExitStack
from functools import singledispatchmethod
from typing import Any, Dict, Iterator, List, Literal, Optional, Tuple, Union
from typing import Any, Iterator, List, Literal, Optional, Tuple, Union
import einops
import numpy as np
@@ -11,6 +11,7 @@ import numpy.typing as npt
import torch
import torchvision
import torchvision.transforms as T
from diffusers import AutoencoderKL, AutoencoderTiny
from diffusers.configuration_utils import ConfigMixin
from diffusers.image_processor import VaeImageProcessor
from diffusers.models.adapter import T2IAdapter
@@ -20,12 +21,9 @@ from diffusers.models.attention_processor import (
LoRAXFormersAttnProcessor,
XFormersAttnProcessor,
)
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
from diffusers.schedulers.scheduling_dpmsolver_sde import DPMSolverSDEScheduler
from diffusers.schedulers.scheduling_tcd import TCDScheduler
from diffusers.schedulers.scheduling_utils import SchedulerMixin as Scheduler
from diffusers.schedulers import DPMSolverSDEScheduler
from diffusers.schedulers import SchedulerMixin as Scheduler
from PIL import Image, ImageFilter
from pydantic import field_validator
from torchvision.transforms.functional import resize as tv_resize
@@ -343,7 +341,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
)
steps: int = InputField(default=10, gt=0, description=FieldDescriptions.steps)
cfg_scale: Union[float, List[float]] = InputField(
default=7.5, description=FieldDescriptions.cfg_scale, title="CFG Scale"
default=7.5, ge=1, description=FieldDescriptions.cfg_scale, title="CFG Scale"
)
denoising_start: float = InputField(
default=0.0,
@@ -523,10 +521,9 @@ class DenoiseLatentsInvocation(BaseInvocation):
)
if is_sdxl:
return (
SDXLConditioningInfo(embeds=text_embedding, pooled_embeds=pooled_embedding, add_time_ids=add_time_ids),
regions,
)
return SDXLConditioningInfo(
embeds=text_embedding, pooled_embeds=pooled_embedding, add_time_ids=add_time_ids
), regions
return BasicConditioningInfo(embeds=text_embedding), regions
def get_conditioning_data(
@@ -566,11 +563,6 @@ class DenoiseLatentsInvocation(BaseInvocation):
dtype=unet.dtype,
)
if isinstance(self.cfg_scale, list):
assert (
len(self.cfg_scale) == self.steps
), "cfg_scale (list) must have the same length as the number of steps"
conditioning_data = TextConditioningData(
uncond_text=uncond_text_embedding,
cond_text=cond_text_embedding,
@@ -828,7 +820,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
denoising_start: float,
denoising_end: float,
seed: int,
) -> Tuple[int, List[int], int, Dict[str, Any]]:
) -> Tuple[int, List[int], int]:
assert isinstance(scheduler, ConfigMixin)
if scheduler.config.get("cpu_only", False):
scheduler.set_timesteps(steps, device="cpu")
@@ -856,15 +848,13 @@ class DenoiseLatentsInvocation(BaseInvocation):
timesteps = timesteps[t_start_idx : t_start_idx + t_end_idx]
num_inference_steps = len(timesteps) // scheduler.order
scheduler_step_kwargs: Dict[str, Any] = {}
scheduler_step_kwargs = {}
scheduler_step_signature = inspect.signature(scheduler.step)
if "generator" in scheduler_step_signature.parameters:
# At some point, someone decided that schedulers that accept a generator should use the original seed with
# all bits flipped. I don't know the original rationale for this, but now we must keep it like this for
# reproducibility.
scheduler_step_kwargs.update({"generator": torch.Generator(device=device).manual_seed(seed ^ 0xFFFFFFFF)})
if isinstance(scheduler, TCDScheduler):
scheduler_step_kwargs.update({"eta": 1.0})
scheduler_step_kwargs = {"generator": torch.Generator(device=device).manual_seed(seed ^ 0xFFFFFFFF)}
return num_inference_steps, timesteps, init_timestep, scheduler_step_kwargs

View File

@@ -318,8 +318,10 @@ class DownloadQueueService(DownloadQueueServiceBase):
in_progress_path.rename(job.download_path)
def _validate_filename(self, directory: str, filename: str) -> bool:
pc_name_max = get_pc_name_max(directory)
pc_path_max = get_pc_path_max(directory)
pc_name_max = os.pathconf(directory, "PC_NAME_MAX") if hasattr(os, "pathconf") else 260 # hardcoded for windows
pc_path_max = (
os.pathconf(directory, "PC_PATH_MAX") if hasattr(os, "pathconf") else 32767
) # hardcoded for windows with long names enabled
if "/" in filename:
return False
if filename.startswith(".."):
@@ -417,26 +419,6 @@ class DownloadQueueService(DownloadQueueServiceBase):
self._logger.warning(excp)
def get_pc_name_max(directory: str) -> int:
if hasattr(os, "pathconf"):
try:
return os.pathconf(directory, "PC_NAME_MAX")
except OSError:
# macOS w/ external drives raise OSError
pass
return 260 # hardcoded for windows
def get_pc_path_max(directory: str) -> int:
if hasattr(os, "pathconf"):
try:
return os.pathconf(directory, "PC_PATH_MAX")
except OSError:
# some platforms may not have this value
pass
return 32767 # hardcoded for windows with long names enabled
# Example on_progress event handler to display a TQDM status bar
# Activate with:
# download_service.download(DownloadJob('http://foo.bar/baz', '/tmp', on_progress=TqdmProgress().update))

View File

@@ -144,8 +144,10 @@ def resize_image_to_resolution(input_image: np.ndarray, resolution: int) -> np.n
h = float(input_image.shape[0])
w = float(input_image.shape[1])
scaling_factor = float(resolution) / min(h, w)
h = int(h * scaling_factor)
w = int(w * scaling_factor)
h *= scaling_factor
w *= scaling_factor
h = int(np.round(h / 64.0)) * 64
w = int(np.round(w / 64.0)) * 64
if scaling_factor > 1:
return cv2.resize(input_image, (w, h), interpolation=cv2.INTER_LANCZOS4)
else:

View File

@@ -51,7 +51,6 @@ LEGACY_CONFIGS: Dict[BaseModelType, Dict[ModelVariantType, Union[str, Dict[Sched
},
BaseModelType.StableDiffusionXL: {
ModelVariantType.Normal: "sd_xl_base.yaml",
ModelVariantType.Inpaint: "sd_xl_inpaint.yaml",
},
BaseModelType.StableDiffusionXLRefiner: {
ModelVariantType.Normal: "sd_xl_refiner.yaml",

View File

@@ -13,7 +13,6 @@ from diffusers import (
LCMScheduler,
LMSDiscreteScheduler,
PNDMScheduler,
TCDScheduler,
UniPCMultistepScheduler,
)
@@ -41,5 +40,4 @@ SCHEDULER_MAP = {
"dpmpp_sde_k": (DPMSolverSDEScheduler, {"use_karras_sigmas": True, "noise_sampler_seed": 0}),
"unipc": (UniPCMultistepScheduler, {"cpu_only": True}),
"lcm": (LCMScheduler, {}),
"tcd": (TCDScheduler, {}),
}

View File

@@ -1,98 +0,0 @@
model:
target: sgm.models.diffusion.DiffusionEngine
params:
scale_factor: 0.13025
disable_first_stage_autocast: True
denoiser_config:
target: sgm.modules.diffusionmodules.denoiser.DiscreteDenoiser
params:
num_idx: 1000
weighting_config:
target: sgm.modules.diffusionmodules.denoiser_weighting.EpsWeighting
scaling_config:
target: sgm.modules.diffusionmodules.denoiser_scaling.EpsScaling
discretization_config:
target: sgm.modules.diffusionmodules.discretizer.LegacyDDPMDiscretization
network_config:
target: sgm.modules.diffusionmodules.openaimodel.UNetModel
params:
adm_in_channels: 2816
num_classes: sequential
use_checkpoint: True
in_channels: 9
out_channels: 4
model_channels: 320
attention_resolutions: [4, 2]
num_res_blocks: 2
channel_mult: [1, 2, 4]
num_head_channels: 64
use_spatial_transformer: True
use_linear_in_transformer: True
transformer_depth: [1, 2, 10] # note: the first is unused (due to attn_res starting at 2) 32, 16, 8 --> 64, 32, 16
context_dim: 2048
spatial_transformer_attn_type: softmax-xformers
legacy: False
conditioner_config:
target: sgm.modules.GeneralConditioner
params:
emb_models:
# crossattn cond
- is_trainable: False
input_key: txt
target: sgm.modules.encoders.modules.FrozenCLIPEmbedder
params:
layer: hidden
layer_idx: 11
# crossattn and vector cond
- is_trainable: False
input_key: txt
target: sgm.modules.encoders.modules.FrozenOpenCLIPEmbedder2
params:
arch: ViT-bigG-14
version: laion2b_s39b_b160k
freeze: True
layer: penultimate
always_return_pooled: True
legacy: False
# vector cond
- is_trainable: False
input_key: original_size_as_tuple
target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
params:
outdim: 256 # multiplied by two
# vector cond
- is_trainable: False
input_key: crop_coords_top_left
target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
params:
outdim: 256 # multiplied by two
# vector cond
- is_trainable: False
input_key: target_size_as_tuple
target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
params:
outdim: 256 # multiplied by two
first_stage_config:
target: sgm.models.autoencoder.AutoencoderKLInferenceWrapper
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
attn_type: vanilla-xformers
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult: [1, 2, 4, 4]
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity

View File

@@ -25,7 +25,7 @@
"typegen": "node scripts/typegen.js",
"preview": "vite preview",
"lint:knip": "knip",
"lint:dpdm": "dpdm --no-warning --no-tree --transform --exit-code circular:1 src/main.tsx",
"lint:dpdm": "dpdm --no-warning --no-tree --transform --exit-code circular:0 src/main.tsx",
"lint:eslint": "eslint --max-warnings=0 .",
"lint:prettier": "prettier --check .",
"lint:tsc": "tsc --noEmit",
@@ -52,48 +52,47 @@
},
"dependencies": {
"@chakra-ui/react-use-size": "^2.1.0",
"@dagrejs/dagre": "^1.1.2",
"@dagrejs/graphlib": "^2.2.2",
"@dagrejs/dagre": "^1.1.1",
"@dagrejs/graphlib": "^2.2.1",
"@dnd-kit/core": "^6.1.0",
"@dnd-kit/sortable": "^8.0.0",
"@dnd-kit/utilities": "^3.2.2",
"@fontsource-variable/inter": "^5.0.18",
"@invoke-ai/ui-library": "^0.0.25",
"@fontsource-variable/inter": "^5.0.17",
"@invoke-ai/ui-library": "^0.0.21",
"@nanostores/react": "^0.7.2",
"@reduxjs/toolkit": "2.2.3",
"@reduxjs/toolkit": "2.2.2",
"@roarr/browser-log-writer": "^1.3.0",
"chakra-react-select": "^4.7.6",
"compare-versions": "^6.1.0",
"dateformat": "^5.0.3",
"fracturedjsonjs": "^4.0.1",
"framer-motion": "^11.1.8",
"i18next": "^23.11.3",
"i18next-http-backend": "^2.5.1",
"framer-motion": "^11.0.22",
"i18next": "^23.10.1",
"i18next-http-backend": "^2.5.0",
"idb-keyval": "^6.2.1",
"jsondiffpatch": "^0.6.0",
"konva": "^9.3.6",
"lodash-es": "^4.17.21",
"nanostores": "^0.10.3",
"nanostores": "^0.10.0",
"new-github-issue-url": "^1.0.0",
"overlayscrollbars": "^2.7.3",
"overlayscrollbars-react": "^0.5.6",
"overlayscrollbars": "^2.6.1",
"overlayscrollbars-react": "^0.5.5",
"query-string": "^9.0.0",
"react": "^18.3.1",
"react": "^18.2.0",
"react-colorful": "^5.6.1",
"react-dom": "^18.3.1",
"react-dom": "^18.2.0",
"react-dropzone": "^14.2.3",
"react-error-boundary": "^4.0.13",
"react-hook-form": "^7.51.4",
"react-hook-form": "^7.51.2",
"react-hotkeys-hook": "4.5.0",
"react-i18next": "^14.1.1",
"react-icons": "^5.2.0",
"react-i18next": "^14.1.0",
"react-icons": "^5.0.1",
"react-konva": "^18.2.10",
"react-redux": "9.1.2",
"react-resizable-panels": "^2.0.19",
"react-redux": "9.1.0",
"react-resizable-panels": "^2.0.16",
"react-select": "5.8.0",
"react-use": "^17.5.0",
"react-virtuoso": "^4.7.10",
"reactflow": "^11.11.3",
"react-virtuoso": "^4.7.5",
"reactflow": "^11.10.4",
"redux-dynamic-middlewares": "^2.2.0",
"redux-remember": "^5.1.0",
"redux-undo": "^1.1.0",
@@ -102,11 +101,10 @@
"serialize-error": "^11.0.3",
"socket.io-client": "^4.7.5",
"use-debounce": "^10.0.0",
"use-device-pixel-ratio": "^1.1.2",
"use-image": "^1.1.1",
"uuid": "^9.0.1",
"zod": "^3.23.6",
"zod-validation-error": "^3.2.0"
"zod": "^3.22.4",
"zod-validation-error": "^3.0.3"
},
"peerDependencies": {
"@chakra-ui/react": "^2.8.2",
@@ -117,19 +115,19 @@
"devDependencies": {
"@invoke-ai/eslint-config-react": "^0.0.14",
"@invoke-ai/prettier-config-react": "^0.0.7",
"@storybook/addon-essentials": "^8.0.10",
"@storybook/addon-interactions": "^8.0.10",
"@storybook/addon-links": "^8.0.10",
"@storybook/addon-storysource": "^8.0.10",
"@storybook/manager-api": "^8.0.10",
"@storybook/react": "^8.0.10",
"@storybook/react-vite": "^8.0.10",
"@storybook/theming": "^8.0.10",
"@storybook/addon-essentials": "^8.0.4",
"@storybook/addon-interactions": "^8.0.4",
"@storybook/addon-links": "^8.0.4",
"@storybook/addon-storysource": "^8.0.4",
"@storybook/manager-api": "^8.0.4",
"@storybook/react": "^8.0.4",
"@storybook/react-vite": "^8.0.4",
"@storybook/theming": "^8.0.4",
"@types/dateformat": "^5.0.2",
"@types/lodash-es": "^4.17.12",
"@types/node": "^20.12.10",
"@types/react": "^18.3.1",
"@types/react-dom": "^18.3.0",
"@types/node": "^20.11.30",
"@types/react": "^18.2.73",
"@types/react-dom": "^18.2.22",
"@types/uuid": "^9.0.8",
"@vitejs/plugin-react-swc": "^3.6.0",
"concurrently": "^8.2.2",
@@ -137,20 +135,20 @@
"eslint": "^8.57.0",
"eslint-plugin-i18next": "^6.0.3",
"eslint-plugin-path": "^1.3.0",
"knip": "^5.12.3",
"knip": "^5.6.1",
"openapi-types": "^12.1.3",
"openapi-typescript": "^6.7.5",
"prettier": "^3.2.5",
"rollup-plugin-visualizer": "^5.12.0",
"storybook": "^8.0.10",
"storybook": "^8.0.4",
"ts-toolbelt": "^9.6.0",
"tsafe": "^1.6.6",
"typescript": "^5.4.5",
"vite": "^5.2.11",
"vite-plugin-css-injected-by-js": "^3.5.1",
"vite-plugin-dts": "^3.9.1",
"typescript": "^5.4.3",
"vite": "^5.2.6",
"vite-plugin-css-injected-by-js": "^3.5.0",
"vite-plugin-dts": "^3.8.0",
"vite-plugin-eslint": "^1.8.1",
"vite-tsconfig-paths": "^4.3.2",
"vitest": "^1.6.0"
"vitest": "^1.4.0"
}
}

File diff suppressed because it is too large

View File

@@ -88,13 +88,11 @@
"negativePrompt": "Negative Prompt",
"discordLabel": "Discord",
"dontAskMeAgain": "Don't ask me again",
"editor": "Editor",
"error": "Error",
"file": "File",
"folder": "Folder",
"format": "format",
"githubLabel": "Github",
"goTo": "Go to",
"hotkeysLabel": "Hotkeys",
"imageFailedToLoad": "Unable to Load Image",
"img2img": "Image To Image",
@@ -142,11 +140,7 @@
"blue": "Blue",
"alpha": "Alpha",
"selected": "Selected",
"tab": "Tab",
"viewing": "Viewing",
"viewingDesc": "Review images in a large gallery view",
"editing": "Editing",
"editingDesc": "Edit on the Control Layers canvas"
"viewer": "Viewer"
},
"controlnet": {
"controlAdapter_one": "Control Adapter",
@@ -162,7 +156,6 @@
"balanced": "Balanced",
"base": "Base",
"beginEndStepPercent": "Begin / End Step Percentage",
"beginEndStepPercentShort": "Begin/End %",
"bgth": "bg_th",
"canny": "Canny",
"cannyDescription": "Canny edge detection",
@@ -231,11 +224,10 @@
"composition": "Composition Only",
"safe": "Safe",
"saveControlImage": "Save Control Image",
"scribble": "Scribble",
"scribble": "scribble",
"selectModel": "Select a model",
"selectCLIPVisionModel": "Select a CLIP Vision model",
"setControlImageDimensions": "Copy size to W/H (optimize for model)",
"setControlImageDimensionsForce": "Copy size to W/H (ignore model)",
"setControlImageDimensions": "Set Control Image Dimensions To W/H",
"showAdvanced": "Show Advanced",
"small": "Small",
"toggleControlNet": "Toggle this ControlNet",
@@ -590,10 +582,6 @@
"upscale": {
"desc": "Upscale the current image",
"title": "Upscale"
},
"toggleViewer": {
"desc": "Switches between the Image Viewer and workspace for the current tab.",
"title": "Toggle Image Viewer"
}
},
"metadata": {
@@ -927,7 +915,6 @@
"missingInputForField": "{{nodeLabel}} -> {{fieldLabel}} missing input",
"missingNodeTemplate": "Missing node template",
"noControlImageForControlAdapter": "Control Adapter #{{number}} has no control image",
"imageNotProcessedForControlAdapter": "Control Adapter #{{number}}'s image is not processed",
"noInitialImageSelected": "No initial image selected",
"noModelForControlAdapter": "Control Adapter #{{number}} has no model selected.",
"incompatibleBaseModelForControlAdapter": "Control Adapter #{{number}} model is incompatible with main model.",
@@ -938,12 +925,10 @@
},
"maskBlur": "Mask Blur",
"negativePromptPlaceholder": "Negative Prompt",
"globalNegativePromptPlaceholder": "Global Negative Prompt",
"noiseThreshold": "Noise Threshold",
"patchmatchDownScaleSize": "Downscale",
"perlinNoise": "Perlin Noise",
"positivePromptPlaceholder": "Positive Prompt",
"globalPositivePromptPlaceholder": "Global Positive Prompt",
"iterations": "Iterations",
"iterationsWithCount_one": "{{count}} Iteration",
"iterationsWithCount_other": "{{count}} Iterations",
@@ -1526,7 +1511,7 @@
"app": {
"storeNotInitialized": "Store is not initialized"
},
"controlLayers": {
"regionalPrompts": {
"deleteAll": "Delete All",
"addLayer": "Add Layer",
"moveToFront": "Move to Front",
@@ -1534,7 +1519,8 @@
"moveForward": "Move Forward",
"moveBackward": "Move Backward",
"brushSize": "Brush Size",
"controlLayers": "Control Layers",
"regionalControl": "Regional Control (ALPHA)",
"enableRegionalPrompts": "Enable $t(regionalPrompts.regionalPrompts)",
"globalMaskOpacity": "Global Mask Opacity",
"autoNegative": "Auto Negative",
"toggleVisibility": "Toggle Layer Visibility",
@@ -1545,35 +1531,6 @@
"maskPreviewColor": "Mask Preview Color",
"addPositivePrompt": "Add $t(common.positivePrompt)",
"addNegativePrompt": "Add $t(common.negativePrompt)",
"addIPAdapter": "Add $t(common.ipAdapter)",
"regionalGuidance": "Regional Guidance",
"regionalGuidanceLayer": "$t(controlLayers.regionalGuidance) $t(unifiedCanvas.layer)",
"opacity": "Opacity",
"globalControlAdapter": "Global $t(controlnet.controlAdapter_one)",
"globalControlAdapterLayer": "Global $t(controlnet.controlAdapter_one) $t(unifiedCanvas.layer)",
"globalIPAdapter": "Global $t(common.ipAdapter)",
"globalIPAdapterLayer": "Global $t(common.ipAdapter) $t(unifiedCanvas.layer)",
"globalInitialImage": "Global Initial Image",
"globalInitialImageLayer": "$t(controlLayers.globalInitialImage) $t(unifiedCanvas.layer)",
"opacityFilter": "Opacity Filter",
"clearProcessor": "Clear Processor",
"resetProcessor": "Reset Processor to Defaults",
"noLayersAdded": "No Layers Added",
"layers_one": "Layer",
"layers_other": "Layers"
},
"ui": {
"tabs": {
"generation": "Generation",
"generationTab": "$t(ui.tabs.generation) $t(common.tab)",
"canvas": "Canvas",
"canvasTab": "$t(ui.tabs.canvas) $t(common.tab)",
"workflows": "Workflows",
"workflowsTab": "$t(ui.tabs.workflows) $t(common.tab)",
"models": "Models",
"modelsTab": "$t(ui.tabs.models) $t(common.tab)",
"queue": "Queue",
"queueTab": "$t(ui.tabs.queue) $t(common.tab)"
}
"addIPAdapter": "Add $t(common.ipAdapter)"
}
}
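
The `$t(...)` references above use i18next's nesting feature: a translation string can embed another key, which is resolved at lookup time. A minimal sketch of how such a key resolves (the value of `common.ipAdapter` is assumed here, as it is not shown in this hunk):

```ts
import { t } from 'i18next';

// i18next resolves nested $t(...) references at translation time, so
// "Add $t(common.ipAdapter)" becomes e.g. "Add IP Adapter", assuming
// common.ipAdapter is defined as "IP Adapter" elsewhere in the resources.
const label = t('regionalPrompts.addIPAdapter');
```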

View File

@@ -20,14 +20,15 @@ export type LoggerNamespace =
| 'models'
| 'config'
| 'canvas'
| 'generation'
| 'txt2img'
| 'img2img'
| 'nodes'
| 'system'
| 'socketio'
| 'session'
| 'queue'
| 'dnd'
| 'controlLayers';
| 'regionalPrompts';
export const logger = (namespace: LoggerNamespace) => $logger.get().child({ namespace });
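
For context, a minimal usage sketch of the namespaced logger factory above, based on the calls that appear elsewhere in this diff (Roarr-style signatures: an optional context object first, then the message):

```ts
import { logger } from 'app/logging/logger';

// Bind a child logger to one of the LoggerNamespace values; 'canvas' is
// just an example namespace from the union above.
const log = logger('canvas');

// Context object first, then the message, as in the listeners in this diff.
log.debug({ imageName: 'example.png' }, 'Canvas image updated');
```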

View File

@@ -16,7 +16,6 @@ import { addCanvasMaskSavedToGalleryListener } from 'app/store/middleware/listen
import { addCanvasMaskToControlNetListener } from 'app/store/middleware/listenerMiddleware/listeners/canvasMaskToControlNet';
import { addCanvasMergedListener } from 'app/store/middleware/listenerMiddleware/listeners/canvasMerged';
import { addCanvasSavedToGalleryListener } from 'app/store/middleware/listenerMiddleware/listeners/canvasSavedToGallery';
import { addControlAdapterPreprocessor } from 'app/store/middleware/listenerMiddleware/listeners/controlAdapterPreprocessor';
import { addControlNetAutoProcessListener } from 'app/store/middleware/listenerMiddleware/listeners/controlNetAutoProcess';
import { addControlNetImageProcessedListener } from 'app/store/middleware/listenerMiddleware/listeners/controlNetImageProcessed';
import { addEnqueueRequestedCanvasListener } from 'app/store/middleware/listenerMiddleware/listeners/enqueueRequestedCanvas';
@@ -32,6 +31,7 @@ import { addImagesStarredListener } from 'app/store/middleware/listenerMiddlewar
import { addImagesUnstarredListener } from 'app/store/middleware/listenerMiddleware/listeners/imagesUnstarred';
import { addImageToDeleteSelectedListener } from 'app/store/middleware/listenerMiddleware/listeners/imageToDeleteSelected';
import { addImageUploadedFulfilledListener } from 'app/store/middleware/listenerMiddleware/listeners/imageUploaded';
import { addInitialImageSelectedListener } from 'app/store/middleware/listenerMiddleware/listeners/initialImageSelected';
import { addModelSelectedListener } from 'app/store/middleware/listenerMiddleware/listeners/modelSelected';
import { addModelsLoadedListener } from 'app/store/middleware/listenerMiddleware/listeners/modelsLoaded';
import { addDynamicPromptsListener } from 'app/store/middleware/listenerMiddleware/listeners/promptChanged';
@@ -72,6 +72,9 @@ const startAppListening = listenerMiddleware.startListening as AppStartListening
// Image uploaded
addImageUploadedFulfilledListener(startAppListening);
// Image selected
addInitialImageSelectedListener(startAppListening);
// Image deleted
addRequestedSingleImageDeletionListener(startAppListening);
addDeleteBoardAndImagesFulfilledListener(startAppListening);
@@ -154,4 +157,3 @@ addUpscaleRequestedListener(startAppListening);
addDynamicPromptsListener(startAppListening);
addSetDefaultSettingsListener(startAppListening);
addControlAdapterPreprocessor(startAppListening);
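
Each `add*Listener` registered above follows the same shape: a function that receives `startAppListening` and registers exactly one listener. A minimal sketch of that pattern (the action and slice path here are hypothetical):

```ts
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
// Hypothetical action used only for illustration.
import { someActionFired } from 'features/example/store/exampleSlice';

export const addExampleListener = (startAppListening: AppStartListening) => {
  startAppListening({
    actionCreator: someActionFired,
    effect: (action, { dispatch, getState }) => {
      // React to the action here; dispatch follow-up actions as needed.
    },
  });
};
```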

View File

@@ -1,9 +1,9 @@
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { resetCanvas } from 'features/canvas/store/canvasSlice';
import { controlAdaptersReset } from 'features/controlAdapters/store/controlAdaptersSlice';
import { allLayersDeleted } from 'features/controlLayers/store/controlLayersSlice';
import { getImageUsage } from 'features/deleteImageModal/store/selectors';
import { nodeEditorReset } from 'features/nodes/store/nodesSlice';
import { clearInitialImage } from 'features/parameters/store/generationSlice';
import { imagesApi } from 'services/api/endpoints/images';
export const addDeleteBoardAndImagesFulfilledListener = (startAppListening: AppStartListening) => {
@@ -14,14 +14,19 @@ export const addDeleteBoardAndImagesFulfilledListener = (startAppListening: AppS
// Remove all deleted images from the UI
let wasInitialImageReset = false;
let wasCanvasReset = false;
let wasNodeEditorReset = false;
let wereControlAdaptersReset = false;
let wereControlLayersReset = false;
const { canvas, nodes, controlAdapters, controlLayers } = getState();
const { generation, canvas, nodes, controlAdapters } = getState();
deleted_images.forEach((image_name) => {
const imageUsage = getImageUsage(canvas, nodes, controlAdapters, controlLayers.present, image_name);
const imageUsage = getImageUsage(generation, canvas, nodes, controlAdapters, image_name);
if (imageUsage.isInitialImage && !wasInitialImageReset) {
dispatch(clearInitialImage());
wasInitialImageReset = true;
}
if (imageUsage.isCanvasImage && !wasCanvasReset) {
dispatch(resetCanvas());
@@ -37,11 +42,6 @@ export const addDeleteBoardAndImagesFulfilledListener = (startAppListening: AppS
dispatch(controlAdaptersReset());
wereControlAdaptersReset = true;
}
if (imageUsage.isControlLayerImage && !wereControlLayersReset) {
dispatch(allLayersDeleted());
wereControlLayersReset = true;
}
});
},
});

View File

@@ -1,156 +0,0 @@
import { isAnyOf } from '@reduxjs/toolkit';
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { parseify } from 'common/util/serialize';
import {
caLayerImageChanged,
caLayerIsProcessingImageChanged,
caLayerModelChanged,
caLayerProcessedImageChanged,
caLayerProcessorConfigChanged,
caLayerRecalled,
isControlAdapterLayer,
} from 'features/controlLayers/store/controlLayersSlice';
import { CA_PROCESSOR_DATA } from 'features/controlLayers/util/controlAdapters';
import { isImageOutput } from 'features/nodes/types/common';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { isEqual } from 'lodash-es';
import { imagesApi } from 'services/api/endpoints/images';
import { queueApi } from 'services/api/endpoints/queue';
import type { BatchConfig, ImageDTO } from 'services/api/types';
import { socketInvocationComplete } from 'services/events/actions';
const matcher = isAnyOf(caLayerImageChanged, caLayerProcessorConfigChanged, caLayerModelChanged, caLayerRecalled);
const DEBOUNCE_MS = 300;
const log = logger('session');
export const addControlAdapterPreprocessor = (startAppListening: AppStartListening) => {
startAppListening({
matcher,
effect: async (action, { dispatch, getState, getOriginalState, cancelActiveListeners, delay, take }) => {
const layerId = caLayerRecalled.match(action) ? action.payload.id : action.payload.layerId;
const precheckLayerOriginal = getOriginalState()
.controlLayers.present.layers.filter(isControlAdapterLayer)
.find((l) => l.id === layerId);
const precheckLayer = getState()
.controlLayers.present.layers.filter(isControlAdapterLayer)
.find((l) => l.id === layerId);
// Conditions to bail
const layerDoesNotExist = !precheckLayer;
const layerHasNoImage = !precheckLayer?.controlAdapter.image;
const layerHasNoProcessorConfig = !precheckLayer?.controlAdapter.processorConfig;
const layerIsAlreadyProcessingImage = precheckLayer?.controlAdapter.isProcessingImage;
const areImageAndProcessorUnchanged =
isEqual(precheckLayer?.controlAdapter.image, precheckLayerOriginal?.controlAdapter.image) &&
isEqual(precheckLayer?.controlAdapter.processorConfig, precheckLayerOriginal?.controlAdapter.processorConfig);
if (
layerDoesNotExist ||
layerHasNoImage ||
layerHasNoProcessorConfig ||
areImageAndProcessorUnchanged ||
layerIsAlreadyProcessingImage
) {
return;
}
// Cancel any in-progress instances of this listener
cancelActiveListeners();
log.trace('Control Layer CA auto-process triggered');
// Delay before starting actual work
await delay(DEBOUNCE_MS);
dispatch(caLayerIsProcessingImageChanged({ layerId, isProcessingImage: true }));
// Double-check that we are still eligible for processing
const state = getState();
const layer = state.controlLayers.present.layers.filter(isControlAdapterLayer).find((l) => l.id === layerId);
const image = layer?.controlAdapter.image;
const config = layer?.controlAdapter.processorConfig;
// If we have no image or there is no processor config, bail
if (!layer || !image || !config) {
return;
}
// @ts-expect-error: TS isn't able to narrow the typing of buildNode and `config` will error...
const processorNode = CA_PROCESSOR_DATA[config.type].buildNode(image, config);
const enqueueBatchArg: BatchConfig = {
prepend: true,
batch: {
graph: {
nodes: {
[processorNode.id]: { ...processorNode, is_intermediate: true },
},
edges: [],
},
runs: 1,
},
};
try {
const req = dispatch(
queueApi.endpoints.enqueueBatch.initiate(enqueueBatchArg, {
fixedCacheKey: 'enqueueBatch',
})
);
const enqueueResult = await req.unwrap();
req.reset();
log.debug({ enqueueResult: parseify(enqueueResult) }, t('queue.graphQueued'));
const [invocationCompleteAction] = await take(
(action): action is ReturnType<typeof socketInvocationComplete> =>
socketInvocationComplete.match(action) &&
action.payload.data.queue_batch_id === enqueueResult.batch.batch_id &&
action.payload.data.source_node_id === processorNode.id
);
// We still have to check the output type
if (isImageOutput(invocationCompleteAction.payload.data.result)) {
const { image_name } = invocationCompleteAction.payload.data.result.image;
// Wait for the ImageDTO to be received
const [{ payload }] = await take(
(action) =>
imagesApi.endpoints.getImageDTO.matchFulfilled(action) && action.payload.image_name === image_name
);
const imageDTO = payload as ImageDTO;
log.debug({ layerId, imageDTO }, 'ControlNet image processed');
// Update the processed image in the store
dispatch(
caLayerProcessedImageChanged({
layerId,
imageDTO,
})
);
dispatch(caLayerIsProcessingImageChanged({ layerId, isProcessingImage: false }));
}
} catch (error) {
log.error({ enqueueBatchArg: parseify(enqueueBatchArg) }, t('queue.graphFailedToQueue'));
dispatch(caLayerIsProcessingImageChanged({ layerId, isProcessingImage: false }));
if (error instanceof Object) {
if ('data' in error && 'status' in error) {
if (error.status === 403) {
dispatch(caLayerImageChanged({ layerId, imageDTO: null }));
return;
}
}
}
dispatch(
addToast({
title: t('queue.graphFailedToQueue'),
status: 'error',
})
);
}
},
});
};
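
The deleted listener above implements a debounce with RTK listener middleware primitives: cancel any in-flight instance of the same listener, then delay before doing real work. The core of that pattern, reduced to a sketch:

```ts
startAppListening({
  matcher,
  effect: async (action, { cancelActiveListeners, delay, dispatch }) => {
    // Cancel in-flight runs of this listener so only the latest action wins.
    cancelActiveListeners();
    // Debounce: if another matching action arrives, this delayed run is
    // cancelled before it does any work.
    await delay(DEBOUNCE_MS);
    // ...enqueue the processor graph and await its completion with take()...
  },
});
```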

View File

@@ -30,7 +30,7 @@ import type { ImageDTO } from 'services/api/types';
export const addEnqueueRequestedCanvasListener = (startAppListening: AppStartListening) => {
startAppListening({
predicate: (action): action is ReturnType<typeof enqueueRequested> =>
enqueueRequested.match(action) && action.payload.tabName === 'canvas',
enqueueRequested.match(action) && action.payload.tabName === 'unifiedCanvas',
effect: async (action, { getState, dispatch }) => {
const log = logger('queue');
const { prepend } = action.payload;

View File

@@ -1,27 +1,35 @@
import { enqueueRequested } from 'app/store/actions';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { isImageViewerOpenChanged } from 'features/gallery/store/gallerySlice';
import { buildGenerationTabGraph } from 'features/nodes/util/graph/buildGenerationTabGraph';
import { buildGenerationTabSDXLGraph } from 'features/nodes/util/graph/buildGenerationTabSDXLGraph';
import { prepareLinearUIBatch } from 'features/nodes/util/graph/buildLinearBatchConfig';
import { buildLinearImageToImageGraph } from 'features/nodes/util/graph/buildLinearImageToImageGraph';
import { buildLinearSDXLImageToImageGraph } from 'features/nodes/util/graph/buildLinearSDXLImageToImageGraph';
import { buildLinearSDXLTextToImageGraph } from 'features/nodes/util/graph/buildLinearSDXLTextToImageGraph';
import { buildLinearTextToImageGraph } from 'features/nodes/util/graph/buildLinearTextToImageGraph';
import { queueApi } from 'services/api/endpoints/queue';
export const addEnqueueRequestedLinear = (startAppListening: AppStartListening) => {
startAppListening({
predicate: (action): action is ReturnType<typeof enqueueRequested> =>
enqueueRequested.match(action) && action.payload.tabName === 'generation',
enqueueRequested.match(action) && (action.payload.tabName === 'txt2img' || action.payload.tabName === 'img2img'),
effect: async (action, { getState, dispatch }) => {
const state = getState();
const { shouldShowProgressInViewer } = state.ui;
const model = state.generation.model;
const { prepend } = action.payload;
let graph;
if (model && model.base === 'sdxl') {
graph = await buildGenerationTabSDXLGraph(state);
if (action.payload.tabName === 'txt2img') {
graph = await buildLinearSDXLTextToImageGraph(state);
} else {
graph = await buildLinearSDXLImageToImageGraph(state);
}
} else {
graph = await buildGenerationTabGraph(state);
if (action.payload.tabName === 'txt2img') {
graph = await buildLinearTextToImageGraph(state);
} else {
graph = await buildLinearImageToImageGraph(state);
}
}
const batchConfig = prepareLinearUIBatch(state, graph, prepend);
@@ -31,14 +39,7 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
fixedCacheKey: 'enqueueBatch',
})
);
try {
req.unwrap();
if (shouldShowProgressInViewer) {
dispatch(isImageViewerOpenChanged(true));
}
} finally {
req.reset();
}
req.reset();
},
});
};
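
The `predicate` above doubles as a TypeScript type guard: returning `action is ReturnType<typeof enqueueRequested>` narrows the action type inside `effect`. Reduced to a sketch:

```ts
startAppListening({
  // Filters the action stream and narrows the type for `effect` in one step.
  predicate: (action): action is ReturnType<typeof enqueueRequested> =>
    enqueueRequested.match(action) && action.payload.tabName === 'txt2img',
  effect: async (action, { getState, dispatch }) => {
    // action.payload is fully typed here thanks to the type predicate.
  },
});
```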

View File

@@ -8,7 +8,7 @@ import type { BatchConfig } from 'services/api/types';
export const addEnqueueRequestedNodes = (startAppListening: AppStartListening) => {
startAppListening({
predicate: (action): action is ReturnType<typeof enqueueRequested> =>
enqueueRequested.match(action) && action.payload.tabName === 'workflows',
enqueueRequested.match(action) && action.payload.tabName === 'nodes',
effect: async (action, { getState, dispatch }) => {
const state = getState();
const { nodes, edges } = state.nodes;

View File

@@ -1,6 +1,5 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import type { AppDispatch, RootState } from 'app/store/store';
import { resetCanvas } from 'features/canvas/store/canvasSlice';
import {
controlAdapterImageChanged,
@@ -8,13 +7,6 @@ import {
selectControlAdapterAll,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { isControlNetOrT2IAdapter } from 'features/controlAdapters/store/types';
import {
isControlAdapterLayer,
isInitialImageLayer,
isIPAdapterLayer,
isRegionalGuidanceLayer,
layerDeleted,
} from 'features/controlLayers/store/controlLayersSlice';
import { imageDeletionConfirmed } from 'features/deleteImageModal/store/actions';
import { isModalOpenChanged } from 'features/deleteImageModal/store/slice';
import { selectListImagesQueryArgs } from 'features/gallery/store/gallerySelectors';
@@ -22,82 +14,12 @@ import { imageSelected } from 'features/gallery/store/gallerySlice';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { isImageFieldInputInstance } from 'features/nodes/types/field';
import { isInvocationNode } from 'features/nodes/types/invocation';
import { clearInitialImage } from 'features/parameters/store/generationSlice';
import { clamp, forEach } from 'lodash-es';
import { api } from 'services/api';
import { imagesApi } from 'services/api/endpoints/images';
import type { ImageDTO } from 'services/api/types';
import { imagesSelectors } from 'services/api/util';
const deleteNodesImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
state.nodes.nodes.forEach((node) => {
if (!isInvocationNode(node)) {
return;
}
forEach(node.data.inputs, (input) => {
if (isImageFieldInputInstance(input) && input.value?.image_name === imageDTO.image_name) {
dispatch(
fieldImageValueChanged({
nodeId: node.data.id,
fieldName: input.name,
value: undefined,
})
);
}
});
});
};
const deleteControlAdapterImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
forEach(selectControlAdapterAll(state.controlAdapters), (ca) => {
if (
ca.controlImage === imageDTO.image_name ||
(isControlNetOrT2IAdapter(ca) && ca.processedControlImage === imageDTO.image_name)
) {
dispatch(
controlAdapterImageChanged({
id: ca.id,
controlImage: null,
})
);
dispatch(
controlAdapterProcessedImageChanged({
id: ca.id,
processedControlImage: null,
})
);
}
});
};
const deleteControlLayerImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
state.controlLayers.present.layers.forEach((l) => {
if (isRegionalGuidanceLayer(l)) {
if (l.ipAdapters.some((ipa) => ipa.image?.name === imageDTO.image_name)) {
dispatch(layerDeleted(l.id));
}
}
if (isControlAdapterLayer(l)) {
if (
l.controlAdapter.image?.name === imageDTO.image_name ||
l.controlAdapter.processedImage?.name === imageDTO.image_name
) {
dispatch(layerDeleted(l.id));
}
}
if (isIPAdapterLayer(l)) {
if (l.ipAdapter.image?.name === imageDTO.image_name) {
dispatch(layerDeleted(l.id));
}
}
if (isInitialImageLayer(l)) {
if (l.image?.name === imageDTO.image_name) {
dispatch(layerDeleted(l.id));
}
}
});
};
export const addRequestedSingleImageDeletionListener = (startAppListening: AppStartListening) => {
startAppListening({
actionCreator: imageDeletionConfirmed,
@@ -151,9 +73,50 @@ export const addRequestedSingleImageDeletionListener = (startAppListening: AppSt
}
imageDTOs.forEach((imageDTO) => {
deleteControlAdapterImages(state, dispatch, imageDTO);
deleteNodesImages(state, dispatch, imageDTO);
deleteControlLayerImages(state, dispatch, imageDTO);
// reset init image if we deleted it
if (getState().generation.initialImage?.imageName === imageDTO.image_name) {
dispatch(clearInitialImage());
}
// reset control adapters that use the deleted images
forEach(selectControlAdapterAll(getState().controlAdapters), (ca) => {
if (
ca.controlImage === imageDTO.image_name ||
(isControlNetOrT2IAdapter(ca) && ca.processedControlImage === imageDTO.image_name)
) {
dispatch(
controlAdapterImageChanged({
id: ca.id,
controlImage: null,
})
);
dispatch(
controlAdapterProcessedImageChanged({
id: ca.id,
processedControlImage: null,
})
);
}
});
// reset nodes that use the deleted images
getState().nodes.nodes.forEach((node) => {
if (!isInvocationNode(node)) {
return;
}
forEach(node.data.inputs, (input) => {
if (isImageFieldInputInstance(input) && input.value?.image_name === imageDTO.image_name) {
dispatch(
fieldImageValueChanged({
nodeId: node.data.id,
fieldName: input.name,
value: undefined,
})
);
}
});
});
});
// Delete from server
@@ -205,9 +168,50 @@ export const addRequestedSingleImageDeletionListener = (startAppListening: AppSt
}
imageDTOs.forEach((imageDTO) => {
deleteControlAdapterImages(state, dispatch, imageDTO);
deleteNodesImages(state, dispatch, imageDTO);
deleteControlLayerImages(state, dispatch, imageDTO);
// reset init image if we deleted it
if (getState().generation.initialImage?.imageName === imageDTO.image_name) {
dispatch(clearInitialImage());
}
// reset control adapters that use the deleted images
forEach(selectControlAdapterAll(getState().controlAdapters), (ca) => {
if (
ca.controlImage === imageDTO.image_name ||
(isControlNetOrT2IAdapter(ca) && ca.processedControlImage === imageDTO.image_name)
) {
dispatch(
controlAdapterImageChanged({
id: ca.id,
controlImage: null,
})
);
dispatch(
controlAdapterProcessedImageChanged({
id: ca.id,
processedControlImage: null,
})
);
}
});
// reset nodes that use the deleted images
getState().nodes.nodes.forEach((node) => {
if (!isInvocationNode(node)) {
return;
}
forEach(node.data.inputs, (input) => {
if (isImageFieldInputInstance(input) && input.value?.image_name === imageDTO.image_name) {
dispatch(
fieldImageValueChanged({
nodeId: node.data.id,
fieldName: input.name,
value: undefined,
})
);
}
});
});
});
} catch {
// no-op

View File

@@ -7,16 +7,10 @@ import {
controlAdapterImageChanged,
controlAdapterIsEnabledChanged,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import {
caLayerImageChanged,
iiLayerImageChanged,
ipaLayerImageChanged,
rgLayerIPAdapterImageChanged,
} from 'features/controlLayers/store/controlLayersSlice';
import type { TypesafeDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import { imageSelected } from 'features/gallery/store/gallerySlice';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { initialImageChanged, selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { imagesApi } from 'services/api/endpoints/images';
export const dndDropped = createAction<{
@@ -53,6 +47,18 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
return;
}
/**
* Image dropped on initial image
*/
if (
overData.actionType === 'SET_INITIAL_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
dispatch(initialImageChanged(activeData.payload.imageDTO));
return;
}
/**
* Image dropped on ControlNet
*/
@@ -77,79 +83,6 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
return;
}
/**
* Image dropped on Control Adapter Layer
*/
if (
overData.actionType === 'SET_CA_LAYER_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { layerId } = overData.context;
dispatch(
caLayerImageChanged({
layerId,
imageDTO: activeData.payload.imageDTO,
})
);
return;
}
/**
* Image dropped on IP Adapter Layer
*/
if (
overData.actionType === 'SET_IPA_LAYER_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { layerId } = overData.context;
dispatch(
ipaLayerImageChanged({
layerId,
imageDTO: activeData.payload.imageDTO,
})
);
return;
}
/**
* Image dropped on RG Layer IP Adapter
*/
if (
overData.actionType === 'SET_RG_LAYER_IP_ADAPTER_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { layerId, ipAdapterId } = overData.context;
dispatch(
rgLayerIPAdapterImageChanged({
layerId,
ipAdapterId,
imageDTO: activeData.payload.imageDTO,
})
);
return;
}
/**
* Image dropped on II Layer Image
*/
if (
overData.actionType === 'SET_II_LAYER_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { layerId } = overData.context;
dispatch(
iiLayerImageChanged({
layerId,
imageDTO: activeData.payload.imageDTO,
})
);
return;
}
/**
* Image dropped on Canvas
*/

View File

@@ -14,6 +14,7 @@ export const addImageToDeleteSelectedListener = (startAppListening: AppStartList
const isImageInUse =
imagesUsage.some((i) => i.isCanvasImage) ||
imagesUsage.some((i) => i.isInitialImage) ||
imagesUsage.some((i) => i.isControlImage) ||
imagesUsage.some((i) => i.isNodesImage);

View File

@@ -6,14 +6,8 @@ import {
controlAdapterImageChanged,
controlAdapterIsEnabledChanged,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import {
caLayerImageChanged,
iiLayerImageChanged,
ipaLayerImageChanged,
rgLayerIPAdapterImageChanged,
} from 'features/controlLayers/store/controlLayersSlice';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { initialImageChanged, selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { omit } from 'lodash-es';
@@ -114,48 +108,15 @@ export const addImageUploadedFulfilledListener = (startAppListening: AppStartLis
return;
}
if (postUploadAction?.type === 'SET_CA_LAYER_IMAGE') {
const { layerId } = postUploadAction;
dispatch(caLayerImageChanged({ layerId, imageDTO }));
if (postUploadAction?.type === 'SET_INITIAL_IMAGE') {
dispatch(initialImageChanged(imageDTO));
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description: t('toast.setControlImage'),
})
);
}
if (postUploadAction?.type === 'SET_IPA_LAYER_IMAGE') {
const { layerId } = postUploadAction;
dispatch(ipaLayerImageChanged({ layerId, imageDTO }));
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description: t('toast.setControlImage'),
})
);
}
if (postUploadAction?.type === 'SET_RG_LAYER_IP_ADAPTER_IMAGE') {
const { layerId, ipAdapterId } = postUploadAction;
dispatch(rgLayerIPAdapterImageChanged({ layerId, ipAdapterId, imageDTO }));
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description: t('toast.setControlImage'),
})
);
}
if (postUploadAction?.type === 'SET_II_LAYER_IMAGE') {
const { layerId } = postUploadAction;
dispatch(iiLayerImageChanged({ layerId, imageDTO }));
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description: t('toast.setControlImage'),
description: t('toast.setInitialImage'),
})
);
return;
}
if (postUploadAction?.type === 'SET_NODES_IMAGE') {

View File

@@ -0,0 +1,21 @@
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { initialImageSelected } from 'features/parameters/store/actions';
import { initialImageChanged } from 'features/parameters/store/generationSlice';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { t } from 'i18next';
export const addInitialImageSelectedListener = (startAppListening: AppStartListening) => {
startAppListening({
actionCreator: initialImageSelected,
effect: (action, { dispatch }) => {
if (!action.payload) {
dispatch(addToast(makeToast({ title: t('toast.imageNotLoadedDesc'), status: 'error' })));
return;
}
dispatch(initialImageChanged(action.payload));
dispatch(addToast(makeToast(t('toast.sentToImageToImage'))));
},
});
};

View File

@@ -6,10 +6,9 @@ import {
controlAdapterModelCleared,
selectControlAdapterAll,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { heightChanged, widthChanged } from 'features/controlLayers/store/controlLayersSlice';
import { loraRemoved } from 'features/lora/store/loraSlice';
import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
import { modelChanged, vaeSelected } from 'features/parameters/store/generationSlice';
import { heightChanged, modelChanged, vaeSelected, widthChanged } from 'features/parameters/store/generationSlice';
import { zParameterModel, zParameterVAEModel } from 'features/parameters/types/parameterSchemas';
import { getIsSizeOptimal, getOptimalDimension } from 'features/parameters/util/optimalDimension';
import { refinerModelChanged } from 'features/sdxl/store/sdxlSlice';
@@ -70,22 +69,16 @@ const handleMainModels: ModelHandler = (models, state, dispatch, log) => {
dispatch(modelChanged(defaultModelInList, currentModel));
const optimalDimension = getOptimalDimension(defaultModelInList);
if (
getIsSizeOptimal(
state.controlLayers.present.size.width,
state.controlLayers.present.size.height,
optimalDimension
)
) {
if (getIsSizeOptimal(state.generation.width, state.generation.height, optimalDimension)) {
return;
}
const { width, height } = calculateNewSize(
state.controlLayers.present.size.aspectRatio.value,
state.generation.aspectRatio.value,
optimalDimension * optimalDimension
);
dispatch(widthChanged({ width }));
dispatch(heightChanged({ height }));
dispatch(widthChanged(width));
dispatch(heightChanged(height));
return;
}
}

View File

@@ -1,6 +1,5 @@
import { isAnyOf } from '@reduxjs/toolkit';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { positivePromptChanged } from 'features/controlLayers/store/controlLayersSlice';
import {
combinatorialToggled,
isErrorChanged,
@@ -11,16 +10,11 @@ import {
promptsChanged,
} from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { getShouldProcessPrompt } from 'features/dynamicPrompts/util/getShouldProcessPrompt';
import { setPositivePrompt } from 'features/parameters/store/generationSlice';
import { utilitiesApi } from 'services/api/endpoints/utilities';
import { socketConnected } from 'services/events/actions';
const matcher = isAnyOf(
positivePromptChanged,
combinatorialToggled,
maxPromptsChanged,
maxPromptsReset,
socketConnected
);
const matcher = isAnyOf(setPositivePrompt, combinatorialToggled, maxPromptsChanged, maxPromptsReset, socketConnected);
export const addDynamicPromptsListener = (startAppListening: AppStartListening) => {
startAppListening({
@@ -28,7 +22,7 @@ export const addDynamicPromptsListener = (startAppListening: AppStartListening)
effect: async (action, { dispatch, getState, cancelActiveListeners, delay }) => {
cancelActiveListeners();
const state = getState();
const { positivePrompt } = state.controlLayers.present;
const { positivePrompt } = state.generation;
const { maxPrompts } = state.dynamicPrompts;
if (state.config.disabledFeatures.includes('dynamicPrompting')) {
@@ -38,7 +32,7 @@ export const addDynamicPromptsListener = (startAppListening: AppStartListening)
const cachedPrompts = utilitiesApi.endpoints.dynamicPrompts.select({
prompt: positivePrompt,
max_prompts: maxPrompts,
})(state).data;
})(getState()).data;
if (cachedPrompts) {
dispatch(promptsChanged(cachedPrompts.prompts));
@@ -46,8 +40,8 @@ export const addDynamicPromptsListener = (startAppListening: AppStartListening)
return;
}
if (!getShouldProcessPrompt(positivePrompt)) {
dispatch(promptsChanged([positivePrompt]));
if (!getShouldProcessPrompt(state.generation.positivePrompt)) {
dispatch(promptsChanged([state.generation.positivePrompt]));
dispatch(parsingErrorChanged(undefined));
dispatch(isErrorChanged(false));
return;
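
The cached-prompts check above reads an RTK Query result imperatively, outside any React component, by calling the endpoint's `select` with the same arg used for the query. A sketch of that pattern, using the names from the hunk above:

```ts
// select(arg) returns a selector; calling it with the current state yields
// the cached query result for that exact arg, if one exists.
const cached = utilitiesApi.endpoints.dynamicPrompts.select({
  prompt: positivePrompt,
  max_prompts: maxPrompts,
})(getState()).data;

if (cached) {
  // Reuse the cached result instead of re-querying the server.
  dispatch(promptsChanged(cached.prompts));
}
```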

View File

@@ -1,13 +1,14 @@
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { heightChanged, widthChanged } from 'features/controlLayers/store/controlLayersSlice';
import { setDefaultSettings } from 'features/parameters/store/actions';
import {
heightRecalled,
setCfgRescaleMultiplier,
setCfgScale,
setScheduler,
setSteps,
vaePrecisionChanged,
vaeSelected,
widthRecalled,
} from 'features/parameters/store/generationSlice';
import {
isParameterCFGRescaleMultiplier,
@@ -96,16 +97,16 @@ export const addSetDefaultSettingsListener = (startAppListening: AppStartListeni
dispatch(setScheduler(scheduler));
}
}
const setSizeOptions = { updateAspectRatio: true, clamp: true };
if (width) {
if (isParameterWidth(width)) {
dispatch(widthChanged({ width, ...setSizeOptions }));
dispatch(widthRecalled(width));
}
}
if (height) {
if (isParameterHeight(height)) {
dispatch(heightChanged({ height, ...setSizeOptions }));
dispatch(heightRecalled(height));
}
}

View File

@@ -2,12 +2,7 @@ import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { parseify } from 'common/util/serialize';
import { addImageToStagingArea } from 'features/canvas/store/canvasSlice';
import {
boardIdSelected,
galleryViewChanged,
imageSelected,
isImageViewerOpenChanged,
} from 'features/gallery/store/gallerySlice';
import { boardIdSelected, galleryViewChanged, imageSelected } from 'features/gallery/store/gallerySlice';
import { IMAGE_CATEGORIES } from 'features/gallery/store/types';
import { isImageOutput } from 'features/nodes/types/common';
import { CANVAS_OUTPUT } from 'features/nodes/util/graph/constants';
@@ -106,7 +101,6 @@ export const addInvocationCompleteEventListener = (startAppListening: AppStartLi
}
dispatch(imageSelected(imageDTO));
dispatch(isImageViewerOpenChanged(true));
}
}
}

View File

@@ -10,11 +10,6 @@ import {
controlAdaptersPersistConfig,
controlAdaptersSlice,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import {
controlLayersPersistConfig,
controlLayersSlice,
controlLayersUndoableConfig,
} from 'features/controlLayers/store/controlLayersSlice';
import { deleteImageModalSlice } from 'features/deleteImageModal/store/slice';
import { dynamicPromptsPersistConfig, dynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { galleryPersistConfig, gallerySlice } from 'features/gallery/store/gallerySlice';
@@ -26,6 +21,11 @@ import { workflowPersistConfig, workflowSlice } from 'features/nodes/store/workf
import { generationPersistConfig, generationSlice } from 'features/parameters/store/generationSlice';
import { postprocessingPersistConfig, postprocessingSlice } from 'features/parameters/store/postprocessingSlice';
import { queueSlice } from 'features/queue/store/queueSlice';
import {
regionalPromptsPersistConfig,
regionalPromptsSlice,
regionalPromptsUndoableConfig,
} from 'features/regionalPrompts/store/regionalPromptsSlice';
import { sdxlPersistConfig, sdxlSlice } from 'features/sdxl/store/sdxlSlice';
import { configSlice } from 'features/system/store/configSlice';
import { systemPersistConfig, systemSlice } from 'features/system/store/systemSlice';
@@ -65,7 +65,7 @@ const allReducers = {
[queueSlice.name]: queueSlice.reducer,
[workflowSlice.name]: workflowSlice.reducer,
[hrfSlice.name]: hrfSlice.reducer,
[controlLayersSlice.name]: undoable(controlLayersSlice.reducer, controlLayersUndoableConfig),
[regionalPromptsSlice.name]: undoable(regionalPromptsSlice.reducer, regionalPromptsUndoableConfig),
[api.reducerPath]: api.reducer,
};
@@ -110,7 +110,7 @@ const persistConfigs: { [key in keyof typeof allReducers]?: PersistConfig } = {
[loraPersistConfig.name]: loraPersistConfig,
[modelManagerV2PersistConfig.name]: modelManagerV2PersistConfig,
[hrfPersistConfig.name]: hrfPersistConfig,
[controlLayersPersistConfig.name]: controlLayersPersistConfig,
[regionalPromptsPersistConfig.name]: regionalPromptsPersistConfig,
};
const unserialize: UnserializeFunction = (data, key) => {

View File

@@ -70,7 +70,6 @@ const IAIDndImage = (props: IAIDndImageProps) => {
onMouseOver,
onMouseOut,
dataTestId,
...rest
} = props;
const [isHovered, setIsHovered] = useState(false);
@@ -139,7 +138,6 @@ const IAIDndImage = (props: IAIDndImageProps) => {
minH={minSize ? minSize : undefined}
userSelect="none"
cursor={isDragDisabled || !imageDTO ? 'default' : 'pointer'}
{...rest}
>
{imageDTO && (
<Flex

View File

@@ -17,10 +17,14 @@ const accept: Accept = {
const selectPostUploadAction = createMemoizedSelector(activeTabNameSelector, (activeTabName) => {
let postUploadAction: PostUploadAction = { type: 'TOAST' };
if (activeTabName === 'canvas') {
if (activeTabName === 'unifiedCanvas') {
postUploadAction = { type: 'SET_CANVAS_INITIAL_IMAGE' };
}
if (activeTabName === 'img2img') {
postUploadAction = { type: 'SET_INITIAL_IMAGE' };
}
return postUploadAction;
});

View File

@@ -67,7 +67,7 @@ export const useGlobalHotkeys = () => {
useHotkeys(
'1',
() => {
dispatch(setActiveTab('generation'));
dispatch(setActiveTab('txt2img'));
},
[dispatch]
);
@@ -75,7 +75,7 @@ export const useGlobalHotkeys = () => {
useHotkeys(
'2',
() => {
dispatch(setActiveTab('canvas'));
dispatch(setActiveTab('img2img'));
},
[dispatch]
);
@@ -83,23 +83,31 @@ export const useGlobalHotkeys = () => {
useHotkeys(
'3',
() => {
dispatch(setActiveTab('workflows'));
dispatch(setActiveTab('unifiedCanvas'));
},
[dispatch]
);
useHotkeys(
'4',
() => {
dispatch(setActiveTab('nodes'));
},
[dispatch]
);
useHotkeys(
'5',
() => {
if (isModelManagerEnabled) {
dispatch(setActiveTab('models'));
dispatch(setActiveTab('modelManager'));
}
},
[dispatch, isModelManagerEnabled]
);
useHotkeys(
isModelManagerEnabled ? '5' : '4',
isModelManagerEnabled ? '6' : '5',
() => {
dispatch(setActiveTab('queue'));
},

View File

@@ -5,7 +5,6 @@ import {
selectControlAdaptersSlice,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { isControlNetOrT2IAdapter } from 'features/controlAdapters/store/types';
import { selectControlLayersSlice } from 'features/controlLayers/store/controlLayersSlice';
import { selectDynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { getShouldProcessPrompt } from 'features/dynamicPrompts/util/getShouldProcessPrompt';
import { selectNodesSlice } from 'features/nodes/store/nodesSlice';
@@ -24,12 +23,10 @@ const selector = createMemoizedSelector(
selectSystemSlice,
selectNodesSlice,
selectDynamicPromptsSlice,
selectControlLayersSlice,
activeTabNameSelector,
],
(controlAdapters, generation, system, nodes, dynamicPrompts, controlLayers, activeTabName) => {
const { model } = generation;
const { positivePrompt } = controlLayers.present;
(controlAdapters, generation, system, nodes, dynamicPrompts, activeTabName) => {
const { initialImage, model, positivePrompt } = generation;
const { isConnected } = system;
@@ -40,7 +37,11 @@ const selector = createMemoizedSelector(
reasons.push(i18n.t('parameters.invoke.systemDisconnected'));
}
if (activeTabName === 'workflows') {
if (activeTabName === 'img2img' && !initialImage) {
reasons.push(i18n.t('parameters.invoke.noInitialImageSelected'));
}
if (activeTabName === 'nodes') {
if (nodes.shouldValidateGraph) {
if (!nodes.nodes.length) {
reasons.push(i18n.t('parameters.invoke.noNodesInGraph'));
@@ -93,93 +94,37 @@ const selector = createMemoizedSelector(
reasons.push(i18n.t('parameters.invoke.noModelSelected'));
}
if (activeTabName === 'generation') {
// Handling for generation tab
controlLayers.present.layers
.filter((l) => l.isEnabled)
.flatMap((l) => {
if (l.type === 'control_adapter_layer') {
return l.controlAdapter;
} else if (l.type === 'ip_adapter_layer') {
return l.ipAdapter;
} else if (l.type === 'regional_guidance_layer') {
return l.ipAdapters;
}
return [];
})
.forEach((ca, i) => {
const hasNoModel = !ca.model;
const mismatchedModelBase = ca.model?.base !== model?.base;
const hasNoImage = !ca.image;
const imageNotProcessed =
(ca.type === 'controlnet' || ca.type === 't2i_adapter') && !ca.processedImage && ca.processorConfig;
selectControlAdapterAll(controlAdapters).forEach((ca, i) => {
if (!ca.isEnabled) {
return;
}
if (hasNoModel) {
reasons.push(
i18n.t('parameters.invoke.noModelForControlAdapter', {
number: i + 1,
})
);
}
if (mismatchedModelBase) {
// This should never happen, just a sanity check
reasons.push(
i18n.t('parameters.invoke.incompatibleBaseModelForControlAdapter', {
number: i + 1,
})
);
}
if (hasNoImage) {
reasons.push(
i18n.t('parameters.invoke.noControlImageForControlAdapter', {
number: i + 1,
})
);
}
if (imageNotProcessed) {
reasons.push(
i18n.t('parameters.invoke.imageNotProcessedForControlAdapter', {
number: i + 1,
})
);
}
});
} else {
// Handling for all other tabs
selectControlAdapterAll(controlAdapters)
.filter((ca) => ca.isEnabled)
.forEach((ca, i) => {
if (!ca.isEnabled) {
return;
}
if (!ca.model) {
reasons.push(
i18n.t('parameters.invoke.noModelForControlAdapter', {
number: i + 1,
})
);
} else if (ca.model.base !== model?.base) {
// This should never happen, just a sanity check
reasons.push(
i18n.t('parameters.invoke.incompatibleBaseModelForControlAdapter', {
number: i + 1,
})
);
}
if (!ca.model) {
reasons.push(
i18n.t('parameters.invoke.noModelForControlAdapter', {
number: i + 1,
})
);
} else if (ca.model.base !== model?.base) {
// This should never happen, just a sanity check
reasons.push(
i18n.t('parameters.invoke.incompatibleBaseModelForControlAdapter', {
number: i + 1,
})
);
}
if (
!ca.controlImage ||
(isControlNetOrT2IAdapter(ca) && !ca.processedControlImage && ca.processorType !== 'none')
) {
reasons.push(
i18n.t('parameters.invoke.noControlImageForControlAdapter', {
number: i + 1,
})
);
}
});
}
if (
!ca.controlImage ||
(isControlNetOrT2IAdapter(ca) && !ca.processedControlImage && ca.processorType !== 'none')
) {
reasons.push(
i18n.t('parameters.invoke.noControlImageForControlAdapter', {
number: i + 1,
})
);
}
});
}
return { isReady: !reasons.length, reasons };

View File

@@ -1,3 +0,0 @@
export const stopPropagation = (e: React.MouseEvent) => {
e.stopPropagation();
};

View File

@@ -75,7 +75,7 @@ const useInpaintingCanvasHotkeys = () => {
const onKeyDown = useCallback(
(e: KeyboardEvent) => {
if (e.repeat || e.key !== ' ' || isInteractiveTarget(e.target) || activeTabName !== 'canvas') {
if (e.repeat || e.key !== ' ' || isInteractiveTarget(e.target) || activeTabName !== 'unifiedCanvas') {
return;
}
if ($toolStash.get() || $tool.get() === 'move') {
@@ -90,7 +90,7 @@ const useInpaintingCanvasHotkeys = () => {
);
const onKeyUp = useCallback(
(e: KeyboardEvent) => {
if (e.repeat || e.key !== ' ' || isInteractiveTarget(e.target) || activeTabName !== 'canvas') {
if (e.repeat || e.key !== ' ' || isInteractiveTarget(e.target) || activeTabName !== 'unifiedCanvas') {
return;
}
if (!$toolStash.get() || $tool.get() !== 'move') {

View File

@@ -8,7 +8,6 @@ import calculateScale from 'features/canvas/util/calculateScale';
import { STAGE_PADDING_PERCENTAGE } from 'features/canvas/util/constants';
import floorCoordinates from 'features/canvas/util/floorCoordinates';
import getScaledBoundingBoxDimensions from 'features/canvas/util/getScaledBoundingBoxDimensions';
import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
import { initialAspectRatioState } from 'features/parameters/components/ImageSize/constants';
import type { AspectRatioState } from 'features/parameters/components/ImageSize/types';
import { modelChanged } from 'features/parameters/store/generationSlice';
@@ -589,9 +588,8 @@ export const canvasSlice = createSlice({
},
extraReducers: (builder) => {
builder.addCase(modelChanged, (state, action) => {
const newModel = action.payload;
if (!newModel || action.meta.previousModel?.base === newModel.base) {
// Model was cleared or the base didn't change
if (action.meta.previousModel?.base === action.payload?.base) {
// The base model hasn't changed, we don't need to optimize the size
return;
}
const optimalDimension = getOptimalDimension(action.payload);
@@ -599,8 +597,14 @@ export const canvasSlice = createSlice({
if (getIsSizeOptimal(width, height, optimalDimension)) {
return;
}
const newSize = calculateNewSize(state.aspectRatio.value, optimalDimension * optimalDimension);
setBoundingBoxDimensionsReducer(state, newSize, optimalDimension);
setBoundingBoxDimensionsReducer(
state,
{
width,
height,
},
optimalDimension
);
});
builder.addCase(socketQueueItemStatusChanged, (state, action) => {

View File

@@ -76,7 +76,7 @@ const ControlAdapterConfig = (props: { id: string; number: number }) => {
<Box minW={0} w="full" transitionProperty="common" transitionDuration="0.1s">
<ParamControlAdapterModel id={id} />
</Box>
{activeTabName === 'canvas' && <ControlNetCanvasImageImports id={id} />}
{activeTabName === 'unifiedCanvas' && <ControlNetCanvasImageImports id={id} />}
<IconButton
size="sm"
tooltip={t('controlnet.duplicate')}

View File

@@ -13,10 +13,9 @@ import {
controlAdapterImageChanged,
selectControlAdaptersSlice,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { heightChanged, widthChanged } from 'features/controlLayers/store/controlLayersSlice';
import type { TypesafeDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
import { selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { heightChanged, selectOptimalDimension, widthChanged } from 'features/parameters/store/generationSlice';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { memo, useCallback, useEffect, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
@@ -93,16 +92,15 @@ const ControlAdapterImagePreview = ({ isSmall, id }: Props) => {
return;
}
if (activeTabName === 'canvas') {
if (activeTabName === 'unifiedCanvas') {
dispatch(setBoundingBoxDimensions({ width: controlImage.width, height: controlImage.height }, optimalDimension));
} else {
const options = { updateAspectRatio: true, clamp: true };
const { width, height } = calculateNewSize(
controlImage.width / controlImage.height,
optimalDimension * optimalDimension
);
dispatch(widthChanged({ width, ...options }));
dispatch(heightChanged({ height, ...options }));
dispatch(widthChanged(width));
dispatch(heightChanged(height));
}
}, [controlImage, activeTabName, dispatch, optimalDimension]);

View File

@@ -52,7 +52,7 @@ const ParamControlAdapterIPMethod = ({ id }: Props) => {
return (
<FormControl>
<InformationalPopover feature="controlNetResizeMode">
<InformationalPopover feature="ipAdapterMethod">
<FormLabel>{t('controlnet.ipAdapterMethod')}</FormLabel>
</InformationalPopover>
<Combobox value={value} options={options} isDisabled={!isEnabled} onChange={handleIPMethodChanged} />

View File

@@ -1,11 +1,12 @@
import type { PayloadAction, Update } from '@reduxjs/toolkit';
import { createEntityAdapter, createSlice } from '@reduxjs/toolkit';
import { createEntityAdapter, createSlice, isAnyOf } from '@reduxjs/toolkit';
import { getSelectorsOptions } from 'app/store/createMemoizedSelector';
import type { PersistConfig, RootState } from 'app/store/store';
import { deepClone } from 'common/util/deepClone';
import { buildControlAdapter } from 'features/controlAdapters/util/buildControlAdapter';
import { buildControlAdapterProcessor } from 'features/controlAdapters/util/buildControlAdapterProcessor';
import { zModelIdentifierField } from 'features/nodes/types/common';
import { maskLayerIPAdapterAdded } from 'features/regionalPrompts/store/regionalPromptsSlice';
import { merge, uniq } from 'lodash-es';
import type { ControlNetModelConfig, IPAdapterModelConfig, T2IAdapterModelConfig } from 'services/api/types';
import { socketInvocationError } from 'services/events/actions';
@@ -382,6 +383,10 @@ export const controlAdaptersSlice = createSlice({
builder.addCase(socketInvocationError, (state) => {
state.pendingControlImages = [];
});
builder.addCase(maskLayerIPAdapterAdded, (state, action) => {
caAdapter.addOne(state, buildControlAdapter(action.meta.uuid, 'ip_adapter'));
});
},
});
@@ -412,6 +417,8 @@ export const {
t2iAdaptersReset,
} = controlAdaptersSlice.actions;
export const isAnyControlAdapterAdded = isAnyOf(controlAdapterAdded, controlAdapterRecalled);
export const selectControlAdaptersSlice = (state: RootState) => state.controlAdapters;
/* eslint-disable-next-line @typescript-eslint/no-explicit-any */
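
The new `extraReducers` case above reads `action.meta.uuid`, which implies `maskLayerIPAdapterAdded` is created with a prepare callback that pre-generates the id, so both slices can react to the same action and agree on the new entity's id. A sketch of how such an action could be defined (the payload shape is assumed):

```ts
import { createAction, nanoid } from '@reduxjs/toolkit';

// Pre-generate the id in `meta` so every slice reacting to this action
// (regionalPrompts, controlAdapters) creates entities with the same id.
export const maskLayerIPAdapterAdded = createAction(
  'regionalPrompts/maskLayerIPAdapterAdded',
  (layerId: string) => ({
    payload: { layerId }, // assumed payload shape
    meta: { uuid: nanoid() },
  })
);
```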

View File

@@ -1,47 +0,0 @@
import { Button, Menu, MenuButton, MenuItem, MenuList } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { useAddCALayer, useAddIILayer, useAddIPALayer } from 'features/controlLayers/hooks/addLayerHooks';
import { rgLayerAdded } from 'features/controlLayers/store/controlLayersSlice';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiPlusBold } from 'react-icons/pi';
export const AddLayerButton = memo(() => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const [addCALayer, isAddCALayerDisabled] = useAddCALayer();
const [addIPALayer, isAddIPALayerDisabled] = useAddIPALayer();
const [addIILayer, isAddIILayerDisabled] = useAddIILayer();
const addRGLayer = useCallback(() => {
dispatch(rgLayerAdded());
}, [dispatch]);
return (
<Menu>
<MenuButton
as={Button}
leftIcon={<PiPlusBold />}
variant="ghost"
data-testid="control-layers-add-layer-menu-button"
>
{t('controlLayers.addLayer')}
</MenuButton>
<MenuList>
<MenuItem icon={<PiPlusBold />} onClick={addRGLayer}>
{t('controlLayers.regionalGuidanceLayer')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addCALayer} isDisabled={isAddCALayerDisabled}>
{t('controlLayers.globalControlAdapterLayer')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addIPALayer} isDisabled={isAddIPALayerDisabled}>
{t('controlLayers.globalIPAdapterLayer')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addIILayer} isDisabled={isAddIILayerDisabled}>
{t('controlLayers.globalInitialImageLayer')}
</MenuItem>
</MenuList>
</Menu>
);
});
AddLayerButton.displayName = 'AddLayerButton';

View File

@@ -1,45 +0,0 @@
import { Flex, Spacer, useDisclosure } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { CALayerControlAdapterWrapper } from 'features/controlLayers/components/CALayer/CALayerControlAdapterWrapper';
import { LayerDeleteButton } from 'features/controlLayers/components/LayerCommon/LayerDeleteButton';
import { LayerMenu } from 'features/controlLayers/components/LayerCommon/LayerMenu';
import { LayerTitle } from 'features/controlLayers/components/LayerCommon/LayerTitle';
import { LayerVisibilityToggle } from 'features/controlLayers/components/LayerCommon/LayerVisibilityToggle';
import { LayerWrapper } from 'features/controlLayers/components/LayerCommon/LayerWrapper';
import { layerSelected, selectCALayerOrThrow } from 'features/controlLayers/store/controlLayersSlice';
import { memo, useCallback } from 'react';
import CALayerOpacity from './CALayerOpacity';
type Props = {
layerId: string;
};
export const CALayer = memo(({ layerId }: Props) => {
const dispatch = useAppDispatch();
const isSelected = useAppSelector((s) => selectCALayerOrThrow(s.controlLayers.present, layerId).isSelected);
const onClick = useCallback(() => {
dispatch(layerSelected(layerId));
}, [dispatch, layerId]);
const { isOpen, onToggle } = useDisclosure({ defaultIsOpen: true });
return (
<LayerWrapper onClick={onClick} borderColor={isSelected ? 'base.400' : 'base.800'}>
<Flex gap={3} alignItems="center" p={3} cursor="pointer" onDoubleClick={onToggle}>
<LayerVisibilityToggle layerId={layerId} />
<LayerTitle type="control_adapter_layer" />
<Spacer />
<CALayerOpacity layerId={layerId} />
<LayerMenu layerId={layerId} />
<LayerDeleteButton layerId={layerId} />
</Flex>
{isOpen && (
<Flex flexDir="column" gap={3} px={3} pb={3}>
<CALayerControlAdapterWrapper layerId={layerId} />
</Flex>
)}
</LayerWrapper>
);
});
CALayer.displayName = 'CALayer';

View File

@@ -1,121 +0,0 @@
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { ControlAdapter } from 'features/controlLayers/components/ControlAndIPAdapter/ControlAdapter';
import {
caLayerControlModeChanged,
caLayerImageChanged,
caLayerModelChanged,
caLayerProcessorConfigChanged,
caOrIPALayerBeginEndStepPctChanged,
caOrIPALayerWeightChanged,
selectCALayerOrThrow,
} from 'features/controlLayers/store/controlLayersSlice';
import type { ControlModeV2, ProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import type { CALayerImageDropData } from 'features/dnd/types';
import { memo, useCallback, useMemo } from 'react';
import type {
CALayerImagePostUploadAction,
ControlNetModelConfig,
ImageDTO,
T2IAdapterModelConfig,
} from 'services/api/types';
type Props = {
layerId: string;
};
export const CALayerControlAdapterWrapper = memo(({ layerId }: Props) => {
const dispatch = useAppDispatch();
const controlAdapter = useAppSelector((s) => selectCALayerOrThrow(s.controlLayers.present, layerId).controlAdapter);
const onChangeBeginEndStepPct = useCallback(
(beginEndStepPct: [number, number]) => {
dispatch(
caOrIPALayerBeginEndStepPctChanged({
layerId,
beginEndStepPct,
})
);
},
[dispatch, layerId]
);
const onChangeControlMode = useCallback(
(controlMode: ControlModeV2) => {
dispatch(
caLayerControlModeChanged({
layerId,
controlMode,
})
);
},
[dispatch, layerId]
);
const onChangeWeight = useCallback(
(weight: number) => {
dispatch(caOrIPALayerWeightChanged({ layerId, weight }));
},
[dispatch, layerId]
);
const onChangeProcessorConfig = useCallback(
(processorConfig: ProcessorConfig | null) => {
dispatch(caLayerProcessorConfigChanged({ layerId, processorConfig }));
},
[dispatch, layerId]
);
const onChangeModel = useCallback(
(modelConfig: ControlNetModelConfig | T2IAdapterModelConfig) => {
dispatch(
caLayerModelChanged({
layerId,
modelConfig,
})
);
},
[dispatch, layerId]
);
const onChangeImage = useCallback(
(imageDTO: ImageDTO | null) => {
dispatch(caLayerImageChanged({ layerId, imageDTO }));
},
[dispatch, layerId]
);
const droppableData = useMemo<CALayerImageDropData>(
() => ({
actionType: 'SET_CA_LAYER_IMAGE',
context: {
layerId,
},
id: layerId,
}),
[layerId]
);
const postUploadAction = useMemo<CALayerImagePostUploadAction>(
() => ({
layerId,
type: 'SET_CA_LAYER_IMAGE',
}),
[layerId]
);
return (
<ControlAdapter
controlAdapter={controlAdapter}
onChangeBeginEndStepPct={onChangeBeginEndStepPct}
onChangeControlMode={onChangeControlMode}
onChangeWeight={onChangeWeight}
onChangeProcessorConfig={onChangeProcessorConfig}
onChangeModel={onChangeModel}
onChangeImage={onChangeImage}
droppableData={droppableData}
postUploadAction={postUploadAction}
/>
);
});
CALayerControlAdapterWrapper.displayName = 'CALayerControlAdapterWrapper';

View File

@@ -1,98 +0,0 @@
import {
CompositeNumberInput,
CompositeSlider,
Flex,
FormControl,
FormLabel,
IconButton,
Popover,
PopoverArrow,
PopoverBody,
PopoverContent,
PopoverTrigger,
Switch,
} from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { stopPropagation } from 'common/util/stopPropagation';
import { useLayerOpacity } from 'features/controlLayers/hooks/layerStateHooks';
import { caLayerIsFilterEnabledChanged, caLayerOpacityChanged } from 'features/controlLayers/store/controlLayersSlice';
import type { ChangeEvent } from 'react';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiDropHalfFill } from 'react-icons/pi';
type Props = {
layerId: string;
};
const marks = [0, 25, 50, 75, 100];
const formatPct = (v: number | string) => `${v} %`;
const CALayerOpacity = ({ layerId }: Props) => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const { opacity, isFilterEnabled } = useLayerOpacity(layerId);
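// useLayerOpacity exposes opacity as a 0-100 percentage for the controls below;
// the slice stores 0-1, hence the division by 100 when dispatching.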
const onChangeOpacity = useCallback(
(v: number) => {
dispatch(caLayerOpacityChanged({ layerId, opacity: v / 100 }));
},
[dispatch, layerId]
);
const onChangeFilter = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
dispatch(caLayerIsFilterEnabledChanged({ layerId, isFilterEnabled: e.target.checked }));
},
[dispatch, layerId]
);
return (
<Popover isLazy>
<PopoverTrigger>
<IconButton
aria-label={t('controlLayers.opacity')}
size="sm"
icon={<PiDropHalfFill size={16} />}
variant="ghost"
onDoubleClick={stopPropagation}
/>
</PopoverTrigger>
<PopoverContent onDoubleClick={stopPropagation}>
<PopoverArrow />
<PopoverBody>
<Flex direction="column" gap={2}>
<FormControl orientation="horizontal" w="full">
<FormLabel m={0} flexGrow={1} cursor="pointer">
{t('controlLayers.opacityFilter')}
</FormLabel>
<Switch isChecked={isFilterEnabled} onChange={onChangeFilter} />
</FormControl>
<FormControl orientation="horizontal">
<FormLabel m={0}>{t('controlLayers.opacity')}</FormLabel>
<CompositeSlider
min={0}
max={100}
step={1}
value={opacity}
defaultValue={100}
onChange={onChangeOpacity}
marks={marks}
w={48}
/>
<CompositeNumberInput
min={0}
max={100}
step={1}
value={opacity}
defaultValue={100}
onChange={onChangeOpacity}
w={24}
format={formatPct}
/>
</FormControl>
</Flex>
</PopoverBody>
</PopoverContent>
</Popover>
);
};
export default memo(CALayerOpacity);

View File

@@ -1,117 +0,0 @@
import { Box, Divider, Flex, Icon, IconButton } from '@invoke-ai/ui-library';
import { ControlAdapterModelCombobox } from 'features/controlLayers/components/ControlAndIPAdapter/ControlAdapterModelCombobox';
import type {
ControlModeV2,
ControlNetConfigV2,
ProcessorConfig,
T2IAdapterConfigV2,
} from 'features/controlLayers/util/controlAdapters';
import type { TypesafeDroppableData } from 'features/dnd/types';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiCaretUpBold } from 'react-icons/pi';
import { useToggle } from 'react-use';
import type { ControlNetModelConfig, ImageDTO, PostUploadAction, T2IAdapterModelConfig } from 'services/api/types';
import { ControlAdapterBeginEndStepPct } from './ControlAdapterBeginEndStepPct';
import { ControlAdapterControlModeSelect } from './ControlAdapterControlModeSelect';
import { ControlAdapterImagePreview } from './ControlAdapterImagePreview';
import { ControlAdapterProcessorConfig } from './ControlAdapterProcessorConfig';
import { ControlAdapterProcessorTypeSelect } from './ControlAdapterProcessorTypeSelect';
import { ControlAdapterWeight } from './ControlAdapterWeight';
type Props = {
controlAdapter: ControlNetConfigV2 | T2IAdapterConfigV2;
onChangeBeginEndStepPct: (beginEndStepPct: [number, number]) => void;
onChangeControlMode: (controlMode: ControlModeV2) => void;
onChangeWeight: (weight: number) => void;
onChangeProcessorConfig: (processorConfig: ProcessorConfig | null) => void;
onChangeModel: (modelConfig: ControlNetModelConfig | T2IAdapterModelConfig) => void;
onChangeImage: (imageDTO: ImageDTO | null) => void;
droppableData: TypesafeDroppableData;
postUploadAction: PostUploadAction;
};
export const ControlAdapter = memo(
({
controlAdapter,
onChangeBeginEndStepPct,
onChangeControlMode,
onChangeWeight,
onChangeProcessorConfig,
onChangeModel,
onChangeImage,
droppableData,
postUploadAction,
}: Props) => {
const { t } = useTranslation();
const [isExpanded, toggleIsExpanded] = useToggle(false);
return (
<Flex flexDir="column" gap={3} position="relative" w="full">
<Flex gap={3} alignItems="center" w="full">
<Box minW={0} w="full" transitionProperty="common" transitionDuration="0.1s">
<ControlAdapterModelCombobox modelKey={controlAdapter.model?.key ?? null} onChange={onChangeModel} />
</Box>
<IconButton
size="sm"
tooltip={isExpanded ? t('controlnet.hideAdvanced') : t('controlnet.showAdvanced')}
aria-label={isExpanded ? t('controlnet.hideAdvanced') : t('controlnet.showAdvanced')}
onClick={toggleIsExpanded}
variant="ghost"
icon={
<Icon
boxSize={4}
as={PiCaretUpBold}
transform={isExpanded ? 'rotate(0deg)' : 'rotate(180deg)'}
transitionProperty="common"
transitionDuration="normal"
/>
}
/>
</Flex>
<Flex gap={3} w="full">
<Flex flexDir="column" gap={3} w="full" h="full">
{controlAdapter.type === 'controlnet' && (
<ControlAdapterControlModeSelect
controlMode={controlAdapter.controlMode}
onChange={onChangeControlMode}
/>
)}
<ControlAdapterWeight weight={controlAdapter.weight} onChange={onChangeWeight} />
<ControlAdapterBeginEndStepPct
beginEndStepPct={controlAdapter.beginEndStepPct}
onChange={onChangeBeginEndStepPct}
/>
</Flex>
<Flex alignItems="center" justifyContent="center" h={36} w={36} aspectRatio="1/1">
<ControlAdapterImagePreview
controlAdapter={controlAdapter}
onChangeImage={onChangeImage}
droppableData={droppableData}
postUploadAction={postUploadAction}
/>
</Flex>
</Flex>
{isExpanded && (
<>
<Divider />
<Flex flexDir="column" gap={3} w="full">
<ControlAdapterProcessorTypeSelect
config={controlAdapter.processorConfig}
onChange={onChangeProcessorConfig}
/>
<ControlAdapterProcessorConfig
config={controlAdapter.processorConfig}
onChange={onChangeProcessorConfig}
/>
</Flex>
</>
)}
</Flex>
);
}
);
ControlAdapter.displayName = 'ControlAdapter';

View File

@@ -1,43 +0,0 @@
import { CompositeRangeSlider, FormControl, FormLabel } from '@invoke-ai/ui-library';
import { InformationalPopover } from 'common/components/InformationalPopover/InformationalPopover';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
type Props = {
beginEndStepPct: [number, number];
onChange: (beginEndStepPct: [number, number]) => void;
};
const formatPct = (v: number) => `${Math.round(v * 100)}%`;
const ariaLabel = ['Begin Step %', 'End Step %'];
export const ControlAdapterBeginEndStepPct = memo(({ beginEndStepPct, onChange }: Props) => {
const { t } = useTranslation();
const onReset = useCallback(() => {
onChange([0, 1]);
}, [onChange]);
return (
<FormControl orientation="horizontal">
<InformationalPopover feature="controlNetBeginEnd">
<FormLabel m={0}>{t('controlnet.beginEndStepPercentShort')}</FormLabel>
</InformationalPopover>
<CompositeRangeSlider
aria-label={ariaLabel}
value={beginEndStepPct}
onChange={onChange}
onReset={onReset}
min={0}
max={1}
step={0.05}
fineStep={0.01}
minStepsBetweenThumbs={1}
formatValue={formatPct}
marks
withThumbTooltip
/>
</FormControl>
);
});
ControlAdapterBeginEndStepPct.displayName = 'ControlAdapterBeginEndStepPct';

View File

@@ -1,60 +0,0 @@
import type { ComboboxOnChange } from '@invoke-ai/ui-library';
import { Combobox, FormControl, FormLabel } from '@invoke-ai/ui-library';
import { InformationalPopover } from 'common/components/InformationalPopover/InformationalPopover';
import type { ControlModeV2 } from 'features/controlLayers/util/controlAdapters';
import { isControlModeV2 } from 'features/controlLayers/util/controlAdapters';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { assert } from 'tsafe';
type Props = {
controlMode: ControlModeV2;
onChange: (controlMode: ControlModeV2) => void;
};
export const ControlAdapterControlModeSelect = memo(({ controlMode, onChange }: Props) => {
const { t } = useTranslation();
const CONTROL_MODE_DATA = useMemo(
() => [
{ label: t('controlnet.balanced'), value: 'balanced' },
{ label: t('controlnet.prompt'), value: 'more_prompt' },
{ label: t('controlnet.control'), value: 'more_control' },
{ label: t('controlnet.megaControl'), value: 'unbalanced' },
],
[t]
);
const handleControlModeChange = useCallback<ComboboxOnChange>(
(v) => {
assert(isControlModeV2(v?.value));
onChange(v.value);
},
[onChange]
);
const value = useMemo(
() => CONTROL_MODE_DATA.filter((o) => o.value === controlMode)[0],
[CONTROL_MODE_DATA, controlMode]
);
if (!controlMode) {
return null;
}
return (
<FormControl>
<InformationalPopover feature="controlNetControlMode">
<FormLabel m={0}>{t('controlnet.control')}</FormLabel>
</InformationalPopover>
<Combobox
value={value}
options={CONTROL_MODE_DATA}
onChange={handleControlModeChange}
isClearable={false}
isSearchable={false}
/>
</FormControl>
);
});
ControlAdapterControlModeSelect.displayName = 'ControlAdapterControlModeSelect';

View File

@@ -1,217 +0,0 @@
import type { SystemStyleObject } from '@invoke-ai/ui-library';
import { Box, Flex, Spinner, useShiftModifier } from '@invoke-ai/ui-library';
import { skipToken } from '@reduxjs/toolkit/query';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIDndImage from 'common/components/IAIDndImage';
import IAIDndImageIcon from 'common/components/IAIDndImageIcon';
import { setBoundingBoxDimensions } from 'features/canvas/store/canvasSlice';
import { heightChanged, widthChanged } from 'features/controlLayers/store/controlLayersSlice';
import type { ControlNetConfigV2, T2IAdapterConfigV2 } from 'features/controlLayers/util/controlAdapters';
import type { ImageDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
import { selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { memo, useCallback, useEffect, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { PiArrowCounterClockwiseBold, PiFloppyDiskBold, PiRulerBold } from 'react-icons/pi';
import {
useAddImageToBoardMutation,
useChangeImageIsIntermediateMutation,
useGetImageDTOQuery,
useRemoveImageFromBoardMutation,
} from 'services/api/endpoints/images';
import type { ImageDTO, PostUploadAction } from 'services/api/types';
type Props = {
controlAdapter: ControlNetConfigV2 | T2IAdapterConfigV2;
onChangeImage: (imageDTO: ImageDTO | null) => void;
droppableData: TypesafeDroppableData;
postUploadAction: PostUploadAction;
};
export const ControlAdapterImagePreview = memo(
({ controlAdapter, onChangeImage, droppableData, postUploadAction }: Props) => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const autoAddBoardId = useAppSelector((s) => s.gallery.autoAddBoardId);
const isConnected = useAppSelector((s) => s.system.isConnected);
const activeTabName = useAppSelector(activeTabNameSelector);
const optimalDimension = useAppSelector(selectOptimalDimension);
const shift = useShiftModifier();
const [isMouseOverImage, setIsMouseOverImage] = useState(false);
const { currentData: controlImage, isError: isErrorControlImage } = useGetImageDTOQuery(
controlAdapter.image?.name ?? skipToken
);
const { currentData: processedControlImage, isError: isErrorProcessedControlImage } = useGetImageDTOQuery(
controlAdapter.processedImage?.name ?? skipToken
);
const [changeIsIntermediate] = useChangeImageIsIntermediateMutation();
const [addToBoard] = useAddImageToBoardMutation();
const [removeFromBoard] = useRemoveImageFromBoardMutation();
const handleResetControlImage = useCallback(() => {
onChangeImage(null);
}, [onChangeImage]);
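// Saving promotes the processed control image from an intermediate to a regular
// gallery image, then files it on the auto-add board (or clears any board
// assignment when auto-add is set to 'none').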
const handleSaveControlImage = useCallback(async () => {
if (!processedControlImage) {
return;
}
await changeIsIntermediate({
imageDTO: processedControlImage,
is_intermediate: false,
}).unwrap();
if (autoAddBoardId !== 'none') {
addToBoard({
imageDTO: processedControlImage,
board_id: autoAddBoardId,
});
} else {
removeFromBoard({ imageDTO: processedControlImage });
}
}, [processedControlImage, changeIsIntermediate, autoAddBoardId, addToBoard, removeFromBoard]);
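// On the canvas tab, size the bounding box to match the control image. On other
// tabs, shift-click applies the image's exact pixel dimensions, while a plain
// click rescales them to the optimal-dimension pixel budget, preserving the
// aspect ratio.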
const handleSetControlImageToDimensions = useCallback(() => {
if (!controlImage) {
return;
}
if (activeTabName === 'canvas') {
dispatch(
setBoundingBoxDimensions({ width: controlImage.width, height: controlImage.height }, optimalDimension)
);
} else {
const options = { updateAspectRatio: true, clamp: true };
if (shift) {
const { width, height } = controlImage;
dispatch(widthChanged({ width, ...options }));
dispatch(heightChanged({ height, ...options }));
} else {
const { width, height } = calculateNewSize(
controlImage.width / controlImage.height,
optimalDimension * optimalDimension
);
dispatch(widthChanged({ width, ...options }));
dispatch(heightChanged({ height, ...options }));
}
}
}, [controlImage, activeTabName, dispatch, optimalDimension, shift]);
const handleMouseEnter = useCallback(() => {
setIsMouseOverImage(true);
}, []);
const handleMouseLeave = useCallback(() => {
setIsMouseOverImage(false);
}, []);
const draggableData = useMemo<ImageDraggableData | undefined>(() => {
if (controlImage) {
return {
id: controlAdapter.id,
payloadType: 'IMAGE_DTO',
payload: { imageDTO: controlImage },
};
}
}, [controlImage, controlAdapter.id]);
const shouldShowProcessedImage =
controlImage &&
processedControlImage &&
!isMouseOverImage &&
!controlAdapter.isProcessingImage &&
controlAdapter.processorConfig !== null;
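// If the app is connected but either image record failed to load (e.g. it was
// deleted server-side), clear the stale reference rather than render a broken
// image.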
useEffect(() => {
if (isConnected && (isErrorControlImage || isErrorProcessedControlImage)) {
handleResetControlImage();
}
}, [handleResetControlImage, isConnected, isErrorControlImage, isErrorProcessedControlImage]);
return (
<Flex
onMouseEnter={handleMouseEnter}
onMouseLeave={handleMouseLeave}
position="relative"
w="full"
h={36}
alignItems="center"
justifyContent="center"
>
<IAIDndImage
draggableData={draggableData}
droppableData={droppableData}
imageDTO={controlImage}
isDropDisabled={shouldShowProcessedImage}
postUploadAction={postUploadAction}
/>
<Box
position="absolute"
top={0}
insetInlineStart={0}
w="full"
h="full"
opacity={shouldShowProcessedImage ? 1 : 0}
transitionProperty="common"
transitionDuration="normal"
pointerEvents="none"
>
<IAIDndImage
draggableData={draggableData}
droppableData={droppableData}
imageDTO={processedControlImage}
isUploadDisabled={true}
/>
</Box>
<>
<IAIDndImageIcon
onClick={handleResetControlImage}
icon={controlImage ? <PiArrowCounterClockwiseBold size={16} /> : undefined}
tooltip={t('controlnet.resetControlImage')}
/>
<IAIDndImageIcon
onClick={handleSaveControlImage}
icon={controlImage ? <PiFloppyDiskBold size={16} /> : undefined}
tooltip={t('controlnet.saveControlImage')}
styleOverrides={saveControlImageStyleOverrides}
/>
<IAIDndImageIcon
onClick={handleSetControlImageToDimensions}
icon={controlImage ? <PiRulerBold size={16} /> : undefined}
tooltip={shift ? t('controlnet.setControlImageDimensionsForce') : t('controlnet.setControlImageDimensions')}
styleOverrides={setControlImageDimensionsStyleOverrides}
/>
</>
{controlAdapter.isProcessingImage && (
<Flex
position="absolute"
top={0}
insetInlineStart={0}
w="full"
h="full"
alignItems="center"
justifyContent="center"
opacity={0.8}
borderRadius="base"
bg="base.900"
>
<Spinner size="xl" color="base.400" />
</Flex>
)}
</Flex>
);
}
);
ControlAdapterImagePreview.displayName = 'ControlAdapterImagePreview';
const saveControlImageStyleOverrides: SystemStyleObject = { mt: 6 };
const setControlImageDimensionsStyleOverrides: SystemStyleObject = { mt: 12 };

View File

@@ -1,62 +0,0 @@
import { Combobox, FormControl, Tooltip } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { useGroupedModelCombobox } from 'common/hooks/useGroupedModelCombobox';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { useControlNetAndT2IAdapterModels } from 'services/api/hooks/modelsByType';
import type { AnyModelConfig, ControlNetModelConfig, T2IAdapterModelConfig } from 'services/api/types';
type Props = {
modelKey: string | null;
onChange: (modelConfig: ControlNetModelConfig | T2IAdapterModelConfig) => void;
};
export const ControlAdapterModelCombobox = memo(({ modelKey, onChange: onChangeModel }: Props) => {
const { t } = useTranslation();
const currentBaseModel = useAppSelector((s) => s.generation.model?.base);
const [modelConfigs, { isLoading }] = useControlNetAndT2IAdapterModels();
const selectedModel = useMemo(() => modelConfigs.find((m) => m.key === modelKey), [modelConfigs, modelKey]);
const _onChange = useCallback(
(modelConfig: ControlNetModelConfig | T2IAdapterModelConfig | null) => {
if (!modelConfig) {
return;
}
onChangeModel(modelConfig);
},
[onChangeModel]
);
const getIsDisabled = useCallback(
(model: AnyModelConfig): boolean => {
const isCompatible = currentBaseModel === model.base;
const hasMainModel = Boolean(currentBaseModel);
return !hasMainModel || !isCompatible;
},
[currentBaseModel]
);
const { options, value, onChange, noOptionsMessage } = useGroupedModelCombobox({
modelConfigs,
onChange: _onChange,
selectedModel,
getIsDisabled,
isLoading,
});
return (
<Tooltip label={selectedModel?.description}>
<FormControl isInvalid={!value || currentBaseModel !== selectedModel?.base} w="full">
<Combobox
options={options}
placeholder={t('controlnet.selectModel')}
value={value}
onChange={onChange}
noOptionsMessage={noOptionsMessage}
/>
</FormControl>
</Tooltip>
);
});
ControlAdapterModelCombobox.displayName = 'ControlAdapterModelCombobox';

View File

@@ -1,85 +0,0 @@
import type { ProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import { memo } from 'react';
import { CannyProcessor } from './processors/CannyProcessor';
import { ColorMapProcessor } from './processors/ColorMapProcessor';
import { ContentShuffleProcessor } from './processors/ContentShuffleProcessor';
import { DepthAnythingProcessor } from './processors/DepthAnythingProcessor';
import { DWOpenposeProcessor } from './processors/DWOpenposeProcessor';
import { HedProcessor } from './processors/HedProcessor';
import { LineartProcessor } from './processors/LineartProcessor';
import { MediapipeFaceProcessor } from './processors/MediapipeFaceProcessor';
import { MidasDepthProcessor } from './processors/MidasDepthProcessor';
import { MlsdImageProcessor } from './processors/MlsdImageProcessor';
import { PidiProcessor } from './processors/PidiProcessor';
type Props = {
config: ProcessorConfig | null;
onChange: (config: ProcessorConfig | null) => void;
};
export const ControlAdapterProcessorConfig = memo(({ config, onChange }: Props) => {
if (!config) {
return null;
}
if (config.type === 'canny_image_processor') {
return <CannyProcessor onChange={onChange} config={config} />;
}
if (config.type === 'color_map_image_processor') {
return <ColorMapProcessor onChange={onChange} config={config} />;
}
if (config.type === 'depth_anything_image_processor') {
return <DepthAnythingProcessor onChange={onChange} config={config} />;
}
if (config.type === 'hed_image_processor') {
return <HedProcessor onChange={onChange} config={config} />;
}
if (config.type === 'lineart_image_processor') {
return <LineartProcessor onChange={onChange} config={config} />;
}
if (config.type === 'content_shuffle_image_processor') {
return <ContentShuffleProcessor onChange={onChange} config={config} />;
}
if (config.type === 'lineart_anime_image_processor') {
// No configurable options for this processor
return null;
}
if (config.type === 'mediapipe_face_processor') {
return <MediapipeFaceProcessor onChange={onChange} config={config} />;
}
if (config.type === 'midas_depth_image_processor') {
return <MidasDepthProcessor onChange={onChange} config={config} />;
}
if (config.type === 'mlsd_image_processor') {
return <MlsdImageProcessor onChange={onChange} config={config} />;
}
if (config.type === 'normalbae_image_processor') {
// No configurable options for this processor
return null;
}
if (config.type === 'dw_openpose_image_processor') {
return <DWOpenposeProcessor onChange={onChange} config={config} />;
}
if (config.type === 'pidi_image_processor') {
return <PidiProcessor onChange={onChange} config={config} />;
}
if (config.type === 'zoe_depth_image_processor') {
return null;
}
});
ControlAdapterProcessorConfig.displayName = 'ControlAdapterProcessorConfig';
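
`ControlAdapterProcessorConfig` dispatches on the `config.type` discriminant with a chain of `if`s and implicitly renders nothing for an unhandled type. One possible hardening, assuming `ProcessorConfig` stays a closed discriminated union, is an exhaustiveness guard so the compiler flags any union member without a branch:

```ts
// Sketch only: a compile-time exhaustiveness check for the dispatch above.
function assertNever(x: never): never {
  throw new Error(`Unhandled processor config: ${JSON.stringify(x)}`);
}
// Ending the component with `return assertNever(config);` after every known
// `config.type` branch has returned makes TypeScript reject any ProcessorConfig
// member that is missing a branch.
```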

View File

@@ -1,70 +0,0 @@
import type { ComboboxOnChange } from '@invoke-ai/ui-library';
import { Combobox, Flex, FormControl, FormLabel, IconButton } from '@invoke-ai/ui-library';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import { InformationalPopover } from 'common/components/InformationalPopover/InformationalPopover';
import type { ProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import { CA_PROCESSOR_DATA, isProcessorTypeV2 } from 'features/controlLayers/util/controlAdapters';
import { configSelector } from 'features/system/store/configSelectors';
import { includes, map } from 'lodash-es';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiXBold } from 'react-icons/pi';
import { assert } from 'tsafe';
type Props = {
config: ProcessorConfig | null;
onChange: (config: ProcessorConfig | null) => void;
};
const selectDisabledProcessors = createMemoizedSelector(
configSelector,
(config) => config.sd.disabledControlNetProcessors
);
export const ControlAdapterProcessorTypeSelect = memo(({ config, onChange }: Props) => {
const { t } = useTranslation();
const disabledProcessors = useAppSelector(selectDisabledProcessors);
const options = useMemo(() => {
return map(CA_PROCESSOR_DATA, ({ labelTKey }, type) => ({ value: type, label: t(labelTKey) })).filter(
(o) => !includes(disabledProcessors, o.value)
);
}, [disabledProcessors, t]);
const _onChange = useCallback<ComboboxOnChange>(
(v) => {
if (!v) {
onChange(null);
} else {
assert(isProcessorTypeV2(v.value));
onChange(CA_PROCESSOR_DATA[v.value].buildDefaults());
}
},
[onChange]
);
const clearProcessor = useCallback(() => {
onChange(null);
}, [onChange]);
const value = useMemo(() => options.find((o) => o.value === config?.type) ?? null, [options, config?.type]);
return (
<Flex gap={2}>
<FormControl>
<InformationalPopover feature="controlNetProcessor">
<FormLabel m={0}>{t('controlnet.processor')}</FormLabel>
</InformationalPopover>
<Combobox value={value} options={options} onChange={_onChange} isSearchable={false} isClearable={false} />
</FormControl>
<IconButton
aria-label={t('controlLayers.clearProcessor')}
onClick={clearProcessor}
isDisabled={!config}
icon={<PiXBold />}
variant="ghost"
size="sm"
/>
</Flex>
);
});
ControlAdapterProcessorTypeSelect.displayName = 'ControlAdapterProcessorTypeSelect';

View File

@@ -1,55 +0,0 @@
import { CompositeNumberInput, CompositeSlider, FormControl, FormLabel } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { InformationalPopover } from 'common/components/InformationalPopover/InformationalPopover';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
type Props = {
weight: number;
onChange: (weight: number) => void;
};
const formatValue = (v: number) => v.toFixed(2);
const marks = [0, 1, 2];
export const ControlAdapterWeight = memo(({ weight, onChange }: Props) => {
const { t } = useTranslation();
const initial = useAppSelector((s) => s.config.sd.ca.weight.initial);
const sliderMin = useAppSelector((s) => s.config.sd.ca.weight.sliderMin);
const sliderMax = useAppSelector((s) => s.config.sd.ca.weight.sliderMax);
const numberInputMin = useAppSelector((s) => s.config.sd.ca.weight.numberInputMin);
const numberInputMax = useAppSelector((s) => s.config.sd.ca.weight.numberInputMax);
const coarseStep = useAppSelector((s) => s.config.sd.ca.weight.coarseStep);
const fineStep = useAppSelector((s) => s.config.sd.ca.weight.fineStep);
return (
<FormControl orientation="horizontal">
<InformationalPopover feature="controlNetWeight">
<FormLabel m={0}>{t('controlnet.weight')}</FormLabel>
</InformationalPopover>
<CompositeSlider
value={weight}
onChange={onChange}
defaultValue={initial}
min={sliderMin}
max={sliderMax}
step={coarseStep}
fineStep={fineStep}
marks={marks}
formatValue={formatValue}
/>
<CompositeNumberInput
value={weight}
onChange={onChange}
min={numberInputMin}
max={numberInputMax}
step={coarseStep}
fineStep={fineStep}
maxW={20}
defaultValue={initial}
/>
</FormControl>
);
});
ControlAdapterWeight.displayName = 'ControlAdapterWeight';

View File

@@ -1,72 +0,0 @@
import { Box, Flex } from '@invoke-ai/ui-library';
import { ControlAdapterBeginEndStepPct } from 'features/controlLayers/components/ControlAndIPAdapter/ControlAdapterBeginEndStepPct';
import { ControlAdapterWeight } from 'features/controlLayers/components/ControlAndIPAdapter/ControlAdapterWeight';
import { IPAdapterImagePreview } from 'features/controlLayers/components/ControlAndIPAdapter/IPAdapterImagePreview';
import { IPAdapterMethod } from 'features/controlLayers/components/ControlAndIPAdapter/IPAdapterMethod';
import { IPAdapterModelSelect } from 'features/controlLayers/components/ControlAndIPAdapter/IPAdapterModelSelect';
import type { CLIPVisionModelV2, IPAdapterConfigV2, IPMethodV2 } from 'features/controlLayers/util/controlAdapters';
import type { TypesafeDroppableData } from 'features/dnd/types';
import { memo } from 'react';
import type { ImageDTO, IPAdapterModelConfig, PostUploadAction } from 'services/api/types';
type Props = {
ipAdapter: IPAdapterConfigV2;
onChangeBeginEndStepPct: (beginEndStepPct: [number, number]) => void;
onChangeWeight: (weight: number) => void;
onChangeIPMethod: (method: IPMethodV2) => void;
onChangeModel: (modelConfig: IPAdapterModelConfig) => void;
onChangeCLIPVisionModel: (clipVisionModel: CLIPVisionModelV2) => void;
onChangeImage: (imageDTO: ImageDTO | null) => void;
droppableData: TypesafeDroppableData;
postUploadAction: PostUploadAction;
};
export const IPAdapter = memo(
({
ipAdapter,
onChangeBeginEndStepPct,
onChangeWeight,
onChangeIPMethod,
onChangeModel,
onChangeCLIPVisionModel,
onChangeImage,
droppableData,
postUploadAction,
}: Props) => {
return (
<Flex flexDir="column" gap={4} position="relative" w="full">
<Flex gap={3} alignItems="center" w="full">
<Box minW={0} w="full" transitionProperty="common" transitionDuration="0.1s">
<IPAdapterModelSelect
modelKey={ipAdapter.model?.key ?? null}
onChangeModel={onChangeModel}
clipVisionModel={ipAdapter.clipVisionModel}
onChangeCLIPVisionModel={onChangeCLIPVisionModel}
/>
</Box>
</Flex>
<Flex gap={4} w="full" alignItems="center">
<Flex flexDir="column" gap={3} w="full">
<IPAdapterMethod method={ipAdapter.method} onChange={onChangeIPMethod} />
<ControlAdapterWeight weight={ipAdapter.weight} onChange={onChangeWeight} />
<ControlAdapterBeginEndStepPct
beginEndStepPct={ipAdapter.beginEndStepPct}
onChange={onChangeBeginEndStepPct}
/>
</Flex>
<Flex alignItems="center" justifyContent="center" h={36} w={36} aspectRatio="1/1">
<IPAdapterImagePreview
image={ipAdapter.image}
onChangeImage={onChangeImage}
ipAdapterId={ipAdapter.id}
droppableData={droppableData}
postUploadAction={postUploadAction}
/>
</Flex>
</Flex>
</Flex>
);
}
);
IPAdapter.displayName = 'IPAdapter';

View File

@@ -1,113 +0,0 @@
import type { SystemStyleObject } from '@invoke-ai/ui-library';
import { Flex, useShiftModifier } from '@invoke-ai/ui-library';
import { skipToken } from '@reduxjs/toolkit/query';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIDndImage from 'common/components/IAIDndImage';
import IAIDndImageIcon from 'common/components/IAIDndImageIcon';
import { setBoundingBoxDimensions } from 'features/canvas/store/canvasSlice';
import { heightChanged, widthChanged } from 'features/controlLayers/store/controlLayersSlice';
import type { ImageWithDims } from 'features/controlLayers/util/controlAdapters';
import type { ImageDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
import { selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { memo, useCallback, useEffect, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiArrowCounterClockwiseBold, PiRulerBold } from 'react-icons/pi';
import { useGetImageDTOQuery } from 'services/api/endpoints/images';
import type { ImageDTO, PostUploadAction } from 'services/api/types';
type Props = {
image: ImageWithDims | null;
onChangeImage: (imageDTO: ImageDTO | null) => void;
ipAdapterId: string; // required for the dnd/upload interactions
droppableData: TypesafeDroppableData;
postUploadAction: PostUploadAction;
};
export const IPAdapterImagePreview = memo(
({ image, onChangeImage, ipAdapterId, droppableData, postUploadAction }: Props) => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const isConnected = useAppSelector((s) => s.system.isConnected);
const activeTabName = useAppSelector(activeTabNameSelector);
const optimalDimension = useAppSelector(selectOptimalDimension);
const shift = useShiftModifier();
const { currentData: controlImage, isError: isErrorControlImage } = useGetImageDTOQuery(image?.name ?? skipToken);
const handleResetControlImage = useCallback(() => {
onChangeImage(null);
}, [onChangeImage]);
const handleSetControlImageToDimensions = useCallback(() => {
if (!controlImage) {
return;
}
if (activeTabName === 'canvas') {
dispatch(
setBoundingBoxDimensions({ width: controlImage.width, height: controlImage.height }, optimalDimension)
);
} else {
const options = { updateAspectRatio: true, clamp: true };
if (shift) {
const { width, height } = controlImage;
dispatch(widthChanged({ width, ...options }));
dispatch(heightChanged({ height, ...options }));
} else {
const { width, height } = calculateNewSize(
controlImage.width / controlImage.height,
optimalDimension * optimalDimension
);
dispatch(widthChanged({ width, ...options }));
dispatch(heightChanged({ height, ...options }));
}
}
}, [controlImage, activeTabName, dispatch, optimalDimension, shift]);
const draggableData = useMemo<ImageDraggableData | undefined>(() => {
if (controlImage) {
return {
id: ipAdapterId,
payloadType: 'IMAGE_DTO',
payload: { imageDTO: controlImage },
};
}
}, [controlImage, ipAdapterId]);
useEffect(() => {
if (isConnected && isErrorControlImage) {
handleResetControlImage();
}
}, [handleResetControlImage, isConnected, isErrorControlImage]);
return (
<Flex position="relative" w="full" h={36} alignItems="center" justifyContent="center">
<IAIDndImage
draggableData={draggableData}
droppableData={droppableData}
imageDTO={controlImage}
postUploadAction={postUploadAction}
/>
<>
<IAIDndImageIcon
onClick={handleResetControlImage}
icon={controlImage ? <PiArrowCounterClockwiseBold size={16} /> : undefined}
tooltip={t('controlnet.resetControlImage')}
/>
<IAIDndImageIcon
onClick={handleSetControlImageToDimensions}
icon={controlImage ? <PiRulerBold size={16} /> : undefined}
tooltip={shift ? t('controlnet.setControlImageDimensionsForce') : t('controlnet.setControlImageDimensions')}
styleOverrides={setControlImageDimensionsStyleOverrides}
/>
</>
</Flex>
);
}
);
IPAdapterImagePreview.displayName = 'IPAdapterImagePreview';
const setControlImageDimensionsStyleOverrides: SystemStyleObject = { mt: 6 };

View File

@@ -1,44 +0,0 @@
import type { ComboboxOnChange } from '@invoke-ai/ui-library';
import { Combobox, FormControl, FormLabel } from '@invoke-ai/ui-library';
import { InformationalPopover } from 'common/components/InformationalPopover/InformationalPopover';
import type { IPMethodV2 } from 'features/controlLayers/util/controlAdapters';
import { isIPMethodV2 } from 'features/controlLayers/util/controlAdapters';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { assert } from 'tsafe';
type Props = {
method: IPMethodV2;
onChange: (method: IPMethodV2) => void;
};
export const IPAdapterMethod = memo(({ method, onChange }: Props) => {
const { t } = useTranslation();
const options: { label: string; value: IPMethodV2 }[] = useMemo(
() => [
{ label: t('controlnet.full'), value: 'full' },
{ label: `${t('controlnet.style')} (${t('common.beta')})`, value: 'style' },
{ label: `${t('controlnet.composition')} (${t('common.beta')})`, value: 'composition' },
],
[t]
);
const _onChange = useCallback<ComboboxOnChange>(
(v) => {
assert(isIPMethodV2(v?.value));
onChange(v.value);
},
[onChange]
);
const value = useMemo(() => options.find((o) => o.value === method), [options, method]);
return (
<FormControl>
<InformationalPopover feature="ipAdapterMethod">
<FormLabel>{t('controlnet.ipAdapterMethod')}</FormLabel>
</InformationalPopover>
<Combobox value={value} options={options} onChange={_onChange} />
</FormControl>
);
});
IPAdapterMethod.displayName = 'IPAdapterMethod';

View File

@@ -1,100 +0,0 @@
import type { ComboboxOnChange } from '@invoke-ai/ui-library';
import { Combobox, Flex, FormControl, Tooltip } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { useGroupedModelCombobox } from 'common/hooks/useGroupedModelCombobox';
import type { CLIPVisionModelV2 } from 'features/controlLayers/util/controlAdapters';
import { isCLIPVisionModelV2 } from 'features/controlLayers/util/controlAdapters';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { useIPAdapterModels } from 'services/api/hooks/modelsByType';
import type { AnyModelConfig, IPAdapterModelConfig } from 'services/api/types';
import { assert } from 'tsafe';
const CLIP_VISION_OPTIONS = [
{ label: 'ViT-H', value: 'ViT-H' },
{ label: 'ViT-G', value: 'ViT-G' },
];
type Props = {
modelKey: string | null;
onChangeModel: (modelConfig: IPAdapterModelConfig) => void;
clipVisionModel: CLIPVisionModelV2;
onChangeCLIPVisionModel: (clipVisionModel: CLIPVisionModelV2) => void;
};
export const IPAdapterModelSelect = memo(
({ modelKey, onChangeModel, clipVisionModel, onChangeCLIPVisionModel }: Props) => {
const { t } = useTranslation();
const currentBaseModel = useAppSelector((s) => s.generation.model?.base);
const [modelConfigs, { isLoading }] = useIPAdapterModels();
const selectedModel = useMemo(() => modelConfigs.find((m) => m.key === modelKey), [modelConfigs, modelKey]);
const _onChangeModel = useCallback(
(modelConfig: IPAdapterModelConfig | null) => {
if (!modelConfig) {
return;
}
onChangeModel(modelConfig);
},
[onChangeModel]
);
const _onChangeCLIPVisionModel = useCallback<ComboboxOnChange>(
(v) => {
assert(isCLIPVisionModelV2(v?.value));
onChangeCLIPVisionModel(v.value);
},
[onChangeCLIPVisionModel]
);
const getIsDisabled = useCallback(
(model: AnyModelConfig): boolean => {
const isCompatible = currentBaseModel === model.base;
const hasMainModel = Boolean(currentBaseModel);
return !hasMainModel || !isCompatible;
},
[currentBaseModel]
);
const { options, value, onChange, noOptionsMessage } = useGroupedModelCombobox({
modelConfigs,
onChange: _onChangeModel,
selectedModel,
getIsDisabled,
isLoading,
});
const clipVisionModelValue = useMemo(
() => CLIP_VISION_OPTIONS.find((o) => o.value === clipVisionModel),
[clipVisionModel]
);
return (
<Flex gap={4}>
<Tooltip label={selectedModel?.description}>
<FormControl isInvalid={!value || currentBaseModel !== selectedModel?.base} w="full">
<Combobox
options={options}
placeholder={t('controlnet.selectModel')}
value={value}
onChange={onChange}
noOptionsMessage={noOptionsMessage}
/>
</FormControl>
</Tooltip>
{selectedModel?.format === 'checkpoint' && (
<FormControl isInvalid={!value || currentBaseModel !== selectedModel?.base} width="max-content" minWidth={28}>
<Combobox
options={CLIP_VISION_OPTIONS}
placeholder={t('controlnet.selectCLIPVisionModel')}
value={clipVisionModelValue}
onChange={_onChangeCLIPVisionModel}
/>
</FormControl>
)}
</Flex>
);
}
);
IPAdapterModelSelect.displayName = 'IPAdapterModelSelect';

View File

@@ -1,67 +0,0 @@
import { CompositeNumberInput, CompositeSlider, FormControl, FormLabel } from '@invoke-ai/ui-library';
import type { ProcessorComponentProps } from 'features/controlLayers/components/ControlAndIPAdapter/processors/types';
import { CA_PROCESSOR_DATA, type CannyProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import ProcessorWrapper from './ProcessorWrapper';
type Props = ProcessorComponentProps<CannyProcessorConfig>;
const DEFAULTS = CA_PROCESSOR_DATA['canny_image_processor'].buildDefaults();
export const CannyProcessor = memo(({ onChange, config }: Props) => {
const { t } = useTranslation();
const handleLowThresholdChanged = useCallback(
(v: number) => {
onChange({ ...config, low_threshold: v });
},
[onChange, config]
);
const handleHighThresholdChanged = useCallback(
(v: number) => {
onChange({ ...config, high_threshold: v });
},
[onChange, config]
);
return (
<ProcessorWrapper>
<FormControl>
<FormLabel m={0}>{t('controlnet.lowThreshold')}</FormLabel>
<CompositeSlider
value={config.low_threshold}
onChange={handleLowThresholdChanged}
defaultValue={DEFAULTS.low_threshold}
min={0}
max={255}
/>
<CompositeNumberInput
value={config.low_threshold}
onChange={handleLowThresholdChanged}
defaultValue={DEFAULTS.low_threshold}
min={0}
max={255}
/>
</FormControl>
<FormControl>
<FormLabel m={0}>{t('controlnet.highThreshold')}</FormLabel>
<CompositeSlider
value={config.high_threshold}
onChange={handleHighThresholdChanged}
defaultValue={DEFAULTS.high_threshold}
min={0}
max={255}
/>
<CompositeNumberInput
value={config.high_threshold}
onChange={handleHighThresholdChanged}
defaultValue={DEFAULTS.high_threshold}
min={0}
max={255}
/>
</FormControl>
</ProcessorWrapper>
);
});
CannyProcessor.displayName = 'CannyProcessor';

View File

@@ -1,47 +0,0 @@
import { CompositeNumberInput, CompositeSlider, FormControl, FormLabel } from '@invoke-ai/ui-library';
import type { ProcessorComponentProps } from 'features/controlLayers/components/ControlAndIPAdapter/processors/types';
import { CA_PROCESSOR_DATA, type ColorMapProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import ProcessorWrapper from './ProcessorWrapper';
type Props = ProcessorComponentProps<ColorMapProcessorConfig>;
const DEFAULTS = CA_PROCESSOR_DATA['color_map_image_processor'].buildDefaults();
export const ColorMapProcessor = memo(({ onChange, config }: Props) => {
const { t } = useTranslation();
const handleColorMapTileSizeChanged = useCallback(
(v: number) => {
onChange({ ...config, color_map_tile_size: v });
},
[config, onChange]
);
return (
<ProcessorWrapper>
<FormControl>
<FormLabel m={0}>{t('controlnet.colorMapTileSize')}</FormLabel>
<CompositeSlider
value={config.color_map_tile_size}
defaultValue={DEFAULTS.color_map_tile_size}
onChange={handleColorMapTileSizeChanged}
min={1}
max={256}
step={1}
marks
/>
<CompositeNumberInput
value={config.color_map_tile_size}
defaultValue={DEFAULTS.color_map_tile_size}
onChange={handleColorMapTileSizeChanged}
min={1}
max={4096}
step={1}
/>
</FormControl>
</ProcessorWrapper>
);
});
ColorMapProcessor.displayName = 'ColorMapProcessor';

View File

@@ -1,79 +0,0 @@
import { CompositeNumberInput, CompositeSlider, FormControl, FormLabel } from '@invoke-ai/ui-library';
import type { ProcessorComponentProps } from 'features/controlLayers/components/ControlAndIPAdapter/processors/types';
import type { ContentShuffleProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import { CA_PROCESSOR_DATA } from 'features/controlLayers/util/controlAdapters';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import ProcessorWrapper from './ProcessorWrapper';
type Props = ProcessorComponentProps<ContentShuffleProcessorConfig>;
const DEFAULTS = CA_PROCESSOR_DATA['content_shuffle_image_processor'].buildDefaults();
export const ContentShuffleProcessor = memo(({ onChange, config }: Props) => {
const { t } = useTranslation();
const handleWChanged = useCallback(
(v: number) => {
onChange({ ...config, w: v });
},
[config, onChange]
);
const handleHChanged = useCallback(
(v: number) => {
onChange({ ...config, h: v });
},
[config, onChange]
);
const handleFChanged = useCallback(
(v: number) => {
onChange({ ...config, f: v });
},
[config, onChange]
);
return (
<ProcessorWrapper>
<FormControl>
<FormLabel m={0}>{t('controlnet.w')}</FormLabel>
<CompositeSlider
value={config.w}
defaultValue={DEFAULTS.w}
onChange={handleWChanged}
min={0}
max={4096}
marks
/>
<CompositeNumberInput value={config.w} defaultValue={DEFAULTS.w} onChange={handleWChanged} min={0} max={4096} />
</FormControl>
<FormControl>
<FormLabel m={0}>{t('controlnet.h')}</FormLabel>
<CompositeSlider
value={config.h}
defaultValue={DEFAULTS.h}
onChange={handleHChanged}
min={0}
max={4096}
marks
/>
<CompositeNumberInput value={config.h} defaultValue={DEFAULTS.h} onChange={handleHChanged} min={0} max={4096} />
</FormControl>
<FormControl>
<FormLabel m={0}>{t('controlnet.f')}</FormLabel>
<CompositeSlider
value={config.f}
defaultValue={DEFAULTS.f}
onChange={handleFChanged}
min={0}
max={4096}
marks
/>
<CompositeNumberInput value={config.f} defaultValue={DEFAULTS.f} onChange={handleFChanged} min={0} max={4096} />
</FormControl>
</ProcessorWrapper>
);
});
ContentShuffleProcessor.displayName = 'ContentShuffleProcessor';

View File

@@ -1,62 +0,0 @@
import { Flex, FormControl, FormLabel, Switch } from '@invoke-ai/ui-library';
import type { ProcessorComponentProps } from 'features/controlLayers/components/ControlAndIPAdapter/processors/types';
import type { DWOpenposeProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import { CA_PROCESSOR_DATA } from 'features/controlLayers/util/controlAdapters';
import type { ChangeEvent } from 'react';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import ProcessorWrapper from './ProcessorWrapper';
type Props = ProcessorComponentProps<DWOpenposeProcessorConfig>;
const DEFAULTS = CA_PROCESSOR_DATA['dw_openpose_image_processor'].buildDefaults();
export const DWOpenposeProcessor = memo(({ onChange, config }: Props) => {
const { t } = useTranslation();
const handleDrawBodyChanged = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
onChange({ ...config, draw_body: e.target.checked });
},
[config, onChange]
);
const handleDrawFaceChanged = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
onChange({ ...config, draw_face: e.target.checked });
},
[config, onChange]
);
const handleDrawHandsChanged = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
onChange({ ...config, draw_hands: e.target.checked });
},
[config, onChange]
);
return (
<ProcessorWrapper>
<Flex sx={{ flexDir: 'row', gap: 6 }}>
<FormControl w="max-content">
<FormLabel m={0}>{t('controlnet.body')}</FormLabel>
<Switch defaultChecked={DEFAULTS.draw_body} isChecked={config.draw_body} onChange={handleDrawBodyChanged} />
</FormControl>
<FormControl w="max-content">
<FormLabel m={0}>{t('controlnet.face')}</FormLabel>
<Switch defaultChecked={DEFAULTS.draw_face} isChecked={config.draw_face} onChange={handleDrawFaceChanged} />
</FormControl>
<FormControl w="max-content">
<FormLabel m={0}>{t('controlnet.hands')}</FormLabel>
<Switch
defaultChecked={DEFAULTS.draw_hands}
isChecked={config.draw_hands}
onChange={handleDrawHandsChanged}
/>
</FormControl>
</Flex>
</ProcessorWrapper>
);
});
DWOpenposeProcessor.displayName = 'DWOpenposeProcessor';

View File

@@ -1,52 +0,0 @@
import type { ComboboxOnChange } from '@invoke-ai/ui-library';
import { Combobox, FormControl, FormLabel } from '@invoke-ai/ui-library';
import type { ProcessorComponentProps } from 'features/controlLayers/components/ControlAndIPAdapter/processors/types';
import type { DepthAnythingModelSize, DepthAnythingProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import { CA_PROCESSOR_DATA, isDepthAnythingModelSize } from 'features/controlLayers/util/controlAdapters';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import ProcessorWrapper from './ProcessorWrapper';
type Props = ProcessorComponentProps<DepthAnythingProcessorConfig>;
const DEFAULTS = CA_PROCESSOR_DATA['depth_anything_image_processor'].buildDefaults();
export const DepthAnythingProcessor = memo(({ onChange, config }: Props) => {
const { t } = useTranslation();
const handleModelSizeChange = useCallback<ComboboxOnChange>(
(v) => {
if (!isDepthAnythingModelSize(v?.value)) {
return;
}
onChange({ ...config, model_size: v.value });
},
[config, onChange]
);
const options: { label: string; value: DepthAnythingModelSize }[] = useMemo(
() => [
{ label: t('controlnet.small'), value: 'small' },
{ label: t('controlnet.base'), value: 'base' },
{ label: t('controlnet.large'), value: 'large' },
],
[t]
);
const value = useMemo(() => options.filter((o) => o.value === config.model_size)[0], [options, config.model_size]);
return (
<ProcessorWrapper>
<FormControl>
<FormLabel m={0}>{t('controlnet.modelSize')}</FormLabel>
<Combobox
value={value}
defaultInputValue={DEFAULTS.model_size}
options={options}
onChange={handleModelSizeChange}
/>
</FormControl>
</ProcessorWrapper>
);
});
DepthAnythingProcessor.displayName = 'DepthAnythingProcessor';

View File

@@ -1,32 +0,0 @@
import { FormControl, FormLabel, Switch } from '@invoke-ai/ui-library';
import type { ProcessorComponentProps } from 'features/controlLayers/components/ControlAndIPAdapter/processors/types';
import type { HedProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import type { ChangeEvent } from 'react';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import ProcessorWrapper from './ProcessorWrapper';
type Props = ProcessorComponentProps<HedProcessorConfig>;
export const HedProcessor = memo(({ onChange, config }: Props) => {
const { t } = useTranslation();
const handleScribbleChanged = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
onChange({ ...config, scribble: e.target.checked });
},
[config, onChange]
);
return (
<ProcessorWrapper>
<FormControl>
<FormLabel m={0}>{t('controlnet.scribble')}</FormLabel>
<Switch isChecked={config.scribble} onChange={handleScribbleChanged} />
</FormControl>
</ProcessorWrapper>
);
});
HedProcessor.displayName = 'HedProcessor';

View File

@@ -1,32 +0,0 @@
import { FormControl, FormLabel, Switch } from '@invoke-ai/ui-library';
import type { ProcessorComponentProps } from 'features/controlLayers/components/ControlAndIPAdapter/processors/types';
import type { LineartProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import type { ChangeEvent } from 'react';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import ProcessorWrapper from './ProcessorWrapper';
type Props = ProcessorComponentProps<LineartProcessorConfig>;
export const LineartProcessor = memo(({ onChange, config }: Props) => {
const { t } = useTranslation();
const handleCoarseChanged = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
onChange({ ...config, coarse: e.target.checked });
},
[config, onChange]
);
return (
<ProcessorWrapper>
<FormControl>
<FormLabel m={0}>{t('controlnet.coarse')}</FormLabel>
<Switch isChecked={config.coarse} onChange={handleCoarseChanged} />
</FormControl>
</ProcessorWrapper>
);
});
LineartProcessor.displayName = 'LineartProcessor';

View File

@@ -1,73 +0,0 @@
import { CompositeNumberInput, CompositeSlider, FormControl, FormLabel } from '@invoke-ai/ui-library';
import type { ProcessorComponentProps } from 'features/controlLayers/components/ControlAndIPAdapter/processors/types';
import { CA_PROCESSOR_DATA, type MediapipeFaceProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import ProcessorWrapper from './ProcessorWrapper';
type Props = ProcessorComponentProps<MediapipeFaceProcessorConfig>;
const DEFAULTS = CA_PROCESSOR_DATA['mediapipe_face_processor'].buildDefaults();
export const MediapipeFaceProcessor = memo(({ onChange, config }: Props) => {
const { t } = useTranslation();
const handleMaxFacesChanged = useCallback(
(v: number) => {
onChange({ ...config, max_faces: v });
},
[config, onChange]
);
const handleMinConfidenceChanged = useCallback(
(v: number) => {
onChange({ ...config, min_confidence: v });
},
[config, onChange]
);
return (
<ProcessorWrapper>
<FormControl>
<FormLabel m={0}>{t('controlnet.maxFaces')}</FormLabel>
<CompositeSlider
value={config.max_faces}
onChange={handleMaxFacesChanged}
defaultValue={DEFAULTS.max_faces}
min={1}
max={20}
marks
/>
<CompositeNumberInput
value={config.max_faces}
onChange={handleMaxFacesChanged}
defaultValue={DEFAULTS.max_faces}
min={1}
max={20}
/>
</FormControl>
<FormControl>
<FormLabel m={0}>{t('controlnet.minConfidence')}</FormLabel>
<CompositeSlider
value={config.min_confidence}
onChange={handleMinConfidenceChanged}
defaultValue={DEFAULTS.min_confidence}
min={0}
max={1}
step={0.01}
marks
/>
<CompositeNumberInput
value={config.min_confidence}
onChange={handleMinConfidenceChanged}
defaultValue={DEFAULTS.min_confidence}
min={0}
max={1}
step={0.01}
/>
</FormControl>
</ProcessorWrapper>
);
});
MediapipeFaceProcessor.displayName = 'MediapipeFaceProcessor';

View File

@@ -1,76 +0,0 @@
import { CompositeNumberInput, CompositeSlider, FormControl, FormLabel } from '@invoke-ai/ui-library';
import type { ProcessorComponentProps } from 'features/controlLayers/components/ControlAndIPAdapter/processors/types';
import type { MidasDepthProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import { CA_PROCESSOR_DATA } from 'features/controlLayers/util/controlAdapters';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import ProcessorWrapper from './ProcessorWrapper';
type Props = ProcessorComponentProps<MidasDepthProcessorConfig>;
const DEFAULTS = CA_PROCESSOR_DATA['midas_depth_image_processor'].buildDefaults();
export const MidasDepthProcessor = memo(({ onChange, config }: Props) => {
const { t } = useTranslation();
const handleAMultChanged = useCallback(
(v: number) => {
onChange({ ...config, a_mult: v });
},
[config, onChange]
);
const handleBgThChanged = useCallback(
(v: number) => {
onChange({ ...config, bg_th: v });
},
[config, onChange]
);
return (
<ProcessorWrapper>
<FormControl>
<FormLabel m={0}>{t('controlnet.amult')}</FormLabel>
<CompositeSlider
value={config.a_mult}
onChange={handleAMultChanged}
defaultValue={DEFAULTS.a_mult}
min={0}
max={20}
step={0.01}
marks
/>
<CompositeNumberInput
value={config.a_mult}
onChange={handleAMultChanged}
defaultValue={DEFAULTS.a_mult}
min={0}
max={20}
step={0.01}
/>
</FormControl>
<FormControl>
<FormLabel m={0}>{t('controlnet.bgth')}</FormLabel>
<CompositeSlider
value={config.bg_th}
onChange={handleBgThChanged}
defaultValue={DEFAULTS.bg_th}
min={0}
max={20}
step={0.01}
marks
/>
<CompositeNumberInput
value={config.bg_th}
onChange={handleBgThChanged}
defaultValue={DEFAULTS.bg_th}
min={0}
max={20}
step={0.01}
/>
</FormControl>
</ProcessorWrapper>
);
});
MidasDepthProcessor.displayName = 'MidasDepthProcessor';

View File

@@ -1,76 +0,0 @@
import { CompositeNumberInput, CompositeSlider, FormControl, FormLabel } from '@invoke-ai/ui-library';
import type { ProcessorComponentProps } from 'features/controlLayers/components/ControlAndIPAdapter/processors/types';
import type { MlsdProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import { CA_PROCESSOR_DATA } from 'features/controlLayers/util/controlAdapters';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import ProcessorWrapper from './ProcessorWrapper';
type Props = ProcessorComponentProps<MlsdProcessorConfig>;
const DEFAULTS = CA_PROCESSOR_DATA['mlsd_image_processor'].buildDefaults();
export const MlsdImageProcessor = memo(({ onChange, config }: Props) => {
const { t } = useTranslation();
const handleThrDChanged = useCallback(
(v: number) => {
onChange({ ...config, thr_d: v });
},
[config, onChange]
);
const handleThrVChanged = useCallback(
(v: number) => {
onChange({ ...config, thr_v: v });
},
[config, onChange]
);
return (
<ProcessorWrapper>
<FormControl>
<FormLabel m={0}>{t('controlnet.w')}</FormLabel>
<CompositeSlider
value={config.thr_d}
onChange={handleThrDChanged}
defaultValue={DEFAULTS.thr_d}
min={0}
max={1}
step={0.01}
marks
/>
<CompositeNumberInput
value={config.thr_d}
onChange={handleThrDChanged}
defaultValue={DEFAULTS.thr_d}
min={0}
max={1}
step={0.01}
/>
</FormControl>
<FormControl>
<FormLabel m={0}>{t('controlnet.h')}</FormLabel>
<CompositeSlider
value={config.thr_v}
onChange={handleThrVChanged}
defaultValue={DEFAULTS.thr_v}
min={0}
max={1}
step={0.01}
marks
/>
<CompositeNumberInput
value={config.thr_v}
onChange={handleThrVChanged}
defaultValue={DEFAULTS.thr_v}
min={0}
max={1}
step={0.01}
/>
</FormControl>
</ProcessorWrapper>
);
});
MlsdImageProcessor.displayName = 'MlsdImageProcessor';

View File

@@ -1,43 +0,0 @@
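// Settings panel for the PIDI edge-detection preprocessor. Exposes the boolean
// `scribble` and `safe` options as switches.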
import { FormControl, FormLabel, Switch } from '@invoke-ai/ui-library';
import type { ProcessorComponentProps } from 'features/controlLayers/components/ControlAndIPAdapter/processors/types';
import type { PidiProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import type { ChangeEvent } from 'react';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import ProcessorWrapper from './ProcessorWrapper';
type Props = ProcessorComponentProps<PidiProcessorConfig>;
export const PidiProcessor = memo(({ onChange, config }: Props) => {
const { t } = useTranslation();
const handleScribbleChanged = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
onChange({ ...config, scribble: e.target.checked });
},
[config, onChange]
);
const handleSafeChanged = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
onChange({ ...config, safe: e.target.checked });
},
[config, onChange]
);
return (
<ProcessorWrapper>
<FormControl>
<FormLabel m={0}>{t('controlnet.scribble')}</FormLabel>
<Switch isChecked={config.scribble} onChange={handleScribbleChanged} />
</FormControl>
<FormControl>
<FormLabel m={0}>{t('controlnet.safe')}</FormLabel>
<Switch isChecked={config.safe} onChange={handleSafeChanged} />
</FormControl>
</ProcessorWrapper>
);
});
PidiProcessor.displayName = 'PidiProcessor';

View File

@@ -1,15 +0,0 @@
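// Shared layout wrapper that stacks a processor's controls in a vertical flex column.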
import { Flex } from '@invoke-ai/ui-library';
import type { PropsWithChildren } from 'react';
import { memo } from 'react';
type Props = PropsWithChildren;
const ProcessorWrapper = (props: Props) => {
return (
<Flex flexDir="column" gap={3}>
{props.children}
</Flex>
);
};
export default memo(ProcessorWrapper);

View File

@@ -1,6 +0,0 @@
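// Common props contract for processor settings components: the current config plus
// an onChange callback that receives the complete updated config object.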
import type { ProcessorConfig } from 'features/controlLayers/util/controlAdapters';
export type ProcessorComponentProps<T extends ProcessorConfig> = {
onChange: (config: T) => void;
config: T;
};

View File

@@ -1,24 +0,0 @@
import { Flex } from '@invoke-ai/ui-library';
import type { Meta, StoryObj } from '@storybook/react';
import { ControlLayersEditor } from 'features/controlLayers/components/ControlLayersEditor';
const meta: Meta<typeof ControlLayersEditor> = {
title: 'Feature/ControlLayers',
tags: ['autodocs'],
component: ControlLayersEditor,
};
export default meta;
type Story = StoryObj<typeof ControlLayersEditor>;
const Component = () => {
return (
<Flex w={1500} h={1500}>
<ControlLayersEditor />
</Flex>
);
};
export const Default: Story = {
render: Component,
};

View File

@@ -1,69 +0,0 @@
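// Side-panel layer list. Renderable and IP adapter layers are partitioned, then the
// combined list is reversed so the front-most renderable layer appears at the top
// and global IP adapter layers sit at the bottom.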
/* eslint-disable i18next/no-literal-string */
import { Flex } from '@invoke-ai/ui-library';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import ScrollableContent from 'common/components/OverlayScrollbars/ScrollableContent';
import { AddLayerButton } from 'features/controlLayers/components/AddLayerButton';
import { CALayer } from 'features/controlLayers/components/CALayer/CALayer';
import { DeleteAllLayersButton } from 'features/controlLayers/components/DeleteAllLayersButton';
import { IILayer } from 'features/controlLayers/components/IILayer/IILayer';
import { IPALayer } from 'features/controlLayers/components/IPALayer/IPALayer';
import { RGLayer } from 'features/controlLayers/components/RGLayer/RGLayer';
import { isRenderableLayer, selectControlLayersSlice } from 'features/controlLayers/store/controlLayersSlice';
import type { Layer } from 'features/controlLayers/store/types';
import { partition } from 'lodash-es';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
const selectLayerIdTypePairs = createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
const [renderableLayers, ipAdapterLayers] = partition(controlLayers.present.layers, isRenderableLayer);
return [...ipAdapterLayers, ...renderableLayers].map((l) => ({ id: l.id, type: l.type })).reverse();
});
export const ControlLayersPanelContent = memo(() => {
const { t } = useTranslation();
const layerIdTypePairs = useAppSelector(selectLayerIdTypePairs);
return (
<Flex flexDir="column" gap={2} w="full" h="full">
<Flex justifyContent="space-around">
<AddLayerButton />
<DeleteAllLayersButton />
</Flex>
{layerIdTypePairs.length > 0 && (
<ScrollableContent>
<Flex flexDir="column" gap={2} data-testid="control-layers-layer-list">
{layerIdTypePairs.map(({ id, type }) => (
<LayerWrapper key={id} id={id} type={type} />
))}
</Flex>
</ScrollableContent>
)}
{layerIdTypePairs.length === 0 && <IAINoContentFallback icon={null} label={t('controlLayers.noLayersAdded')} />}
</Flex>
);
});
ControlLayersPanelContent.displayName = 'ControlLayersPanelContent';
type LayerWrapperProps = {
id: string;
type: Layer['type'];
};
const LayerWrapper = memo(({ id, type }: LayerWrapperProps) => {
if (type === 'regional_guidance_layer') {
return <RGLayer key={id} layerId={id} />;
}
if (type === 'control_adapter_layer') {
return <CALayer key={id} layerId={id} />;
}
if (type === 'ip_adapter_layer') {
return <IPALayer key={id} layerId={id} />;
}
if (type === 'initial_image_layer') {
return <IILayer key={id} layerId={id} />;
}
});
LayerWrapper.displayName = 'LayerWrapper';

View File

@@ -1,51 +0,0 @@
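// Gear-icon popover with editor settings: the global mask layer opacity control and
// a checkbox to invert the scroll direction used for brush resizing.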
import {
Checkbox,
Flex,
FormControl,
FormLabel,
IconButton,
Popover,
PopoverBody,
PopoverContent,
PopoverTrigger,
} from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { setShouldInvertBrushSizeScrollDirection } from 'features/canvas/store/canvasSlice';
import { GlobalMaskLayerOpacity } from 'features/controlLayers/components/GlobalMaskLayerOpacity';
import type { ChangeEvent } from 'react';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { RiSettings4Fill } from 'react-icons/ri';
const ControlLayersSettingsPopover = () => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const shouldInvertBrushSizeScrollDirection = useAppSelector((s) => s.canvas.shouldInvertBrushSizeScrollDirection);
const handleChangeShouldInvertBrushSizeScrollDirection = useCallback(
(e: ChangeEvent<HTMLInputElement>) => dispatch(setShouldInvertBrushSizeScrollDirection(e.target.checked)),
[dispatch]
);
return (
<Popover isLazy>
<PopoverTrigger>
<IconButton aria-label={t('common.settingsLabel')} icon={<RiSettings4Fill />} />
</PopoverTrigger>
<PopoverContent>
<PopoverBody>
<Flex direction="column" gap={2}>
<GlobalMaskLayerOpacity />
<FormControl w="full">
<FormLabel flexGrow={1}>{t('unifiedCanvas.invertBrushSizeScrollDirection')}</FormLabel>
<Checkbox
isChecked={shouldInvertBrushSizeScrollDirection}
onChange={handleChangeShouldInvertBrushSizeScrollDirection}
/>
</FormControl>
</Flex>
</PopoverBody>
</PopoverContent>
</Popover>
);
};
export default memo(ControlLayersSettingsPopover);

View File

@@ -1,34 +0,0 @@
/* eslint-disable i18next/no-literal-string */
import { Flex } from '@invoke-ai/ui-library';
import { BrushSize } from 'features/controlLayers/components/BrushSize';
import ControlLayersSettingsPopover from 'features/controlLayers/components/ControlLayersSettingsPopover';
import { ToolChooser } from 'features/controlLayers/components/ToolChooser';
import { UndoRedoButtonGroup } from 'features/controlLayers/components/UndoRedoButtonGroup';
import { ToggleProgressButton } from 'features/gallery/components/ImageViewer/ToggleProgressButton';
import { ViewerToggleMenu } from 'features/gallery/components/ImageViewer/ViewerToggleMenu';
import { memo } from 'react';
export const ControlLayersToolbar = memo(() => {
return (
<Flex w="full" gap={2}>
<Flex flex={1} justifyContent="center">
<Flex gap={2} marginInlineEnd="auto">
<ToggleProgressButton />
</Flex>
</Flex>
<Flex flex={1} gap={2} justifyContent="center">
<BrushSize />
<ToolChooser />
<UndoRedoButtonGroup />
<ControlLayersSettingsPopover />
</Flex>
<Flex flex={1} justifyContent="center">
<Flex gap={2} marginInlineStart="auto">
<ViewerToggleMenu />
</Flex>
</Flex>
</Flex>
);
});
ControlLayersToolbar.displayName = 'ControlLayersToolbar';

View File

@@ -1,54 +0,0 @@
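// Slider and number input for the opacity applied to all regional guidance masks.
// State stores a 0-1 value; the UI presents it as 0-100%.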
import { CompositeNumberInput, CompositeSlider, Flex, FormControl, FormLabel } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import {
globalMaskLayerOpacityChanged,
initialControlLayersState,
} from 'features/controlLayers/store/controlLayersSlice';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
const marks = [0, 25, 50, 75, 100];
const formatPct = (v: number | string) => `${v} %`;
export const GlobalMaskLayerOpacity = memo(() => {
const dispatch = useAppDispatch();
const { t } = useTranslation();
const globalMaskLayerOpacity = useAppSelector((s) =>
Math.round(s.controlLayers.present.globalMaskLayerOpacity * 100)
);
const onChange = useCallback(
(v: number) => {
dispatch(globalMaskLayerOpacityChanged(v / 100));
},
[dispatch]
);
return (
<FormControl orientation="vertical">
<FormLabel m={0}>{t('controlLayers.globalMaskOpacity')}</FormLabel>
<Flex gap={4}>
<CompositeSlider
min={0}
max={100}
step={1}
value={globalMaskLayerOpacity}
defaultValue={initialControlLayersState.globalMaskLayerOpacity * 100}
onChange={onChange}
marks={marks}
minW={48}
/>
<CompositeNumberInput
min={0}
max={100}
step={1}
value={globalMaskLayerOpacity}
defaultValue={initialControlLayersState.globalMaskLayerOpacity * 100}
onChange={onChange}
w={28}
format={formatPct}
/>
</Flex>
</FormControl>
);
});
GlobalMaskLayerOpacity.displayName = 'GlobalMaskLayerOpacity';

View File

@@ -1,91 +0,0 @@
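// Collapsible panel for the initial image layer: a header row with visibility,
// opacity, menu and delete controls, and (when expanded) the denoising strength
// slider and an image drop/upload preview.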
import { Flex, Spacer, useDisclosure } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IILayerOpacity from 'features/controlLayers/components/IILayer/IILayerOpacity';
import { InitialImagePreview } from 'features/controlLayers/components/IILayer/InitialImagePreview';
import { LayerDeleteButton } from 'features/controlLayers/components/LayerCommon/LayerDeleteButton';
import { LayerMenu } from 'features/controlLayers/components/LayerCommon/LayerMenu';
import { LayerTitle } from 'features/controlLayers/components/LayerCommon/LayerTitle';
import { LayerVisibilityToggle } from 'features/controlLayers/components/LayerCommon/LayerVisibilityToggle';
import { LayerWrapper } from 'features/controlLayers/components/LayerCommon/LayerWrapper';
import {
iiLayerDenoisingStrengthChanged,
iiLayerImageChanged,
layerSelected,
selectIILayerOrThrow,
} from 'features/controlLayers/store/controlLayersSlice';
import type { IILayerImageDropData } from 'features/dnd/types';
import ImageToImageStrength from 'features/parameters/components/ImageToImage/ImageToImageStrength';
import { memo, useCallback, useMemo } from 'react';
import type { IILayerImagePostUploadAction, ImageDTO } from 'services/api/types';
type Props = {
layerId: string;
};
export const IILayer = memo(({ layerId }: Props) => {
const dispatch = useAppDispatch();
const layer = useAppSelector((s) => selectIILayerOrThrow(s.controlLayers.present, layerId));
const onClick = useCallback(() => {
dispatch(layerSelected(layerId));
}, [dispatch, layerId]);
const { isOpen, onToggle } = useDisclosure({ defaultIsOpen: true });
const onChangeImage = useCallback(
(imageDTO: ImageDTO | null) => {
dispatch(iiLayerImageChanged({ layerId, imageDTO }));
},
[dispatch, layerId]
);
const onChangeDenoisingStrength = useCallback(
(denoisingStrength: number) => {
dispatch(iiLayerDenoisingStrengthChanged({ layerId, denoisingStrength }));
},
[dispatch, layerId]
);
const droppableData = useMemo<IILayerImageDropData>(
() => ({
actionType: 'SET_II_LAYER_IMAGE',
context: {
layerId,
},
id: layerId,
}),
[layerId]
);
const postUploadAction = useMemo<IILayerImagePostUploadAction>(
() => ({
layerId,
type: 'SET_II_LAYER_IMAGE',
}),
[layerId]
);
return (
<LayerWrapper onClick={onClick} borderColor={layer.isSelected ? 'base.400' : 'base.800'}>
<Flex gap={3} alignItems="center" p={3} cursor="pointer" onDoubleClick={onToggle}>
<LayerVisibilityToggle layerId={layerId} />
<LayerTitle type="initial_image_layer" />
<Spacer />
<IILayerOpacity layerId={layerId} />
<LayerMenu layerId={layerId} />
<LayerDeleteButton layerId={layerId} />
</Flex>
{isOpen && (
<Flex flexDir="column" gap={3} px={3} pb={3}>
<ImageToImageStrength value={layer.denoisingStrength} onChange={onChangeDenoisingStrength} />
<InitialImagePreview
image={layer.image}
onChangeImage={onChangeImage}
droppableData={droppableData}
postUploadAction={postUploadAction}
/>
</Flex>
)}
</LayerWrapper>
);
});
IILayer.displayName = 'IILayer';

View File

@@ -1,98 +0,0 @@
import {
CompositeNumberInput,
CompositeSlider,
Flex,
FormControl,
FormLabel,
IconButton,
Popover,
PopoverArrow,
PopoverBody,
PopoverContent,
PopoverTrigger,
} from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { stopPropagation } from 'common/util/stopPropagation';
import {
iiLayerOpacityChanged,
isInitialImageLayer,
selectControlLayersSlice,
} from 'features/controlLayers/store/controlLayersSlice';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiDropHalfFill } from 'react-icons/pi';
import { assert } from 'tsafe';
type Props = {
layerId: string;
};
const marks = [0, 25, 50, 75, 100];
const formatPct = (v: number | string) => `${v} %`;
const IILayerOpacity = ({ layerId }: Props) => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const selectOpacity = useMemo(
() =>
createSelector(selectControlLayersSlice, (controlLayers) => {
const layer = controlLayers.present.layers.filter(isInitialImageLayer).find((l) => l.id === layerId);
assert(layer, `Layer ${layerId} not found`);
return Math.round(layer.opacity * 100);
}),
[layerId]
);
const opacity = useAppSelector(selectOpacity);
const onChangeOpacity = useCallback(
(v: number) => {
dispatch(iiLayerOpacityChanged({ layerId, opacity: v / 100 }));
},
[dispatch, layerId]
);
return (
<Popover isLazy>
<PopoverTrigger>
<IconButton
aria-label={t('controlLayers.opacity')}
size="sm"
icon={<PiDropHalfFill size={16} />}
variant="ghost"
onDoubleClick={stopPropagation}
/>
</PopoverTrigger>
<PopoverContent onDoubleClick={stopPropagation}>
<PopoverArrow />
<PopoverBody>
<Flex direction="column" gap={2}>
<FormControl orientation="horizontal">
<FormLabel m={0}>{t('controlLayers.opacity')}</FormLabel>
<CompositeSlider
min={0}
max={100}
step={1}
value={opacity}
defaultValue={100}
onChange={onChangeOpacity}
marks={marks}
w={48}
/>
<CompositeNumberInput
min={0}
max={100}
step={1}
value={opacity}
defaultValue={100}
onChange={onChangeOpacity}
w={24}
format={formatPct}
/>
</FormControl>
</Flex>
</PopoverBody>
</PopoverContent>
</Popover>
);
};
export default memo(IILayerOpacity);

View File

@@ -1,109 +0,0 @@
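// Drag-and-drop image preview for the initial image layer, with icons to reset the
// image and to copy its dimensions into the generation settings: the canvas bounding
// box on the canvas tab, otherwise width/height scaled to the optimal pixel count
// (or the exact dimensions when shift is held).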
import type { SystemStyleObject } from '@invoke-ai/ui-library';
import { Flex, useShiftModifier } from '@invoke-ai/ui-library';
import { skipToken } from '@reduxjs/toolkit/query';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIDndImage from 'common/components/IAIDndImage';
import IAIDndImageIcon from 'common/components/IAIDndImageIcon';
import { setBoundingBoxDimensions } from 'features/canvas/store/canvasSlice';
import { heightChanged, widthChanged } from 'features/controlLayers/store/controlLayersSlice';
import type { ImageWithDims } from 'features/controlLayers/util/controlAdapters';
import type { ImageDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
import { selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { memo, useCallback, useEffect, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiArrowCounterClockwiseBold, PiRulerBold } from 'react-icons/pi';
import { useGetImageDTOQuery } from 'services/api/endpoints/images';
import type { ImageDTO, PostUploadAction } from 'services/api/types';
type Props = {
image: ImageWithDims | null;
onChangeImage: (imageDTO: ImageDTO | null) => void;
droppableData: TypesafeDroppableData;
postUploadAction: PostUploadAction;
};
export const InitialImagePreview = memo(({ image, onChangeImage, droppableData, postUploadAction }: Props) => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const isConnected = useAppSelector((s) => s.system.isConnected);
const activeTabName = useAppSelector(activeTabNameSelector);
const optimalDimension = useAppSelector(selectOptimalDimension);
const shift = useShiftModifier();
const { currentData: imageDTO, isError: isErrorControlImage } = useGetImageDTOQuery(image?.name ?? skipToken);
const onReset = useCallback(() => {
onChangeImage(null);
}, [onChangeImage]);
const onUseSize = useCallback(() => {
if (!imageDTO) {
return;
}
if (activeTabName === 'canvas') {
dispatch(setBoundingBoxDimensions({ width: imageDTO.width, height: imageDTO.height }, optimalDimension));
} else {
const options = { updateAspectRatio: true, clamp: true };
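// Shift forces the image's exact dimensions; otherwise preserve its aspect
// ratio while scaling to the model's optimal pixel count.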
if (shift) {
const { width, height } = imageDTO;
dispatch(widthChanged({ width, ...options }));
dispatch(heightChanged({ height, ...options }));
} else {
const { width, height } = calculateNewSize(
imageDTO.width / imageDTO.height,
optimalDimension * optimalDimension
);
dispatch(widthChanged({ width, ...options }));
dispatch(heightChanged({ height, ...options }));
}
}
}, [imageDTO, activeTabName, dispatch, optimalDimension, shift]);
const draggableData = useMemo<ImageDraggableData | undefined>(() => {
if (imageDTO) {
return {
id: 'initial_image_layer',
payloadType: 'IMAGE_DTO',
payload: { imageDTO: imageDTO },
};
}
}, [imageDTO]);
useEffect(() => {
if (isConnected && isErrorControlImage) {
onReset();
}
}, [onReset, isConnected, isErrorControlImage]);
return (
<Flex position="relative" w="full" h={36} alignItems="center" justifyContent="center">
<IAIDndImage
draggableData={draggableData}
droppableData={droppableData}
imageDTO={imageDTO}
postUploadAction={postUploadAction}
/>
<>
<IAIDndImageIcon
onClick={onReset}
icon={imageDTO ? <PiArrowCounterClockwiseBold size={16} /> : undefined}
tooltip={t('controlnet.resetControlImage')}
/>
<IAIDndImageIcon
onClick={onUseSize}
icon={imageDTO ? <PiRulerBold size={16} /> : undefined}
tooltip={shift ? t('controlnet.setControlImageDimensionsForce') : t('controlnet.setControlImageDimensions')}
styleOverrides={useSizeStyleOverrides}
/>
</>
</Flex>
);
});
InitialImagePreview.displayName = 'InitialImagePreview';
const useSizeStyleOverrides: SystemStyleObject = { mt: 6 };

View File

@@ -1,39 +0,0 @@
import { Flex, Spacer, useDisclosure } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { IPALayerIPAdapterWrapper } from 'features/controlLayers/components/IPALayer/IPALayerIPAdapterWrapper';
import { LayerDeleteButton } from 'features/controlLayers/components/LayerCommon/LayerDeleteButton';
import { LayerTitle } from 'features/controlLayers/components/LayerCommon/LayerTitle';
import { LayerVisibilityToggle } from 'features/controlLayers/components/LayerCommon/LayerVisibilityToggle';
import { LayerWrapper } from 'features/controlLayers/components/LayerCommon/LayerWrapper';
import { layerSelected, selectIPALayerOrThrow } from 'features/controlLayers/store/controlLayersSlice';
import { memo, useCallback } from 'react';
type Props = {
layerId: string;
};
export const IPALayer = memo(({ layerId }: Props) => {
const dispatch = useAppDispatch();
const isSelected = useAppSelector((s) => selectIPALayerOrThrow(s.controlLayers.present, layerId).isSelected);
const { isOpen, onToggle } = useDisclosure({ defaultIsOpen: true });
const onClick = useCallback(() => {
dispatch(layerSelected(layerId));
}, [dispatch, layerId]);
return (
<LayerWrapper onClick={onClick} borderColor={isSelected ? 'base.400' : 'base.800'}>
<Flex gap={3} alignItems="center" p={3} cursor="pointer" onDoubleClick={onToggle}>
<LayerVisibilityToggle layerId={layerId} />
<LayerTitle type="ip_adapter_layer" />
<Spacer />
<LayerDeleteButton layerId={layerId} />
</Flex>
{isOpen && (
<Flex flexDir="column" gap={3} px={3} pb={3}>
<IPALayerIPAdapterWrapper layerId={layerId} />
</Flex>
)}
</LayerWrapper>
);
});
IPALayer.displayName = 'IPALayer';

View File

@@ -1,106 +0,0 @@
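// Wires the shared IPAdapter controls to an IP adapter layer's state, mapping each
// control change to its corresponding redux action and drop/upload target.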
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { IPAdapter } from 'features/controlLayers/components/ControlAndIPAdapter/IPAdapter';
import {
caOrIPALayerBeginEndStepPctChanged,
caOrIPALayerWeightChanged,
ipaLayerCLIPVisionModelChanged,
ipaLayerImageChanged,
ipaLayerMethodChanged,
ipaLayerModelChanged,
selectIPALayerOrThrow,
} from 'features/controlLayers/store/controlLayersSlice';
import type { CLIPVisionModelV2, IPMethodV2 } from 'features/controlLayers/util/controlAdapters';
import type { IPALayerImageDropData } from 'features/dnd/types';
import { memo, useCallback, useMemo } from 'react';
import type { ImageDTO, IPAdapterModelConfig, IPALayerImagePostUploadAction } from 'services/api/types';
type Props = {
layerId: string;
};
export const IPALayerIPAdapterWrapper = memo(({ layerId }: Props) => {
const dispatch = useAppDispatch();
const ipAdapter = useAppSelector((s) => selectIPALayerOrThrow(s.controlLayers.present, layerId).ipAdapter);
const onChangeBeginEndStepPct = useCallback(
(beginEndStepPct: [number, number]) => {
dispatch(
caOrIPALayerBeginEndStepPctChanged({
layerId,
beginEndStepPct,
})
);
},
[dispatch, layerId]
);
const onChangeWeight = useCallback(
(weight: number) => {
dispatch(caOrIPALayerWeightChanged({ layerId, weight }));
},
[dispatch, layerId]
);
const onChangeIPMethod = useCallback(
(method: IPMethodV2) => {
dispatch(ipaLayerMethodChanged({ layerId, method }));
},
[dispatch, layerId]
);
const onChangeModel = useCallback(
(modelConfig: IPAdapterModelConfig) => {
dispatch(ipaLayerModelChanged({ layerId, modelConfig }));
},
[dispatch, layerId]
);
const onChangeCLIPVisionModel = useCallback(
(clipVisionModel: CLIPVisionModelV2) => {
dispatch(ipaLayerCLIPVisionModelChanged({ layerId, clipVisionModel }));
},
[dispatch, layerId]
);
const onChangeImage = useCallback(
(imageDTO: ImageDTO | null) => {
dispatch(ipaLayerImageChanged({ layerId, imageDTO }));
},
[dispatch, layerId]
);
const droppableData = useMemo<IPALayerImageDropData>(
() => ({
actionType: 'SET_IPA_LAYER_IMAGE',
context: {
layerId,
},
id: layerId,
}),
[layerId]
);
const postUploadAction = useMemo<IPALayerImagePostUploadAction>(
() => ({
type: 'SET_IPA_LAYER_IMAGE',
layerId,
}),
[layerId]
);
return (
<IPAdapter
ipAdapter={ipAdapter}
onChangeBeginEndStepPct={onChangeBeginEndStepPct}
onChangeWeight={onChangeWeight}
onChangeIPMethod={onChangeIPMethod}
onChangeModel={onChangeModel}
onChangeCLIPVisionModel={onChangeCLIPVisionModel}
onChangeImage={onChangeImage}
droppableData={droppableData}
postUploadAction={postUploadAction}
/>
);
});
IPALayerIPAdapterWrapper.displayName = 'IPALayerIPAdapterWrapper';

View File

@@ -1,61 +0,0 @@
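// Per-layer overflow menu. Prompt/IP adapter and reset actions appear only for
// regional guidance layers, arrange actions only for renderable layers; delete is
// always available.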
import { IconButton, Menu, MenuButton, MenuDivider, MenuItem, MenuList } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { stopPropagation } from 'common/util/stopPropagation';
import { LayerMenuArrangeActions } from 'features/controlLayers/components/LayerCommon/LayerMenuArrangeActions';
import { LayerMenuRGActions } from 'features/controlLayers/components/LayerCommon/LayerMenuRGActions';
import { useLayerType } from 'features/controlLayers/hooks/layerStateHooks';
import { layerDeleted, layerReset } from 'features/controlLayers/store/controlLayersSlice';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiArrowCounterClockwiseBold, PiDotsThreeVerticalBold, PiTrashSimpleBold } from 'react-icons/pi';
type Props = { layerId: string };
export const LayerMenu = memo(({ layerId }: Props) => {
const dispatch = useAppDispatch();
const { t } = useTranslation();
const layerType = useLayerType(layerId);
const resetLayer = useCallback(() => {
dispatch(layerReset(layerId));
}, [dispatch, layerId]);
const deleteLayer = useCallback(() => {
dispatch(layerDeleted(layerId));
}, [dispatch, layerId]);
return (
<Menu>
<MenuButton
as={IconButton}
aria-label="Layer menu"
size="sm"
icon={<PiDotsThreeVerticalBold />}
onDoubleClick={stopPropagation} // double click expands the layer
/>
<MenuList>
{layerType === 'regional_guidance_layer' && (
<>
<LayerMenuRGActions layerId={layerId} />
<MenuDivider />
</>
)}
{(layerType === 'regional_guidance_layer' ||
layerType === 'control_adapter_layer' ||
layerType === 'initial_image_layer') && (
<>
<LayerMenuArrangeActions layerId={layerId} />
<MenuDivider />
</>
)}
{layerType === 'regional_guidance_layer' && (
<MenuItem onClick={resetLayer} icon={<PiArrowCounterClockwiseBold />}>
{t('accessibility.reset')}
</MenuItem>
)}
<MenuItem onClick={deleteLayer} icon={<PiTrashSimpleBold />} color="error.300">
{t('common.delete')}
</MenuItem>
</MenuList>
</Menu>
);
});
LayerMenu.displayName = 'LayerMenu';

View File

@@ -1,69 +0,0 @@
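// Menu items to reorder a layer: move forward/backward one step or jump to
// front/back, each disabled when the layer is already at that end of the stack.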
import { MenuItem } from '@invoke-ai/ui-library';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import {
isRenderableLayer,
layerMovedBackward,
layerMovedForward,
layerMovedToBack,
layerMovedToFront,
selectControlLayersSlice,
} from 'features/controlLayers/store/controlLayersSlice';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiArrowDownBold, PiArrowLineDownBold, PiArrowLineUpBold, PiArrowUpBold } from 'react-icons/pi';
import { assert } from 'tsafe';
type Props = { layerId: string };
export const LayerMenuArrangeActions = memo(({ layerId }: Props) => {
const dispatch = useAppDispatch();
const { t } = useTranslation();
const selectValidActions = useMemo(
() =>
createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
const layer = controlLayers.present.layers.find((l) => l.id === layerId);
assert(isRenderableLayer(layer), `Layer ${layerId} not found or not a renderable layer`);
const layerIndex = controlLayers.present.layers.findIndex((l) => l.id === layerId);
const layerCount = controlLayers.present.layers.length;
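// Layers render in array order, so the highest index is the front-most layer.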
return {
canMoveForward: layerIndex < layerCount - 1,
canMoveBackward: layerIndex > 0,
canMoveToFront: layerIndex < layerCount - 1,
canMoveToBack: layerIndex > 0,
};
}),
[layerId]
);
const validActions = useAppSelector(selectValidActions);
const moveForward = useCallback(() => {
dispatch(layerMovedForward(layerId));
}, [dispatch, layerId]);
const moveToFront = useCallback(() => {
dispatch(layerMovedToFront(layerId));
}, [dispatch, layerId]);
const moveBackward = useCallback(() => {
dispatch(layerMovedBackward(layerId));
}, [dispatch, layerId]);
const moveToBack = useCallback(() => {
dispatch(layerMovedToBack(layerId));
}, [dispatch, layerId]);
return (
<>
<MenuItem onClick={moveToFront} isDisabled={!validActions.canMoveToFront} icon={<PiArrowLineUpBold />}>
{t('controlLayers.moveToFront')}
</MenuItem>
<MenuItem onClick={moveForward} isDisabled={!validActions.canMoveForward} icon={<PiArrowUpBold />}>
{t('controlLayers.moveForward')}
</MenuItem>
<MenuItem onClick={moveBackward} isDisabled={!validActions.canMoveBackward} icon={<PiArrowDownBold />}>
{t('controlLayers.moveBackward')}
</MenuItem>
<MenuItem onClick={moveToBack} isDisabled={!validActions.canMoveToBack} icon={<PiArrowLineDownBold />}>
{t('controlLayers.moveToBack')}
</MenuItem>
</>
);
});
LayerMenuArrangeActions.displayName = 'LayerMenuArrangeActions';

View File

@@ -1,56 +0,0 @@
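// Regional guidance menu items: add a positive or negative prompt (only while the
// layer lacks one) or add another IP adapter to the region.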
import { MenuItem } from '@invoke-ai/ui-library';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useAddIPAdapterToIPALayer } from 'features/controlLayers/hooks/addLayerHooks';
import {
isRegionalGuidanceLayer,
rgLayerNegativePromptChanged,
rgLayerPositivePromptChanged,
selectControlLayersSlice,
} from 'features/controlLayers/store/controlLayersSlice';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiPlusBold } from 'react-icons/pi';
import { assert } from 'tsafe';
type Props = { layerId: string };
export const LayerMenuRGActions = memo(({ layerId }: Props) => {
const dispatch = useAppDispatch();
const { t } = useTranslation();
const [addIPAdapter, isAddIPAdapterDisabled] = useAddIPAdapterToIPALayer(layerId);
const selectValidActions = useMemo(
() =>
createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
const layer = controlLayers.present.layers.find((l) => l.id === layerId);
assert(isRegionalGuidanceLayer(layer), `Layer ${layerId} not found or not a regional guidance layer`);
return {
canAddPositivePrompt: layer.positivePrompt === null,
canAddNegativePrompt: layer.negativePrompt === null,
};
}),
[layerId]
);
const validActions = useAppSelector(selectValidActions);
const addPositivePrompt = useCallback(() => {
dispatch(rgLayerPositivePromptChanged({ layerId, prompt: '' }));
}, [dispatch, layerId]);
const addNegativePrompt = useCallback(() => {
dispatch(rgLayerNegativePromptChanged({ layerId, prompt: '' }));
}, [dispatch, layerId]);
return (
<>
<MenuItem onClick={addPositivePrompt} isDisabled={!validActions.canAddPositivePrompt} icon={<PiPlusBold />}>
{t('controlLayers.addPositivePrompt')}
</MenuItem>
<MenuItem onClick={addNegativePrompt} isDisabled={!validActions.canAddNegativePrompt} icon={<PiPlusBold />}>
{t('controlLayers.addNegativePrompt')}
</MenuItem>
<MenuItem onClick={addIPAdapter} icon={<PiPlusBold />} isDisabled={isAddIPAdapterDisabled}>
{t('controlLayers.addIPAdapter')}
</MenuItem>
</>
);
});
LayerMenuRGActions.displayName = 'LayerMenuRGActions';

View File

@@ -1,31 +0,0 @@
import { Text } from '@invoke-ai/ui-library';
import type { Layer } from 'features/controlLayers/store/types';
import { memo, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
type Props = {
type: Layer['type'];
};
export const LayerTitle = memo(({ type }: Props) => {
const { t } = useTranslation();
const title = useMemo(() => {
if (type === 'regional_guidance_layer') {
return t('controlLayers.regionalGuidance');
} else if (type === 'control_adapter_layer') {
return t('controlLayers.globalControlAdapter');
} else if (type === 'ip_adapter_layer') {
return t('controlLayers.globalIPAdapter');
} else if (type === 'initial_image_layer') {
return t('controlLayers.globalInitialImage');
}
}, [t, type]);
return (
<Text size="sm" fontWeight="semibold" userSelect="none" color="base.300">
{title}
</Text>
);
});
LayerTitle.displayName = 'LayerTitle';

View File

@@ -1,30 +0,0 @@
import type { ChakraProps } from '@invoke-ai/ui-library';
import { Flex } from '@invoke-ai/ui-library';
import type { PropsWithChildren } from 'react';
import { memo } from 'react';
type Props = PropsWithChildren<{
onClick?: () => void;
borderColor: ChakraProps['bg'];
}>;
export const LayerWrapper = memo(({ onClick, borderColor, children }: Props) => {
return (
<Flex
gap={2}
onClick={onClick}
bg={borderColor}
px={2}
borderRadius="base"
py="1px"
transitionProperty="all"
transitionDuration="0.2s"
>
<Flex flexDir="column" w="full" bg="base.850" borderRadius="base">
{children}
</Flex>
</Flex>
);
});
LayerWrapper.displayName = 'LayerWrapper';

View File

@@ -1,83 +0,0 @@
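// Collapsible panel for a regional guidance layer: color picker, settings popover,
// menu and delete controls in the header, with the layer's prompts and any
// region-specific IP adapters below. An "auto negative" badge is shown when the
// layer's autoNegative mode is set to invert.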
import { Badge, Flex, Spacer, useDisclosure } from '@invoke-ai/ui-library';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { rgbColorToString } from 'features/canvas/util/colorToString';
import { AddPromptButtons } from 'features/controlLayers/components/AddPromptButtons';
import { LayerDeleteButton } from 'features/controlLayers/components/LayerCommon/LayerDeleteButton';
import { LayerMenu } from 'features/controlLayers/components/LayerCommon/LayerMenu';
import { LayerTitle } from 'features/controlLayers/components/LayerCommon/LayerTitle';
import { LayerVisibilityToggle } from 'features/controlLayers/components/LayerCommon/LayerVisibilityToggle';
import { LayerWrapper } from 'features/controlLayers/components/LayerCommon/LayerWrapper';
import {
isRegionalGuidanceLayer,
layerSelected,
selectControlLayersSlice,
} from 'features/controlLayers/store/controlLayersSlice';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { assert } from 'tsafe';
import { RGLayerColorPicker } from './RGLayerColorPicker';
import { RGLayerIPAdapterList } from './RGLayerIPAdapterList';
import { RGLayerNegativePrompt } from './RGLayerNegativePrompt';
import { RGLayerPositivePrompt } from './RGLayerPositivePrompt';
import RGLayerSettingsPopover from './RGLayerSettingsPopover';
type Props = {
layerId: string;
};
export const RGLayer = memo(({ layerId }: Props) => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const selector = useMemo(
() =>
createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
const layer = controlLayers.present.layers.find((l) => l.id === layerId);
assert(isRegionalGuidanceLayer(layer), `Layer ${layerId} not found or not a regional guidance layer`);
return {
color: rgbColorToString(layer.previewColor),
hasPositivePrompt: layer.positivePrompt !== null,
hasNegativePrompt: layer.negativePrompt !== null,
hasIPAdapters: layer.ipAdapters.length > 0,
isSelected: layerId === controlLayers.present.selectedLayerId,
autoNegative: layer.autoNegative,
};
}),
[layerId]
);
const { autoNegative, color, hasPositivePrompt, hasNegativePrompt, hasIPAdapters, isSelected } =
useAppSelector(selector);
const { isOpen, onToggle } = useDisclosure({ defaultIsOpen: true });
const onClick = useCallback(() => {
dispatch(layerSelected(layerId));
}, [dispatch, layerId]);
return (
<LayerWrapper onClick={onClick} borderColor={isSelected ? color : 'base.800'}>
<Flex gap={3} alignItems="center" p={3} cursor="pointer" onDoubleClick={onToggle}>
<LayerVisibilityToggle layerId={layerId} />
<LayerTitle type="regional_guidance_layer" />
<Spacer />
{autoNegative === 'invert' && (
<Badge color="base.300" bg="transparent" borderWidth={1} userSelect="none">
{t('controlLayers.autoNegative')}
</Badge>
)}
<RGLayerColorPicker layerId={layerId} />
<RGLayerSettingsPopover layerId={layerId} />
<LayerMenu layerId={layerId} />
<LayerDeleteButton layerId={layerId} />
</Flex>
{isOpen && (
<Flex flexDir="column" gap={3} px={3} pb={3}>
{!hasPositivePrompt && !hasNegativePrompt && !hasIPAdapters && <AddPromptButtons layerId={layerId} />}
{hasPositivePrompt && <RGLayerPositivePrompt layerId={layerId} />}
{hasNegativePrompt && <RGLayerNegativePrompt layerId={layerId} />}
{hasIPAdapters && <RGLayerIPAdapterList layerId={layerId} />}
</Flex>
)}
</LayerWrapper>
);
});
RGLayer.displayName = 'RGLayer';

View File

@@ -1,45 +0,0 @@
import { Divider, Flex } from '@invoke-ai/ui-library';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import { RGLayerIPAdapterWrapper } from 'features/controlLayers/components/RGLayer/RGLayerIPAdapterWrapper';
import { isRegionalGuidanceLayer, selectControlLayersSlice } from 'features/controlLayers/store/controlLayersSlice';
import { memo, useMemo } from 'react';
import { assert } from 'tsafe';
type Props = {
layerId: string;
};
export const RGLayerIPAdapterList = memo(({ layerId }: Props) => {
const selectIPAdapterIds = useMemo(
() =>
createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
const layer = controlLayers.present.layers.filter(isRegionalGuidanceLayer).find((l) => l.id === layerId);
assert(layer, `Layer ${layerId} not found`);
return layer.ipAdapters;
}),
[layerId]
);
const ipAdapters = useAppSelector(selectIPAdapterIds);
if (ipAdapters.length === 0) {
return null;
}
return (
<>
{ipAdapters.map(({ id }, index) => (
<Flex flexDir="column" key={id}>
{index > 0 && (
<Flex pb={3}>
<Divider />
</Flex>
)}
<RGLayerIPAdapterWrapper layerId={layerId} ipAdapterId={id} ipAdapterNumber={index + 1} />
</Flex>
))}
</>
);
});
RGLayerIPAdapterList.displayName = 'RGLayerIPAdapterList';

View File

@@ -1,131 +0,0 @@
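// A single numbered IP adapter entry within a regional guidance layer: a header
// with a delete button, plus the shared IPAdapter controls wired to the
// rgLayerIPAdapter* actions.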
import { Flex, IconButton, Spacer, Text } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { IPAdapter } from 'features/controlLayers/components/ControlAndIPAdapter/IPAdapter';
import {
rgLayerIPAdapterBeginEndStepPctChanged,
rgLayerIPAdapterCLIPVisionModelChanged,
rgLayerIPAdapterDeleted,
rgLayerIPAdapterImageChanged,
rgLayerIPAdapterMethodChanged,
rgLayerIPAdapterModelChanged,
rgLayerIPAdapterWeightChanged,
selectRGLayerIPAdapterOrThrow,
} from 'features/controlLayers/store/controlLayersSlice';
import type { CLIPVisionModelV2, IPMethodV2 } from 'features/controlLayers/util/controlAdapters';
import type { RGLayerIPAdapterImageDropData } from 'features/dnd/types';
import { memo, useCallback, useMemo } from 'react';
import { PiTrashSimpleBold } from 'react-icons/pi';
import type { ImageDTO, IPAdapterModelConfig, RGLayerIPAdapterImagePostUploadAction } from 'services/api/types';
type Props = {
layerId: string;
ipAdapterId: string;
ipAdapterNumber: number;
};
export const RGLayerIPAdapterWrapper = memo(({ layerId, ipAdapterId, ipAdapterNumber }: Props) => {
const dispatch = useAppDispatch();
const onDeleteIPAdapter = useCallback(() => {
dispatch(rgLayerIPAdapterDeleted({ layerId, ipAdapterId }));
}, [dispatch, ipAdapterId, layerId]);
const ipAdapter = useAppSelector((s) => selectRGLayerIPAdapterOrThrow(s.controlLayers.present, layerId, ipAdapterId));
const onChangeBeginEndStepPct = useCallback(
(beginEndStepPct: [number, number]) => {
dispatch(
rgLayerIPAdapterBeginEndStepPctChanged({
layerId,
ipAdapterId,
beginEndStepPct,
})
);
},
[dispatch, ipAdapterId, layerId]
);
const onChangeWeight = useCallback(
(weight: number) => {
dispatch(rgLayerIPAdapterWeightChanged({ layerId, ipAdapterId, weight }));
},
[dispatch, ipAdapterId, layerId]
);
const onChangeIPMethod = useCallback(
(method: IPMethodV2) => {
dispatch(rgLayerIPAdapterMethodChanged({ layerId, ipAdapterId, method }));
},
[dispatch, ipAdapterId, layerId]
);
const onChangeModel = useCallback(
(modelConfig: IPAdapterModelConfig) => {
dispatch(rgLayerIPAdapterModelChanged({ layerId, ipAdapterId, modelConfig }));
},
[dispatch, ipAdapterId, layerId]
);
const onChangeCLIPVisionModel = useCallback(
(clipVisionModel: CLIPVisionModelV2) => {
dispatch(rgLayerIPAdapterCLIPVisionModelChanged({ layerId, ipAdapterId, clipVisionModel }));
},
[dispatch, ipAdapterId, layerId]
);
const onChangeImage = useCallback(
(imageDTO: ImageDTO | null) => {
dispatch(rgLayerIPAdapterImageChanged({ layerId, ipAdapterId, imageDTO }));
},
[dispatch, ipAdapterId, layerId]
);
const droppableData = useMemo<RGLayerIPAdapterImageDropData>(
() => ({
actionType: 'SET_RG_LAYER_IP_ADAPTER_IMAGE',
context: {
layerId,
ipAdapterId,
},
id: layerId,
}),
[ipAdapterId, layerId]
);
const postUploadAction = useMemo<RGLayerIPAdapterImagePostUploadAction>(
() => ({
type: 'SET_RG_LAYER_IP_ADAPTER_IMAGE',
layerId,
ipAdapterId,
}),
[ipAdapterId, layerId]
);
return (
<Flex flexDir="column" gap={3}>
<Flex alignItems="center" gap={3}>
<Text fontWeight="semibold" color="base.400">{`IP Adapter ${ipAdapterNumber}`}</Text>
<Spacer />
<IconButton
size="sm"
icon={<PiTrashSimpleBold />}
aria-label="Delete IP Adapter"
onClick={onDeleteIPAdapter}
variant="ghost"
colorScheme="error"
/>
</Flex>
<IPAdapter
ipAdapter={ipAdapter}
onChangeBeginEndStepPct={onChangeBeginEndStepPct}
onChangeWeight={onChangeWeight}
onChangeIPMethod={onChangeIPMethod}
onChangeModel={onChangeModel}
onChangeCLIPVisionModel={onChangeCLIPVisionModel}
onChangeImage={onChangeImage}
droppableData={droppableData}
postUploadAction={postUploadAction}
/>
</Flex>
);
});
RGLayerIPAdapterWrapper.displayName = 'RGLayerIPAdapterWrapper';

View File

@@ -1,258 +0,0 @@
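// Konva stage for the control layers editor. useStageRenderer owns the stage
// lifecycle: it mounts the stage into its container, attaches mouse handlers,
// fits the stage to the wrapper with a ResizeObserver, and re-renders the tool
// preview, layers, bboxes, background and "no layers" message as state changes.
// With asPreview=true, interaction and overlays are skipped and debounced
// renderers are used.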
import { Flex } from '@invoke-ai/ui-library';
import { useStore } from '@nanostores/react';
import { createSelector } from '@reduxjs/toolkit';
import { logger } from 'app/logging/logger';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useMouseEvents } from 'features/controlLayers/hooks/mouseEventHooks';
import {
$lastCursorPos,
$lastMouseDownPos,
$tool,
isRegionalGuidanceLayer,
layerBboxChanged,
layerTranslated,
selectControlLayersSlice,
} from 'features/controlLayers/store/controlLayersSlice';
import { debouncedRenderers, renderers as normalRenderers } from 'features/controlLayers/util/renderers';
import Konva from 'konva';
import type { IRect } from 'konva/lib/types';
import { memo, useCallback, useLayoutEffect, useMemo, useState } from 'react';
import { useDevicePixelRatio } from 'use-device-pixel-ratio';
import { v4 as uuidv4 } from 'uuid';
// Konva logs a warning when a stage has more than 5 layers; disable the warnings here.
// TODO: consider gating this on `import.meta.env.MODE === 'development'` instead.
Konva.showWarnings = false;
const log = logger('controlLayers');
const selectSelectedLayerColor = createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
const layer = controlLayers.present.layers
.filter(isRegionalGuidanceLayer)
.find((l) => l.id === controlLayers.present.selectedLayerId);
return layer?.previewColor ?? null;
});
const selectSelectedLayerType = createSelector(selectControlLayersSlice, (controlLayers) => {
const selectedLayer = controlLayers.present.layers.find((l) => l.id === controlLayers.present.selectedLayerId);
return selectedLayer?.type ?? null;
});
const useStageRenderer = (
stage: Konva.Stage,
container: HTMLDivElement | null,
wrapper: HTMLDivElement | null,
asPreview: boolean
) => {
const dispatch = useAppDispatch();
const state = useAppSelector((s) => s.controlLayers.present);
const tool = useStore($tool);
const mouseEventHandlers = useMouseEvents();
const lastCursorPos = useStore($lastCursorPos);
const lastMouseDownPos = useStore($lastMouseDownPos);
const selectedLayerIdColor = useAppSelector(selectSelectedLayerColor);
const selectedLayerType = useAppSelector(selectSelectedLayerType);
const layerIds = useMemo(() => state.layers.map((l) => l.id), [state.layers]);
const layerCount = useMemo(() => state.layers.length, [state.layers]);
const renderers = useMemo(() => (asPreview ? debouncedRenderers : normalRenderers), [asPreview]);
const dpr = useDevicePixelRatio({ round: false });
const onLayerPosChanged = useCallback(
(layerId: string, x: number, y: number) => {
dispatch(layerTranslated({ layerId, x, y }));
},
[dispatch]
);
const onBboxChanged = useCallback(
(layerId: string, bbox: IRect | null) => {
dispatch(layerBboxChanged({ layerId, bbox }));
},
[dispatch]
);
useLayoutEffect(() => {
log.trace('Initializing stage');
if (!container) {
return;
}
stage.container(container);
return () => {
log.trace('Cleaning up stage');
stage.destroy();
};
}, [container, stage]);
useLayoutEffect(() => {
log.trace('Adding stage listeners');
if (asPreview) {
return;
}
stage.on('mousedown', mouseEventHandlers.onMouseDown);
stage.on('mouseup', mouseEventHandlers.onMouseUp);
stage.on('mousemove', mouseEventHandlers.onMouseMove);
stage.on('mouseleave', mouseEventHandlers.onMouseLeave);
stage.on('wheel', mouseEventHandlers.onMouseWheel);
return () => {
log.trace('Cleaning up stage listeners');
stage.off('mousedown', mouseEventHandlers.onMouseDown);
stage.off('mouseup', mouseEventHandlers.onMouseUp);
stage.off('mousemove', mouseEventHandlers.onMouseMove);
stage.off('mouseleave', mouseEventHandlers.onMouseLeave);
stage.off('wheel', mouseEventHandlers.onMouseWheel);
};
}, [stage, asPreview, mouseEventHandlers]);
useLayoutEffect(() => {
log.trace('Updating stage dimensions');
if (!wrapper) {
return;
}
const fitStageToContainer = () => {
const newXScale = wrapper.offsetWidth / state.size.width;
const newYScale = wrapper.offsetHeight / state.size.height;
const newScale = Math.min(newXScale, newYScale, 1);
stage.width(state.size.width * newScale);
stage.height(state.size.height * newScale);
stage.scaleX(newScale);
stage.scaleY(newScale);
};
const resizeObserver = new ResizeObserver(fitStageToContainer);
resizeObserver.observe(wrapper);
fitStageToContainer();
return () => {
resizeObserver.disconnect();
};
}, [stage, state.size.width, state.size.height, wrapper]);
useLayoutEffect(() => {
if (asPreview) {
// Preview should not display tool
return;
}
log.trace('Rendering tool preview');
renderers.renderToolPreview(
stage,
tool,
selectedLayerIdColor,
selectedLayerType,
state.globalMaskLayerOpacity,
lastCursorPos,
lastMouseDownPos,
state.brushSize
);
}, [
asPreview,
stage,
tool,
selectedLayerIdColor,
selectedLayerType,
state.globalMaskLayerOpacity,
lastCursorPos,
lastMouseDownPos,
state.brushSize,
renderers,
]);
useLayoutEffect(() => {
log.trace('Rendering layers');
renderers.renderLayers(stage, state.layers, state.globalMaskLayerOpacity, tool, onLayerPosChanged);
}, [
stage,
state.layers,
state.globalMaskLayerOpacity,
tool,
onLayerPosChanged,
renderers,
state.size.width,
state.size.height,
]);
useLayoutEffect(() => {
log.trace('Rendering bbox');
if (asPreview) {
// Preview should not display bboxes
return;
}
renderers.renderBboxes(stage, state.layers, tool);
}, [stage, asPreview, state.layers, tool, onBboxChanged, renderers]);
useLayoutEffect(() => {
if (asPreview) {
// Preview should not check for transparency
return;
}
log.trace('Updating bboxes');
debouncedRenderers.updateBboxes(stage, state.layers, onBboxChanged);
}, [stage, asPreview, state.layers, onBboxChanged]);
useLayoutEffect(() => {
if (asPreview) {
// The preview should not have a background
return;
}
log.trace('Rendering background');
renderers.renderBackground(stage, state.size.width, state.size.height);
}, [stage, asPreview, state.size.width, state.size.height, renderers]);
useLayoutEffect(() => {
log.trace('Arranging layers');
renderers.arrangeLayers(stage, layerIds);
}, [stage, layerIds, renderers]);
useLayoutEffect(() => {
if (asPreview) {
// The preview should not display the no layers message
return;
}
log.trace('Rendering no layers message');
renderers.renderNoLayersMessage(stage, layerCount, state.size.width, state.size.height);
}, [stage, layerCount, renderers, asPreview, state.size.width, state.size.height]);
useLayoutEffect(() => {
Konva.pixelRatio = dpr;
}, [dpr]);
};
type Props = {
asPreview?: boolean;
};
export const StageComponent = memo(({ asPreview = false }: Props) => {
const [stage] = useState(
() => new Konva.Stage({ id: uuidv4(), container: document.createElement('div'), listening: !asPreview })
);
const [container, setContainer] = useState<HTMLDivElement | null>(null);
const [wrapper, setWrapper] = useState<HTMLDivElement | null>(null);
const containerRef = useCallback((el: HTMLDivElement | null) => {
setContainer(el);
}, []);
const wrapperRef = useCallback((el: HTMLDivElement | null) => {
setWrapper(el);
}, []);
useStageRenderer(stage, container, wrapper, asPreview);
return (
<Flex overflow="hidden" w="full" h="full">
<Flex ref={wrapperRef} w="full" h="full" alignItems="center" justifyContent="center">
<Flex
ref={containerRef}
tabIndex={-1}
bg="base.850"
borderRadius="base"
overflow="hidden"
data-testid="control-layers-canvas"
/>
</Flex>
</Flex>
);
});
StageComponent.displayName = 'StageComponent';

View File

@@ -1,111 +0,0 @@
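// Hooks for adding layers, each returning an [add, isDisabled] tuple. Model-backed
// layers pick the first model config compatible with the current base model and are
// disabled when none is installed; the initial image hook is disabled once such a
// layer already exists.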
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import {
caLayerAdded,
iiLayerAdded,
ipaLayerAdded,
isInitialImageLayer,
rgLayerIPAdapterAdded,
} from 'features/controlLayers/store/controlLayersSlice';
import {
buildControlNet,
buildIPAdapter,
buildT2IAdapter,
CA_PROCESSOR_DATA,
isProcessorTypeV2,
} from 'features/controlLayers/util/controlAdapters';
import { zModelIdentifierField } from 'features/nodes/types/common';
import { useCallback, useMemo } from 'react';
import { useControlNetAndT2IAdapterModels, useIPAdapterModels } from 'services/api/hooks/modelsByType';
import type { ControlNetModelConfig, IPAdapterModelConfig, T2IAdapterModelConfig } from 'services/api/types';
import { v4 as uuidv4 } from 'uuid';
export const useAddCALayer = () => {
const dispatch = useAppDispatch();
const baseModel = useAppSelector((s) => s.generation.model?.base);
const [modelConfigs] = useControlNetAndT2IAdapterModels();
const model: ControlNetModelConfig | T2IAdapterModelConfig | null = useMemo(() => {
// prefer to use a model that matches the base model
const compatibleModels = modelConfigs.filter((m) => (baseModel ? m.base === baseModel : true));
return compatibleModels[0] ?? modelConfigs[0] ?? null;
}, [baseModel, modelConfigs]);
const isDisabled = useMemo(() => !model, [model]);
const addCALayer = useCallback(() => {
if (!model) {
return;
}
const id = uuidv4();
const defaultPreprocessor = model.default_settings?.preprocessor;
const processorConfig = isProcessorTypeV2(defaultPreprocessor)
? CA_PROCESSOR_DATA[defaultPreprocessor].buildDefaults(baseModel)
: null;
const builder = model.type === 'controlnet' ? buildControlNet : buildT2IAdapter;
const controlAdapter = builder(id, {
model: zModelIdentifierField.parse(model),
processorConfig,
});
dispatch(caLayerAdded(controlAdapter));
}, [dispatch, model, baseModel]);
return [addCALayer, isDisabled] as const;
};
export const useAddIPALayer = () => {
const dispatch = useAppDispatch();
const baseModel = useAppSelector((s) => s.generation.model?.base);
const [modelConfigs] = useIPAdapterModels();
const model: IPAdapterModelConfig | null = useMemo(() => {
// prefer to use a model that matches the base model
const compatibleModels = modelConfigs.filter((m) => (baseModel ? m.base === baseModel : true));
return compatibleModels[0] ?? modelConfigs[0] ?? null;
}, [baseModel, modelConfigs]);
const isDisabled = useMemo(() => !model, [model]);
const addIPALayer = useCallback(() => {
if (!model) {
return;
}
const id = uuidv4();
const ipAdapter = buildIPAdapter(id, {
model: zModelIdentifierField.parse(model),
});
dispatch(ipaLayerAdded(ipAdapter));
}, [dispatch, model]);
return [addIPALayer, isDisabled] as const;
};
export const useAddIPAdapterToIPALayer = (layerId: string) => {
const dispatch = useAppDispatch();
const baseModel = useAppSelector((s) => s.generation.model?.base);
const [modelConfigs] = useIPAdapterModels();
const model: IPAdapterModelConfig | null = useMemo(() => {
// prefer to use a model that matches the base model
const compatibleModels = modelConfigs.filter((m) => (baseModel ? m.base === baseModel : true));
return compatibleModels[0] ?? modelConfigs[0] ?? null;
}, [baseModel, modelConfigs]);
const isDisabled = useMemo(() => !model, [model]);
const addIPAdapter = useCallback(() => {
if (!model) {
return;
}
const id = uuidv4();
const ipAdapter = buildIPAdapter(id, {
model: zModelIdentifierField.parse(model),
});
dispatch(rgLayerIPAdapterAdded({ layerId, ipAdapter }));
}, [dispatch, model, layerId]);
return [addIPAdapter, isDisabled] as const;
};
export const useAddIILayer = () => {
const dispatch = useAppDispatch();
const isDisabled = useAppSelector((s) => s.controlLayers.present.layers.some(isInitialImageLayer));
const addIILayer = useCallback(() => {
dispatch(iiLayerAdded(null));
}, [dispatch]);
return [addIILayer, isDisabled] as const;
};

View File

@@ -1,82 +0,0 @@
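// Per-layer selector hooks. Each memoizes a layerId-scoped selector and asserts the
// layer exists (and has the expected type) before reading a field from it.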
import { createSelector } from '@reduxjs/toolkit';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import {
isControlAdapterLayer,
isRegionalGuidanceLayer,
selectControlLayersSlice,
} from 'features/controlLayers/store/controlLayersSlice';
import { useMemo } from 'react';
import { assert } from 'tsafe';
export const useLayerPositivePrompt = (layerId: string) => {
const selectLayer = useMemo(
() =>
createSelector(selectControlLayersSlice, (controlLayers) => {
const layer = controlLayers.present.layers.find((l) => l.id === layerId);
assert(isRegionalGuidanceLayer(layer), `Layer ${layerId} not found or not a regional guidance layer`);
assert(layer.positivePrompt !== null, `Layer ${layerId} does not have a positive prompt`);
return layer.positivePrompt;
}),
[layerId]
);
const prompt = useAppSelector(selectLayer);
return prompt;
};
export const useLayerNegativePrompt = (layerId: string) => {
const selectLayer = useMemo(
() =>
createSelector(selectControlLayersSlice, (controlLayers) => {
const layer = controlLayers.present.layers.find((l) => l.id === layerId);
assert(isRegionalGuidanceLayer(layer), `Layer ${layerId} not found or not a regional guidance layer`);
assert(layer.negativePrompt !== null, `Layer ${layerId} does not have a negative prompt`);
return layer.negativePrompt;
}),
[layerId]
);
const prompt = useAppSelector(selectLayer);
return prompt;
};
export const useLayerIsVisible = (layerId: string) => {
const selectLayer = useMemo(
() =>
createSelector(selectControlLayersSlice, (controlLayers) => {
const layer = controlLayers.present.layers.find((l) => l.id === layerId);
assert(layer, `Layer ${layerId} not found`);
return layer.isEnabled;
}),
[layerId]
);
const isVisible = useAppSelector(selectLayer);
return isVisible;
};
export const useLayerType = (layerId: string) => {
const selectLayer = useMemo(
() =>
createSelector(selectControlLayersSlice, (controlLayers) => {
const layer = controlLayers.present.layers.find((l) => l.id === layerId);
assert(layer, `Layer ${layerId} not found`);
return layer.type;
}),
[layerId]
);
const type = useAppSelector(selectLayer);
return type;
};
export const useLayerOpacity = (layerId: string) => {
const selectLayer = useMemo(
() =>
createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
const layer = controlLayers.present.layers.filter(isControlAdapterLayer).find((l) => l.id === layerId);
assert(layer, `Layer ${layerId} not found`);
return { opacity: Math.round(layer.opacity * 100), isFilterEnabled: layer.isFilterEnabled };
}),
[layerId]
);
const opacity = useAppSelector(selectLayer);
return opacity;
};

View File

@@ -1,233 +0,0 @@
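// Mouse interaction for the control layers stage: brush/eraser strokes and rect
// drawing on the selected regional guidance layer, cursor position tracking via
// nanostores, edge snapping for rects, and scroll-wheel brush resizing.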
import { $ctrl, $meta } from '@invoke-ai/ui-library';
import { useStore } from '@nanostores/react';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { calculateNewBrushSize } from 'features/canvas/hooks/useCanvasZoom';
import {
$isDrawing,
$lastCursorPos,
$lastMouseDownPos,
$tool,
brushSizeChanged,
rgLayerLineAdded,
rgLayerPointsAdded,
rgLayerRectAdded,
} from 'features/controlLayers/store/controlLayersSlice';
import type Konva from 'konva';
import type { KonvaEventObject } from 'konva/lib/Node';
import type { Vector2d } from 'konva/lib/types';
import { clamp } from 'lodash-es';
import { useCallback, useMemo, useRef } from 'react';
const getIsFocused = (stage: Konva.Stage) => {
return stage.container().contains(document.activeElement);
};
const getIsMouseDown = (e: KonvaEventObject<MouseEvent>) => e.evt.buttons === 1;
const SNAP_PX = 10;
export const snapPosToStage = (pos: Vector2d, stage: Konva.Stage) => {
const snappedPos = { ...pos };
// Get the normalized threshold for snapping to the edge of the stage
const thresholdX = SNAP_PX / stage.scaleX();
const thresholdY = SNAP_PX / stage.scaleY();
const stageWidth = stage.width() / stage.scaleX();
const stageHeight = stage.height() / stage.scaleY();
// Snap to the edge of the stage if within threshold
if (pos.x - thresholdX < 0) {
snappedPos.x = 0;
} else if (pos.x + thresholdX > stageWidth) {
snappedPos.x = Math.floor(stageWidth);
}
if (pos.y - thresholdY < 0) {
snappedPos.y = 0;
} else if (pos.y + thresholdY > stageHeight) {
snappedPos.y = Math.floor(stageHeight);
}
return snappedPos;
};
export const getScaledFlooredCursorPosition = (stage: Konva.Stage) => {
const pointerPosition = stage.getPointerPosition();
const stageTransform = stage.getAbsoluteTransform().copy();
if (!pointerPosition) {
return;
}
const scaledCursorPosition = stageTransform.invert().point(pointerPosition);
return {
x: Math.floor(scaledCursorPosition.x),
y: Math.floor(scaledCursorPosition.y),
};
};
const syncCursorPos = (stage: Konva.Stage): Vector2d | null => {
const pos = getScaledFlooredCursorPosition(stage);
if (!pos) {
return null;
}
$lastCursorPos.set(pos);
return pos;
};
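// Spacing between stroke points: a new point is added only after the cursor moves
// brushSize / BRUSH_SPACING_PCT px, clamped to [MIN_BRUSH_SPACING_PX,
// MAX_BRUSH_SPACING_PX], which throttles redux dispatches while drawing.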
const BRUSH_SPACING_PCT = 10;
const MIN_BRUSH_SPACING_PX = 5;
const MAX_BRUSH_SPACING_PX = 15;
export const useMouseEvents = () => {
const dispatch = useAppDispatch();
const selectedLayerId = useAppSelector((s) => s.controlLayers.present.selectedLayerId);
const selectedLayerType = useAppSelector((s) => {
const selectedLayer = s.controlLayers.present.layers.find((l) => l.id === s.controlLayers.present.selectedLayerId);
if (!selectedLayer) {
return null;
}
return selectedLayer.type;
});
const tool = useStore($tool);
const lastCursorPosRef = useRef<[number, number] | null>(null);
const shouldInvertBrushSizeScrollDirection = useAppSelector((s) => s.canvas.shouldInvertBrushSizeScrollDirection);
const brushSize = useAppSelector((s) => s.controlLayers.present.brushSize);
const brushSpacingPx = useMemo(
() => clamp(brushSize / BRUSH_SPACING_PCT, MIN_BRUSH_SPACING_PX, MAX_BRUSH_SPACING_PX),
[brushSize]
);
const onMouseDown = useCallback(
(e: KonvaEventObject<MouseEvent>) => {
const stage = e.target.getStage();
if (!stage) {
return;
}
const pos = syncCursorPos(stage);
if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
if (tool === 'brush' || tool === 'eraser') {
dispatch(
rgLayerLineAdded({
layerId: selectedLayerId,
points: [pos.x, pos.y, pos.x, pos.y],
tool,
})
);
$isDrawing.set(true);
$lastMouseDownPos.set(pos);
} else if (tool === 'rect') {
$lastMouseDownPos.set(snapPosToStage(pos, stage));
}
},
[dispatch, selectedLayerId, selectedLayerType, tool]
);
const onMouseUp = useCallback(
(e: KonvaEventObject<MouseEvent>) => {
const stage = e.target.getStage();
if (!stage) {
return;
}
const pos = $lastCursorPos.get();
if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
const lastPos = $lastMouseDownPos.get();
const tool = $tool.get();
if (lastPos && selectedLayerId && tool === 'rect') {
const snappedPos = snapPosToStage(pos, stage);
dispatch(
rgLayerRectAdded({
layerId: selectedLayerId,
rect: {
x: Math.min(snappedPos.x, lastPos.x),
y: Math.min(snappedPos.y, lastPos.y),
width: Math.abs(snappedPos.x - lastPos.x),
height: Math.abs(snappedPos.y - lastPos.y),
},
})
);
}
$isDrawing.set(false);
$lastMouseDownPos.set(null);
},
[dispatch, selectedLayerId, selectedLayerType]
);
const onMouseMove = useCallback(
(e: KonvaEventObject<MouseEvent>) => {
const stage = e.target.getStage();
if (!stage) {
return;
}
const pos = syncCursorPos(stage);
if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
if (getIsFocused(stage) && getIsMouseDown(e) && (tool === 'brush' || tool === 'eraser')) {
if ($isDrawing.get()) {
// Continue the last line
if (lastCursorPosRef.current) {
// Dispatching redux actions on every mousemove hurts performance - spacing out points keeps the dispatch count reasonable
if (Math.hypot(lastCursorPosRef.current[0] - pos.x, lastCursorPosRef.current[1] - pos.y) < brushSpacingPx) {
return;
}
}
lastCursorPosRef.current = [pos.x, pos.y];
dispatch(rgLayerPointsAdded({ layerId: selectedLayerId, point: lastCursorPosRef.current }));
} else {
// Start a new line
dispatch(rgLayerLineAdded({ layerId: selectedLayerId, points: [pos.x, pos.y, pos.x, pos.y], tool }));
}
$isDrawing.set(true);
}
},
[brushSpacingPx, dispatch, selectedLayerId, selectedLayerType, tool]
);
const onMouseLeave = useCallback(
(e: KonvaEventObject<MouseEvent>) => {
const stage = e.target.getStage();
if (!stage) {
return;
}
const pos = syncCursorPos(stage);
$isDrawing.set(false);
$lastCursorPos.set(null);
$lastMouseDownPos.set(null);
if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
if (getIsFocused(stage) && getIsMouseDown(e) && (tool === 'brush' || tool === 'eraser')) {
dispatch(rgLayerPointsAdded({ layerId: selectedLayerId, point: [pos.x, pos.y] }));
}
},
[selectedLayerId, selectedLayerType, tool, dispatch]
);
const onMouseWheel = useCallback(
(e: KonvaEventObject<WheelEvent>) => {
e.evt.preventDefault();
if (selectedLayerType !== 'regional_guidance_layer' || (tool !== 'brush' && tool !== 'eraser')) {
return;
}
// When ctrl or meta is held, the scroll wheel adjusts the brush size.
// Invert the scroll delta first if the user preference is enabled.
let delta = e.evt.deltaY;
if (shouldInvertBrushSizeScrollDirection) {
delta = -delta;
}
if ($ctrl.get() || $meta.get()) {
dispatch(brushSizeChanged(calculateNewBrushSize(brushSize, delta)));
}
},
[selectedLayerType, tool, shouldInvertBrushSizeScrollDirection, dispatch, brushSize]
);
const handlers = useMemo(
() => ({ onMouseDown, onMouseUp, onMouseMove, onMouseLeave, onMouseWheel }),
[onMouseDown, onMouseUp, onMouseMove, onMouseLeave, onMouseWheel]
);
return handlers;
};
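The distance check in `onMouseMove` above is the whole throttling mechanism. A standalone sketch of the same idea, with hypothetical names (`makeThrottledEmitter` is not part of the app):

// Only emit a new point once the cursor has moved far enough from the last
// emitted point. This caps the number of dispatches per stroke.
type Point = [number, number];

const makeThrottledEmitter = (minSpacingPx: number, emit: (p: Point) => void) => {
  let last: Point | null = null;
  return (p: Point) => {
    if (last && Math.hypot(last[0] - p[0], last[1] - p[1]) < minSpacingPx) {
      return; // too close to the previous point - skip
    }
    last = p;
    emit(p);
  };
};

// Usage: emits [0,0] and [6,8] (distance 10 >= 5), skips [7,9] (distance ~1.4).
const emitPoint = makeThrottledEmitter(5, (p) => console.log(p));
emitPoint([0, 0]);
emitPoint([6, 8]);
emitPoint([7, 9]);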

View File

@@ -1,963 +0,0 @@
import type { PayloadAction, UnknownAction } from '@reduxjs/toolkit';
import { createSlice, isAnyOf } from '@reduxjs/toolkit';
import type { PersistConfig, RootState } from 'app/store/store';
import { moveBackward, moveForward, moveToBack, moveToFront } from 'common/util/arrayUtils';
import { deepClone } from 'common/util/deepClone';
import { roundDownToMultiple } from 'common/util/roundDownToMultiple';
import type {
CLIPVisionModelV2,
ControlModeV2,
ControlNetConfigV2,
IPAdapterConfigV2,
IPMethodV2,
ProcessorConfig,
T2IAdapterConfigV2,
} from 'features/controlLayers/util/controlAdapters';
import {
buildControlAdapterProcessorV2,
controlNetToT2IAdapter,
imageDTOToImageWithDims,
t2iAdapterToControlNet,
} from 'features/controlLayers/util/controlAdapters';
import { zModelIdentifierField } from 'features/nodes/types/common';
import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
import { initialAspectRatioState } from 'features/parameters/components/ImageSize/constants';
import type { AspectRatioState } from 'features/parameters/components/ImageSize/types';
import { modelChanged } from 'features/parameters/store/generationSlice';
import type { ParameterAutoNegative } from 'features/parameters/types/parameterSchemas';
import { getIsSizeOptimal, getOptimalDimension } from 'features/parameters/util/optimalDimension';
import type { IRect, Vector2d } from 'konva/lib/types';
import { isEqual, partition } from 'lodash-es';
import { atom } from 'nanostores';
import type { RgbColor } from 'react-colorful';
import type { UndoableOptions } from 'redux-undo';
import type { ControlNetModelConfig, ImageDTO, IPAdapterModelConfig, T2IAdapterModelConfig } from 'services/api/types';
import { assert } from 'tsafe';
import { v4 as uuidv4 } from 'uuid';
import type {
ControlAdapterLayer,
ControlLayersState,
DrawingTool,
InitialImageLayer,
IPAdapterLayer,
Layer,
RegionalGuidanceLayer,
Tool,
VectorMaskLine,
VectorMaskRect,
} from './types';
export const initialControlLayersState: ControlLayersState = {
_version: 2,
selectedLayerId: null,
brushSize: 100,
layers: [],
globalMaskLayerOpacity: 0.3, // this globally changes all mask layers' opacity
positivePrompt: '',
negativePrompt: '',
positivePrompt2: '',
negativePrompt2: '',
shouldConcatPrompts: true,
size: {
width: 512,
height: 512,
aspectRatio: deepClone(initialAspectRatioState),
},
};
const isLine = (obj: VectorMaskLine | VectorMaskRect): obj is VectorMaskLine => obj.type === 'vector_mask_line';
export const isRegionalGuidanceLayer = (layer?: Layer): layer is RegionalGuidanceLayer =>
layer?.type === 'regional_guidance_layer';
export const isControlAdapterLayer = (layer?: Layer): layer is ControlAdapterLayer =>
layer?.type === 'control_adapter_layer';
export const isIPAdapterLayer = (layer?: Layer): layer is IPAdapterLayer => layer?.type === 'ip_adapter_layer';
export const isInitialImageLayer = (layer?: Layer): layer is InitialImageLayer => layer?.type === 'initial_image_layer';
export const isRenderableLayer = (
layer?: Layer
): layer is RegionalGuidanceLayer | ControlAdapterLayer | InitialImageLayer =>
layer?.type === 'regional_guidance_layer' ||
layer?.type === 'control_adapter_layer' ||
layer?.type === 'initial_image_layer';
export const selectCALayerOrThrow = (state: ControlLayersState, layerId: string): ControlAdapterLayer => {
const layer = state.layers.find((l) => l.id === layerId);
assert(isControlAdapterLayer(layer));
return layer;
};
export const selectIPALayerOrThrow = (state: ControlLayersState, layerId: string): IPAdapterLayer => {
const layer = state.layers.find((l) => l.id === layerId);
assert(isIPAdapterLayer(layer));
return layer;
};
export const selectIILayerOrThrow = (state: ControlLayersState, layerId: string): InitialImageLayer => {
const layer = state.layers.find((l) => l.id === layerId);
assert(isInitialImageLayer(layer));
return layer;
};
const selectCAOrIPALayerOrThrow = (
state: ControlLayersState,
layerId: string
): ControlAdapterLayer | IPAdapterLayer => {
const layer = state.layers.find((l) => l.id === layerId);
assert(isControlAdapterLayer(layer) || isIPAdapterLayer(layer));
return layer;
};
const selectRGLayerOrThrow = (state: ControlLayersState, layerId: string): RegionalGuidanceLayer => {
const layer = state.layers.find((l) => l.id === layerId);
assert(isRegionalGuidanceLayer(layer));
return layer;
};
export const selectRGLayerIPAdapterOrThrow = (
state: ControlLayersState,
layerId: string,
ipAdapterId: string
): IPAdapterConfigV2 => {
const layer = state.layers.find((l) => l.id === layerId);
assert(isRegionalGuidanceLayer(layer));
const ipAdapter = layer.ipAdapters.find((ipAdapter) => ipAdapter.id === ipAdapterId);
assert(ipAdapter);
return ipAdapter;
};
const getVectorMaskPreviewColor = (state: ControlLayersState): RgbColor => {
const rgLayers = state.layers.filter(isRegionalGuidanceLayer);
const lastColor = rgLayers[rgLayers.length - 1]?.previewColor;
return LayerColors.next(lastColor);
};
const exclusivelySelectLayer = (state: ControlLayersState, layerId: string) => {
for (const layer of state.layers) {
layer.isSelected = layer.id === layerId;
}
state.selectedLayerId = layerId;
};
export const controlLayersSlice = createSlice({
name: 'controlLayers',
initialState: initialControlLayersState,
reducers: {
//#region Any Layer Type
layerSelected: (state, action: PayloadAction<string>) => {
exclusivelySelectLayer(state, action.payload);
},
layerVisibilityToggled: (state, action: PayloadAction<string>) => {
const layer = state.layers.find((l) => l.id === action.payload);
if (layer) {
layer.isEnabled = !layer.isEnabled;
}
},
layerTranslated: (state, action: PayloadAction<{ layerId: string; x: number; y: number }>) => {
const { layerId, x, y } = action.payload;
const layer = state.layers.find((l) => l.id === layerId);
if (isRenderableLayer(layer)) {
layer.x = x;
layer.y = y;
}
if (isRegionalGuidanceLayer(layer)) {
layer.uploadedMaskImage = null;
}
},
layerBboxChanged: (state, action: PayloadAction<{ layerId: string; bbox: IRect | null }>) => {
const { layerId, bbox } = action.payload;
const layer = state.layers.find((l) => l.id === layerId);
if (isRenderableLayer(layer)) {
layer.bbox = bbox;
layer.bboxNeedsUpdate = false;
if (bbox === null && layer.type === 'regional_guidance_layer') {
// The layer was fully erased, empty its objects to prevent accumulation of invisible objects
layer.maskObjects = [];
layer.uploadedMaskImage = null;
}
}
},
layerReset: (state, action: PayloadAction<string>) => {
const layer = state.layers.find((l) => l.id === action.payload);
// TODO(psyche): Should other layer types also have reset functionality?
if (isRegionalGuidanceLayer(layer)) {
layer.maskObjects = [];
layer.bbox = null;
layer.isEnabled = true;
layer.bboxNeedsUpdate = false;
layer.uploadedMaskImage = null;
}
},
layerDeleted: (state, action: PayloadAction<string>) => {
state.layers = state.layers.filter((l) => l.id !== action.payload);
state.selectedLayerId = state.layers[0]?.id ?? null;
},
layerMovedForward: (state, action: PayloadAction<string>) => {
const cb = (l: Layer) => l.id === action.payload;
const [renderableLayers, ipAdapterLayers] = partition(state.layers, isRenderableLayer);
moveForward(renderableLayers, cb);
state.layers = [...ipAdapterLayers, ...renderableLayers];
},
layerMovedToFront: (state, action: PayloadAction<string>) => {
const cb = (l: Layer) => l.id === action.payload;
const [renderableLayers, ipAdapterLayers] = partition(state.layers, isRenderableLayer);
// Because the layers are in reverse order, moving to the front is equivalent to moving to the back
moveToBack(renderableLayers, cb);
state.layers = [...ipAdapterLayers, ...renderableLayers];
},
layerMovedBackward: (state, action: PayloadAction<string>) => {
const cb = (l: Layer) => l.id === action.payload;
const [renderableLayers, ipAdapterLayers] = partition(state.layers, isRenderableLayer);
moveBackward(renderableLayers, cb);
state.layers = [...ipAdapterLayers, ...renderableLayers];
},
layerMovedToBack: (state, action: PayloadAction<string>) => {
const cb = (l: Layer) => l.id === action.payload;
const [renderableLayers, ipAdapterLayers] = partition(state.layers, isRenderableLayer);
// Because the layers are in reverse order, moving to the back is equivalent to moving to the front
moveToFront(renderableLayers, cb);
state.layers = [...ipAdapterLayers, ...renderableLayers];
},
selectedLayerDeleted: (state) => {
state.layers = state.layers.filter((l) => l.id !== state.selectedLayerId);
state.selectedLayerId = state.layers[0]?.id ?? null;
},
allLayersDeleted: (state) => {
state.layers = [];
state.selectedLayerId = null;
},
//#endregion
//#region CA Layers
caLayerAdded: {
reducer: (
state,
action: PayloadAction<{ layerId: string; controlAdapter: ControlNetConfigV2 | T2IAdapterConfigV2 }>
) => {
const { layerId, controlAdapter } = action.payload;
const layer: ControlAdapterLayer = {
id: getCALayerId(layerId),
type: 'control_adapter_layer',
x: 0,
y: 0,
bbox: null,
bboxNeedsUpdate: false,
isEnabled: true,
opacity: 1,
isSelected: true,
isFilterEnabled: true,
controlAdapter,
};
state.layers.push(layer);
exclusivelySelectLayer(state, layer.id);
},
prepare: (controlAdapter: ControlNetConfigV2 | T2IAdapterConfigV2) => ({
payload: { layerId: uuidv4(), controlAdapter },
}),
},
caLayerRecalled: (state, action: PayloadAction<ControlAdapterLayer>) => {
state.layers.push({ ...action.payload, isSelected: true });
exclusivelySelectLayer(state, action.payload.id);
},
caLayerImageChanged: (state, action: PayloadAction<{ layerId: string; imageDTO: ImageDTO | null }>) => {
const { layerId, imageDTO } = action.payload;
const layer = selectCALayerOrThrow(state, layerId);
layer.bbox = null;
layer.bboxNeedsUpdate = true;
layer.isEnabled = true;
if (imageDTO) {
const newImage = imageDTOToImageWithDims(imageDTO);
if (isEqual(newImage, layer.controlAdapter.image)) {
return;
}
layer.controlAdapter.image = newImage;
layer.controlAdapter.processedImage = null;
} else {
layer.controlAdapter.image = null;
layer.controlAdapter.processedImage = null;
}
},
caLayerProcessedImageChanged: (state, action: PayloadAction<{ layerId: string; imageDTO: ImageDTO | null }>) => {
const { layerId, imageDTO } = action.payload;
const layer = selectCALayerOrThrow(state, layerId);
layer.bbox = null;
layer.bboxNeedsUpdate = true;
layer.isEnabled = true;
layer.controlAdapter.processedImage = imageDTO ? imageDTOToImageWithDims(imageDTO) : null;
},
caLayerModelChanged: (
state,
action: PayloadAction<{
layerId: string;
modelConfig: ControlNetModelConfig | T2IAdapterModelConfig | null;
}>
) => {
const { layerId, modelConfig } = action.payload;
const layer = selectCALayerOrThrow(state, layerId);
if (!modelConfig) {
layer.controlAdapter.model = null;
return;
}
layer.controlAdapter.model = zModelIdentifierField.parse(modelConfig);
// We may need to convert the CA to match the model
if (layer.controlAdapter.type === 't2i_adapter' && layer.controlAdapter.model.type === 'controlnet') {
layer.controlAdapter = t2iAdapterToControlNet(layer.controlAdapter);
} else if (layer.controlAdapter.type === 'controlnet' && layer.controlAdapter.model.type === 't2i_adapter') {
layer.controlAdapter = controlNetToT2IAdapter(layer.controlAdapter);
}
const candidateProcessorConfig = buildControlAdapterProcessorV2(modelConfig);
if (candidateProcessorConfig?.type !== layer.controlAdapter.processorConfig?.type) {
// The processor has changed. For example, the previous model was a Canny model and the new model is a Depth
// model. We need to use the new processor.
layer.controlAdapter.processedImage = null;
layer.controlAdapter.processorConfig = candidateProcessorConfig;
}
},
caLayerControlModeChanged: (state, action: PayloadAction<{ layerId: string; controlMode: ControlModeV2 }>) => {
const { layerId, controlMode } = action.payload;
const layer = selectCALayerOrThrow(state, layerId);
assert(layer.controlAdapter.type === 'controlnet');
layer.controlAdapter.controlMode = controlMode;
},
caLayerProcessorConfigChanged: (
state,
action: PayloadAction<{ layerId: string; processorConfig: ProcessorConfig | null }>
) => {
const { layerId, processorConfig } = action.payload;
const layer = selectCALayerOrThrow(state, layerId);
layer.controlAdapter.processorConfig = processorConfig;
if (!processorConfig) {
layer.controlAdapter.processedImage = null;
}
},
caLayerIsFilterEnabledChanged: (state, action: PayloadAction<{ layerId: string; isFilterEnabled: boolean }>) => {
const { layerId, isFilterEnabled } = action.payload;
const layer = selectCALayerOrThrow(state, layerId);
layer.isFilterEnabled = isFilterEnabled;
},
caLayerOpacityChanged: (state, action: PayloadAction<{ layerId: string; opacity: number }>) => {
const { layerId, opacity } = action.payload;
const layer = selectCALayerOrThrow(state, layerId);
layer.opacity = opacity;
},
caLayerIsProcessingImageChanged: (
state,
action: PayloadAction<{ layerId: string; isProcessingImage: boolean }>
) => {
const { layerId, isProcessingImage } = action.payload;
const layer = selectCALayerOrThrow(state, layerId);
layer.controlAdapter.isProcessingImage = isProcessingImage;
},
//#endregion
//#region IP Adapter Layers
ipaLayerAdded: {
reducer: (state, action: PayloadAction<{ layerId: string; ipAdapter: IPAdapterConfigV2 }>) => {
const { layerId, ipAdapter } = action.payload;
const layer: IPAdapterLayer = {
id: getIPALayerId(layerId),
type: 'ip_adapter_layer',
isEnabled: true,
isSelected: true,
ipAdapter,
};
state.layers.push(layer);
exclusivelySelectLayer(state, layer.id);
},
prepare: (ipAdapter: IPAdapterConfigV2) => ({ payload: { layerId: uuidv4(), ipAdapter } }),
},
ipaLayerRecalled: (state, action: PayloadAction<IPAdapterLayer>) => {
state.layers.push(action.payload);
},
ipaLayerImageChanged: (state, action: PayloadAction<{ layerId: string; imageDTO: ImageDTO | null }>) => {
const { layerId, imageDTO } = action.payload;
const layer = selectIPALayerOrThrow(state, layerId);
layer.ipAdapter.image = imageDTO ? imageDTOToImageWithDims(imageDTO) : null;
},
ipaLayerMethodChanged: (state, action: PayloadAction<{ layerId: string; method: IPMethodV2 }>) => {
const { layerId, method } = action.payload;
const layer = selectIPALayerOrThrow(state, layerId);
layer.ipAdapter.method = method;
},
ipaLayerModelChanged: (
state,
action: PayloadAction<{
layerId: string;
modelConfig: IPAdapterModelConfig | null;
}>
) => {
const { layerId, modelConfig } = action.payload;
const layer = selectIPALayerOrThrow(state, layerId);
if (!modelConfig) {
layer.ipAdapter.model = null;
return;
}
layer.ipAdapter.model = zModelIdentifierField.parse(modelConfig);
},
ipaLayerCLIPVisionModelChanged: (
state,
action: PayloadAction<{ layerId: string; clipVisionModel: CLIPVisionModelV2 }>
) => {
const { layerId, clipVisionModel } = action.payload;
const layer = selectIPALayerOrThrow(state, layerId);
layer.ipAdapter.clipVisionModel = clipVisionModel;
},
//#endregion
//#region CA or IPA Layers
caOrIPALayerWeightChanged: (state, action: PayloadAction<{ layerId: string; weight: number }>) => {
const { layerId, weight } = action.payload;
const layer = selectCAOrIPALayerOrThrow(state, layerId);
if (layer.type === 'control_adapter_layer') {
layer.controlAdapter.weight = weight;
} else {
layer.ipAdapter.weight = weight;
}
},
caOrIPALayerBeginEndStepPctChanged: (
state,
action: PayloadAction<{ layerId: string; beginEndStepPct: [number, number] }>
) => {
const { layerId, beginEndStepPct } = action.payload;
const layer = selectCAOrIPALayerOrThrow(state, layerId);
if (layer.type === 'control_adapter_layer') {
layer.controlAdapter.beginEndStepPct = beginEndStepPct;
} else {
layer.ipAdapter.beginEndStepPct = beginEndStepPct;
}
},
//#endregion
//#region RG Layers
rgLayerAdded: {
reducer: (state, action: PayloadAction<{ layerId: string }>) => {
const { layerId } = action.payload;
const layer: RegionalGuidanceLayer = {
id: getRGLayerId(layerId),
type: 'regional_guidance_layer',
isEnabled: true,
bbox: null,
bboxNeedsUpdate: false,
maskObjects: [],
previewColor: getVectorMaskPreviewColor(state),
x: 0,
y: 0,
autoNegative: 'invert',
positivePrompt: '',
negativePrompt: null,
ipAdapters: [],
isSelected: true,
uploadedMaskImage: null,
};
state.layers.push(layer);
exclusivelySelectLayer(state, layer.id);
},
prepare: () => ({ payload: { layerId: uuidv4() } }),
},
rgLayerRecalled: (state, action: PayloadAction<RegionalGuidanceLayer>) => {
state.layers.push({ ...action.payload, isSelected: true });
exclusivelySelectLayer(state, action.payload.id);
},
rgLayerPositivePromptChanged: (state, action: PayloadAction<{ layerId: string; prompt: string | null }>) => {
const { layerId, prompt } = action.payload;
const layer = selectRGLayerOrThrow(state, layerId);
layer.positivePrompt = prompt;
},
rgLayerNegativePromptChanged: (state, action: PayloadAction<{ layerId: string; prompt: string | null }>) => {
const { layerId, prompt } = action.payload;
const layer = selectRGLayerOrThrow(state, layerId);
layer.negativePrompt = prompt;
},
rgLayerPreviewColorChanged: (state, action: PayloadAction<{ layerId: string; color: RgbColor }>) => {
const { layerId, color } = action.payload;
const layer = selectRGLayerOrThrow(state, layerId);
layer.previewColor = color;
},
rgLayerLineAdded: {
reducer: (
state,
action: PayloadAction<{
layerId: string;
points: [number, number, number, number];
tool: DrawingTool;
lineUuid: string;
}>
) => {
const { layerId, points, tool, lineUuid } = action.payload;
const layer = selectRGLayerOrThrow(state, layerId);
const lineId = getRGLayerLineId(layer.id, lineUuid);
layer.maskObjects.push({
type: 'vector_mask_line',
tool: tool,
id: lineId,
// Points must be offset by the layer's x and y coordinates
// TODO: Handle this in the event listener?
points: [points[0] - layer.x, points[1] - layer.y, points[2] - layer.x, points[3] - layer.y],
strokeWidth: state.brushSize,
});
layer.bboxNeedsUpdate = true;
layer.uploadedMaskImage = null;
},
prepare: (payload: { layerId: string; points: [number, number, number, number]; tool: DrawingTool }) => ({
payload: { ...payload, lineUuid: uuidv4() },
}),
},
rgLayerPointsAdded: (state, action: PayloadAction<{ layerId: string; point: [number, number] }>) => {
const { layerId, point } = action.payload;
const layer = selectRGLayerOrThrow(state, layerId);
const lastLine = layer.maskObjects.findLast(isLine);
if (!lastLine) {
return;
}
// Points must be offset by the layer's x and y coordinates
// TODO: Handle this in the event listener
lastLine.points.push(point[0] - layer.x, point[1] - layer.y);
layer.bboxNeedsUpdate = true;
layer.uploadedMaskImage = null;
},
rgLayerRectAdded: {
reducer: (state, action: PayloadAction<{ layerId: string; rect: IRect; rectUuid: string }>) => {
const { layerId, rect, rectUuid } = action.payload;
if (rect.height === 0 || rect.width === 0) {
// Ignore zero-area rectangles
return;
}
const layer = selectRGLayerOrThrow(state, layerId);
const id = getRGLayerRectId(layer.id, rectUuid);
layer.maskObjects.push({
type: 'vector_mask_rect',
id,
x: rect.x - layer.x,
y: rect.y - layer.y,
width: rect.width,
height: rect.height,
});
layer.bboxNeedsUpdate = true;
layer.uploadedMaskImage = null;
},
prepare: (payload: { layerId: string; rect: IRect }) => ({ payload: { ...payload, rectUuid: uuidv4() } }),
},
rgLayerMaskImageUploaded: (state, action: PayloadAction<{ layerId: string; imageDTO: ImageDTO }>) => {
const { layerId, imageDTO } = action.payload;
const layer = selectRGLayerOrThrow(state, layerId);
layer.uploadedMaskImage = imageDTOToImageWithDims(imageDTO);
},
rgLayerAutoNegativeChanged: (
state,
action: PayloadAction<{ layerId: string; autoNegative: ParameterAutoNegative }>
) => {
const { layerId, autoNegative } = action.payload;
const layer = selectRGLayerOrThrow(state, layerId);
layer.autoNegative = autoNegative;
},
rgLayerIPAdapterAdded: (state, action: PayloadAction<{ layerId: string; ipAdapter: IPAdapterConfigV2 }>) => {
const { layerId, ipAdapter } = action.payload;
const layer = selectRGLayerOrThrow(state, layerId);
layer.ipAdapters.push(ipAdapter);
},
rgLayerIPAdapterDeleted: (state, action: PayloadAction<{ layerId: string; ipAdapterId: string }>) => {
const { layerId, ipAdapterId } = action.payload;
const layer = selectRGLayerOrThrow(state, layerId);
layer.ipAdapters = layer.ipAdapters.filter((ipAdapter) => ipAdapter.id !== ipAdapterId);
},
rgLayerIPAdapterImageChanged: (
state,
action: PayloadAction<{ layerId: string; ipAdapterId: string; imageDTO: ImageDTO | null }>
) => {
const { layerId, ipAdapterId, imageDTO } = action.payload;
const ipAdapter = selectRGLayerIPAdapterOrThrow(state, layerId, ipAdapterId);
ipAdapter.image = imageDTO ? imageDTOToImageWithDims(imageDTO) : null;
},
rgLayerIPAdapterWeightChanged: (
state,
action: PayloadAction<{ layerId: string; ipAdapterId: string; weight: number }>
) => {
const { layerId, ipAdapterId, weight } = action.payload;
const ipAdapter = selectRGLayerIPAdapterOrThrow(state, layerId, ipAdapterId);
ipAdapter.weight = weight;
},
rgLayerIPAdapterBeginEndStepPctChanged: (
state,
action: PayloadAction<{ layerId: string; ipAdapterId: string; beginEndStepPct: [number, number] }>
) => {
const { layerId, ipAdapterId, beginEndStepPct } = action.payload;
const ipAdapter = selectRGLayerIPAdapterOrThrow(state, layerId, ipAdapterId);
ipAdapter.beginEndStepPct = beginEndStepPct;
},
rgLayerIPAdapterMethodChanged: (
state,
action: PayloadAction<{ layerId: string; ipAdapterId: string; method: IPMethodV2 }>
) => {
const { layerId, ipAdapterId, method } = action.payload;
const ipAdapter = selectRGLayerIPAdapterOrThrow(state, layerId, ipAdapterId);
ipAdapter.method = method;
},
rgLayerIPAdapterModelChanged: (
state,
action: PayloadAction<{
layerId: string;
ipAdapterId: string;
modelConfig: IPAdapterModelConfig | null;
}>
) => {
const { layerId, ipAdapterId, modelConfig } = action.payload;
const ipAdapter = selectRGLayerIPAdapterOrThrow(state, layerId, ipAdapterId);
if (!modelConfig) {
ipAdapter.model = null;
return;
}
ipAdapter.model = zModelIdentifierField.parse(modelConfig);
},
rgLayerIPAdapterCLIPVisionModelChanged: (
state,
action: PayloadAction<{ layerId: string; ipAdapterId: string; clipVisionModel: CLIPVisionModelV2 }>
) => {
const { layerId, ipAdapterId, clipVisionModel } = action.payload;
const ipAdapter = selectRGLayerIPAdapterOrThrow(state, layerId, ipAdapterId);
ipAdapter.clipVisionModel = clipVisionModel;
},
//#endregion
//#region Initial Image Layer
iiLayerAdded: {
reducer: (state, action: PayloadAction<{ layerId: string; imageDTO: ImageDTO | null }>) => {
const { layerId, imageDTO } = action.payload;
// Highlander! There can be only one!
state.layers = state.layers.filter((l) => !isInitialImageLayer(l));
const layer: InitialImageLayer = {
id: layerId,
type: 'initial_image_layer',
opacity: 1,
x: 0,
y: 0,
bbox: null,
bboxNeedsUpdate: false,
isEnabled: true,
image: imageDTO ? imageDTOToImageWithDims(imageDTO) : null,
isSelected: true,
denoisingStrength: 0.75,
};
state.layers.push(layer);
exclusivelySelectLayer(state, layer.id);
},
prepare: (imageDTO: ImageDTO | null) => ({ payload: { layerId: INITIAL_IMAGE_LAYER_ID, imageDTO } }),
},
iiLayerRecalled: (state, action: PayloadAction<InitialImageLayer>) => {
state.layers = state.layers.filter((l) => !isInitialImageLayer(l));
state.layers.push({ ...action.payload, isSelected: true });
exclusivelySelectLayer(state, action.payload.id);
},
iiLayerImageChanged: (state, action: PayloadAction<{ layerId: string; imageDTO: ImageDTO | null }>) => {
const { layerId, imageDTO } = action.payload;
const layer = selectIILayerOrThrow(state, layerId);
layer.bbox = null;
layer.bboxNeedsUpdate = true;
layer.isEnabled = true;
layer.image = imageDTO ? imageDTOToImageWithDims(imageDTO) : null;
},
iiLayerOpacityChanged: (state, action: PayloadAction<{ layerId: string; opacity: number }>) => {
const { layerId, opacity } = action.payload;
const layer = selectIILayerOrThrow(state, layerId);
layer.opacity = opacity;
},
iiLayerDenoisingStrengthChanged: (state, action: PayloadAction<{ layerId: string; denoisingStrength: number }>) => {
const { layerId, denoisingStrength } = action.payload;
const layer = selectIILayerOrThrow(state, layerId);
layer.denoisingStrength = denoisingStrength;
},
//#endregion
//#region Globals
positivePromptChanged: (state, action: PayloadAction<string>) => {
state.positivePrompt = action.payload;
},
negativePromptChanged: (state, action: PayloadAction<string>) => {
state.negativePrompt = action.payload;
},
positivePrompt2Changed: (state, action: PayloadAction<string>) => {
state.positivePrompt2 = action.payload;
},
negativePrompt2Changed: (state, action: PayloadAction<string>) => {
state.negativePrompt2 = action.payload;
},
shouldConcatPromptsChanged: (state, action: PayloadAction<boolean>) => {
state.shouldConcatPrompts = action.payload;
},
widthChanged: (state, action: PayloadAction<{ width: number; updateAspectRatio?: boolean; clamp?: boolean }>) => {
const { width, updateAspectRatio, clamp } = action.payload;
state.size.width = clamp ? Math.max(roundDownToMultiple(width, 8), 64) : width;
if (updateAspectRatio) {
state.size.aspectRatio.value = state.size.width / state.size.height;
state.size.aspectRatio.id = 'Free';
state.size.aspectRatio.isLocked = false;
}
},
heightChanged: (state, action: PayloadAction<{ height: number; updateAspectRatio?: boolean; clamp?: boolean }>) => {
const { height, updateAspectRatio, clamp } = action.payload;
state.size.height = clamp ? Math.max(roundDownToMultiple(height, 8), 64) : height;
if (updateAspectRatio) {
state.size.aspectRatio.value = state.size.width / state.size.height;
state.size.aspectRatio.id = 'Free';
state.size.aspectRatio.isLocked = false;
}
},
aspectRatioChanged: (state, action: PayloadAction<AspectRatioState>) => {
state.size.aspectRatio = action.payload;
},
brushSizeChanged: (state, action: PayloadAction<number>) => {
state.brushSize = Math.round(action.payload);
},
globalMaskLayerOpacityChanged: (state, action: PayloadAction<number>) => {
state.globalMaskLayerOpacity = action.payload;
},
undo: (state) => {
// Invalidate the bbox for all layers to prevent stale bboxes
for (const layer of state.layers.filter(isRenderableLayer)) {
layer.bboxNeedsUpdate = true;
}
},
redo: (state) => {
// Invalidate the bbox for all layers to prevent stale bboxes
for (const layer of state.layers.filter(isRenderableLayer)) {
layer.bboxNeedsUpdate = true;
}
},
//#endregion
},
extraReducers(builder) {
builder.addCase(modelChanged, (state, action) => {
const newModel = action.payload;
if (!newModel || action.meta.previousModel?.base === newModel.base) {
// Model was cleared or the base didn't change
return;
}
const optimalDimension = getOptimalDimension(newModel);
if (getIsSizeOptimal(state.size.width, state.size.height, optimalDimension)) {
return;
}
const { width, height } = calculateNewSize(state.size.aspectRatio.value, optimalDimension * optimalDimension);
state.size.width = width;
state.size.height = height;
});
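// Illustrative numbers: switching base models targets an area of
// optimalDimension squared (1024 * 1024 for an SDXL base). A 1:1 aspect
// ratio at that area is 1024x1024; a 16:9 ratio at the same area is about
// 1365x768 before any rounding calculateNewSize applies.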
// // TODO: This is a temp fix to reduce issues with T2I adapter having a different downscaling
// // factor than the UNet. Hopefully we get an upstream fix in diffusers.
// builder.addMatcher(isAnyControlAdapterAdded, (state, action) => {
// if (action.payload.type === 't2i_adapter') {
// state.size.width = roundToMultiple(state.size.width, 64);
// state.size.height = roundToMultiple(state.size.height, 64);
// }
// });
},
});
/**
* This class is used to cycle through a set of colors for the prompt region layers.
*/
class LayerColors {
static COLORS: RgbColor[] = [
{ r: 121, g: 157, b: 219 }, // rgb(121, 157, 219)
{ r: 131, g: 214, b: 131 }, // rgb(131, 214, 131)
{ r: 250, g: 225, b: 80 }, // rgb(250, 225, 80)
{ r: 220, g: 144, b: 101 }, // rgb(220, 144, 101)
{ r: 224, g: 117, b: 117 }, // rgb(224, 117, 117)
{ r: 213, g: 139, b: 202 }, // rgb(213, 139, 202)
{ r: 161, g: 120, b: 214 }, // rgb(161, 120, 214)
];
static i = this.COLORS.length - 1;
/**
* Get the next color in the sequence. If a known color is provided, the next color will be the one after it.
*/
static next(currentColor?: RgbColor): RgbColor {
if (currentColor) {
const i = this.COLORS.findIndex((c) => isEqual(c, currentColor));
if (i !== -1) {
this.i = i;
}
}
this.i = (this.i + 1) % this.COLORS.length;
const color = this.COLORS[this.i];
assert(color);
return color;
}
}
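// Example of the cycling behavior, using the COLORS above: a fresh call to
// LayerColors.next() returns COLORS[0] (the static index starts at
// COLORS.length - 1 and is advanced before reading); LayerColors.next(COLORS[2])
// re-syncs the cursor to index 2 and returns COLORS[3]; after COLORS[6] the
// index wraps back to COLORS[0].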
export const {
// Any Layer Type
layerSelected,
layerVisibilityToggled,
layerTranslated,
layerBboxChanged,
layerReset,
layerDeleted,
layerMovedForward,
layerMovedToFront,
layerMovedBackward,
layerMovedToBack,
selectedLayerDeleted,
allLayersDeleted,
// CA Layers
caLayerAdded,
caLayerRecalled,
caLayerImageChanged,
caLayerProcessedImageChanged,
caLayerModelChanged,
caLayerControlModeChanged,
caLayerProcessorConfigChanged,
caLayerIsFilterEnabledChanged,
caLayerOpacityChanged,
caLayerIsProcessingImageChanged,
// IPA Layers
ipaLayerAdded,
ipaLayerRecalled,
ipaLayerImageChanged,
ipaLayerMethodChanged,
ipaLayerModelChanged,
ipaLayerCLIPVisionModelChanged,
// CA or IPA Layers
caOrIPALayerWeightChanged,
caOrIPALayerBeginEndStepPctChanged,
// RG Layers
rgLayerAdded,
rgLayerRecalled,
rgLayerPositivePromptChanged,
rgLayerNegativePromptChanged,
rgLayerPreviewColorChanged,
rgLayerLineAdded,
rgLayerPointsAdded,
rgLayerRectAdded,
rgLayerMaskImageUploaded,
rgLayerAutoNegativeChanged,
rgLayerIPAdapterAdded,
rgLayerIPAdapterDeleted,
rgLayerIPAdapterImageChanged,
rgLayerIPAdapterWeightChanged,
rgLayerIPAdapterBeginEndStepPctChanged,
rgLayerIPAdapterMethodChanged,
rgLayerIPAdapterModelChanged,
rgLayerIPAdapterCLIPVisionModelChanged,
// II Layer
iiLayerAdded,
iiLayerRecalled,
iiLayerImageChanged,
iiLayerOpacityChanged,
iiLayerDenoisingStrengthChanged,
// Globals
positivePromptChanged,
negativePromptChanged,
positivePrompt2Changed,
negativePrompt2Changed,
shouldConcatPromptsChanged,
widthChanged,
heightChanged,
aspectRatioChanged,
brushSizeChanged,
globalMaskLayerOpacityChanged,
undo,
redo,
} = controlLayersSlice.actions;
export const selectControlLayersSlice = (state: RootState) => state.controlLayers;
/* eslint-disable-next-line @typescript-eslint/no-explicit-any */
const migrateControlLayersState = (state: any): any => {
if (state._version === 1) {
// Reset state for users on v1 (e.g. beta users); the v1 shape is incompatible and could cause errors
return deepClone(initialControlLayersState);
}
return state;
};
export const $isDrawing = atom(false);
export const $lastMouseDownPos = atom<Vector2d | null>(null);
export const $tool = atom<Tool>('brush');
export const $lastCursorPos = atom<Vector2d | null>(null);
// IDs for singleton Konva layers and objects
export const TOOL_PREVIEW_LAYER_ID = 'tool_preview_layer';
export const TOOL_PREVIEW_BRUSH_GROUP_ID = 'tool_preview_layer.brush_group';
export const TOOL_PREVIEW_BRUSH_FILL_ID = 'tool_preview_layer.brush_fill';
export const TOOL_PREVIEW_BRUSH_BORDER_INNER_ID = 'tool_preview_layer.brush_border_inner';
export const TOOL_PREVIEW_BRUSH_BORDER_OUTER_ID = 'tool_preview_layer.brush_border_outer';
export const TOOL_PREVIEW_RECT_ID = 'tool_preview_layer.rect';
export const BACKGROUND_LAYER_ID = 'background_layer';
export const BACKGROUND_RECT_ID = 'background_layer.rect';
export const NO_LAYERS_MESSAGE_LAYER_ID = 'no_layers_message';
// Names (aka classes) for Konva layers and objects
export const CA_LAYER_NAME = 'control_adapter_layer';
export const CA_LAYER_IMAGE_NAME = 'control_adapter_layer.image';
export const RG_LAYER_NAME = 'regional_guidance_layer';
export const RG_LAYER_LINE_NAME = 'regional_guidance_layer.line';
export const RG_LAYER_OBJECT_GROUP_NAME = 'regional_guidance_layer.object_group';
export const RG_LAYER_RECT_NAME = 'regional_guidance_layer.rect';
export const INITIAL_IMAGE_LAYER_ID = 'singleton_initial_image_layer';
export const INITIAL_IMAGE_LAYER_NAME = 'initial_image_layer';
export const INITIAL_IMAGE_LAYER_IMAGE_NAME = 'initial_image_layer.image';
export const LAYER_BBOX_NAME = 'layer.bbox';
export const COMPOSITING_RECT_NAME = 'compositing-rect';
// Getters for non-singleton layer and object IDs
export const getRGLayerId = (layerId: string) => `${RG_LAYER_NAME}_${layerId}`;
const getRGLayerLineId = (layerId: string, lineId: string) => `${layerId}.line_${lineId}`;
const getRGLayerRectId = (layerId: string, rectId: string) => `${layerId}.rect_${rectId}`;
export const getRGLayerObjectGroupId = (layerId: string, groupId: string) => `${layerId}.objectGroup_${groupId}`;
export const getLayerBboxId = (layerId: string) => `${layerId}.bbox`;
export const getCALayerId = (layerId: string) => `control_adapter_layer_${layerId}`;
export const getCALayerImageId = (layerId: string, imageName: string) => `${layerId}.image_${imageName}`;
export const getIILayerImageId = (layerId: string, imageName: string) => `${layerId}.image_${imageName}`;
export const getIPALayerId = (layerId: string) => `ip_adapter_layer_${layerId}`;
export const controlLayersPersistConfig: PersistConfig<ControlLayersState> = {
name: controlLayersSlice.name,
initialState: initialControlLayersState,
migrate: migrateControlLayersState,
persistDenylist: [],
};
// These actions are _individually_ grouped together as single undoable actions
const undoableGroupByMatcher = isAnyOf(
layerTranslated,
brushSizeChanged,
globalMaskLayerOpacityChanged,
positivePromptChanged,
negativePromptChanged,
positivePrompt2Changed,
negativePrompt2Changed,
rgLayerPositivePromptChanged,
rgLayerNegativePromptChanged,
rgLayerPreviewColorChanged
);
// Group keys used below to bucket line-drawing actions into logical lines (constants avoid typos in the keys)
const LINE_1 = 'LINE_1';
const LINE_2 = 'LINE_2';
export const controlLayersUndoableConfig: UndoableOptions<ControlLayersState, UnknownAction> = {
limit: 64,
undoType: controlLayersSlice.actions.undo.type,
redoType: controlLayersSlice.actions.redo.type,
groupBy: (action, state, history) => {
// Lines are started with `rgLayerLineAdded` and may have any number of subsequent `rgLayerPointsAdded` events.
// We can use a double-buffer-esque trick to group each "logical" line as a single undoable action, without grouping
// separate logical lines as a single undo action.
if (rgLayerLineAdded.match(action)) {
return history.group === LINE_1 ? LINE_2 : LINE_1;
}
if (rgLayerPointsAdded.match(action)) {
if (history.group === LINE_1 || history.group === LINE_2) {
return history.group;
}
}
if (undoableGroupByMatcher(action)) {
return action.type;
}
return null;
},
filter: (action, _state, _history) => {
// Ignore all actions from other slices
if (!action.type.startsWith(controlLayersSlice.name)) {
return false;
}
// This action is triggered on state changes, including when we undo. If we do not ignore this action, when we
// undo, this action triggers and empties the future states array. Therefore, we must ignore this action.
if (layerBboxChanged.match(action)) {
return false;
}
return true;
},
};
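A standalone sketch of the double-buffer grouping trick used in `groupBy` above (hypothetical action names; redux-undo treats consecutive actions that return the same non-null group key as a single undoable step):

type HistoryLike = { group: string | null };

const GROUP_A = 'GROUP_A';
const GROUP_B = 'GROUP_B';

const groupBy = (actionType: string, history: HistoryLike): string | null => {
  if (actionType === 'lineStarted') {
    // Alternate keys so a new line never merges with the previous one.
    return history.group === GROUP_A ? GROUP_B : GROUP_A;
  }
  if (actionType === 'pointAdded' && (history.group === GROUP_A || history.group === GROUP_B)) {
    // Points join whichever line is currently open.
    return history.group;
  }
  return null; // everything else is its own undo step
};

// Two strokes -> two undo steps:
// lineStarted -> GROUP_A, pointAdded x N -> GROUP_A (same step),
// lineStarted -> GROUP_B (new step), pointAdded x M -> GROUP_B.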

View File

@@ -1,131 +0,0 @@
import {
zControlNetConfigV2,
zImageWithDims,
zIPAdapterConfigV2,
zT2IAdapterConfigV2,
} from 'features/controlLayers/util/controlAdapters';
import type { AspectRatioState } from 'features/parameters/components/ImageSize/types';
import {
type ParameterHeight,
type ParameterNegativePrompt,
type ParameterNegativeStylePromptSDXL,
type ParameterPositivePrompt,
type ParameterPositiveStylePromptSDXL,
type ParameterWidth,
zAutoNegative,
zParameterNegativePrompt,
zParameterPositivePrompt,
zParameterStrength,
} from 'features/parameters/types/parameterSchemas';
import { z } from 'zod';
const zTool = z.enum(['brush', 'eraser', 'move', 'rect']);
export type Tool = z.infer<typeof zTool>;
const zDrawingTool = zTool.extract(['brush', 'eraser']);
export type DrawingTool = z.infer<typeof zDrawingTool>;
const zPoints = z.array(z.number()).refine((points) => points.length % 2 === 0, {
message: 'Must have an even number of points',
});
const zVectorMaskLine = z.object({
id: z.string(),
type: z.literal('vector_mask_line'),
tool: zDrawingTool,
strokeWidth: z.number().min(1),
points: zPoints,
});
export type VectorMaskLine = z.infer<typeof zVectorMaskLine>;
const zVectorMaskRect = z.object({
id: z.string(),
type: z.literal('vector_mask_rect'),
x: z.number(),
y: z.number(),
width: z.number().min(1),
height: z.number().min(1),
});
export type VectorMaskRect = z.infer<typeof zVectorMaskRect>;
const zLayerBase = z.object({
id: z.string(),
isEnabled: z.boolean().default(true),
isSelected: z.boolean().default(true),
});
const zRect = z.object({
x: z.number(),
y: z.number(),
width: z.number().min(1),
height: z.number().min(1),
});
const zRenderableLayerBase = zLayerBase.extend({
x: z.number(),
y: z.number(),
bbox: zRect.nullable(),
bboxNeedsUpdate: z.boolean(),
});
const zControlAdapterLayer = zRenderableLayerBase.extend({
type: z.literal('control_adapter_layer'),
opacity: z.number().gte(0).lte(1),
isFilterEnabled: z.boolean(),
controlAdapter: z.discriminatedUnion('type', [zControlNetConfigV2, zT2IAdapterConfigV2]),
});
export type ControlAdapterLayer = z.infer<typeof zControlAdapterLayer>;
const zIPAdapterLayer = zLayerBase.extend({
type: z.literal('ip_adapter_layer'),
ipAdapter: zIPAdapterConfigV2,
});
export type IPAdapterLayer = z.infer<typeof zIPAdapterLayer>;
const zRgbColor = z.object({
r: z.number().int().min(0).max(255),
g: z.number().int().min(0).max(255),
b: z.number().int().min(0).max(255),
});
const zRegionalGuidanceLayer = zRenderableLayerBase.extend({
type: z.literal('regional_guidance_layer'),
maskObjects: z.array(z.discriminatedUnion('type', [zVectorMaskLine, zVectorMaskRect])),
positivePrompt: zParameterPositivePrompt.nullable(),
negativePrompt: zParameterNegativePrompt.nullable(),
ipAdapters: z.array(zIPAdapterConfigV2),
previewColor: zRgbColor,
autoNegative: zAutoNegative,
uploadedMaskImage: zImageWithDims.nullable(),
});
export type RegionalGuidanceLayer = z.infer<typeof zRegionalGuidanceLayer>;
const zInitialImageLayer = zRenderableLayerBase.extend({
type: z.literal('initial_image_layer'),
opacity: z.number().gte(0).lte(1),
image: zImageWithDims.nullable(),
denoisingStrength: zParameterStrength,
});
export type InitialImageLayer = z.infer<typeof zInitialImageLayer>;
export const zLayer = z.discriminatedUnion('type', [
zRegionalGuidanceLayer,
zControlAdapterLayer,
zIPAdapterLayer,
zInitialImageLayer,
]);
export type Layer = z.infer<typeof zLayer>;
export type ControlLayersState = {
_version: 2;
selectedLayerId: string | null;
layers: Layer[];
brushSize: number;
globalMaskLayerOpacity: number;
positivePrompt: ParameterPositivePrompt;
negativePrompt: ParameterNegativePrompt;
positivePrompt2: ParameterPositiveStylePromptSDXL;
negativePrompt2: ParameterNegativeStylePromptSDXL;
shouldConcatPrompts: boolean;
size: {
width: ParameterWidth;
height: ParameterHeight;
aspectRatio: AspectRatioState;
};
};
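A minimal sketch of the discriminated-union pattern `zLayer` relies on (hypothetical two-member union; parsing and narrowing both key off the `type` literal):

import { z } from 'zod';

const zCircle = z.object({ type: z.literal('circle'), radius: z.number() });
const zSquare = z.object({ type: z.literal('square'), side: z.number() });
const zShape = z.discriminatedUnion('type', [zCircle, zSquare]);
type Shape = z.infer<typeof zShape>;

const area = (s: Shape): number =>
  // TypeScript narrows the union from the `type` discriminant.
  s.type === 'circle' ? Math.PI * s.radius ** 2 : s.side ** 2;

// zShape.parse rejects objects whose `type` is not a known literal.
console.log(area(zShape.parse({ type: 'square', side: 3 }))); // 9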

View File

@@ -1,77 +0,0 @@
import type { S } from 'services/api/types';
import type { Equals } from 'tsafe';
import { assert } from 'tsafe';
import { describe, test } from 'vitest';
import type {
_CannyProcessorConfig,
_ColorMapProcessorConfig,
_ContentShuffleProcessorConfig,
_DepthAnythingProcessorConfig,
_DWOpenposeProcessorConfig,
_HedProcessorConfig,
_LineartAnimeProcessorConfig,
_LineartProcessorConfig,
_MediapipeFaceProcessorConfig,
_MidasDepthProcessorConfig,
_MlsdProcessorConfig,
_NormalbaeProcessorConfig,
_PidiProcessorConfig,
_ZoeDepthProcessorConfig,
CannyProcessorConfig,
CLIPVisionModelV2,
ColorMapProcessorConfig,
ContentShuffleProcessorConfig,
ControlModeV2,
DepthAnythingModelSize,
DepthAnythingProcessorConfig,
DWOpenposeProcessorConfig,
HedProcessorConfig,
IPMethodV2,
LineartAnimeProcessorConfig,
LineartProcessorConfig,
MediapipeFaceProcessorConfig,
MidasDepthProcessorConfig,
MlsdProcessorConfig,
NormalbaeProcessorConfig,
PidiProcessorConfig,
ProcessorConfig,
ProcessorTypeV2,
ZoeDepthProcessorConfig,
} from './controlAdapters';
describe('Control Adapter Types', () => {
test('ProcessorType', () => {
assert<Equals<ProcessorConfig['type'], ProcessorTypeV2>>();
});
test('IP Adapter Method', () => {
assert<Equals<NonNullable<S['IPAdapterInvocation']['method']>, IPMethodV2>>();
});
test('CLIP Vision Model', () => {
assert<Equals<NonNullable<S['IPAdapterInvocation']['clip_vision_model']>, CLIPVisionModelV2>>();
});
test('Control Mode', () => {
assert<Equals<NonNullable<S['ControlNetInvocation']['control_mode']>, ControlModeV2>>();
});
test('DepthAnything Model Size', () => {
assert<Equals<NonNullable<S['DepthAnythingImageProcessorInvocation']['model_size']>, DepthAnythingModelSize>>();
});
test('Processor Configs', () => {
// The processor configs are manually modeled zod schemas. This test ensures that the inferred types are correct.
// The types prefixed with `_` are types generated from OpenAPI, while the types without the prefix are manually modeled.
assert<Equals<_CannyProcessorConfig, CannyProcessorConfig>>();
assert<Equals<_ColorMapProcessorConfig, ColorMapProcessorConfig>>();
assert<Equals<_ContentShuffleProcessorConfig, ContentShuffleProcessorConfig>>();
assert<Equals<_DepthAnythingProcessorConfig, DepthAnythingProcessorConfig>>();
assert<Equals<_HedProcessorConfig, HedProcessorConfig>>();
assert<Equals<_LineartAnimeProcessorConfig, LineartAnimeProcessorConfig>>();
assert<Equals<_LineartProcessorConfig, LineartProcessorConfig>>();
assert<Equals<_MediapipeFaceProcessorConfig, MediapipeFaceProcessorConfig>>();
assert<Equals<_MidasDepthProcessorConfig, MidasDepthProcessorConfig>>();
assert<Equals<_MlsdProcessorConfig, MlsdProcessorConfig>>();
assert<Equals<_NormalbaeProcessorConfig, NormalbaeProcessorConfig>>();
assert<Equals<_DWOpenposeProcessorConfig, DWOpenposeProcessorConfig>>();
assert<Equals<_PidiProcessorConfig, PidiProcessorConfig>>();
assert<Equals<_ZoeDepthProcessorConfig, ZoeDepthProcessorConfig>>();
});
});
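For context, `assert<Equals<A, B>>()` from tsafe is a compile-time-only check: it fails type-checking when A and B differ and does nothing meaningful at runtime. A minimal sketch with hypothetical types:

import type { Equals } from 'tsafe';
import { assert } from 'tsafe';

type FromOpenAPI = { id: string; weight: number };
type HandWritten = { id: string; weight: number };

// Compiles only if the two types are exactly equal; a drifted field
// (e.g. weight?: number) would surface as a type error, not a runtime failure.
assert<Equals<FromOpenAPI, HandWritten>>();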

View File

@@ -1,591 +0,0 @@
import { deepClone } from 'common/util/deepClone';
import { zModelIdentifierField } from 'features/nodes/types/common';
import { merge, omit } from 'lodash-es';
import type {
BaseModelType,
CannyImageProcessorInvocation,
ColorMapImageProcessorInvocation,
ContentShuffleImageProcessorInvocation,
ControlNetModelConfig,
DepthAnythingImageProcessorInvocation,
DWOpenposeImageProcessorInvocation,
Graph,
HedImageProcessorInvocation,
ImageDTO,
LineartAnimeImageProcessorInvocation,
LineartImageProcessorInvocation,
MediapipeFaceProcessorInvocation,
MidasDepthImageProcessorInvocation,
MlsdImageProcessorInvocation,
NormalbaeImageProcessorInvocation,
PidiImageProcessorInvocation,
T2IAdapterModelConfig,
ZoeDepthImageProcessorInvocation,
} from 'services/api/types';
import { z } from 'zod';
const zId = z.string().min(1);
const zCannyProcessorConfig = z.object({
id: zId,
type: z.literal('canny_image_processor'),
low_threshold: z.number().int().gte(0).lte(255),
high_threshold: z.number().int().gte(0).lte(255),
});
export type _CannyProcessorConfig = Required<
Pick<CannyImageProcessorInvocation, 'id' | 'type' | 'low_threshold' | 'high_threshold'>
>;
export type CannyProcessorConfig = z.infer<typeof zCannyProcessorConfig>;
const zColorMapProcessorConfig = z.object({
id: zId,
type: z.literal('color_map_image_processor'),
color_map_tile_size: z.number().int().gte(1),
});
export type _ColorMapProcessorConfig = Required<
Pick<ColorMapImageProcessorInvocation, 'id' | 'type' | 'color_map_tile_size'>
>;
export type ColorMapProcessorConfig = z.infer<typeof zColorMapProcessorConfig>;
const zContentShuffleProcessorConfig = z.object({
id: zId,
type: z.literal('content_shuffle_image_processor'),
w: z.number().int().gte(0),
h: z.number().int().gte(0),
f: z.number().int().gte(0),
});
export type _ContentShuffleProcessorConfig = Required<
Pick<ContentShuffleImageProcessorInvocation, 'id' | 'type' | 'w' | 'h' | 'f'>
>;
export type ContentShuffleProcessorConfig = z.infer<typeof zContentShuffleProcessorConfig>;
const zDepthAnythingModelSize = z.enum(['large', 'base', 'small']);
export type DepthAnythingModelSize = z.infer<typeof zDepthAnythingModelSize>;
export const isDepthAnythingModelSize = (v: unknown): v is DepthAnythingModelSize =>
zDepthAnythingModelSize.safeParse(v).success;
const zDepthAnythingProcessorConfig = z.object({
id: zId,
type: z.literal('depth_anything_image_processor'),
model_size: zDepthAnythingModelSize,
});
export type _DepthAnythingProcessorConfig = Required<
Pick<DepthAnythingImageProcessorInvocation, 'id' | 'type' | 'model_size'>
>;
export type DepthAnythingProcessorConfig = z.infer<typeof zDepthAnythingProcessorConfig>;
const zHedProcessorConfig = z.object({
id: zId,
type: z.literal('hed_image_processor'),
scribble: z.boolean(),
});
export type _HedProcessorConfig = Required<Pick<HedImageProcessorInvocation, 'id' | 'type' | 'scribble'>>;
export type HedProcessorConfig = z.infer<typeof zHedProcessorConfig>;
const zLineartAnimeProcessorConfig = z.object({
id: zId,
type: z.literal('lineart_anime_image_processor'),
});
export type _LineartAnimeProcessorConfig = Required<Pick<LineartAnimeImageProcessorInvocation, 'id' | 'type'>>;
export type LineartAnimeProcessorConfig = z.infer<typeof zLineartAnimeProcessorConfig>;
const zLineartProcessorConfig = z.object({
id: zId,
type: z.literal('lineart_image_processor'),
coarse: z.boolean(),
});
export type _LineartProcessorConfig = Required<Pick<LineartImageProcessorInvocation, 'id' | 'type' | 'coarse'>>;
export type LineartProcessorConfig = z.infer<typeof zLineartProcessorConfig>;
const zMediapipeFaceProcessorConfig = z.object({
id: zId,
type: z.literal('mediapipe_face_processor'),
max_faces: z.number().int().gte(1),
min_confidence: z.number().gte(0).lte(1),
});
export type _MediapipeFaceProcessorConfig = Required<
Pick<MediapipeFaceProcessorInvocation, 'id' | 'type' | 'max_faces' | 'min_confidence'>
>;
export type MediapipeFaceProcessorConfig = z.infer<typeof zMediapipeFaceProcessorConfig>;
const zMidasDepthProcessorConfig = z.object({
id: zId,
type: z.literal('midas_depth_image_processor'),
a_mult: z.number().gte(0),
bg_th: z.number().gte(0),
});
export type _MidasDepthProcessorConfig = Required<
Pick<MidasDepthImageProcessorInvocation, 'id' | 'type' | 'a_mult' | 'bg_th'>
>;
export type MidasDepthProcessorConfig = z.infer<typeof zMidasDepthProcessorConfig>;
const zMlsdProcessorConfig = z.object({
id: zId,
type: z.literal('mlsd_image_processor'),
thr_v: z.number().gte(0),
thr_d: z.number().gte(0),
});
export type _MlsdProcessorConfig = Required<Pick<MlsdImageProcessorInvocation, 'id' | 'type' | 'thr_v' | 'thr_d'>>;
export type MlsdProcessorConfig = z.infer<typeof zMlsdProcessorConfig>;
const zNormalbaeProcessorConfig = z.object({
id: zId,
type: z.literal('normalbae_image_processor'),
});
export type _NormalbaeProcessorConfig = Required<Pick<NormalbaeImageProcessorInvocation, 'id' | 'type'>>;
export type NormalbaeProcessorConfig = z.infer<typeof zNormalbaeProcessorConfig>;
const zDWOpenposeProcessorConfig = z.object({
id: zId,
type: z.literal('dw_openpose_image_processor'),
draw_body: z.boolean(),
draw_face: z.boolean(),
draw_hands: z.boolean(),
});
export type _DWOpenposeProcessorConfig = Required<
Pick<DWOpenposeImageProcessorInvocation, 'id' | 'type' | 'draw_body' | 'draw_face' | 'draw_hands'>
>;
export type DWOpenposeProcessorConfig = z.infer<typeof zDWOpenposeProcessorConfig>;
const zPidiProcessorConfig = z.object({
id: zId,
type: z.literal('pidi_image_processor'),
safe: z.boolean(),
scribble: z.boolean(),
});
export type _PidiProcessorConfig = Required<Pick<PidiImageProcessorInvocation, 'id' | 'type' | 'safe' | 'scribble'>>;
export type PidiProcessorConfig = z.infer<typeof zPidiProcessorConfig>;
const zZoeDepthProcessorConfig = z.object({
id: zId,
type: z.literal('zoe_depth_image_processor'),
});
export type _ZoeDepthProcessorConfig = Required<Pick<ZoeDepthImageProcessorInvocation, 'id' | 'type'>>;
export type ZoeDepthProcessorConfig = z.infer<typeof zZoeDepthProcessorConfig>;
const zProcessorConfig = z.discriminatedUnion('type', [
zCannyProcessorConfig,
zColorMapProcessorConfig,
zContentShuffleProcessorConfig,
zDepthAnythingProcessorConfig,
zHedProcessorConfig,
zLineartAnimeProcessorConfig,
zLineartProcessorConfig,
zMediapipeFaceProcessorConfig,
zMidasDepthProcessorConfig,
zMlsdProcessorConfig,
zNormalbaeProcessorConfig,
zDWOpenposeProcessorConfig,
zPidiProcessorConfig,
zZoeDepthProcessorConfig,
]);
export type ProcessorConfig = z.infer<typeof zProcessorConfig>;
export const zImageWithDims = z.object({
name: z.string(),
width: z.number().int().positive(),
height: z.number().int().positive(),
});
export type ImageWithDims = z.infer<typeof zImageWithDims>;
const zBeginEndStepPct = z
.tuple([z.number().gte(0).lte(1), z.number().gte(0).lte(1)])
.refine(([begin, end]) => begin < end, {
message: 'Begin must be less than end',
});
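// e.g. [0, 0.5] passes and applies an adapter for the first half of the
// steps, while [0.5, 0.5] fails the refine above (begin must be strictly
// less than end).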
const zControlAdapterBase = z.object({
id: zId,
weight: z.number().gte(0).lte(1),
image: zImageWithDims.nullable(),
processedImage: zImageWithDims.nullable(),
isProcessingImage: z.boolean(),
processorConfig: zProcessorConfig.nullable(),
beginEndStepPct: zBeginEndStepPct,
});
const zControlModeV2 = z.enum(['balanced', 'more_prompt', 'more_control', 'unbalanced']);
export type ControlModeV2 = z.infer<typeof zControlModeV2>;
export const isControlModeV2 = (v: unknown): v is ControlModeV2 => zControlModeV2.safeParse(v).success;
export const zControlNetConfigV2 = zControlAdapterBase.extend({
type: z.literal('controlnet'),
model: zModelIdentifierField.nullable(),
controlMode: zControlModeV2,
});
export type ControlNetConfigV2 = z.infer<typeof zControlNetConfigV2>;
export const zT2IAdapterConfigV2 = zControlAdapterBase.extend({
type: z.literal('t2i_adapter'),
model: zModelIdentifierField.nullable(),
});
export type T2IAdapterConfigV2 = z.infer<typeof zT2IAdapterConfigV2>;
const zCLIPVisionModelV2 = z.enum(['ViT-H', 'ViT-G']);
export type CLIPVisionModelV2 = z.infer<typeof zCLIPVisionModelV2>;
export const isCLIPVisionModelV2 = (v: unknown): v is CLIPVisionModelV2 => zCLIPVisionModelV2.safeParse(v).success;
const zIPMethodV2 = z.enum(['full', 'style', 'composition']);
export type IPMethodV2 = z.infer<typeof zIPMethodV2>;
export const isIPMethodV2 = (v: unknown): v is IPMethodV2 => zIPMethodV2.safeParse(v).success;
export const zIPAdapterConfigV2 = z.object({
id: zId,
type: z.literal('ip_adapter'),
weight: z.number().gte(0).lte(1),
method: zIPMethodV2,
image: zImageWithDims.nullable(),
model: zModelIdentifierField.nullable(),
clipVisionModel: zCLIPVisionModelV2,
beginEndStepPct: zBeginEndStepPct,
});
export type IPAdapterConfigV2 = z.infer<typeof zIPAdapterConfigV2>;
const zProcessorTypeV2 = z.enum([
'canny_image_processor',
'color_map_image_processor',
'content_shuffle_image_processor',
'depth_anything_image_processor',
'hed_image_processor',
'lineart_anime_image_processor',
'lineart_image_processor',
'mediapipe_face_processor',
'midas_depth_image_processor',
'mlsd_image_processor',
'normalbae_image_processor',
'dw_openpose_image_processor',
'pidi_image_processor',
'zoe_depth_image_processor',
]);
export type ProcessorTypeV2 = z.infer<typeof zProcessorTypeV2>;
export const isProcessorTypeV2 = (v: unknown): v is ProcessorTypeV2 => zProcessorTypeV2.safeParse(v).success;
type ProcessorData<T extends ProcessorTypeV2> = {
type: T;
labelTKey: string;
descriptionTKey: string;
buildDefaults(baseModel?: BaseModelType): Extract<ProcessorConfig, { type: T }>;
buildNode(
image: ImageWithDims,
config: Extract<ProcessorConfig, { type: T }>
): Extract<Graph['nodes'][string], { type: T }>;
};
const minDim = (image: ImageWithDims): number => Math.min(image.width, image.height);
type CAProcessorsData = {
[key in ProcessorTypeV2]: ProcessorData<key>;
};
/**
* A dict of ControlNet processors, including:
* - label translation key
* - description translation key
* - a builder to create default values for the config
* - a builder to create the node for the config
*
* TODO: Generate from the OpenAPI schema
*/
export const CA_PROCESSOR_DATA: CAProcessorsData = {
canny_image_processor: {
type: 'canny_image_processor',
labelTKey: 'controlnet.canny',
descriptionTKey: 'controlnet.cannyDescription',
buildDefaults: () => ({
id: 'canny_image_processor',
type: 'canny_image_processor',
low_threshold: 100,
high_threshold: 200,
}),
buildNode: (image, config) => ({
...config,
type: 'canny_image_processor',
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
},
color_map_image_processor: {
type: 'color_map_image_processor',
labelTKey: 'controlnet.colorMap',
descriptionTKey: 'controlnet.colorMapDescription',
buildDefaults: () => ({
id: 'color_map_image_processor',
type: 'color_map_image_processor',
color_map_tile_size: 64,
}),
buildNode: (image, config) => ({
...config,
type: 'color_map_image_processor',
image: { image_name: image.name },
}),
},
content_shuffle_image_processor: {
type: 'content_shuffle_image_processor',
labelTKey: 'controlnet.contentShuffle',
descriptionTKey: 'controlnet.contentShuffleDescription',
buildDefaults: (baseModel) => ({
id: 'content_shuffle_image_processor',
type: 'content_shuffle_image_processor',
h: baseModel === 'sdxl' ? 1024 : 512,
w: baseModel === 'sdxl' ? 1024 : 512,
f: baseModel === 'sdxl' ? 512 : 256,
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
},
depth_anything_image_processor: {
type: 'depth_anything_image_processor',
labelTKey: 'controlnet.depthAnything',
descriptionTKey: 'controlnet.depthAnythingDescription',
buildDefaults: () => ({
id: 'depth_anything_image_processor',
type: 'depth_anything_image_processor',
model_size: 'small',
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.name },
resolution: minDim(image),
}),
},
hed_image_processor: {
type: 'hed_image_processor',
labelTKey: 'controlnet.hed',
descriptionTKey: 'controlnet.hedDescription',
buildDefaults: () => ({
id: 'hed_image_processor',
type: 'hed_image_processor',
scribble: false,
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
},
lineart_anime_image_processor: {
type: 'lineart_anime_image_processor',
labelTKey: 'controlnet.lineartAnime',
descriptionTKey: 'controlnet.lineartAnimeDescription',
buildDefaults: () => ({
id: 'lineart_anime_image_processor',
type: 'lineart_anime_image_processor',
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
},
lineart_image_processor: {
type: 'lineart_image_processor',
labelTKey: 'controlnet.lineart',
descriptionTKey: 'controlnet.lineartDescription',
buildDefaults: () => ({
id: 'lineart_image_processor',
type: 'lineart_image_processor',
coarse: false,
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
},
mediapipe_face_processor: {
type: 'mediapipe_face_processor',
labelTKey: 'controlnet.mediapipeFace',
descriptionTKey: 'controlnet.mediapipeFaceDescription',
buildDefaults: () => ({
id: 'mediapipe_face_processor',
type: 'mediapipe_face_processor',
max_faces: 1,
min_confidence: 0.5,
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
},
midas_depth_image_processor: {
type: 'midas_depth_image_processor',
labelTKey: 'controlnet.depthMidas',
descriptionTKey: 'controlnet.depthMidasDescription',
buildDefaults: () => ({
id: 'midas_depth_image_processor',
type: 'midas_depth_image_processor',
a_mult: 2,
bg_th: 0.1,
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
},
mlsd_image_processor: {
type: 'mlsd_image_processor',
labelTKey: 'controlnet.mlsd',
descriptionTKey: 'controlnet.mlsdDescription',
buildDefaults: () => ({
id: 'mlsd_image_processor',
type: 'mlsd_image_processor',
thr_d: 0.1,
thr_v: 0.1,
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
},
normalbae_image_processor: {
type: 'normalbae_image_processor',
labelTKey: 'controlnet.normalBae',
descriptionTKey: 'controlnet.normalBaeDescription',
buildDefaults: () => ({
id: 'normalbae_image_processor',
type: 'normalbae_image_processor',
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
},
dw_openpose_image_processor: {
type: 'dw_openpose_image_processor',
labelTKey: 'controlnet.dwOpenpose',
descriptionTKey: 'controlnet.dwOpenposeDescription',
buildDefaults: () => ({
id: 'dw_openpose_image_processor',
type: 'dw_openpose_image_processor',
draw_body: true,
draw_face: false,
draw_hands: false,
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.name },
image_resolution: minDim(image),
}),
},
pidi_image_processor: {
type: 'pidi_image_processor',
labelTKey: 'controlnet.pidi',
descriptionTKey: 'controlnet.pidiDescription',
buildDefaults: () => ({
id: 'pidi_image_processor',
type: 'pidi_image_processor',
scribble: false,
safe: false,
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
},
zoe_depth_image_processor: {
type: 'zoe_depth_image_processor',
labelTKey: 'controlnet.depthZoe',
descriptionTKey: 'controlnet.depthZoeDescription',
buildDefaults: () => ({
id: 'zoe_depth_image_processor',
type: 'zoe_depth_image_processor',
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.name },
}),
},
};
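// Usage sketch (illustration only, not part of the module): pick a processor
// entry, build its default config, then build the graph node for an image.
// `exampleImage` below is a hypothetical value.
//
//   const exampleImage: ImageWithDims = { name: 'example.png', width: 512, height: 768 };
//   const cannyConfig = CA_PROCESSOR_DATA.canny_image_processor.buildDefaults();
//   const cannyNode = CA_PROCESSOR_DATA.canny_image_processor.buildNode(exampleImage, cannyConfig);
//   // cannyNode.detect_resolution === 512, i.e. min(width, height)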
export const initialControlNetV2: Omit<ControlNetConfigV2, 'id'> = {
type: 'controlnet',
model: null,
weight: 1,
beginEndStepPct: [0, 1],
controlMode: 'balanced',
image: null,
processedImage: null,
isProcessingImage: false,
processorConfig: CA_PROCESSOR_DATA.canny_image_processor.buildDefaults(),
};
export const initialT2IAdapterV2: Omit<T2IAdapterConfigV2, 'id'> = {
type: 't2i_adapter',
model: null,
weight: 1,
beginEndStepPct: [0, 1],
image: null,
processedImage: null,
isProcessingImage: false,
processorConfig: CA_PROCESSOR_DATA.canny_image_processor.buildDefaults(),
};
export const initialIPAdapterV2: Omit<IPAdapterConfigV2, 'id'> = {
type: 'ip_adapter',
image: null,
model: null,
beginEndStepPct: [0, 1],
method: 'full',
clipVisionModel: 'ViT-H',
weight: 1,
};
export const buildControlNet = (id: string, overrides?: Partial<ControlNetConfigV2>): ControlNetConfigV2 => {
return merge(deepClone(initialControlNetV2), { id, ...overrides });
};
export const buildT2IAdapter = (id: string, overrides?: Partial<T2IAdapterConfigV2>): T2IAdapterConfigV2 => {
return merge(deepClone(initialT2IAdapterV2), { id, ...overrides });
};
export const buildIPAdapter = (id: string, overrides?: Partial<IPAdapterConfigV2>): IPAdapterConfigV2 => {
return merge(deepClone(initialIPAdapterV2), { id, ...overrides });
};
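// Usage sketch (hypothetical values): build a ControlNet config with a
// caller-supplied id, overriding defaults. `overrides` is deep-merged over a
// clone of the initial state, so the initial objects are never mutated.
//
//   const cn = buildControlNet('my_controlnet_id', { weight: 0.75, controlMode: 'more_control' });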
export const buildControlAdapterProcessorV2 = (
modelConfig: ControlNetModelConfig | T2IAdapterModelConfig
): ProcessorConfig | null => {
const defaultPreprocessor = modelConfig.default_settings?.preprocessor;
if (!isProcessorTypeV2(defaultPreprocessor)) {
return null;
}
const processorConfig = CA_PROCESSOR_DATA[defaultPreprocessor].buildDefaults(modelConfig.base);
return processorConfig;
};
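// Note: when the model config has no default preprocessor, or names one that
// is not a known ProcessorTypeV2, this returns null; callers may fall back to
// a default such as the canny processor used by the initial states above.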
export const imageDTOToImageWithDims = ({ image_name, width, height }: ImageDTO): ImageWithDims => ({
name: image_name,
width,
height,
});
export const t2iAdapterToControlNet = (t2iAdapter: T2IAdapterConfigV2): ControlNetConfigV2 => {
return {
...deepClone(t2iAdapter),
type: 'controlnet',
controlMode: initialControlNetV2.controlMode,
};
};
export const controlNetToT2IAdapter = (controlNet: ControlNetConfigV2): T2IAdapterConfigV2 => {
return {
...omit(deepClone(controlNet), 'controlMode'),
type: 't2i_adapter',
};
};
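// Conversion sketch: the two adapter shapes differ only by the `type` literal
// and `controlMode`, so converting a T2I adapter adds the default mode
// ('balanced') and converting back strips it again. For example:
//
//   const t2i = buildT2IAdapter('ca_example');
//   const asControlNet = t2iAdapterToControlNet(t2i); // controlMode: 'balanced'
//   const backToT2I = controlNetToT2IAdapter(asControlNet);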
