Mirror of https://github.com/invoke-ai/InvokeAI.git (synced 2026-01-16 06:08:02 -05:00)

Compare commits: `maryhipp/s...` → `ryan/upsca...` (31 commits)

`59284c707e`, `911792f258`, `9567c6e196`, `6e47bd14af`, `9ac9b6a014`, `459d487620`, `787e1bbb5f`, `bb5648983f`, `da066979cf`, `2c03a0fa53`, `ea9fc99ce7`, `a406fb725a`, `fe4112c54e`, `385ff0f86c`, `5c3517e2a6`, `7cb7f5107e`, `084ccccfff`, `b2cf57d8ff`, `f5bc616699`, `50021dad94`, `dda98f7a4b`, `76c97ec411`, `78852228cd`, `dec0ffd47c`, `638bf33483`, `b961495b57`, `b35cde7db7`, `103e34691b`, `0d90999a19`, `4cefa48307`, `6ade5df25c`
@@ -128,8 +128,7 @@ The queue operates on a series of download job objects. These objects
specify the source and destination of the download, and keep track of
the progress of the download.

Two job types are defined. `DownloadJob` and
`MultiFileDownloadJob`. The former is a pydantic object with the
The only job type currently implemented is `DownloadJob`, a pydantic object with the
following fields:

| **Field** | **Type** | **Default** | **Description** |
@@ -139,7 +138,7 @@ following fields:
| `dest` | Path | | Where to download to |
| `access_token` | str | | [optional] string containing authentication token for access |
| `on_start` | Callable | | [optional] callback when the download starts |
| `on_progress` | Callable | | [optional] callback called at intervals during download progress |
| `on_complete` | Callable | | [optional] callback called after successful download completion |
| `on_error` | Callable | | [optional] callback called after an error occurs |
| `id` | int | auto assigned | Job ID, an integer >= 0 |
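
A quick sketch of how these bookkeeping fields might be polled (hypothetical values; `queue` refers to the `DownloadQueueService` whose creation is shown later in this document):

```
# Sketch: polling a job's fields; assumes a started DownloadQueueService.
job = queue.download(source='http://www.example.com/model.safetensors', dest='/tmp/downloads')
print(job.id, job.status)          # auto-assigned id and current status code
print(job.bytes, job.total_bytes)  # byte counters updated as the download proceeds
```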
@@ -191,33 +190,6 @@ A cancelled job will have status `DownloadJobStatus.ERROR` and an
`error_type` field of "DownloadJobCancelledException". In addition,
the job's `cancelled` property will be set to True.

The `MultiFileDownloadJob` is used for diffusers model downloads,
which contain multiple files and directories under a common root:

| **Field** | **Type** | **Default** | **Description** |
|----------------|-----------------|---------------|-----------------|
| _Fields passed in at job creation time_ |
| `download_parts` | Set[DownloadJob]| | Component download jobs |
| `dest` | Path | | Where to download to |
| `on_start` | Callable | | [optional] callback when the download starts |
| `on_progress` | Callable | | [optional] callback called at intervals during download progress |
| `on_complete` | Callable | | [optional] callback called after successful download completion |
| `on_error` | Callable | | [optional] callback called after an error occurs |
| `id` | int | auto assigned | Job ID, an integer >= 0 |
| _Fields updated over the course of the download task_ |
| `status` | DownloadJobStatus| | Status code |
| `download_path` | Path | | Path to the root of the downloaded files |
| `bytes` | int | 0 | Bytes downloaded so far |
| `total_bytes` | int | 0 | Total size of the file at the remote site |
| `error_type` | str | | String version of the exception that caused an error during download |
| `error` | str | | String version of the traceback associated with an error |
| `cancelled` | bool | False | Set to true if the job was cancelled by the caller |

Note that the MultiFileDownloadJob does not support the `priority`,
`job_started`, `job_ended` or `content_type` attributes. You can get
these from the individual download jobs in `download_parts`.

### Callbacks

Download jobs can be associated with a series of callbacks, each with
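
A minimal sketch of such a callback, assuming (as the field tables above suggest) that each callback receives the `DownloadJob` as its sole argument:

```
def on_progress(job):
    # bytes and total_bytes are the counters from the DownloadJob field table above.
    if job.total_bytes > 0:
        print(f"{job.dest}: {job.bytes}/{job.total_bytes} bytes")
```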
@@ -279,40 +251,11 @@ jobs using `list_jobs()`, fetch a single job by its with
running jobs with `cancel_all_jobs()`, and wait for all jobs to finish
with `join()`.

#### job = queue.download(source, dest, priority, access_token, on_start, on_progress, on_complete, on_cancelled, on_error)
#### job = queue.download(source, dest, priority, access_token)

Create a new download job and put it on the queue, returning the
DownloadJob object.

#### multifile_job = queue.multifile_download(parts, dest, access_token, on_start, on_progress, on_complete, on_cancelled, on_error)

This is similar to download(), but instead of taking a single source,
it accepts a `parts` argument consisting of a list of
`RemoteModelFile` objects. Each part corresponds to a URL/Path pair,
where the URL is the location of the remote file, and the Path is the
destination.

`RemoteModelFile` can be imported from `invokeai.backend.model_manager.metadata`, and
consists of a url/path pair. Note that the path *must* be relative.

The method returns a `MultiFileDownloadJob`.

```
from invokeai.backend.model_manager.metadata import RemoteModelFile
remote_file_1 = RemoteModelFile(url='http://www.foo.bar/my/pytorch_model.safetensors',
                                path='my_model/textencoder/pytorch_model.safetensors'
                                )
remote_file_2 = RemoteModelFile(url='http://www.bar.baz/vae.ckpt',
                                path='my_model/vae/diffusers_model.safetensors'
                                )
job = queue.multifile_download(parts=[remote_file_1, remote_file_2],
                               dest='/tmp/downloads',
                               on_progress=TqdmProgress().update)
queue.wait_for_job(job)
print(f"The files were downloaded to {job.download_path}")
```

#### jobs = queue.list_jobs()

Return a list of all active and inactive `DownloadJob`s.
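
For the single-source `download()` call, a minimal end-to-end sketch (reusing the URL from the removed multifile example above):

```
# Sketch: download one file and wait for the queue to drain.
queue = DownloadQueueService()
queue.start()
job = queue.download(source='http://www.bar.baz/vae.ckpt', dest='/tmp/downloads')
queue.join()  # block until all enqueued jobs finish
print(f"The file was downloaded to {job.download_path}")
```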
@@ -397,25 +397,26 @@ In the event you wish to create a new installer, you may use the
following initialization pattern:

```
from invokeai.app.services.config import get_config
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.model_records import ModelRecordServiceSQL
from invokeai.app.services.model_install import ModelInstallService
from invokeai.app.services.download import DownloadQueueService
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.services.shared.sqlite import SqliteDatabase
from invokeai.backend.util.logging import InvokeAILogger

config = get_config()
config = InvokeAIAppConfig.get_config()
config.parse_args()

logger = InvokeAILogger.get_logger(config=config)
db = SqliteDatabase(config.db_path, logger)
db = SqliteDatabase(config, logger)
record_store = ModelRecordServiceSQL(db)
queue = DownloadQueueService()
queue.start()

installer = ModelInstallService(app_config=config,
                                record_store=record_store,
                                download_queue=queue
                                )
installer.start()
```
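
Once `installer.start()` has run, models can be queued for installation; a hedged sketch using the same `heuristic_import()` entry point that the web route later in this diff calls (`wait_for_installs()` is an assumption):

```
# Sketch: queue a model install; repo_id is an example value.
install_job = installer.heuristic_import(source="stabilityai/sd-turbo", inplace=False)
installer.wait_for_installs()  # assumed blocking helper; waits for queued installs to finish
```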
@@ -1366,20 +1367,12 @@ the in-memory loaded model:
| `model` | AnyModel | The instantiated model (details below) |
| `locker` | ModelLockerBase | A context manager that mediates the movement of the model into VRAM |

### get_model_by_key(key, [submodel]) -> LoadedModel

The `get_model_by_key()` method will retrieve the model using its
unique database key. For example:

    loaded_model = loader.get_model_by_key('f13dd932c0c35c22dcb8d6cda4203764', SubModelType('vae'))

`get_model_by_key()` may raise any of the following exceptions:

* `UnknownModelException` -- key not in database
* `ModelNotFoundException` -- key in database but model not found at path
* `NotImplementedException` -- the loader doesn't know how to load this type of model

### Using the Loaded Model in Inference

Because the loader can return multiple model types, it is typed to
return `AnyModel`, a Union of `ModelMixin`, `torch.nn.Module`,
`IAIOnnxRuntimeModel`, `IPAdapter`, `IPAdapterPlus`, and
`EmbeddingModelRaw`. `ModelMixin` is the base class of all diffusers
models, `EmbeddingModelRaw` is used for LoRA and TextualInversion
models. The others are obvious.

`LoadedModel` acts as a context manager. The context loads the model
into the execution device (e.g. VRAM on CUDA systems), locks the model
@@ -1387,33 +1380,17 @@ in the execution device for the duration of the context, and returns
the model. Use it like this:

```
loaded_model = loader.get_model_by_key('f13dd932c0c35c22dcb8d6cda4203764', SubModelType('vae'))
with loaded_model as vae:
model_info = loader.get_model_by_key('f13dd932c0c35c22dcb8d6cda4203764', SubModelType('vae'))
with model_info as vae:
    image = vae.decode(latents)[0]
```

The object returned by the LoadedModel context manager is an
`AnyModel`, which is a Union of `ModelMixin`, `torch.nn.Module`,
`IAIOnnxRuntimeModel`, `IPAdapter`, `IPAdapterPlus`, and
`EmbeddingModelRaw`. `ModelMixin` is the base class of all diffusers
models, `EmbeddingModelRaw` is used for LoRA and TextualInversion
models. The others are obvious.

In addition, you may call `LoadedModel.model_on_device()`, a context
manager that returns a tuple of the model's state dict in CPU and the
model itself in VRAM. It is used to optimize the LoRA patching and
unpatching process:

```
loaded_model = loader.get_model_by_key('f13dd932c0c35c22dcb8d6cda4203764', SubModelType('vae'))
with loaded_model.model_on_device() as (state_dict, vae):
    image = vae.decode(latents)[0]
```

Since not all models have state dicts, the `state_dict` return value
can be None.

`get_model_by_key()` may raise any of the following exceptions:

* `UnknownModelException` -- key not in database
* `ModelNotFoundException` -- key in database but model not found at path
* `NotImplementedException` -- the loader doesn't know how to load this type of model

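A defensive call might therefore look like this sketch (the exception import locations are assumptions):

```
# Sketch: handling the loader exceptions listed above; import paths assumed.
try:
    loaded_model = loader.get_model_by_key('f13dd932c0c35c22dcb8d6cda4203764', SubModelType('vae'))
except UnknownModelException:
    ...  # key not in database
except ModelNotFoundException:
    ...  # key present, but model file(s) missing at the recorded path
except NotImplementedException:
    ...  # no loader registered for this model type
```
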
### Emitting model loading events

When the `context` argument is passed to `load_model_*()`, it will
@@ -1601,59 +1578,3 @@ This method takes a model key, looks it up using the
`ModelRecordServiceBase` object in `mm.store`, and passes the returned
model configuration to `load_model_by_config()`. It may raise a
`NotImplementedException`.

## Invocation Context Model Manager API

Within invocations, the following methods are available from the
`InvocationContext` object:

### context.download_and_cache_model(source) -> Path

This method accepts a `source` of a remote model, downloads and caches
it locally, and then returns a Path to the local model. The source can
be a direct download URL or a HuggingFace repo_id.

In the case of a HuggingFace repo_id, the following variants are
recognized:

* stabilityai/stable-diffusion-v4 -- default model
* stabilityai/stable-diffusion-v4:fp16 -- fp16 variant
* stabilityai/stable-diffusion-v4:fp16:vae -- the fp16 vae subfolder
* stabilityai/stable-diffusion-v4:onnx:vae -- the onnx variant vae subfolder

You can also point at an arbitrary individual file within a repo_id
directory using this syntax:

* stabilityai/stable-diffusion-v4::/checkpoints/sd4.safetensors
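
For example, a sketch using the placeholder repo_id from the list above:

```
# Sketch: download (or reuse from cache) the fp16 vae subfolder and get its local Path.
model_path = context.download_and_cache_model('stabilityai/stable-diffusion-v4:fp16:vae')
print(model_path)  # local Path inside the model cache
```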
### context.load_local_model(model_path, [loader]) -> LoadedModel

This method loads a local model from the indicated path, returning a
`LoadedModel`. The optional loader is a Callable that accepts a Path
to the object, and returns an `AnyModel` object. If no loader is
provided, then the method will use `torch.load()` for a .ckpt or .bin
checkpoint file, `safetensors.torch.load_file()` for a safetensors
checkpoint file, or `cls.from_pretrained()` for a directory that looks
like a diffusers directory.
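
A sketch of a custom loader callable, mirroring the default safetensors behaviour described above (the function name is hypothetical):

```
from pathlib import Path

import safetensors.torch

def my_safetensors_loader(model_path: Path):  # hypothetical loader
    # Matches the documented default for .safetensors checkpoint files.
    return safetensors.torch.load_file(model_path)

loaded_model = context.load_local_model(model_path, my_safetensors_loader)
```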
### context.load_remote_model(source, [loader]) -> LoadedModel

This method accepts a `source` of a remote model, downloads and caches
it locally, loads it, and returns a `LoadedModel`. The source can be a
direct download URL or a HuggingFace repo_id.

In the case of a HuggingFace repo_id, the following variants are
recognized:

* stabilityai/stable-diffusion-v4 -- default model
* stabilityai/stable-diffusion-v4:fp16 -- fp16 variant
* stabilityai/stable-diffusion-v4:fp16:vae -- the fp16 vae subfolder
* stabilityai/stable-diffusion-v4:onnx:vae -- the onnx variant vae subfolder

You can also point at an arbitrary individual file within a repo_id
directory using this syntax:

* stabilityai/stable-diffusion-v4::/checkpoints/sd4.safetensors
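
The LaMa infill change further down in this diff exercises this method (via `context.models`); condensed:

```
# Condensed from the LaMaInfillInvocation hunk below.
with context.models.load_remote_model(
    source="https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt",
    loader=LaMA.load_jit_model,
) as model:
    lama = LaMA(model)
    result = lama(image)
```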
@@ -93,7 +93,7 @@ class ApiDependencies:
        conditioning = ObjectSerializerForwardCache(
            ObjectSerializerDisk[ConditioningFieldData](output_folder / "conditioning", ephemeral=True)
        )
        download_queue_service = DownloadQueueService(app_config=configuration, event_bus=events)
        download_queue_service = DownloadQueueService(event_bus=events)
        model_images_service = ModelImageFileStorageDisk(model_images_folder / "model_images")
        model_manager = ModelManagerService.build_model_manager(
            app_config=configuration,
@@ -316,7 +316,6 @@ async def list_image_dtos(
    ),
    offset: int = Query(default=0, description="The page offset"),
    limit: int = Query(default=10, description="The number of images per page"),
    search_term: Optional[str] = Query(default=None, description="The term to search for"),
) -> OffsetPaginatedResults[ImageDTO]:
    """Gets a list of image DTOs"""

@@ -327,7 +326,6 @@ async def list_image_dtos(
        categories,
        is_intermediate,
        board_id,
        search_term
    )

    return image_dtos
@@ -9,7 +9,7 @@ from copy import deepcopy
from typing import Any, Dict, List, Optional, Type

from fastapi import Body, Path, Query, Response, UploadFile
from fastapi.responses import FileResponse, HTMLResponse
from fastapi.responses import FileResponse
from fastapi.routing import APIRouter
from PIL import Image
from pydantic import AnyHttpUrl, BaseModel, ConfigDict, Field
@@ -502,133 +502,6 @@ async def install_model(
    return result


@model_manager_router.get(
    "/install/huggingface",
    operation_id="install_hugging_face_model",
    responses={
        201: {"description": "The model is being installed"},
        400: {"description": "Bad request"},
        409: {"description": "There is already a model corresponding to this path or repo_id"},
    },
    status_code=201,
    response_class=HTMLResponse,
)
async def install_hugging_face_model(
    source: str = Query(description="HuggingFace repo_id to install"),
) -> HTMLResponse:
    """Install a Hugging Face model using a string identifier."""

    def generate_html(title: str, heading: str, repo_id: str, is_error: bool, message: str | None = "") -> str:
        if message:
            message = f"<p>{message}</p>"
        title_class = "error" if is_error else "success"
        return f"""
<html>

<head>
    <title>{title}</title>
    <style>
        body {{
            text-align: center;
            background-color: hsl(220 12% 10% / 1);
            font-family: Helvetica, sans-serif;
            color: hsl(220 12% 86% / 1);
        }}

        .repo-id {{
            color: hsl(220 12% 68% / 1);
        }}

        .error {{
            color: hsl(0 42% 68% / 1)
        }}

        .message-box {{
            display: inline-block;
            border-radius: 5px;
            background-color: hsl(220 12% 20% / 1);
            padding-inline-end: 30px;
            padding: 20px;
            padding-inline-start: 30px;
            padding-inline-end: 30px;
        }}

        .container {{
            display: flex;
            width: 100%;
            height: 100%;
            align-items: center;
            justify-content: center;
        }}

        a {{
            color: inherit
        }}

        a:visited {{
            color: inherit
        }}

        a:active {{
            color: inherit
        }}
    </style>
</head>

<body style="background-color: hsl(220 12% 10% / 1);">
    <div class="container">
        <div class="message-box">
            <h2 class="{title_class}">{heading}</h2>
            {message}
            <p class="repo-id">Repo ID: {repo_id}</p>
        </div>
    </div>
</body>

</html>
"""

    try:
        metadata = HuggingFaceMetadataFetch().from_id(source)
        assert isinstance(metadata, ModelMetadataWithFiles)
    except UnknownMetadataException:
        title = "Unable to Install Model"
        heading = "No HuggingFace repository found with that repo ID."
        message = "Ensure the repo ID is correct and try again."
        return HTMLResponse(content=generate_html(title, heading, source, True, message), status_code=400)

    logger = ApiDependencies.invoker.services.logger

    try:
        installer = ApiDependencies.invoker.services.model_manager.install
        if metadata.is_diffusers:
            installer.heuristic_import(
                source=source,
                inplace=False,
            )
        elif metadata.ckpt_urls is not None and len(metadata.ckpt_urls) == 1:
            installer.heuristic_import(
                source=str(metadata.ckpt_urls[0]),
                inplace=False,
            )
        else:
            title = "Unable to Install Model"
            heading = "This HuggingFace repo has multiple models."
            message = "Please use the Model Manager to install this model."
            return HTMLResponse(content=generate_html(title, heading, source, True, message), status_code=200)

        title = "Model Install Started"
        heading = "Your HuggingFace model is installing now."
        message = "You can close this tab and check the Model Manager for installation progress."
        return HTMLResponse(content=generate_html(title, heading, source, False, message), status_code=201)
    except Exception as e:
        logger.error(str(e))
        title = "Unable to Install Model"
        heading = "There was a problem installing this model."
        message = 'Please use the Model Manager directly to install this model. If the issue persists, ask for help on <a href="https://discord.gg/ZmtBAhwWhy">discord</a>.'
        return HTMLResponse(content=generate_html(title, heading, source, True, message), status_code=500)

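Because this route returns browser-facing HTML, it is designed to be opened as a plain GET link; a hedged sketch of calling it programmatically (host, port, and route prefix are assumptions):

```
import requests

resp = requests.get(
    "http://127.0.0.1:9090/api/v2/models/install/huggingface",  # route prefix assumed
    params={"source": "stabilityai/sd-turbo"},  # example repo_id
)
print(resp.status_code)  # 201 when the install was queued, per the responses table above
```
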
@model_manager_router.get(
    "/install",
    operation_id="list_model_installs",
@@ -81,13 +81,9 @@ class CompelInvocation(BaseInvocation):

        with (
            # apply all patches while the model is on the target device
            text_encoder_info.model_on_device() as (model_state_dict, text_encoder),
            text_encoder_info as text_encoder,
            tokenizer_info as tokenizer,
            ModelPatcher.apply_lora_text_encoder(
                text_encoder,
                loras=_lora_loader(),
                model_state_dict=model_state_dict,
            ),
            ModelPatcher.apply_lora_text_encoder(text_encoder, _lora_loader()),
            # Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
            ModelPatcher.apply_clip_skip(text_encoder, self.clip.skipped_layers),
            ModelPatcher.apply_ti(tokenizer, text_encoder, ti_list) as (
@@ -176,14 +172,9 @@ class SDXLPromptInvocationBase:

        with (
            # apply all patches while the model is on the target device
            text_encoder_info.model_on_device() as (state_dict, text_encoder),
            text_encoder_info as text_encoder,
            tokenizer_info as tokenizer,
            ModelPatcher.apply_lora(
                text_encoder,
                loras=_lora_loader(),
                prefix=lora_prefix,
                model_state_dict=state_dict,
            ),
            ModelPatcher.apply_lora(text_encoder, _lora_loader(), lora_prefix),
            # Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
            ModelPatcher.apply_clip_skip(text_encoder, clip_field.skipped_layers),
            ModelPatcher.apply_ti(tokenizer, text_encoder, ti_list) as (
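
Both compel hunks above swap between the same two context-manager patterns; schematically (not runnable on its own, names taken from the hunks):

```
# Variant with model_on_device(): also yields the CPU state dict, which
# ModelPatcher uses to speed up LoRA patching/unpatching.
with (
    text_encoder_info.model_on_device() as (model_state_dict, text_encoder),
    ModelPatcher.apply_lora_text_encoder(text_encoder, loras=_lora_loader(), model_state_dict=model_state_dict),
):
    ...

# Plain variant: the context manager yields only the model.
with (
    text_encoder_info as text_encoder,
    ModelPatcher.apply_lora_text_encoder(text_encoder, _lora_loader()),
):
    ...
```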
@@ -2,7 +2,6 @@
# initial implementation by Gregg Helt, 2023
# heavily leverages controlnet_aux package: https://github.com/patrickvonplaten/controlnet_aux
from builtins import bool, float
from pathlib import Path
from typing import Dict, List, Literal, Union

import cv2
@@ -37,13 +36,12 @@ from invokeai.app.invocations.util import validate_begin_end_step, validate_weig
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES, heuristic_resize
from invokeai.backend.image_util.canny import get_canny_edges
from invokeai.backend.image_util.depth_anything import DEPTH_ANYTHING_MODELS, DepthAnythingDetector
from invokeai.backend.image_util.dw_openpose import DWPOSE_MODELS, DWOpenposeDetector
from invokeai.backend.image_util.depth_anything import DepthAnythingDetector
from invokeai.backend.image_util.dw_openpose import DWOpenposeDetector
from invokeai.backend.image_util.hed import HEDProcessor
from invokeai.backend.image_util.lineart import LineartProcessor
from invokeai.backend.image_util.lineart_anime import LineartAnimeProcessor
from invokeai.backend.image_util.util import np_to_pil, pil_to_np
from invokeai.backend.util.devices import TorchDevice

from .baseinvocation import BaseInvocation, BaseInvocationOutput, Classification, invocation, invocation_output
@@ -141,7 +139,6 @@ class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):
        return context.images.get_pil(self.image.image_name, "RGB")

    def invoke(self, context: InvocationContext) -> ImageOutput:
        self._context = context
        raw_image = self.load_image(context)
        # image type should be PIL.PngImagePlugin.PngImageFile ?
        processed_image = self.run_processor(raw_image)
@@ -287,8 +284,7 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
    # depth_and_normal not supported in controlnet_aux v0.0.3
    # depth_and_normal: bool = InputField(default=False, description="whether to use depth and normal mode")

    def run_processor(self, image: Image.Image) -> Image.Image:
        # TODO: replace from_pretrained() calls with context.models.download_and_cache() (or similar)
    def run_processor(self, image):
        midas_processor = MidasDetector.from_pretrained("lllyasviel/Annotators")
        processed_image = midas_processor(
            image,
@@ -315,7 +311,7 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
    detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
    image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)

    def run_processor(self, image: Image.Image) -> Image.Image:
    def run_processor(self, image):
        normalbae_processor = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")
        processed_image = normalbae_processor(
            image, detect_resolution=self.detect_resolution, image_resolution=self.image_resolution
@@ -334,7 +330,7 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):
    thr_v: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_v`")
    thr_d: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_d`")

    def run_processor(self, image: Image.Image) -> Image.Image:
    def run_processor(self, image):
        mlsd_processor = MLSDdetector.from_pretrained("lllyasviel/Annotators")
        processed_image = mlsd_processor(
            image,
@@ -357,7 +353,7 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
    safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
    scribble: bool = InputField(default=False, description=FieldDescriptions.scribble_mode)

    def run_processor(self, image: Image.Image) -> Image.Image:
    def run_processor(self, image):
        pidi_processor = PidiNetDetector.from_pretrained("lllyasviel/Annotators")
        processed_image = pidi_processor(
            image,
@@ -385,7 +381,7 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
    w: int = InputField(default=512, ge=0, description="Content shuffle `w` parameter")
    f: int = InputField(default=256, ge=0, description="Content shuffle `f` parameter")

    def run_processor(self, image: Image.Image) -> Image.Image:
    def run_processor(self, image):
        content_shuffle_processor = ContentShuffleDetector()
        processed_image = content_shuffle_processor(
            image,
@@ -409,7 +405,7 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
    """Applies Zoe depth processing to image"""

    def run_processor(self, image: Image.Image) -> Image.Image:
    def run_processor(self, image):
        zoe_depth_processor = ZoeDetector.from_pretrained("lllyasviel/Annotators")
        processed_image = zoe_depth_processor(image)
        return processed_image
@@ -430,7 +426,7 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
    detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
    image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)

    def run_processor(self, image: Image.Image) -> Image.Image:
    def run_processor(self, image):
        mediapipe_face_processor = MediapipeFaceDetector()
        processed_image = mediapipe_face_processor(
            image,
@@ -458,7 +454,7 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
    detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
    image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)

    def run_processor(self, image: Image.Image) -> Image.Image:
    def run_processor(self, image):
        leres_processor = LeresDetector.from_pretrained("lllyasviel/Annotators")
        processed_image = leres_processor(
            image,
@@ -500,8 +496,8 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
        np_img = cv2.resize(np_img, (W, H), interpolation=cv2.INTER_AREA)
        return np_img

    def run_processor(self, image: Image.Image) -> Image.Image:
        np_img = np.array(image, dtype=np.uint8)
    def run_processor(self, img):
        np_img = np.array(img, dtype=np.uint8)
        processed_np_image = self.tile_resample(
            np_img,
            # res=self.tile_size,
@@ -524,7 +520,7 @@ class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
    detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
    image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)

    def run_processor(self, image: Image.Image) -> Image.Image:
    def run_processor(self, image):
        # segment_anything_processor = SamDetector.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
        segment_anything_processor = SamDetectorReproducibleColors.from_pretrained(
            "ybelkada/segment-anything", subfolder="checkpoints"
@@ -570,7 +566,7 @@ class ColorMapImageProcessorInvocation(ImageProcessorInvocation):

    color_map_tile_size: int = InputField(default=64, ge=1, description=FieldDescriptions.tile_size)

    def run_processor(self, image: Image.Image) -> Image.Image:
    def run_processor(self, image: Image.Image):
        np_image = np.array(image, dtype=np.uint8)
        height, width = np_image.shape[:2]

@@ -605,18 +601,12 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
    )
    resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)

    def run_processor(self, image: Image.Image) -> Image.Image:
        def loader(model_path: Path):
            return DepthAnythingDetector.load_model(
                model_path, model_size=self.model_size, device=TorchDevice.choose_torch_device()
            )
    def run_processor(self, image: Image.Image):
        depth_anything_detector = DepthAnythingDetector()
        depth_anything_detector.load_model(model_size=self.model_size)

        with self._context.models.load_remote_model(
            source=DEPTH_ANYTHING_MODELS[self.model_size], loader=loader
        ) as model:
            depth_anything_detector = DepthAnythingDetector(model, TorchDevice.choose_torch_device())
            processed_image = depth_anything_detector(image=image, resolution=self.resolution)
            return processed_image
        processed_image = depth_anything_detector(image=image, resolution=self.resolution)
        return processed_image


@invocation(
@@ -634,11 +624,8 @@ class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
    draw_hands: bool = InputField(default=False)
    image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)

    def run_processor(self, image: Image.Image) -> Image.Image:
        onnx_det = self._context.models.download_and_cache_model(DWPOSE_MODELS["yolox_l.onnx"])
        onnx_pose = self._context.models.download_and_cache_model(DWPOSE_MODELS["dw-ll_ucoco_384.onnx"])

        dw_openpose = DWOpenposeDetector(onnx_det=onnx_det, onnx_pose=onnx_pose)
    def run_processor(self, image: Image.Image):
        dw_openpose = DWOpenposeDetector()
        processed_image = dw_openpose(
            image,
            draw_face=self.draw_face,
@@ -42,16 +42,15 @@ class InfillImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):
        """Infill the image with the specified method"""
        pass

    def load_image(self) -> tuple[Image.Image, bool]:
    def load_image(self, context: InvocationContext) -> tuple[Image.Image, bool]:
        """Process the image to have an alpha channel before being infilled"""
        image = self._context.images.get_pil(self.image.image_name)
        image = context.images.get_pil(self.image.image_name)
        has_alpha = True if image.mode == "RGBA" else False
        return image, has_alpha

    def invoke(self, context: InvocationContext) -> ImageOutput:
        self._context = context
        # Retrieve and process image to be infilled
        input_image, has_alpha = self.load_image()
        input_image, has_alpha = self.load_image(context)

        # If the input image has no alpha channel, return it
        if has_alpha is False:
@@ -134,12 +133,8 @@ class LaMaInfillInvocation(InfillImageProcessorInvocation):
    """Infills transparent areas of an image using the LaMa model"""

    def infill(self, image: Image.Image):
        with self._context.models.load_remote_model(
            source="https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt",
            loader=LaMA.load_jit_model,
        ) as model:
            lama = LaMA(model)
            return lama(image)
        lama = LaMA()
        return lama(image)


@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
@@ -65,6 +65,9 @@ def get_scheduler(
    scheduler_name: str,
    seed: int,
) -> Scheduler:
    """Load a scheduler and apply some scheduler-specific overrides."""
    # TODO(ryand): Silently falling back to ddim seems like a bad idea. Look into why this was added and remove if
    # possible.
    scheduler_class, scheduler_extra_config = SCHEDULER_MAP.get(scheduler_name, SCHEDULER_MAP["ddim"])
    orig_scheduler_info = context.models.load(scheduler_info)
    with orig_scheduler_info as orig_scheduler:
@@ -84,9 +87,6 @@ def get_scheduler(

    scheduler = scheduler_class.from_config(scheduler_config)

    # hack copied over from generate.py
    if not hasattr(scheduler, "uses_inpainting_model"):
        scheduler.uses_inpainting_model = lambda: False
    assert isinstance(scheduler, Scheduler)
    return scheduler
@@ -182,8 +182,8 @@ class DenoiseLatentsInvocation(BaseInvocation):
            raise ValueError("cfg_scale must be greater than 1")
        return v

    @staticmethod
    def _get_text_embeddings_and_masks(
        self,
        cond_list: list[ConditioningField],
        context: InvocationContext,
        device: torch.device,
@@ -203,8 +203,9 @@ class DenoiseLatentsInvocation(BaseInvocation):

        return text_embeddings, text_embeddings_masks

    @staticmethod
    def _preprocess_regional_prompt_mask(
        self, mask: Optional[torch.Tensor], target_height: int, target_width: int, dtype: torch.dtype
        mask: Optional[torch.Tensor], target_height: int, target_width: int, dtype: torch.dtype
    ) -> torch.Tensor:
        """Preprocess a regional prompt mask to match the target height and width.
        If mask is None, returns a mask of all ones with the target height and width.
@@ -228,8 +229,8 @@ class DenoiseLatentsInvocation(BaseInvocation):
        resized_mask = tf(mask)
        return resized_mask

    @staticmethod
    def _concat_regional_text_embeddings(
        self,
        text_conditionings: Union[list[BasicConditioningInfo], list[SDXLConditioningInfo]],
        masks: Optional[list[Optional[torch.Tensor]]],
        latent_height: int,
@@ -279,7 +280,9 @@ class DenoiseLatentsInvocation(BaseInvocation):
                )
            )
            processed_masks.append(
                self._preprocess_regional_prompt_mask(mask, latent_height, latent_width, dtype=dtype)
                DenoiseLatentsInvocation._preprocess_regional_prompt_mask(
                    mask, latent_height, latent_width, dtype=dtype
                )
            )

            cur_text_embedding_len += text_embedding_info.embeds.shape[1]
@@ -301,36 +304,41 @@ class DenoiseLatentsInvocation(BaseInvocation):
        )
        return BasicConditioningInfo(embeds=text_embedding), regions

    @staticmethod
    def get_conditioning_data(
        self,
        context: InvocationContext,
        positive_conditioning_field: Union[ConditioningField, list[ConditioningField]],
        negative_conditioning_field: Union[ConditioningField, list[ConditioningField]],
        unet: UNet2DConditionModel,
        latent_height: int,
        latent_width: int,
        cfg_scale: float | list[float],
        steps: int,
        cfg_rescale_multiplier: float,
    ) -> TextConditioningData:
        # Normalize self.positive_conditioning and self.negative_conditioning to lists.
        cond_list = self.positive_conditioning
        # Normalize positive_conditioning_field and negative_conditioning_field to lists.
        cond_list = positive_conditioning_field
        if not isinstance(cond_list, list):
            cond_list = [cond_list]
        uncond_list = self.negative_conditioning
        uncond_list = negative_conditioning_field
        if not isinstance(uncond_list, list):
            uncond_list = [uncond_list]

        cond_text_embeddings, cond_text_embedding_masks = self._get_text_embeddings_and_masks(
        cond_text_embeddings, cond_text_embedding_masks = DenoiseLatentsInvocation._get_text_embeddings_and_masks(
            cond_list, context, unet.device, unet.dtype
        )
        uncond_text_embeddings, uncond_text_embedding_masks = self._get_text_embeddings_and_masks(
        uncond_text_embeddings, uncond_text_embedding_masks = DenoiseLatentsInvocation._get_text_embeddings_and_masks(
            uncond_list, context, unet.device, unet.dtype
        )

        cond_text_embedding, cond_regions = self._concat_regional_text_embeddings(
        cond_text_embedding, cond_regions = DenoiseLatentsInvocation._concat_regional_text_embeddings(
            text_conditionings=cond_text_embeddings,
            masks=cond_text_embedding_masks,
            latent_height=latent_height,
            latent_width=latent_width,
            dtype=unet.dtype,
        )
        uncond_text_embedding, uncond_regions = self._concat_regional_text_embeddings(
        uncond_text_embedding, uncond_regions = DenoiseLatentsInvocation._concat_regional_text_embeddings(
            text_conditionings=uncond_text_embeddings,
            masks=uncond_text_embedding_masks,
            latent_height=latent_height,
@@ -338,23 +346,21 @@ class DenoiseLatentsInvocation(BaseInvocation):
            dtype=unet.dtype,
        )

        if isinstance(self.cfg_scale, list):
            assert (
                len(self.cfg_scale) == self.steps
            ), "cfg_scale (list) must have the same length as the number of steps"
        if isinstance(cfg_scale, list):
            assert len(cfg_scale) == steps, "cfg_scale (list) must have the same length as the number of steps"

        conditioning_data = TextConditioningData(
            uncond_text=uncond_text_embedding,
            cond_text=cond_text_embedding,
            uncond_regions=uncond_regions,
            cond_regions=cond_regions,
            guidance_scale=self.cfg_scale,
            guidance_rescale_multiplier=self.cfg_rescale_multiplier,
            guidance_scale=cfg_scale,
            guidance_rescale_multiplier=cfg_rescale_multiplier,
        )
        return conditioning_data

    @staticmethod
    def create_pipeline(
        self,
        unet: UNet2DConditionModel,
        scheduler: Scheduler,
    ) -> StableDiffusionGeneratorPipeline:
@@ -583,8 +589,8 @@ class DenoiseLatentsInvocation(BaseInvocation):

    # original idea by https://github.com/AmericanPresidentJimmyCarter
    # TODO: research more for second order schedulers timesteps
    @staticmethod
    def init_scheduler(
        self,
        scheduler: Union[Scheduler, ConfigMixin],
        device: torch.device,
        steps: int,
@@ -656,31 +662,40 @@ class DenoiseLatentsInvocation(BaseInvocation):

        return 1 - mask, masked_latents, self.denoise_mask.gradient

    @torch.no_grad()
    @SilenceWarnings()  # This quenches the NSFW nag from diffusers.
    def invoke(self, context: InvocationContext) -> LatentsOutput:
        seed = None
    @staticmethod
    def prepare_noise_and_latents(
        context: InvocationContext, noise_field: LatentsField | None, latents_field: LatentsField | None
    ) -> Tuple[int, torch.Tensor | None, torch.Tensor]:
        noise = None
        if self.noise is not None:
            noise = context.tensors.load(self.noise.latents_name)
            seed = self.noise.seed

        if self.latents is not None:
            latents = context.tensors.load(self.latents.latents_name)
            if seed is None:
                seed = self.latents.seed

            if noise is not None and noise.shape[1:] != latents.shape[1:]:
                raise Exception(f"Incompatible 'noise' and 'latents' shapes: {latents.shape=} {noise.shape=}")
        if noise_field is not None:
            noise = context.tensors.load(noise_field.latents_name)

        if latents_field is not None:
            latents = context.tensors.load(latents_field.latents_name)
        elif noise is not None:
            latents = torch.zeros_like(noise)
        else:
            raise Exception("'latents' or 'noise' must be provided!")
            raise ValueError("'latents' or 'noise' must be provided!")

        if seed is None:
        if noise is not None and noise.shape[1:] != latents.shape[1:]:
            raise ValueError(f"Incompatible 'noise' and 'latents' shapes: {latents.shape=} {noise.shape=}")

        # The seed comes from (in order of priority): the noise field, the latents field, or 0.
        seed = 0
        if noise_field is not None and noise_field.seed is not None:
            seed = noise_field.seed
        elif latents_field is not None and latents_field.seed is not None:
            seed = latents_field.seed
        else:
            seed = 0

        return seed, noise, latents

    @torch.no_grad()
    @SilenceWarnings()  # This quenches the NSFW nag from diffusers.
    def invoke(self, context: InvocationContext) -> LatentsOutput:
        seed, noise, latents = self.prepare_noise_and_latents(context, self.noise, self.latents)

        mask, masked_latents, gradient_mask = self.prep_inpaint_mask(context, latents)

        # TODO(ryand): I have hard-coded `do_classifier_free_guidance=True` to mirror the behaviour of ControlNets,
@@ -724,15 +739,11 @@ class DenoiseLatentsInvocation(BaseInvocation):
        assert isinstance(unet_info.model, UNet2DConditionModel)
        with (
            ExitStack() as exit_stack,
            unet_info.model_on_device() as (model_state_dict, unet),
            unet_info as unet,
            ModelPatcher.apply_freeu(unet, self.unet.freeu_config),
            set_seamless(unet, self.unet.seamless_axes),  # FIXME
            # Apply the LoRA after unet has been moved to its target device for faster patching.
            ModelPatcher.apply_lora_unet(
                unet,
                loras=_lora_loader(),
                model_state_dict=model_state_dict,
            ),
            ModelPatcher.apply_lora_unet(unet, _lora_loader()),
        ):
            assert isinstance(unet, UNet2DConditionModel)
            latents = latents.to(device=unet.device, dtype=unet.dtype)
@@ -754,7 +765,15 @@ class DenoiseLatentsInvocation(BaseInvocation):

        _, _, latent_height, latent_width = latents.shape
        conditioning_data = self.get_conditioning_data(
            context=context, unet=unet, latent_height=latent_height, latent_width=latent_width
            context=context,
            positive_conditioning_field=self.positive_conditioning,
            negative_conditioning_field=self.negative_conditioning,
            unet=unet,
            latent_height=latent_height,
            latent_width=latent_width,
            cfg_scale=self.cfg_scale,
            steps=self.steps,
            cfg_rescale_multiplier=self.cfg_rescale_multiplier,
        )

        controlnet_data = self.prep_control_data(
@@ -8,7 +8,7 @@ from diffusers.models.attention_processor import (
)
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
from PIL import Image

from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import DEFAULT_PRECISION
@@ -23,6 +23,7 @@ from invokeai.app.invocations.fields import (
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.stable_diffusion import set_seamless
from invokeai.backend.util.devices import TorchDevice

@@ -48,16 +49,20 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
    tiled: bool = InputField(default=False, description=FieldDescriptions.tiled)
    fp32: bool = InputField(default=DEFAULT_PRECISION == torch.float32, description=FieldDescriptions.fp32)

    @torch.no_grad()
    def invoke(self, context: InvocationContext) -> ImageOutput:
        latents = context.tensors.load(self.latents.latents_name)

        vae_info = context.models.load(self.vae.vae)
        assert isinstance(vae_info.model, (UNet2DConditionModel, AutoencoderKL, AutoencoderTiny))
        with set_seamless(vae_info.model, self.vae.seamless_axes), vae_info as vae:
            assert isinstance(vae, torch.nn.Module)
    @staticmethod
    def vae_decode(
        context: InvocationContext,
        vae_info: LoadedModel,
        seamless_axes: list[str],
        latents: torch.Tensor,
        use_fp32: bool,
        use_tiling: bool,
    ) -> Image.Image:
        assert isinstance(vae_info.model, (AutoencoderKL, AutoencoderTiny))
        with set_seamless(vae_info.model, seamless_axes), vae_info as vae:
            assert isinstance(vae, (AutoencoderKL, AutoencoderTiny))
            latents = latents.to(vae.device)
            if self.fp32:
            if use_fp32:
                vae.to(dtype=torch.float32)

                use_torch_2_0_or_xformers = hasattr(vae.decoder, "mid_block") and isinstance(
@@ -82,7 +87,7 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
                vae.to(dtype=torch.float16)
                latents = latents.half()

            if self.tiled or context.config.get().force_tiled_decode:
            if use_tiling or context.config.get().force_tiled_decode:
                vae.enable_tiling()
            else:
                vae.disable_tiling()
@@ -102,6 +107,21 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):

            TorchDevice.empty_cache()

        return image

    @torch.no_grad()
    def invoke(self, context: InvocationContext) -> ImageOutput:
        latents = context.tensors.load(self.latents.latents_name)
        vae_info = context.models.load(self.vae.vae)

        image = self.vae_decode(
            context=context,
            vae_info=vae_info,
            seamless_axes=self.vae.seamless_axes,
            latents=latents,
            use_fp32=self.fp32,
            use_tiling=self.tiled,
        )
        image_dto = context.images.save(image=image)

        return ImageOutput.build(image_dto)
invokeai/app/invocations/tiled_stable_diffusion_refine.py (new file, 384 lines)

@@ -0,0 +1,384 @@
|
||||
from contextlib import ExitStack
|
||||
from typing import Iterator, Tuple
|
||||
|
||||
import numpy as np
|
||||
import numpy.typing as npt
|
||||
import torch
|
||||
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
|
||||
from PIL import Image
|
||||
from pydantic import field_validator
|
||||
|
||||
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
|
||||
from invokeai.app.invocations.constants import DEFAULT_PRECISION, LATENT_SCALE_FACTOR, SCHEDULER_NAME_VALUES
|
||||
from invokeai.app.invocations.fields import (
|
||||
ConditioningField,
|
||||
FieldDescriptions,
|
||||
ImageField,
|
||||
Input,
|
||||
InputField,
|
||||
UIType,
|
||||
)
|
||||
from invokeai.app.invocations.image_to_latents import ImageToLatentsInvocation
|
||||
from invokeai.app.invocations.latent import DenoiseLatentsInvocation, get_scheduler
|
||||
from invokeai.app.invocations.latents_to_image import LatentsToImageInvocation
|
||||
from invokeai.app.invocations.model import ModelIdentifierField, UNetField, VAEField
|
||||
from invokeai.app.invocations.noise import get_noise
|
||||
from invokeai.app.invocations.primitives import ImageOutput
|
||||
from invokeai.app.services.shared.invocation_context import InvocationContext
|
||||
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES, prepare_control_image
|
||||
from invokeai.backend.lora import LoRAModelRaw
|
||||
from invokeai.backend.model_patcher import ModelPatcher
|
||||
from invokeai.backend.stable_diffusion.diffusers_pipeline import ControlNetData, image_resized_to_grid_as_tensor
|
||||
from invokeai.backend.tiles.tiles import calc_tiles_with_overlap, merge_tiles_with_linear_blending
|
||||
from invokeai.backend.tiles.utils import Tile
|
||||
from invokeai.backend.util.devices import TorchDevice
|
||||
from invokeai.backend.util.hotfixes import ControlNetModel
|
||||
|
||||
|
||||
@invocation(
|
||||
"tiled_stable_diffusion_refine",
|
||||
title="Tiled Stable Diffusion Refine",
|
||||
tags=["upscale", "denoise"],
|
||||
category="latents",
|
||||
version="1.0.0",
|
||||
)
|
||||
class TiledStableDiffusionRefineInvocation(BaseInvocation):
|
||||
"""A tiled Stable Diffusion pipeline for refining high resolution images. This invocation is intended to be used to
|
||||
refine an image after upscaling i.e. it is the second step in a typical "tiled upscaling" workflow.
|
||||
"""
|
||||
|
||||
image: ImageField = InputField(description="Image to be refined.")
|
||||
|
||||
positive_conditioning: ConditioningField = InputField(
|
||||
description=FieldDescriptions.positive_cond, input=Input.Connection
|
||||
)
|
||||
negative_conditioning: ConditioningField = InputField(
|
||||
description=FieldDescriptions.negative_cond, input=Input.Connection
|
||||
)
|
||||
# TODO(ryand): Add multiple-of validation.
|
||||
tile_height: int = InputField(default=512, gt=0, description="Height of the tiles.")
|
||||
tile_width: int = InputField(default=512, gt=0, description="Width of the tiles.")
|
||||
tile_overlap: int = InputField(
|
||||
default=16,
|
||||
gt=0,
|
||||
description="Target overlap between adjacent tiles (the last row/column may overlap more than this).",
|
||||
)
|
||||
steps: int = InputField(default=18, gt=0, description=FieldDescriptions.steps)
|
||||
cfg_scale: float | list[float] = InputField(default=6.0, description=FieldDescriptions.cfg_scale, title="CFG Scale")
|
||||
denoising_start: float = InputField(
|
||||
default=0.65,
|
||||
ge=0,
|
||||
le=1,
|
||||
description=FieldDescriptions.denoising_start,
|
||||
)
|
||||
denoising_end: float = InputField(default=1.0, ge=0, le=1, description=FieldDescriptions.denoising_end)
|
||||
scheduler: SCHEDULER_NAME_VALUES = InputField(
|
||||
default="euler",
|
||||
description=FieldDescriptions.scheduler,
|
||||
ui_type=UIType.Scheduler,
|
||||
)
|
||||
unet: UNetField = InputField(
|
||||
description=FieldDescriptions.unet,
|
||||
input=Input.Connection,
|
||||
title="UNet",
|
||||
)
|
||||
cfg_rescale_multiplier: float = InputField(
|
||||
title="CFG Rescale Multiplier", default=0, ge=0, lt=1, description=FieldDescriptions.cfg_rescale_multiplier
|
||||
)
|
||||
vae: VAEField = InputField(
|
||||
description=FieldDescriptions.vae,
|
||||
input=Input.Connection,
|
||||
)
|
||||
vae_fp32: bool = InputField(
|
||||
default=DEFAULT_PRECISION == torch.float32, description="Whether to use float32 precision when running the VAE."
|
||||
)
|
||||
# HACK(ryand): We probably want to allow the user to control all of the parameters in ControlField. But, we akwardly
|
||||
# don't want to use the image field. Figure out how best to handle this.
|
||||
# TODO(ryand): Currently, there is no ControlNet preprocessor applied to the tile images. In other words, we pretty
|
||||
# much assume that it is a tile ControlNet. We need to decide how we want to handle this. E.g. find a way to support
|
||||
# CN preprocessors, raise a clear warning when a non-tile CN model is selected, hardcode the supported CN models,
|
||||
# etc.
|
||||
control_model: ModelIdentifierField = InputField(
|
||||
description=FieldDescriptions.controlnet_model, ui_type=UIType.ControlNetModel
|
||||
)
|
||||
control_weight: float = InputField(default=0.6)
|
||||
|
||||
@field_validator("cfg_scale")
|
||||
def ge_one(cls, v: list[float] | float) -> list[float] | float:
|
||||
"""Validate that all cfg_scale values are >= 1"""
|
||||
if isinstance(v, list):
|
||||
for i in v:
|
||||
if i < 1:
|
||||
raise ValueError("cfg_scale must be greater than 1")
|
||||
else:
|
||||
if v < 1:
|
||||
raise ValueError("cfg_scale must be greater than 1")
|
||||
return v
|
||||
|
||||
@staticmethod
|
||||
def crop_latents_to_tile(latents: torch.Tensor, image_tile: Tile) -> torch.Tensor:
|
||||
"""Crop the latent-space tensor to the area corresponding to the image-space tile.
|
||||
The tile coordinates must be divisible by the LATENT_SCALE_FACTOR.
|
||||
"""
|
||||
for coord in [image_tile.coords.top, image_tile.coords.left, image_tile.coords.right, image_tile.coords.bottom]:
|
||||
if coord % LATENT_SCALE_FACTOR != 0:
|
||||
raise ValueError(
|
||||
f"The tile coordinates must all be divisible by the latent scale factor"
|
||||
f" ({LATENT_SCALE_FACTOR}). {image_tile.coords=}."
|
||||
)
|
||||
assert latents.dim() == 4 # We expect: (batch_size, channels, height, width).
|
||||
|
||||
top = image_tile.coords.top // LATENT_SCALE_FACTOR
|
||||
left = image_tile.coords.left // LATENT_SCALE_FACTOR
|
||||
bottom = image_tile.coords.bottom // LATENT_SCALE_FACTOR
|
||||
right = image_tile.coords.right // LATENT_SCALE_FACTOR
|
||||
return latents[..., top:bottom, left:right]

    def run_controlnet(
        self,
        image: Image.Image,
        controlnet_model: ControlNetModel,
        weight: float,
        do_classifier_free_guidance: bool,
        width: int,
        height: int,
        device: torch.device,
        dtype: torch.dtype,
        control_mode: CONTROLNET_MODE_VALUES = "balanced",
        resize_mode: CONTROLNET_RESIZE_VALUES = "just_resize_simple",
    ) -> ControlNetData:
        control_image = prepare_control_image(
            image=image,
            do_classifier_free_guidance=do_classifier_free_guidance,
            width=width,
            height=height,
            device=device,
            dtype=dtype,
            control_mode=control_mode,
            resize_mode=resize_mode,
        )
        return ControlNetData(
            model=controlnet_model,
            image_tensor=control_image,
            weight=weight,
            begin_step_percent=0.0,
            end_step_percent=1.0,
            control_mode=control_mode,
            # Any resizing needed should currently be happening in prepare_control_image(), but we add resize_mode
            # to ControlNetData in case it is needed in the future.
            resize_mode=resize_mode,
        )

    @torch.no_grad()
    def invoke(self, context: InvocationContext) -> ImageOutput:
        # TODO(ryand): Expose the seed parameter.
        seed = 0

        # Load the input image.
        input_image = context.images.get_pil(self.image.image_name)

        # Calculate the tile locations to cover the image.
        # We have selected this tiling strategy to make it easy to achieve tile coords that are multiples of 8. This
        # facilitates conversions between image space and latent space.
        # TODO(ryand): Expose these tiling parameters. (Keep in mind the multiple-of constraints on these params.)
        tiles = calc_tiles_with_overlap(
            image_height=input_image.height,
            image_width=input_image.width,
            tile_height=self.tile_height,
            tile_width=self.tile_width,
            overlap=self.tile_overlap,
        )

        # Convert the input image to a torch.Tensor.
        input_image_torch = image_resized_to_grid_as_tensor(input_image.convert("RGB"), multiple_of=LATENT_SCALE_FACTOR)
        input_image_torch = input_image_torch.unsqueeze(0)  # Add a batch dimension.
        # Validate our assumptions about the shape of input_image_torch.
        assert input_image_torch.dim() == 4  # We expect: (batch_size, channels, height, width).
        assert input_image_torch.shape[:2] == (1, 3)

        # Split the input image into tiles in torch.Tensor format.
        image_tiles_torch: list[torch.Tensor] = []
        for tile in tiles:
            image_tile = input_image_torch[
                :,
                :,
                tile.coords.top : tile.coords.bottom,
                tile.coords.left : tile.coords.right,
            ]
            image_tiles_torch.append(image_tile)

        # Split the input image into tiles in numpy format.
        # TODO(ryand): We currently maintain both np.ndarray and torch.Tensor tiles. Ideally, all operations should work
        # with torch.Tensor tiles.
        input_image_np = np.array(input_image)
        image_tiles_np: list[npt.NDArray[np.uint8]] = []
        for tile in tiles:
            image_tile_np = input_image_np[
                tile.coords.top : tile.coords.bottom,
                tile.coords.left : tile.coords.right,
                :,
            ]
            image_tiles_np.append(image_tile_np)

        # VAE-encode each image tile independently.
        # TODO(ryand): Is there any advantage to VAE-encoding the entire image before splitting it into tiles? What
        # about for decoding?
        vae_info = context.models.load(self.vae.vae)
        latent_tiles: list[torch.Tensor] = []
        for image_tile_torch in image_tiles_torch:
            latent_tiles.append(
                ImageToLatentsInvocation.vae_encode(
                    vae_info=vae_info, upcast=self.vae_fp32, tiled=False, image_tensor=image_tile_torch
                )
            )

        # Generate noise with dimensions corresponding to the full image in latent space.
        # It is important that the noise tensor is generated at the full image dimension and then tiled, rather than
        # generated for each tile independently. This ensures that overlapping regions between tiles use the same
        # noise.
        assert input_image_torch.shape[2] % LATENT_SCALE_FACTOR == 0
        assert input_image_torch.shape[3] % LATENT_SCALE_FACTOR == 0
        global_noise = get_noise(
            width=input_image_torch.shape[3],
            height=input_image_torch.shape[2],
            device=TorchDevice.choose_torch_device(),
            seed=seed,
            downsampling_factor=LATENT_SCALE_FACTOR,
            use_cpu=True,
        )

        # Crop the global noise into tiles.
        noise_tiles = [self.crop_latents_to_tile(latents=global_noise, image_tile=t) for t in tiles]
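        # Illustrative sketch (not part of the diff): because the noise is generated once at full-image scale and
        # then cropped, two overlapping tiles see identical noise in their shared region. A hypothetical sanity
        # check for two horizontally adjacent tiles t0 and t1 might be:
        #
        #   n0 = self.crop_latents_to_tile(latents=global_noise, image_tile=t0)
        #   n1 = self.crop_latents_to_tile(latents=global_noise, image_tile=t1)
        #   overlap = self.tile_overlap // LATENT_SCALE_FACTOR
        #   assert torch.equal(n0[..., :, -overlap:], n1[..., :, :overlap])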

        # Prepare an iterator that yields the UNet's LoRA models and their weights.
        def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
            for lora in self.unet.loras:
                lora_info = context.models.load(lora.lora)
                assert isinstance(lora_info.model, LoRAModelRaw)
                yield (lora_info.model, lora.weight)
                del lora_info

        # Load the UNet model.
        unet_info = context.models.load(self.unet.unet)

        refined_latent_tiles: list[torch.Tensor] = []
        with ExitStack() as exit_stack, unet_info as unet, ModelPatcher.apply_lora_unet(unet, _lora_loader()):
            assert isinstance(unet, UNet2DConditionModel)
            scheduler = get_scheduler(
                context=context,
                scheduler_info=self.unet.scheduler,
                scheduler_name=self.scheduler,
                seed=seed,
            )
            pipeline = DenoiseLatentsInvocation.create_pipeline(unet=unet, scheduler=scheduler)

            # Prepare the prompt conditioning data. The same prompt conditioning is applied to all tiles.
            # Assume that all tiles have the same shape.
            _, _, latent_height, latent_width = latent_tiles[0].shape
            conditioning_data = DenoiseLatentsInvocation.get_conditioning_data(
                context=context,
                positive_conditioning_field=self.positive_conditioning,
                negative_conditioning_field=self.negative_conditioning,
                unet=unet,
                latent_height=latent_height,
                latent_width=latent_width,
                cfg_scale=self.cfg_scale,
                steps=self.steps,
                cfg_rescale_multiplier=self.cfg_rescale_multiplier,
            )

            # Load the ControlNet model.
            # TODO(ryand): Support multiple ControlNet models.
            controlnet_model = exit_stack.enter_context(context.models.load(self.control_model))
            assert isinstance(controlnet_model, ControlNetModel)

            # Denoise (i.e. "refine") each tile independently.
            for image_tile_np, latent_tile, noise_tile in zip(image_tiles_np, latent_tiles, noise_tiles, strict=True):
                assert latent_tile.shape == noise_tile.shape

                # Prepare a PIL Image for ControlNet processing.
                # TODO(ryand): It is a bit awkward that we have to prepare both torch.Tensor and PIL.Image versions of
                # the tiles. Ideally, the ControlNet code should be able to work with Tensors.
                image_tile_pil = Image.fromarray(image_tile_np)

                # Run the ControlNet on the image tile.
                height, width, _ = image_tile_np.shape
                # The height and width must be evenly divisible by LATENT_SCALE_FACTOR. This is enforced earlier, but we
                # validate this assumption here.
                assert height % LATENT_SCALE_FACTOR == 0
                assert width % LATENT_SCALE_FACTOR == 0
                controlnet_data = self.run_controlnet(
                    image=image_tile_pil,
                    controlnet_model=controlnet_model,
                    weight=self.control_weight,
                    do_classifier_free_guidance=True,
                    width=width,
                    height=height,
                    device=controlnet_model.device,
                    dtype=controlnet_model.dtype,
                    control_mode="balanced",
                    resize_mode="just_resize_simple",
                )

                num_inference_steps, timesteps, init_timestep, scheduler_step_kwargs = (
                    DenoiseLatentsInvocation.init_scheduler(
                        scheduler,
                        device=unet.device,
                        steps=self.steps,
                        denoising_start=self.denoising_start,
                        denoising_end=self.denoising_end,
                        seed=seed,
                    )
                )

                # TODO(ryand): Think about when/if latents/noise should be moved off of the device to save VRAM.
                latent_tile = latent_tile.to(device=unet.device, dtype=unet.dtype)
                noise_tile = noise_tile.to(device=unet.device, dtype=unet.dtype)
                refined_latent_tile = pipeline.latents_from_embeddings(
                    latents=latent_tile,
                    timesteps=timesteps,
                    init_timestep=init_timestep,
                    noise=noise_tile,
                    seed=seed,
                    mask=None,
                    masked_latents=None,
                    gradient_mask=None,
                    num_inference_steps=num_inference_steps,
                    scheduler_step_kwargs=scheduler_step_kwargs,
                    conditioning_data=conditioning_data,
                    control_data=[controlnet_data],
                    ip_adapter_data=None,
                    t2i_adapter_data=None,
                    callback=lambda x: None,
                )
                refined_latent_tiles.append(refined_latent_tile)

        # VAE-decode each refined latent tile independently.
        refined_image_tiles: list[Image.Image] = []
        for refined_latent_tile in refined_latent_tiles:
            refined_image_tile = LatentsToImageInvocation.vae_decode(
                context=context,
                vae_info=vae_info,
                seamless_axes=self.vae.seamless_axes,
                latents=refined_latent_tile,
                use_fp32=self.vae_fp32,
                use_tiling=False,
            )
            refined_image_tiles.append(refined_image_tile)

        # TODO(ryand): I copied this from DenoiseLatentsInvocation. I'm not sure if it's actually important.
        TorchDevice.empty_cache()

        # Merge the refined image tiles back into a single image.
        refined_image_tiles_np = [np.array(t) for t in refined_image_tiles]
        merged_image_np = np.zeros(shape=(input_image.height, input_image.width, 3), dtype=np.uint8)
        # TODO(ryand): Tune the blend_amount. Should this be exposed as a parameter?
        merge_tiles_with_linear_blending(
            dst_image=merged_image_np, tiles=tiles, tile_images=refined_image_tiles_np, blend_amount=self.tile_overlap
        )
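        # Illustrative sketch (not part of the diff): merge_tiles_with_linear_blending presumably cross-fades the
        # overlapping band between adjacent tiles. A minimal 1-D version of the idea, assuming a left tile `a` and
        # a right tile `b` that overlap by `w` pixels, would be:
        #
        #   alpha = np.linspace(0.0, 1.0, w)  # 0.0 -> keep a, 1.0 -> keep b
        #   blended = (1.0 - alpha) * a[..., -w:] + alpha * b[..., :w]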

        # Save the refined image and return its reference.
        merged_image_pil = Image.fromarray(merged_image_np)
        image_dto = context.images.save(image=merged_image_pil)

        return ImageOutput.build(image_dto)
@@ -1,4 +1,5 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) & the InvokeAI Team
from pathlib import Path
from typing import Literal

import cv2
@@ -9,8 +10,10 @@ from pydantic import ConfigDict
from invokeai.app.invocations.fields import ImageField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
from invokeai.backend.image_util.realesrgan.realesrgan import RealESRGAN
from invokeai.backend.util.devices import TorchDevice

from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, WithBoard, WithMetadata
@@ -49,6 +52,7 @@ class ESRGANInvocation(BaseInvocation, WithMetadata, WithBoard):

        rrdbnet_model = None
        netscale = None
        esrgan_model_path = None

        if self.model_name in [
            "RealESRGAN_x4plus.pth",
@@ -91,25 +95,28 @@ class ESRGANInvocation(BaseInvocation, WithMetadata, WithBoard):
            context.logger.error(msg)
            raise ValueError(msg)

        loadnet = context.models.load_remote_model(
            source=ESRGAN_MODEL_URLS[self.model_name],
        esrgan_model_path = Path(context.config.get().models_path, f"core/upscaling/realesrgan/{self.model_name}")

        # Downloads the ESRGAN model if it doesn't already exist
        download_with_progress_bar(
            name=self.model_name, url=ESRGAN_MODEL_URLS[self.model_name], dest_path=esrgan_model_path
        )

        with loadnet as loadnet_model:
            upscaler = RealESRGAN(
                scale=netscale,
                loadnet=loadnet_model,
                model=rrdbnet_model,
                half=False,
                tile=self.tile_size,
            )
        upscaler = RealESRGAN(
            scale=netscale,
            model_path=esrgan_model_path,
            model=rrdbnet_model,
            half=False,
            tile=self.tile_size,
        )

            # prepare image - Real-ESRGAN uses cv2 internally, and cv2 uses BGR vs RGB for PIL
            # TODO: This strips the alpha... is that okay?
            cv2_image = cv2.cvtColor(np.array(image.convert("RGB")), cv2.COLOR_RGB2BGR)
            upscaled_image = upscaler.upscale(cv2_image)
        # prepare image - Real-ESRGAN uses cv2 internally, and cv2 uses BGR vs RGB for PIL
        # TODO: This strips the alpha... is that okay?
        cv2_image = cv2.cvtColor(np.array(image.convert("RGB")), cv2.COLOR_RGB2BGR)
        upscaled_image = upscaler.upscale(cv2_image)
        pil_image = Image.fromarray(cv2.cvtColor(upscaled_image, cv2.COLOR_BGR2RGB)).convert("RGBA")

            pil_image = Image.fromarray(cv2.cvtColor(upscaled_image, cv2.COLOR_BGR2RGB)).convert("RGBA")
        TorchDevice.empty_cache()

        image_dto = context.images.save(image=pil_image)


@@ -86,7 +86,6 @@ class InvokeAIAppConfig(BaseSettings):
        patchmatch: Enable patchmatch inpaint code.
        models_dir: Path to the models directory.
        convert_cache_dir: Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and stored on disk at this location.
        download_cache_dir: Path to the directory that contains dynamically downloaded models.
        legacy_conf_dir: Path to directory of legacy checkpoint config files.
        db_dir: Path to InvokeAI databases directory.
        outputs_dir: Path to directory for outputs.
@@ -113,7 +112,6 @@ class InvokeAIAppConfig(BaseSettings):
        force_tiled_decode: Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty).
        pil_compress_level: The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = no compression, 1 = fastest with slightly larger filesize, 9 = slowest with smallest filesize. 1 is typically the best setting.
        max_queue_size: Maximum number of items in the session queue.
        clear_queue_on_startup: Empties session queue on startup.
        allow_nodes: List of nodes to allow. Omit to allow all.
        deny_nodes: List of nodes to deny. Omit to deny none.
        node_cache_size: How many cached nodes to keep in memory.
@@ -148,8 +146,7 @@ class InvokeAIAppConfig(BaseSettings):

    # PATHS
    models_dir: Path = Field(default=Path("models"), description="Path to the models directory.")
    convert_cache_dir: Path = Field(default=Path("models/.convert_cache"), description="Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and stored on disk at this location.")
    download_cache_dir: Path = Field(default=Path("models/.download_cache"), description="Path to the directory that contains dynamically downloaded models.")
    convert_cache_dir: Path = Field(default=Path("models/.cache"), description="Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and stored on disk at this location.")
    legacy_conf_dir: Path = Field(default=Path("configs"), description="Path to directory of legacy checkpoint config files.")
    db_dir: Path = Field(default=Path("databases"), description="Path to InvokeAI databases directory.")
    outputs_dir: Path = Field(default=Path("outputs"), description="Path to directory for outputs.")
@@ -187,7 +184,6 @@ class InvokeAIAppConfig(BaseSettings):
    force_tiled_decode: bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty).")
    pil_compress_level: int = Field(default=1, description="The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = no compression, 1 = fastest with slightly larger filesize, 9 = slowest with smallest filesize. 1 is typically the best setting.")
    max_queue_size: int = Field(default=10000, gt=0, description="Maximum number of items in the session queue.")
    clear_queue_on_startup: bool = Field(default=False, description="Empties session queue on startup.")

    # NODES
    allow_nodes: Optional[list[str]] = Field(default=None, description="List of nodes to allow. Omit to allow all.")
@@ -307,11 +303,6 @@ class InvokeAIAppConfig(BaseSettings):
        """Path to the converted models cache directory, resolved to an absolute path."""
        return self._resolve(self.convert_cache_dir)

    @property
    def download_cache_path(self) -> Path:
        """Path to the downloaded models directory, resolved to an absolute path."""
        return self._resolve(self.download_cache_dir)

    @property
    def custom_nodes_path(self) -> Path:
        """Path to the custom nodes directory, resolved to an absolute path."""

@@ -1,17 +1,10 @@
"""Init file for download queue."""

from .download_base import (
    DownloadJob,
    DownloadJobStatus,
    DownloadQueueServiceBase,
    MultiFileDownloadJob,
    UnknownJobIDException,
)
from .download_base import DownloadJob, DownloadJobStatus, DownloadQueueServiceBase, UnknownJobIDException
from .download_default import DownloadQueueService, TqdmProgress

__all__ = [
    "DownloadJob",
    "MultiFileDownloadJob",
    "DownloadQueueServiceBase",
    "DownloadQueueService",
    "TqdmProgress",

@@ -5,13 +5,11 @@ from abc import ABC, abstractmethod
from enum import Enum
from functools import total_ordering
from pathlib import Path
from typing import Any, Callable, List, Optional, Set, Union
from typing import Any, Callable, List, Optional

from pydantic import BaseModel, Field, PrivateAttr
from pydantic.networks import AnyHttpUrl

from invokeai.backend.model_manager.metadata import RemoteModelFile


class DownloadJobStatus(str, Enum):
    """State of a download job."""
@@ -35,23 +33,30 @@ class ServiceInactiveException(Exception):
    """This exception is raised when a user attempts to initiate a download before the service is started."""


SingleFileDownloadEventHandler = Callable[["DownloadJob"], None]
SingleFileDownloadExceptionHandler = Callable[["DownloadJob", Optional[Exception]], None]
MultiFileDownloadEventHandler = Callable[["MultiFileDownloadJob"], None]
MultiFileDownloadExceptionHandler = Callable[["MultiFileDownloadJob", Optional[Exception]], None]
DownloadEventHandler = Union[SingleFileDownloadEventHandler, MultiFileDownloadEventHandler]
DownloadExceptionHandler = Union[SingleFileDownloadExceptionHandler, MultiFileDownloadExceptionHandler]
DownloadEventHandler = Callable[["DownloadJob"], None]
DownloadExceptionHandler = Callable[["DownloadJob", Optional[Exception]], None]


class DownloadJobBase(BaseModel):
    """Base of classes to monitor and control downloads."""
@total_ordering
class DownloadJob(BaseModel):
    """Class to monitor and control a model download request."""

    # required variables to be passed in on creation
    source: AnyHttpUrl = Field(description="Where to download from. Specific types specified in child classes.")
    dest: Path = Field(description="Destination of downloaded model on local disk; a directory or file path")
    access_token: Optional[str] = Field(default=None, description="authorization token for protected resources")
    # automatically assigned on creation
    id: int = Field(description="Numeric ID of this job", default=-1)  # default id is a sentinel
    priority: int = Field(default=10, description="Queue priority; lower values are higher priority")

    dest: Path = Field(description="Initial destination of downloaded model on local disk; a directory or file path")
    download_path: Optional[Path] = Field(default=None, description="Final location of downloaded file or directory")
    # set internally during download process
    status: DownloadJobStatus = Field(default=DownloadJobStatus.WAITING, description="Status of the download")
    download_path: Optional[Path] = Field(default=None, description="Final location of downloaded file")
    job_started: Optional[str] = Field(default=None, description="Timestamp for when the download job started")
    job_ended: Optional[str] = Field(
        default=None, description="Timestamp for when the download job ended (completed or errored)"
    )
    content_type: Optional[str] = Field(default=None, description="Content type of downloaded file")
    bytes: int = Field(default=0, description="Bytes downloaded so far")
    total_bytes: int = Field(default=0, description="Total file size (bytes)")

@@ -69,6 +74,14 @@ class DownloadJobBase(BaseModel):
    _on_cancelled: Optional[DownloadEventHandler] = PrivateAttr(default=None)
    _on_error: Optional[DownloadExceptionHandler] = PrivateAttr(default=None)

    def __hash__(self) -> int:
        """Return the hash of the string representation of this object, for indexing."""
        return hash(str(self))

    def __le__(self, other: "DownloadJob") -> bool:
        """Return True if this job's priority is less than or equal to another's."""
        return self.priority <= other.priority
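    # Illustrative sketch (not part of the diff): with @total_ordering and __le__, jobs order correctly inside a
    # PriorityQueue, so lower `priority` values are serviced first. Assuming two hypothetical jobs:
    #
    #   from queue import PriorityQueue
    #   q: "PriorityQueue[DownloadJob]" = PriorityQueue()
    #   q.put(DownloadJob(source=AnyHttpUrl("https://example.com/b.bin"), dest=Path("b.bin"), priority=20))
    #   q.put(DownloadJob(source=AnyHttpUrl("https://example.com/a.bin"), dest=Path("a.bin"), priority=1))
    #   q.get().priority  # -> 1; the lower-valued (higher-priority) job comes out first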

    def cancel(self) -> None:
        """Call to cancel the job."""
        self._cancelled = True
@@ -85,11 +98,6 @@ class DownloadJobBase(BaseModel):
        """Return true if job completed without errors."""
        return self.status == DownloadJobStatus.COMPLETED

    @property
    def waiting(self) -> bool:
        """Return true if the job is waiting to run."""
        return self.status == DownloadJobStatus.WAITING

    @property
    def running(self) -> bool:
        """Return true if the job is running."""
@@ -146,37 +154,6 @@ class DownloadJobBase(BaseModel):
        self._on_cancelled = on_cancelled


@total_ordering
class DownloadJob(DownloadJobBase):
    """Class to monitor and control a model download request."""

    # required variables to be passed in on creation
    source: AnyHttpUrl = Field(description="Where to download from. Specific types specified in child classes.")
    access_token: Optional[str] = Field(default=None, description="authorization token for protected resources")
    priority: int = Field(default=10, description="Queue priority; lower values are higher priority")

    # set internally during download process
    job_started: Optional[str] = Field(default=None, description="Timestamp for when the download job started")
    job_ended: Optional[str] = Field(
        default=None, description="Timestamp for when the download job ended (completed or errored)"
    )
    content_type: Optional[str] = Field(default=None, description="Content type of downloaded file")

    def __hash__(self) -> int:
        """Return the hash of the string representation of this object, for indexing."""
        return hash(str(self))

    def __le__(self, other: "DownloadJob") -> bool:
        """Return True if this job's priority is less than or equal to another's."""
        return self.priority <= other.priority


class MultiFileDownloadJob(DownloadJobBase):
    """Class to monitor and control multifile downloads."""

    download_parts: Set[DownloadJob] = Field(default_factory=set, description="Set of component download parts.")


class DownloadQueueServiceBase(ABC):
    """Multithreaded queue for downloading models via URL."""

@@ -224,48 +201,6 @@ class DownloadQueueServiceBase(ABC):
        """
        pass

    @abstractmethod
    def multifile_download(
        self,
        parts: List[RemoteModelFile],
        dest: Path,
        access_token: Optional[str] = None,
        submit_job: bool = True,
        on_start: Optional[DownloadEventHandler] = None,
        on_progress: Optional[DownloadEventHandler] = None,
        on_complete: Optional[DownloadEventHandler] = None,
        on_cancelled: Optional[DownloadEventHandler] = None,
        on_error: Optional[DownloadExceptionHandler] = None,
    ) -> MultiFileDownloadJob:
        """
        Create and enqueue a multifile download job.

        :param parts: Set of URL / filename pairs
        :param dest: Path to download to. See below.
        :param access_token: Access token to download the indicated files. If not provided,
           each file's URL may be matched to an access token using the config file matching
           system.
        :param submit_job: If true [default] then submit the job for execution. Otherwise,
           you will need to pass the job to submit_multifile_download().
        :param on_start, on_progress, on_complete, on_error: Callbacks for the indicated
           events.
        :returns: A MultiFileDownloadJob object for monitoring the state of the download.

        The `dest` argument is a Path object pointing to a directory. All downloads
        will be placed inside this directory. The callbacks will receive the
        MultiFileDownloadJob.
        """
        pass

    @abstractmethod
    def submit_multifile_download(self, job: MultiFileDownloadJob) -> None:
        """
        Enqueue a previously-created multi-file download job.

        :param job: A MultiFileDownloadJob created with multifile_download()
        """
        pass

    @abstractmethod
    def submit_download_job(
        self,
@@ -317,7 +252,7 @@ class DownloadQueueServiceBase(ABC):
        pass

    @abstractmethod
    def cancel_job(self, job: DownloadJobBase) -> None:
    def cancel_job(self, job: DownloadJob) -> None:
        """Cancel the job, clearing partial downloads and putting it into ERROR state."""
        pass

@@ -327,7 +262,7 @@ class DownloadQueueServiceBase(ABC):
        pass

    @abstractmethod
    def wait_for_job(self, job: DownloadJobBase, timeout: int = 0) -> DownloadJobBase:
    def wait_for_job(self, job: DownloadJob, timeout: int = 0) -> DownloadJob:
        """Wait until the indicated download job has reached a terminal state.

        This will block until the indicated install job has completed,

@@ -8,32 +8,30 @@ import time
import traceback
from pathlib import Path
from queue import Empty, PriorityQueue
from typing import Any, Dict, List, Literal, Optional, Set
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Set

import requests
from pydantic.networks import AnyHttpUrl
from requests import HTTPError
from tqdm import tqdm

from invokeai.app.services.config import InvokeAIAppConfig, get_config
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.util.misc import get_iso_timestamp
from invokeai.backend.model_manager.metadata import RemoteModelFile
from invokeai.backend.util.logging import InvokeAILogger

from .download_base import (
    DownloadEventHandler,
    DownloadExceptionHandler,
    DownloadJob,
    DownloadJobBase,
    DownloadJobCancelledException,
    DownloadJobStatus,
    DownloadQueueServiceBase,
    MultiFileDownloadJob,
    ServiceInactiveException,
    UnknownJobIDException,
)

if TYPE_CHECKING:
    from invokeai.app.services.events.events_base import EventServiceBase

# Maximum number of bytes to download during each call to requests.iter_content()
DOWNLOAD_CHUNK_SIZE = 100000

@@ -44,24 +42,20 @@ class DownloadQueueService(DownloadQueueServiceBase):
    def __init__(
        self,
        max_parallel_dl: int = 5,
        app_config: Optional[InvokeAIAppConfig] = None,
        event_bus: Optional["EventServiceBase"] = None,
        requests_session: Optional[requests.sessions.Session] = None,
    ):
        """
        Initialize DownloadQueue.

        :param app_config: InvokeAIAppConfig object
        :param max_parallel_dl: Number of simultaneous downloads allowed [5].
        :param requests_session: Optional requests.sessions.Session object, for unit tests.
        """
        self._app_config = app_config or get_config()
        self._jobs: Dict[int, DownloadJob] = {}
        self._download_part2parent: Dict[AnyHttpUrl, MultiFileDownloadJob] = {}
        self._next_job_id = 0
        self._queue: PriorityQueue[DownloadJob] = PriorityQueue()
        self._stop_event = threading.Event()
        self._job_terminated_event = threading.Event()
        self._job_completed_event = threading.Event()
        self._worker_pool: Set[threading.Thread] = set()
        self._lock = threading.Lock()
        self._logger = InvokeAILogger.get_logger("DownloadQueueService")
@@ -113,16 +107,18 @@ class DownloadQueueService(DownloadQueueServiceBase):
            raise ServiceInactiveException(
                "The download service is not currently accepting requests. Please call start() to initialize the service."
            )
        job.id = self._next_id()
        job.set_callbacks(
            on_start=on_start,
            on_progress=on_progress,
            on_complete=on_complete,
            on_cancelled=on_cancelled,
            on_error=on_error,
        )
        self._jobs[job.id] = job
        self._queue.put(job)
        with self._lock:
            job.id = self._next_job_id
            self._next_job_id += 1
            job.set_callbacks(
                on_start=on_start,
                on_progress=on_progress,
                on_complete=on_complete,
                on_cancelled=on_cancelled,
                on_error=on_error,
            )
            self._jobs[job.id] = job
            self._queue.put(job)

    def download(
        self,
@@ -145,7 +141,7 @@ class DownloadQueueService(DownloadQueueServiceBase):
            source=source,
            dest=dest,
            priority=priority,
            access_token=access_token or self._lookup_access_token(source),
            access_token=access_token,
        )
        self.submit_download_job(
            job,
@@ -157,63 +153,10 @@ class DownloadQueueService(DownloadQueueServiceBase):
        )
        return job

    def multifile_download(
        self,
        parts: List[RemoteModelFile],
        dest: Path,
        access_token: Optional[str] = None,
        submit_job: bool = True,
        on_start: Optional[DownloadEventHandler] = None,
        on_progress: Optional[DownloadEventHandler] = None,
        on_complete: Optional[DownloadEventHandler] = None,
        on_cancelled: Optional[DownloadEventHandler] = None,
        on_error: Optional[DownloadExceptionHandler] = None,
    ) -> MultiFileDownloadJob:
        mfdj = MultiFileDownloadJob(dest=dest, id=self._next_id())
        mfdj.set_callbacks(
            on_start=on_start,
            on_progress=on_progress,
            on_complete=on_complete,
            on_cancelled=on_cancelled,
            on_error=on_error,
        )

        for part in parts:
            url = part.url
            path = dest / part.path
            assert path.is_relative_to(dest), "only relative download paths accepted"
            job = DownloadJob(
                source=url,
                dest=path,
                access_token=access_token,
            )
            mfdj.download_parts.add(job)
            self._download_part2parent[job.source] = mfdj
        if submit_job:
            self.submit_multifile_download(mfdj)
        return mfdj

    def submit_multifile_download(self, job: MultiFileDownloadJob) -> None:
        for download_job in job.download_parts:
            self.submit_download_job(
                download_job,
                on_start=self._mfd_started,
                on_progress=self._mfd_progress,
                on_complete=self._mfd_complete,
                on_cancelled=self._mfd_cancelled,
                on_error=self._mfd_error,
            )

    def join(self) -> None:
        """Wait for all jobs to complete."""
        self._queue.join()

    def _next_id(self) -> int:
        with self._lock:
            id = self._next_job_id
            self._next_job_id += 1
        return id

    def list_jobs(self) -> List[DownloadJob]:
        """List all the jobs."""
        return list(self._jobs.values())
@@ -235,14 +178,14 @@ class DownloadQueueService(DownloadQueueServiceBase):
        except KeyError as excp:
            raise UnknownJobIDException("Unrecognized job") from excp

    def cancel_job(self, job: DownloadJobBase) -> None:
    def cancel_job(self, job: DownloadJob) -> None:
        """
        Cancel the indicated job.

        If it is running it will be stopped.
        job.status will be set to DownloadJobStatus.CANCELLED
        """
        if job.status in [DownloadJobStatus.WAITING, DownloadJobStatus.RUNNING]:
            with self._lock:
                job.cancel()

    def cancel_all_jobs(self) -> None:
@@ -251,12 +194,12 @@ class DownloadQueueService(DownloadQueueServiceBase):
            if not job.in_terminal_state:
                self.cancel_job(job)

    def wait_for_job(self, job: DownloadJobBase, timeout: int = 0) -> DownloadJobBase:
    def wait_for_job(self, job: DownloadJob, timeout: int = 0) -> DownloadJob:
        """Block until the indicated job has reached a terminal state, or until the timeout limit is reached."""
        start = time.time()
        while not job.in_terminal_state:
            if self._job_terminated_event.wait(timeout=0.25):  # in case we miss an event
                self._job_terminated_event.clear()
            if self._job_completed_event.wait(timeout=0.25):  # in case we miss an event
                self._job_completed_event.clear()
            if timeout > 0 and time.time() - start > timeout:
                raise TimeoutError("Timeout exceeded")
        return job
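    # Illustrative usage sketch (not part of the diff); the URL and destination are hypothetical:
    #
    #   job = queue.download(source=AnyHttpUrl("https://example.com/model.safetensors"), dest=Path("/tmp"))
    #   try:
    #       queue.wait_for_job(job, timeout=600)  # block for up to 10 minutes
    #   except TimeoutError:
    #       queue.cancel_job(job)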

@@ -285,25 +228,22 @@ class DownloadQueueService(DownloadQueueServiceBase):
                job.job_started = get_iso_timestamp()
                self._do_download(job)
                self._signal_job_complete(job)
            except DownloadJobCancelledException:
                self._signal_job_cancelled(job)
                self._cleanup_cancelled_job(job)
            except Exception as excp:
            except (OSError, HTTPError) as excp:
                job.error_type = excp.__class__.__name__ + f"({str(excp)})"
                job.error = traceback.format_exc()
                self._signal_job_error(job, excp)
            except DownloadJobCancelledException:
                self._signal_job_cancelled(job)
                self._cleanup_cancelled_job(job)

            finally:
                job.job_ended = get_iso_timestamp()
                self._job_terminated_event.set()  # signal a change to terminal state
                self._download_part2parent.pop(job.source, None)  # if this is a subpart of a multipart job, remove it
                self._job_terminated_event.set()
                self._job_completed_event.set()  # signal a change to terminal state
                self._queue.task_done()

        self._logger.debug(f"Download queue worker thread {threading.current_thread().name} exiting.")

    def _do_download(self, job: DownloadJob) -> None:
        """Do the actual download."""

        url = job.source
        header = {"Authorization": f"Bearer {job.access_token}"} if job.access_token else {}
        open_mode = "wb"
@@ -395,29 +335,38 @@ class DownloadQueueService(DownloadQueueServiceBase):
    def _in_progress_path(self, path: Path) -> Path:
        return path.with_name(path.name + ".downloading")

    def _lookup_access_token(self, source: AnyHttpUrl) -> Optional[str]:
        # Pull the token from config if it exists and matches the URL
        token = None
        for pair in self._app_config.remote_api_tokens or []:
            if re.search(pair.url_regex, str(source)):
                token = pair.token
                break
        return token
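    # Illustrative sketch (not part of the diff): remote_api_tokens pairs a URL regex with a token, so a config
    # entry along these (hypothetical) lines would attach the token to any matching download source:
    #
    #   remote_api_tokens:
    #     - url_regex: "civitai.com"
    #       token: "my-secret-token"
    #
    #   re.search("civitai.com", "https://civitai.com/api/download/models/123")  # matches, so the token is used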

    def _signal_job_started(self, job: DownloadJob) -> None:
        job.status = DownloadJobStatus.RUNNING
        self._execute_cb(job, "on_start")
        if job.on_start:
            try:
                job.on_start(job)
            except Exception as e:
                self._logger.error(
                    f"An error occurred while processing the on_start callback: {traceback.format_exception(e)}"
                )
        if self._event_bus:
            self._event_bus.emit_download_started(job)

    def _signal_job_progress(self, job: DownloadJob) -> None:
        self._execute_cb(job, "on_progress")
        if job.on_progress:
            try:
                job.on_progress(job)
            except Exception as e:
                self._logger.error(
                    f"An error occurred while processing the on_progress callback: {traceback.format_exception(e)}"
                )
        if self._event_bus:
            self._event_bus.emit_download_progress(job)

    def _signal_job_complete(self, job: DownloadJob) -> None:
        job.status = DownloadJobStatus.COMPLETED
        self._execute_cb(job, "on_complete")
        if job.on_complete:
            try:
                job.on_complete(job)
            except Exception as e:
                self._logger.error(
                    f"An error occurred while processing the on_complete callback: {traceback.format_exception(e)}"
                )
        if self._event_bus:
            self._event_bus.emit_download_complete(job)

@@ -425,21 +374,26 @@ class DownloadQueueService(DownloadQueueServiceBase):
        if job.status not in [DownloadJobStatus.RUNNING, DownloadJobStatus.WAITING]:
            return
        job.status = DownloadJobStatus.CANCELLED
        self._execute_cb(job, "on_cancelled")
        if job.on_cancelled:
            try:
                job.on_cancelled(job)
            except Exception as e:
                self._logger.error(
                    f"An error occurred while processing the on_cancelled callback: {traceback.format_exception(e)}"
                )
        if self._event_bus:
            self._event_bus.emit_download_cancelled(job)

        # if this is a multifile download, then signal the parent
        if parent_job := self._download_part2parent.get(job.source, None):
            if not parent_job.in_terminal_state:
                parent_job.status = DownloadJobStatus.CANCELLED
                self._execute_cb(parent_job, "on_cancelled")

    def _signal_job_error(self, job: DownloadJob, excp: Optional[Exception] = None) -> None:
        job.status = DownloadJobStatus.ERROR
        self._logger.error(f"{str(job.source)}: {traceback.format_exception(excp)}")
        self._execute_cb(job, "on_error", excp)

        if job.on_error:
            try:
                job.on_error(job, excp)
            except Exception as e:
                self._logger.error(
                    f"An error occurred while processing the on_error callback: {traceback.format_exception(e)}"
                )
        if self._event_bus:
            self._event_bus.emit_download_error(job)

@@ -452,97 +406,6 @@ class DownloadQueueService(DownloadQueueServiceBase):
        except OSError as excp:
            self._logger.warning(excp)

    ########################################
    # callbacks used for multifile downloads
    ########################################
    def _mfd_started(self, download_job: DownloadJob) -> None:
        self._logger.info(f"File download started: {download_job.source}")
        with self._lock:
            mf_job = self._download_part2parent[download_job.source]
            if mf_job.waiting:
                mf_job.total_bytes = sum(x.total_bytes for x in mf_job.download_parts)
                mf_job.status = DownloadJobStatus.RUNNING
                assert download_job.download_path is not None
                path_relative_to_destdir = download_job.download_path.relative_to(mf_job.dest)
                mf_job.download_path = (
                    mf_job.dest / path_relative_to_destdir.parts[0]
                )  # keep just the first component of the path
                self._execute_cb(mf_job, "on_start")

    def _mfd_progress(self, download_job: DownloadJob) -> None:
        with self._lock:
            mf_job = self._download_part2parent[download_job.source]
            if mf_job.cancelled:
                for part in mf_job.download_parts:
                    self.cancel_job(part)
            elif mf_job.running:
                mf_job.total_bytes = sum(x.total_bytes for x in mf_job.download_parts)
                mf_job.bytes = sum(x.total_bytes for x in mf_job.download_parts)
                self._execute_cb(mf_job, "on_progress")

    def _mfd_complete(self, download_job: DownloadJob) -> None:
        self._logger.info(f"Download complete: {download_job.source}")
        with self._lock:
            mf_job = self._download_part2parent[download_job.source]

            # are there any more active jobs left in this task?
            if mf_job.running and all(x.complete for x in mf_job.download_parts):
                mf_job.status = DownloadJobStatus.COMPLETED
                self._execute_cb(mf_job, "on_complete")

            # we're done with this sub-job
            self._job_terminated_event.set()

    def _mfd_cancelled(self, download_job: DownloadJob) -> None:
        with self._lock:
            mf_job = self._download_part2parent[download_job.source]
            assert mf_job is not None

            if not mf_job.in_terminal_state:
                self._logger.warning(f"Download cancelled: {download_job.source}")
                mf_job.cancel()

            for s in mf_job.download_parts:
                self.cancel_job(s)

    def _mfd_error(self, download_job: DownloadJob, excp: Optional[Exception] = None) -> None:
        with self._lock:
            mf_job = self._download_part2parent[download_job.source]
            assert mf_job is not None
            if not mf_job.in_terminal_state:
                mf_job.status = download_job.status
                mf_job.error = download_job.error
                mf_job.error_type = download_job.error_type
                self._execute_cb(mf_job, "on_error", excp)
                self._logger.error(
                    f"Cancelling {mf_job.dest} due to an error while downloading {download_job.source}: {str(excp)}"
                )
                for s in [x for x in mf_job.download_parts if x.running]:
                    self.cancel_job(s)
                self._download_part2parent.pop(download_job.source)
                self._job_terminated_event.set()

    def _execute_cb(
        self,
        job: DownloadJob | MultiFileDownloadJob,
        callback_name: Literal[
            "on_start",
            "on_progress",
            "on_complete",
            "on_cancelled",
            "on_error",
        ],
        excp: Optional[Exception] = None,
    ) -> None:
        if callback := getattr(job, callback_name, None):
            args = [job, excp] if excp else [job]
            try:
                callback(*args)
            except Exception as e:
                self._logger.error(
                    f"An error occurred while processing the {callback_name} callback: {traceback.format_exception(e)}"
                )


def get_pc_name_max(directory: str) -> int:
    if hasattr(os, "pathconf"):

@@ -22,7 +22,6 @@ from invokeai.app.services.events.events_common import (
    ModelInstallCompleteEvent,
    ModelInstallDownloadProgressEvent,
    ModelInstallDownloadsCompleteEvent,
    ModelInstallDownloadStartedEvent,
    ModelInstallErrorEvent,
    ModelInstallStartedEvent,
    ModelLoadCompleteEvent,
@@ -35,6 +34,7 @@ from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineInterme
if TYPE_CHECKING:
    from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput
    from invokeai.app.services.download.download_base import DownloadJob
    from invokeai.app.services.events.events_common import EventBase
    from invokeai.app.services.model_install.model_install_common import ModelInstallJob
    from invokeai.app.services.session_processor.session_processor_common import ProgressImage
    from invokeai.app.services.session_queue.session_queue_common import (
@@ -145,10 +145,6 @@ class EventServiceBase:

    # region Model install

    def emit_model_install_download_started(self, job: "ModelInstallJob") -> None:
        """Emitted when a remote model's download starts (remote models only)."""
        self.dispatch(ModelInstallDownloadStartedEvent.build(job))

    def emit_model_install_download_progress(self, job: "ModelInstallJob") -> None:
        """Emitted at intervals while the install job is in progress (remote models only)."""
        self.dispatch(ModelInstallDownloadProgressEvent.build(job))

@@ -417,42 +417,6 @@ class ModelLoadCompleteEvent(ModelEventBase):
        return cls(config=config, submodel_type=submodel_type)


@payload_schema.register
class ModelInstallDownloadStartedEvent(ModelEventBase):
    """Event model for model_install_download_started."""

    __event_name__ = "model_install_download_started"

    id: int = Field(description="The ID of the install job")
    source: str = Field(description="Source of the model; local path, repo_id or url")
    local_path: str = Field(description="Where the model is downloading to")
    bytes: int = Field(description="Number of bytes downloaded so far")
    total_bytes: int = Field(description="Total size of the download, including all files")
    parts: list[dict[str, int | str]] = Field(
        description="Progress of downloading the URLs that comprise the model, if any"
    )

    @classmethod
    def build(cls, job: "ModelInstallJob") -> "ModelInstallDownloadStartedEvent":
        parts: list[dict[str, str | int]] = [
            {
                "url": str(x.source),
                "local_path": str(x.download_path),
                "bytes": x.bytes,
                "total_bytes": x.total_bytes,
            }
            for x in job.download_parts
        ]
        return cls(
            id=job.id,
            source=str(job.source),
            local_path=job.local_path.as_posix(),
            parts=parts,
            bytes=job.bytes,
            total_bytes=job.total_bytes,
        )
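    # Illustrative sketch (not part of the diff): a serialized model_install_download_started payload for a
    # hypothetical two-file download would look roughly like:
    #
    #   {
    #       "id": 7,
    #       "source": "https://example.com/repo",
    #       "local_path": "/invokeai/models/.download_cache/some-model",
    #       "bytes": 0,
    #       "total_bytes": 5368709120,
    #       "parts": [
    #           {"url": "https://example.com/repo/model.safetensors", "local_path": "...", "bytes": 0, "total_bytes": 5368451072},
    #           {"url": "https://example.com/repo/config.json", "local_path": "...", "bytes": 0, "total_bytes": 258048},
    #       ],
    #   }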


@payload_schema.register
class ModelInstallDownloadProgressEvent(ModelEventBase):
    """Event model for model_install_download_progress."""

@@ -41,7 +41,6 @@ class ImageRecordStorageBase(ABC):
        categories: Optional[list[ImageCategory]] = None,
        is_intermediate: Optional[bool] = None,
        board_id: Optional[str] = None,
        search_term: Optional[str] = None,
    ) -> OffsetPaginatedResults[ImageRecord]:
        """Gets a page of image records."""
        pass

@@ -148,7 +148,6 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
        categories: Optional[list[ImageCategory]] = None,
        is_intermediate: Optional[bool] = None,
        board_id: Optional[str] = None,
        search_term: Optional[str] = None,
    ) -> OffsetPaginatedResults[ImageRecord]:
        try:
            self._lock.acquire()
@@ -209,13 +208,6 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
            """
            query_params.append(board_id)

        # Search term condition
        if search_term:
            query_conditions += """--sql
                AND json_extract(images.metadata, '$') LIKE ?
            """
            query_params.append(f'%{search_term}%')

        query_pagination = """--sql
            ORDER BY images.starred DESC, images.created_at DESC LIMIT ? OFFSET ?
        """

@@ -120,7 +120,6 @@ class ImageServiceABC(ABC):
        categories: Optional[list[ImageCategory]] = None,
        is_intermediate: Optional[bool] = None,
        board_id: Optional[str] = None,
        search_term: Optional[str] = None,
    ) -> OffsetPaginatedResults[ImageDTO]:
        """Gets a paginated list of image DTOs."""
        pass

@@ -206,7 +206,6 @@ class ImageService(ImageServiceABC):
        categories: Optional[list[ImageCategory]] = None,
        is_intermediate: Optional[bool] = None,
        board_id: Optional[str] = None,
        search_term: Optional[str] = None,
    ) -> OffsetPaginatedResults[ImageDTO]:
        try:
            results = self.__invoker.services.image_records.get_many(
@@ -216,7 +215,6 @@ class ImageService(ImageServiceABC):
                categories,
                is_intermediate,
                board_id,
                search_term,
            )

            image_dtos = [

@@ -13,7 +13,7 @@ from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_install.model_install_common import ModelInstallJob, ModelSource
from invokeai.app.services.model_records import ModelRecordServiceBase
from invokeai.backend.model_manager import AnyModelConfig
from invokeai.backend.model_manager.config import AnyModelConfig


class ModelInstallServiceBase(ABC):
@@ -243,11 +243,12 @@ class ModelInstallServiceBase(ABC):
        """

    @abstractmethod
    def download_and_cache_model(self, source: str | AnyHttpUrl) -> Path:
    def download_and_cache(self, source: Union[str, AnyHttpUrl], access_token: Optional[str] = None) -> Path:
        """
        Download the model file located at source to the models cache and return its Path.

        :param source: A string representing a URL or repo_id.
        :param source: A Url or a string that can be converted into one.
        :param access_token: Optional access token to access restricted resources.

        The model file will be downloaded into the system-wide model cache
        (`models/.cache`) if it isn't already there. Note that the model cache

@@ -8,7 +8,7 @@ from pydantic import BaseModel, Field, PrivateAttr, field_validator
from pydantic.networks import AnyHttpUrl
from typing_extensions import Annotated

from invokeai.app.services.download import DownloadJob, MultiFileDownloadJob
from invokeai.app.services.download import DownloadJob
from invokeai.backend.model_manager import AnyModelConfig, ModelRepoVariant
from invokeai.backend.model_manager.config import ModelSourceType
from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata
@@ -26,6 +26,13 @@ class InstallStatus(str, Enum):
    CANCELLED = "cancelled"  # terminated with an error message


class ModelInstallPart(BaseModel):
    url: AnyHttpUrl
    path: Path
    bytes: int = 0
    total_bytes: int = 0


class UnknownInstallJobException(Exception):
    """Raised when the status of an unknown job is requested."""

@@ -162,7 +169,6 @@ class ModelInstallJob(BaseModel):
    )
    # internal flags and transitory settings
    _install_tmpdir: Optional[Path] = PrivateAttr(default=None)
    _multifile_job: Optional[MultiFileDownloadJob] = PrivateAttr(default=None)
    _exception: Optional[Exception] = PrivateAttr(default=None)

    def set_error(self, e: Exception) -> None:

@@ -5,22 +5,21 @@ import os
import re
import threading
import time
from hashlib import sha256
from pathlib import Path
from queue import Empty, Queue
from shutil import copyfile, copytree, move, rmtree
from tempfile import mkdtemp
from typing import Any, Dict, List, Optional, Tuple, Type, Union
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union

import torch
import yaml
from huggingface_hub import HfFolder
from pydantic.networks import AnyHttpUrl
from pydantic_core import Url
from requests import Session

from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.download import DownloadQueueServiceBase, MultiFileDownloadJob
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.download import DownloadJob, DownloadQueueServiceBase, TqdmProgress
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
from invokeai.app.services.model_records import DuplicateModelException, ModelRecordServiceBase
@@ -45,7 +44,6 @@ from invokeai.backend.model_manager.search import ModelSearch
from invokeai.backend.util import InvokeAILogger
from invokeai.backend.util.catch_sigint import catch_sigint
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.util import slugify

from .model_install_common import (
    MODEL_SOURCE_TO_TYPE_MAP,
@@ -60,6 +58,9 @@ from .model_install_common import (

TMPDIR_PREFIX = "tmpinstall_"

if TYPE_CHECKING:
    from invokeai.app.services.events.events_base import EventServiceBase


class ModelInstallService(ModelInstallServiceBase):
    """Class for InvokeAI model installation."""
@@ -90,7 +91,7 @@ class ModelInstallService(ModelInstallServiceBase):
        self._downloads_changed_event = threading.Event()
        self._install_completed_event = threading.Event()
        self._download_queue = download_queue
        self._download_cache: Dict[int, ModelInstallJob] = {}
        self._download_cache: Dict[AnyHttpUrl, ModelInstallJob] = {}
        self._running = False
        self._session = session
        self._install_thread: Optional[threading.Thread] = None
@@ -209,12 +210,33 @@ class ModelInstallService(ModelInstallServiceBase):
        access_token: Optional[str] = None,
        inplace: Optional[bool] = False,
    ) -> ModelInstallJob:
        """Install a model using pattern matching to infer the type of source."""
        source_obj = self._guess_source(source)
        if isinstance(source_obj, LocalModelSource):
            source_obj.inplace = inplace
        elif isinstance(source_obj, HFModelSource) or isinstance(source_obj, URLModelSource):
            source_obj.access_token = access_token
        variants = "|".join(ModelRepoVariant.__members__.values())
        hf_repoid_re = f"^([^/:]+/[^/:]+)(?::({variants})?(?::/?([^:]+))?)?$"
        source_obj: Optional[StringLikeSource] = None

        if Path(source).exists():  # A local file or directory
            source_obj = LocalModelSource(path=Path(source), inplace=inplace)
        elif match := re.match(hf_repoid_re, source):
            source_obj = HFModelSource(
                repo_id=match.group(1),
                variant=match.group(2) if match.group(2) else None,  # pass None rather than ''
                subfolder=Path(match.group(3)) if match.group(3) else None,
                access_token=access_token,
            )
        elif re.match(r"^https?://[^/]+", source):
            # Pull the token from config if it exists and matches the URL
            _token = access_token
            if _token is None:
                for pair in self.app_config.remote_api_tokens or []:
                    if re.search(pair.url_regex, source):
                        _token = pair.token
                        break
            source_obj = URLModelSource(
                url=AnyHttpUrl(source),
                access_token=_token,
            )
        else:
            raise ValueError(f"Unsupported model source: '{source}'")
        return self.import_model(source_obj, config)

    def import_model(self, source: ModelSource, config: Optional[Dict[str, Any]] = None) -> ModelInstallJob:  # noqa D102
@@ -275,9 +297,8 @@ class ModelInstallService(ModelInstallServiceBase):
    def cancel_job(self, job: ModelInstallJob) -> None:
        """Cancel the indicated job."""
        job.cancel()
        self._logger.warning(f"Cancelling {job.source}")
        if dj := job._multifile_job:
            self._download_queue.cancel_job(dj)
        with self._lock:
            self._cancel_download_parts(job)

    def prune_jobs(self) -> None:
        """Prune all completed and errored jobs."""
@@ -325,7 +346,7 @@ class ModelInstallService(ModelInstallServiceBase):
            legacy_config_path = stanza.get("config")
            if legacy_config_path:
                # In v3, these paths were relative to the root. Migrate them to be relative to the legacy_conf_dir.
                legacy_config_path = self._app_config.root_path / legacy_config_path
                legacy_config_path: Path = self._app_config.root_path / legacy_config_path
                if legacy_config_path.is_relative_to(self._app_config.legacy_conf_path):
                    legacy_config_path = legacy_config_path.relative_to(self._app_config.legacy_conf_path)
            config["config_path"] = str(legacy_config_path)
@@ -365,95 +386,38 @@ class ModelInstallService(ModelInstallServiceBase):
            rmtree(model_path)
        self.unregister(key)

    @classmethod
    def _download_cache_path(cls, source: Union[str, AnyHttpUrl], app_config: InvokeAIAppConfig) -> Path:
        escaped_source = slugify(str(source))
        return app_config.download_cache_path / escaped_source

    def download_and_cache_model(
    def download_and_cache(
        self,
        source: str | AnyHttpUrl,
        source: Union[str, AnyHttpUrl],
        access_token: Optional[str] = None,
        timeout: int = 0,
    ) -> Path:
        """Download the model file located at source to the models cache and return its Path."""
        model_path = self._download_cache_path(str(source), self._app_config)
        model_hash = sha256(str(source).encode("utf-8")).hexdigest()[0:32]
        model_path = self._app_config.convert_cache_path / model_hash

        # We expect the cache directory to contain one and only one downloaded file or directory.
        # We expect the cache directory to contain one and only one downloaded file.
        # We don't know the file's name in advance, as it is set by the download
        # content-disposition header.
        if model_path.exists():
            contents: List[Path] = list(model_path.iterdir())
            contents = [x for x in model_path.iterdir() if x.is_file()]
            if len(contents) > 0:
                return contents[0]

        model_path.mkdir(parents=True, exist_ok=True)
        model_source = self._guess_source(str(source))
        remote_files, _ = self._remote_files_from_source(model_source)
        job = self._multifile_download(
        job = self._download_queue.download(
            source=AnyHttpUrl(str(source)),
            dest=model_path,
            remote_files=remote_files,
            subfolder=model_source.subfolder if isinstance(model_source, HFModelSource) else None,
            access_token=access_token,
            on_progress=TqdmProgress().update,
        )
        files_string = "file" if len(remote_files) == 1 else "files"
        self._logger.info(f"Queuing model download: {source} ({len(remote_files)} {files_string})")
        self._download_queue.wait_for_job(job)
        self._download_queue.wait_for_job(job, timeout)
        if job.complete:
            assert job.download_path is not None
            return job.download_path
        else:
            raise Exception(job.error)
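    # Illustrative usage sketch (not part of the diff): the two sides of this diff name the method
    # download_and_cache_model and download_and_cache respectively; the URL below is hypothetical.
    #
    #   installer: ModelInstallServiceBase = ...
    #   cached_path = installer.download_and_cache_model("https://example.com/upscaler.safetensors")
    #   print(cached_path)  # the downloaded file inside the per-source cache directory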

    def _remote_files_from_source(
        self, source: ModelSource
    ) -> Tuple[List[RemoteModelFile], Optional[AnyModelRepoMetadata]]:
        metadata = None
        if isinstance(source, HFModelSource):
            metadata = HuggingFaceMetadataFetch(self._session).from_id(source.repo_id, source.variant)
            assert isinstance(metadata, ModelMetadataWithFiles)
            return (
                metadata.download_urls(
                    variant=source.variant or self._guess_variant(),
                    subfolder=source.subfolder,
                    session=self._session,
                ),
                metadata,
            )

        if isinstance(source, URLModelSource):
            try:
                fetcher = self.get_fetcher_from_url(str(source.url))
                kwargs: dict[str, Any] = {"session": self._session}
                metadata = fetcher(**kwargs).from_url(source.url)
                assert isinstance(metadata, ModelMetadataWithFiles)
                return metadata.download_urls(session=self._session), metadata
            except ValueError:
                pass

            return [RemoteModelFile(url=source.url, path=Path("."), size=0)], None

        raise Exception(f"No files associated with {source}")

    def _guess_source(self, source: str) -> ModelSource:
        """Turn a source string into a ModelSource object."""
        variants = "|".join(ModelRepoVariant.__members__.values())
        hf_repoid_re = f"^([^/:]+/[^/:]+)(?::({variants})?(?::/?([^:]+))?)?$"
        source_obj: Optional[StringLikeSource] = None

        if Path(source).exists():  # A local file or directory
            source_obj = LocalModelSource(path=Path(source))
        elif match := re.match(hf_repoid_re, source):
            source_obj = HFModelSource(
                repo_id=match.group(1),
                variant=ModelRepoVariant(match.group(2)) if match.group(2) else None,  # pass None rather than ''
                subfolder=Path(match.group(3)) if match.group(3) else None,
            )
        elif re.match(r"^https?://[^/]+", source):
            source_obj = URLModelSource(
                url=Url(source),
            )
        else:
            raise ValueError(f"Unsupported model source: '{source}'")
        return source_obj
|
||||
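For orientation: the repo-id regex above accepts "owner/name", optionally followed by ":variant" and ":subfolder". A minimal sketch of the dispatch, assuming an installer instance named installer and that "fp16" is among the ModelRepoVariant values (illustrative, not part of the diff):

# Hypothetical calls showing _guess_source() dispatch:
installer._guess_source("/tmp/model.safetensors")                  # LocalModelSource, if the path exists
installer._guess_source("stabilityai/sdxl-turbo")                  # HFModelSource(repo_id="stabilityai/sdxl-turbo")
installer._guess_source("stabilityai/sdxl-turbo:fp16:vae")         # HFModelSource with variant "fp16", subfolder Path("vae")
installer._guess_source("https://example.com/model.safetensors")   # URLModelSource
installer._guess_source("ftp://example.com/model.bin")             # raises ValueError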

# --------------------------------------------------------------------------------------------
# Internal functions that manage the installer threads
# --------------------------------------------------------------------------------------------
@@ -514,19 +478,16 @@ class ModelInstallService(ModelInstallServiceBase):
job.config_out = self.record_store.get_model(key)
self._signal_job_completed(job)

def _set_error(self, install_job: ModelInstallJob, excp: Exception) -> None:
multifile_download_job = install_job._multifile_job
if multifile_download_job and any(
x.content_type is not None and "text/html" in x.content_type for x in multifile_download_job.download_parts
):
install_job.set_error(
def _set_error(self, job: ModelInstallJob, excp: Exception) -> None:
if any(x.content_type is not None and "text/html" in x.content_type for x in job.download_parts):
job.set_error(
InvalidModelConfigException(
f"At least one file in {install_job.local_path} is an HTML page, not a model. This can happen when an access token is required to download."
f"At least one file in {job.local_path} is an HTML page, not a model. This can happen when an access token is required to download."
)
)
else:
install_job.set_error(excp)
self._signal_job_errored(install_job)
job.set_error(excp)
self._signal_job_errored(job)

# --------------------------------------------------------------------------------------------
# Internal functions that manage the models directory
@@ -552,6 +513,7 @@ class ModelInstallService(ModelInstallServiceBase):
This is typically only used during testing with a new DB or when using the memory DB, because those are the
only situations in which we may have orphaned models in the models directory.
"""

installed_model_paths = {
(self._app_config.models_path / x.path).resolve() for x in self.record_store.all_models()
}
@@ -563,13 +525,8 @@ class ModelInstallService(ModelInstallServiceBase):
if resolved_path in installed_model_paths:
return True
# Skip core models entirely - these aren't registered with the model manager.
for special_directory in [
self.app_config.models_path / "core",
self.app_config.convert_cache_dir,
self.app_config.download_cache_dir,
]:
if resolved_path.is_relative_to(special_directory):
return False
if str(resolved_path).startswith(str(self.app_config.models_path / "core")):
return False
try:
model_id = self.register_path(model_path)
self._logger.info(f"Registered {model_path.name} with id {model_id}")
@@ -684,15 +641,20 @@ class ModelInstallService(ModelInstallServiceBase):
inplace=source.inplace or False,
)

def _import_from_hf(
self,
source: HFModelSource,
config: Optional[Dict[str, Any]] = None,
) -> ModelInstallJob:
def _import_from_hf(self, source: HFModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
# Add user's cached access token to HuggingFace requests
if source.access_token is None:
source.access_token = HfFolder.get_token()
remote_files, metadata = self._remote_files_from_source(source)
source.access_token = source.access_token or HfFolder.get_token()
if not source.access_token:
self._logger.info("No HuggingFace access token present; some models may not be downloadable.")

metadata = HuggingFaceMetadataFetch(self._session).from_id(source.repo_id, source.variant)
assert isinstance(metadata, ModelMetadataWithFiles)
remote_files = metadata.download_urls(
variant=source.variant or self._guess_variant(),
subfolder=source.subfolder,
session=self._session,
)

return self._import_remote_model(
source=source,
config=config,
@@ -700,12 +662,22 @@ class ModelInstallService(ModelInstallServiceBase):
metadata=metadata,
)

def _import_from_url(
self,
source: URLModelSource,
config: Optional[Dict[str, Any]],
) -> ModelInstallJob:
remote_files, metadata = self._remote_files_from_source(source)
def _import_from_url(self, source: URLModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
# URLs from HuggingFace will be handled specially
metadata = None
fetcher = None
try:
fetcher = self.get_fetcher_from_url(str(source.url))
except ValueError:
pass
kwargs: dict[str, Any] = {"session": self._session}
if fetcher is not None:
metadata = fetcher(**kwargs).from_url(source.url)
self._logger.debug(f"metadata={metadata}")
if metadata and isinstance(metadata, ModelMetadataWithFiles):
remote_files = metadata.download_urls(session=self._session)
else:
remote_files = [RemoteModelFile(url=source.url, path=Path("."), size=0)]
return self._import_remote_model(
source=source,
config=config,
@@ -720,9 +692,12 @@ class ModelInstallService(ModelInstallServiceBase):
metadata: Optional[AnyModelRepoMetadata],
config: Optional[Dict[str, Any]],
) -> ModelInstallJob:
# TODO: Replace with tempfile.tmpdir() when multithreading is cleaned up.
# Currently the tmpdir isn't automatically removed at exit because it is
# being held in a daemon thread.
if len(remote_files) == 0:
raise ValueError(f"{source}: No downloadable files found")
destdir = Path(
tmpdir = Path(
mkdtemp(
dir=self._app_config.models_path,
prefix=TMPDIR_PREFIX,
@@ -733,28 +708,55 @@ class ModelInstallService(ModelInstallServiceBase):
source=source,
config_in=config or {},
source_metadata=metadata,
local_path=destdir, # local path may change once the download has started due to content-disposition handling
local_path=tmpdir, # local path may change once the download has started due to content-disposition handling
bytes=0,
total_bytes=0,
)
# remember the temporary directory for later removal
install_job._install_tmpdir = destdir
install_job.total_bytes = sum((x.size or 0) for x in remote_files)
# In the event that there is a subfolder specified in the source,
# we need to remove it from the destination path in order to avoid
# creating unwanted subfolders
if isinstance(source, HFModelSource) and source.subfolder:
root = Path(remote_files[0].path.parts[0])
subfolder = root / source.subfolder
else:
root = Path(".")
subfolder = Path(".")

multifile_job = self._multifile_download(
remote_files=remote_files,
dest=destdir,
subfolder=source.subfolder if isinstance(source, HFModelSource) else None,
access_token=source.access_token,
submit_job=False, # Important! Don't submit the job until we have set our _download_cache dict
)
self._download_cache[multifile_job.id] = install_job
install_job._multifile_job = multifile_job
# we remember the path up to the top of the tmpdir so that it may be
# removed safely at the end of the install process.
install_job._install_tmpdir = tmpdir
assert install_job.total_bytes is not None # to avoid type checking complaints in the loop below

files_string = "file" if len(remote_files) == 1 else "files"
self._logger.info(f"Queueing model install: {source} ({len(remote_files)} {files_string})")
files_string = "file" if len(remote_files) == 1 else "file"
self._logger.info(f"Queuing model install: {source} ({len(remote_files)} {files_string})")
self._logger.debug(f"remote_files={remote_files}")
self._download_queue.submit_multifile_download(multifile_job)
for model_file in remote_files:
url = model_file.url
path = root / model_file.path.relative_to(subfolder)
self._logger.debug(f"Downloading {url} => {path}")
install_job.total_bytes += model_file.size
assert hasattr(source, "access_token")
dest = tmpdir / path.parent
dest.mkdir(parents=True, exist_ok=True)
download_job = DownloadJob(
source=url,
dest=dest,
access_token=source.access_token,
)
self._download_cache[download_job.source] = install_job # matches a download job to an install job
install_job.download_parts.add(download_job)

# only start the jobs once install_job.download_parts is fully populated
for download_job in install_job.download_parts:
self._download_queue.submit_download_job(
download_job,
on_start=self._download_started_callback,
on_progress=self._download_progress_callback,
on_complete=self._download_complete_callback,
on_error=self._download_error_callback,
on_cancelled=self._download_cancelled_callback,
)

return install_job

def _stat_size(self, path: Path) -> int:
@@ -766,104 +768,87 @@ class ModelInstallService(ModelInstallServiceBase):
size += sum(self._stat_size(Path(root, x)) for x in files)
return size

def _multifile_download(
self,
remote_files: List[RemoteModelFile],
dest: Path,
subfolder: Optional[Path] = None,
access_token: Optional[str] = None,
submit_job: bool = True,
) -> MultiFileDownloadJob:
# HuggingFace repo subfolders are a little tricky. If the name of the model is "sdxl-turbo", and
# we are installing the "vae" subfolder, we do not want to create an additional folder level, such
# as "sdxl-turbo/vae", nor do we want to put the contents of the vae folder directly into "sdxl-turbo".
# So what we do is to synthesize a folder named "sdxl-turbo_vae" here.
if subfolder:
top = Path(remote_files[0].path.parts[0]) # e.g. "sdxl-turbo/"
path_to_remove = top / subfolder.parts[-1] # sdxl-turbo/vae/
path_to_add = Path(f"{top}_{subfolder}")
else:
path_to_remove = Path(".")
path_to_add = Path(".")

parts: List[RemoteModelFile] = []
for model_file in remote_files:
assert model_file.size is not None
parts.append(
RemoteModelFile(
url=model_file.url, # if a subfolder, then sdxl-turbo_vae/config.json
path=path_to_add / model_file.path.relative_to(path_to_remove),
)
)

return self._download_queue.multifile_download(
parts=parts,
dest=dest,
access_token=access_token,
submit_job=submit_job,
on_start=self._download_started_callback,
on_progress=self._download_progress_callback,
on_complete=self._download_complete_callback,
on_error=self._download_error_callback,
on_cancelled=self._download_cancelled_callback,
)
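The comment in _multifile_download() above describes the subfolder flattening; here is a worked example of the path arithmetic with illustrative values (a sketch, not part of the diff):

from pathlib import Path

top = Path("sdxl-turbo")
subfolder = Path("vae")
path_to_remove = top / subfolder.parts[-1]        # sdxl-turbo/vae
path_to_add = Path(f"{top}_{subfolder}")          # sdxl-turbo_vae

remote = Path("sdxl-turbo/vae/config.json")
print(path_to_add / remote.relative_to(path_to_remove))   # sdxl-turbo_vae/config.json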

# ------------------------------------------------------------------
# Callbacks are executed by the download queue in a separate thread
# ------------------------------------------------------------------
def _download_started_callback(self, download_job: MultiFileDownloadJob) -> None:
def _download_started_callback(self, download_job: DownloadJob) -> None:
self._logger.info(f"Model download started: {download_job.source}")
with self._lock:
if install_job := self._download_cache.get(download_job.id, None):
install_job.status = InstallStatus.DOWNLOADING
install_job = self._download_cache[download_job.source]
install_job.status = InstallStatus.DOWNLOADING

if install_job.local_path == install_job._install_tmpdir: # first time
assert download_job.download_path
install_job.local_path = download_job.download_path
install_job.download_parts = download_job.download_parts
install_job.bytes = sum(x.bytes for x in download_job.download_parts)
install_job.total_bytes = download_job.total_bytes
self._signal_job_download_started(install_job)
assert download_job.download_path
if install_job.local_path == install_job._install_tmpdir:
partial_path = download_job.download_path.relative_to(install_job._install_tmpdir)
dest_name = partial_path.parts[0]
install_job.local_path = install_job._install_tmpdir / dest_name

def _download_progress_callback(self, download_job: MultiFileDownloadJob) -> None:
# Update the total bytes count for remote sources.
if not install_job.total_bytes:
install_job.total_bytes = sum(x.total_bytes for x in install_job.download_parts)

def _download_progress_callback(self, download_job: DownloadJob) -> None:
with self._lock:
if install_job := self._download_cache.get(download_job.id, None):
if install_job.cancelled: # This catches the case in which the caller directly calls job.cancel()
self._download_queue.cancel_job(download_job)
else:
# update sizes
install_job.bytes = sum(x.bytes for x in download_job.download_parts)
install_job.total_bytes = sum(x.total_bytes for x in download_job.download_parts)
self._signal_job_downloading(install_job)
install_job = self._download_cache[download_job.source]
if install_job.cancelled: # This catches the case in which the caller directly calls job.cancel()
self._cancel_download_parts(install_job)
else:
# update sizes
install_job.bytes = sum(x.bytes for x in install_job.download_parts)
self._signal_job_downloading(install_job)

def _download_complete_callback(self, download_job: MultiFileDownloadJob) -> None:
def _download_complete_callback(self, download_job: DownloadJob) -> None:
self._logger.info(f"Model download complete: {download_job.source}")
with self._lock:
if install_job := self._download_cache.pop(download_job.id, None):
install_job = self._download_cache[download_job.source]

# are there any more active jobs left in this task?
if install_job.downloading and all(x.complete for x in install_job.download_parts):
self._signal_job_downloads_done(install_job)
self._put_in_queue(install_job) # this starts the installation and registration
self._put_in_queue(install_job)

# Let other threads know that the number of downloads has changed
self._downloads_changed_event.set()
# Let other threads know that the number of downloads has changed
self._download_cache.pop(download_job.source, None)
self._downloads_changed_event.set()

def _download_error_callback(self, download_job: MultiFileDownloadJob, excp: Optional[Exception] = None) -> None:
def _download_error_callback(self, download_job: DownloadJob, excp: Optional[Exception] = None) -> None:
with self._lock:
if install_job := self._download_cache.pop(download_job.id, None):
assert excp is not None
install_job.set_error(excp)
self._download_queue.cancel_job(download_job)
install_job = self._download_cache.pop(download_job.source, None)
assert install_job is not None
assert excp is not None
install_job.set_error(excp)
self._logger.error(
f"Cancelling {install_job.source} due to an error while downloading {download_job.source}: {str(excp)}"
)
self._cancel_download_parts(install_job)

# Let other threads know that the number of downloads has changed
self._downloads_changed_event.set()
# Let other threads know that the number of downloads has changed
self._downloads_changed_event.set()

def _download_cancelled_callback(self, download_job: MultiFileDownloadJob) -> None:
def _download_cancelled_callback(self, download_job: DownloadJob) -> None:
with self._lock:
if install_job := self._download_cache.pop(download_job.id, None):
self._downloads_changed_event.set()
# if install job has already registered an error, then do not replace its status with cancelled
if not install_job.errored:
install_job.cancel()
install_job = self._download_cache.pop(download_job.source, None)
if not install_job:
return
self._downloads_changed_event.set()
self._logger.warning(f"Model download canceled: {download_job.source}")
# if install job has already registered an error, then do not replace its status with cancelled
if not install_job.errored:
install_job.cancel()
self._cancel_download_parts(install_job)

# Let other threads know that the number of downloads has changed
self._downloads_changed_event.set()
# Let other threads know that the number of downloads has changed
self._downloads_changed_event.set()

def _cancel_download_parts(self, install_job: ModelInstallJob) -> None:
# on multipart downloads, _cancel_components() will get called repeatedly from the download callbacks
# do not lock here because it gets called within a locked context
for s in install_job.download_parts:
self._download_queue.cancel_job(s)

if all(x.in_terminal_state for x in install_job.download_parts):
# When all parts have reached their terminal state, we finalize the job to clean up the temporary directory and other resources
self._put_in_queue(install_job)

# ------------------------------------------------------------------------------------------------
# Internal methods that put events on the event bus
@@ -874,18 +859,8 @@ class ModelInstallService(ModelInstallServiceBase):
if self._event_bus:
self._event_bus.emit_model_install_started(job)

def _signal_job_download_started(self, job: ModelInstallJob) -> None:
if self._event_bus:
assert job._multifile_job is not None
assert job.bytes is not None
assert job.total_bytes is not None
self._event_bus.emit_model_install_download_started(job)

def _signal_job_downloading(self, job: ModelInstallJob) -> None:
if self._event_bus:
assert job._multifile_job is not None
assert job.bytes is not None
assert job.total_bytes is not None
self._event_bus.emit_model_install_download_progress(job)

def _signal_job_downloads_done(self, job: ModelInstallJob) -> None:
@@ -900,8 +875,6 @@ class ModelInstallService(ModelInstallServiceBase):
self._logger.info(f"Model install complete: {job.source}")
self._logger.debug(f"{job.local_path} registered key {job.config_out.key}")
if self._event_bus:
assert job.local_path is not None
assert job.config_out is not None
self._event_bus.emit_model_install_complete(job)

def _signal_job_errored(self, job: ModelInstallJob) -> None:
@@ -917,13 +890,7 @@ class ModelInstallService(ModelInstallServiceBase):
self._event_bus.emit_model_install_cancelled(job)

@staticmethod
def get_fetcher_from_url(url: str) -> Type[ModelMetadataFetchBase]:
"""
Return a metadata fetcher appropriate for provided url.

This used to be more useful, but the number of supported model
sources has been reduced to HuggingFace alone.
"""
def get_fetcher_from_url(url: str) -> ModelMetadataFetchBase:
if re.match(r"^https?://huggingface.co/[^/]+/[^/]+$", url.lower()):
return HuggingFaceMetadataFetch
raise ValueError(f"Unsupported model source: '{url}'")
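Behavior sketch for get_fetcher_from_url(): only bare HuggingFace repo URLs match the regex; anything else raises ValueError, which callers such as _import_from_url catch and treat as a plain file URL. The second URL below is hypothetical:

ModelInstallService.get_fetcher_from_url("https://huggingface.co/stabilityai/sdxl-turbo")   # -> HuggingFaceMetadataFetch
ModelInstallService.get_fetcher_from_url("https://example.com/models/1234")                 # raises ValueError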

@@ -2,11 +2,10 @@
"""Base class for model loader."""

from abc import ABC, abstractmethod
from pathlib import Path
from typing import Callable, Optional
from typing import Optional

from invokeai.backend.model_manager import AnyModel, AnyModelConfig, SubModelType
from invokeai.backend.model_manager.load import LoadedModel, LoadedModelWithoutConfig
from invokeai.backend.model_manager.load import LoadedModel
from invokeai.backend.model_manager.load.convert_cache import ModelConvertCacheBase
from invokeai.backend.model_manager.load.model_cache.model_cache_base import ModelCacheBase

@@ -32,26 +31,3 @@ class ModelLoadServiceBase(ABC):
@abstractmethod
def convert_cache(self) -> ModelConvertCacheBase:
"""Return the checkpoint convert cache used by this loader."""

@abstractmethod
def load_model_from_path(
self, model_path: Path, loader: Optional[Callable[[Path], AnyModel]] = None
) -> LoadedModelWithoutConfig:
"""
Load the model file or directory located at the indicated Path.

This will load an arbitrary model file into the RAM cache. If the optional loader
argument is provided, the loader will be invoked to load the model into
memory. Otherwise the method will call safetensors.torch.load_file() or
torch.load() as appropriate to the file suffix.

Be aware that this returns a LoadedModelWithoutConfig object, which is the same as
LoadedModel, but without the config attribute.

Args:
model_path: A pathlib.Path to a checkpoint-style models file
loader: A Callable that expects a Path and returns a Dict[str, Tensor]

Returns:
A LoadedModel object.
"""

@@ -1,26 +1,18 @@
# Copyright (c) 2024 Lincoln D. Stein and the InvokeAI Team
"""Implementation of model loader service."""

from pathlib import Path
from typing import Callable, Optional, Type

from picklescan.scanner import scan_file_path
from safetensors.torch import load_file as safetensors_load_file
from torch import load as torch_load
from typing import Optional, Type

from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.invoker import Invoker
from invokeai.backend.model_manager import AnyModel, AnyModelConfig, SubModelType
from invokeai.backend.model_manager.load import (
LoadedModel,
LoadedModelWithoutConfig,
ModelLoaderRegistry,
ModelLoaderRegistryBase,
)
from invokeai.backend.model_manager.load.convert_cache import ModelConvertCacheBase
from invokeai.backend.model_manager.load.model_cache.model_cache_base import ModelCacheBase
from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import GenericDiffusersLoader
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.logging import InvokeAILogger

from .model_load_base import ModelLoadServiceBase
@@ -83,41 +75,3 @@ class ModelLoadService(ModelLoadServiceBase):
self._invoker.services.events.emit_model_load_complete(model_config, submodel_type)

return loaded_model

def load_model_from_path(
self, model_path: Path, loader: Optional[Callable[[Path], AnyModel]] = None
) -> LoadedModelWithoutConfig:
cache_key = str(model_path)
ram_cache = self.ram_cache
try:
return LoadedModelWithoutConfig(_locker=ram_cache.get(key=cache_key))
except IndexError:
pass

def torch_load_file(checkpoint: Path) -> AnyModel:
scan_result = scan_file_path(checkpoint)
if scan_result.infected_files != 0:
raise Exception("The model at {checkpoint} is potentially infected by malware. Aborting load.")
|
||||
result = torch_load(checkpoint, map_location="cpu")
return result

def diffusers_load_directory(directory: Path) -> AnyModel:
load_class = GenericDiffusersLoader(
app_config=self._app_config,
logger=self._logger,
ram_cache=self._ram_cache,
convert_cache=self.convert_cache,
).get_hf_load_class(directory)
return load_class.from_pretrained(model_path, torch_dtype=TorchDevice.choose_torch_dtype())

loader = loader or (
diffusers_load_directory
if model_path.is_dir()
else torch_load_file
if model_path.suffix.endswith((".ckpt", ".pt", ".pth", ".bin"))
else lambda path: safetensors_load_file(path, device="cpu")
)
assert loader is not None
raw_model = loader(model_path)
ram_cache.put(key=cache_key, model=raw_model)
return LoadedModelWithoutConfig(_locker=ram_cache.get(key=cache_key))
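A hedged usage sketch of the loader override in load_model_from_path(); the service variable and model path are illustrative, and the suffix-based fallback shown above applies when no loader is given:

from pathlib import Path
from safetensors.torch import load_file as safetensors_load_file

def my_loader(path: Path):
    # Force CPU loading; the service caches the result under str(path).
    return safetensors_load_file(path, device="cpu")

loaded = model_load_service.load_model_from_path(Path("/models/example.safetensors"), loader=my_loader)
# `loaded` is a LoadedModelWithoutConfig: it wraps the RAM-cache locker and has no `config` attribute.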

@@ -12,13 +12,15 @@ from pydantic import BaseModel, Field

from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.app.util.model_exclude_null import BaseModelExcludeNull
from invokeai.backend.model_manager.config import (
from invokeai.backend.model_manager import (
AnyModelConfig,
BaseModelType,
ControlAdapterDefaultSettings,
MainModelDefaultSettings,
ModelFormat,
ModelType,
)
from invokeai.backend.model_manager.config import (
ControlAdapterDefaultSettings,
MainModelDefaultSettings,
ModelVariantType,
SchedulerPredictionType,
)

@@ -37,14 +37,10 @@ class SqliteSessionQueue(SessionQueueBase):
def start(self, invoker: Invoker) -> None:
self.__invoker = invoker
self._set_in_progress_to_canceled()
if self.__invoker.services.configuration.clear_queue_on_startup:
clear_result = self.clear(DEFAULT_QUEUE_ID)
if clear_result.deleted > 0:
self.__invoker.services.logger.info(f"Cleared all {clear_result.deleted} queue items")
else:
prune_result = self.prune(DEFAULT_QUEUE_ID)
if prune_result.deleted > 0:
self.__invoker.services.logger.info(f"Pruned {prune_result.deleted} finished queue items")
prune_result = self.prune(DEFAULT_QUEUE_ID)

if prune_result.deleted > 0:
self.__invoker.services.logger.info(f"Pruned {prune_result.deleted} finished queue items")

def __init__(self, db: SqliteDatabase) -> None:
super().__init__()

@@ -3,7 +3,6 @@ from pathlib import Path
from typing import TYPE_CHECKING, Callable, Optional, Union

from PIL.Image import Image
from pydantic.networks import AnyHttpUrl
from torch import Tensor

from invokeai.app.invocations.constants import IMAGE_MODES
@@ -15,15 +14,8 @@ from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.invocation_services import InvocationServices
from invokeai.app.services.model_records.model_records_base import UnknownModelException
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from invokeai.backend.model_manager.config import (
AnyModel,
AnyModelConfig,
BaseModelType,
ModelFormat,
ModelType,
SubModelType,
)
from invokeai.backend.model_manager.load.load_base import LoadedModel, LoadedModelWithoutConfig
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelFormat, ModelType, SubModelType
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData

@@ -328,10 +320,8 @@ class ConditioningInterface(InvocationContextInterface):


class ModelsInterface(InvocationContextInterface):
"""Common API for loading, downloading and managing models."""

def exists(self, identifier: Union[str, "ModelIdentifierField"]) -> bool:
"""Check if a model exists.
"""Checks if a model exists.

Args:
identifier: The key or ModelField representing the model.
@@ -341,13 +331,13 @@ class ModelsInterface(InvocationContextInterface):
"""
if isinstance(identifier, str):
return self._services.model_manager.store.exists(identifier)
else:
return self._services.model_manager.store.exists(identifier.key)

return self._services.model_manager.store.exists(identifier.key)

def load(
self, identifier: Union[str, "ModelIdentifierField"], submodel_type: Optional[SubModelType] = None
) -> LoadedModel:
"""Load a model.
"""Loads a model.

Args:
identifier: The key or ModelField representing the model.
@@ -371,7 +361,7 @@ class ModelsInterface(InvocationContextInterface):
def load_by_attrs(
self, name: str, base: BaseModelType, type: ModelType, submodel_type: Optional[SubModelType] = None
) -> LoadedModel:
"""Load a model by its attributes.
"""Loads a model by its attributes.

Args:
name: Name of the model.
@@ -394,7 +384,7 @@ class ModelsInterface(InvocationContextInterface):
return self._services.model_manager.load.load_model(configs[0], submodel_type)

def get_config(self, identifier: Union[str, "ModelIdentifierField"]) -> AnyModelConfig:
"""Get a model's config.
"""Gets a model's config.

Args:
identifier: The key or ModelField representing the model.
@@ -404,11 +394,11 @@ class ModelsInterface(InvocationContextInterface):
"""
if isinstance(identifier, str):
return self._services.model_manager.store.get_model(identifier)
else:
return self._services.model_manager.store.get_model(identifier.key)

return self._services.model_manager.store.get_model(identifier.key)

def search_by_path(self, path: Path) -> list[AnyModelConfig]:
"""Search for models by path.
"""Searches for models by path.

Args:
path: The path to search for.
@@ -425,7 +415,7 @@ class ModelsInterface(InvocationContextInterface):
type: Optional[ModelType] = None,
format: Optional[ModelFormat] = None,
) -> list[AnyModelConfig]:
"""Search for models by attributes.
"""Searches for models by attributes.

Args:
name: The name to search for (exact match).
@@ -444,72 +434,6 @@ class ModelsInterface(InvocationContextInterface):
model_format=format,
)

def download_and_cache_model(
self,
source: str | AnyHttpUrl,
) -> Path:
"""
Download the model file located at source to the models cache and return its Path.

This can be used to single-file install models and other resources of arbitrary types
which should not get registered with the database. If the model is already
installed, the cached path will be returned. Otherwise it will be downloaded.

Args:
source: A URL that points to the model, or a huggingface repo_id.

Returns:
Path to the downloaded model
"""
return self._services.model_manager.install.download_and_cache_model(source=source)
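Inside a node, this surfaces through the invocation context. A minimal sketch (node boilerplate omitted; the URL is hypothetical):

# Illustrative use within an invocation's invoke() method:
weights_path = context.models.download_and_cache_model(
    "https://example.com/releases/v1/some_weights.pth"  # hypothetical URL
)
# weights_path now points into the models cache; the file is not registered in the database.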

def load_local_model(
self,
model_path: Path,
loader: Optional[Callable[[Path], AnyModel]] = None,
) -> LoadedModelWithoutConfig:
"""
Load the model file located at the indicated path.

If a loader callable is provided, it will be invoked to load the model. Otherwise,
`safetensors.torch.load_file()` or `torch.load()` will be called to load the model.

Be aware that the LoadedModelWithoutConfig object has no `config` attribute

Args:
path: A model Path
loader: A Callable that expects a Path and returns a dict[str|int, Any]

Returns:
A LoadedModelWithoutConfig object.
"""
return self._services.model_manager.load.load_model_from_path(model_path=model_path, loader=loader)

def load_remote_model(
self,
source: str | AnyHttpUrl,
loader: Optional[Callable[[Path], AnyModel]] = None,
) -> LoadedModelWithoutConfig:
"""
Download, cache, and load the model file located at the indicated URL or repo_id.

If the model is already downloaded, it will be loaded from the cache.

If a loader callable is provided, it will be invoked to load the model. Otherwise,
`safetensors.torch.load_file()` or `torch.load()` will be called to load the model.

Be aware that the LoadedModelWithoutConfig object has no `config` attribute

Args:
source: A URL or huggingface repo_id.
loader: A Callable that expects a Path and returns a dict[str|int, Any]

Returns:
A LoadedModelWithoutConfig object.
"""
model_path = self._services.model_manager.install.download_and_cache_model(source=str(source))
return self._services.model_manager.load.load_model_from_path(model_path=model_path, loader=loader)
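A sketch of the one-call variant defined above, which chains the download/cache step with load_model_from_path (URL hypothetical):

loaded = context.models.load_remote_model(source="https://example.com/upscaler.safetensors")
# Repeat calls resolve from the download cache instead of hitting the network again.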


class ConfigInterface(InvocationContextInterface):
def get(self) -> InvokeAIAppConfig:

@@ -13,7 +13,6 @@ from invokeai.app.services.shared.sqlite_migrator.migrations.migration_7 import
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_8 import build_migration_8
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_9 import build_migration_9
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_10 import build_migration_10
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_11 import build_migration_11
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import SqliteMigrator


@@ -44,7 +43,6 @@ def init_db(config: InvokeAIAppConfig, logger: Logger, image_files: ImageFileSto
migrator.register_migration(build_migration_8(app_config=config))
migrator.register_migration(build_migration_9())
migrator.register_migration(build_migration_10())
migrator.register_migration(build_migration_11(app_config=config, logger=logger))
migrator.run_migrations()

return db

@@ -1,75 +0,0 @@
import shutil
import sqlite3
from logging import Logger

from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration

LEGACY_CORE_MODELS = [
# OpenPose
"any/annotators/dwpose/yolox_l.onnx",
"any/annotators/dwpose/dw-ll_ucoco_384.onnx",
# DepthAnything
"any/annotators/depth_anything/depth_anything_vitl14.pth",
"any/annotators/depth_anything/depth_anything_vitb14.pth",
"any/annotators/depth_anything/depth_anything_vits14.pth",
# Lama inpaint
"core/misc/lama/lama.pt",
# RealESRGAN upscale
"core/upscaling/realesrgan/RealESRGAN_x4plus.pth",
"core/upscaling/realesrgan/RealESRGAN_x4plus_anime_6B.pth",
"core/upscaling/realesrgan/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth",
"core/upscaling/realesrgan/RealESRGAN_x2plus.pth",
]


class Migration11Callback:
def __init__(self, app_config: InvokeAIAppConfig, logger: Logger) -> None:
self._app_config = app_config
self._logger = logger

def __call__(self, cursor: sqlite3.Cursor) -> None:
self._remove_convert_cache()
self._remove_downloaded_models()
self._remove_unused_core_models()

def _remove_convert_cache(self) -> None:
"""Rename models/.cache to models/.convert_cache."""
self._logger.info("Removing .cache directory. Converted models will now be cached in .convert_cache.")
legacy_convert_path = self._app_config.root_path / "models" / ".cache"
shutil.rmtree(legacy_convert_path, ignore_errors=True)

def _remove_downloaded_models(self) -> None:
"""Remove models from their old locations; they will re-download when needed."""
self._logger.info(
"Removing legacy just-in-time models. Downloaded models will now be cached in .download_cache."
)
for model_path in LEGACY_CORE_MODELS:
legacy_dest_path = self._app_config.models_path / model_path
legacy_dest_path.unlink(missing_ok=True)

def _remove_unused_core_models(self) -> None:
"""Remove unused core models and their directories."""
self._logger.info("Removing defunct core models.")
for dir in ["face_restoration", "misc", "upscaling"]:
path_to_remove = self._app_config.models_path / "core" / dir
shutil.rmtree(path_to_remove, ignore_errors=True)
shutil.rmtree(self._app_config.models_path / "any" / "annotators", ignore_errors=True)


def build_migration_11(app_config: InvokeAIAppConfig, logger: Logger) -> Migration:
"""
Build the migration from database version 10 to 11.

This migration does the following:
- Moves "core" models previously downloaded with download_with_progress_bar() into new
"models/.download_cache" directory.
- Renames "models/.cache" to "models/.convert_cache".
"""
migration_11 = Migration(
from_version=10,
to_version=11,
callback=Migration11Callback(app_config=app_config, logger=logger),
)

return migration_11
@@ -289,7 +289,7 @@ def prepare_control_image(
width: int,
height: int,
num_channels: int = 3,
device: str = "cuda",
device: str | torch.device = "cuda",
dtype: torch.dtype = torch.float16,
control_mode: CONTROLNET_MODE_VALUES = "balanced",
resize_mode: CONTROLNET_RESIZE_VALUES = "just_resize_simple",
@@ -304,7 +304,7 @@ def prepare_control_image(
num_channels (int, optional): The target number of image channels. This is achieved by converting the input
image to RGB, then naively taking the first `num_channels` channels. The primary use case is converting a
RGB image to a single-channel grayscale image. Raises if `num_channels` cannot be achieved. Defaults to 3.
device (str, optional): The target device for the output image. Defaults to "cuda".
device (str | torch.Device, optional): The target device for the output image. Defaults to "cuda".
dtype (_type_, optional): The dtype for the output image. Defaults to torch.float16.
do_classifier_free_guidance (bool, optional): If True, repeat the output image along the batch dimension.
Defaults to True.

invokeai/app/util/download_with_progress.py (new file, 51 lines)
@@ -0,0 +1,51 @@
from pathlib import Path
from urllib import request

from tqdm import tqdm

from invokeai.backend.util.logging import InvokeAILogger


class ProgressBar:
"""Simple progress bar for urllib.request.urlretrieve using tqdm."""

def __init__(self, model_name: str = "file"):
self.pbar = None
self.name = model_name

def __call__(self, block_num: int, block_size: int, total_size: int):
if not self.pbar:
self.pbar = tqdm(
desc=self.name,
initial=0,
unit="iB",
unit_scale=True,
unit_divisor=1000,
total=total_size,
)
self.pbar.update(block_size)


def download_with_progress_bar(name: str, url: str, dest_path: Path) -> bool:
"""Download a file from a URL to a destination path, with a progress bar.
If the file already exists, it will not be downloaded again.

Exceptions are not caught.

Args:
name (str): Name of the file being downloaded.
url (str): URL to download the file from.
dest_path (Path): Destination path to save the file to.

Returns:
bool: True if the file was downloaded, False if it already existed.
"""
if dest_path.exists():
return False # already downloaded

InvokeAILogger.get_logger().info(f"Downloading {name}...")

dest_path.parent.mkdir(parents=True, exist_ok=True)
request.urlretrieve(url, dest_path, ProgressBar(name))

return True
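Usage sketch for the helper above; the destination path is illustrative, while the URL appears in DWPOSE_MODELS later in this diff:

from pathlib import Path

downloaded = download_with_progress_bar(
    name="yolox_l.onnx",
    url="https://huggingface.co/yzd-v/DWPose/resolve/main/yolox_l.onnx?download=true",
    dest_path=Path("models/any/annotators/dwpose/yolox_l.onnx"),
)
# Returns False (and skips the network entirely) when dest_path already exists.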
@@ -1,5 +1,5 @@
from pathlib import Path
from typing import Literal
import pathlib
from typing import Literal, Union

import cv2
import numpy as np
@@ -10,17 +10,28 @@ from PIL import Image
from torchvision.transforms import Compose

from invokeai.app.services.config.config_default import get_config
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.backend.image_util.depth_anything.model.dpt import DPT_DINOv2
from invokeai.backend.image_util.depth_anything.utilities.util import NormalizeImage, PrepareForNet, Resize
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.logging import InvokeAILogger

config = get_config()
logger = InvokeAILogger.get_logger(config=config)

DEPTH_ANYTHING_MODELS = {
"large": "https://huggingface.co/spaces/LiheYoung/Depth-Anything/resolve/main/checkpoints/depth_anything_vitl14.pth?download=true",
"base": "https://huggingface.co/spaces/LiheYoung/Depth-Anything/resolve/main/checkpoints/depth_anything_vitb14.pth?download=true",
"small": "https://huggingface.co/spaces/LiheYoung/Depth-Anything/resolve/main/checkpoints/depth_anything_vits14.pth?download=true",
"large": {
"url": "https://huggingface.co/spaces/LiheYoung/Depth-Anything/resolve/main/checkpoints/depth_anything_vitl14.pth?download=true",
"local": "any/annotators/depth_anything/depth_anything_vitl14.pth",
},
"base": {
"url": "https://huggingface.co/spaces/LiheYoung/Depth-Anything/resolve/main/checkpoints/depth_anything_vitb14.pth?download=true",
"local": "any/annotators/depth_anything/depth_anything_vitb14.pth",
},
"small": {
"url": "https://huggingface.co/spaces/LiheYoung/Depth-Anything/resolve/main/checkpoints/depth_anything_vits14.pth?download=true",
"local": "any/annotators/depth_anything/depth_anything_vits14.pth",
},
}


@@ -42,27 +53,36 @@ transform = Compose(


class DepthAnythingDetector:
def __init__(self, model: DPT_DINOv2, device: torch.device) -> None:
self.model = model
self.device = device
def __init__(self) -> None:
self.model = None
self.model_size: Union[Literal["large", "base", "small"], None] = None
self.device = TorchDevice.choose_torch_device()

@staticmethod
def load_model(
model_path: Path, device: torch.device, model_size: Literal["large", "base", "small"] = "small"
) -> DPT_DINOv2:
match model_size:
case "small":
model = DPT_DINOv2(encoder="vits", features=64, out_channels=[48, 96, 192, 384])
case "base":
model = DPT_DINOv2(encoder="vitb", features=128, out_channels=[96, 192, 384, 768])
case "large":
model = DPT_DINOv2(encoder="vitl", features=256, out_channels=[256, 512, 1024, 1024])
def load_model(self, model_size: Literal["large", "base", "small"] = "small"):
DEPTH_ANYTHING_MODEL_PATH = config.models_path / DEPTH_ANYTHING_MODELS[model_size]["local"]
download_with_progress_bar(
pathlib.Path(DEPTH_ANYTHING_MODELS[model_size]["url"]).name,
DEPTH_ANYTHING_MODELS[model_size]["url"],
DEPTH_ANYTHING_MODEL_PATH,
)

model.load_state_dict(torch.load(model_path.as_posix(), map_location="cpu"))
model.eval()
if not self.model or model_size != self.model_size:
del self.model
self.model_size = model_size

model.to(device)
return model
match self.model_size:
case "small":
self.model = DPT_DINOv2(encoder="vits", features=64, out_channels=[48, 96, 192, 384])
case "base":
self.model = DPT_DINOv2(encoder="vitb", features=128, out_channels=[96, 192, 384, 768])
case "large":
self.model = DPT_DINOv2(encoder="vitl", features=256, out_channels=[256, 512, 1024, 1024])

self.model.load_state_dict(torch.load(DEPTH_ANYTHING_MODEL_PATH.as_posix(), map_location="cpu"))
self.model.eval()

self.model.to(self.device)
return self.model
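With the lazy-loading rewrite shown above, a typical call sequence would be the following sketch (input file name illustrative):

from PIL import Image

detector = DepthAnythingDetector()
detector.load_model(model_size="small")   # downloads the checkpoint into models_path on first use
depth_map = detector(Image.open("input.png"), resolution=512)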

def __call__(self, image: Image.Image, resolution: int = 512) -> Image.Image:
if not self.model:

@@ -1,53 +1,30 @@
from pathlib import Path
from typing import Dict

import numpy as np
import torch
from controlnet_aux.util import resize_image
from PIL import Image

from invokeai.backend.image_util.dw_openpose.utils import NDArrayInt, draw_bodypose, draw_facepose, draw_handpose
from invokeai.backend.image_util.dw_openpose.utils import draw_bodypose, draw_facepose, draw_handpose
from invokeai.backend.image_util.dw_openpose.wholebody import Wholebody

DWPOSE_MODELS = {
"yolox_l.onnx": "https://huggingface.co/yzd-v/DWPose/resolve/main/yolox_l.onnx?download=true",
"dw-ll_ucoco_384.onnx": "https://huggingface.co/yzd-v/DWPose/resolve/main/dw-ll_ucoco_384.onnx?download=true",
}


def draw_pose(
pose: Dict[str, NDArrayInt | Dict[str, NDArrayInt]],
H: int,
W: int,
draw_face: bool = True,
draw_body: bool = True,
draw_hands: bool = True,
resolution: int = 512,
) -> Image.Image:
def draw_pose(pose, H, W, draw_face=True, draw_body=True, draw_hands=True, resolution=512):
bodies = pose["bodies"]
faces = pose["faces"]
hands = pose["hands"]

assert isinstance(bodies, dict)
candidate = bodies["candidate"]

assert isinstance(bodies, dict)
subset = bodies["subset"]

canvas = np.zeros(shape=(H, W, 3), dtype=np.uint8)

if draw_body:
canvas = draw_bodypose(canvas, candidate, subset)

if draw_hands:
assert isinstance(hands, np.ndarray)
canvas = draw_handpose(canvas, hands)

if draw_face:
assert isinstance(hands, np.ndarray)
canvas = draw_facepose(canvas, faces) # type: ignore
canvas = draw_facepose(canvas, faces)

dwpose_image: Image.Image = resize_image(
dwpose_image = resize_image(
canvas,
resolution,
)
@@ -62,16 +39,11 @@ class DWOpenposeDetector:
Credits: https://github.com/IDEA-Research/DWPose
"""

def __init__(self, onnx_det: Path, onnx_pose: Path) -> None:
self.pose_estimation = Wholebody(onnx_det=onnx_det, onnx_pose=onnx_pose)
def __init__(self) -> None:
self.pose_estimation = Wholebody()

def __call__(
self,
image: Image.Image,
draw_face: bool = False,
draw_body: bool = True,
draw_hands: bool = False,
resolution: int = 512,
self, image: Image.Image, draw_face=False, draw_body=True, draw_hands=False, resolution=512
) -> Image.Image:
np_image = np.array(image)
H, W, C = np_image.shape
@@ -107,6 +79,3 @@ class DWOpenposeDetector:
return draw_pose(
pose, H, W, draw_face=draw_face, draw_hands=draw_hands, draw_body=draw_body, resolution=resolution
)


__all__ = ["DWPOSE_MODELS", "DWOpenposeDetector"]
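Usage sketch for the revised no-argument constructor; Wholebody now locates and downloads its own ONNX models, so callers only supply an image (file name illustrative):

from PIL import Image

detector = DWOpenposeDetector()
pose_image = detector(Image.open("person.png"), draw_face=False, draw_body=True, draw_hands=False)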

@@ -5,13 +5,11 @@ import math
import cv2
import matplotlib
import numpy as np
import numpy.typing as npt

eps = 0.01
NDArrayInt = npt.NDArray[np.uint8]


def draw_bodypose(canvas: NDArrayInt, candidate: NDArrayInt, subset: NDArrayInt) -> NDArrayInt:
def draw_bodypose(canvas, candidate, subset):
H, W, C = canvas.shape
candidate = np.array(candidate)
subset = np.array(subset)
@@ -90,7 +88,7 @@ def draw_bodypose(canvas: NDArrayInt, candidate: NDArrayInt, subset: NDArrayInt)
return canvas


def draw_handpose(canvas: NDArrayInt, all_hand_peaks: NDArrayInt) -> NDArrayInt:
def draw_handpose(canvas, all_hand_peaks):
H, W, C = canvas.shape

edges = [
@@ -144,7 +142,7 @@ def draw_handpose(canvas: NDArrayInt, all_hand_peaks: NDArrayInt) -> NDArrayInt:
return canvas


def draw_facepose(canvas: NDArrayInt, all_lmks: NDArrayInt) -> NDArrayInt:
def draw_facepose(canvas, all_lmks):
H, W, C = canvas.shape
for lmks in all_lmks:
lmks = np.array(lmks)

@@ -2,26 +2,47 @@
# Modified pathing to suit Invoke


from pathlib import Path

import numpy as np
import onnxruntime as ort

from invokeai.app.services.config.config_default import get_config
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.backend.util.devices import TorchDevice

from .onnxdet import inference_detector
from .onnxpose import inference_pose

DWPOSE_MODELS = {
"yolox_l.onnx": {
"local": "any/annotators/dwpose/yolox_l.onnx",
"url": "https://huggingface.co/yzd-v/DWPose/resolve/main/yolox_l.onnx?download=true",
},
"dw-ll_ucoco_384.onnx": {
"local": "any/annotators/dwpose/dw-ll_ucoco_384.onnx",
"url": "https://huggingface.co/yzd-v/DWPose/resolve/main/dw-ll_ucoco_384.onnx?download=true",
},
}

config = get_config()


class Wholebody:
def __init__(self, onnx_det: Path, onnx_pose: Path):
def __init__(self):
device = TorchDevice.choose_torch_device()

providers = ["CUDAExecutionProvider"] if device.type == "cuda" else ["CPUExecutionProvider"]

DET_MODEL_PATH = config.models_path / DWPOSE_MODELS["yolox_l.onnx"]["local"]
download_with_progress_bar("yolox_l.onnx", DWPOSE_MODELS["yolox_l.onnx"]["url"], DET_MODEL_PATH)

POSE_MODEL_PATH = config.models_path / DWPOSE_MODELS["dw-ll_ucoco_384.onnx"]["local"]
download_with_progress_bar(
"dw-ll_ucoco_384.onnx", DWPOSE_MODELS["dw-ll_ucoco_384.onnx"]["url"], POSE_MODEL_PATH
)

onnx_det = DET_MODEL_PATH
onnx_pose = POSE_MODEL_PATH

self.session_det = ort.InferenceSession(path_or_bytes=onnx_det, providers=providers)
self.session_pose = ort.InferenceSession(path_or_bytes=onnx_pose, providers=providers)

@@ -1,4 +1,4 @@
from pathlib import Path
import gc
from typing import Any

import numpy as np
@@ -6,7 +6,9 @@ import torch
from PIL import Image

import invokeai.backend.util.logging as logger
from invokeai.backend.model_manager.config import AnyModel
from invokeai.app.services.config.config_default import get_config
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.backend.util.devices import TorchDevice


def norm_img(np_img):
@@ -17,11 +19,28 @@ def norm_img(np_img):
return np_img


class LaMA:
def __init__(self, model: AnyModel):
self._model = model
def load_jit_model(url_or_path, device):
model_path = url_or_path
logger.info(f"Loading model from: {model_path}")
model = torch.jit.load(model_path, map_location="cpu").to(device)
model.eval()
return model


class LaMA:
def __call__(self, input_image: Image.Image, *args: Any, **kwds: Any) -> Any:
device = TorchDevice.choose_torch_device()
model_location = get_config().models_path / "core/misc/lama/lama.pt"

if not model_location.exists():
download_with_progress_bar(
name="LaMa Inpainting Model",
url="https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt",
dest_path=model_location,
)

model = load_jit_model(model_location, device)

image = np.asarray(input_image.convert("RGB"))
image = norm_img(image)

@@ -29,25 +48,20 @@ class LaMA:
mask = np.asarray(mask)
mask = np.invert(mask)
mask = norm_img(mask)
mask = (mask > 0) * 1

device = next(self._model.buffers()).device
mask = (mask > 0) * 1
image = torch.from_numpy(image).unsqueeze(0).to(device)
mask = torch.from_numpy(mask).unsqueeze(0).to(device)

with torch.inference_mode():
infilled_image = self._model(image, mask)
infilled_image = model(image, mask)

infilled_image = infilled_image[0].permute(1, 2, 0).detach().cpu().numpy()
infilled_image = np.clip(infilled_image * 255, 0, 255).astype("uint8")
infilled_image = Image.fromarray(infilled_image)

return infilled_image
del model
gc.collect()
torch.cuda.empty_cache()

@staticmethod
def load_jit_model(url_or_path: str | Path, device: torch.device | str = "cpu") -> torch.nn.Module:
model_path = url_or_path
logger.info(f"Loading model from: {model_path}")
model: torch.nn.Module = torch.jit.load(model_path, map_location="cpu").to(device) # type: ignore
model.eval()
return model
return infilled_image

@@ -1,5 +1,6 @@
import math
from enum import Enum
from pathlib import Path
from typing import Any, Optional

import cv2
@@ -10,7 +11,6 @@ from cv2.typing import MatLike
from tqdm import tqdm

from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
from invokeai.backend.model_manager.config import AnyModel
from invokeai.backend.util.devices import TorchDevice

"""
@@ -52,7 +52,7 @@ class RealESRGAN:
    def __init__(
        self,
        scale: int,
        loadnet: AnyModel,
        model_path: Path,
        model: RRDBNet,
        tile: int = 0,
        tile_pad: int = 10,
@@ -67,6 +67,8 @@ class RealESRGAN:
        self.half = half
        self.device = TorchDevice.choose_torch_device()

        loadnet = torch.load(model_path, map_location=torch.device("cpu"))

        # prefer to use params_ema
        if "params_ema" in loadnet:
            keyname = "params_ema"

@@ -125,16 +125,13 @@ class IPAdapter(RawModel):
            self.device, dtype=self.dtype
        )

    def to(
        self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None, non_blocking: bool = False
    ):
        if device is not None:
            self.device = device
    def to(self, device: torch.device, dtype: Optional[torch.dtype] = None):
        self.device = device
        if dtype is not None:
            self.dtype = dtype

        self._image_proj_model.to(device=self.device, dtype=self.dtype, non_blocking=non_blocking)
        self.attn_weights.to(device=self.device, dtype=self.dtype, non_blocking=non_blocking)
        self._image_proj_model.to(device=self.device, dtype=self.dtype)
        self.attn_weights.to(device=self.device, dtype=self.dtype)

    def calc_size(self):
        # workaround for circular import

@@ -61,10 +61,9 @@ class LoRALayerBase:
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
        if self.bias is not None:
            self.bias = self.bias.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.bias = self.bias.to(device=device, dtype=dtype)


    # TODO: find and debug lora/locon with bias
@@ -110,15 +109,14 @@ class LoRALayer(LoRALayerBase):
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
        super().to(device=device, dtype=dtype, non_blocking=non_blocking)
        super().to(device=device, dtype=dtype)

        self.up = self.up.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.down = self.down.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.up = self.up.to(device=device, dtype=dtype)
        self.down = self.down.to(device=device, dtype=dtype)

        if self.mid is not None:
            self.mid = self.mid.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.mid = self.mid.to(device=device, dtype=dtype)


class LoHALayer(LoRALayerBase):
@@ -171,19 +169,18 @@ class LoHALayer(LoRALayerBase):
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
        super().to(device=device, dtype=dtype)

        self.w1_a = self.w1_a.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.w1_b = self.w1_b.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.w1_a = self.w1_a.to(device=device, dtype=dtype)
        self.w1_b = self.w1_b.to(device=device, dtype=dtype)
        if self.t1 is not None:
            self.t1 = self.t1.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.t1 = self.t1.to(device=device, dtype=dtype)

        self.w2_a = self.w2_a.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.w2_b = self.w2_b.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.w2_a = self.w2_a.to(device=device, dtype=dtype)
        self.w2_b = self.w2_b.to(device=device, dtype=dtype)
        if self.t2 is not None:
            self.t2 = self.t2.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.t2 = self.t2.to(device=device, dtype=dtype)


class LoKRLayer(LoRALayerBase):
@@ -268,7 +265,6 @@ class LoKRLayer(LoRALayerBase):
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
        super().to(device=device, dtype=dtype)

@@ -277,19 +273,19 @@ class LoKRLayer(LoRALayerBase):
        else:
            assert self.w1_a is not None
            assert self.w1_b is not None
            self.w1_a = self.w1_a.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.w1_b = self.w1_b.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.w1_a = self.w1_a.to(device=device, dtype=dtype)
            self.w1_b = self.w1_b.to(device=device, dtype=dtype)

        if self.w2 is not None:
            self.w2 = self.w2.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.w2 = self.w2.to(device=device, dtype=dtype)
        else:
            assert self.w2_a is not None
            assert self.w2_b is not None
            self.w2_a = self.w2_a.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.w2_b = self.w2_b.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.w2_a = self.w2_a.to(device=device, dtype=dtype)
            self.w2_b = self.w2_b.to(device=device, dtype=dtype)

        if self.t2 is not None:
            self.t2 = self.t2.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.t2 = self.t2.to(device=device, dtype=dtype)


class FullLayer(LoRALayerBase):
@@ -323,11 +319,10 @@ class FullLayer(LoRALayerBase):
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
        super().to(device=device, dtype=dtype)

        self.weight = self.weight.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.weight = self.weight.to(device=device, dtype=dtype)


class IA3Layer(LoRALayerBase):
@@ -363,12 +358,11 @@ class IA3Layer(LoRALayerBase):
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ):
        super().to(device=device, dtype=dtype)

        self.weight = self.weight.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.on_input = self.on_input.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.weight = self.weight.to(device=device, dtype=dtype)
        self.on_input = self.on_input.to(device=device, dtype=dtype)


AnyLoRALayer = Union[LoRALayer, LoHALayer, LoKRLayer, FullLayer, IA3Layer]
@@ -394,11 +388,10 @@ class LoRAModelRaw(RawModel): # (torch.nn.Module):
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
        # TODO: try revert if exception?
        for _key, layer in self.layers.items():
            layer.to(device=device, dtype=dtype, non_blocking=non_blocking)
            layer.to(device=device, dtype=dtype)

    def calc_size(self) -> int:
        model_size = 0
@@ -521,7 +514,7 @@ class LoRAModelRaw(RawModel): # (torch.nn.Module):
            # lower memory consumption by removing already parsed layer values
            state_dict[layer_key].clear()

            layer.to(device=device, dtype=dtype, non_blocking=True)
            layer.to(device=device, dtype=dtype)
            model.layers[layer_key] = layer

        return model
@@ -1,24 +0,0 @@
import json
from base64 import b64decode


def validate_hash(hash: str):
    if ":" not in hash:
        return
    for enc_hash in hashes:
        alg, hash_ = hash.split(":")
        if alg == "blake3":
            alg = "blake3_single"
        map = json.loads(b64decode(enc_hash))
        if alg in map:
            if hash_ == map[alg]:
                raise Exception("Unrecoverable Model Error")


hashes: list[str] = [
"eyJibGFrZTNfbXVsdGkiOiI3Yjc5ODZmM2QyNTk3MDZiMjVhZDRhM2NmNGM2MTcyNGNhZmQ0Yjc4NjI4MjIwNjMyZGU4NjVlM2UxNDEyMTVlIiwiYmxha2UzX3NpbmdsZSI6IjdiNzk4NmYzZDI1OTcwNmIyNWFkNGEzY2Y0YzYxNzI0Y2FmZDRiNzg2MjgyMjA2MzJkZTg2NWUzZTE0MTIxNWUiLCJyYW5kb20iOiJhNDQxYjE1ZmU5YTNjZjU2NjYxMTkwYTBiOTNiOWRlYzdkMDQxMjcyODhjYzg3MjUwOTY3Y2YzYjUyODk0ZDExIiwibWQ1IjoiNzdlZmU5MzRhZGQ3YmU5Njc3NmJkODM3NWJhZDQxN2QiLCJzaGExIjoiYmM2YzYxYzgwNDgyMTE2ZTY2ZGQyNTYwNjRkYTgxYjFlY2U4NzMzOCIsInNoYTIyNCI6IjgzNzNlZGM4ZTg4Y2UxMTljODdlOTM2OTY4ZWViMWNmMzdjZGY4NTBmZjhjOTZkYjNmMDc4YmE0Iiwic2hhMjU2IjoiNzNjYWMxZWRlZmUyZjdlODFkNjRiMTI2YjIxMmY2Yzk2ZTAwNjgyNGJjZmJkZDI3Y2E5NmUyNTk5ZTQwNzUwZiIsInNoYTM4NCI6IjlmNmUwNzlmOTNiNDlkMTg1YzEyNzY0OGQwNzE3YTA0N2E3MzYyNDI4YzY4MzBhNDViNzExODAwZDE4NjIwZDZjMjcwZGE3ZmY0Y2FjOTRmNGVmZDdiZWQ5OTlkOWU0ZCIsInNoYTUxMiI6IjAwNzE5MGUyYjk5ZjVlN2Q1OGZiYWI2YTk1YmY0NjJiODhkOTg1N2NlNjY4MTMyMGJmM2M0Y2ZiZmY0MjkxZmEzNTMyMTk3YzdkODc2YWQ3NjZhOTQyOTQ2Zjc1OWY2YTViNDBlM2I2MzM3YzIwNWI0M2JkOWMyN2JiMTljNzk0IiwiYmxha2UyYiI6IjlhN2VhNTQzY2ZhMmMzMWYyZDIyNjg2MjUwNzUyNDE0Mjc1OWJiZTA0MWZlMWJkMzQzNDM1MWQwNWZlYjI2OGY2MjU0OTFlMzlmMzdkYWQ4MGM2Y2UzYTE4ZjAxNGEzZjJiMmQ2OGU2OTc0MjRmNTU2M2Y5ZjlhYzc1MzJiMjEwIiwiYmxha2UycyI6ImYxZmMwMjA0YjdjNzIwNGJlNWI1YzY3NDEyYjQ2MjY5NWE3YjFlYWQ2M2E5ZGVkMjEzYjZmYTU0NGZjNjJlYzUiLCJzaGEzXzIyNCI6IjljZDQ3YTBhMzA3NmNmYzI0NjJhNTAzMjVmMjg4ZjFiYzJjMmY2NmU2ODIxODc5NjJhNzU0NjFmIiwic2hhM18yNTYiOiI4NTFlNGI1ZDI1MWZlZTFiYzk0ODU1OWNjMDNiNjhlNTllYWU5YWI1ZTUyYjA0OTgxYTRhOTU4YWQyMDdkYjYwIiwic2hhM18zODQiOiJiZDA2ZTRhZGFlMWQ0MTJmZjFjOTcxMDJkZDFlN2JmY2UzMDViYTgxMTgyNzM3NWY5NTI4OWJkOGIyYTUxNjdiMmUyNzZjODNjNTU3ODFhMTEyMDRhNzc5MTUwMzM5ZTEiLCJzaGEzXzUxMiI6ImQ1ZGQ2OGZmZmY5NGRhZjJhMDkzZTliNmM1MTBlZmZkNThmZTA0ODMyZGQzMzEyOTZmN2NkZmYzNmRhZmQ3NGMxY2VmNjUxNTBkZjk5OGM1ODgyY2MzMzk2MTk1ZTViYjc5OTY1OGFkMTQ3MzFiMjJmZWZiMWQzNmY2MWJjYzJjIiwic2hha2VfMTI4IjoiOWJlNTgwNWMwNjg1MmZmNDUzNGQ4ZDZmODYyMmFkOTJkMGUwMWE2Y2JmYjIwN2QxOTRmM2JkYThiOGNmNWU4ZiIsInNoYWtlXzI1NiI6IjRhYjgwYjY2MzcxYzdhNjBhYWM4NDVkMTZlNWMzZDNhMmM4M2FjM2FjZDNiNTBiNzdjYWYyYTNmMWMyY2ZjZjc5OGNjYjkxN2FjZjQzNzBmZDdjN2ZmODQ5M2Q3NGY1MWM4NGU3M2ViZGQ4MTRmM2MwMzk3YzI4ODlmNTI0Mzg3In0K",
"eyJibGFrZTNfbXVsdGkiOiI4ODlmYzIwMDA4NWY1NWY4YTA4MjhiODg3MDM0OTRhMGFmNWZkZGI5N2E2YmYwMDRjM2VkYTdiYzBkNDU0MjQzIiwiYmxha2UzX3NpbmdsZSI6Ijg4OWZjMjAwMDg1ZjU1ZjhhMDgyOGI4ODcwMzQ5NGEwYWY1ZmRkYjk3YTZiZjAwNGMzZWRhN2JjMGQ0NTQyNDMiLCJyYW5kb20iOiJhNDQxYjE1ZmU5YTNjZjU2NjYxMTkwYTBiOTNiOWRlYzdkMDQxMjcyODhjYzg3MjUwOTY3Y2YzYjUyODk0ZDExIiwibWQ1IjoiNTIzNTRhMzkzYTVmOGNjNmMyMzQ0OThiYjcxMDljYzEiLCJzaGExIjoiMTJmYmRhOGE3ZGUwOGMwNDc2NTA5OWY2NGNmMGIzYjcxMjc1MGM1NyIsInNoYTIyNCI6IjEyZWU3N2U0Y2NhODViMDk4YjdjNWJlMWFjNGMwNzljNGM3MmJmODA2YjdlZjU1NGI0NzgxZDkxIiwic2hhMjU2IjoiMjU1NTMwZDAyYTY4MjY4OWE5ZTZjMjRhOWZhMDM2OGNhODMxZTI1OTAyYjM2NzQyNzkwZTk3NzU1ZjEzMmNmNSIsInNoYTM4NCI6IjhkMGEyMTRlNDk0NGE2NGY3ZmZjNTg3MGY0ZWUyZTA0OGIzYjRjMmQ0MGRmMWFmYTVlOGE1ZWNkN2IwOTY3M2ZjNWI5YzM5Yzg4Yjc2YmIwY2I4ZjQ1ZjAxY2MwNjZkNCIsInNoYTUxMiI6Ijg3NTM3OWNiYzdlOGYyNzU4YjVjMDY5ZTU2ZWRjODY1ODE4MGFkNDEzNGMwMzY1NzM4ZjM1YjQwYzI2M2JkMTMwMzcwZTE0MzZkNDNmOGFhMTgyMTg5MzgzMTg1ODNhOWJhYTUyYTBjMTk1Mjg5OTQzYzZiYTY2NTg1Yjg5M2ZiIiwiYmxha2UyYiI6IjBhY2MwNWEwOGE5YjhhODNmZTVjYTk4ZmExMTg3NTYwNjk0MjY0YWUxNTI4NDliYzFkNzQzNTYzMzMyMTlhYTg3N2ZiNjc4MmRjZDZiOGIyYjM1MTkyNDQzNDE2ODJiMTQ3YmY2YTY3MDU2ZWIwOTQ4MzE1M2E4Y2ZiNTNmMTI0IiwiYmxha2UycyI6ImY5ZTRhZGRlNGEzZDRhOTZhOWUyNjVjMGVmMjdmZDNiNjA0NzI1NDllMTEyMWQzOGQwMTkxNTY5ZDY5YzdhYzAiLCJzaGEzXzIyNCI6ImM0NjQ3MGRjMjkyNGI0YjZkMTA2NDY5MDRiNWM2OGVjNTU2YmQ4MTA5NmVkMTA4YjZiMzQyZmU1Iiwic2hhM18yNTYiOiIwMDBlMThiZTI1MzYxYTk0NGExZTIwNjQ5ZmY0ZGM2OGRiZTk0OGNkNTYwY2I5MTFhODU1OTE3ODdkNWQ5YWYwIiwic2hhM18zODQiOiIzNDljZmVhMGUxZGE0NWZlMmYzNjJhMWFjZjI1ZTczOWNiNGQ0NDdiM2NiODUzZDVkYWNjMzU5ZmRhMWE1M2FhYWU5OTM2ZmFhZWM1NmFhZDkwMThhYjgxMTI4ZjI3N2YiLCJzaGEzXzUxMiI6ImMxNDgwNGY1YTNjNWE4ZGEyMTAyODk1YTFjZGU4MmIwNGYwZmY4OTczMTc0MmY2NDQyY2NmNzQ1OTQzYWQ5NGViOWZmMTNhZDg3YjRmODkxN2M5NmY5ZjMwZjkwYTFhYTI4OTI3OTkwMjg0ZDJhMzcyMjA0NjE4MTNiNDI0MzEyIiwic2hha2VfMTI4IjoiN2IxY2RkMWUyMzUzMzk0OTg5M2UyMmZkMTAwZmU0YjJhMTU1MDJmMTNjMTI0YzhiZDgxY2QwZDdlOWEzMGNmOCIsInNoYWtlXzI1NiI6ImI0NjMzZThhMjNkZDM0ODk0ZTIyNzc0ODYyNTE1MzVjYWFlNjkyMTdmOTQ0NTc3MzE1NTljODBjNWQ3M2ZkOTMxZTFjMDJlZDI0Yjc3MzE3OTJjMjVlNTZhYjg3NjI4YmJiMDgxNTU0MjU2MWY5ZGI2NWE0NDk4NDFmNGQzYTU4In0K",
"eyJibGFrZTNfbXVsdGkiOiI2Y2M0MmU4NGRiOGQyZTliYjA4YjUxNWUwYzlmYzg2NTViNDUwNGRlZDM1MzBlZjFjNTFjZWEwOWUxYThiNGYxIiwiYmxha2UzX3NpbmdsZSI6IjZjYzQyZTg0ZGI4ZDJlOWJiMDhiNTE1ZTBjOWZjODY1NWI0NTA0ZGVkMzUzMGVmMWM1MWNlYTA5ZTFhOGI0ZjEiLCJyYW5kb20iOiJhNDQxYjE1ZmU5YTNjZjU2NjYxMTkwYTBiOTNiOWRlYzdkMDQxMjcyODhjYzg3MjUwOTY3Y2YzYjUyODk0ZDExIiwibWQ1IjoiZDQwNjk3NTJhYjQ0NzFhZDliMDY3YmUxMmRjNTM2ZjYiLCJzaGExIjoiOGRjZmVlMjZjZjUyOTllMDBjN2QwZjJiZTc0NmVmMTlkZjliZGExNCIsInNoYTIyNCI6IjhjMzAzOTU3ZjI3NDNiMjUwNmQyYzIzY2VmNmU4MTQ5MTllZmE2MWM0MTFiMDk5ZmMzODc2MmRjIiwic2hhMjU2IjoiZDk3ZjQ2OWJjMWZkMjhjMjZkMjJhN2Y3ODczNzlhZmM4NjY3ZmZmM2FhYTQ5NTE4NmQyZTM4OTU2MTBjZDJmMyIsInNoYTM4NCI6IjY0NmY0YWM0ZDA2YWJkZmE2MDAwN2VjZWNiOWNjOTk4ZmJkOTBiYzYwMmY3NTk2M2RhZDUzMGMzNGE5ZGE1YzY4NjhlMGIwMDJkZDNlMTM4ZjhmMjA2ODcyNzFkMDVjMSIsInNoYTUxMiI6ImYzZTU4NTA0YzYyOGUwYjViNzBhOTYxYThmODA1MDA1NjQ1M2E5NDlmNTgzNDhiYTNhZTVlMjdkNDRhNGJkMjc5ZjA3MmU1OGQ5YjEyOGE1NDc1MTU2ZmM3YzcxMGJkYjI3OWQ5OGFmN2EwYTI4Y2Y1ZDY2MmQxODY4Zjg3ZjI3IiwiYmxha2UyYiI6ImFhNjgyYmJjM2U1ZGRjNDZkNWUxN2VjMzRlNmEzZGY5ZjhiNWQyNzk0YTZkNmY0M2VjODMxZjhjOTU2OGYyY2RiOGE4YjAyNTE4MDA4YmY0Y2FhYTlhY2FhYjNkNzRmZmRiNGZlNDgwOTcwODU3OGJiZjNlNzJjYTc5ZDQwYzZmIiwiYmxha2UycyI6ImQ0ZGJlZTJkMmZlNDMwOGViYTkwMTY1MDdmMzI1ZmJiODZlMWQzNDQ0MjgzNzRlMjAwNjNiNWQ1MzkzZTExNjMiLCJzaGEzXzIyNCI6ImE1ZTM5NWZlNGRlYjIyY2JhNjgwMWFiZTliZjljMjM2YmMzYjkwZDdiN2ZjMTRhZDhjZjQ0NzBlIiwic2hhM18yNTYiOiIwOWYwZGVjODk0OWEzYmQzYzU3N2RjYzUyMTMwMGRiY2UwMjVjM2VjOTJkNzQ0MDJkNTE1ZDA4NTQwODg2NGY1Iiwic2hhM18zODQiOiJmMjEyNmM5NTcxODQ3NDZmNjYyMjE4MTRkMDZkZWQ3NDBhYWU3MDA4MTc0YjI0OTEzY2YwOTQzY2IwMTA5Y2QxNWI4YmMwOGY1YjUwMWYwYzhhOTY4MzUwYzgzY2I1ZWUiLCJzaGEzXzUxMiI6ImU1ZmEwMzIwMzk2YTJjMThjN2UxZjVlZmJiODYwYTU1M2NlMTlkMDQ0MWMxNWEwZTI1M2RiNjJkM2JmNjg0ZDI1OWIxYmQ4OTJkYTcyMDVjYTYyODQ2YzU0YWI1ODYxOTBmNDUxZDlmZmNkNDA5YmU5MzlhNWM1YWIyZDdkM2ZkIiwic2hha2VfMTI4IjoiNGI2MTllM2I4N2U1YTY4OTgxMjk0YzgzMmU0NzljZGI4MWFmODdlZTE4YzM1Zjc5ZjExODY5ZWEzNWUxN2I3MiIsInNoYWtlXzI1NiI6ImYzOWVkNmMxZmQ2NzVmMDg3ODAyYTc4ZTUwYWFkN2ZiYTZiM2QxNzhlZWYzMjRkMTI3ZTZjYmEwMGRjNzkwNTkxNjQ1Y2U1Y2NmMjhjYzVkNWRkODU1OWIzMDMxYTM3ZjE5NjhmYmFhNDQzMmI2ZWU0Yzg3ZWE2YTdkMmE2NWM2In0K",
"eyJibGFrZTNfbXVsdGkiOiJhNDRiZjJkMzVkZDI3OTZlZTI1NmY0MzVkODFhNTdhOGM0MjZhMzM5ZDc3NTVkMmNiMjdmMzU4ZjM0NTM4OWM2IiwiYmxha2UzX3NpbmdsZSI6ImE0NGJmMmQzNWRkMjc5NmVlMjU2ZjQzNWQ4MWE1N2E4YzQyNmEzMzlkNzc1NWQyY2IyN2YzNThmMzQ1Mzg5YzYiLCJyYW5kb20iOiJhNDQxYjE1ZmU5YTNjZjU2NjYxMTkwYTBiOTNiOWRlYzdkMDQxMjcyODhjYzg3MjUwOTY3Y2YzYjUyODk0ZDExIiwibWQ1IjoiOGU5OTMzMzEyZjg4NDY4MDg0ZmRiZWNjNDYyMTMxZTgiLCJzaGExIjoiNmI0MmZjZDFmMmQyNzUwYWNkY2JkMTUzMmQ4NjQ5YTM1YWI2NDYzNCIsInNoYTIyNCI6ImQ2Y2E2OTUxNzIzZjdjZjg0NzBjZWRjMmVhNjA2ODNmMWU4NDMzM2Q2NDM2MGIzOWIyMjZlZmQzIiwic2hhMjU2IjoiMDAxNGY5Yzg0YjcwMTFhMGJkNzliNzU0NGVjNzg4NDQzNWQ4ZGY0NmRjMDBiNDk0ZmFkYzA4NWQzNDM1NjI4MyIsInNoYTM4NCI6IjMxODg2OTYxODc4NWY3MWJlM2RlZjkyZDgyNzY2NjBhZGE0MGViYTdkMDk1M2Y0YTc5ODdlMThhNzFlNjBlY2EwY2YyM2YwMjVhMmQ4ZjUyMmNkZGY3MTcxODFhMTQxNSIsInNoYTUxMiI6IjdmZGQxN2NmOWU3ZTBhZDcwMzJjMDg1MTkyYWMxZmQ0ZmFhZjZkNWNlYzAzOTE5ZDk0MmZiZTIyNWNhNmIwZTg0NmQ4ZGI0ZjllYTQ5MjJlMTdhNTg4MTY4YzExMTM1NWZiZDQ1NTlmMmU5NDcwNjAwZWE1MzBhMDdiMzY0YWQwIiwiYmxha2UyYiI6IjI0ZjExZWI5M2VlN2YxOTI5NWZiZGU5MTczMmE0NGJkZGYxOWE1ZTQ4MWNmOWFhMjQ2M2UzNDllYjg0Mzc4ZDBkODFjNzY0YWQ1NTk1YjkxZjQzYzgxODcxNTRlYWU5NTZkY2ZjZTlkMWU2MTZjNTFkZThhZDZjZTBhODcyY2Q0IiwiYmxha2UycyI6IjVkZTUwZDUwMGYwYTBmOGRlMTEwOGE2ZmFkZGM4ODNlMTA3NmQ3MThiNmQxN2E4ZDVkMjgzZDdiNGYzZDU2OGEiLCJzaGEzXzIyNCI6IjFhNTA0OGNlYWZiYjg2ZDc4ZmNiNTI0ZTViYTc4NWQ2ZmY5NzY1ZTNlMzdhZWRjZmYxZGVjNGJhIiwic2hhM18yNTYiOiI0YjA0YjE1NTRmMzRkYTlmMjBmZDczM2IzNDg4NjE0ZWNhM2IwOWU1OTJjOGJlMmM0NjA1NjYyMWU0MjJmZDllIiwic2hhM18zODQiOiI1NjMwYjM2OGQ4MGM1YmM5MTgzM2VmNWM2YWUzOTJhNDE4NTNjYmM2MWJiNTI4ZDE4YWM1OWFjZGZiZWU1YThkMWMyZDE4MTM1ZGI2ZWQ2OTJlODFkZThmYTM3MzkxN2MiLCJzaGEzXzUxMiI6IjA2ODg4MGE1MmNiNDkzODYwZDhjOTVhOTFhZGFmZTYwZGYxODc2ZDhjYjFhNmI3NTU2ZjJjM2Y1NjFmMGYwZjMyZjZhYTA1YmVmN2FhYjQ5OWEwNTM0Zjk0Njc4MDEzODlmNDc0ODFiNzcxMjdjMDFiOGFhOTY4NGJhZGUzYmY2Iiwic2hha2VfMTI4IjoiODlmYTdjNDcwNGI4NGZkMWQ1M2E0MTBlN2ZjMzU3NWRhNmUxMGU1YzkzMjM1NWYyZWEyMWM4NDVhZDBlM2UxOCIsInNoYWtlXzI1NiI6IjE4NGNlMWY2NjdmYmIyODA5NWJhZmVkZTQzNTUzZjhkYzBhNGY1MDQwYWJlMjcxMzkzMzcwNDEyZWFiZTg0ZGJhNjI0Y2ZiZWE4YzUxZDU2YzkwMTM2Mjg2ODgyZmQ0Y2E3MzA3NzZjNWUzODFlYzI5MWYxYTczOTE1MDkyMTFmIn0K",
"eyJibGFrZTNfbXVsdGkiOiJhYjA2YjNmMDliNTExOTAzMTMzMzY5NDE2MTc4ZDk2ZjlkYTc3ZGEwOTgyNDJmN2VlMTVjNTNhNTRkMDZhNWVmIiwiYmxha2UzX3NpbmdsZSI6ImFiMDZiM2YwOWI1MTE5MDMxMzMzNjk0MTYxNzhkOTZmOWRhNzdkYTA5ODI0MmY3ZWUxNWM1M2E1NGQwNmE1ZWYiLCJyYW5kb20iOiJhNDQxYjE1ZmU5YTNjZjU2NjYxMTkwYTBiOTNiOWRlYzdkMDQxMjcyODhjYzg3MjUwOTY3Y2YzYjUyODk0ZDExIiwibWQ1IjoiZWY0MjcxYjU3NTQwMjU4NGQ2OTI5ZWJkMGI3Nzk5NzYiLCJzaGExIjoiMzgzNzliYWQzZjZiZjc4MmM4OTgzOGY3YWVkMzRkNDNkMzNlYWM2MSIsInNoYTIyNCI6ImQ5ZDNiMjJkYmZlY2M1NTdlODAzNjg5M2M3ZWE0N2I0NTQzYzM2NzZhMDk4NzMxMzRhNjQ0OWEwIiwic2hhMjU2IjoiMjYxZGI3NmJlMGYxMzdlZWJkYmI5OGRlYWM0ZjcyMDdiOGUxMjdiY2MyZmMwODI5OGVjZDczYjQ3MjYxNjQ1NiIsInNoYTM4NCI6IjMzMjkwYWQxYjlhMmRkYmU0ODY3MWZiMTIxNDdiZWJhNjI4MjA1MDcwY2VkNjNiZTFmNGU5YWRhMjgwYWU2ZjZjNDkzYTY2MDllMGQ2YTIzMWU2ODU5ZmIyNGZhM2FjMCIsInNoYTUxMiI6IjAzMDZhMWI1NmNiYTdjNjJiNTNmNTk4MTAwMTQ3MDQ5ODBhNGRmZTdjZjQ5NTU4ZmMyMmQxZDczZDc5NzJmZTllODk2ZWRjMmEyYTQxYWVjNjRjZjkwZGUwYjI1NGM0MDBlZTU1YzcwZjk3OGVlMzk5NmM2YzhkNTBjYTI4YTdiIiwiYmxha2UyYiI6IjY1MDZhMDg1YWQ5MGZkZjk2NGJmMGE5NTFkZmVkMTllZTc0NGVjY2EyODQzZjQzYTI5NmFjZDM0M2RiODhhMDNlNTlkNmFmMGM1YWJkNTEzMzc4MTQ5Yjg3OTExMTVmODRmMDIyZWM1M2JmNGFjNDZhZDczNWIwMmJlYTM0MDk5IiwiYmxha2UycyI6IjdlZDQ3ZWQxOTg3MTk0YWFmNGIwMjQ3MWFkNTMyMmY3NTE3ZjI0OTcwMDc2Y2NmNDkzMWI0MzYxMDU1NzBlNDAiLCJzaGEzXzIyNCI6Ijk2MGM4MDExOTlhMGUzYWExNjdiNmU2MWVkMzE2ZDUzMDM2Yjk4M2UyOThkNWI5MjZmMDc3NDlhIiwic2hhM18yNTYiOiIzYzdmYWE1ZDE3Zjk2MGYxOTI2ZjNlNGIyZjc1ZjdiOWIyZDQ4NGFhNmEwM2ViOWNlMTI4NmM2OTE2YWEyM2RlIiwic2hhM18zODQiOiI5Y2Y0NDA1NWFjYzFlYjZmMDY1YjRjODcxYTYzNTM1MGE1ZjY0ODQwM2YwYTU0MWEzYzZhNjI3N2ViZjZmYTNjYmM1YmJiNjQwMDE4OGFlMWIxMTI2OGZmMDJiMzYzZDUiLCJzaGEzXzUxMiI6ImEyZDk3ZDRlYjYxM2UwZDViYTc2OTk2MzE2MzcxOGEwNDIxZDkxNTNiNjllYjM5MDRmZjI4ODRhZDdjNGJiYmIwNGY2Nzc1OTA1YmQxNGI2NTJmZTQ1Njg0YmI5MTQ3ZjBkYWViZjAxZjIzY2MzZDhkMjIzMTE0MGUzNjI4NTE5Iiwic2hha2VfMTI4IjoiNjkwMWMwYjg1MTg5ZTkyNTJiODI3MTc5NjE2MjRlMTM0MDQ1ZjlkMmI5MzM0MzVkM2Y0OThiZWIyN2Q3N2JiNSIsInNoYWtlXzI1NiI6ImIwMjA4ZTFkNDVjZWI0ODdiZDUwNzk3MWJiNWI3MjdjN2UyYmE3ZDliNWM2ZTEyYWE5YTNhOTY5YzcyNDRjODIwZDcyNDY1ODhlZWU3Yjk4ZWM1NzhjZWIxNjc3OTkxODljMWRkMmZkMmZmYWM4MWExZDAzZDFiNjMxOGRkMjBiIn0K",
]
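Each entry in the `hashes` blocklist above decodes to a JSON map of algorithm name to digest. A short sketch of how an entry decodes and how the validator is invoked; the digest passed here is illustrative, not a real model hash:

```python
import json
from base64 import b64decode

# Inspect the algorithms covered by the first blocklist entry.
entry = json.loads(b64decode(hashes[0]))
print(sorted(entry))  # ['blake2b', 'blake2s', 'blake3_multi', 'blake3_single', 'md5', ...]

# Model hashes are "<algorithm>:<hexdigest>" strings; a match raises immediately.
validate_hash("sha256:0123456789abcdef")  # no-op unless the digest is blocklisted
```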
@@ -31,13 +31,12 @@ from typing_extensions import Annotated, Any, Dict

from invokeai.app.invocations.constants import SCHEDULER_NAME_VALUES
from invokeai.app.util.misc import uuid_string
from invokeai.backend.model_hash.hash_validator import validate_hash

from ..raw_model import RawModel

# ModelMixin is the base class for all diffusers and transformers models
# RawModel is the InvokeAI wrapper class for ip_adapters, loras, textual_inversion and onnx runtime
AnyModel = Union[ModelMixin, RawModel, torch.nn.Module, Dict[str, torch.Tensor]]
AnyModel = Union[ModelMixin, RawModel, torch.nn.Module]


class InvalidModelConfigException(Exception):
@@ -116,7 +115,7 @@ class SchedulerPredictionType(str, Enum):
class ModelRepoVariant(str, Enum):
    """Various hugging face variants on the diffusers format."""

    Default = "" # model files without "fp16" or other qualifier
    Default = "" # model files without "fp16" or other qualifier - empty str
    FP16 = "fp16"
    FP32 = "fp32"
    ONNX = "onnx"
@@ -449,6 +448,4 @@ class ModelConfigFactory(object):
        model.key = key
        if isinstance(model, CheckpointConfigBase) and timestamp is not None:
            model.converted_at = timestamp
        if model:
            validate_hash(model.hash)
        return model # type: ignore

@@ -7,7 +7,7 @@ from importlib import import_module
from pathlib import Path

from .convert_cache.convert_cache_default import ModelConvertCache
from .load_base import LoadedModel, LoadedModelWithoutConfig, ModelLoaderBase
from .load_base import LoadedModel, ModelLoaderBase
from .load_default import ModelLoader
from .model_cache.model_cache_default import ModelCache
from .model_loader_registry import ModelLoaderRegistry, ModelLoaderRegistryBase
@@ -19,7 +19,6 @@ for module in loaders:

__all__ = [
    "LoadedModel",
    "LoadedModelWithoutConfig",
    "ModelCache",
    "ModelConvertCache",
    "ModelLoaderBase",

@@ -7,7 +7,6 @@ from pathlib import Path

from invokeai.backend.util import GIG, directory_size
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.backend.util.util import safe_filename

from .convert_cache_base import ModelConvertCacheBase

@@ -36,7 +35,6 @@ class ModelConvertCache(ModelConvertCacheBase):

    def cache_path(self, key: str) -> Path:
        """Return the path for a model with the indicated key."""
        key = safe_filename(self._cache_path, key)
        return self._cache_path / key

    def make_room(self, size: float) -> None:

@@ -4,13 +4,10 @@ Base class for model loading in InvokeAI.
"""

from abc import ABC, abstractmethod
from contextlib import contextmanager
from dataclasses import dataclass
from logging import Logger
from pathlib import Path
from typing import Any, Dict, Generator, Optional, Tuple

import torch
from typing import Any, Optional

from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.model_manager.config import (
@@ -23,44 +20,10 @@ from invokeai.backend.model_manager.load.model_cache.model_cache_base import Mod


@dataclass
class LoadedModelWithoutConfig:
    """
    Context manager object that mediates transfer from RAM<->VRAM.

    This is a context manager object that has two distinct APIs:

    1. Older API (deprecated):
       Use the LoadedModel object directly as a context manager.
       It will move the model into VRAM (on CUDA devices), and
       return the model in a form suitable for passing to torch.
       Example:
       ```
       loaded_model_= loader.get_model_by_key('f13dd932', SubModelType('vae'))
       with loaded_model as vae:
           image = vae.decode(latents)[0]
       ```

    2. Newer API (recommended):
       Call the LoadedModel's `model_on_device()` method in a
       context. It returns a tuple consisting of a copy of
       the model's state dict in CPU RAM followed by a copy
       of the model in VRAM. The state dict is provided to allow
       LoRAs and other model patchers to return the model to
       its unpatched state without expensive copy and restore
       operations.

       Example:
       ```
       loaded_model_= loader.get_model_by_key('f13dd932', SubModelType('vae'))
       with loaded_model.model_on_device() as (state_dict, vae):
           image = vae.decode(latents)[0]
       ```

    The state_dict should be treated as a read-only object and
    never modified. Also be aware that some loadable models do
    not have a state_dict, in which case this value will be None.
    """
class LoadedModel:
    """Context manager object that mediates transfer from RAM<->VRAM."""

    config: AnyModelConfig
    _locker: ModelLockerBase

    def __enter__(self) -> AnyModel:
@@ -72,29 +35,12 @@ class LoadedModelWithoutConfig:
        """Context exit."""
        self._locker.unlock()

    @contextmanager
    def model_on_device(self) -> Generator[Tuple[Optional[Dict[str, torch.Tensor]], AnyModel], None, None]:
        """Return a tuple consisting of the model's state dict (if it exists) and the locked model on execution device."""
        locked_model = self._locker.lock()
        try:
            state_dict = self._locker.get_state_dict()
            yield (state_dict, locked_model)
        finally:
            self._locker.unlock()

    @property
    def model(self) -> AnyModel:
        """Return the model without locking it."""
        return self._locker.model


@dataclass
class LoadedModel(LoadedModelWithoutConfig):
    """Context manager object that mediates transfer from RAM<->VRAM."""

    config: Optional[AnyModelConfig] = None


# TODO(MM2):
# Some "intermediary" subclasses in the ModelLoaderBase class hierarchy define methods that their subclasses don't
# know about. I think the problem may be related to this class being an ABC.

@@ -16,7 +16,7 @@ from invokeai.backend.model_manager.config import DiffusersConfigBase, ModelType
from invokeai.backend.model_manager.load.convert_cache import ModelConvertCacheBase
from invokeai.backend.model_manager.load.load_base import LoadedModel, ModelLoaderBase
from invokeai.backend.model_manager.load.model_cache.model_cache_base import ModelCacheBase, ModelLockerBase
from invokeai.backend.model_manager.load.model_util import calc_model_size_by_fs
from invokeai.backend.model_manager.load.model_util import calc_model_size_by_data, calc_model_size_by_fs
from invokeai.backend.model_manager.load.optimizations import skip_torch_weight_init
from invokeai.backend.util.devices import TorchDevice

@@ -84,7 +84,7 @@ class ModelLoader(ModelLoaderBase):
        except IndexError:
            pass

        cache_path: Path = self._convert_cache.cache_path(str(model_path))
        cache_path: Path = self._convert_cache.cache_path(config.key)
        if self._needs_conversion(config, model_path, cache_path):
            loaded_model = self._do_convert(config, model_path, cache_path, submodel_type)
        else:
@@ -95,6 +95,7 @@ class ModelLoader(ModelLoaderBase):
            config.key,
            submodel_type=submodel_type,
            model=loaded_model,
            size=calc_model_size_by_data(loaded_model),
        )

        return self._ram_cache.get(
@@ -125,7 +126,9 @@ class ModelLoader(ModelLoaderBase):
            if subtype == submodel_type:
                continue
            if submodel := getattr(pipeline, subtype.value, None):
                self._ram_cache.put(config.key, submodel_type=subtype, model=submodel)
                self._ram_cache.put(
                    config.key, submodel_type=subtype, model=submodel, size=calc_model_size_by_data(submodel)
                )
        return getattr(pipeline, submodel_type.value) if submodel_type else pipeline

    def _needs_conversion(self, config: AnyModelConfig, model_path: Path, dest_path: Path) -> bool:

@@ -30,11 +30,6 @@ class ModelLockerBase(ABC):
        """Unlock the contained model, and remove it from VRAM."""
        pass

    @abstractmethod
    def get_state_dict(self) -> Optional[Dict[str, torch.Tensor]]:
        """Return the state dict (if any) for the cached model."""
        pass

    @property
    @abstractmethod
    def model(self) -> AnyModel:
@@ -61,11 +56,6 @@ class CacheRecord(Generic[T]):
    and then injected into the model. When the model is finished, the VRAM
    copy of the state dict is deleted, and the RAM version is reinjected
    into the model.

    The state_dict should be treated as a read-only attribute. Do not attempt
    to patch or otherwise modify it. Instead, patch the copy of the state_dict
    after it is loaded into the execution device (e.g. CUDA) using the `LoadedModel`
    context manager call `model_on_device()`.
    """

    key: str
@@ -169,6 +159,7 @@ class ModelCacheBase(ABC, Generic[T]):
        self,
        key: str,
        model: T,
        size: int,
        submodel_type: Optional[SubModelType] = None,
    ) -> None:
        """Store model under key and optional submodel_type."""

@@ -29,7 +29,6 @@ import torch

from invokeai.backend.model_manager import AnyModel, SubModelType
from invokeai.backend.model_manager.load.memory_snapshot import MemorySnapshot, get_pretty_snapshot_diff
from invokeai.backend.model_manager.load.model_util import calc_model_size_by_data
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.logging import InvokeAILogger

@@ -154,13 +153,13 @@ class ModelCache(ModelCacheBase[AnyModel]):
        self,
        key: str,
        model: AnyModel,
        size: int,
        submodel_type: Optional[SubModelType] = None,
    ) -> None:
        """Store model under key and optional submodel_type."""
        key = self._make_cache_key(key, submodel_type)
        if key in self._cached_models:
            return
        size = calc_model_size_by_data(model)
        self.make_room(size)

        state_dict = model.state_dict() if isinstance(model, torch.nn.Module) else None
@@ -253,7 +252,12 @@ class ModelCache(ModelCacheBase[AnyModel]):

        May raise a torch.cuda.OutOfMemoryError
        """
        # These attributes are not in the base ModelMixin class but in various derived classes.
        # Some models don't have these attributes, in which case they run in RAM/CPU.
        self.logger.debug(f"Called to move {cache_entry.key} to {target_device}")
        if not (hasattr(cache_entry.model, "device") and hasattr(cache_entry.model, "to")):
            return

        source_device = cache_entry.device

        # Note: We compare device types only so that 'cuda' == 'cuda:0'.
@@ -261,10 +265,6 @@ class ModelCache(ModelCacheBase[AnyModel]):
        if torch.device(source_device).type == torch.device(target_device).type:
            return

        # Some models don't have a `to` method, in which case they run in RAM/CPU.
        if not hasattr(cache_entry.model, "to"):
            return

        # This roundabout method for moving the model around is done to avoid
        # the cost of moving the model from RAM to VRAM and then back from VRAM to RAM.
        # When moving to VRAM, we copy (not move) each element of the state dict from
@@ -285,9 +285,9 @@ class ModelCache(ModelCacheBase[AnyModel]):
            else:
                new_dict: Dict[str, torch.Tensor] = {}
                for k, v in cache_entry.state_dict.items():
                    new_dict[k] = v.to(torch.device(target_device), copy=True, non_blocking=True)
                    new_dict[k] = v.to(torch.device(target_device), copy=True)
                cache_entry.model.load_state_dict(new_dict, assign=True)
            cache_entry.model.to(target_device, non_blocking=True)
            cache_entry.model.to(target_device)
            cache_entry.device = target_device
        except Exception as e: # blow away cache entry
            self._delete_cache_entry(cache_entry)
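The device move above copies the state dict into VRAM rather than moving the module, so the RAM copy survives for cheap offloading. A minimal standalone sketch of the same pattern, assuming a CUDA device and a PyTorch version that supports `load_state_dict(..., assign=True)`:

```python
import torch

model = torch.nn.Linear(8, 8)        # stand-in for a cached model
cpu_state_dict = model.state_dict()  # the RAM copy the cache holds on to

# Copy (not move) each tensor to the execution device, then bind the copies.
vram_state_dict = {k: v.to("cuda", copy=True) for k, v in cpu_state_dict.items()}
model.load_state_dict(vram_state_dict, assign=True)
model.to("cuda")  # any remaining buffers follow

# Offloading reinjects the intact RAM copy instead of copying back from VRAM.
model.load_state_dict(cpu_state_dict, assign=True)
model.to("cpu")
```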
@@ -2,8 +2,6 @@
Base class and implementation of a class that moves models in and out of VRAM.
"""

from typing import Dict, Optional

import torch

from invokeai.backend.model_manager import AnyModel
@@ -29,18 +27,20 @@ class ModelLocker(ModelLockerBase):
        """Return the model without moving it around."""
        return self._cache_entry.model

    def get_state_dict(self) -> Optional[Dict[str, torch.Tensor]]:
        """Return the state dict (if any) for the cached model."""
        return self._cache_entry.state_dict

    def lock(self) -> AnyModel:
        """Move the model into the execution device (GPU) and lock it."""
        if not hasattr(self.model, "to"):
            return self.model

        # NOTE that the model has to have the to() method in order for this code to move it into GPU!
        self._cache_entry.lock()
        try:
            if self._cache.lazy_offloading:
                self._cache.offload_unlocked_models(self._cache_entry.size)

            self._cache.move_model_to_device(self._cache_entry, self._cache.execution_device)
            self._cache_entry.loaded = True

            self._cache.logger.debug(f"Locking {self._cache_entry.key} in {self._cache.execution_device}")
            self._cache.print_cuda_stats()
        except torch.cuda.OutOfMemoryError:
@@ -55,6 +55,9 @@ class ModelLocker(ModelLockerBase):

    def unlock(self) -> None:
        """Call upon exit from context."""
        if not hasattr(self.model, "to"):
            return

        self._cache_entry.unlock()
        if not self._cache.lazy_offloading:
            self._cache.offload_unlocked_models(0)

@@ -65,11 +65,14 @@ class GenericDiffusersLoader(ModelLoader):
        else:
            try:
                config = self._load_diffusers_config(model_path, config_name="config.json")
                if class_name := config.get("_class_name"):
                class_name = config.get("_class_name", None)
                if class_name:
                    result = self._hf_definition_to_type(module="diffusers", class_name=class_name)
                elif class_name := config.get("architectures"):
                if config.get("model_type", None) == "clip_vision_model":
                    class_name = config.get("architectures")
                    assert class_name is not None
                    result = self._hf_definition_to_type(module="transformers", class_name=class_name[0])
                else:
                if not class_name:
                    raise InvalidModelConfigException("Unable to decipher Load Class based on given config.json")
            except KeyError as e:
                raise InvalidModelConfigException("An expected config.json file is missing from this model.") from e
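The loader above works out which Python class to instantiate from a model folder's `config.json`. A condensed sketch of that dispatch; `resolve_model_class` is a hypothetical helper, not part of the codebase:

```python
from importlib import import_module

def resolve_model_class(config: dict) -> type:
    # Diffusers models record their class name directly.
    if class_name := config.get("_class_name"):
        return getattr(import_module("diffusers"), class_name)
    # Transformers models (e.g. CLIP vision encoders) record a list of architectures.
    if architectures := config.get("architectures"):
        return getattr(import_module("transformers"), architectures[0])
    raise ValueError("Unable to decipher Load Class based on given config.json")
```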
@@ -22,7 +22,8 @@ from .generic_diffusers import GenericDiffusersLoader


@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.VAE, format=ModelFormat.Diffusers)
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.VAE, format=ModelFormat.Checkpoint)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion1, type=ModelType.VAE, format=ModelFormat.Checkpoint)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion2, type=ModelType.VAE, format=ModelFormat.Checkpoint)
class VAELoader(GenericDiffusersLoader):
    """Class to load VAE models."""

@@ -39,8 +40,12 @@ class VAELoader(GenericDiffusersLoader):
        return True

    def _convert_model(self, config: AnyModelConfig, model_path: Path, output_path: Optional[Path] = None) -> AnyModel:
        assert isinstance(config, CheckpointConfigBase)
        config_file = self._app_config.legacy_conf_path / config.config_path
        # TODO(MM2): check whether sdxl VAE models convert.
        if config.base not in {BaseModelType.StableDiffusion1, BaseModelType.StableDiffusion2}:
            raise Exception(f"VAE conversion not supported for model type: {config.base}")
        else:
            assert isinstance(config, CheckpointConfigBase)
            config_file = self._app_config.legacy_conf_path / config.config_path

        if model_path.suffix == ".safetensors":
            checkpoint = safetensors_load_file(model_path, device="cpu")

@@ -83,7 +83,7 @@ class HuggingFaceMetadataFetch(ModelMetadataFetchBase):
            assert s.size is not None
            files.append(
                RemoteModelFile(
                    url=hf_hub_url(id, s.rfilename, revision=variant or "main"),
                    url=hf_hub_url(id, s.rfilename, revision=variant),
                    path=Path(name, s.rfilename),
                    size=s.size,
                    sha256=s.lfs.get("sha256") if s.lfs else None,

@@ -37,12 +37,9 @@ class RemoteModelFile(BaseModel):

    url: AnyHttpUrl = Field(description="The url to download this model file")
    path: Path = Field(description="The path to the file, relative to the model root")
    size: Optional[int] = Field(description="The size of this file, in bytes", default=0)
    size: int = Field(description="The size of this file, in bytes")
    sha256: Optional[str] = Field(description="SHA256 hash of this model (not always available)", default=None)

    def __hash__(self) -> int:
        return hash(str(self))


class ModelMetadataBase(BaseModel):
    """Base class for model metadata information."""

@@ -451,16 +451,8 @@ class PipelineCheckpointProbe(CheckpointProbeBase):

class VaeCheckpointProbe(CheckpointProbeBase):
    def get_base_type(self) -> BaseModelType:
        # VAEs of all base types have the same structure, so we wimp out and
        # guess using the name.
        for regexp, basetype in [
            (r"xl", BaseModelType.StableDiffusionXL),
            (r"sd2", BaseModelType.StableDiffusion2),
            (r"vae", BaseModelType.StableDiffusion1),
        ]:
            if re.search(regexp, self.model_path.name, re.IGNORECASE):
                return basetype
        raise InvalidModelConfigException("Cannot determine base type")
        # I can't find any standalone 2.X VAEs to test with!
        return BaseModelType.StableDiffusion1


class LoRACheckpointProbe(CheckpointProbeBase):

@@ -5,7 +5,7 @@ from __future__ import annotations

import pickle
from contextlib import contextmanager
from typing import Any, Dict, Generator, Iterator, List, Optional, Tuple, Union
from typing import Any, Dict, Iterator, List, Optional, Tuple, Union

import numpy as np
import torch
@@ -66,14 +66,8 @@ class ModelPatcher:
        cls,
        unet: UNet2DConditionModel,
        loras: Iterator[Tuple[LoRAModelRaw, float]],
        model_state_dict: Optional[Dict[str, torch.Tensor]] = None,
    ) -> Generator[None, None, None]:
        with cls.apply_lora(
            unet,
            loras=loras,
            prefix="lora_unet_",
            model_state_dict=model_state_dict,
        ):
    ) -> None:
        with cls.apply_lora(unet, loras, "lora_unet_"):
            yield

    @classmethod
@@ -82,9 +76,28 @@ class ModelPatcher:
        cls,
        text_encoder: CLIPTextModel,
        loras: Iterator[Tuple[LoRAModelRaw, float]],
        model_state_dict: Optional[Dict[str, torch.Tensor]] = None,
    ) -> Generator[None, None, None]:
        with cls.apply_lora(text_encoder, loras=loras, prefix="lora_te_", model_state_dict=model_state_dict):
    ) -> None:
        with cls.apply_lora(text_encoder, loras, "lora_te_"):
            yield

    @classmethod
    @contextmanager
    def apply_sdxl_lora_text_encoder(
        cls,
        text_encoder: CLIPTextModel,
        loras: List[Tuple[LoRAModelRaw, float]],
    ) -> None:
        with cls.apply_lora(text_encoder, loras, "lora_te1_"):
            yield

    @classmethod
    @contextmanager
    def apply_sdxl_lora_text_encoder2(
        cls,
        text_encoder: CLIPTextModel,
        loras: List[Tuple[LoRAModelRaw, float]],
    ) -> None:
        with cls.apply_lora(text_encoder, loras, "lora_te2_"):
            yield

    @classmethod
@@ -94,16 +107,7 @@ class ModelPatcher:
        model: AnyModel,
        loras: Iterator[Tuple[LoRAModelRaw, float]],
        prefix: str,
        model_state_dict: Optional[Dict[str, torch.Tensor]] = None,
    ) -> Generator[None, None, None]:
        """
        Apply one or more LoRAs to a model.

        :param model: The model to patch.
        :param loras: An iterator that returns the LoRA to patch in and its patch weight.
        :param prefix: A string prefix that precedes keys used in the LoRAs weight layers.
        :model_state_dict: Read-only copy of the model's state dict in CPU, for unpatching purposes.
        """
    ) -> None:
        original_weights = {}
        try:
            with torch.no_grad():
@@ -129,22 +133,19 @@ class ModelPatcher:
                    dtype = module.weight.dtype

                    if module_key not in original_weights:
                        if model_state_dict is not None: # we were provided with the CPU copy of the state dict
                            original_weights[module_key] = model_state_dict[module_key + ".weight"]
                        else:
                            original_weights[module_key] = module.weight.detach().to(device="cpu", copy=True)
                        original_weights[module_key] = module.weight.detach().to(device="cpu", copy=True)

                    layer_scale = layer.alpha / layer.rank if (layer.alpha and layer.rank) else 1.0

                    # We intentionally move to the target device first, then cast. Experimentally, this was found to
                    # be significantly faster for 16-bit CPU tensors being moved to a CUDA device than doing the
                    # same thing in a single call to '.to(...)'.
                    layer.to(device=device, non_blocking=True)
                    layer.to(dtype=torch.float32, non_blocking=True)
                    layer.to(device=device)
                    layer.to(dtype=torch.float32)
                    # TODO(ryand): Using torch.autocast(...) over explicit casting may offer a speed benefit on CUDA
                    # devices here. Experimentally, it was found to be very slow on CPU. More investigation needed.
                    layer_weight = layer.get_weight(module.weight) * (lora_weight * layer_scale)
                    layer.to(device=torch.device("cpu"), non_blocking=True)
                    layer.to(device=torch.device("cpu"))

                    assert isinstance(layer_weight, torch.Tensor) # mypy thinks layer_weight is a float|Any ??!
                    if module.weight.shape != layer_weight.shape:
@@ -153,7 +154,7 @@ class ModelPatcher:
                        layer_weight = layer_weight.reshape(module.weight.shape)

                    assert isinstance(layer_weight, torch.Tensor) # mypy thinks layer_weight is a float|Any ??!
                    module.weight += layer_weight.to(dtype=dtype, non_blocking=True)
                    module.weight += layer_weight.to(dtype=dtype)

            yield # wait for context manager exit

@@ -161,7 +162,7 @@ class ModelPatcher:
        assert hasattr(model, "get_submodule") # mypy not picking up fact that torch.nn.Module has get_submodule()
        with torch.no_grad():
            for module_key, weight in original_weights.items():
                model.get_submodule(module_key).weight.copy_(weight, non_blocking=True)
                model.get_submodule(module_key).weight.copy_(weight)

    @classmethod
    @contextmanager
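`apply_lora` above patches module weights in place and restores them when the context exits. A self-contained sketch of that save/patch/restore pattern, with the LoRA weight math reduced to a single precomputed delta; `patch_weight` is a hypothetical helper:

```python
from contextlib import contextmanager

import torch

@contextmanager
def patch_weight(module: torch.nn.Module, delta: torch.Tensor):
    # Keep a CPU copy of the original weight for cheap restoration on exit.
    original = module.weight.detach().to(device="cpu", copy=True)
    try:
        with torch.no_grad():
            module.weight += delta.to(dtype=module.weight.dtype)
        yield
    finally:
        with torch.no_grad():
            module.weight.copy_(original)

layer = torch.nn.Linear(4, 4)
with patch_weight(layer, 0.1 * torch.ones(4, 4)):
    ...  # run inference with the patched weight here
```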
@@ -6,7 +6,6 @@ from typing import Any, List, Optional, Tuple, Union

import numpy as np
import onnx
import torch
from onnx import numpy_helper
from onnxruntime import InferenceSession, SessionOptions, get_available_providers

@@ -189,15 +188,6 @@ class IAIOnnxRuntimeModel(RawModel):
        # return self.io_binding.copy_outputs_to_cpu()
        return self.session.run(None, inputs)

    # compatability with RawModel ABC
    def to(
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
        pass

    # compatability with diffusers load code
    @classmethod
    def from_pretrained(

@@ -10,20 +10,6 @@ The term 'raw' was introduced to describe a wrapper around a torch.nn.Module
that adds additional methods and attributes.
"""

from abc import ABC, abstractmethod
from typing import Optional

import torch


class RawModel(ABC):
    """Abstract base class for 'Raw' model wrappers."""

    @abstractmethod
    def to(
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
        pass
class RawModel:
    """Base class for 'Raw' model wrappers."""

@@ -11,7 +11,6 @@ import psutil
import torch
import torchvision.transforms as T
from diffusers.models import AutoencoderKL, UNet2DConditionModel
from diffusers.models.controlnet import ControlNetModel
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipeline
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from diffusers.schedulers import KarrasDiffusionSchedulers
@@ -26,6 +25,7 @@ from invokeai.backend.stable_diffusion.diffusion.shared_invokeai_diffusion impor
from invokeai.backend.stable_diffusion.diffusion.unet_attention_patcher import UNetAttentionPatcher, UNetIPAdapterData
from invokeai.backend.util.attention import auto_detect_slice_size
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.hotfixes import ControlNetModel


@dataclass

@@ -65,18 +65,6 @@ class TextualInversionModelRaw(RawModel):

        return result

    def to(
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
        if not torch.cuda.is_available():
            return
        for emb in [self.embedding, self.embedding_2]:
            if emb is not None:
                emb.to(device=device, dtype=dtype, non_blocking=non_blocking)


class TextualInversionManager(BaseTextualInversionManager):
    """TextualInversionManager implements the BaseTextualInversionManager ABC from the compel library."""

@@ -1,8 +1,6 @@
import base64
import io
import os
import re
import unicodedata
from pathlib import Path

from PIL import Image
@@ -11,33 +9,6 @@ from PIL import Image
GIG = 1073741824


def slugify(value: str, allow_unicode: bool = False) -> str:
    """
    Convert to ASCII if 'allow_unicode' is False. Convert spaces or repeated
    dashes to single dashes. Remove characters that aren't alphanumerics,
    underscores, or hyphens. Replace slashes with underscores.
    Convert to lowercase. Also strip leading and
    trailing whitespace, dashes, and underscores.

    Adapted from Django: https://github.com/django/django/blob/main/django/utils/text.py
    """
    value = str(value)
    if allow_unicode:
        value = unicodedata.normalize("NFKC", value)
    else:
        value = unicodedata.normalize("NFKD", value).encode("ascii", "ignore").decode("ascii")
    value = re.sub(r"[/]", "_", value.lower())
    value = re.sub(r"[^.\w\s-]", "", value.lower())
    return re.sub(r"[-\s]+", "-", value).strip("-_")


def safe_filename(directory: Path, value: str) -> str:
    """Make a string safe to use as a filename."""
    escaped_string = slugify(value)
    max_name_length = os.pathconf(directory, "PC_NAME_MAX") if hasattr(os, "pathconf") else 256
    return escaped_string[len(escaped_string) - max_name_length :]


def directory_size(directory: Path) -> int:
    """
    Return the aggregate size of all files in a directory (bytes).
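A quick illustration of the two helpers above; the expected outputs in the comments follow from the regular expressions in `slugify`:

```python
from pathlib import Path

print(slugify("sdxl/vae Fixed FP16"))  # -> "sdxl_vae-fixed-fp16"

# safe_filename applies the same slug, truncated to the filesystem's maximum name length.
print(safe_filename(Path("."), "sdxl/vae Fixed FP16"))  # -> "sdxl_vae-fixed-fp16"
```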
@@ -37,11 +37,7 @@
    "selectBoard": "Select a Board",
    "topMessage": "This board contains images used in the following features:",
    "uncategorized": "Uncategorized",
    "downloadBoard": "Download Board",
    "imagesWithCount_one": "{{count}} image",
    "imagesWithCount_other": "{{count}} images",
    "assetsWithCount_one": "{{count}} asset",
    "assetsWithCount_other": "{{count}} assets"
    "downloadBoard": "Download Board"
  },
  "accordions": {
    "generation": {
@@ -384,11 +380,7 @@
    "problemDeletingImagesDesc": "One or more images could not be deleted",
    "viewerImage": "Viewer Image",
    "compareImage": "Compare Image",
    "noActiveSearch": "No active search",
    "openInViewer": "Open in Viewer",
    "searchingBy": "Searching by",
    "selectAllOnPage": "Select All On Page",
    "selectAllOnBoard": "Select All On Board",
    "selectForCompare": "Select for Compare",
    "selectAnImageToCompare": "Select an Image to Compare",
    "slider": "Slider",

@@ -2,7 +2,8 @@ import type { AppStartListening } from 'app/store/middleware/listenerMiddleware'
import { imageSelected } from 'features/gallery/store/gallerySlice';
import { IMAGE_CATEGORIES } from 'features/gallery/store/types';
import { imagesApi } from 'services/api/endpoints/images';
import { getListImagesUrl } from 'services/api/util';
import type { ImageCache } from 'services/api/types';
import { getListImagesUrl, imagesSelectors } from 'services/api/util';

export const addFirstListImagesListener = (startAppListening: AppStartListening) => {
  startAppListening({
@@ -17,10 +18,13 @@ export const addFirstListImagesListener = (startAppListening: AppStartListening)
      cancelActiveListeners();
      unsubscribe();

      const data = action.payload;
      // TODO: figure out how to type the predicate
      const data = action.payload as ImageCache;

      if (data.items.length > 0) {
        dispatch(imageSelected(data.items[0] ?? null));
      if (data.ids.length > 0) {
        // Select the first image
        const firstImage = imagesSelectors.selectAll(data)[0];
        dispatch(imageSelected(firstImage ?? null));
      }
    },
  });

@@ -1,13 +1,9 @@
import { isAnyOf } from '@reduxjs/toolkit';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { selectListImagesQueryArgs } from 'features/gallery/store/gallerySelectors';
import {
  boardIdSelected,
  galleryViewChanged,
  imageSelected,
  selectionChanged,
} from 'features/gallery/store/gallerySlice';
import { boardIdSelected, galleryViewChanged, imageSelected } from 'features/gallery/store/gallerySlice';
import { ASSETS_CATEGORIES, IMAGE_CATEGORIES } from 'features/gallery/store/types';
import { imagesApi } from 'services/api/endpoints/images';
import { imagesSelectors } from 'services/api/util';

export const addBoardIdSelectedListener = (startAppListening: AppStartListening) => {
  startAppListening({
@@ -18,9 +14,14 @@ export const addBoardIdSelectedListener = (startAppListening: AppStartListening)

      const state = getState();

      const queryArgs = selectListImagesQueryArgs(state);
      const board_id = boardIdSelected.match(action) ? action.payload.boardId : state.gallery.selectedBoardId;

      dispatch(selectionChanged([]));
      const galleryView = galleryViewChanged.match(action) ? action.payload : state.gallery.galleryView;

      // when a board is selected, we need to wait until the board has loaded *some* images, then select the first one
      const categories = galleryView === 'images' ? IMAGE_CATEGORIES : ASSETS_CATEGORIES;

      const queryArgs = { board_id: board_id ?? 'none', categories };

      // wait until the board has some images - maybe it already has some from a previous fetch
      // must use getState() to ensure we do not have stale state
@@ -34,12 +35,11 @@ export const addBoardIdSelectedListener = (startAppListening: AppStartListening)
      const { data: boardImagesData } = imagesApi.endpoints.listImages.select(queryArgs)(getState());

      if (boardImagesData && boardIdSelected.match(action) && action.payload.selectedImageName) {
        const selectedImage = boardImagesData.items.find(
          (item) => item.image_name === action.payload.selectedImageName
        );
        const selectedImage = imagesSelectors.selectById(boardImagesData, action.payload.selectedImageName);
        dispatch(imageSelected(selectedImage || null));
      } else if (boardImagesData) {
        dispatch(imageSelected(boardImagesData.items[0] || null));
        const firstImage = imagesSelectors.selectAll(boardImagesData)[0];
        dispatch(imageSelected(firstImage || null));
      } else {
        // board has no images - deselect
        dispatch(imageSelected(null));

@@ -22,13 +22,7 @@ import type { BatchConfig } from 'services/api/types';
import { socketInvocationComplete } from 'services/events/actions';
import { assert } from 'tsafe';

const matcher = isAnyOf(
  caLayerImageChanged,
  caLayerProcessedImageChanged,
  caLayerProcessorConfigChanged,
  caLayerModelChanged,
  caLayerRecalled
);
const matcher = isAnyOf(caLayerImageChanged, caLayerProcessorConfigChanged, caLayerModelChanged, caLayerRecalled);

const DEBOUNCE_MS = 300;
const log = logger('session');
@@ -79,10 +73,9 @@ export const addControlAdapterPreprocessor = (startAppListening: AppStartListeni
      const originalConfig = originalLayer?.controlAdapter.processorConfig;

      const image = layer.controlAdapter.image;
      const processedImage = layer.controlAdapter.processedImage;
      const config = layer.controlAdapter.processorConfig;

      if (isEqual(config, originalConfig) && isEqual(image, originalImage) && processedImage) {
      if (isEqual(config, originalConfig) && isEqual(image, originalImage)) {
        // Neither config nor image have changed, we can bail
        return;
      }

@@ -4,6 +4,7 @@ import { selectListImagesQueryArgs } from 'features/gallery/store/gallerySelecto
import { imageToCompareChanged, selectionChanged } from 'features/gallery/store/gallerySlice';
import { imagesApi } from 'services/api/endpoints/images';
import type { ImageDTO } from 'services/api/types';
import { imagesSelectors } from 'services/api/util';

export const galleryImageClicked = createAction<{
  imageDTO: ImageDTO;
@@ -31,14 +32,14 @@ export const addGalleryImageClickedListener = (startAppListening: AppStartListen
      const { imageDTO, shiftKey, ctrlKey, metaKey, altKey } = action.payload;
      const state = getState();
      const queryArgs = selectListImagesQueryArgs(state);
      const queryResult = imagesApi.endpoints.listImages.select(queryArgs)(state);
      const { data: listImagesData } = imagesApi.endpoints.listImages.select(queryArgs)(state);

      if (!queryResult.data) {
      if (!listImagesData) {
        // Should never happen if we have clicked a gallery image
        return;
      }

      const imageDTOs = queryResult.data.items;
      const imageDTOs = imagesSelectors.selectAll(listImagesData);
      const selection = state.gallery.selection;

      if (altKey) {

@@ -22,10 +22,11 @@ import { imageSelected } from 'features/gallery/store/gallerySlice';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { isImageFieldInputInstance } from 'features/nodes/types/field';
import { isInvocationNode } from 'features/nodes/types/invocation';
import { forEach } from 'lodash-es';
import { clamp, forEach } from 'lodash-es';
import { api } from 'services/api';
import { imagesApi } from 'services/api/endpoints/images';
import type { ImageDTO } from 'services/api/types';
import { imagesSelectors } from 'services/api/util';

const deleteNodesImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
  state.nodes.present.nodes.forEach((node) => {
@@ -117,7 +118,32 @@ export const addRequestedSingleImageDeletionListener = (startAppListening: AppSt
      }

      dispatch(isModalOpenChanged(false));

      const state = getState();
      const lastSelectedImage = state.gallery.selection[state.gallery.selection.length - 1]?.image_name;

      if (imageDTO && imageDTO?.image_name === lastSelectedImage) {
        const { image_name } = imageDTO;

        const baseQueryArgs = selectListImagesQueryArgs(state);
        const { data } = imagesApi.endpoints.listImages.select(baseQueryArgs)(state);

        const cachedImageDTOs = data ? imagesSelectors.selectAll(data) : [];

        const deletedImageIndex = cachedImageDTOs.findIndex((i) => i.image_name === image_name);

        const filteredImageDTOs = cachedImageDTOs.filter((i) => i.image_name !== image_name);

        const newSelectedImageIndex = clamp(deletedImageIndex, 0, filteredImageDTOs.length - 1);

        const newSelectedImageDTO = filteredImageDTOs[newSelectedImageIndex];

        if (newSelectedImageDTO) {
          dispatch(imageSelected(newSelectedImageDTO));
        } else {
          dispatch(imageSelected(null));
        }
      }

      // We need to reset the features where the image is in use - none of these work if their image(s) don't exist
      if (imageUsage.isCanvasImage) {
@@ -142,20 +168,6 @@ export const addRequestedSingleImageDeletionListener = (startAppListening: AppSt
      if (wasImageDeleted) {
        dispatch(api.util.invalidateTags([{ type: 'Board', id: imageDTO.board_id ?? 'none' }]));
      }

      const lastSelectedImage = state.gallery.selection[state.gallery.selection.length - 1]?.image_name;

      if (imageDTO && imageDTO?.image_name === lastSelectedImage) {
        const baseQueryArgs = selectListImagesQueryArgs(state);
        const { data } = imagesApi.endpoints.listImages.select(baseQueryArgs)(state);

        if (data && data.items) {
          const newlySelectedImage = data?.items.find((img) => img.image_name !== imageDTO?.image_name);
          dispatch(imageSelected(newlySelectedImage || null));
        } else {
          dispatch(imageSelected(null));
        }
      }
    },
  });

@@ -176,8 +188,10 @@ export const addRequestedSingleImageDeletionListener = (startAppListening: AppSt
      const queryArgs = selectListImagesQueryArgs(state);
      const { data } = imagesApi.endpoints.listImages.select(queryArgs)(state);

      if (data && data.items[0]) {
        dispatch(imageSelected(data.items[0]));
      const newSelectedImageDTO = data ? imagesSelectors.selectAll(data)[0] : undefined;

      if (newSelectedImageDTO) {
        dispatch(imageSelected(newSelectedImageDTO));
      } else {
        dispatch(imageSelected(null));
      }

@@ -15,12 +15,7 @@ import {
} from 'features/controlLayers/store/controlLayersSlice';
import type { TypesafeDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import { isValidDrop } from 'features/dnd/util/isValidDrop';
import {
  imageSelected,
  imageToCompareChanged,
  isImageViewerOpenChanged,
  selectionChanged,
} from 'features/gallery/store/gallerySlice';
import { imageSelected, imageToCompareChanged, isImageViewerOpenChanged } from 'features/gallery/store/gallerySlice';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { imagesApi } from 'services/api/endpoints/images';
@@ -221,7 +216,6 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
          board_id: boardId,
        })
      );
      dispatch(selectionChanged([]));
      return;
    }

@@ -239,7 +233,6 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
          imageDTO,
        })
      );
      dispatch(selectionChanged([]));
      return;
    }

@@ -255,7 +248,6 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
          board_id: boardId,
        })
      );
      dispatch(selectionChanged([]));
      return;
    }

@@ -269,7 +261,6 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
          imageDTOs,
        })
      );
      dispatch(selectionChanged([]));
      return;
    }
  },

@@ -8,14 +8,14 @@ import {
  galleryViewChanged,
  imageSelected,
  isImageViewerOpenChanged,
  offsetChanged,
} from 'features/gallery/store/gallerySlice';
import { IMAGE_CATEGORIES } from 'features/gallery/store/types';
import { $nodeExecutionStates, upsertExecutionState } from 'features/nodes/hooks/useExecutionState';
import { zNodeStatus } from 'features/nodes/types/invocation';
import { CANVAS_OUTPUT } from 'features/nodes/util/graph/constants';
import { boardsApi } from 'services/api/endpoints/boards';
import { imagesApi } from 'services/api/endpoints/images';
import { getCategories, getListImagesUrl } from 'services/api/util';
import { imagesAdapter } from 'services/api/util';
import { socketInvocationComplete } from 'services/events/actions';

// These nodes output an image, but do not actually *save* an image, so we don't want to handle the gallery logic on them
@@ -52,6 +52,24 @@ export const addInvocationCompleteEventListener = (startAppListening: AppStartLi
|
||||
}
|
||||
|
||||
if (!imageDTO.is_intermediate) {
|
||||
/**
|
||||
* Cache updates for when an image result is received
|
||||
* - add it to the no_board/images
|
||||
*/
|
||||
|
||||
dispatch(
|
||||
imagesApi.util.updateQueryData(
|
||||
'listImages',
|
||||
{
|
||||
board_id: imageDTO.board_id ?? 'none',
|
||||
categories: IMAGE_CATEGORIES,
|
||||
},
|
||||
(draft) => {
|
||||
imagesAdapter.addOne(draft, imageDTO);
|
||||
}
|
||||
)
|
||||
);
|
||||
|
||||
// update the total images for the board
|
||||
dispatch(
|
||||
boardsApi.util.updateQueryData('getBoardImagesTotal', imageDTO.board_id ?? 'none', (draft) => {
|
||||
@@ -60,18 +78,7 @@ export const addInvocationCompleteEventListener = (startAppListening: AppStartLi
|
||||
})
|
||||
);
|
||||
|
||||
dispatch(
|
||||
imagesApi.util.invalidateTags([
|
||||
{ type: 'Board', id: imageDTO.board_id ?? 'none' },
|
||||
{
|
||||
type: 'ImageList',
|
||||
id: getListImagesUrl({
|
||||
board_id: imageDTO.board_id ?? 'none',
|
||||
categories: getCategories(imageDTO),
|
||||
}),
|
||||
},
|
||||
])
|
||||
);
|
||||
dispatch(imagesApi.util.invalidateTags([{ type: 'Board', id: imageDTO.board_id ?? 'none' }]));
|
||||
|
||||
const { shouldAutoSwitch } = gallery;
|
||||
|
||||
@@ -91,8 +98,6 @@ export const addInvocationCompleteEventListener = (startAppListening: AppStartLi
|
||||
);
|
||||
}
|
||||
|
||||
dispatch(offsetChanged(0));
|
||||
|
||||
if (!imageDTO.board_id && gallery.selectedBoardId !== 'none') {
|
||||
dispatch(
|
||||
boardIdSelected({
|
||||
|
||||
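The `@@ -52,6 +52,24` hunk above writes the fresh image straight into the RTK Query cache instead of invalidating the whole list. A hedged sketch of that pattern against a hypothetical api slice (names here are illustrative, not InvokeAI's real endpoints):

```ts
// Sketch of the cache-update pattern above: on a socket "invocation complete"
// event, patch the cached list in place rather than refetching it.
import { createApi, fakeBaseQuery } from '@reduxjs/toolkit/query';

type ImageDTO = { image_name: string; board_id?: string };

const api = createApi({
  baseQuery: fakeBaseQuery(),
  tagTypes: ['Board'],
  endpoints: (build) => ({
    listImages: build.query<ImageDTO[], { board_id: string }>({
      queryFn: async () => ({ data: [] }),
    }),
  }),
});

// updateQueryData returns a thunk; the recipe receives an Immer draft of the
// cached data. Dispatch it, then invalidate only the cheap 'Board' count tag:
// dispatch(addImageToCache(dto));
// dispatch(api.util.invalidateTags([{ type: 'Board', id: dto.board_id ?? 'none' }]));
export const addImageToCache = (imageDTO: ImageDTO) =>
  api.util.updateQueryData('listImages', { board_id: imageDTO.board_id ?? 'none' }, (draft) => {
    draft.push(imageDTO);
  });
```
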
@@ -5,122 +5,43 @@
socketModelInstallCancelled,
socketModelInstallComplete,
socketModelInstallDownloadProgress,
socketModelInstallDownloadsComplete,
socketModelInstallDownloadStarted,
socketModelInstallError,
socketModelInstallStarted,
} from 'services/events/actions';

/**
 * A model install has two main stages - downloading and installing. All these events are namespaced under `model_install_`
 * which is a bit misleading. For example, a `model_install_started` event is actually fired _after_ the model has fully
 * downloaded and is being "physically" installed.
 *
 * Note: the download events are only fired for remote model installs, not local.
 *
 * Here's the expected flow:
 * - API receives install request, model manager preps the install
 * - `model_install_download_started` fired when the download starts
 * - `model_install_download_progress` fired continually until the download is complete
 * - `model_install_download_complete` fired when the download is complete
 * - `model_install_started` fired when the "physical" installation starts
 * - `model_install_complete` fired when the installation is complete
 * - `model_install_cancelled` fired if the installation is cancelled
 * - `model_install_error` fired if the installation has an error
 */

const selectModelInstalls = modelsApi.endpoints.listModelInstalls.select();

export const addModelInstallEventListener = (startAppListening: AppStartListening) => {
startAppListening({
actionCreator: socketModelInstallDownloadStarted,
effect: async (action, { dispatch, getState }) => {
const { id } = action.payload.data;
const { data } = selectModelInstalls(getState());

if (!data || !data.find((m) => m.id === id)) {
dispatch(api.util.invalidateTags([{ type: 'ModelInstalls' }]));
} else {
dispatch(
modelsApi.util.updateQueryData('listModelInstalls', undefined, (draft) => {
const modelImport = draft.find((m) => m.id === id);
if (modelImport) {
modelImport.status = 'downloading';
}
return draft;
})
);
}
},
});

startAppListening({
actionCreator: socketModelInstallStarted,
effect: async (action, { dispatch, getState }) => {
const { id } = action.payload.data;
const { data } = selectModelInstalls(getState());

if (!data || !data.find((m) => m.id === id)) {
dispatch(api.util.invalidateTags([{ type: 'ModelInstalls' }]));
} else {
dispatch(
modelsApi.util.updateQueryData('listModelInstalls', undefined, (draft) => {
const modelImport = draft.find((m) => m.id === id);
if (modelImport) {
modelImport.status = 'running';
}
return draft;
})
);
}
},
});

startAppListening({
actionCreator: socketModelInstallDownloadProgress,
effect: async (action, { dispatch, getState }) => {
effect: async (action, { dispatch }) => {
const { bytes, total_bytes, id } = action.payload.data;
const { data } = selectModelInstalls(getState());

if (!data || !data.find((m) => m.id === id)) {
dispatch(api.util.invalidateTags([{ type: 'ModelInstalls' }]));
} else {
dispatch(
modelsApi.util.updateQueryData('listModelInstalls', undefined, (draft) => {
const modelImport = draft.find((m) => m.id === id);
if (modelImport) {
modelImport.bytes = bytes;
modelImport.total_bytes = total_bytes;
modelImport.status = 'downloading';
}
return draft;
})
);
}
dispatch(
modelsApi.util.updateQueryData('listModelInstalls', undefined, (draft) => {
const modelImport = draft.find((m) => m.id === id);
if (modelImport) {
modelImport.bytes = bytes;
modelImport.total_bytes = total_bytes;
modelImport.status = 'downloading';
}
return draft;
})
);
},
});

startAppListening({
actionCreator: socketModelInstallComplete,
effect: (action, { dispatch, getState }) => {
effect: (action, { dispatch }) => {
const { id } = action.payload.data;

const { data } = selectModelInstalls(getState());

if (!data || !data.find((m) => m.id === id)) {
dispatch(api.util.invalidateTags([{ type: 'ModelInstalls' }]));
} else {
dispatch(
modelsApi.util.updateQueryData('listModelInstalls', undefined, (draft) => {
const modelImport = draft.find((m) => m.id === id);
if (modelImport) {
modelImport.status = 'completed';
}
return draft;
})
);
}

dispatch(
modelsApi.util.updateQueryData('listModelInstalls', undefined, (draft) => {
const modelImport = draft.find((m) => m.id === id);
if (modelImport) {
modelImport.status = 'completed';
}
return draft;
})
);
dispatch(api.util.invalidateTags([{ type: 'ModelConfig', id: LIST_TAG }]));
dispatch(api.util.invalidateTags([{ type: 'ModelScanFolderResults', id: LIST_TAG }]));
},
@@ -128,69 +49,37 @@ export const addModelInstallEventListener = (startAppListening: AppStartListenin

startAppListening({
actionCreator: socketModelInstallError,
effect: (action, { dispatch, getState }) => {
effect: (action, { dispatch }) => {
const { id, error, error_type } = action.payload.data;
const { data } = selectModelInstalls(getState());

if (!data || !data.find((m) => m.id === id)) {
dispatch(api.util.invalidateTags([{ type: 'ModelInstalls' }]));
} else {
dispatch(
modelsApi.util.updateQueryData('listModelInstalls', undefined, (draft) => {
const modelImport = draft.find((m) => m.id === id);
if (modelImport) {
modelImport.status = 'error';
modelImport.error_reason = error_type;
modelImport.error = error;
}
return draft;
})
);
}
dispatch(
modelsApi.util.updateQueryData('listModelInstalls', undefined, (draft) => {
const modelImport = draft.find((m) => m.id === id);
if (modelImport) {
modelImport.status = 'error';
modelImport.error_reason = error_type;
modelImport.error = error;
}
return draft;
})
);
},
});

startAppListening({
actionCreator: socketModelInstallCancelled,
effect: (action, { dispatch, getState }) => {
effect: (action, { dispatch }) => {
const { id } = action.payload.data;
const { data } = selectModelInstalls(getState());

if (!data || !data.find((m) => m.id === id)) {
dispatch(api.util.invalidateTags([{ type: 'ModelInstalls' }]));
} else {
dispatch(
modelsApi.util.updateQueryData('listModelInstalls', undefined, (draft) => {
const modelImport = draft.find((m) => m.id === id);
if (modelImport) {
modelImport.status = 'cancelled';
}
return draft;
})
);
}
},
});

startAppListening({
actionCreator: socketModelInstallDownloadsComplete,
effect: (action, { dispatch, getState }) => {
const { id } = action.payload.data;
const { data } = selectModelInstalls(getState());

if (!data || !data.find((m) => m.id === id)) {
dispatch(api.util.invalidateTags([{ type: 'ModelInstalls' }]));
} else {
dispatch(
modelsApi.util.updateQueryData('listModelInstalls', undefined, (draft) => {
const modelImport = draft.find((m) => m.id === id);
if (modelImport) {
modelImport.status = 'downloads_done';
}
return draft;
})
);
}
dispatch(
modelsApi.util.updateQueryData('listModelInstalls', undefined, (draft) => {
const modelImport = draft.find((m) => m.id === id);
if (modelImport) {
modelImport.status = 'cancelled';
}
return draft;
})
);
},
});
};

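The listeners above all reduce to one rule: map each socket event to a status and patch the cached install row, falling back to tag invalidation when the row isn't cached. The implied lifecycle, as a type-level sketch (the event-name strings and the `waiting` status are assumptions inferred from the listeners and doc comment above, not an official schema):

```ts
// The install lifecycle implied by the events above.
type InstallStatus = 'waiting' | 'downloading' | 'downloads_done' | 'running' | 'completed' | 'cancelled' | 'error';

// Event name -> status it implies for the cached install row.
const statusForEvent: Record<string, InstallStatus> = {
  model_install_download_started: 'downloading',
  model_install_download_progress: 'downloading',
  model_install_downloads_complete: 'downloads_done',
  model_install_started: 'running',
  model_install_complete: 'completed',
  model_install_cancelled: 'cancelled',
  model_install_error: 'error',
};
```
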
@@ -1,37 +1,47 @@
import type { IconButtonProps, SystemStyleObject } from '@invoke-ai/ui-library';
import type { SystemStyleObject } from '@invoke-ai/ui-library';
import { IconButton } from '@invoke-ai/ui-library';
import type { MouseEvent } from 'react';
import { memo } from 'react';
import type { MouseEvent, ReactElement } from 'react';
import { memo, useMemo } from 'react';

const sx: SystemStyleObject = {
minW: 0,
svg: {
transitionProperty: 'common',
transitionDuration: 'normal',
fill: 'base.100',
_hover: { fill: 'base.50' },
filter: 'drop-shadow(0px 0px 0.1rem var(--invoke-colors-base-800))',
},
};

type Props = Omit<IconButtonProps, 'aria-label' | 'onClick' | 'tooltip'> & {
type Props = {
onClick: (event: MouseEvent<HTMLButtonElement>) => void;
tooltip: string;
icon?: ReactElement;
styleOverrides?: SystemStyleObject;
};

const IAIDndImageIcon = (props: Props) => {
const { onClick, tooltip, icon, ...rest } = props;
const { onClick, tooltip, icon, styleOverrides } = props;

const sx = useMemo(
() => ({
position: 'absolute',
top: 1,
insetInlineEnd: 1,
p: 0,
minW: 0,
svg: {
transitionProperty: 'common',
transitionDuration: 'normal',
fill: 'base.100',
_hover: { fill: 'base.50' },
filter: 'drop-shadow(0px 0px 0.1rem var(--invoke-colors-base-800))',
},
...styleOverrides,
}),
[styleOverrides]
);

return (
<IconButton
onClick={onClick}
aria-label={tooltip}
tooltip={tooltip}
icon={icon}
size="sm"
variant="link"
sx={sx}
data-testid={tooltip}
{...rest}
/>
);
};

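Note the `styleOverrides` prop introduced above: because `IAIDndImageIcon` memoizes its `sx` on `[styleOverrides]`, the override object must be referentially stable or the memo recomputes every render. A sketch of the intended usage, hoisting the constant to module scope:

```ts
// Why the later hunks hoist style overrides to module scope: an inline object
// literal would be a new reference on every render, defeating useMemo.
import type { SystemStyleObject } from '@invoke-ai/ui-library';

// Stable reference: created once per module load.
const saveImageStyleOverrides: SystemStyleObject = { mt: 6 };

// Usage (JSX): <IAIDndImageIcon tooltip="Save" onClick={onSave} styleOverrides={saveImageStyleOverrides} />
```
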
invokeai/frontend/web/src/common/util/dateComparator.ts (new file)
@@ -0,0 +1,16 @@
/**
 * Comparator function for sorting dates in ascending order
 */
export const dateComparator = (a: string, b: string) => {
const dateA = new Date(a);
const dateB = new Date(b);

// sort in ascending order
if (dateA > dateB) {
return 1;
}
if (dateA < dateB) {
return -1;
}
return 0;
};
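Since `dateComparator` returns the `-1/0/1` contract expected by `Array.prototype.sort`, usage is direct (the import path below assumes the project's `src`-rooted alias):

```ts
import { dateComparator } from 'common/util/dateComparator';

const timestamps = ['2024-05-03T12:00:00Z', '2024-05-01T09:30:00Z', '2024-05-02T18:45:00Z'];
const ascending = [...timestamps].sort(dateComparator);
// Reverse the arguments for descending order.
const descending = [...timestamps].sort((a, b) => dateComparator(b, a));
```
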
@@ -1,3 +1,4 @@
import type { SystemStyleObject } from '@invoke-ai/ui-library';
import { Box, Flex, Spinner } from '@invoke-ai/ui-library';
import { skipToken } from '@reduxjs/toolkit/query';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
@@ -184,7 +185,7 @@ const ControlAdapterImagePreview = ({ isSmall, id }: Props) => {
/>
</Box>

<Flex flexDir="column" top={1} insetInlineEnd={1}>
<>
<IAIDndImageIcon
onClick={handleResetControlImage}
icon={controlImage ? <PiArrowCounterClockwiseBold size={16} /> : undefined}
@@ -194,13 +195,15 @@ const ControlAdapterImagePreview = ({ isSmall, id }: Props) => {
onClick={handleSaveControlImage}
icon={controlImage ? <PiFloppyDiskBold size={16} /> : undefined}
tooltip={t('controlnet.saveControlImage')}
styleOverrides={saveControlImageStyleOverrides}
/>
<IAIDndImageIcon
onClick={handleSetControlImageToDimensions}
icon={controlImage ? <PiRulerBold size={16} /> : undefined}
tooltip={t('controlnet.setControlImageDimensions')}
styleOverrides={setControlImageDimensionsStyleOverrides}
/>
</Flex>
</>

{pendingControlImages.includes(id) && (
<Flex
@@ -223,3 +226,6 @@ const ControlAdapterImagePreview = ({ isSmall, id }: Props) => {
};

export default memo(ControlAdapterImagePreview);

const saveControlImageStyleOverrides: SystemStyleObject = { mt: 6 };
const setControlImageDimensionsStyleOverrides: SystemStyleObject = { mt: 12 };

@@ -4,7 +4,6 @@ import {
caLayerControlModeChanged,
caLayerImageChanged,
caLayerModelChanged,
caLayerProcessedImageChanged,
caLayerProcessorConfigChanged,
caOrIPALayerBeginEndStepPctChanged,
caOrIPALayerWeightChanged,
@@ -85,14 +84,6 @@ export const CALayerControlAdapterWrapper = memo(({ layerId }: Props) => {
[dispatch, layerId]
);

const onErrorLoadingImage = useCallback(() => {
dispatch(caLayerImageChanged({ layerId, imageDTO: null }));
}, [dispatch, layerId]);

const onErrorLoadingProcessedImage = useCallback(() => {
dispatch(caLayerProcessedImageChanged({ layerId, imageDTO: null }));
}, [dispatch, layerId]);

const droppableData = useMemo<CALayerImageDropData>(
() => ({
actionType: 'SET_CA_LAYER_IMAGE',
@@ -123,8 +114,6 @@ export const CALayerControlAdapterWrapper = memo(({ layerId }: Props) => {
onChangeImage={onChangeImage}
droppableData={droppableData}
postUploadAction={postUploadAction}
onErrorLoadingImage={onErrorLoadingImage}
onErrorLoadingProcessedImage={onErrorLoadingProcessedImage}
/>
);
});

@@ -28,8 +28,6 @@ type Props = {
onChangeProcessorConfig: (processorConfig: ProcessorConfig | null) => void;
onChangeModel: (modelConfig: ControlNetModelConfig | T2IAdapterModelConfig) => void;
onChangeImage: (imageDTO: ImageDTO | null) => void;
onErrorLoadingImage: () => void;
onErrorLoadingProcessedImage: () => void;
droppableData: TypesafeDroppableData;
postUploadAction: PostUploadAction;
};
@@ -43,8 +41,6 @@ export const ControlAdapter = memo(
({
onChangeProcessorConfig,
onChangeModel,
onChangeImage,
onErrorLoadingImage,
onErrorLoadingProcessedImage,
droppableData,
postUploadAction,
}: Props) => {
@@ -95,8 +91,6 @@ export const ControlAdapter = memo(
onChangeImage={onChangeImage}
droppableData={droppableData}
postUploadAction={postUploadAction}
onErrorLoadingImage={onErrorLoadingImage}
onErrorLoadingProcessedImage={onErrorLoadingProcessedImage}
/>
</Flex>
</Flex>

@@ -1,3 +1,4 @@
import type { SystemStyleObject } from '@invoke-ai/ui-library';
import { Box, Flex, Spinner, useShiftModifier } from '@invoke-ai/ui-library';
import { skipToken } from '@reduxjs/toolkit/query';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
@@ -26,19 +27,10 @@ type Props = {
onChangeImage: (imageDTO: ImageDTO | null) => void;
droppableData: TypesafeDroppableData;
postUploadAction: PostUploadAction;
onErrorLoadingImage: () => void;
onErrorLoadingProcessedImage: () => void;
};

export const ControlAdapterImagePreview = memo(
({
controlAdapter,
onChangeImage,
droppableData,
postUploadAction,
onErrorLoadingImage,
onErrorLoadingProcessedImage,
}: Props) => {
({ controlAdapter, onChangeImage, droppableData, postUploadAction }: Props) => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const autoAddBoardId = useAppSelector((s) => s.gallery.autoAddBoardId);
@@ -136,23 +128,10 @@ export const ControlAdapterImagePreview = memo(
controlAdapter.processorConfig !== null;

useEffect(() => {
if (!isConnected) {
return;
if (isConnected && (isErrorControlImage || isErrorProcessedControlImage)) {
handleResetControlImage();
}
if (isErrorControlImage) {
onErrorLoadingImage();
}
if (isErrorProcessedControlImage) {
onErrorLoadingProcessedImage();
}
}, [
handleResetControlImage,
isConnected,
isErrorControlImage,
isErrorProcessedControlImage,
onErrorLoadingImage,
onErrorLoadingProcessedImage,
]);
}, [handleResetControlImage, isConnected, isErrorControlImage, isErrorProcessedControlImage]);

return (
<Flex
@@ -188,7 +167,6 @@ export const ControlAdapterImagePreview = memo(
droppableData={droppableData}
imageDTO={processedControlImage}
isUploadDisabled={true}
onError={handleResetControlImage}
/>
</Box>

@@ -202,13 +180,13 @@ export const ControlAdapterImagePreview = memo(
onClick={handleSaveControlImage}
icon={controlImage ? <PiFloppyDiskBold size={16} /> : undefined}
tooltip={t('controlnet.saveControlImage')}
mt={6}
styleOverrides={saveControlImageStyleOverrides}
/>
<IAIDndImageIcon
onClick={handleSetControlImageToDimensions}
icon={controlImage ? <PiRulerBold size={16} /> : undefined}
tooltip={shift ? t('controlnet.setControlImageDimensionsForce') : t('controlnet.setControlImageDimensions')}
mt={12}
styleOverrides={setControlImageDimensionsStyleOverrides}
/>
</>

@@ -234,3 +212,6 @@ export const ControlAdapterImagePreview = memo(
);

ControlAdapterImagePreview.displayName = 'ControlAdapterImagePreview';

const saveControlImageStyleOverrides: SystemStyleObject = { mt: 6 };
const setControlImageDimensionsStyleOverrides: SystemStyleObject = { mt: 12 };

@@ -1,3 +1,4 @@
import type { SystemStyleObject } from '@invoke-ai/ui-library';
import { Flex, useShiftModifier } from '@invoke-ai/ui-library';
import { skipToken } from '@reduxjs/toolkit/query';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
@@ -99,7 +100,7 @@ export const IPAdapterImagePreview = memo(
onClick={handleSetControlImageToDimensions}
icon={controlImage ? <PiRulerBold size={16} /> : undefined}
tooltip={shift ? t('controlnet.setControlImageDimensionsForce') : t('controlnet.setControlImageDimensions')}
mt={6}
styleOverrides={setControlImageDimensionsStyleOverrides}
/>
</>
</Flex>
@@ -108,3 +109,5 @@ export const IPAdapterImagePreview = memo(
);

IPAdapterImagePreview.displayName = 'IPAdapterImagePreview';

const setControlImageDimensionsStyleOverrides: SystemStyleObject = { mt: 6 };

@@ -1,3 +1,4 @@
import type { SystemStyleObject } from '@invoke-ai/ui-library';
import { Flex, useShiftModifier } from '@invoke-ai/ui-library';
import { skipToken } from '@reduxjs/toolkit/query';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
@@ -96,7 +97,7 @@ export const InitialImagePreview = memo(({ image, onChangeImage, droppableData,
onClick={onUseSize}
icon={imageDTO ? <PiRulerBold size={16} /> : undefined}
tooltip={shift ? t('controlnet.setControlImageDimensionsForce') : t('controlnet.setControlImageDimensions')}
mt={6}
styleOverrides={useSizeStyleOverrides}
/>
</>
</Flex>
@@ -104,3 +105,5 @@ export const InitialImagePreview = memo(({ image, onChangeImage, droppableData,
});

InitialImagePreview.displayName = 'InitialImagePreview';

const useSizeStyleOverrides: SystemStyleObject = { mt: 6 };

@@ -4,35 +4,20 @@ import { createSelector } from '@reduxjs/toolkit';
import { logger } from 'app/logging/logger';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { BRUSH_SPACING_PCT, MAX_BRUSH_SPACING_PX, MIN_BRUSH_SPACING_PX } from 'features/controlLayers/konva/constants';
import { setStageEventHandlers } from 'features/controlLayers/konva/events';
import { debouncedRenderers, renderers as normalRenderers } from 'features/controlLayers/konva/renderers';
import { useMouseEvents } from 'features/controlLayers/hooks/mouseEventHooks';
import {
$brushSize,
$brushSpacingPx,
$isDrawing,
$lastAddedPoint,
$lastCursorPos,
$lastMouseDownPos,
$selectedLayerId,
$selectedLayerType,
$shouldInvertBrushSizeScrollDirection,
$tool,
brushSizeChanged,
isRegionalGuidanceLayer,
layerBboxChanged,
layerTranslated,
rgLayerLineAdded,
rgLayerPointsAdded,
rgLayerRectAdded,
selectControlLayersSlice,
} from 'features/controlLayers/store/controlLayersSlice';
import type { AddLineArg, AddPointToLineArg, AddRectArg } from 'features/controlLayers/store/types';
import { debouncedRenderers, renderers as normalRenderers } from 'features/controlLayers/util/renderers';
import Konva from 'konva';
import type { IRect } from 'konva/lib/types';
import { clamp } from 'lodash-es';
import { memo, useCallback, useLayoutEffect, useMemo, useState } from 'react';
import { getImageDTO } from 'services/api/endpoints/images';
import { useDevicePixelRatio } from 'use-device-pixel-ratio';
import { v4 as uuidv4 } from 'uuid';

@@ -62,6 +47,7 @@ const useStageRenderer = (
const dispatch = useAppDispatch();
const state = useAppSelector((s) => s.controlLayers.present);
const tool = useStore($tool);
const mouseEventHandlers = useMouseEvents();
const lastCursorPos = useStore($lastCursorPos);
const lastMouseDownPos = useStore($lastMouseDownPos);
const selectedLayerIdColor = useAppSelector(selectSelectedLayerColor);
@@ -70,26 +56,6 @@ const useStageRenderer = (
const layerCount = useMemo(() => state.layers.length, [state.layers]);
const renderers = useMemo(() => (asPreview ? debouncedRenderers : normalRenderers), [asPreview]);
const dpr = useDevicePixelRatio({ round: false });
const shouldInvertBrushSizeScrollDirection = useAppSelector((s) => s.canvas.shouldInvertBrushSizeScrollDirection);
const brushSpacingPx = useMemo(
() => clamp(state.brushSize / BRUSH_SPACING_PCT, MIN_BRUSH_SPACING_PX, MAX_BRUSH_SPACING_PX),
[state.brushSize]
);

useLayoutEffect(() => {
$brushSize.set(state.brushSize);
$brushSpacingPx.set(brushSpacingPx);
$selectedLayerId.set(state.selectedLayerId);
$selectedLayerType.set(selectedLayerType);
$shouldInvertBrushSizeScrollDirection.set(shouldInvertBrushSizeScrollDirection);
}, [
brushSpacingPx,
selectedLayerIdColor,
selectedLayerType,
shouldInvertBrushSizeScrollDirection,
state.brushSize,
state.selectedLayerId,
]);

const onLayerPosChanged = useCallback(
(layerId: string, x: number, y: number) => {
@@ -105,31 +71,6 @@ const useStageRenderer = (
[dispatch]
);

const onRGLayerLineAdded = useCallback(
(arg: AddLineArg) => {
dispatch(rgLayerLineAdded(arg));
},
[dispatch]
);
const onRGLayerPointAddedToLine = useCallback(
(arg: AddPointToLineArg) => {
dispatch(rgLayerPointsAdded(arg));
},
[dispatch]
);
const onRGLayerRectAdded = useCallback(
(arg: AddRectArg) => {
dispatch(rgLayerRectAdded(arg));
},
[dispatch]
);
const onBrushSizeChanged = useCallback(
(size: number) => {
dispatch(brushSizeChanged(size));
},
[dispatch]
);

useLayoutEffect(() => {
log.trace('Initializing stage');
if (!container) {
@@ -147,29 +88,21 @@ const useStageRenderer = (
if (asPreview) {
return;
}
const cleanup = setStageEventHandlers({
stage,
$tool,
$isDrawing,
$lastMouseDownPos,
$lastCursorPos,
$lastAddedPoint,
$brushSize,
$brushSpacingPx,
$selectedLayerId,
$selectedLayerType,
$shouldInvertBrushSizeScrollDirection,
onRGLayerLineAdded,
onRGLayerPointAddedToLine,
onRGLayerRectAdded,
onBrushSizeChanged,
});
stage.on('mousedown', mouseEventHandlers.onMouseDown);
stage.on('mouseup', mouseEventHandlers.onMouseUp);
stage.on('mousemove', mouseEventHandlers.onMouseMove);
stage.on('mouseleave', mouseEventHandlers.onMouseLeave);
stage.on('wheel', mouseEventHandlers.onMouseWheel);

return () => {
log.trace('Removing stage listeners');
cleanup();
log.trace('Cleaning up stage listeners');
stage.off('mousedown', mouseEventHandlers.onMouseDown);
stage.off('mouseup', mouseEventHandlers.onMouseUp);
stage.off('mousemove', mouseEventHandlers.onMouseMove);
stage.off('mouseleave', mouseEventHandlers.onMouseLeave);
stage.off('wheel', mouseEventHandlers.onMouseWheel);
};
}, [asPreview, onBrushSizeChanged, onRGLayerLineAdded, onRGLayerPointAddedToLine, onRGLayerRectAdded, stage]);
}, [stage, asPreview, mouseEventHandlers]);

useLayoutEffect(() => {
log.trace('Updating stage dimensions');
@@ -227,7 +160,7 @@ const useStageRenderer = (

useLayoutEffect(() => {
log.trace('Rendering layers');
renderers.renderLayers(stage, state.layers, state.globalMaskLayerOpacity, tool, getImageDTO, onLayerPosChanged);
renderers.renderLayers(stage, state.layers, state.globalMaskLayerOpacity, tool, onLayerPosChanged);
}, [
stage,
state.layers,

@@ -0,0 +1,233 @@
import { $ctrl, $meta } from '@invoke-ai/ui-library';
import { useStore } from '@nanostores/react';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { calculateNewBrushSize } from 'features/canvas/hooks/useCanvasZoom';
import {
$isDrawing,
$lastCursorPos,
$lastMouseDownPos,
$tool,
brushSizeChanged,
rgLayerLineAdded,
rgLayerPointsAdded,
rgLayerRectAdded,
} from 'features/controlLayers/store/controlLayersSlice';
import type Konva from 'konva';
import type { KonvaEventObject } from 'konva/lib/Node';
import type { Vector2d } from 'konva/lib/types';
import { clamp } from 'lodash-es';
import { useCallback, useMemo, useRef } from 'react';

const getIsFocused = (stage: Konva.Stage) => {
return stage.container().contains(document.activeElement);
};
const getIsMouseDown = (e: KonvaEventObject<MouseEvent>) => e.evt.buttons === 1;

const SNAP_PX = 10;

export const snapPosToStage = (pos: Vector2d, stage: Konva.Stage) => {
const snappedPos = { ...pos };
// Get the normalized threshold for snapping to the edge of the stage
const thresholdX = SNAP_PX / stage.scaleX();
const thresholdY = SNAP_PX / stage.scaleY();
const stageWidth = stage.width() / stage.scaleX();
const stageHeight = stage.height() / stage.scaleY();
// Snap to the edge of the stage if within threshold
if (pos.x - thresholdX < 0) {
snappedPos.x = 0;
} else if (pos.x + thresholdX > stageWidth) {
snappedPos.x = Math.floor(stageWidth);
}
if (pos.y - thresholdY < 0) {
snappedPos.y = 0;
} else if (pos.y + thresholdY > stageHeight) {
snappedPos.y = Math.floor(stageHeight);
}
return snappedPos;
};

export const getScaledFlooredCursorPosition = (stage: Konva.Stage) => {
const pointerPosition = stage.getPointerPosition();
const stageTransform = stage.getAbsoluteTransform().copy();
if (!pointerPosition) {
return;
}
const scaledCursorPosition = stageTransform.invert().point(pointerPosition);
return {
x: Math.floor(scaledCursorPosition.x),
y: Math.floor(scaledCursorPosition.y),
};
};

const syncCursorPos = (stage: Konva.Stage): Vector2d | null => {
const pos = getScaledFlooredCursorPosition(stage);
if (!pos) {
return null;
}
$lastCursorPos.set(pos);
return pos;
};

const BRUSH_SPACING_PCT = 10;
const MIN_BRUSH_SPACING_PX = 5;
const MAX_BRUSH_SPACING_PX = 15;

export const useMouseEvents = () => {
const dispatch = useAppDispatch();
const selectedLayerId = useAppSelector((s) => s.controlLayers.present.selectedLayerId);
const selectedLayerType = useAppSelector((s) => {
const selectedLayer = s.controlLayers.present.layers.find((l) => l.id === s.controlLayers.present.selectedLayerId);
if (!selectedLayer) {
return null;
}
return selectedLayer.type;
});
const tool = useStore($tool);
const lastCursorPosRef = useRef<[number, number] | null>(null);
const shouldInvertBrushSizeScrollDirection = useAppSelector((s) => s.canvas.shouldInvertBrushSizeScrollDirection);
const brushSize = useAppSelector((s) => s.controlLayers.present.brushSize);
const brushSpacingPx = useMemo(
() => clamp(brushSize / BRUSH_SPACING_PCT, MIN_BRUSH_SPACING_PX, MAX_BRUSH_SPACING_PX),
[brushSize]
);

const onMouseDown = useCallback(
(e: KonvaEventObject<MouseEvent>) => {
const stage = e.target.getStage();
if (!stage) {
return;
}
const pos = syncCursorPos(stage);
if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
if (tool === 'brush' || tool === 'eraser') {
dispatch(
rgLayerLineAdded({
layerId: selectedLayerId,
points: [pos.x, pos.y, pos.x, pos.y],
tool,
})
);
$isDrawing.set(true);
$lastMouseDownPos.set(pos);
} else if (tool === 'rect') {
$lastMouseDownPos.set(snapPosToStage(pos, stage));
}
},
[dispatch, selectedLayerId, selectedLayerType, tool]
);

const onMouseUp = useCallback(
(e: KonvaEventObject<MouseEvent>) => {
const stage = e.target.getStage();
if (!stage) {
return;
}
const pos = $lastCursorPos.get();
if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
const lastPos = $lastMouseDownPos.get();
const tool = $tool.get();
if (lastPos && selectedLayerId && tool === 'rect') {
const snappedPos = snapPosToStage(pos, stage);
dispatch(
rgLayerRectAdded({
layerId: selectedLayerId,
rect: {
x: Math.min(snappedPos.x, lastPos.x),
y: Math.min(snappedPos.y, lastPos.y),
width: Math.abs(snappedPos.x - lastPos.x),
height: Math.abs(snappedPos.y - lastPos.y),
},
})
);
}
$isDrawing.set(false);
$lastMouseDownPos.set(null);
},
[dispatch, selectedLayerId, selectedLayerType]
);

const onMouseMove = useCallback(
(e: KonvaEventObject<MouseEvent>) => {
const stage = e.target.getStage();
if (!stage) {
return;
}
const pos = syncCursorPos(stage);
if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
if (getIsFocused(stage) && getIsMouseDown(e) && (tool === 'brush' || tool === 'eraser')) {
if ($isDrawing.get()) {
// Continue the last line
if (lastCursorPosRef.current) {
// Dispatching redux events impacts perf substantially - using brush spacing keeps dispatches to a reasonable number
if (Math.hypot(lastCursorPosRef.current[0] - pos.x, lastCursorPosRef.current[1] - pos.y) < brushSpacingPx) {
return;
}
}
lastCursorPosRef.current = [pos.x, pos.y];
dispatch(rgLayerPointsAdded({ layerId: selectedLayerId, point: lastCursorPosRef.current }));
} else {
// Start a new line
dispatch(rgLayerLineAdded({ layerId: selectedLayerId, points: [pos.x, pos.y, pos.x, pos.y], tool }));
}
$isDrawing.set(true);
}
},
[brushSpacingPx, dispatch, selectedLayerId, selectedLayerType, tool]
);

const onMouseLeave = useCallback(
(e: KonvaEventObject<MouseEvent>) => {
const stage = e.target.getStage();
if (!stage) {
return;
}
const pos = syncCursorPos(stage);
$isDrawing.set(false);
$lastCursorPos.set(null);
$lastMouseDownPos.set(null);
if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
if (getIsFocused(stage) && getIsMouseDown(e) && (tool === 'brush' || tool === 'eraser')) {
dispatch(rgLayerPointsAdded({ layerId: selectedLayerId, point: [pos.x, pos.y] }));
}
},
[selectedLayerId, selectedLayerType, tool, dispatch]
);

const onMouseWheel = useCallback(
(e: KonvaEventObject<WheelEvent>) => {
e.evt.preventDefault();

if (selectedLayerType !== 'regional_guidance_layer' || (tool !== 'brush' && tool !== 'eraser')) {
return;
}
// Check whether ctrl/meta is pressed, so that the brush size can be
// adjusted with ctrl/cmd + scroll up/down

// Invert the delta if the property is set to true
let delta = e.evt.deltaY;
if (shouldInvertBrushSizeScrollDirection) {
delta = -delta;
}

if ($ctrl.get() || $meta.get()) {
dispatch(brushSizeChanged(calculateNewBrushSize(brushSize, delta)));
}
},
[selectedLayerType, tool, shouldInvertBrushSizeScrollDirection, dispatch, brushSize]
);

const handlers = useMemo(
() => ({ onMouseDown, onMouseUp, onMouseMove, onMouseLeave, onMouseWheel }),
[onMouseDown, onMouseUp, onMouseMove, onMouseLeave, onMouseWheel]
);

return handlers;
};

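`snapPosToStage` above divides the screen-space `SNAP_PX` threshold by the stage scale to get a threshold in stage coordinates. A dependency-free restatement of that rule with a worked example:

```ts
type Vec2 = { x: number; y: number };

// Positions within snapPx / scale stage units of an edge are clamped onto it.
const snapToEdges = (pos: Vec2, width: number, height: number, scale: number, snapPx = 10): Vec2 => {
  const threshold = snapPx / scale;
  const snapAxis = (v: number, max: number) => (v - threshold < 0 ? 0 : v + threshold > max ? Math.floor(max) : v);
  return { x: snapAxis(pos.x, width / scale), y: snapAxis(pos.y, height / scale) };
};

// At scale 2 with a 1000px-wide stage container, the stage is 500 units wide
// and the threshold is 5 units, so x=497 snaps to the right edge.
console.log(snapToEdges({ x: 497, y: 250 }, 1000, 1000, 2)); // { x: 500, y: 250 }
```
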
@@ -1,36 +0,0 @@
/**
 * A transparency checker pattern image.
 * This is invokeai/frontend/web/public/assets/images/transparent_bg.png as a dataURL
 */
export const TRANSPARENCY_CHECKER_PATTERN =
'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAIAAAAC64paAAAEsmlUWHRYTUw6Y29tLmFkb2JlLnhtcAAAAAAAPD94cGFja2V0IGJlZ2luPSLvu78iIGlkPSJXNU0wTXBDZWhpSHpyZVN6TlRjemtjOWQiPz4KPHg6eG1wbWV0YSB4bWxuczp4PSJhZG9iZTpuczptZXRhLyIgeDp4bXB0az0iWE1QIENvcmUgNS41LjAiPgogPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4KICA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0iIgogICAgeG1sbnM6ZXhpZj0iaHR0cDovL25zLmFkb2JlLmNvbS9leGlmLzEuMC8iCiAgICB4bWxuczp0aWZmPSJodHRwOi8vbnMuYWRvYmUuY29tL3RpZmYvMS4wLyIKICAgIHhtbG5zOnBob3Rvc2hvcD0iaHR0cDovL25zLmFkb2JlLmNvbS9waG90b3Nob3AvMS4wLyIKICAgIHhtbG5zOnhtcD0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wLyIKICAgIHhtbG5zOnhtcE1NPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvbW0vIgogICAgeG1sbnM6c3RFdnQ9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZUV2ZW50IyIKICAgZXhpZjpQaXhlbFhEaW1lbnNpb249IjIwIgogICBleGlmOlBpeGVsWURpbWVuc2lvbj0iMjAiCiAgIGV4aWY6Q29sb3JTcGFjZT0iMSIKICAgdGlmZjpJbWFnZVdpZHRoPSIyMCIKICAgdGlmZjpJbWFnZUxlbmd0aD0iMjAiCiAgIHRpZmY6UmVzb2x1dGlvblVuaXQ9IjIiCiAgIHRpZmY6WFJlc29sdXRpb249IjMwMC8xIgogICB0aWZmOllSZXNvbHV0aW9uPSIzMDAvMSIKICAgcGhvdG9zaG9wOkNvbG9yTW9kZT0iMyIKICAgcGhvdG9zaG9wOklDQ1Byb2ZpbGU9InNSR0IgSUVDNjE5NjYtMi4xIgogICB4bXA6TW9kaWZ5RGF0ZT0iMjAyNC0wNC0yM1QwODoyMDo0NysxMDowMCIKICAgeG1wOk1ldGFkYXRhRGF0ZT0iMjAyNC0wNC0yM1QwODoyMDo0NysxMDowMCI+CiAgIDx4bXBNTTpIaXN0b3J5PgogICAgPHJkZjpTZXE+CiAgICAgPHJkZjpsaQogICAgICBzdEV2dDphY3Rpb249InByb2R1Y2VkIgogICAgICBzdEV2dDpzb2Z0d2FyZUFnZW50PSJBZmZpbml0eSBQaG90byAxLjEwLjgiCiAgICAgIHN0RXZ0OndoZW49IjIwMjQtMDQtMjNUMDg6MjA6NDcrMTA6MDAiLz4KICAgIDwvcmRmOlNlcT4KICAgPC94bXBNTTpIaXN0b3J5PgogIDwvcmRmOkRlc2NyaXB0aW9uPgogPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4KPD94cGFja2V0IGVuZD0iciI/Pn9pdVgAAAGBaUNDUHNSR0IgSUVDNjE5NjYtMi4xAAAokXWR3yuDURjHP5uJmKghFy6WxpVpqMWNMgm1tGbKr5vt3S+1d3t73y3JrXKrKHHj1wV/AbfKtVJESq53TdywXs9rakv2nJ7zfM73nOfpnOeAPZJRVMPhAzWb18NTAffC4pK7oYiDTjpw4YgqhjYeCgWpaR8P2Kx457Vq1T73rzXHE4YCtkbhMUXT88LTwsG1vGbxrnC7ko7Ghc+F+3W5oPC9pcfKXLQ4VeYvi/VIeALsbcLuVBXHqlhJ66qwvByPmikov/exXuJMZOfnJPaId2MQZooAbmaYZAI/g4zK7MfLEAOyoka+7yd/lpzkKjJrrKOzSoo0efpFLUj1hMSk6AkZGdat/v/tq5EcHipXdwag/sU033qhYQdK26b5eWyapROoe4arbCU/dwQj76JvVzTPIbRuwsV1RYvtweUWdD1pUT36I9WJ25NJeD2DlkVw3ULTcrlnv/ucPkJkQ77qBvYPoE/Ot658AxagZ8FoS/a7AAAACXBIWXMAAC4jAAAuIwF4pT92AAAAL0lEQVQ4jWM8ffo0A25gYmKCR5YJjxxBMKp5ZGhm/P//Px7pM2fO0MrmUc0jQzMAB2EIhZC3pUYAAAAASUVORK5CYII=';

/**
 * The color of a bounding box stroke when its object is selected.
 */
export const BBOX_SELECTED_STROKE = 'rgba(78, 190, 255, 1)';

/**
 * The inner border color for the brush preview.
 */
export const BRUSH_BORDER_INNER_COLOR = 'rgba(0,0,0,1)';

/**
 * The outer border color for the brush preview.
 */
export const BRUSH_BORDER_OUTER_COLOR = 'rgba(255,255,255,0.8)';

/**
 * The target spacing of individual points of brush strokes, as a percentage of the brush size.
 */
export const BRUSH_SPACING_PCT = 10;

/**
 * The minimum brush spacing in pixels.
 */
export const MIN_BRUSH_SPACING_PX = 5;

/**
 * The maximum brush spacing in pixels.
 */
export const MAX_BRUSH_SPACING_PX = 15;
@@ -1,201 +0,0 @@
import { calculateNewBrushSize } from 'features/canvas/hooks/useCanvasZoom';
import {
getIsFocused,
getIsMouseDown,
getScaledFlooredCursorPosition,
snapPosToStage,
} from 'features/controlLayers/konva/util';
import type { AddLineArg, AddPointToLineArg, AddRectArg, Layer, Tool } from 'features/controlLayers/store/types';
import type Konva from 'konva';
import type { Vector2d } from 'konva/lib/types';
import type { WritableAtom } from 'nanostores';

import { TOOL_PREVIEW_LAYER_ID } from './naming';

type SetStageEventHandlersArg = {
stage: Konva.Stage;
$tool: WritableAtom<Tool>;
$isDrawing: WritableAtom<boolean>;
$lastMouseDownPos: WritableAtom<Vector2d | null>;
$lastCursorPos: WritableAtom<Vector2d | null>;
$lastAddedPoint: WritableAtom<Vector2d | null>;
$brushSize: WritableAtom<number>;
$brushSpacingPx: WritableAtom<number>;
$selectedLayerId: WritableAtom<string | null>;
$selectedLayerType: WritableAtom<Layer['type'] | null>;
$shouldInvertBrushSizeScrollDirection: WritableAtom<boolean>;
onRGLayerLineAdded: (arg: AddLineArg) => void;
onRGLayerPointAddedToLine: (arg: AddPointToLineArg) => void;
onRGLayerRectAdded: (arg: AddRectArg) => void;
onBrushSizeChanged: (size: number) => void;
};

const syncCursorPos = (stage: Konva.Stage, $lastCursorPos: WritableAtom<Vector2d | null>) => {
const pos = getScaledFlooredCursorPosition(stage);
if (!pos) {
return null;
}
$lastCursorPos.set(pos);
return pos;
};

export const setStageEventHandlers = ({
stage,
$tool,
$isDrawing,
$lastMouseDownPos,
$lastCursorPos,
$lastAddedPoint,
$brushSize,
$brushSpacingPx,
$selectedLayerId,
$selectedLayerType,
$shouldInvertBrushSizeScrollDirection,
onRGLayerLineAdded,
onRGLayerPointAddedToLine,
onRGLayerRectAdded,
onBrushSizeChanged,
}: SetStageEventHandlersArg): (() => void) => {
stage.on('mouseenter', (e) => {
const stage = e.target.getStage();
if (!stage) {
return;
}
const tool = $tool.get();
stage.findOne<Konva.Layer>(`#${TOOL_PREVIEW_LAYER_ID}`)?.visible(tool === 'brush' || tool === 'eraser');
});

stage.on('mousedown', (e) => {
const stage = e.target.getStage();
if (!stage) {
return;
}
const tool = $tool.get();
const pos = syncCursorPos(stage, $lastCursorPos);
const selectedLayerId = $selectedLayerId.get();
const selectedLayerType = $selectedLayerType.get();
if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
if (tool === 'brush' || tool === 'eraser') {
onRGLayerLineAdded({
layerId: selectedLayerId,
points: [pos.x, pos.y, pos.x, pos.y],
tool,
});
$isDrawing.set(true);
$lastMouseDownPos.set(pos);
} else if (tool === 'rect') {
$lastMouseDownPos.set(snapPosToStage(pos, stage));
}
});

stage.on('mouseup', (e) => {
const stage = e.target.getStage();
if (!stage) {
return;
}
const pos = $lastCursorPos.get();
const selectedLayerId = $selectedLayerId.get();
const selectedLayerType = $selectedLayerType.get();

if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
const lastPos = $lastMouseDownPos.get();
const tool = $tool.get();
if (lastPos && selectedLayerId && tool === 'rect') {
const snappedPos = snapPosToStage(pos, stage);
onRGLayerRectAdded({
layerId: selectedLayerId,
rect: {
x: Math.min(snappedPos.x, lastPos.x),
y: Math.min(snappedPos.y, lastPos.y),
width: Math.abs(snappedPos.x - lastPos.x),
height: Math.abs(snappedPos.y - lastPos.y),
},
});
}
$isDrawing.set(false);
$lastMouseDownPos.set(null);
});

stage.on('mousemove', (e) => {
const stage = e.target.getStage();
if (!stage) {
return;
}
const tool = $tool.get();
const pos = syncCursorPos(stage, $lastCursorPos);
const selectedLayerId = $selectedLayerId.get();
const selectedLayerType = $selectedLayerType.get();

stage.findOne<Konva.Layer>(`#${TOOL_PREVIEW_LAYER_ID}`)?.visible(tool === 'brush' || tool === 'eraser');

if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
if (getIsFocused(stage) && getIsMouseDown(e) && (tool === 'brush' || tool === 'eraser')) {
if ($isDrawing.get()) {
// Continue the last line
const lastAddedPoint = $lastAddedPoint.get();
if (lastAddedPoint) {
// Dispatching redux events impacts perf substantially - using brush spacing keeps dispatches to a reasonable number
if (Math.hypot(lastAddedPoint.x - pos.x, lastAddedPoint.y - pos.y) < $brushSpacingPx.get()) {
return;
}
}
$lastAddedPoint.set({ x: pos.x, y: pos.y });
onRGLayerPointAddedToLine({ layerId: selectedLayerId, point: [pos.x, pos.y] });
} else {
// Start a new line
onRGLayerLineAdded({ layerId: selectedLayerId, points: [pos.x, pos.y, pos.x, pos.y], tool });
}
$isDrawing.set(true);
}
});

stage.on('mouseleave', (e) => {
const stage = e.target.getStage();
if (!stage) {
return;
}
const pos = syncCursorPos(stage, $lastCursorPos);
$isDrawing.set(false);
$lastCursorPos.set(null);
$lastMouseDownPos.set(null);
const selectedLayerId = $selectedLayerId.get();
const selectedLayerType = $selectedLayerType.get();
const tool = $tool.get();

stage.findOne<Konva.Layer>(`#${TOOL_PREVIEW_LAYER_ID}`)?.visible(false);

if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
if (getIsFocused(stage) && getIsMouseDown(e) && (tool === 'brush' || tool === 'eraser')) {
onRGLayerPointAddedToLine({ layerId: selectedLayerId, point: [pos.x, pos.y] });
}
});

stage.on('wheel', (e) => {
e.evt.preventDefault();
const selectedLayerType = $selectedLayerType.get();
const tool = $tool.get();
if (selectedLayerType !== 'regional_guidance_layer' || (tool !== 'brush' && tool !== 'eraser')) {
return;
}

// Invert the delta if the property is set to true
let delta = e.evt.deltaY;
if ($shouldInvertBrushSizeScrollDirection.get()) {
delta = -delta;
}

if (e.evt.ctrlKey || e.evt.metaKey) {
onBrushSizeChanged(calculateNewBrushSize($brushSize.get(), delta));
}
});

return () => stage.off('mousedown mouseup mousemove mouseenter mouseleave wheel');
};
@@ -1,21 +0,0 @@
/**
 * Konva filters
 * https://konvajs.org/docs/filters/Custom_Filter.html
 */

/**
 * Calculates the lightness (HSL) of a given pixel and sets the alpha channel to that value.
 * This is useful for edge maps and other masks, to make the black areas transparent.
 * @param imageData The image data to apply the filter to
 */
export const LightnessToAlphaFilter = (imageData: ImageData): void => {
const len = imageData.data.length / 4;
for (let i = 0; i < len; i++) {
const r = imageData.data[i * 4 + 0] as number;
const g = imageData.data[i * 4 + 1] as number;
const b = imageData.data[i * 4 + 2] as number;
const cMin = Math.min(r, g, b);
const cMax = Math.max(r, g, b);
imageData.data[i * 4 + 3] = (cMin + cMax) / 2;
}
};
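For context on the deleted filter: Konva custom filters receive the raw `ImageData` of a cached node, so wiring one up looks like the sketch below (per Konva's documented custom-filter API; the node must be cached before filters apply):

```ts
import Konva from 'konva';

// Same math as the deleted filter: HSL lightness in 0-255 is the midpoint of
// the min and max channel, written into the alpha channel.
const LightnessToAlphaFilter = (imageData: ImageData): void => {
  for (let i = 0; i < imageData.data.length; i += 4) {
    const r = imageData.data[i] as number;
    const g = imageData.data[i + 1] as number;
    const b = imageData.data[i + 2] as number;
    imageData.data[i + 3] = (Math.min(r, g, b) + Math.max(r, g, b)) / 2;
  }
};

const applyFilter = (image: Konva.Image) => {
  image.cache(); // filters only run against a node's cached backing canvas
  image.filters([LightnessToAlphaFilter]);
};
```
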
@@ -1,38 +0,0 @@
/**
 * This file contains IDs, names, and ID getters for konva layers and objects.
 */

// IDs for singleton Konva layers and objects
export const TOOL_PREVIEW_LAYER_ID = 'tool_preview_layer';
export const TOOL_PREVIEW_BRUSH_GROUP_ID = 'tool_preview_layer.brush_group';
export const TOOL_PREVIEW_BRUSH_FILL_ID = 'tool_preview_layer.brush_fill';
export const TOOL_PREVIEW_BRUSH_BORDER_INNER_ID = 'tool_preview_layer.brush_border_inner';
export const TOOL_PREVIEW_BRUSH_BORDER_OUTER_ID = 'tool_preview_layer.brush_border_outer';
export const TOOL_PREVIEW_RECT_ID = 'tool_preview_layer.rect';
export const BACKGROUND_LAYER_ID = 'background_layer';
export const BACKGROUND_RECT_ID = 'background_layer.rect';
export const NO_LAYERS_MESSAGE_LAYER_ID = 'no_layers_message';

// Names for Konva layers and objects (comparable to CSS classes)
export const CA_LAYER_NAME = 'control_adapter_layer';
export const CA_LAYER_IMAGE_NAME = 'control_adapter_layer.image';
export const RG_LAYER_NAME = 'regional_guidance_layer';
export const RG_LAYER_LINE_NAME = 'regional_guidance_layer.line';
export const RG_LAYER_OBJECT_GROUP_NAME = 'regional_guidance_layer.object_group';
export const RG_LAYER_RECT_NAME = 'regional_guidance_layer.rect';
export const INITIAL_IMAGE_LAYER_ID = 'singleton_initial_image_layer';
export const INITIAL_IMAGE_LAYER_NAME = 'initial_image_layer';
export const INITIAL_IMAGE_LAYER_IMAGE_NAME = 'initial_image_layer.image';
export const LAYER_BBOX_NAME = 'layer.bbox';
export const COMPOSITING_RECT_NAME = 'compositing-rect';

// Getters for non-singleton layer and object IDs
export const getRGLayerId = (layerId: string) => `${RG_LAYER_NAME}_${layerId}`;
export const getRGLayerLineId = (layerId: string, lineId: string) => `${layerId}.line_${lineId}`;
export const getRGLayerRectId = (layerId: string, lineId: string) => `${layerId}.rect_${lineId}`;
export const getRGLayerObjectGroupId = (layerId: string, groupId: string) => `${layerId}.objectGroup_${groupId}`;
export const getLayerBboxId = (layerId: string) => `${layerId}.bbox`;
export const getCALayerId = (layerId: string) => `control_adapter_layer_${layerId}`;
export const getCALayerImageId = (layerId: string, imageName: string) => `${layerId}.image_${imageName}`;
export const getIILayerImageId = (layerId: string, imageName: string) => `${layerId}.image_${imageName}`;
export const getIPALayerId = (layerId: string) => `ip_adapter_layer_${layerId}`;
@@ -1,67 +0,0 @@
import type Konva from 'konva';
import type { KonvaEventObject } from 'konva/lib/Node';
import type { Vector2d } from 'konva/lib/types';

//#region getScaledFlooredCursorPosition
/**
 * Gets the scaled and floored cursor position on the stage. If the cursor is not currently over the stage, returns null.
 * @param stage The konva stage
 */
export const getScaledFlooredCursorPosition = (stage: Konva.Stage): Vector2d | null => {
const pointerPosition = stage.getPointerPosition();
const stageTransform = stage.getAbsoluteTransform().copy();
if (!pointerPosition) {
return null;
}
const scaledCursorPosition = stageTransform.invert().point(pointerPosition);
return {
x: Math.floor(scaledCursorPosition.x),
y: Math.floor(scaledCursorPosition.y),
};
};
//#endregion

//#region snapPosToStage
/**
 * Snaps a position to the edge of the stage if within a threshold of the edge
 * @param pos The position to snap
 * @param stage The konva stage
 * @param snapPx The snap threshold in pixels
 */
export const snapPosToStage = (pos: Vector2d, stage: Konva.Stage, snapPx = 10): Vector2d => {
const snappedPos = { ...pos };
// Get the normalized threshold for snapping to the edge of the stage
const thresholdX = snapPx / stage.scaleX();
const thresholdY = snapPx / stage.scaleY();
const stageWidth = stage.width() / stage.scaleX();
const stageHeight = stage.height() / stage.scaleY();
// Snap to the edge of the stage if within threshold
if (pos.x - thresholdX < 0) {
snappedPos.x = 0;
} else if (pos.x + thresholdX > stageWidth) {
snappedPos.x = Math.floor(stageWidth);
}
if (pos.y - thresholdY < 0) {
snappedPos.y = 0;
} else if (pos.y + thresholdY > stageHeight) {
snappedPos.y = Math.floor(stageHeight);
}
return snappedPos;
};
//#endregion

//#region getIsMouseDown
/**
 * Checks if the left mouse button is currently pressed
 * @param e The konva event
 */
export const getIsMouseDown = (e: KonvaEventObject<MouseEvent>): boolean => e.evt.buttons === 1;
//#endregion

//#region getIsFocused
/**
 * Checks if the stage is currently focused
 * @param stage The konva stage
 */
export const getIsFocused = (stage: Konva.Stage): boolean => stage.container().contains(document.activeElement);
//#endregion
@@ -4,14 +4,6 @@ import type { PersistConfig, RootState } from 'app/store/store';
import { moveBackward, moveForward, moveToBack, moveToFront } from 'common/util/arrayUtils';
import { deepClone } from 'common/util/deepClone';
import { roundDownToMultiple } from 'common/util/roundDownToMultiple';
import {
  getCALayerId,
  getIPALayerId,
  getRGLayerId,
  getRGLayerLineId,
  getRGLayerRectId,
  INITIAL_IMAGE_LAYER_ID,
} from 'features/controlLayers/konva/naming';
import type {
  CLIPVisionModelV2,
  ControlModeV2,
@@ -44,9 +36,6 @@ import { assert } from 'tsafe';
import { v4 as uuidv4 } from 'uuid';

import type {
  AddLineArg,
  AddPointToLineArg,
  AddRectArg,
  ControlAdapterLayer,
  ControlLayersState,
  DrawingTool,
@@ -503,11 +492,11 @@ export const controlLayersSlice = createSlice({
      layer.bboxNeedsUpdate = true;
      layer.uploadedMaskImage = null;
    },
    prepare: (payload: AddLineArg) => ({
    prepare: (payload: { layerId: string; points: [number, number, number, number]; tool: DrawingTool }) => ({
      payload: { ...payload, lineUuid: uuidv4() },
    }),
  },
  rgLayerPointsAdded: (state, action: PayloadAction<AddPointToLineArg>) => {
  rgLayerPointsAdded: (state, action: PayloadAction<{ layerId: string; point: [number, number] }>) => {
    const { layerId, point } = action.payload;
    const layer = selectRGLayerOrThrow(state, layerId);
    const lastLine = layer.maskObjects.findLast(isLine);
@@ -540,7 +529,7 @@ export const controlLayersSlice = createSlice({
      layer.bboxNeedsUpdate = true;
      layer.uploadedMaskImage = null;
    },
    prepare: (payload: AddRectArg) => ({ payload: { ...payload, rectUuid: uuidv4() } }),
    prepare: (payload: { layerId: string; rect: IRect }) => ({ payload: { ...payload, rectUuid: uuidv4() } }),
  },
  rgLayerMaskImageUploaded: (state, action: PayloadAction<{ layerId: string; imageDTO: ImageDTO }>) => {
    const { layerId, imageDTO } = action.payload;
@@ -894,21 +883,45 @@ const migrateControlLayersState = (state: any): any => {
  return state;
};

// Ephemeral interaction state
export const $isDrawing = atom(false);
export const $lastMouseDownPos = atom<Vector2d | null>(null);
export const $tool = atom<Tool>('brush');
export const $lastCursorPos = atom<Vector2d | null>(null);
export const $isPreviewVisible = atom(true);
export const $lastAddedPoint = atom<Vector2d | null>(null);

// Some nanostores that are manually synced to redux state to provide imperative access
// TODO(psyche): This is a hack, figure out another way to handle this...
export const $brushSize = atom<number>(0);
export const $brushSpacingPx = atom<number>(0);
export const $selectedLayerId = atom<string | null>(null);
export const $selectedLayerType = atom<Layer['type'] | null>(null);
export const $shouldInvertBrushSizeScrollDirection = atom(false);
// IDs for singleton Konva layers and objects
export const TOOL_PREVIEW_LAYER_ID = 'tool_preview_layer';
export const TOOL_PREVIEW_BRUSH_GROUP_ID = 'tool_preview_layer.brush_group';
export const TOOL_PREVIEW_BRUSH_FILL_ID = 'tool_preview_layer.brush_fill';
export const TOOL_PREVIEW_BRUSH_BORDER_INNER_ID = 'tool_preview_layer.brush_border_inner';
export const TOOL_PREVIEW_BRUSH_BORDER_OUTER_ID = 'tool_preview_layer.brush_border_outer';
export const TOOL_PREVIEW_RECT_ID = 'tool_preview_layer.rect';
export const BACKGROUND_LAYER_ID = 'background_layer';
export const BACKGROUND_RECT_ID = 'background_layer.rect';
export const NO_LAYERS_MESSAGE_LAYER_ID = 'no_layers_message';

// Names (aka classes) for Konva layers and objects
export const CA_LAYER_NAME = 'control_adapter_layer';
export const CA_LAYER_IMAGE_NAME = 'control_adapter_layer.image';
export const RG_LAYER_NAME = 'regional_guidance_layer';
export const RG_LAYER_LINE_NAME = 'regional_guidance_layer.line';
export const RG_LAYER_OBJECT_GROUP_NAME = 'regional_guidance_layer.object_group';
export const RG_LAYER_RECT_NAME = 'regional_guidance_layer.rect';
export const INITIAL_IMAGE_LAYER_ID = 'singleton_initial_image_layer';
export const INITIAL_IMAGE_LAYER_NAME = 'initial_image_layer';
export const INITIAL_IMAGE_LAYER_IMAGE_NAME = 'initial_image_layer.image';
export const LAYER_BBOX_NAME = 'layer.bbox';
export const COMPOSITING_RECT_NAME = 'compositing-rect';

// Getters for non-singleton layer and object IDs
export const getRGLayerId = (layerId: string) => `${RG_LAYER_NAME}_${layerId}`;
const getRGLayerLineId = (layerId: string, lineId: string) => `${layerId}.line_${lineId}`;
const getRGLayerRectId = (layerId: string, lineId: string) => `${layerId}.rect_${lineId}`;
export const getRGLayerObjectGroupId = (layerId: string, groupId: string) => `${layerId}.objectGroup_${groupId}`;
export const getLayerBboxId = (layerId: string) => `${layerId}.bbox`;
export const getCALayerId = (layerId: string) => `control_adapter_layer_${layerId}`;
export const getCALayerImageId = (layerId: string, imageName: string) => `${layerId}.image_${imageName}`;
export const getIILayerImageId = (layerId: string, imageName: string) => `${layerId}.image_${imageName}`;
export const getIPALayerId = (layerId: string) => `ip_adapter_layer_${layerId}`;

export const controlLayersPersistConfig: PersistConfig<ControlLayersState> = {
  name: controlLayersSlice.name,

@@ -17,7 +17,6 @@ import {
  zParameterPositivePrompt,
  zParameterStrength,
} from 'features/parameters/types/parameterSchemas';
import type { IRect } from 'konva/lib/types';
import { z } from 'zod';

const zTool = z.enum(['brush', 'eraser', 'move', 'rect']);
@@ -130,7 +129,3 @@ export type ControlLayersState = {
    aspectRatio: AspectRatioState;
  };
};

export type AddLineArg = { layerId: string; points: [number, number, number, number]; tool: DrawingTool };
export type AddPointToLineArg = { layerId: string; point: [number, number] };
export type AddRectArg = { layerId: string; rect: IRect };

@@ -1,10 +1,11 @@
import openBase64ImageInTab from 'common/util/openBase64ImageInTab';
import { imageDataToDataURL } from 'features/canvas/util/blobToDataURL';
import { RG_LAYER_OBJECT_GROUP_NAME } from 'features/controlLayers/store/controlLayersSlice';
import Konva from 'konva';
import type { IRect } from 'konva/lib/types';
import { assert } from 'tsafe';

import { RG_LAYER_OBJECT_GROUP_NAME } from './naming';
const GET_CLIENT_RECT_CONFIG = { skipTransform: true };

type Extents = {
  minX: number;
@@ -13,13 +14,10 @@ type Extents = {
  maxY: number;
};

const GET_CLIENT_RECT_CONFIG = { skipTransform: true };

//#region getImageDataBbox
/**
 * Get the bounding box of an image.
 * @param imageData The ImageData object to get the bounding box of.
 * @returns The minimum and maximum x and y values of the image's bounding box, or null if the image has no pixels.
 * @returns The minimum and maximum x and y values of the image's bounding box.
 */
const getImageDataBbox = (imageData: ImageData): Extents | null => {
  const { data, width, height } = imageData;
@@ -53,9 +51,7 @@ const getImageDataBbox = (imageData: ImageData): Extents | null => {

  return isEmpty ? null : { minX, minY, maxX, maxY };
};
//#endregion
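The hunk above elides the scan itself; the technique is a single pass over the RGBA buffer, tracking the min/max coordinates of pixels with non-zero alpha. A sketch of that inner loop under the same `Extents` shape (the variable setup is reconstructed for illustration, not the verbatim source):

```ts
// Reconstructed sketch of an alpha-channel bbox scan over ImageData.
let minX = width;
let minY = height;
let maxX = 0;
let maxY = 0;
let isEmpty = true;
for (let y = 0; y < height; y++) {
  for (let x = 0; x < width; x++) {
    // Alpha is the 4th byte of each RGBA pixel.
    if ((data[(y * width + x) * 4 + 3] ?? 0) > 0) {
      isEmpty = false;
      minX = Math.min(minX, x);
      minY = Math.min(minY, y);
      maxX = Math.max(maxX, x);
      maxY = Math.max(maxY, y);
    }
  }
}
```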

//#region getIsolatedRGLayerClone
/**
 * Clones a regional guidance konva layer onto an offscreen stage/canvas. This allows the pixel data for a given layer
 * to be captured, manipulated or analyzed without interference from other layers.
@@ -92,9 +88,7 @@ const getIsolatedRGLayerClone = (layer: Konva.Layer): { stageClone: Konva.Stage;

  return { stageClone, layerClone };
};
//#endregion

//#region getLayerBboxPixels
/**
 * Get the bounding box of a regional prompt konva layer. This function has special handling for regional prompt layers.
 * @param layer The konva layer to get the bounding box of.
@@ -143,9 +137,7 @@ export const getLayerBboxPixels = (layer: Konva.Layer, preview: boolean = false)

  return correctedLayerBbox;
};
//#endregion

//#region getLayerBboxFast
/**
 * Get the bounding box of a konva layer. This function is faster than `getLayerBboxPixels` but less accurate. It
 * should only be used when there are no eraser strokes or shapes in the layer.
@@ -161,4 +153,3 @@ export const getLayerBboxFast = (layer: Konva.Layer): IRect => {
    height: Math.floor(bbox.height),
  };
};
//#endregion
@@ -0,0 +1,66 @@
import { getStore } from 'app/store/nanostores/store';
import openBase64ImageInTab from 'common/util/openBase64ImageInTab';
import { blobToDataURL } from 'features/canvas/util/blobToDataURL';
import { isRegionalGuidanceLayer, RG_LAYER_NAME } from 'features/controlLayers/store/controlLayersSlice';
import { renderers } from 'features/controlLayers/util/renderers';
import Konva from 'konva';
import { assert } from 'tsafe';

/**
 * Get the blobs of all regional prompt layers. Only visible layers are returned.
 * @param layerIds The IDs of the layers to get blobs for. If not provided, all regional prompt layers are used.
 * @param preview Whether to open a new tab displaying each layer.
 * @returns A map of layer IDs to blobs.
 */
export const getRegionalPromptLayerBlobs = async (
  layerIds?: string[],
  preview: boolean = false
): Promise<Record<string, Blob>> => {
  const state = getStore().getState();
  const { layers } = state.controlLayers.present;
  const { width, height } = state.controlLayers.present.size;
  const reduxLayers = layers.filter(isRegionalGuidanceLayer);
  const container = document.createElement('div');
  const stage = new Konva.Stage({ container, width, height });
  renderers.renderLayers(stage, reduxLayers, 1, 'brush');

  const konvaLayers = stage.find<Konva.Layer>(`.${RG_LAYER_NAME}`);
  const blobs: Record<string, Blob> = {};

  // First remove all layers
  for (const layer of konvaLayers) {
    layer.remove();
  }

  // Next render each layer to a blob
  for (const layer of konvaLayers) {
    if (layerIds && !layerIds.includes(layer.id())) {
      continue;
    }
    const reduxLayer = reduxLayers.find((l) => l.id === layer.id());
    assert(reduxLayer, `Redux layer ${layer.id()} not found`);
    stage.add(layer);
    const blob = await new Promise<Blob>((resolve) => {
      stage.toBlob({
        callback: (blob) => {
          assert(blob, 'Blob is null');
          resolve(blob);
        },
      });
    });

    if (preview) {
      const base64 = await blobToDataURL(blob);
      openBase64ImageInTab([
        {
          base64,
          caption: `${reduxLayer.id}: ${reduxLayer.positivePrompt} / ${reduxLayer.negativePrompt}`,
        },
      ]);
    }
    layer.remove();
    blobs[layer.id()] = blob;
  }

  return blobs;
};
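Because every konva layer is removed up front and then re-added one at a time before `stage.toBlob`, each blob contains exactly one layer's pixels at the full canvas size. A hypothetical call site:

```ts
// Hypothetical usage - render all regional guidance layers to blobs.
const blobs = await getRegionalPromptLayerBlobs();
for (const [layerId, blob] of Object.entries(blobs)) {
  console.log(`${layerId}: ${blob.size} bytes`);
}
```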
@@ -1,7 +1,8 @@
import { getStore } from 'app/store/nanostores/store';
import { rgbaColorToString, rgbColorToString } from 'features/canvas/util/colorToString';
import { getLayerBboxFast, getLayerBboxPixels } from 'features/controlLayers/konva/bbox';
import { LightnessToAlphaFilter } from 'features/controlLayers/konva/filters';
import { getScaledFlooredCursorPosition, snapPosToStage } from 'features/controlLayers/hooks/mouseEventHooks';
import {
  $tool,
  BACKGROUND_LAYER_ID,
  BACKGROUND_RECT_ID,
  CA_LAYER_IMAGE_NAME,
@@ -13,6 +14,10 @@ import {
  getRGLayerObjectGroupId,
  INITIAL_IMAGE_LAYER_IMAGE_NAME,
  INITIAL_IMAGE_LAYER_NAME,
  isControlAdapterLayer,
  isInitialImageLayer,
  isRegionalGuidanceLayer,
  isRenderableLayer,
  LAYER_BBOX_NAME,
  NO_LAYERS_MESSAGE_LAYER_ID,
  RG_LAYER_LINE_NAME,
@@ -25,13 +30,6 @@ import {
  TOOL_PREVIEW_BRUSH_GROUP_ID,
  TOOL_PREVIEW_LAYER_ID,
  TOOL_PREVIEW_RECT_ID,
} from 'features/controlLayers/konva/naming';
import { getScaledFlooredCursorPosition, snapPosToStage } from 'features/controlLayers/konva/util';
import {
  isControlAdapterLayer,
  isInitialImageLayer,
  isRegionalGuidanceLayer,
  isRenderableLayer,
} from 'features/controlLayers/store/controlLayersSlice';
import type {
  ControlAdapterLayer,
@@ -42,46 +40,61 @@ import type {
  VectorMaskLine,
  VectorMaskRect,
} from 'features/controlLayers/store/types';
import { getLayerBboxFast, getLayerBboxPixels } from 'features/controlLayers/util/bbox';
import { t } from 'i18next';
import Konva from 'konva';
import type { IRect, Vector2d } from 'konva/lib/types';
import { debounce } from 'lodash-es';
import type { RgbColor } from 'react-colorful';
import type { ImageDTO } from 'services/api/types';
import { imagesApi } from 'services/api/endpoints/images';
import { assert } from 'tsafe';
import { v4 as uuidv4 } from 'uuid';

import {
  BBOX_SELECTED_STROKE,
  BRUSH_BORDER_INNER_COLOR,
  BRUSH_BORDER_OUTER_COLOR,
  TRANSPARENCY_CHECKER_PATTERN,
} from './constants';
const BBOX_SELECTED_STROKE = 'rgba(78, 190, 255, 1)';
const BRUSH_BORDER_INNER_COLOR = 'rgba(0,0,0,1)';
const BRUSH_BORDER_OUTER_COLOR = 'rgba(255,255,255,0.8)';
// This is invokeai/frontend/web/public/assets/images/transparent_bg.png as a dataURL
export const STAGE_BG_DATAURL =
'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAIAAAAC64paAAAEsmlUWHRYTUw6Y29tLmFkb2JlLnhtcAAAAAAAPD94cGFja2V0IGJlZ2luPSLvu78iIGlkPSJXNU0wTXBDZWhpSHpyZVN6TlRjemtjOWQiPz4KPHg6eG1wbWV0YSB4bWxuczp4PSJhZG9iZTpuczptZXRhLyIgeDp4bXB0az0iWE1QIENvcmUgNS41LjAiPgogPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4KICA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0iIgogICAgeG1sbnM6ZXhpZj0iaHR0cDovL25zLmFkb2JlLmNvbS9leGlmLzEuMC8iCiAgICB4bWxuczp0aWZmPSJodHRwOi8vbnMuYWRvYmUuY29tL3RpZmYvMS4wLyIKICAgIHhtbG5zOnBob3Rvc2hvcD0iaHR0cDovL25zLmFkb2JlLmNvbS9waG90b3Nob3AvMS4wLyIKICAgIHhtbG5zOnhtcD0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wLyIKICAgIHhtbG5zOnhtcE1NPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvbW0vIgogICAgeG1sbnM6c3RFdnQ9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZUV2ZW50IyIKICAgZXhpZjpQaXhlbFhEaW1lbnNpb249IjIwIgogICBleGlmOlBpeGVsWURpbWVuc2lvbj0iMjAiCiAgIGV4aWY6Q29sb3JTcGFjZT0iMSIKICAgdGlmZjpJbWFnZVdpZHRoPSIyMCIKICAgdGlmZjpJbWFnZUxlbmd0aD0iMjAiCiAgIHRpZmY6UmVzb2x1dGlvblVuaXQ9IjIiCiAgIHRpZmY6WFJlc29sdXRpb249IjMwMC8xIgogICB0aWZmOllSZXNvbHV0aW9uPSIzMDAvMSIKICAgcGhvdG9zaG9wOkNvbG9yTW9kZT0iMyIKICAgcGhvdG9zaG9wOklDQ1Byb2ZpbGU9InNSR0IgSUVDNjE5NjYtMi4xIgogICB4bXA6TW9kaWZ5RGF0ZT0iMjAyNC0wNC0yM1QwODoyMDo0NysxMDowMCIKICAgeG1wOk1ldGFkYXRhRGF0ZT0iMjAyNC0wNC0yM1QwODoyMDo0NysxMDowMCI+CiAgIDx4bXBNTTpIaXN0b3J5PgogICAgPHJkZjpTZXE+CiAgICAgPHJkZjpsaQogICAgICBzdEV2dDphY3Rpb249InByb2R1Y2VkIgogICAgICBzdEV2dDpzb2Z0d2FyZUFnZW50PSJBZmZpbml0eSBQaG90byAxLjEwLjgiCiAgICAgIHN0RXZ0OndoZW49IjIwMjQtMDQtMjNUMDg6MjA6NDcrMTA6MDAiLz4KICAgIDwvcmRmOlNlcT4KICAgPC94bXBNTTpIaXN0b3J5PgogIDwvcmRmOkRlc2NyaXB0aW9uPgogPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4KPD94cGFja2V0IGVuZD0iciI/Pn9pdVgAAAGBaUNDUHNSR0IgSUVDNjE5NjYtMi4xAAAokXWR3yuDURjHP5uJmKghFy6WxpVpqMWNMgm1tGbKr5vt3S+1d3t73y3JrXKrKHHj1wV/AbfKtVJESq53TdywXs9rakv2nJ7zfM73nOfpnOeAPZJRVMPhAzWb18NTAffC4pK7oYiDTjpw4YgqhjYeCgWpaR8P2Kx457Vq1T73rzXHE4YCtkbhMUXT88LTwsG1vGbxrnC7ko7Ghc+F+3W5oPC9pcfKXLQ4VeYvi/VIeALsbcLuVBXHqlhJ66qwvByPmikov/exXuJMZOfnJPaId2MQZooAbmaYZAI/g4zK7MfLEAOyoka+7yd/lpzkKjJrrKOzSoo0efpFLUj1hMSk6AkZGdat/v/tq5EcHipXdwag/sU033qhYQdK26b5eWyapROoe4arbCU/dwQj76JvVzTPIbRuwsV1RYvtweUWdD1pUT36I9WJ25NJeD2DlkVw3ULTcrlnv/ucPkJkQ77qBvYPoE/Ot658AxagZ8FoS/a7AAAACXBIWXMAAC4jAAAuIwF4pT92AAAAL0lEQVQ4jWM8ffo0A25gYmKCR5YJjxxBMKp5ZGhm/P//Px7pM2fO0MrmUc0jQzMAB2EIhZC3pUYAAAAASUVORK5CYII=';

const mapId = (object: { id: string }): string => object.id;
const mapId = (object: { id: string }) => object.id;

/**
 * Konva selection callback to select all renderable layers. This includes RG, CA and II layers.
 */
const selectRenderableLayers = (n: Konva.Node): boolean =>
const selectRenderableLayers = (n: Konva.Node) =>
  n.name() === RG_LAYER_NAME || n.name() === CA_LAYER_NAME || n.name() === INITIAL_IMAGE_LAYER_NAME;

/**
 * Konva selection callback to select RG mask objects. This includes lines and rects.
 */
const selectVectorMaskObjects = (node: Konva.Node): boolean => {
const selectVectorMaskObjects = (node: Konva.Node) => {
  return node.name() === RG_LAYER_LINE_NAME || node.name() === RG_LAYER_RECT_NAME;
};

/**
 * Creates the singleton tool preview layer and all its objects.
 * @param stage The konva stage
 * Creates the brush preview layer.
 * @param stage The konva stage to render on.
 * @returns The brush preview layer.
 */
const createToolPreviewLayer = (stage: Konva.Stage): Konva.Layer => {
const createToolPreviewLayer = (stage: Konva.Stage) => {
  // Initialize the brush preview layer & add to the stage
  const toolPreviewLayer = new Konva.Layer({ id: TOOL_PREVIEW_LAYER_ID, visible: false, listening: false });
  stage.add(toolPreviewLayer);

  // Add handlers to show/hide the brush preview layer
  stage.on('mousemove', (e) => {
    const tool = $tool.get();
    e.target
      .getStage()
      ?.findOne<Konva.Layer>(`#${TOOL_PREVIEW_LAYER_ID}`)
      ?.visible(tool === 'brush' || tool === 'eraser');
  });
  stage.on('mouseleave', (e) => {
    e.target.getStage()?.findOne<Konva.Layer>(`#${TOOL_PREVIEW_LAYER_ID}`)?.visible(false);
  });
  stage.on('mouseenter', (e) => {
    const tool = $tool.get();
    e.target
      .getStage()
      ?.findOne<Konva.Layer>(`#${TOOL_PREVIEW_LAYER_ID}`)
      ?.visible(tool === 'brush' || tool === 'eraser');
  });

  // Create the brush preview group & circles
  const brushPreviewGroup = new Konva.Group({ id: TOOL_PREVIEW_BRUSH_GROUP_ID });
  const brushPreviewFill = new Konva.Circle({
@@ -108,7 +121,7 @@ const createToolPreviewLayer = (stage: Konva.Stage): Konva.Layer => {
  brushPreviewGroup.add(brushPreviewBorderOuter);
  toolPreviewLayer.add(brushPreviewGroup);

  // Create the rect preview - this is a rectangle drawn from the last mouse down position to the current cursor position
  // Create the rect preview
  const rectPreview = new Konva.Rect({ id: TOOL_PREVIEW_RECT_ID, listening: false, stroke: 'white', strokeWidth: 1 });
  toolPreviewLayer.add(rectPreview);

@@ -117,14 +130,12 @@ const createToolPreviewLayer = (stage: Konva.Stage): Konva.Layer => {

/**
 * Renders the brush preview for the selected tool.
 * @param stage The konva stage
 * @param tool The selected tool
 * @param color The selected layer's color
 * @param selectedLayerType The selected layer's type
 * @param globalMaskLayerOpacity The global mask layer opacity
 * @param cursorPos The cursor position
 * @param lastMouseDownPos The position of the last mouse down event - used for the rect tool
 * @param brushSize The brush size
 * @param stage The konva stage to render on.
 * @param tool The selected tool.
 * @param color The selected layer's color.
 * @param cursorPos The cursor position.
 * @param lastMouseDownPos The position of the last mouse down event - used for the rect tool.
 * @param brushSize The brush size.
 */
const renderToolPreview = (
  stage: Konva.Stage,
@@ -135,7 +146,7 @@ const renderToolPreview = (
  cursorPos: Vector2d | null,
  lastMouseDownPos: Vector2d | null,
  brushSize: number
): void => {
) => {
  const layerCount = stage.find(selectRenderableLayers).length;
  // Update the stage's pointer style
  if (layerCount === 0) {
@@ -151,7 +162,7 @@ const renderToolPreview = (
    // Move rect gets a crosshair
    stage.container().style.cursor = 'crosshair';
  } else {
    // Else we hide the native cursor and use the konva-rendered brush preview
    // Else we use the brush preview
    stage.container().style.cursor = 'none';
  }

@@ -216,29 +227,28 @@ const renderToolPreview = (
};

/**
 * Creates a regional guidance layer.
 * @param stage The konva stage
 * @param layerState The regional guidance layer state
 * @param onLayerPosChanged Callback for when the layer's position changes
 * Creates a vector mask layer.
 * @param stage The konva stage to attach the layer to.
 * @param reduxLayer The redux layer to create the konva layer from.
 * @param onLayerPosChanged Callback for when the layer's position changes.
 */
const createRGLayer = (
const createRegionalGuidanceLayer = (
  stage: Konva.Stage,
  layerState: RegionalGuidanceLayer,
  reduxLayer: RegionalGuidanceLayer,
  onLayerPosChanged?: (layerId: string, x: number, y: number) => void
): Konva.Layer => {
) => {
  // This layer hasn't been added to the konva state yet
  const konvaLayer = new Konva.Layer({
    id: layerState.id,
    id: reduxLayer.id,
    name: RG_LAYER_NAME,
    draggable: true,
    dragDistance: 0,
  });

  // When a drag on the layer finishes, update the layer's position in state. During the drag, konva handles changing
  // the position - we do not need to call this on the `dragmove` event.
  // Create a `dragmove` listener for this layer
  if (onLayerPosChanged) {
    konvaLayer.on('dragend', function (e) {
      onLayerPosChanged(layerState.id, Math.floor(e.target.x()), Math.floor(e.target.y()));
      onLayerPosChanged(reduxLayer.id, Math.floor(e.target.x()), Math.floor(e.target.y()));
    });
  }

@@ -248,7 +258,7 @@ const createRGLayer = (
    if (!cursorPos) {
      return this.getAbsolutePosition();
    }
    // Prevent the user from dragging the layer out of the stage bounds by constraining the cursor position to the stage bounds
    // Prevent the user from dragging the layer out of the stage bounds.
    if (
      cursorPos.x < 0 ||
      cursorPos.x > stage.width() / stage.scaleX() ||
@@ -262,7 +272,7 @@ const createRGLayer = (

  // The object group holds all of the layer's objects (e.g. lines and rects)
  const konvaObjectGroup = new Konva.Group({
    id: getRGLayerObjectGroupId(layerState.id, uuidv4()),
    id: getRGLayerObjectGroupId(reduxLayer.id, uuidv4()),
    name: RG_LAYER_OBJECT_GROUP_NAME,
    listening: false,
  });
@@ -274,51 +284,47 @@ const createRGLayer = (
};

/**
 * Creates a konva line from a vector mask line.
 * @param vectorMaskLine The vector mask line state
 * @param layerObjectGroup The konva layer's object group to add the line to
 * Creates a konva line from a redux vector mask line.
 * @param reduxObject The redux object to create the konva line from.
 * @param konvaGroup The konva group to add the line to.
 */
const createVectorMaskLine = (vectorMaskLine: VectorMaskLine, layerObjectGroup: Konva.Group): Konva.Line => {
  const konvaLine = new Konva.Line({
    id: vectorMaskLine.id,
    key: vectorMaskLine.id,
const createVectorMaskLine = (reduxObject: VectorMaskLine, konvaGroup: Konva.Group): Konva.Line => {
  const vectorMaskLine = new Konva.Line({
    id: reduxObject.id,
    key: reduxObject.id,
    name: RG_LAYER_LINE_NAME,
    strokeWidth: vectorMaskLine.strokeWidth,
    strokeWidth: reduxObject.strokeWidth,
    tension: 0,
    lineCap: 'round',
    lineJoin: 'round',
    shadowForStrokeEnabled: false,
    globalCompositeOperation: vectorMaskLine.tool === 'brush' ? 'source-over' : 'destination-out',
    globalCompositeOperation: reduxObject.tool === 'brush' ? 'source-over' : 'destination-out',
    listening: false,
  });
  layerObjectGroup.add(konvaLine);
  return konvaLine;
  konvaGroup.add(vectorMaskLine);
  return vectorMaskLine;
};

/**
 * Creates a konva rect from a vector mask rect.
 * @param vectorMaskRect The vector mask rect state
 * @param layerObjectGroup The konva layer's object group to add the line to
 * Creates a konva rect from a redux vector mask rect.
 * @param reduxObject The redux object to create the konva rect from.
 * @param konvaGroup The konva group to add the rect to.
 */
const createVectorMaskRect = (vectorMaskRect: VectorMaskRect, layerObjectGroup: Konva.Group): Konva.Rect => {
  const konvaRect = new Konva.Rect({
    id: vectorMaskRect.id,
    key: vectorMaskRect.id,
const createVectorMaskRect = (reduxObject: VectorMaskRect, konvaGroup: Konva.Group): Konva.Rect => {
  const vectorMaskRect = new Konva.Rect({
    id: reduxObject.id,
    key: reduxObject.id,
    name: RG_LAYER_RECT_NAME,
    x: vectorMaskRect.x,
    y: vectorMaskRect.y,
    width: vectorMaskRect.width,
    height: vectorMaskRect.height,
    x: reduxObject.x,
    y: reduxObject.y,
    width: reduxObject.width,
    height: reduxObject.height,
    listening: false,
  });
  layerObjectGroup.add(konvaRect);
  return konvaRect;
  konvaGroup.add(vectorMaskRect);
  return vectorMaskRect;
};

/**
 * Creates the "compositing rect" for a layer.
 * @param konvaLayer The konva layer
 */
const createCompositingRect = (konvaLayer: Konva.Layer): Konva.Rect => {
  const compositingRect = new Konva.Rect({ name: COMPOSITING_RECT_NAME, listening: false });
  konvaLayer.add(compositingRect);
@@ -326,41 +332,41 @@ const createCompositingRect = (konvaLayer: Konva.Layer): Konva.Rect => {
};

/**
 * Renders a regional guidance layer.
 * @param stage The konva stage
 * @param layerState The regional guidance layer state
 * @param globalMaskLayerOpacity The global mask layer opacity
 * @param tool The current tool
 * @param onLayerPosChanged Callback for when the layer's position changes
 * Renders a vector mask layer.
 * @param stage The konva stage to render on.
 * @param reduxLayer The redux vector mask layer to render.
 * @param reduxLayerIndex The index of the layer in the redux store.
 * @param globalMaskLayerOpacity The opacity of the global mask layer.
 * @param tool The current tool.
 */
const renderRGLayer = (
const renderRegionalGuidanceLayer = (
  stage: Konva.Stage,
  layerState: RegionalGuidanceLayer,
  reduxLayer: RegionalGuidanceLayer,
  globalMaskLayerOpacity: number,
  tool: Tool,
  onLayerPosChanged?: (layerId: string, x: number, y: number) => void
): void => {
  const konvaLayer =
    stage.findOne<Konva.Layer>(`#${layerState.id}`) ?? createRGLayer(stage, layerState, onLayerPosChanged);
    stage.findOne<Konva.Layer>(`#${reduxLayer.id}`) ??
    createRegionalGuidanceLayer(stage, reduxLayer, onLayerPosChanged);

  // Update the layer's position and listening state
  konvaLayer.setAttrs({
    listening: tool === 'move', // The layer only listens when using the move tool - otherwise the stage is handling mouse events
    x: Math.floor(layerState.x),
    y: Math.floor(layerState.y),
    x: Math.floor(reduxLayer.x),
    y: Math.floor(reduxLayer.y),
  });

  // Convert the color to a string, stripping the alpha - the object group will handle opacity.
  const rgbColor = rgbColorToString(layerState.previewColor);
  const rgbColor = rgbColorToString(reduxLayer.previewColor);

  const konvaObjectGroup = konvaLayer.findOne<Konva.Group>(`.${RG_LAYER_OBJECT_GROUP_NAME}`);
  assert(konvaObjectGroup, `Object group not found for layer ${layerState.id}`);
  assert(konvaObjectGroup, `Object group not found for layer ${reduxLayer.id}`);

  // We use caching to handle "global" layer opacity, but caching is expensive and we should only do it when required.
  let groupNeedsCache = false;

  const objectIds = layerState.maskObjects.map(mapId);
  // Destroy any objects that are no longer in the redux state
  const objectIds = reduxLayer.maskObjects.map(mapId);
  for (const objectNode of konvaObjectGroup.find(selectVectorMaskObjects)) {
    if (!objectIds.includes(objectNode.id())) {
      objectNode.destroy();
@@ -368,15 +374,15 @@ const renderRGLayer = (
    }
  }

  for (const maskObject of layerState.maskObjects) {
    if (maskObject.type === 'vector_mask_line') {
  for (const reduxObject of reduxLayer.maskObjects) {
    if (reduxObject.type === 'vector_mask_line') {
      const vectorMaskLine =
        stage.findOne<Konva.Line>(`#${maskObject.id}`) ?? createVectorMaskLine(maskObject, konvaObjectGroup);
        stage.findOne<Konva.Line>(`#${reduxObject.id}`) ?? createVectorMaskLine(reduxObject, konvaObjectGroup);

      // Only update the points if they have changed. The point values are never mutated, they are only added to the
      // array, so checking the length is sufficient to determine if we need to re-cache.
      if (vectorMaskLine.points().length !== maskObject.points.length) {
        vectorMaskLine.points(maskObject.points);
      if (vectorMaskLine.points().length !== reduxObject.points.length) {
        vectorMaskLine.points(reduxObject.points);
        groupNeedsCache = true;
      }
      // Only update the color if it has changed.
@@ -384,9 +390,9 @@ const renderRGLayer = (
        vectorMaskLine.stroke(rgbColor);
        groupNeedsCache = true;
      }
    } else if (maskObject.type === 'vector_mask_rect') {
    } else if (reduxObject.type === 'vector_mask_rect') {
      const konvaObject =
        stage.findOne<Konva.Rect>(`#${maskObject.id}`) ?? createVectorMaskRect(maskObject, konvaObjectGroup);
        stage.findOne<Konva.Rect>(`#${reduxObject.id}`) ?? createVectorMaskRect(reduxObject, konvaObjectGroup);

      // Only update the color if it has changed.
      if (konvaObject.fill() !== rgbColor) {
@@ -397,8 +403,8 @@ const renderRGLayer = (
  }

  // Only update layer visibility if it has changed.
  if (konvaLayer.visible() !== layerState.isEnabled) {
    konvaLayer.visible(layerState.isEnabled);
  if (konvaLayer.visible() !== reduxLayer.isEnabled) {
    konvaLayer.visible(reduxLayer.isEnabled);
    groupNeedsCache = true;
  }

@@ -422,7 +428,7 @@ const renderRGLayer = (
   * Instead, with the special handling, the effect is as if you drew all the shapes at 100% opacity, flattened them to
   * a single raster image, and _then_ applied the 50% opacity.
   */
  if (layerState.isSelected && tool !== 'move') {
  if (reduxLayer.isSelected && tool !== 'move') {
    // We must clear the cache first so Konva will re-draw the group with the new compositing rect
    if (konvaObjectGroup.isCached()) {
      konvaObjectGroup.clearCache();
@@ -432,7 +438,7 @@ const renderRGLayer = (

    compositingRect.setAttrs({
      // The rect should be the size of the layer - use the fast method if we don't have a pixel-perfect bbox already
      ...(!layerState.bboxNeedsUpdate && layerState.bbox ? layerState.bbox : getLayerBboxFast(konvaLayer)),
      ...(!reduxLayer.bboxNeedsUpdate && reduxLayer.bbox ? reduxLayer.bbox : getLayerBboxFast(konvaLayer)),
      fill: rgbColor,
      opacity: globalMaskLayerOpacity,
      // Draw this rect only where there are non-transparent pixels under it (e.g. the mask shapes)
@@ -453,14 +459,9 @@ const renderRGLayer = (
  }
};
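The compositing rect deserves a standalone illustration: the mask shapes are drawn fully opaque, then a `source-in` rect is laid over them and the group is cached, so the global opacity is applied once to the flattened silhouette rather than per shape (which would darken overlaps). A minimal sketch of the same technique with illustrative values:

```ts
// Minimal sketch of the flatten-then-fade compositing technique.
const group = new Konva.Group();
// Two overlapping shapes, drawn fully opaque.
group.add(new Konva.Circle({ x: 50, y: 50, radius: 40, fill: 'red' }));
group.add(new Konva.Circle({ x: 80, y: 50, radius: 40, fill: 'red' }));
// The rect paints only where the shapes already have pixels...
group.add(
  new Konva.Rect({
    x: 0,
    y: 0,
    width: 200,
    height: 100,
    fill: 'red',
    opacity: 0.5,
    globalCompositeOperation: 'source-in',
  })
);
// ...and caching rasterizes the group, so the overlap reads as one flat 50% shape.
group.cache();
```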

/**
 * Creates an initial image konva layer.
 * @param stage The konva stage
 * @param layerState The initial image layer state
 */
const createIILayer = (stage: Konva.Stage, layerState: InitialImageLayer): Konva.Layer => {
const createInitialImageLayer = (stage: Konva.Stage, reduxLayer: InitialImageLayer): Konva.Layer => {
  const konvaLayer = new Konva.Layer({
    id: layerState.id,
    id: reduxLayer.id,
    name: INITIAL_IMAGE_LAYER_NAME,
    imageSmoothingEnabled: true,
    listening: false,
@@ -469,27 +470,20 @@ const createIILayer = (stage: Konva.Stage, layerState: InitialImageLayer): Konva
  return konvaLayer;
};

/**
 * Creates the konva image for an initial image layer.
 * @param konvaLayer The konva layer
 * @param imageEl The image element
 */
const createIILayerImage = (konvaLayer: Konva.Layer, imageEl: HTMLImageElement): Konva.Image => {
const createInitialImageLayerImage = (konvaLayer: Konva.Layer, image: HTMLImageElement): Konva.Image => {
  const konvaImage = new Konva.Image({
    name: INITIAL_IMAGE_LAYER_IMAGE_NAME,
    image: imageEl,
    image,
  });
  konvaLayer.add(konvaImage);
  return konvaImage;
};

/**
 * Updates an initial image layer's attributes (width, height, opacity, visibility).
 * @param stage The konva stage
 * @param konvaImage The konva image
 * @param layerState The initial image layer state
 */
const updateIILayerImageAttrs = (stage: Konva.Stage, konvaImage: Konva.Image, layerState: InitialImageLayer): void => {
const updateInitialImageLayerImageAttrs = (
  stage: Konva.Stage,
  konvaImage: Konva.Image,
  reduxLayer: InitialImageLayer
) => {
  // Konva erroneously reports NaN for width and height when the stage is hidden. This causes errors when caching,
  // but it doesn't seem to break anything.
  // TODO(psyche): Investigate and report upstream.
@@ -498,55 +492,46 @@ const updateIILayerImageAttrs = (stage: Konva.Stage, konvaImage: Konva.Image, la
  if (
    konvaImage.width() !== newWidth ||
    konvaImage.height() !== newHeight ||
    konvaImage.visible() !== layerState.isEnabled
    konvaImage.visible() !== reduxLayer.isEnabled
  ) {
    konvaImage.setAttrs({
      opacity: layerState.opacity,
      opacity: reduxLayer.opacity,
      scaleX: 1,
      scaleY: 1,
      width: stage.width() / stage.scaleX(),
      height: stage.height() / stage.scaleY(),
      visible: layerState.isEnabled,
      visible: reduxLayer.isEnabled,
    });
  }
  if (konvaImage.opacity() !== layerState.opacity) {
    konvaImage.opacity(layerState.opacity);
  if (konvaImage.opacity() !== reduxLayer.opacity) {
    konvaImage.opacity(reduxLayer.opacity);
  }
};

/**
 * Update an initial image layer's image source when the image changes.
 * @param stage The konva stage
 * @param konvaLayer The konva layer
 * @param layerState The initial image layer state
 * @param getImageDTO A function to retrieve an image DTO from the server, used to update the image source
 */
const updateIILayerImageSource = async (
const updateInitialImageLayerImageSource = async (
  stage: Konva.Stage,
  konvaLayer: Konva.Layer,
  layerState: InitialImageLayer,
  getImageDTO: (imageName: string) => Promise<ImageDTO | null>
): Promise<void> => {
  if (layerState.image) {
    const imageName = layerState.image.name;
    const imageDTO = await getImageDTO(imageName);
    if (!imageDTO) {
      return;
    }
  reduxLayer: InitialImageLayer
) => {
  if (reduxLayer.image) {
    const imageName = reduxLayer.image.name;
    const req = getStore().dispatch(imagesApi.endpoints.getImageDTO.initiate(imageName));
    const imageDTO = await req.unwrap();
    req.unsubscribe();
    const imageEl = new Image();
    const imageId = getIILayerImageId(layerState.id, imageName);
    const imageId = getIILayerImageId(reduxLayer.id, imageName);
    imageEl.onload = () => {
      // Find the existing image or create a new one - must find using the name, bc the id may have just changed
      const konvaImage =
        konvaLayer.findOne<Konva.Image>(`.${INITIAL_IMAGE_LAYER_IMAGE_NAME}`) ??
        createIILayerImage(konvaLayer, imageEl);
        createInitialImageLayerImage(konvaLayer, imageEl);

      // Update the image's attributes
      konvaImage.setAttrs({
        id: imageId,
        image: imageEl,
      });
      updateIILayerImageAttrs(stage, konvaImage, layerState);
      updateInitialImageLayerImageAttrs(stage, konvaImage, reduxLayer);
      imageEl.id = imageId;
    };
    imageEl.src = imageDTO.image_url;
@@ -555,24 +540,14 @@ const updateIILayerImageSource = async (
  }
};

/**
 * Renders an initial image layer.
 * @param stage The konva stage
 * @param layerState The initial image layer state
 * @param getImageDTO A function to retrieve an image DTO from the server, used to update the image source
 */
const renderIILayer = (
  stage: Konva.Stage,
  layerState: InitialImageLayer,
  getImageDTO: (imageName: string) => Promise<ImageDTO | null>
): void => {
  const konvaLayer = stage.findOne<Konva.Layer>(`#${layerState.id}`) ?? createIILayer(stage, layerState);
const renderInitialImageLayer = (stage: Konva.Stage, reduxLayer: InitialImageLayer) => {
  const konvaLayer = stage.findOne<Konva.Layer>(`#${reduxLayer.id}`) ?? createInitialImageLayer(stage, reduxLayer);
  const konvaImage = konvaLayer.findOne<Konva.Image>(`.${INITIAL_IMAGE_LAYER_IMAGE_NAME}`);
  const canvasImageSource = konvaImage?.image();
  let imageSourceNeedsUpdate = false;
  if (canvasImageSource instanceof HTMLImageElement) {
    const image = layerState.image;
    if (image && canvasImageSource.id !== getCALayerImageId(layerState.id, image.name)) {
    const image = reduxLayer.image;
    if (image && canvasImageSource.id !== getCALayerImageId(reduxLayer.id, image.name)) {
      imageSourceNeedsUpdate = true;
    } else if (!image) {
      imageSourceNeedsUpdate = true;
@@ -582,20 +557,15 @@ const renderIILayer = (
  }

  if (imageSourceNeedsUpdate) {
    updateIILayerImageSource(stage, konvaLayer, layerState, getImageDTO);
    updateInitialImageLayerImageSource(stage, konvaLayer, reduxLayer);
  } else if (konvaImage) {
    updateIILayerImageAttrs(stage, konvaImage, layerState);
    updateInitialImageLayerImageAttrs(stage, konvaImage, reduxLayer);
  }
};

/**
 * Creates a control adapter layer.
 * @param stage The konva stage
 * @param layerState The control adapter layer state
 */
const createCALayer = (stage: Konva.Stage, layerState: ControlAdapterLayer): Konva.Layer => {
const createControlNetLayer = (stage: Konva.Stage, reduxLayer: ControlAdapterLayer): Konva.Layer => {
  const konvaLayer = new Konva.Layer({
    id: layerState.id,
    id: reduxLayer.id,
    name: CA_LAYER_NAME,
    imageSmoothingEnabled: true,
    listening: false,
@@ -604,53 +574,39 @@ const createCALayer = (stage: Konva.Stage, layerState: ControlAdapterLayer): Kon
  return konvaLayer;
};

/**
 * Creates a control adapter layer image.
 * @param konvaLayer The konva layer
 * @param imageEl The image element
 */
const createCALayerImage = (konvaLayer: Konva.Layer, imageEl: HTMLImageElement): Konva.Image => {
const createControlNetLayerImage = (konvaLayer: Konva.Layer, image: HTMLImageElement): Konva.Image => {
  const konvaImage = new Konva.Image({
    name: CA_LAYER_IMAGE_NAME,
    image: imageEl,
    image,
  });
  konvaLayer.add(konvaImage);
  return konvaImage;
};

/**
 * Updates the image source for a control adapter layer. This includes loading the image from the server and updating the konva image.
 * @param stage The konva stage
 * @param konvaLayer The konva layer
 * @param layerState The control adapter layer state
 * @param getImageDTO A function to retrieve an image DTO from the server, used to update the image source
 */
const updateCALayerImageSource = async (
const updateControlNetLayerImageSource = async (
  stage: Konva.Stage,
  konvaLayer: Konva.Layer,
  layerState: ControlAdapterLayer,
  getImageDTO: (imageName: string) => Promise<ImageDTO | null>
): Promise<void> => {
  const image = layerState.controlAdapter.processedImage ?? layerState.controlAdapter.image;
  reduxLayer: ControlAdapterLayer
) => {
  const image = reduxLayer.controlAdapter.processedImage ?? reduxLayer.controlAdapter.image;
  if (image) {
    const imageName = image.name;
    const imageDTO = await getImageDTO(imageName);
    if (!imageDTO) {
      return;
    }
    const req = getStore().dispatch(imagesApi.endpoints.getImageDTO.initiate(imageName));
    const imageDTO = await req.unwrap();
    req.unsubscribe();
    const imageEl = new Image();
    const imageId = getCALayerImageId(layerState.id, imageName);
    const imageId = getCALayerImageId(reduxLayer.id, imageName);
    imageEl.onload = () => {
      // Find the existing image or create a new one - must find using the name, bc the id may have just changed
      const konvaImage =
        konvaLayer.findOne<Konva.Image>(`.${CA_LAYER_IMAGE_NAME}`) ?? createCALayerImage(konvaLayer, imageEl);
        konvaLayer.findOne<Konva.Image>(`.${CA_LAYER_IMAGE_NAME}`) ?? createControlNetLayerImage(konvaLayer, imageEl);

      // Update the image's attributes
      konvaImage.setAttrs({
        id: imageId,
        image: imageEl,
      });
      updateCALayerImageAttrs(stage, konvaImage, layerState);
      updateControlNetLayerImageAttrs(stage, konvaImage, reduxLayer);
      // Must cache after this to apply the filters
      konvaImage.cache();
      imageEl.id = imageId;
@@ -661,17 +617,11 @@ const updateCALayerImageSource = async (
  }
};

/**
 * Updates the image attributes for a control adapter layer's image (width, height, visibility, opacity, filters).
 * @param stage The konva stage
 * @param konvaImage The konva image
 * @param layerState The control adapter layer state
 */
const updateCALayerImageAttrs = (
const updateControlNetLayerImageAttrs = (
  stage: Konva.Stage,
  konvaImage: Konva.Image,
  layerState: ControlAdapterLayer
): void => {
  reduxLayer: ControlAdapterLayer
) => {
  let needsCache = false;
  // Konva erroneously reports NaN for width and height when the stage is hidden. This causes errors when caching,
  // but it doesn't seem to break anything.
@@ -682,47 +632,36 @@ const updateCALayerImageAttrs = (
  if (
    konvaImage.width() !== newWidth ||
    konvaImage.height() !== newHeight ||
    konvaImage.visible() !== layerState.isEnabled ||
    hasFilter !== layerState.isFilterEnabled
    konvaImage.visible() !== reduxLayer.isEnabled ||
    hasFilter !== reduxLayer.isFilterEnabled
  ) {
    konvaImage.setAttrs({
      opacity: layerState.opacity,
      opacity: reduxLayer.opacity,
      scaleX: 1,
      scaleY: 1,
      width: stage.width() / stage.scaleX(),
      height: stage.height() / stage.scaleY(),
      visible: layerState.isEnabled,
      filters: layerState.isFilterEnabled ? [LightnessToAlphaFilter] : [],
      visible: reduxLayer.isEnabled,
      filters: reduxLayer.isFilterEnabled ? [LightnessToAlphaFilter] : [],
    });
    needsCache = true;
  }
  if (konvaImage.opacity() !== layerState.opacity) {
    konvaImage.opacity(layerState.opacity);
  if (konvaImage.opacity() !== reduxLayer.opacity) {
    konvaImage.opacity(reduxLayer.opacity);
  }
  if (needsCache) {
    konvaImage.cache();
  }
};

/**
 * Renders a control adapter layer. If the layer doesn't already exist, it is created. Otherwise, the layer is updated
 * with the current image source and attributes.
 * @param stage The konva stage
 * @param layerState The control adapter layer state
 * @param getImageDTO A function to retrieve an image DTO from the server, used to update the image source
 */
const renderCALayer = (
  stage: Konva.Stage,
  layerState: ControlAdapterLayer,
  getImageDTO: (imageName: string) => Promise<ImageDTO | null>
): void => {
  const konvaLayer = stage.findOne<Konva.Layer>(`#${layerState.id}`) ?? createCALayer(stage, layerState);
const renderControlNetLayer = (stage: Konva.Stage, reduxLayer: ControlAdapterLayer) => {
  const konvaLayer = stage.findOne<Konva.Layer>(`#${reduxLayer.id}`) ?? createControlNetLayer(stage, reduxLayer);
  const konvaImage = konvaLayer.findOne<Konva.Image>(`.${CA_LAYER_IMAGE_NAME}`);
  const canvasImageSource = konvaImage?.image();
  let imageSourceNeedsUpdate = false;
  if (canvasImageSource instanceof HTMLImageElement) {
    const image = layerState.controlAdapter.processedImage ?? layerState.controlAdapter.image;
    if (image && canvasImageSource.id !== getCALayerImageId(layerState.id, image.name)) {
    const image = reduxLayer.controlAdapter.processedImage ?? reduxLayer.controlAdapter.image;
    if (image && canvasImageSource.id !== getCALayerImageId(reduxLayer.id, image.name)) {
      imageSourceNeedsUpdate = true;
    } else if (!image) {
      imageSourceNeedsUpdate = true;
@@ -732,46 +671,44 @@ const renderCALayer = (
  }

  if (imageSourceNeedsUpdate) {
    updateCALayerImageSource(stage, konvaLayer, layerState, getImageDTO);
    updateControlNetLayerImageSource(stage, konvaLayer, reduxLayer);
  } else if (konvaImage) {
    updateCALayerImageAttrs(stage, konvaImage, layerState);
    updateControlNetLayerImageAttrs(stage, konvaImage, reduxLayer);
  }
};

/**
 * Renders the layers on the stage.
 * @param stage The konva stage
 * @param layerStates Array of all layer states
 * @param globalMaskLayerOpacity The global mask layer opacity
 * @param tool The current tool
 * @param getImageDTO A function to retrieve an image DTO from the server, used to update the image source
 * @param onLayerPosChanged Callback for when the layer's position changes
 * @param stage The konva stage to render on.
 * @param reduxLayers Array of the layers from the redux store.
 * @param layerOpacity The opacity of the layer.
 * @param onLayerPosChanged Callback for when the layer's position changes. This is optional to allow for offscreen rendering.
 * @returns
 */
const renderLayers = (
  stage: Konva.Stage,
  layerStates: Layer[],
  reduxLayers: Layer[],
  globalMaskLayerOpacity: number,
  tool: Tool,
  getImageDTO: (imageName: string) => Promise<ImageDTO | null>,
  onLayerPosChanged?: (layerId: string, x: number, y: number) => void
): void => {
  const layerIds = layerStates.filter(isRenderableLayer).map(mapId);
) => {
  const reduxLayerIds = reduxLayers.filter(isRenderableLayer).map(mapId);
  // Remove un-rendered layers
  for (const konvaLayer of stage.find<Konva.Layer>(selectRenderableLayers)) {
    if (!layerIds.includes(konvaLayer.id())) {
    if (!reduxLayerIds.includes(konvaLayer.id())) {
      konvaLayer.destroy();
    }
  }

  for (const layer of layerStates) {
    if (isRegionalGuidanceLayer(layer)) {
      renderRGLayer(stage, layer, globalMaskLayerOpacity, tool, onLayerPosChanged);
  for (const reduxLayer of reduxLayers) {
    if (isRegionalGuidanceLayer(reduxLayer)) {
      renderRegionalGuidanceLayer(stage, reduxLayer, globalMaskLayerOpacity, tool, onLayerPosChanged);
    }
    if (isControlAdapterLayer(layer)) {
      renderCALayer(stage, layer, getImageDTO);
    if (isControlAdapterLayer(reduxLayer)) {
      renderControlNetLayer(stage, reduxLayer);
    }
    if (isInitialImageLayer(layer)) {
      renderIILayer(stage, layer, getImageDTO);
    if (isInitialImageLayer(reduxLayer)) {
      renderInitialImageLayer(stage, reduxLayer);
    }
    // IP Adapter layers are not rendered
  }
@@ -779,12 +716,13 @@ const renderLayers = (

/**
 * Creates a bounding box rect for a layer.
 * @param layerState The layer state for the layer to create the bounding box for
 * @param konvaLayer The konva layer to attach the bounding box to
 * @param reduxLayer The redux layer to create the bounding box for.
 * @param konvaLayer The konva layer to attach the bounding box to.
 * @param onBboxMouseDown Callback for when the bounding box is clicked.
 */
const createBboxRect = (layerState: Layer, konvaLayer: Konva.Layer): Konva.Rect => {
const createBboxRect = (reduxLayer: Layer, konvaLayer: Konva.Layer) => {
  const rect = new Konva.Rect({
    id: getLayerBboxId(layerState.id),
    id: getLayerBboxId(reduxLayer.id),
    name: LAYER_BBOX_NAME,
    strokeWidth: 1,
    visible: false,
@@ -795,12 +733,12 @@ const createBboxRect = (layerState: Layer, konvaLayer: Konva.Layer): Konva.Rect

/**
 * Renders the bounding boxes for the layers.
 * @param stage The konva stage
 * @param layerStates An array of layers to draw bboxes for
 * @param stage The konva stage to render on
 * @param reduxLayers An array of all redux layers to draw bboxes for
 * @param tool The current tool
 * @returns
 */
const renderBboxes = (stage: Konva.Stage, layerStates: Layer[], tool: Tool): void => {
const renderBboxes = (stage: Konva.Stage, reduxLayers: Layer[], tool: Tool) => {
  // Hide all bboxes so they don't interfere with getClientRect
  for (const bboxRect of stage.find<Konva.Rect>(`.${LAYER_BBOX_NAME}`)) {
    bboxRect.visible(false);
@@ -811,39 +749,39 @@ const renderBboxes = (stage: Konva.Stage, layerStates: Layer[], tool: Tool): voi
    return;
  }

  for (const layer of layerStates.filter(isRegionalGuidanceLayer)) {
    if (!layer.bbox) {
  for (const reduxLayer of reduxLayers.filter(isRegionalGuidanceLayer)) {
    if (!reduxLayer.bbox) {
      continue;
    }
    const konvaLayer = stage.findOne<Konva.Layer>(`#${layer.id}`);
    assert(konvaLayer, `Layer ${layer.id} not found in stage`);
    const konvaLayer = stage.findOne<Konva.Layer>(`#${reduxLayer.id}`);
    assert(konvaLayer, `Layer ${reduxLayer.id} not found in stage`);

    const bboxRect = konvaLayer.findOne<Konva.Rect>(`.${LAYER_BBOX_NAME}`) ?? createBboxRect(layer, konvaLayer);
    const bboxRect = konvaLayer.findOne<Konva.Rect>(`.${LAYER_BBOX_NAME}`) ?? createBboxRect(reduxLayer, konvaLayer);

    bboxRect.setAttrs({
      visible: !layer.bboxNeedsUpdate,
      listening: layer.isSelected,
      x: layer.bbox.x,
      y: layer.bbox.y,
      width: layer.bbox.width,
      height: layer.bbox.height,
      stroke: layer.isSelected ? BBOX_SELECTED_STROKE : '',
      visible: !reduxLayer.bboxNeedsUpdate,
      listening: reduxLayer.isSelected,
      x: reduxLayer.bbox.x,
      y: reduxLayer.bbox.y,
      width: reduxLayer.bbox.width,
      height: reduxLayer.bbox.height,
      stroke: reduxLayer.isSelected ? BBOX_SELECTED_STROKE : '',
    });
  }
};

/**
 * Calculates the bbox of each regional guidance layer. Only calculates if the mask has changed.
 * @param stage The konva stage
 * @param layerStates An array of layers to calculate bboxes for
 * @param stage The konva stage to render on.
 * @param reduxLayers An array of redux layers to calculate bboxes for
 * @param onBboxChanged Callback for when the bounding box changes
 */
const updateBboxes = (
  stage: Konva.Stage,
  layerStates: Layer[],
  reduxLayers: Layer[],
  onBboxChanged: (layerId: string, bbox: IRect | null) => void
): void => {
  for (const rgLayer of layerStates.filter(isRegionalGuidanceLayer)) {
) => {
  for (const rgLayer of reduxLayers.filter(isRegionalGuidanceLayer)) {
    const konvaLayer = stage.findOne<Konva.Layer>(`#${rgLayer.id}`);
    assert(konvaLayer, `Layer ${rgLayer.id} not found in stage`);
    // We only need to recalculate the bbox if the layer has changed
@@ -870,7 +808,7 @@ const updateBboxes = (

/**
 * Creates the background layer for the stage.
 * @param stage The konva stage
 * @param stage The konva stage to render on
 */
const createBackgroundLayer = (stage: Konva.Stage): Konva.Layer => {
  const layer = new Konva.Layer({
@@ -891,17 +829,17 @@ const createBackgroundLayer = (stage: Konva.Stage): Konva.Layer => {
  image.onload = () => {
    background.fillPatternImage(image);
  };
  image.src = TRANSPARENCY_CHECKER_PATTERN;
  image.src = STAGE_BG_DATAURL;
  return layer;
};

/**
 * Renders the background layer for the stage.
 * @param stage The konva stage
 * @param stage The konva stage to render on
 * @param width The unscaled width of the canvas
 * @param height The unscaled height of the canvas
 */
const renderBackground = (stage: Konva.Stage, width: number, height: number): void => {
const renderBackground = (stage: Konva.Stage, width: number, height: number) => {
  const layer = stage.findOne<Konva.Layer>(`#${BACKGROUND_LAYER_ID}`) ?? createBackgroundLayer(stage);

  const background = layer.findOne<Konva.Rect>(`#${BACKGROUND_RECT_ID}`);
@@ -942,10 +880,6 @@ const arrangeLayers = (stage: Konva.Stage, layerIds: string[]): void => {
  stage.findOne<Konva.Layer>(`#${TOOL_PREVIEW_LAYER_ID}`)?.zIndex(nextZIndex++);
};

/**
 * Creates the "no layers" fallback layer
 * @param stage The konva stage
 */
const createNoLayersMessageLayer = (stage: Konva.Stage): Konva.Layer => {
  const noLayersMessageLayer = new Konva.Layer({
    id: NO_LAYERS_MESSAGE_LAYER_ID,
@@ -957,7 +891,7 @@ const createNoLayersMessageLayer = (stage: Konva.Stage): Konva.Layer => {
    y: 0,
    align: 'center',
    verticalAlign: 'middle',
    text: t('controlLayers.noLayersAdded', 'No Layers Added'),
    text: t('controlLayers.noLayersAdded'),
    fontFamily: '"Inter Variable", sans-serif',
    fontStyle: '600',
    fill: 'white',
@@ -967,14 +901,7 @@ const createNoLayersMessageLayer = (stage: Konva.Stage): Konva.Layer => {
  return noLayersMessageLayer;
};

/**
 * Renders the "no layers" message when there are no layers to render
 * @param stage The konva stage
 * @param layerCount The current number of layers
 * @param width The target width of the text
 * @param height The target height of the text
 */
const renderNoLayersMessage = (stage: Konva.Stage, layerCount: number, width: number, height: number): void => {
const renderNoLayersMessage = (stage: Konva.Stage, layerCount: number, width: number, height: number) => {
  const noLayersMessageLayer =
    stage.findOne<Konva.Layer>(`#${NO_LAYERS_MESSAGE_LAYER_ID}`) ?? createNoLayersMessageLayer(stage);
  if (layerCount === 0) {
@@ -1009,3 +936,20 @@ export const debouncedRenderers = {
  arrangeLayers: debounce(arrangeLayers, DEBOUNCE_MS),
  updateBboxes: debounce(updateBboxes, DEBOUNCE_MS),
};
|
||||
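For context, `debounce` here is lodash's, so a burst of state changes (e.g. one brush stroke emitting many pointer events) collapses into a single trailing render per `DEBOUNCE_MS` window. A minimal usage sketch, assuming a `stage`, a `layers` array, and an `onBboxChanged` callback already in scope:

    // Sketch: call the debounced variant on every state change; lodash
    // coalesces the burst into one invocation after DEBOUNCE_MS of quiet.
    debouncedRenderers.updateBboxes(stage, layers, onBboxChanged);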

/**
 * Calculates the lightness (HSL) of a given pixel and sets the alpha channel to that value.
 * This is useful for edge maps and other masks, to make the black areas transparent.
 * @param imageData The image data to apply the filter to
 */
const LightnessToAlphaFilter = (imageData: ImageData) => {
  const len = imageData.data.length / 4;
  for (let i = 0; i < len; i++) {
    const r = imageData.data[i * 4 + 0] as number;
    const g = imageData.data[i * 4 + 1] as number;
    const b = imageData.data[i * 4 + 2] as number;
    const cMin = Math.min(r, g, b);
    const cMax = Math.max(r, g, b);
    imageData.data[i * 4 + 3] = (cMin + cMax) / 2;
  }
};
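Since r, g, and b are bytes in [0, 255], the HSL lightness `(cMin + cMax) / 2` also lands in [0, 255], so it can be written straight into the alpha byte: pure black becomes fully transparent and pure white fully opaque. A hedged usage sketch (`konvaImage` is a hypothetical `Konva.Image`; Konva applies custom filters to a node's cached bitmap, so `cache()` must run before the filter takes effect):

    // Sketch: attach the custom filter to a cached node.
    konvaImage.cache();
    konvaImage.filters([LightnessToAlphaFilter]);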
@@ -1,21 +0,0 @@
import { useTranslation } from 'react-i18next';
import { useGetBoardAssetsTotalQuery, useGetBoardImagesTotalQuery } from 'services/api/endpoints/boards';

type Props = {
  board_id: string;
};

export const BoardTotalsTooltip = ({ board_id }: Props) => {
  const { t } = useTranslation();
  const { imagesTotal } = useGetBoardImagesTotalQuery(board_id, {
    selectFromResult: ({ data }) => {
      return { imagesTotal: data?.total ?? 0 };
    },
  });
  const { assetsTotal } = useGetBoardAssetsTotalQuery(board_id, {
    selectFromResult: ({ data }) => {
      return { assetsTotal: data?.total ?? 0 };
    },
  });
  return `${t('boards.imagesWithCount', { count: imagesTotal })}, ${t('boards.assetsWithCount', { count: assetsTotal })}`;
};
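Two details of the deleted component above are worth noting: `selectFromResult` narrows each RTK Query subscription so the tooltip only re-renders when the selected `total` changes, and the i18next `count` option picks a plural form. The backing translation keys are not shown in this diff, but under i18next's plural-suffix convention they would plausibly look like:

    // Assumed resource shape, not verified against the repo's locale files:
    // "imagesWithCount_one": "{{count}} image",
    // "imagesWithCount_other": "{{count}} images"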
@@ -8,12 +8,15 @@ import SelectionOverlay from 'common/components/SelectionOverlay';
import type { AddToBoardDropData } from 'features/dnd/types';
import AutoAddIcon from 'features/gallery/components/Boards/AutoAddIcon';
import BoardContextMenu from 'features/gallery/components/Boards/BoardContextMenu';
import { BoardTotalsTooltip } from 'features/gallery/components/Boards/BoardsList/BoardTotalsTooltip';
import { autoAddBoardIdChanged, boardIdSelected, selectGallerySlice } from 'features/gallery/store/gallerySlice';
import { memo, useCallback, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { PiImagesSquare } from 'react-icons/pi';
import { useUpdateBoardMutation } from 'services/api/endpoints/boards';
import {
  useGetBoardAssetsTotalQuery,
  useGetBoardImagesTotalQuery,
  useUpdateBoardMutation,
} from 'services/api/endpoints/boards';
import { useGetImageDTOQuery } from 'services/api/endpoints/images';
import type { BoardDTO } from 'services/api/types';

@@ -48,6 +51,17 @@ const GalleryBoard = ({ board, isSelected, setBoardToDelete }: GalleryBoardProps
    setIsHovered(false);
  }, []);

  const { data: imagesTotal } = useGetBoardImagesTotalQuery(board.board_id);
  const { data: assetsTotal } = useGetBoardAssetsTotalQuery(board.board_id);
  const tooltip = useMemo(() => {
    if (imagesTotal?.total === undefined || assetsTotal?.total === undefined) {
      return undefined;
    }
    return `${imagesTotal.total} image${imagesTotal.total === 1 ? '' : 's'}, ${
      assetsTotal.total
    } asset${assetsTotal.total === 1 ? '' : 's'}`;
  }, [assetsTotal, imagesTotal]);

  const { currentData: coverImage } = useGetImageDTOQuery(board.cover_image_name ?? skipToken);

  const { board_name, board_id } = board;
@@ -118,7 +132,7 @@ const GalleryBoard = ({ board, isSelected, setBoardToDelete }: GalleryBoardProps
    >
      <BoardContextMenu board={board} board_id={board_id} setBoardToDelete={setBoardToDelete}>
        {(ref) => (
          <Tooltip label={<BoardTotalsTooltip board_id={board.board_id} />} openDelay={1000}>
          <Tooltip label={tooltip} openDelay={1000}>
            <Flex
              ref={ref}
              onClick={handleSelectBoard}

@@ -5,11 +5,11 @@ import SelectionOverlay from 'common/components/SelectionOverlay';
import type { RemoveFromBoardDropData } from 'features/dnd/types';
import AutoAddIcon from 'features/gallery/components/Boards/AutoAddIcon';
import BoardContextMenu from 'features/gallery/components/Boards/BoardContextMenu';
import { BoardTotalsTooltip } from 'features/gallery/components/Boards/BoardsList/BoardTotalsTooltip';
import { autoAddBoardIdChanged, boardIdSelected } from 'features/gallery/store/gallerySlice';
import InvokeLogoSVG from 'public/assets/images/invoke-symbol-wht-lrg.svg';
import { memo, useCallback, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { useGetBoardAssetsTotalQuery, useGetBoardImagesTotalQuery } from 'services/api/endpoints/boards';
import { useBoardName } from 'services/api/hooks/useBoardName';

interface Props {
@@ -29,6 +29,17 @@ const NoBoardBoard = memo(({ isSelected }: Props) => {
  }, [dispatch, autoAssignBoardOnClick]);
  const [isHovered, setIsHovered] = useState(false);

  const { data: imagesTotal } = useGetBoardImagesTotalQuery('none');
  const { data: assetsTotal } = useGetBoardAssetsTotalQuery('none');
  const tooltip = useMemo(() => {
    if (imagesTotal?.total === undefined || assetsTotal?.total === undefined) {
      return undefined;
    }
    return `${imagesTotal.total} image${imagesTotal.total === 1 ? '' : 's'}, ${
      assetsTotal.total
    } asset${assetsTotal.total === 1 ? '' : 's'}`;
  }, [assetsTotal, imagesTotal]);

  const handleMouseOver = useCallback(() => {
    setIsHovered(true);
  }, []);
@@ -60,7 +71,7 @@ const NoBoardBoard = memo(({ isSelected }: Props) => {
    >
      <BoardContextMenu board_id="none">
        {(ref) => (
          <Tooltip label={<BoardTotalsTooltip board_id="none" />} openDelay={1000}>
          <Tooltip label={tooltip} openDelay={1000}>
            <Flex
              ref={ref}
              onClick={handleSelectBoard}

@@ -1,55 +0,0 @@
import { Flex, IconButton, Spacer, Tag, TagCloseButton, TagLabel, Tooltip } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useGalleryImages } from 'features/gallery/hooks/useGalleryImages';
import { selectionChanged } from 'features/gallery/store/gallerySlice';
import { useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { BiSelectMultiple } from 'react-icons/bi';

import { GallerySearch } from './GallerySearch';

export const GalleryBulkSelect = () => {
  const dispatch = useAppDispatch();
  const { selection } = useAppSelector((s) => s.gallery);
  const { t } = useTranslation();
  const { imageDTOs } = useGalleryImages();

  const onClickClearSelection = useCallback(() => {
    dispatch(selectionChanged([]));
  }, [dispatch]);

  const onClickSelectAllPage = useCallback(() => {
    dispatch(selectionChanged(selection.concat(imageDTOs)));
  }, [dispatch, imageDTOs, selection]);

  return (
    <Flex alignItems="center" justifyContent="space-between">
      <Flex>
        {selection.length > 0 ? (
          <Tag>
            <TagLabel>
              {selection.length} {t('common.selected')}
            </TagLabel>
            <Tooltip label="Clear selection">
              <TagCloseButton onClick={onClickClearSelection} />
            </Tooltip>
          </Tag>
        ) : (
          <Spacer />
        )}

        <Tooltip label={t('gallery.selectAllOnPage')}>
          <IconButton
            variant="outline"
            size="sm"
            icon={<BiSelectMultiple />}
            aria-label="Bulk select"
            onClick={onClickSelectAllPage}
          />
        </Tooltip>
      </Flex>

      <GallerySearch />
    </Flex>
  );
};
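One design note on `onClickSelectAllPage` above: `selection.concat(imageDTOs)` can introduce duplicates when part of the page is already selected. If that mattered, a dedupe by `image_name` would be a small change (sketch only):

    // Sketch: keep the first occurrence of each image_name.
    const merged = [...selection, ...imageDTOs].filter(
      (dto, i, arr) => arr.findIndex((d) => d.image_name === dto.image_name) === i
    );
    dispatch(selectionChanged(merged));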
@@ -1,97 +0,0 @@
import { Flex, IconButton, Input, InputGroup, InputRightElement, Tooltip } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { searchTermChanged } from 'features/gallery/store/gallerySlice';
import { motion } from 'framer-motion';
import { debounce } from 'lodash-es';
import type { ChangeEvent } from 'react';
import { useCallback, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { PiMagnifyingGlassBold, PiXBold } from 'react-icons/pi';

export const GallerySearch = () => {
  const dispatch = useAppDispatch();
  const { searchTerm } = useAppSelector((s) => s.gallery);
  const { t } = useTranslation();

  const [expanded, setExpanded] = useState(false);
  const [searchTermInput, setSearchTermInput] = useState('');

  const debouncedSetSearchTerm = useMemo(() => {
    return debounce((value: string) => {
      dispatch(searchTermChanged(value));
    }, 1000);
  }, [dispatch]);

  const onChangeInput = useCallback(
    (e: ChangeEvent<HTMLInputElement>) => {
      setSearchTermInput(e.target.value);
      debouncedSetSearchTerm(e.target.value);
    },
    [debouncedSetSearchTerm]
  );

  const onClearInput = useCallback(() => {
    setSearchTermInput('');
    debouncedSetSearchTerm('');
  }, [debouncedSetSearchTerm]);

  const toggleExpanded = useCallback((newState: boolean) => {
    setExpanded(newState);
  }, []);

  return (
    <Flex>
      {!expanded && (
        <Tooltip
          label={
            searchTerm && searchTerm.length ? `${t('gallery.searchingBy')} ${searchTerm}` : t('gallery.noActiveSearch')
          }
        >
          <IconButton
            aria-label="Close"
            icon={<PiMagnifyingGlassBold />}
            onClick={toggleExpanded.bind(null, true)}
            variant="outline"
            size="sm"
          />
        </Tooltip>
      )}
      <motion.div
        initial={false}
        animate={{ width: expanded ? '200px' : '0px' }}
        transition={{ duration: 0.3 }}
        style={{ overflow: 'hidden' }}
      >
        <InputGroup size="sm">
          <IconButton
            aria-label="Close"
            icon={<PiMagnifyingGlassBold />}
            onClick={toggleExpanded.bind(null, false)}
            variant="ghost"
            size="sm"
          />

          <Input
            type="text"
            placeholder="Search..."
            size="sm"
            variant="outline"
            onChange={onChangeInput}
            value={searchTermInput}
          />
          {searchTermInput && searchTermInput.length && (
            <InputRightElement h="full" pe={2}>
              <IconButton
                onClick={onClearInput}
                size="sm"
                variant="link"
                aria-label={t('boards.clearSearch')}
                icon={<PiXBold />}
              />
            </InputRightElement>
          )}
        </InputGroup>
      </motion.div>
    </Flex>
  );
};
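The deleted search box keeps two copies of the term: `searchTermInput` updates on every keystroke so the field echoes immediately, while the redux `searchTerm` only updates after the 1000 ms lodash debounce settles, sparing the list query from per-keystroke refetches. The same pattern in isolation (names illustrative):

    // Sketch: local state for immediate echo, debounced commit upstream.
    const [draft, setDraft] = useState('');
    const commit = useMemo(() => debounce((v: string) => dispatch(searchTermChanged(v)), 1000), [dispatch]);
    const onChange = (e: ChangeEvent<HTMLInputElement>) => {
      setDraft(e.target.value);
      commit(e.target.value);
    };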
@@ -1,22 +1,22 @@
import { Box, Button, ButtonGroup, Flex, Tab, TabList, Tabs, useDisclosure } from '@invoke-ai/ui-library';
import { Box, Button, ButtonGroup, Flex, Tab, TabList, Tabs, useDisclosure, VStack } from '@invoke-ai/ui-library';
import { useStore } from '@nanostores/react';
import { $galleryHeader } from 'app/store/nanostores/galleryHeader';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { galleryViewChanged } from 'features/gallery/store/gallerySlice';
import { memo, useCallback } from 'react';
import { memo, useCallback, useRef } from 'react';
import { useTranslation } from 'react-i18next';
import { PiImagesBold } from 'react-icons/pi';
import { RiServerLine } from 'react-icons/ri';

import BoardsList from './Boards/BoardsList/BoardsList';
import GalleryBoardName from './GalleryBoardName';
import { GalleryBulkSelect } from './GalleryBulkSelect';
import GallerySettingsPopover from './GallerySettingsPopover';
import GalleryImageGrid from './ImageGrid/GalleryImageGrid';
import { GalleryPagination } from './ImageGrid/GalleryPagination';

const ImageGalleryContent = () => {
  const { t } = useTranslation();
  const resizeObserverRef = useRef<HTMLDivElement>(null);
  const galleryGridRef = useRef<HTMLDivElement>(null);
  const galleryView = useAppSelector((s) => s.gallery.galleryView);
  const dispatch = useAppDispatch();
  const galleryHeader = useStore($galleryHeader);
@@ -31,10 +31,10 @@ const ImageGalleryContent = () => {
  }, [dispatch]);

  return (
    <Flex layerStyle="first" flexDirection="column" h="full" w="full" borderRadius="base" p={2} gap={2}>
    <VStack layerStyle="first" flexDirection="column" h="full" w="full" borderRadius="base" p={2}>
      {galleryHeader}
      <Box>
        <Flex alignItems="center" justifyContent="space-between" gap={2}>
        <Box w="full">
          <Flex ref={resizeObserverRef} alignItems="center" justifyContent="space-between" gap={2}>
            <GalleryBoardName isOpen={isBoardListOpen} onToggle={onToggleBoardList} />
            <GallerySettingsPopover />
          </Flex>
@@ -42,41 +42,40 @@ const ImageGalleryContent = () => {
          <BoardsList isOpen={isBoardListOpen} />
        </Box>
      </Box>
      <Flex alignItems="center" justifyContent="space-between" gap={2}>
        <Tabs index={galleryView === 'images' ? 0 : 1} variant="unstyled" size="sm" w="full">
          <TabList>
            <ButtonGroup w="full">
              <Tab
                as={Button}
                size="sm"
                isChecked={galleryView === 'images'}
                onClick={handleClickImages}
                w="full"
                leftIcon={<PiImagesBold size="16px" />}
                data-testid="images-tab"
              >
                {t('parameters.images')}
              </Tab>
              <Tab
                as={Button}
                size="sm"
                isChecked={galleryView === 'assets'}
                onClick={handleClickAssets}
                w="full"
                leftIcon={<RiServerLine size="16px" />}
                data-testid="assets-tab"
              >
                {t('gallery.assets')}
              </Tab>
            </ButtonGroup>
          </TabList>
        </Tabs>
      <Flex ref={galleryGridRef} direction="column" gap={2} h="full" w="full">
        <Flex alignItems="center" justifyContent="space-between" gap={2}>
          <Tabs index={galleryView === 'images' ? 0 : 1} variant="unstyled" size="sm" w="full">
            <TabList>
              <ButtonGroup w="full">
                <Tab
                  as={Button}
                  size="sm"
                  isChecked={galleryView === 'images'}
                  onClick={handleClickImages}
                  w="full"
                  leftIcon={<PiImagesBold size="16px" />}
                  data-testid="images-tab"
                >
                  {t('parameters.images')}
                </Tab>
                <Tab
                  as={Button}
                  size="sm"
                  isChecked={galleryView === 'assets'}
                  onClick={handleClickAssets}
                  w="full"
                  leftIcon={<RiServerLine size="16px" />}
                  data-testid="assets-tab"
                >
                  {t('gallery.assets')}
                </Tab>
              </ButtonGroup>
            </TabList>
          </Tabs>
        </Flex>
        <GalleryImageGrid />
      </Flex>
      <GalleryBulkSelect />

      <GalleryImageGrid />
      <GalleryPagination />
    </Flex>
    </VStack>
  );
};


@@ -16,13 +16,13 @@ import type { MouseEvent } from 'react';
import { memo, useCallback, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { PiStarBold, PiStarFill, PiTrashSimpleFill } from 'react-icons/pi';
import { useStarImagesMutation, useUnstarImagesMutation } from 'services/api/endpoints/images';
import type { ImageDTO } from 'services/api/types';

// This class name is used to calculate the number of images that fit in the gallery
export const GALLERY_IMAGE_CLASS_NAME = 'gallery-image';
import { useGetImageDTOQuery, useStarImagesMutation, useUnstarImagesMutation } from 'services/api/endpoints/images';

const imageSx: SystemStyleObject = { w: 'full', h: 'full' };
const imageIconStyleOverrides: SystemStyleObject = {
  bottom: 2,
  top: 'auto',
};
const boxSx: SystemStyleObject = {
  containerType: 'inline-size',
};
@@ -34,22 +34,24 @@ const badgeSx: SystemStyleObject = {
};

interface HoverableImageProps {
  imageDTO: ImageDTO;
  imageName: string;
  index: number;
}

const GalleryImage = ({ index, imageDTO }: HoverableImageProps) => {
const GalleryImage = (props: HoverableImageProps) => {
  const dispatch = useAppDispatch();
  const { imageName } = props;
  const { currentData: imageDTO } = useGetImageDTOQuery(imageName);
  const shift = useShiftModifier();
  const { t } = useTranslation();
  const selectedBoardId = useAppSelector((s) => s.gallery.selectedBoardId);
  const alwaysShowImageSizeBadge = useAppSelector((s) => s.gallery.alwaysShowImageSizeBadge);
  const isSelectedForCompare = useAppSelector((s) => s.gallery.imageToCompare?.image_name === imageDTO.image_name);
  const isSelectedForCompare = useAppSelector((s) => s.gallery.imageToCompare?.image_name === imageName);
  const { handleClick, isSelected, areMultiplesSelected } = useMultiselect(imageDTO);

  const customStarUi = useStore($customStarUI);

  const imageContainerRef = useScrollIntoView(isSelected, index, areMultiplesSelected);
  const imageContainerRef = useScrollIntoView(isSelected, props.index, areMultiplesSelected);

  const handleDelete = useCallback(
    (e: MouseEvent<HTMLButtonElement>) => {
@@ -112,32 +114,32 @@ const GalleryImage = ({ index, imageDTO }: HoverableImageProps) => {
  }, []);

  const starIcon = useMemo(() => {
    if (imageDTO.starred) {
    if (imageDTO?.starred) {
      return customStarUi ? customStarUi.on.icon : <PiStarFill size="20" />;
    }
    if (!imageDTO.starred && isHovered) {
    if (!imageDTO?.starred && isHovered) {
      return customStarUi ? customStarUi.off.icon : <PiStarBold size="20" />;
    }
  }, [imageDTO.starred, isHovered, customStarUi]);
  }, [imageDTO?.starred, isHovered, customStarUi]);

  const starTooltip = useMemo(() => {
    if (imageDTO.starred) {
    if (imageDTO?.starred) {
      return customStarUi ? customStarUi.off.text : 'Unstar';
    }
    if (!imageDTO.starred) {
    if (!imageDTO?.starred) {
      return customStarUi ? customStarUi.on.text : 'Star';
    }
    return '';
  }, [imageDTO.starred, customStarUi]);
  }, [imageDTO?.starred, customStarUi]);

  const dataTestId = useMemo(() => getGalleryImageDataTestId(imageDTO.image_name), [imageDTO.image_name]);
  const dataTestId = useMemo(() => getGalleryImageDataTestId(imageDTO?.image_name), [imageDTO?.image_name]);

  if (!imageDTO) {
    return <IAIFillSkeleton />;
  }

  return (
    <Box w="full" h="full" p={1.5} className={GALLERY_IMAGE_CLASS_NAME} data-testid={dataTestId} sx={boxSx}>
    <Box w="full" h="full" className="gallerygrid-image" data-testid={dataTestId} sx={boxSx}>
      <Flex
        ref={imageContainerRef}
        userSelect="none"
@@ -181,23 +183,14 @@ const GalleryImage = ({ index, imageDTO }: HoverableImageProps) => {
        pointerEvents="none"
      >{`${imageDTO.width}x${imageDTO.height}`}</Text>
      )}
      <IAIDndImageIcon
        onClick={toggleStarredState}
        icon={starIcon}
        tooltip={starTooltip}
        position="absolute"
        top={1}
        insetInlineEnd={1}
      />
      <IAIDndImageIcon onClick={toggleStarredState} icon={starIcon} tooltip={starTooltip} />

      {isHovered && shift && (
        <IAIDndImageIcon
          onClick={handleDelete}
          icon={<PiTrashSimpleFill size="16px" />}
          tooltip={t('gallery.deleteImage_one')}
          position="absolute"
          bottom={1}
          insetInlineEnd={1}
          tooltip={t('gallery.deleteImage', { count: 1 })}
          styleOverrides={imageIconStyleOverrides}
        />
      )}
    </>

@@ -1,32 +1,120 @@
import { Box, Flex, Grid } from '@invoke-ai/ui-library';
import { EMPTY_ARRAY } from 'app/store/constants';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { Box, Button, Flex } from '@invoke-ai/ui-library';
import type { EntityId } from '@reduxjs/toolkit';
import { useAppSelector } from 'app/store/storeHooks';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import { overlayScrollbarsParams } from 'common/components/OverlayScrollbars/constants';
import { virtuosoGridRefs } from 'features/gallery/components/ImageGrid/types';
import { useGalleryHotkeys } from 'features/gallery/hooks/useGalleryHotkeys';
import { selectListImagesQueryArgs } from 'features/gallery/store/gallerySelectors';
import { limitChanged } from 'features/gallery/store/gallerySlice';
import { debounce } from 'lodash-es';
import { memo, useEffect, useMemo, useState } from 'react';
import { useGalleryImages } from 'features/gallery/hooks/useGalleryImages';
import { useOverlayScrollbars } from 'overlayscrollbars-react';
import type { CSSProperties } from 'react';
import { memo, useCallback, useEffect, useRef, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { PiImageBold, PiWarningCircleBold } from 'react-icons/pi';
import { useListImagesQuery } from 'services/api/endpoints/images';
import type { GridComponents, ItemContent, ListRange, VirtuosoGridHandle } from 'react-virtuoso';
import { VirtuosoGrid } from 'react-virtuoso';
import { useBoardTotal } from 'services/api/hooks/useBoardTotal';

import { GALLERY_GRID_CLASS_NAME } from './constants';
import GalleryImage, { GALLERY_IMAGE_CLASS_NAME } from './GalleryImage';
import GalleryImage from './GalleryImage';
import ImageGridItemContainer from './ImageGridItemContainer';
import ImageGridListContainer from './ImageGridListContainer';

const components: GridComponents = {
  Item: ImageGridItemContainer,
  List: ImageGridListContainer,
};

const virtuosoStyles: CSSProperties = { height: '100%' };

const GalleryImageGrid = () => {
  useGalleryHotkeys();
  const { t } = useTranslation();
  const queryArgs = useAppSelector(selectListImagesQueryArgs);
  const { imageDTOs, isLoading, isError, isFetching } = useListImagesQuery(queryArgs, {
    selectFromResult: ({ data, isLoading, isSuccess, isError, isFetching }) => ({
      imageDTOs: data?.items ?? EMPTY_ARRAY,
      isLoading,
      isSuccess,
      isError,
      isFetching,
    }),
  });
  const rootRef = useRef<HTMLDivElement>(null);
  const [scroller, setScroller] = useState<HTMLElement | null>(null);
  const [initialize, osInstance] = useOverlayScrollbars(overlayScrollbarsParams);
  const selectedBoardId = useAppSelector((s) => s.gallery.selectedBoardId);
  const { currentViewTotal } = useBoardTotal(selectedBoardId);
  const virtuosoRangeRef = useRef<ListRange | null>(null);
  const virtuosoRef = useRef<VirtuosoGridHandle>(null);
  const {
    areMoreImagesAvailable,
    handleLoadMoreImages,
    queryResult: { currentData, isFetching, isSuccess, isError },
  } = useGalleryImages();
  useGalleryHotkeys();
  const itemContentFunc: ItemContent<EntityId, void> = useCallback(
    (index, imageName) => <GalleryImage key={imageName} index={index} imageName={imageName as string} />,
    []
  );

  useEffect(() => {
    // Initialize the gallery's custom scrollbar
    const { current: root } = rootRef;
    if (scroller && root) {
      initialize({
        target: root,
        elements: {
          viewport: scroller,
        },
      });
    }
    return () => osInstance()?.destroy();
  }, [scroller, initialize, osInstance]);

  const onRangeChanged = useCallback((range: ListRange) => {
    virtuosoRangeRef.current = range;
  }, []);

  useEffect(() => {
    virtuosoGridRefs.set({ rootRef, virtuosoRangeRef, virtuosoRef });
    return () => {
      virtuosoGridRefs.set({});
    };
  }, []);
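The effect above publishes the grid's refs into `virtuosoGridRefs`, a module-level store, so code outside this component tree (the gallery hotkeys, for instance) can drive the grid imperatively, and the cleanup clears the store on unmount. The store's implementation is not shown in this diff; assuming it is a nanostores-style atom (the codebase already uses @nanostores/react), a consumer might look like:

    // Sketch under that assumption: scroll the grid from outside the tree.
    const { virtuosoRef } = virtuosoGridRefs.get();
    virtuosoRef?.current?.scrollToIndex({ index: 0, behavior: 'auto' });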

  if (!currentData) {
    return (
      <Flex w="full" h="full" alignItems="center" justifyContent="center">
        <IAINoContentFallback label={t('gallery.loading')} icon={PiImageBold} />
      </Flex>
    );
  }

  if (isSuccess && currentData?.ids.length === 0) {
    return (
      <Flex w="full" h="full" alignItems="center" justifyContent="center">
        <IAINoContentFallback label={t('gallery.noImagesInGallery')} icon={PiImageBold} />
      </Flex>
    );
  }

  if (isSuccess && currentData) {
    return (
      <>
        <Box ref={rootRef} data-overlayscrollbars="" h="100%" id="gallery-grid">
          <VirtuosoGrid
            style={virtuosoStyles}
            data={currentData.ids}
            endReached={handleLoadMoreImages}
            components={components}
            scrollerRef={setScroller}
            itemContent={itemContentFunc}
            ref={virtuosoRef}
            rangeChanged={onRangeChanged}
            overscan={10}
          />
        </Box>
        <Button
          onClick={handleLoadMoreImages}
          isDisabled={!areMoreImagesAvailable}
          isLoading={isFetching}
          loadingText={t('gallery.loading')}
          flexShrink={0}
        >
          {`${t('accessibility.loadMore')} (${currentData.ids.length} / ${currentViewTotal})`}
        </Button>
      </>
    );
  }

  if (isError) {
    return (
@@ -36,115 +124,7 @@ const GalleryImageGrid = () => {
    );
  }

  if (isLoading || isFetching) {
    return (
      <Flex w="full" h="full" alignItems="center" justifyContent="center">
        <IAINoContentFallback label={t('gallery.loading')} icon={PiImageBold} />
      </Flex>
    );
  }

  if (imageDTOs.length === 0) {
    return (
      <Flex w="full" h="full" alignItems="center" justifyContent="center">
        <IAINoContentFallback label={t('gallery.noImagesInGallery')} icon={PiImageBold} />
      </Flex>
    );
  }

  return <Content />;
  return null;
};

export default memo(GalleryImageGrid);

const Content = () => {
  const dispatch = useAppDispatch();
  const galleryImageMinimumWidth = useAppSelector((s) => s.gallery.galleryImageMinimumWidth);

  const queryArgs = useAppSelector(selectListImagesQueryArgs);
  const { imageDTOs } = useListImagesQuery(queryArgs, {
    selectFromResult: ({ data }) => ({ imageDTOs: data?.items ?? EMPTY_ARRAY }),
  });
  // Use a callback ref to get reactivity on the container element because it is conditionally rendered
  const [container, containerRef] = useState<HTMLDivElement | null>(null);

  const calculateNewLimit = useMemo(() => {
    // Debounce this to not thrash the API
    return debounce(() => {
      if (!container) {
        // Container not rendered yet
        return;
      }
      // Managing refs for dynamically rendered components is a bit tedious:
      // - https://react.dev/learn/manipulating-the-dom-with-refs#how-to-manage-a-list-of-refs-using-a-ref-callback
      // As an easy workaround, we can just grab the first gallery image element directly.
      const galleryImageEl = document.querySelector(`.${GALLERY_IMAGE_CLASS_NAME}`);
      if (!galleryImageEl) {
        // No images in gallery?
        return;
      }

      const galleryImageRect = galleryImageEl.getBoundingClientRect();
      const containerRect = container.getBoundingClientRect();

      if (!galleryImageRect.width || !galleryImageRect.height || !containerRect.width || !containerRect.height) {
        // Gallery is too small to fit images or not rendered yet
        return;
      }

      // Floating-point precision requires we round to get the correct number of images per row
      const imagesPerRow = Math.round(containerRect.width / galleryImageRect.width);
      // However, when calculating the number of images per column, we want to floor the value to not overflow the container
      const imagesPerColumn = Math.floor(containerRect.height / galleryImageRect.height);
      // Always load at least 1 row of images
      const limit = Math.max(imagesPerRow, imagesPerRow * imagesPerColumn);
      dispatch(limitChanged(limit));
    }, 300);
  }, [container, dispatch]);
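To make the limit arithmetic concrete: with a 1000×600 px container and 120×120 px thumbnails, `imagesPerRow = Math.round(1000 / 120) = 8` and `imagesPerColumn = Math.floor(600 / 120) = 5`, so the query limit becomes `Math.max(8, 8 * 5) = 40`. If the container were shorter than one thumbnail, `imagesPerColumn` would floor to 0 and the `Math.max` would still request one full row of 8.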

  useEffect(() => {
    // We want to recalculate the limit when image size changes
    calculateNewLimit();
  }, [calculateNewLimit, galleryImageMinimumWidth]);

  useEffect(() => {
    if (!container) {
      return;
    }

    const resizeObserver = new ResizeObserver(calculateNewLimit);
    resizeObserver.observe(container);

    // First render
    calculateNewLimit();

    return () => {
      resizeObserver.disconnect();
    };
  }, [calculateNewLimit, container, dispatch]);

  return (
    <Box position="relative" w="full" h="full">
      <Box
        ref={containerRef}
        position="absolute"
        top={0}
        right={0}
        bottom={0}
        left={0}
        w="full"
        h="full"
        overflow="hidden"
      >
        <Grid
          className={GALLERY_GRID_CLASS_NAME}
          gridTemplateColumns={`repeat(auto-fill, minmax(${galleryImageMinimumWidth}px, 1fr))`}
        >
          {imageDTOs.map((imageDTO, index) => (
            <GalleryImage key={imageDTO.image_name} imageDTO={imageDTO} index={index} />
          ))}
        </Grid>
      </Box>
    </Box>
  );
};

@@ -1,73 +0,0 @@
import { Button, Flex, IconButton, Spacer, Text } from '@invoke-ai/ui-library';
import { useGalleryPagination } from 'features/gallery/hooks/useGalleryPagination';
import { PiCaretDoubleLeftBold, PiCaretDoubleRightBold, PiCaretLeftBold, PiCaretRightBold } from 'react-icons/pi';

export const GalleryPagination = () => {
  const {
    goPrev,
    goNext,
    goToFirst,
    goToLast,
    isFirstEnabled,
    isLastEnabled,
    isPrevEnabled,
    isNextEnabled,
    pageButtons,
    goToPage,
    currentPage,
    rangeDisplay,
    total,
  } = useGalleryPagination();

  if (!total) {
    return <Flex flexDir="column" alignItems="center" gap="2" height="48px"></Flex>;
  }

  return (
    <Flex flexDir="column" alignItems="center" gap="2" height="48px">
      <Flex gap={2} alignItems="center" w="full">
        <IconButton
          size="sm"
          aria-label="prev"
          icon={<PiCaretDoubleLeftBold />}
          onClick={goToFirst}
          isDisabled={!isFirstEnabled}
        />
        <IconButton
          size="sm"
          aria-label="prev"
          icon={<PiCaretLeftBold />}
          onClick={goPrev}
          isDisabled={!isPrevEnabled}
        />
        <Spacer />
        {pageButtons.map((page) => (
          <Button
            size="sm"
            key={page}
            onClick={goToPage.bind(null, page)}
            variant={currentPage === page ? 'solid' : 'outline'}
          >
            {page + 1}
          </Button>
        ))}
        <Spacer />
        <IconButton
          size="sm"
          aria-label="next"
          icon={<PiCaretRightBold />}
          onClick={goNext}
          isDisabled={!isNextEnabled}
        />
        <IconButton
          size="sm"
          aria-label="next"
          icon={<PiCaretDoubleRightBold />}
          onClick={goToLast}
          isDisabled={!isLastEnabled}
        />
      </Flex>
      <Text>{rangeDisplay}</Text>
    </Flex>
  );
};