Compare commits

..

1 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| brandonrising | 0b238b1ece | Update probe to always use cpu for loading models | 2024-04-03 16:29:38 -04:00 |
116 changed files with 707 additions and 1710 deletions

View File

@@ -18,47 +18,12 @@ Note that any releases marked as _pre-release_ are in a beta state. You may expe
The Model Manager tab in the UI provides a few ways to install models, including using your already-downloaded models. You'll see a popup directing you there on first startup. For more information, see the [model install docs].
## Missing models after updating to v4
If you find some models are missing after updating to v4, it's likely they weren't correctly registered before the update and didn't get picked up in the migration.
You can use the `Scan Folder` tab in the Model Manager UI to fix this. The models will either be in the old, now-unused `autoimport` folder, or your `models` folder.
- Find and copy your install's old `autoimport` folder path, inside the main install folder.
- Go to the Model Manager and click `Scan Folder`.
- Paste the path and scan.
- IMPORTANT: Uncheck `Inplace install`.
- Click `Install All` to install all found models, or just install the models you want.
Next, find and copy your install's `models` folder path (this could be your custom models folder path, or the `models` folder inside the main install folder).
Follow the same steps to scan and import the missing models.
## Slow generation
- Check the [system requirements] to ensure that your system is capable of generating images.
- Check the `ram` setting in `invokeai.yaml`. This setting tells Invoke how much of your system RAM can be used to cache models. Having this too high or too low can slow things down. That said, it's generally safest to not set this at all and instead let Invoke manage it.
- Check the `vram` setting in `invokeai.yaml`. This setting tells Invoke how much of your GPU VRAM can be used to cache models. Counter-intuitively, if this setting is too high, Invoke will need to do a lot of shuffling of models as it juggles the VRAM cache and the currently-loaded model. The default value of 0.25 generally works well for GPUs with less than 16GB of VRAM. Even on a 24GB card, the default works well. (See the sketch after this list for where these settings live.)
- Check that your generations are happening on your GPU (if you have one). InvokeAI will log what is being used for generation upon startup. If your GPU isn't used, re-install to ensure the correct versions of torch get installed.
- If you are on Windows, you may have exceeded your GPU's VRAM capacity and are using slower [shared GPU memory](#shared-gpu-memory-windows). There's a guide to opt out of this behaviour in the linked FAQ entry.
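For reference, here is a minimal sketch of checking the `ram`/`vram` cache settings mentioned above, assuming your config lives at `$INVOKEAI_ROOT/invokeai.yaml` (the `ram` value shown is purely illustrative):

```bash
# Quick check of whether the cache settings are present in your config:
grep -E '^(ram|vram):' "$INVOKEAI_ROOT/invokeai.yaml"
# Example output if they are set (illustrative values only):
#   ram: 12.0    # system RAM used to cache models
#   vram: 0.25   # GPU VRAM used to cache models (0.25 is the default)
```

If the command prints nothing, the settings are unset and Invoke manages the caches itself, which is usually the safest choice.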
## Shared GPU Memory (Windows)
!!! tip "Nvidia GPUs with driver 536.40"
This only applies to current Nvidia cards with driver 536.40 or later, released in June 2023.
When the GPU doesn't have enough VRAM for a task, Windows is able to allocate some of its CPU RAM to the GPU. This is much slower than VRAM, but it does allow the system to generate when it otherwise might not have enough VRAM.
When shared GPU memory is used, generation slows down dramatically - but at least it doesn't crash.
If you'd like to opt out of this behavior and instead get an error when you exceed your GPU's VRAM, follow [this guide from Nvidia](https://nvidia.custhelp.com/app/answers/detail/a_id/5490).
Here's how to get the python path required in the linked guide:
- Run `invoke.bat`.
- Select option 2 for developer console.
- At least one python path will be printed. Copy the path that includes your invoke installation directory (typically the first).
## Installer cannot find python (Windows)

View File

@@ -44,7 +44,7 @@ The installation process is simple, with a few prompts:
- Select the version to install. Unless you have a specific reason to install a particular version, select the default (the latest version).
- Select location for the install. Be sure you have enough space in this folder for the base application, as described in the [installation requirements].
- Select a GPU device.
- Select a GPU device. If you are unsure, you can let the installer figure it out.
!!! info "Slow Installation"

View File

@@ -6,7 +6,11 @@
## Introduction
InvokeAI is distributed as a python package on PyPI, installable with `pip`. There are a few things that are handled by the installer and launcher that you'll need to manage manually, described in this guide.
!!! tip "Conda"
As of InvokeAI v2.3.0 installation using the `conda` package manager is no longer being supported. It will likely still work, but we are not testing this installation method.
InvokeAI is distributed as a python package on PyPI, installable with `pip`. There are a few things that are handled by the installer that you'll need to manage manually, described in this guide.
### Requirements
@@ -36,11 +40,11 @@ Before you start, go through the [installation requirements].
1. Enter the root (invokeai) directory and create a virtual Python environment within it named `.venv`.
!!! warning "Virtual Environment Location"
!!! info "Virtual Environment Location"
While you may create the virtual environment anywhere in the file system, we recommend that you create it within the root directory as shown here. This allows the application to automatically detect its data directories.
If you choose a different location for the venv, then you _must_ set the `INVOKEAI_ROOT` environment variable or specify the root directory using the `--root` CLI arg.
If you choose a different location for the venv, then you must set the `INVOKEAI_ROOT` environment variable or pass the directory using the `--root` CLI arg.
```terminal
cd $INVOKEAI_ROOT
@@ -77,23 +81,31 @@ Before you start, go through the [installation requirements].
python3 -m pip install --upgrade pip
```
1. Install the InvokeAI Package. The base command is `pip install InvokeAI --use-pep517`, but you may need to change this depending on your system and the desired features.
1. Install the InvokeAI Package. The `--extra-index-url` option is used to select the correct `torch` backend:
- You may need to provide an [extra index URL]. Select your platform configuration using [this tool on the PyTorch website]. Copy the `--extra-index-url` string from this and append it to your install command.
=== "CUDA (NVidia)"
!!! example "Install with an extra index URL"
```bash
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
```bash
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
=== "ROCm (AMD)"
- If you have a CUDA GPU and want to install with `xformers`, you need to add an option to the package name. Note that `xformers` is not necessary. PyTorch includes an implementation of the SDP attention algorithm with the same performance.
```bash
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
!!! example "Install with `xformers`"
=== "CPU (Intel Macs & non-GPU systems)"
```bash
pip install "InvokeAI[xformers]" --use-pep517
```
```bash
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```
=== "MPS (Apple Silicon)"
```bash
pip install InvokeAI --use-pep517
```
1. Deactivate and reactivate your virtual environment so that the invokeai-specific commands become available in the environment:
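    A minimal sketch of this step, assuming a Linux/macOS shell and the `.venv` created above (on Windows, use `.venv\Scripts\activate.bat` instead):

    ```bash
    deactivate
    source .venv/bin/activate
    ```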
@@ -114,6 +126,37 @@ Before you start, go through the [installation requirements].
Run `invokeai-web` to start the UI. You must activate the virtual environment before running the app.
!!! warning
If the virtual environment you selected is NOT inside `INVOKEAI_ROOT`, then you must specify the path to the root directory by adding
`--root_dir \path\to\invokeai`.
If the virtual environment is _not_ inside the root directory, then you _must_ specify the path to the root directory with `--root_dir \path\to\invokeai` or the `INVOKEAI_ROOT` environment variable.
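For example, a minimal launch sequence, assuming the default layout with the venv at `.venv` inside the root directory:

```bash
source .venv/bin/activate
invokeai-web
# If the venv lives elsewhere, point the app at the root explicitly:
# invokeai-web --root_dir /path/to/invokeai
```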
!!! tip
You can permanently set the location of the runtime directory
by setting the environment variable `INVOKEAI_ROOT` to the
path of the directory. As mentioned previously, this is
recommended if your virtual environment is located outside of
your runtime directory.
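One way to set it permanently on Linux/macOS, assuming a bash shell and an install at `~/invokeai` (adjust the profile file and path to your setup):

```bash
echo 'export INVOKEAI_ROOT="$HOME/invokeai"' >> ~/.bashrc
source ~/.bashrc
invokeai-web   # no --root_dir needed once INVOKEAI_ROOT is set
```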
## Unsupported Conda Install
Congratulations, you found the "secret" Conda installation instructions. If you really **really** want to use Conda with InvokeAI, you can do so using this unsupported recipe:
```sh
mkdir ~/invokeai
conda create -n invokeai python=3.11
conda activate invokeai
# Adjust this as described above for the appropriate torch backend
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
invokeai-web --root ~/invokeai
```
The `pip install` command shown in this recipe is for Linux/Windows
systems with an NVIDIA GPU. See step (6) above for the command to use
with other platforms/GPU combinations. If you don't wish to pass the
`--root` argument to `invokeai` with each launch, you may set the
environment variable `INVOKEAI_ROOT` to point to the installation directory.
Note that if you run into problems with the Conda installation, the InvokeAI
staff will **not** be able to help you out. Caveat Emptor!
[installation requirements]: INSTALL_REQUIREMENTS.md

View File

@@ -32,5 +32,5 @@ As described in the [frontend dev toolchain] docs, you can run the UI using a de
[Fork and clone]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo
[InvokeAI repo]: https://github.com/invoke-ai/InvokeAI
[frontend dev toolchain]: ../contributing/frontend/OVERVIEW.md
[manual installation]: ./020_INSTALL_MANUAL.md
[manual installation]: installation/020_INSTALL_MANUAL.md
[editable install]: https://pip.pypa.io/en/latest/cli/pip_install/#cmdoption-e

View File

@@ -3,7 +3,6 @@
InvokeAI installer script
"""
import locale
import os
import platform
import re
@@ -317,9 +316,7 @@ def upgrade_pip(venv_path: Path) -> str | None:
python = str(venv_path.expanduser().resolve() / python)
try:
result = subprocess.check_output([python, "-m", "pip", "install", "--upgrade", "pip"]).decode(
encoding=locale.getpreferredencoding()
)
result = subprocess.check_output([python, "-m", "pip", "install", "--upgrade", "pip"]).decode()
except subprocess.CalledProcessError as e:
print(e)
result = None
@@ -407,29 +404,22 @@ def get_torch_source() -> Tuple[str | None, str | None]:
# device can be one of: "cuda", "rocm", "cpu", "cuda_and_dml, autodetect"
device = select_gpu()
# The correct extra index URLs for torch are inconsistent, see https://pytorch.org/get-started/locally/#start-locally
url = None
optional_modules: str | None = None
optional_modules = "[onnx]"
if OS == "Linux":
if device.value == "rocm":
url = "https://download.pytorch.org/whl/rocm5.6"
elif device.value == "cpu":
url = "https://download.pytorch.org/whl/cpu"
elif device.value == "cuda":
# CUDA uses the default PyPi index
optional_modules = "[xformers,onnx-cuda]"
elif OS == "Windows":
if device.value == "cuda":
url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-cuda]"
elif device.value == "cpu":
# CPU uses the default PyPi index, no optional modules
pass
elif OS == "Darwin":
# macOS uses the default PyPi index, no optional modules
pass
if device.value == "cuda_and_dml":
url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-directml]"
# Fall back to defaults
# in all other cases, Torch wheels should be coming from PyPi as of Torch 1.13
return (url, optional_modules)

View File

@@ -207,8 +207,10 @@ def dest_path(dest: Optional[str | Path] = None) -> Path | None:
class GpuType(Enum):
CUDA = "cuda"
CUDA_AND_DML = "cuda_and_dml"
ROCM = "rocm"
CPU = "cpu"
AUTODETECT = "autodetect"
def select_gpu() -> GpuType:
@@ -224,6 +226,10 @@ def select_gpu() -> GpuType:
"an [gold1 b]NVIDIA[/] GPU (using CUDA™)",
GpuType.CUDA,
)
nvidia_with_dml = (
"an [gold1 b]NVIDIA[/] GPU (using CUDA™, and DirectML™ for ONNX) -- ALPHA",
GpuType.CUDA_AND_DML,
)
amd = (
"an [gold1 b]AMD[/] GPU (using ROCm™)",
GpuType.ROCM,
@@ -232,19 +238,27 @@ def select_gpu() -> GpuType:
"Do not install any GPU support, use CPU for generation (slow)",
GpuType.CPU,
)
autodetect = (
"I'm not sure what to choose",
GpuType.AUTODETECT,
)
options = []
if OS == "Windows":
options = [nvidia, cpu]
options = [nvidia, nvidia_with_dml, cpu]
if OS == "Linux":
options = [nvidia, amd, cpu]
elif OS == "Darwin":
options = [cpu]
# future CoreML?
if len(options) == 1:
print(f'Your platform [gold1]{OS}-{ARCH}[/] only supports the "{options[0][1]}" driver. Proceeding with that.')
return options[0][1]
# "I don't know" is always added the last option
options.append(autodetect) # type: ignore
options = {str(i): opt for i, opt in enumerate(options, 1)}
console.rule(":space_invader: GPU (Graphics Card) selection :space_invader:")
@@ -278,6 +292,11 @@ def select_gpu() -> GpuType:
),
)
if options[choice][1] is GpuType.AUTODETECT:
console.print(
"No problem. We will install CUDA support first :crossed_fingers: If Invoke does not detect a GPU, please re-run the installer and select one of the other GPU types."
)
return options[choice][1]

View File

@@ -12,7 +12,7 @@ from pydantic import BaseModel, Field
from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch
from invokeai.backend.image_util.patchmatch import PatchMatch
from invokeai.backend.image_util.safety_checker import SafetyChecker
from invokeai.backend.util.logging import logging
from invokeai.version import __version__
@@ -100,7 +100,7 @@ async def get_app_deps() -> AppDependencyVersions:
@app_router.get("/config", operation_id="get_config", status_code=200, response_model=AppConfig)
async def get_config() -> AppConfig:
infill_methods = ["tile", "lama", "cv2", "color"] # TODO: add mosaic back
infill_methods = ["tile", "lama", "cv2"]
if PatchMatch.patchmatch_available():
infill_methods.append("patchmatch")

View File

@@ -219,13 +219,28 @@ async def scan_for_models(
non_core_model_paths = [p for p in found_model_paths if not p.is_relative_to(core_models_path)]
installed_models = ApiDependencies.invoker.services.model_manager.store.search_by_attr()
resolved_installed_model_paths: list[str] = []
installed_model_sources: list[str] = []
# This call lists all installed models.
for model in installed_models:
path = pathlib.Path(model.path)
# If the model has a source, we need to add it to the list of installed sources.
if model.source:
installed_model_sources.append(model.source)
# If the path is not absolute, that means it is in the app models directory, and we need to join it with
# the models path before resolving.
if not path.is_absolute():
resolved_installed_model_paths.append(str(pathlib.Path(models_path, path).resolve()))
continue
resolved_installed_model_paths.append(str(path.resolve()))
scan_results: list[FoundModel] = []
# Check if the model is installed by comparing paths, appending to the scan result.
# Check if the model is installed by comparing the resolved paths, appending to the scan result.
for p in non_core_model_paths:
path = str(p)
is_installed = any(str(models_path / m.path) == path for m in installed_models)
is_installed = path in resolved_installed_model_paths or path in installed_model_sources
found_model = FoundModel(path=path, is_installed=is_installed)
scan_results.append(found_model)
except Exception as e:

View File

@@ -3,7 +3,6 @@ Invoke-managed custom node loader. See README.md for more information.
"""
import sys
import traceback
from importlib.util import module_from_spec, spec_from_file_location
from pathlib import Path
@@ -42,15 +41,11 @@ for d in Path(__file__).parent.iterdir():
logger.info(f"Loading node pack {module_name}")
try:
module = module_from_spec(spec)
sys.modules[spec.name] = module
spec.loader.exec_module(module)
module = module_from_spec(spec)
sys.modules[spec.name] = module
spec.loader.exec_module(module)
loaded_count += 1
except Exception:
full_error = traceback.format_exc()
logger.error(f"Failed to load node pack {module_name}:\n{full_error}")
loaded_count += 1
del init, module_name

View File

@@ -1,91 +1,154 @@
from abc import abstractmethod
from typing import Literal, get_args
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) and the InvokeAI Team
from PIL import Image
import math
from typing import Literal, Optional, get_args
import numpy as np
from PIL import Image, ImageOps
from invokeai.app.invocations.fields import ColorField, ImageField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.app.util.misc import SEED_MAX
from invokeai.backend.image_util.infill_methods.cv2_inpaint import cv2_inpaint
from invokeai.backend.image_util.infill_methods.lama import LaMA
from invokeai.backend.image_util.infill_methods.mosaic import infill_mosaic
from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch, infill_patchmatch
from invokeai.backend.image_util.infill_methods.tile import infill_tile
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.backend.image_util.cv2_inpaint import cv2_inpaint
from invokeai.backend.image_util.lama import LaMA
from invokeai.backend.image_util.patchmatch import PatchMatch
from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, WithBoard, WithMetadata
from .image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES
logger = InvokeAILogger.get_logger()
def get_infill_methods():
methods = Literal["tile", "color", "lama", "cv2"] # TODO: add mosaic back
def infill_methods() -> list[str]:
methods = ["tile", "solid", "lama", "cv2"]
if PatchMatch.patchmatch_available():
methods = Literal["patchmatch", "tile", "color", "lama", "cv2"] # TODO: add mosaic back
methods.insert(0, "patchmatch")
return methods
INFILL_METHODS = get_infill_methods()
INFILL_METHODS = Literal[tuple(infill_methods())]
DEFAULT_INFILL_METHOD = "patchmatch" if "patchmatch" in get_args(INFILL_METHODS) else "tile"
class InfillImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Base class for invocations that preprocess images for Infilling"""
def infill_lama(im: Image.Image) -> Image.Image:
lama = LaMA()
return lama(im)
image: ImageField = InputField(description="The image to process")
@abstractmethod
def infill(self, image: Image.Image) -> Image.Image:
"""Infill the image with the specified method"""
pass
def infill_patchmatch(im: Image.Image) -> Image.Image:
if im.mode != "RGBA":
return im
def load_image(self, context: InvocationContext) -> tuple[Image.Image, bool]:
"""Process the image to have an alpha channel before being infilled"""
image = context.images.get_pil(self.image.image_name)
has_alpha = True if image.mode == "RGBA" else False
return image, has_alpha
# Skip patchmatch if patchmatch isn't available
if not PatchMatch.patchmatch_available():
return im
def invoke(self, context: InvocationContext) -> ImageOutput:
# Retrieve and process image to be infilled
input_image, has_alpha = self.load_image(context)
# Patchmatch (note, we may want to expose patch_size? Increasing it significantly impacts performance though)
im_patched_np = PatchMatch.inpaint(im.convert("RGB"), ImageOps.invert(im.split()[-1]), patch_size=3)
im_patched = Image.fromarray(im_patched_np, mode="RGB")
return im_patched
# If the input image has no alpha channel, return it
if has_alpha is False:
return ImageOutput.build(context.images.get_dto(self.image.image_name))
# Perform Infill action
infilled_image = self.infill(input_image)
def infill_cv2(im: Image.Image) -> Image.Image:
return cv2_inpaint(im)
# Create ImageDTO for Infilled Image
infilled_image_dto = context.images.save(image=infilled_image)
# Return Infilled Image
return ImageOutput.build(infilled_image_dto)
def get_tile_images(image: np.ndarray, width=8, height=8):
_nrows, _ncols, depth = image.shape
_strides = image.strides
nrows, _m = divmod(_nrows, height)
ncols, _n = divmod(_ncols, width)
if _m != 0 or _n != 0:
return None
return np.lib.stride_tricks.as_strided(
np.ravel(image),
shape=(nrows, ncols, height, width, depth),
strides=(height * _strides[0], width * _strides[1], *_strides),
writeable=False,
)
def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int] = None) -> Image.Image:
# Only fill if there's an alpha layer
if im.mode != "RGBA":
return im
a = np.asarray(im, dtype=np.uint8)
tile_size_tuple = (tile_size, tile_size)
# Get the image as tiles of a specified size
tiles = get_tile_images(a, *tile_size_tuple).copy()
# Get the mask as tiles
tiles_mask = tiles[:, :, :, :, 3]
# Find any mask tiles with any fully transparent pixels (we will be replacing these later)
tmask_shape = tiles_mask.shape
tiles_mask = tiles_mask.reshape(math.prod(tiles_mask.shape))
n, ny = (math.prod(tmask_shape[0:2])), math.prod(tmask_shape[2:])
tiles_mask = tiles_mask > 0
tiles_mask = tiles_mask.reshape((n, ny)).all(axis=1)
# Get RGB tiles in single array and filter by the mask
tshape = tiles.shape
tiles_all = tiles.reshape((math.prod(tiles.shape[0:2]), *tiles.shape[2:]))
filtered_tiles = tiles_all[tiles_mask]
if len(filtered_tiles) == 0:
return im
# Find all invalid tiles and replace with a random valid tile
replace_count = (tiles_mask == False).sum() # noqa: E712
rng = np.random.default_rng(seed=seed)
tiles_all[np.logical_not(tiles_mask)] = filtered_tiles[rng.choice(filtered_tiles.shape[0], replace_count), :, :, :]
# Convert back to an image
tiles_all = tiles_all.reshape(tshape)
tiles_all = tiles_all.swapaxes(1, 2)
st = tiles_all.reshape(
(
math.prod(tiles_all.shape[0:2]),
math.prod(tiles_all.shape[2:4]),
tiles_all.shape[4],
)
)
si = Image.fromarray(st, mode="RGBA")
return si
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
class InfillColorInvocation(InfillImageProcessorInvocation):
class InfillColorInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image with a solid color"""
image: ImageField = InputField(description="The image to infill")
color: ColorField = InputField(
default=ColorField(r=127, g=127, b=127, a=255),
description="The color to use to infill",
)
def infill(self, image: Image.Image):
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name)
solid_bg = Image.new("RGBA", image.size, self.color.tuple())
infilled = Image.alpha_composite(solid_bg, image.convert("RGBA"))
infilled.paste(image, (0, 0), image.split()[-1])
return infilled
image_dto = context.images.save(image=infilled)
return ImageOutput.build(image_dto)
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.3")
class InfillTileInvocation(InfillImageProcessorInvocation):
class InfillTileInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image with tiles of the image"""
image: ImageField = InputField(description="The image to infill")
tile_size: int = InputField(default=32, ge=1, description="The tile size (px)")
seed: int = InputField(
default=0,
@@ -94,74 +157,92 @@ class InfillTileInvocation(InfillImageProcessorInvocation):
description="The seed to use for tile generation (omit for random)",
)
def infill(self, image: Image.Image):
output = infill_tile(image, seed=self.seed, tile_size=self.tile_size)
return output.infilled
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name)
infilled = tile_fill_missing(image.copy(), seed=self.seed, tile_size=self.tile_size)
infilled.paste(image, (0, 0), image.split()[-1])
image_dto = context.images.save(image=infilled)
return ImageOutput.build(image_dto)
@invocation(
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2"
)
class InfillPatchMatchInvocation(InfillImageProcessorInvocation):
class InfillPatchMatchInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image using the PatchMatch algorithm"""
image: ImageField = InputField(description="The image to infill")
downscale: float = InputField(default=2.0, gt=0, description="Run patchmatch on downscaled image to speedup infill")
resample_mode: PIL_RESAMPLING_MODES = InputField(default="bicubic", description="The resampling mode")
def infill(self, image: Image.Image):
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name).convert("RGBA")
resample_mode = PIL_RESAMPLING_MAP[self.resample_mode]
infill_image = image.copy()
width = int(image.width / self.downscale)
height = int(image.height / self.downscale)
infilled = image.resize(
infill_image = infill_image.resize(
(width, height),
resample=resample_mode,
)
infilled = infill_patchmatch(image)
if PatchMatch.patchmatch_available():
infilled = infill_patchmatch(infill_image)
else:
raise ValueError("PatchMatch is not available on this system")
infilled = infilled.resize(
(image.width, image.height),
resample=resample_mode,
)
infilled.paste(image, (0, 0), mask=image.split()[-1])
return infilled
infilled.paste(image, (0, 0), mask=image.split()[-1])
# image.paste(infilled, (0, 0), mask=image.split()[-1])
image_dto = context.images.save(image=infilled)
return ImageOutput.build(image_dto)
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
class LaMaInfillInvocation(InfillImageProcessorInvocation):
class LaMaInfillInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image using the LaMa model"""
def infill(self, image: Image.Image):
lama = LaMA()
return lama(image)
image: ImageField = InputField(description="The image to infill")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name)
# Downloads the LaMa model if it doesn't already exist
download_with_progress_bar(
name="LaMa Inpainting Model",
url="https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt",
dest_path=context.config.get().models_path / "core/misc/lama/lama.pt",
)
infilled = infill_lama(image.copy())
image_dto = context.images.save(image=infilled)
return ImageOutput.build(image_dto)
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
class CV2InfillInvocation(InfillImageProcessorInvocation):
class CV2InfillInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image using OpenCV Inpainting"""
def infill(self, image: Image.Image):
return cv2_inpaint(image)
# @invocation(
# "infill_mosaic", title="Mosaic Infill", tags=["image", "inpaint", "outpaint"], category="inpaint", version="1.0.0"
# )
class MosaicInfillInvocation(InfillImageProcessorInvocation):
"""Infills transparent areas of an image with a mosaic pattern drawing colors from the rest of the image"""
image: ImageField = InputField(description="The image to infill")
tile_width: int = InputField(default=64, description="Width of the tile")
tile_height: int = InputField(default=64, description="Height of the tile")
min_color: ColorField = InputField(
default=ColorField(r=0, g=0, b=0, a=255),
description="The min threshold for color",
)
max_color: ColorField = InputField(
default=ColorField(r=255, g=255, b=255, a=255),
description="The max threshold for color",
)
def infill(self, image: Image.Image):
return infill_mosaic(image, (self.tile_width, self.tile_height), self.min_color.tuple(), self.max_color.tuple())
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name)
infilled = infill_cv2(image.copy())
image_dto = context.images.save(image=infilled)
return ImageOutput.build(image_dto)

View File

@@ -1,22 +1,21 @@
from builtins import float
from typing import List, Literal, Union
from typing import List, Union
from pydantic import BaseModel, Field, field_validator, model_validator
from typing_extensions import Self
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
IPAdapterCheckpointConfig,
IPAdapterInvokeAIConfig,
ModelType,
)
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, IPAdapterConfig, ModelType
class IPAdapterField(BaseModel):
@@ -49,15 +48,12 @@ class IPAdapterOutput(BaseInvocationOutput):
ip_adapter: IPAdapterField = OutputField(description=FieldDescriptions.ip_adapter, title="IP-Adapter")
CLIP_VISION_MODEL_MAP = {"ViT-H": "ip_adapter_sd_image_encoder", "ViT-G": "ip_adapter_sdxl_image_encoder"}
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.2.2")
class IPAdapterInvocation(BaseInvocation):
"""Collects IP-Adapter info to pass to other nodes."""
# Inputs
image: Union[ImageField, List[ImageField]] = InputField(description="The IP-Adapter image prompt(s).", ui_order=1)
image: Union[ImageField, List[ImageField]] = InputField(description="The IP-Adapter image prompt(s).")
ip_adapter_model: ModelIdentifierField = InputField(
description="The IP-Adapter model.",
title="IP-Adapter Model",
@@ -65,11 +61,7 @@ class IPAdapterInvocation(BaseInvocation):
ui_order=-1,
ui_type=UIType.IPAdapterModel,
)
clip_vision_model: Literal["ViT-H", "ViT-G"] = InputField(
description="CLIP Vision model to use. Overrides model settings. Mandatory for checkpoint models.",
default="ViT-H",
ui_order=2,
)
weight: Union[float, List[float]] = InputField(
default=1, description="The weight given to the IP-Adapter", title="Weight"
)
@@ -94,16 +86,10 @@ class IPAdapterInvocation(BaseInvocation):
def invoke(self, context: InvocationContext) -> IPAdapterOutput:
# Lookup the CLIP Vision encoder that is intended to be used with the IP-Adapter model.
ip_adapter_info = context.models.get_config(self.ip_adapter_model.key)
assert isinstance(ip_adapter_info, (IPAdapterInvokeAIConfig, IPAdapterCheckpointConfig))
if isinstance(ip_adapter_info, IPAdapterInvokeAIConfig):
image_encoder_model_id = ip_adapter_info.image_encoder_model_id
image_encoder_model_name = image_encoder_model_id.split("/")[-1].strip()
else:
image_encoder_model_name = CLIP_VISION_MODEL_MAP[self.clip_vision_model]
assert isinstance(ip_adapter_info, IPAdapterConfig)
image_encoder_model_id = ip_adapter_info.image_encoder_model_id
image_encoder_model_name = image_encoder_model_id.split("/")[-1].strip()
image_encoder_model = self._get_image_encoder(context, image_encoder_model_name)
return IPAdapterOutput(
ip_adapter=IPAdapterField(
image=self.image,
@@ -116,25 +102,19 @@ class IPAdapterInvocation(BaseInvocation):
)
def _get_image_encoder(self, context: InvocationContext, image_encoder_model_name: str) -> AnyModelConfig:
image_encoder_models = context.models.search_by_attrs(
name=image_encoder_model_name, base=BaseModelType.Any, type=ModelType.CLIPVision
)
if not len(image_encoder_models) > 0:
context.logger.warning(
f"The image encoder required by this IP Adapter ({image_encoder_model_name}) is not installed. \
Downloading and installing now. This may take a while."
)
installer = context._services.model_manager.install
job = installer.heuristic_import(f"InvokeAI/{image_encoder_model_name}")
installer.wait_for_job(job, timeout=600) # Wait for up to 10 minutes
found = False
while not found:
image_encoder_models = context.models.search_by_attrs(
name=image_encoder_model_name, base=BaseModelType.Any, type=ModelType.CLIPVision
)
if len(image_encoder_models) == 0:
context.logger.error("Error while fetching CLIP Vision Image Encoder")
assert len(image_encoder_models) == 1
found = len(image_encoder_models) > 0
if not found:
context.logger.warning(
f"The image encoder required by this IP Adapter ({image_encoder_model_name}) is not installed."
)
context.logger.warning("Downloading and installing now. This may take a while.")
installer = context._services.model_manager.install
job = installer.heuristic_import(f"InvokeAI/{image_encoder_model_name}")
installer.wait_for_job(job, timeout=600) # wait up to 10 minutes - then raise a TimeoutException
assert len(image_encoder_models) == 1
return image_encoder_models[0]

View File

@@ -43,7 +43,11 @@ from invokeai.app.invocations.fields import (
WithMetadata,
)
from invokeai.app.invocations.ip_adapter import IPAdapterField
from invokeai.app.invocations.primitives import DenoiseMaskOutput, ImageOutput, LatentsOutput
from invokeai.app.invocations.primitives import (
DenoiseMaskOutput,
ImageOutput,
LatentsOutput,
)
from invokeai.app.invocations.t2i_adapter import T2IAdapterField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import prepare_control_image
@@ -64,7 +68,12 @@ from ...backend.stable_diffusion.diffusers_pipeline import (
)
from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
from ...backend.util.devices import choose_precision, choose_torch_device
from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
invocation,
invocation_output,
)
from .controlnet_image_processors import ControlField
from .model import ModelIdentifierField, UNetField, VAEField
@@ -1254,7 +1263,7 @@ class IdealSizeInvocation(BaseInvocation):
return tuple((x - x % multiple_of) for x in args)
def invoke(self, context: InvocationContext) -> IdealSizeOutput:
unet_config = context.models.get_config(self.unet.unet.key)
unet_config = context.models.get_config(**self.unet.unet.model_dump())
aspect = self.width / self.height
dimension: float = 512
if unet_config.base == BaseModelType.StableDiffusion2:

View File

@@ -2,8 +2,16 @@ from typing import Any, Literal, Optional, Union
from pydantic import BaseModel, ConfigDict, Field
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.controlnet_image_processors import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
invocation,
invocation_output,
)
from invokeai.app.invocations.controlnet_image_processors import (
CONTROLNET_MODE_VALUES,
CONTROLNET_RESIZE_VALUES,
)
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
@@ -35,7 +43,6 @@ class IPAdapterMetadataField(BaseModel):
image: ImageField = Field(description="The IP-Adapter image prompt.")
ip_adapter_model: ModelIdentifierField = Field(description="The IP-Adapter model.")
clip_vision_model: Literal["ViT-H", "ViT-G"] = Field(description="The CLIP Vision model")
weight: Union[float, list[float]] = Field(description="The weight given to the IP-Adapter")
begin_step_percent: float = Field(description="When the IP-Adapter is first applied (% of total steps)")
end_step_percent: float = Field(description="When the IP-Adapter is last applied (% of total steps)")

View File

@@ -3,7 +3,6 @@
from __future__ import annotations
import locale
import os
import re
import shutil
@@ -318,10 +317,11 @@ class InvokeAIAppConfig(BaseSettings):
@staticmethod
def find_root() -> Path:
"""Choose the runtime root directory when not specified on command line or init file."""
venv = Path(os.environ.get("VIRTUAL_ENV") or ".")
if os.environ.get("INVOKEAI_ROOT"):
root = Path(os.environ["INVOKEAI_ROOT"])
elif venv := os.environ.get("VIRTUAL_ENV", None):
root = Path(venv).parent.resolve()
elif any((venv.parent / x).exists() for x in [INIT_FILE, LEGACY_INIT_FILE]):
root = (venv.parent).resolve()
else:
root = Path("~/invokeai").expanduser().resolve()
return root
@@ -373,16 +373,13 @@ def migrate_v3_config_dict(config_dict: dict[str, Any]) -> InvokeAIAppConfig:
if k == "conf_path":
parsed_config_dict["legacy_models_yaml_path"] = v
if k == "legacy_conf_dir":
# The old default for this was "configs/stable-diffusion" ("configs\stable-diffusion" on Windows).
if v == "configs/stable-diffusion" or v == "configs\\stable-diffusion":
# If the incoming config has the default value, skip
continue
elif Path(v).name == "stable-diffusion":
# Else if the path ends in "stable-diffusion", we assume the parent is the new correct path.
parsed_config_dict["legacy_conf_dir"] = str(Path(v).parent)
else:
# Else we do not attempt to migrate this setting
# The old default for this was "configs/stable-diffusion". If if the incoming config has that as the value, we won't set it.
# Else if the path ends in "stable-diffusion", we assume the parent is the new correct path.
# Else we do not attempt to migrate this setting
if v != "configs/stable-diffusion":
parsed_config_dict["legacy_conf_dir"] = v
elif Path(v).name == "stable-diffusion":
parsed_config_dict["legacy_conf_dir"] = str(Path(v).parent)
elif k in InvokeAIAppConfig.model_fields:
# skip unknown fields
parsed_config_dict[k] = v
@@ -402,7 +399,7 @@ def load_and_migrate_config(config_path: Path) -> InvokeAIAppConfig:
An instance of `InvokeAIAppConfig` with the loaded and migrated settings.
"""
assert config_path.suffix == ".yaml"
with open(config_path, "rt", encoding=locale.getpreferredencoding()) as file:
with open(config_path) as file:
loaded_config_dict = yaml.safe_load(file)
assert isinstance(loaded_config_dict, dict)

View File

@@ -1,6 +1,5 @@
"""Model installation class."""
import locale
import os
import re
import signal
@@ -324,8 +323,7 @@ class ModelInstallService(ModelInstallServiceBase):
legacy_models_yaml_path = Path(self._app_config.root_path, legacy_models_yaml_path)
if legacy_models_yaml_path.exists():
with open(legacy_models_yaml_path, "rt", encoding=locale.getpreferredencoding()) as file:
legacy_models_yaml = yaml.safe_load(file)
legacy_models_yaml = yaml.safe_load(legacy_models_yaml_path.read_text())
yaml_metadata = legacy_models_yaml.pop("__metadata__")
yaml_version = yaml_metadata.get("version")
@@ -566,7 +564,7 @@ class ModelInstallService(ModelInstallServiceBase):
# The model is not in the models directory - we don't need to move it.
return model
new_path = models_dir / model.base.value / model.type.value / old_path.name
new_path = (models_dir / model.base.value / model.type.value / model.name).with_suffix(old_path.suffix)
if old_path == new_path or new_path.exists() and old_path == new_path.resolve():
return model

View File

@@ -80,7 +80,6 @@ class ModelManagerService(ModelManagerServiceBase):
ram_cache = ModelCache(
max_cache_size=app_config.ram,
max_vram_cache_size=app_config.vram,
lazy_offloading=app_config.lazy_offload,
logger=logger,
execution_device=execution_device,
)

View File

@@ -11,7 +11,6 @@ from invokeai.app.services.shared.sqlite_migrator.migrations.migration_5 import
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_6 import build_migration_6
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_7 import build_migration_7
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_8 import build_migration_8
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_9 import build_migration_9
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import SqliteMigrator
@@ -40,7 +39,6 @@ def init_db(config: InvokeAIAppConfig, logger: Logger, image_files: ImageFileSto
migrator.register_migration(build_migration_6())
migrator.register_migration(build_migration_7())
migrator.register_migration(build_migration_8(app_config=config))
migrator.register_migration(build_migration_9())
migrator.run_migrations()
return db

View File

@@ -1,29 +0,0 @@
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
class Migration9Callback:
def __call__(self, cursor: sqlite3.Cursor) -> None:
self._empty_session_queue(cursor)
def _empty_session_queue(self, cursor: sqlite3.Cursor) -> None:
"""Empties the session queue. This is done to prevent any lingering session queue items from causing pydantic errors due to changed schemas."""
cursor.execute("DELETE FROM session_queue;")
def build_migration_9() -> Migration:
"""
Build the migration from database version 8 to 9.
This migration does the following:
- Empties the session queue. This is done to prevent any lingering session queue items from causing pydantic errors due to changed schemas.
"""
migration_9 = Migration(
from_version=8,
to_version=9,
callback=Migration9Callback(),
)
return migration_9

View File

@@ -1,6 +1,4 @@
import sqlite3
from contextlib import closing
from datetime import datetime
from pathlib import Path
from typing import Optional
@@ -34,7 +32,6 @@ class SqliteMigrator:
self._db = db
self._logger = db.logger
self._migration_set = MigrationSet()
self._backup_path: Optional[Path] = None
def register_migration(self, migration: Migration) -> None:
"""Registers a migration."""
@@ -58,18 +55,6 @@ class SqliteMigrator:
return False
self._logger.info("Database update needed")
# Make a backup of the db if it needs to be updated and is a file db
if self._db.db_path is not None:
timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
self._backup_path = self._db.db_path.parent / f"{self._db.db_path.stem}_backup_{timestamp}.db"
self._logger.info(f"Backing up database to {str(self._backup_path)}")
# Use SQLite to do the backup
with closing(sqlite3.connect(self._backup_path)) as backup_conn:
self._db.conn.backup(backup_conn)
else:
self._logger.info("Using in-memory database, no backup needed")
next_migration = self._migration_set.get(from_version=self._get_current_version(cursor))
while next_migration is not None:
self._run_migration(next_migration)

View File

@@ -2,7 +2,7 @@
Initialization file for invokeai.backend.image_util methods.
"""
from .infill_methods.patchmatch import PatchMatch # noqa: F401
from .patchmatch import PatchMatch # noqa: F401
from .pngwriter import PngWriter, PromptFormatter, retrieve_metadata, write_metadata # noqa: F401
from .seamless import configure_model_padding # noqa: F401
from .util import InitImageResizer, make_grid # noqa: F401

View File

@@ -1,60 +0,0 @@
from typing import Tuple
import numpy as np
from PIL import Image
def infill_mosaic(
image: Image.Image,
tile_shape: Tuple[int, int] = (64, 64),
min_color: Tuple[int, int, int, int] = (0, 0, 0, 0),
max_color: Tuple[int, int, int, int] = (255, 255, 255, 0),
) -> Image.Image:
"""
image:PIL - A PIL Image
tile_shape: Tuple[int,int] - Tile width & Tile Height
min_color: Tuple[int,int,int] - RGB values for the lowest color to clip to (0-255)
max_color: Tuple[int,int,int] - RGB values for the highest color to clip to (0-255)
"""
np_image = np.array(image) # Convert image to np array
alpha = np_image[:, :, 3] # Get the mask from the alpha channel of the image
non_transparent_pixels = np_image[alpha != 0, :3] # List of non-transparent pixels
# Create color tiles to paste in the empty areas of the image
tile_width, tile_height = tile_shape
# Clip the range of colors in the image to a particular spectrum only
r_min, g_min, b_min, _ = min_color
r_max, g_max, b_max, _ = max_color
non_transparent_pixels[:, 0] = np.clip(non_transparent_pixels[:, 0], r_min, r_max)
non_transparent_pixels[:, 1] = np.clip(non_transparent_pixels[:, 1], g_min, g_max)
non_transparent_pixels[:, 2] = np.clip(non_transparent_pixels[:, 2], b_min, b_max)
tiles = []
for _ in range(256):
color = non_transparent_pixels[np.random.randint(len(non_transparent_pixels))]
tile = np.zeros((tile_height, tile_width, 3), dtype=np.uint8)
tile[:, :] = color
tiles.append(tile)
# Fill the transparent area with tiles
filled_image = np.zeros((image.height, image.width, 3), dtype=np.uint8)
for x in range(image.width):
for y in range(image.height):
tile = tiles[np.random.randint(len(tiles))]
try:
filled_image[
y - (y % tile_height) : y - (y % tile_height) + tile_height,
x - (x % tile_width) : x - (x % tile_width) + tile_width,
] = tile
except ValueError:
# Need to handle edge cases - literally
pass
filled_image = Image.fromarray(filled_image) # Convert the filled tiles image to PIL
image = Image.composite(
image, filled_image, image.split()[-1]
) # Composite the original image on top of the filled tiles
return image

View File

@@ -1,67 +0,0 @@
"""
This module defines a singleton object, "patchmatch" that
wraps the actual patchmatch object. It respects the global
"try_patchmatch" attribute, so that patchmatch loading can
be suppressed or deferred
"""
import numpy as np
from PIL import Image
import invokeai.backend.util.logging as logger
from invokeai.app.services.config.config_default import get_config
class PatchMatch:
"""
Thin class wrapper around the patchmatch function.
"""
patch_match = None
tried_load: bool = False
def __init__(self):
super().__init__()
@classmethod
def _load_patch_match(cls):
if cls.tried_load:
return
if get_config().patchmatch:
from patchmatch import patch_match as pm
if pm.patchmatch_available:
logger.info("Patchmatch initialized")
cls.patch_match = pm
else:
logger.info("Patchmatch not loaded (nonfatal)")
else:
logger.info("Patchmatch loading disabled")
cls.tried_load = True
@classmethod
def patchmatch_available(cls) -> bool:
cls._load_patch_match()
if not cls.patch_match:
return False
return cls.patch_match.patchmatch_available
@classmethod
def inpaint(cls, image: Image.Image) -> Image.Image:
if cls.patch_match is None or not cls.patchmatch_available():
return image
np_image = np.array(image)
mask = 255 - np_image[:, :, 3]
infilled = cls.patch_match.inpaint(np_image[:, :, :3], mask, patch_size=3)
return Image.fromarray(infilled, mode="RGB")
def infill_patchmatch(image: Image.Image) -> Image.Image:
IS_PATCHMATCH_AVAILABLE = PatchMatch.patchmatch_available()
if not IS_PATCHMATCH_AVAILABLE:
logger.warning("PatchMatch is not available on this system")
return image
return PatchMatch.inpaint(image)

Binary files not shown: 10 images removed in this compare (previous sizes: 45 KiB, 2.2 KiB, 36 KiB, 33 KiB, 21 KiB, 39 KiB, 42 KiB, 48 KiB, 49 KiB, 60 KiB).
View File

@@ -1,95 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\"\"\"Smoke test for the tile infill\"\"\"\n",
"\n",
"from pathlib import Path\n",
"from typing import Optional\n",
"from PIL import Image\n",
"from invokeai.backend.image_util.infill_methods.tile import infill_tile\n",
"\n",
"images: list[tuple[str, Image.Image]] = []\n",
"\n",
"for i in sorted(Path(\"./test_images/\").glob(\"*.webp\")):\n",
" images.append((i.name, Image.open(i)))\n",
" images.append((i.name, Image.open(i).transpose(Image.FLIP_LEFT_RIGHT)))\n",
" images.append((i.name, Image.open(i).transpose(Image.FLIP_TOP_BOTTOM)))\n",
" images.append((i.name, Image.open(i).resize((512, 512))))\n",
" images.append((i.name, Image.open(i).resize((1234, 461))))\n",
"\n",
"outputs: list[tuple[str, Image.Image, Image.Image, Optional[Image.Image]]] = []\n",
"\n",
"for name, image in images:\n",
" try:\n",
" output = infill_tile(image, seed=0, tile_size=32)\n",
" outputs.append((name, image, output.infilled, output.tile_image))\n",
" except ValueError as e:\n",
" print(f\"Skipping image {name}: {e}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Display the images in jupyter notebook\n",
"import matplotlib.pyplot as plt\n",
"from PIL import ImageOps\n",
"\n",
"fig, axes = plt.subplots(len(outputs), 3, figsize=(10, 3 * len(outputs)))\n",
"plt.subplots_adjust(hspace=0)\n",
"\n",
"for i, (name, original, infilled, tile_image) in enumerate(outputs):\n",
" # Add a border to each image, helps to see the edges\n",
" size = original.size\n",
" original = ImageOps.expand(original, border=5, fill=\"red\")\n",
" filled = ImageOps.expand(infilled, border=5, fill=\"red\")\n",
" if tile_image:\n",
" tile_image = ImageOps.expand(tile_image, border=5, fill=\"red\")\n",
"\n",
" axes[i, 0].imshow(original)\n",
" axes[i, 0].axis(\"off\")\n",
" axes[i, 0].set_title(f\"Original ({name} - {size})\")\n",
"\n",
" if tile_image:\n",
" axes[i, 1].imshow(tile_image)\n",
" axes[i, 1].axis(\"off\")\n",
" axes[i, 1].set_title(\"Tile Image\")\n",
" else:\n",
" axes[i, 1].axis(\"off\")\n",
" axes[i, 1].set_title(\"NO TILES GENERATED (NO TRANSPARENCY)\")\n",
"\n",
" axes[i, 2].imshow(filled)\n",
" axes[i, 2].axis(\"off\")\n",
" axes[i, 2].set_title(\"Filled\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".invokeai",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,122 +0,0 @@
from dataclasses import dataclass
from typing import Optional
import numpy as np
from PIL import Image
def create_tile_pool(img_array: np.ndarray, tile_size: tuple[int, int]) -> list[np.ndarray]:
"""
Create a pool of tiles from non-transparent areas of the image by systematically walking through the image.
Args:
img_array: numpy array of the image.
tile_size: tuple (tile_width, tile_height) specifying the size of each tile.
Returns:
A list of numpy arrays, each representing a tile.
"""
tiles: list[np.ndarray] = []
rows, cols = img_array.shape[:2]
tile_width, tile_height = tile_size
for y in range(0, rows - tile_height + 1, tile_height):
for x in range(0, cols - tile_width + 1, tile_width):
tile = img_array[y : y + tile_height, x : x + tile_width]
# Check if the image has an alpha channel and the tile is completely opaque
if img_array.shape[2] == 4 and np.all(tile[:, :, 3] == 255):
tiles.append(tile)
elif img_array.shape[2] == 3: # If no alpha channel, append the tile
tiles.append(tile)
if not tiles:
raise ValueError(
"Not enough opaque pixels to generate any tiles. Use a smaller tile size or a different image."
)
return tiles
def create_filled_image(
img_array: np.ndarray, tile_pool: list[np.ndarray], tile_size: tuple[int, int], seed: int
) -> np.ndarray:
"""
Create an image of the same dimensions as the original, filled entirely with tiles from the pool.
Args:
img_array: numpy array of the original image.
tile_pool: A list of numpy arrays, each representing a tile.
tile_size: tuple (tile_width, tile_height) specifying the size of each tile.
Returns:
A numpy array representing the filled image.
"""
rows, cols, _ = img_array.shape
tile_width, tile_height = tile_size
# Prep an empty RGB image
filled_img_array = np.zeros((rows, cols, 3), dtype=img_array.dtype)
# Make the random tile selection reproducible
rng = np.random.default_rng(seed)
for y in range(0, rows, tile_height):
for x in range(0, cols, tile_width):
# Pick a random tile from the pool
tile = tile_pool[rng.integers(len(tile_pool))]
# Calculate the space available (may be less than tile size near the edges)
space_y = min(tile_height, rows - y)
space_x = min(tile_width, cols - x)
# Crop the tile if necessary to fit into the available space
cropped_tile = tile[:space_y, :space_x, :3]
# Fill the available space with the (possibly cropped) tile
filled_img_array[y : y + space_y, x : x + space_x, :3] = cropped_tile
return filled_img_array
@dataclass
class InfillTileOutput:
infilled: Image.Image
tile_image: Optional[Image.Image] = None
def infill_tile(image_to_infill: Image.Image, seed: int, tile_size: int) -> InfillTileOutput:
"""Infills an image with random tiles from the image itself.
If the image is not an RGBA image, it is returned untouched.
Args:
image: The image to infill.
tile_size: The size of the tiles to use for infilling.
Raises:
ValueError: If there are not enough opaque pixels to generate any tiles.
"""
if image_to_infill.mode != "RGBA":
return InfillTileOutput(infilled=image_to_infill)
# Internally, we want a tuple of (tile_width, tile_height). In the future, the tile size can be any rectangle.
_tile_size = (tile_size, tile_size)
np_image = np.array(image_to_infill, dtype=np.uint8)
# Create the pool of tiles that we will use to infill
tile_pool = create_tile_pool(np_image, _tile_size)
# Create an image from the tiles, same size as the original
tile_np_image = create_filled_image(np_image, tile_pool, _tile_size, seed)
# Paste the OG image over the tile image, effectively infilling the area
tile_image = Image.fromarray(tile_np_image, "RGB")
infilled = tile_image.copy()
infilled.paste(image_to_infill, (0, 0), image_to_infill.split()[-1])
# I think we want this to be "RGBA"?
infilled.convert("RGBA")
return InfillTileOutput(infilled=infilled, tile_image=tile_image)

View File

@@ -7,7 +7,6 @@ from PIL import Image
import invokeai.backend.util.logging as logger
from invokeai.app.services.config.config_default import get_config
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.backend.util.devices import choose_torch_device
@@ -31,14 +30,6 @@ class LaMA:
def __call__(self, input_image: Image.Image, *args: Any, **kwds: Any) -> Any:
device = choose_torch_device()
model_location = get_config().models_path / "core/misc/lama/lama.pt"
if not model_location.exists():
download_with_progress_bar(
name="LaMa Inpainting Model",
url="https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt",
dest_path=model_location,
)
model = load_jit_model(model_location, device)
image = np.asarray(input_image.convert("RGB"))

View File

@@ -0,0 +1,49 @@
"""
This module defines a singleton object, "patchmatch" that
wraps the actual patchmatch object. It respects the global
"try_patchmatch" attribute, so that patchmatch loading can
be suppressed or deferred
"""
import numpy as np
import invokeai.backend.util.logging as logger
from invokeai.app.services.config.config_default import get_config
class PatchMatch:
"""
Thin class wrapper around the patchmatch function.
"""
patch_match = None
tried_load: bool = False
def __init__(self):
super().__init__()
@classmethod
def _load_patch_match(self):
if self.tried_load:
return
if get_config().patchmatch:
from patchmatch import patch_match as pm
if pm.patchmatch_available:
logger.info("Patchmatch initialized")
else:
logger.info("Patchmatch not loaded (nonfatal)")
self.patch_match = pm
else:
logger.info("Patchmatch loading disabled")
self.tried_load = True
@classmethod
def patchmatch_available(self) -> bool:
self._load_patch_match()
return self.patch_match and self.patch_match.patchmatch_available
@classmethod
def inpaint(self, *args, **kwargs) -> np.ndarray:
if self.patchmatch_available():
return self.patch_match.inpaint(*args, **kwargs)

View File

@@ -1,11 +1,8 @@
# copied from https://github.com/tencent-ailab/IP-Adapter (Apache License 2.0)
# and modified as needed
import pathlib
from typing import List, Optional, TypedDict, Union
from typing import Optional, Union
import safetensors
import safetensors.torch
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection
@@ -16,17 +13,10 @@ from ..raw_model import RawModel
from .resampler import Resampler
class IPAdapterStateDict(TypedDict):
ip_adapter: dict[str, torch.Tensor]
image_proj: dict[str, torch.Tensor]
class ImageProjModel(torch.nn.Module):
"""Image Projection Model"""
def __init__(
self, cross_attention_dim: int = 1024, clip_embeddings_dim: int = 1024, clip_extra_context_tokens: int = 4
):
def __init__(self, cross_attention_dim=1024, clip_embeddings_dim=1024, clip_extra_context_tokens=4):
super().__init__()
self.cross_attention_dim = cross_attention_dim
@@ -35,7 +25,7 @@ class ImageProjModel(torch.nn.Module):
self.norm = torch.nn.LayerNorm(cross_attention_dim)
@classmethod
def from_state_dict(cls, state_dict: dict[str, torch.Tensor], clip_extra_context_tokens: int = 4):
def from_state_dict(cls, state_dict: dict[torch.Tensor], clip_extra_context_tokens=4):
"""Initialize an ImageProjModel from a state_dict.
The cross_attention_dim and clip_embeddings_dim are inferred from the shape of the tensors in the state_dict.
@@ -55,7 +45,7 @@ class ImageProjModel(torch.nn.Module):
model.load_state_dict(state_dict)
return model
def forward(self, image_embeds: torch.Tensor):
def forward(self, image_embeds):
embeds = image_embeds
clip_extra_context_tokens = self.proj(embeds).reshape(
-1, self.clip_extra_context_tokens, self.cross_attention_dim
@@ -67,7 +57,7 @@ class ImageProjModel(torch.nn.Module):
class MLPProjModel(torch.nn.Module):
"""SD model with image prompt"""
def __init__(self, cross_attention_dim: int = 1024, clip_embeddings_dim: int = 1024):
def __init__(self, cross_attention_dim=1024, clip_embeddings_dim=1024):
super().__init__()
self.proj = torch.nn.Sequential(
@@ -78,7 +68,7 @@ class MLPProjModel(torch.nn.Module):
)
@classmethod
def from_state_dict(cls, state_dict: dict[str, torch.Tensor]):
def from_state_dict(cls, state_dict: dict[torch.Tensor]):
"""Initialize an MLPProjModel from a state_dict.
The cross_attention_dim and clip_embeddings_dim are inferred from the shape of the tensors in the state_dict.
@@ -97,7 +87,7 @@ class MLPProjModel(torch.nn.Module):
model.load_state_dict(state_dict)
return model
def forward(self, image_embeds: torch.Tensor):
def forward(self, image_embeds):
clip_extra_context_tokens = self.proj(image_embeds)
return clip_extra_context_tokens
@@ -107,7 +97,7 @@ class IPAdapter(RawModel):
def __init__(
self,
state_dict: IPAdapterStateDict,
state_dict: dict[str, torch.Tensor],
device: torch.device,
dtype: torch.dtype = torch.float16,
num_tokens: int = 4,
@@ -139,27 +129,24 @@ class IPAdapter(RawModel):
return calc_model_size_by_data(self._image_proj_model) + calc_model_size_by_data(self.attn_weights)
def _init_image_proj_model(
self, state_dict: dict[str, torch.Tensor]
) -> Union[ImageProjModel, Resampler, MLPProjModel]:
def _init_image_proj_model(self, state_dict):
return ImageProjModel.from_state_dict(state_dict, self._num_tokens).to(self.device, dtype=self.dtype)
@torch.inference_mode()
def get_image_embeds(self, pil_image: List[Image.Image], image_encoder: CLIPVisionModelWithProjection):
def get_image_embeds(self, pil_image, image_encoder: CLIPVisionModelWithProjection):
if isinstance(pil_image, Image.Image):
pil_image = [pil_image]
clip_image = self._clip_image_processor(images=pil_image, return_tensors="pt").pixel_values
clip_image_embeds = image_encoder(clip_image.to(self.device, dtype=self.dtype)).image_embeds
try:
image_prompt_embeds = self._image_proj_model(clip_image_embeds)
uncond_image_prompt_embeds = self._image_proj_model(torch.zeros_like(clip_image_embeds))
return image_prompt_embeds, uncond_image_prompt_embeds
except RuntimeError as e:
raise RuntimeError("Selected CLIP Vision Model is incompatible with the current IP Adapter") from e
image_prompt_embeds = self._image_proj_model(clip_image_embeds)
uncond_image_prompt_embeds = self._image_proj_model(torch.zeros_like(clip_image_embeds))
return image_prompt_embeds, uncond_image_prompt_embeds
class IPAdapterPlus(IPAdapter):
"""IP-Adapter with fine-grained features"""
def _init_image_proj_model(self, state_dict: dict[str, torch.Tensor]) -> Union[Resampler, MLPProjModel]:
def _init_image_proj_model(self, state_dict):
return Resampler.from_state_dict(
state_dict=state_dict,
depth=4,
@@ -170,32 +157,31 @@ class IPAdapterPlus(IPAdapter):
).to(self.device, dtype=self.dtype)
@torch.inference_mode()
def get_image_embeds(self, pil_image: List[Image.Image], image_encoder: CLIPVisionModelWithProjection):
def get_image_embeds(self, pil_image, image_encoder: CLIPVisionModelWithProjection):
if isinstance(pil_image, Image.Image):
pil_image = [pil_image]
clip_image = self._clip_image_processor(images=pil_image, return_tensors="pt").pixel_values
clip_image = clip_image.to(self.device, dtype=self.dtype)
clip_image_embeds = image_encoder(clip_image, output_hidden_states=True).hidden_states[-2]
image_prompt_embeds = self._image_proj_model(clip_image_embeds)
uncond_clip_image_embeds = image_encoder(torch.zeros_like(clip_image), output_hidden_states=True).hidden_states[
-2
]
try:
image_prompt_embeds = self._image_proj_model(clip_image_embeds)
uncond_image_prompt_embeds = self._image_proj_model(uncond_clip_image_embeds)
return image_prompt_embeds, uncond_image_prompt_embeds
except RuntimeError as e:
raise RuntimeError("Selected CLIP Vision Model is incompatible with the current IP Adapter") from e
uncond_image_prompt_embeds = self._image_proj_model(uncond_clip_image_embeds)
return image_prompt_embeds, uncond_image_prompt_embeds
class IPAdapterFull(IPAdapterPlus):
"""IP-Adapter Plus with full features."""
def _init_image_proj_model(self, state_dict: dict[str, torch.Tensor]):
def _init_image_proj_model(self, state_dict: dict[torch.Tensor]):
return MLPProjModel.from_state_dict(state_dict).to(self.device, dtype=self.dtype)
class IPAdapterPlusXL(IPAdapterPlus):
"""IP-Adapter Plus for SDXL."""
def _init_image_proj_model(self, state_dict: dict[str, torch.Tensor]):
def _init_image_proj_model(self, state_dict):
return Resampler.from_state_dict(
state_dict=state_dict,
depth=4,
@@ -206,48 +192,24 @@ class IPAdapterPlusXL(IPAdapterPlus):
).to(self.device, dtype=self.dtype)
def load_ip_adapter_tensors(ip_adapter_ckpt_path: pathlib.Path, device: str) -> IPAdapterStateDict:
state_dict: IPAdapterStateDict = {"ip_adapter": {}, "image_proj": {}}
if ip_adapter_ckpt_path.suffix == ".safetensors":
model = safetensors.torch.load_file(ip_adapter_ckpt_path, device=device)
for key in model.keys():
if key.startswith("image_proj."):
state_dict["image_proj"][key.replace("image_proj.", "")] = model[key]
elif key.startswith("ip_adapter."):
state_dict["ip_adapter"][key.replace("ip_adapter.", "")] = model[key]
else:
raise RuntimeError(f"Encountered unexpected IP Adapter state dict key: '{key}'.")
else:
ip_adapter_diffusers_checkpoint_path = ip_adapter_ckpt_path / "ip_adapter.bin"
state_dict = torch.load(ip_adapter_diffusers_checkpoint_path, map_location="cpu")
return state_dict
def build_ip_adapter(
ip_adapter_ckpt_path: pathlib.Path, device: torch.device, dtype: torch.dtype = torch.float16
) -> Union[IPAdapter, IPAdapterPlus, IPAdapterPlusXL, IPAdapterFull]:
state_dict = load_ip_adapter_tensors(ip_adapter_ckpt_path, device.type)
ip_adapter_ckpt_path: str, device: torch.device, dtype: torch.dtype = torch.float16
) -> Union[IPAdapter, IPAdapterPlus]:
state_dict = torch.load(ip_adapter_ckpt_path, map_location="cpu")
# IPAdapter (with ImageProjModel)
if "proj.weight" in state_dict["image_proj"]:
if "proj.weight" in state_dict["image_proj"]: # IPAdapter (with ImageProjModel).
return IPAdapter(state_dict, device=device, dtype=dtype)
# IPAdapterPlus or IPAdapterPlusXL (with Resampler)
elif "proj_in.weight" in state_dict["image_proj"]:
elif "proj_in.weight" in state_dict["image_proj"]: # IPAdaterPlus or IPAdapterPlusXL (with Resampler).
cross_attention_dim = state_dict["ip_adapter"]["1.to_k_ip.weight"].shape[-1]
if cross_attention_dim == 768:
return IPAdapterPlus(state_dict, device=device, dtype=dtype) # SD1 IP-Adapter Plus
# SD1 IP-Adapter Plus
return IPAdapterPlus(state_dict, device=device, dtype=dtype)
elif cross_attention_dim == 2048:
return IPAdapterPlusXL(state_dict, device=device, dtype=dtype) # SDXL IP-Adapter Plus
# SDXL IP-Adapter Plus
return IPAdapterPlusXL(state_dict, device=device, dtype=dtype)
else:
raise Exception(f"Unsupported IP-Adapter Plus cross-attention dimension: {cross_attention_dim}.")
# IPAdapterFull (with MLPProjModel)
elif "proj.0.weight" in state_dict["image_proj"]:
elif "proj.0.weight" in state_dict["image_proj"]: # IPAdapterFull (with MLPProjModel).
return IPAdapterFull(state_dict, device=device, dtype=dtype)
# Unrecognized IP Adapter Architectures
else:
raise ValueError(f"'{ip_adapter_ckpt_path}' has an unrecognized IP-Adapter model architecture.")

View File

@@ -9,8 +9,8 @@ import torch.nn as nn
# FFN
def FeedForward(dim: int, mult: int = 4):
inner_dim = dim * mult
def FeedForward(dim, mult=4):
inner_dim = int(dim * mult)
return nn.Sequential(
nn.LayerNorm(dim),
nn.Linear(dim, inner_dim, bias=False),
@@ -19,8 +19,8 @@ def FeedForward(dim: int, mult: int = 4):
)
def reshape_tensor(x: torch.Tensor, heads: int):
bs, length, _ = x.shape
def reshape_tensor(x, heads):
bs, length, width = x.shape
# (bs, length, width) --> (bs, length, n_heads, dim_per_head)
x = x.view(bs, length, heads, -1)
# (bs, length, n_heads, dim_per_head) --> (bs, n_heads, length, dim_per_head)
@@ -31,7 +31,7 @@ def reshape_tensor(x: torch.Tensor, heads: int):
class PerceiverAttention(nn.Module):
def __init__(self, *, dim: int, dim_head: int = 64, heads: int = 8):
def __init__(self, *, dim, dim_head=64, heads=8):
super().__init__()
self.scale = dim_head**-0.5
self.dim_head = dim_head
@@ -45,7 +45,7 @@ class PerceiverAttention(nn.Module):
self.to_kv = nn.Linear(dim, inner_dim * 2, bias=False)
self.to_out = nn.Linear(inner_dim, dim, bias=False)
def forward(self, x: torch.Tensor, latents: torch.Tensor):
def forward(self, x, latents):
"""
Args:
x (torch.Tensor): image features
@@ -80,14 +80,14 @@ class PerceiverAttention(nn.Module):
class Resampler(nn.Module):
def __init__(
self,
dim: int = 1024,
depth: int = 8,
dim_head: int = 64,
heads: int = 16,
num_queries: int = 8,
embedding_dim: int = 768,
output_dim: int = 1024,
ff_mult: int = 4,
dim=1024,
depth=8,
dim_head=64,
heads=16,
num_queries=8,
embedding_dim=768,
output_dim=1024,
ff_mult=4,
):
super().__init__()
@@ -110,15 +110,7 @@ class Resampler(nn.Module):
)
@classmethod
def from_state_dict(
cls,
state_dict: dict[str, torch.Tensor],
depth: int = 8,
dim_head: int = 64,
heads: int = 16,
num_queries: int = 8,
ff_mult: int = 4,
):
def from_state_dict(cls, state_dict: dict[torch.Tensor], depth=8, dim_head=64, heads=16, num_queries=8, ff_mult=4):
"""A convenience function that initializes a Resampler from a state_dict.
Some of the shape parameters are inferred from the state_dict (e.g. dim, embedding_dim, etc.). At the time of
@@ -153,7 +145,7 @@ class Resampler(nn.Module):
model.load_state_dict(state_dict)
return model
def forward(self, x: torch.Tensor):
def forward(self, x):
latents = self.latents.repeat(x.size(0), 1, 1)
x = self.proj_in(x)

View File

@@ -323,13 +323,10 @@ class MainDiffusersConfig(DiffusersConfigBase, MainConfigBase):
return Tag(f"{ModelType.Main.value}.{ModelFormat.Diffusers.value}")
class IPAdapterBaseConfig(ModelConfigBase):
class IPAdapterConfig(ModelConfigBase):
"""Model config for IP Adaptor format models."""
type: Literal[ModelType.IPAdapter] = ModelType.IPAdapter
class IPAdapterInvokeAIConfig(IPAdapterBaseConfig):
"""Model config for IP Adapter diffusers format models."""
image_encoder_model_id: str
format: Literal[ModelFormat.InvokeAI]
@@ -338,16 +335,6 @@ class IPAdapterInvokeAIConfig(IPAdapterBaseConfig):
return Tag(f"{ModelType.IPAdapter.value}.{ModelFormat.InvokeAI.value}")
class IPAdapterCheckpointConfig(IPAdapterBaseConfig):
"""Model config for IP Adapter checkpoint format models."""
format: Literal[ModelFormat.Checkpoint]
@staticmethod
def get_tag() -> Tag:
return Tag(f"{ModelType.IPAdapter.value}.{ModelFormat.Checkpoint.value}")
class CLIPVisionDiffusersConfig(DiffusersConfigBase):
"""Model config for CLIPVision."""
@@ -403,8 +390,7 @@ AnyModelConfig = Annotated[
Annotated[LoRADiffusersConfig, LoRADiffusersConfig.get_tag()],
Annotated[TextualInversionFileConfig, TextualInversionFileConfig.get_tag()],
Annotated[TextualInversionFolderConfig, TextualInversionFolderConfig.get_tag()],
Annotated[IPAdapterInvokeAIConfig, IPAdapterInvokeAIConfig.get_tag()],
Annotated[IPAdapterCheckpointConfig, IPAdapterCheckpointConfig.get_tag()],
Annotated[IPAdapterConfig, IPAdapterConfig.get_tag()],
Annotated[T2IAdapterConfig, T2IAdapterConfig.get_tag()],
Annotated[CLIPVisionDiffusersConfig, CLIPVisionDiffusersConfig.get_tag()],
],

View File

@@ -117,7 +117,7 @@ class ModelCacheBase(ABC, Generic[T]):
@property
@abstractmethod
def stats(self) -> Optional[CacheStats]:
def stats(self) -> CacheStats:
"""Return collected CacheStats object."""
pass

View File

@@ -269,6 +269,9 @@ class ModelCache(ModelCacheBase[AnyModel]):
if torch.device(source_device).type == torch.device(target_device).type:
return
# may raise an exception here if insufficient GPU VRAM
self._check_free_vram(target_device, cache_entry.size)
start_model_to_time = time.time()
snapshot_before = self._capture_memory_snapshot()
cache_entry.model.to(target_device)
@@ -326,11 +329,11 @@ class ModelCache(ModelCacheBase[AnyModel]):
f" {in_ram_models}/{in_vram_models}({locked_in_vram_models})"
)
def make_room(self, size: int) -> None:
def make_room(self, model_size: int) -> None:
"""Make enough room in the cache to accommodate a new model of indicated size."""
# calculate how much memory this model will require
# multiplier = 2 if self.precision==torch.float32 else 1
bytes_needed = size
bytes_needed = model_size
maximum_size = self.max_cache_size * GIG # stored in GB, convert to bytes
current_size = self.cache_size()
@@ -385,7 +388,7 @@ class ModelCache(ModelCacheBase[AnyModel]):
# 1 from onnx runtime object
if not cache_entry.locked and refs <= (3 if "onnx" in model_key else 2):
self.logger.debug(
f"Removing {model_key} from RAM cache to free at least {(size/GIG):.2f} GB (-{(cache_entry.size/GIG):.2f} GB)"
f"Removing {model_key} from RAM cache to free at least {(model_size/GIG):.2f} GB (-{(cache_entry.size/GIG):.2f} GB)"
)
current_size -= cache_entry.size
models_cleared += 1
@@ -417,3 +420,13 @@ class ModelCache(ModelCacheBase[AnyModel]):
mps.empty_cache()
self.logger.debug(f"After making room: cached_models={len(self._cached_models)}")
def _check_free_vram(self, target_device: torch.device, needed_size: int) -> None:
if target_device.type != "cuda":
return
vram_device = ( # mem_get_info() needs an indexed device
target_device if target_device.index is not None else torch.device(str(target_device), index=0)
)
free_mem, _ = torch.cuda.mem_get_info(torch.device(vram_device))
if needed_size > free_mem:
raise torch.cuda.OutOfMemoryError
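_check_free_vram compares the model's size against the free memory reported by the CUDA driver before moving it onto the GPU. A minimal sketch of the same pre-flight pattern outside the cache class (the helper name is made up):

import torch

def has_room_on_gpu(needed_bytes: int, device: torch.device = torch.device("cuda:0")) -> bool:
    """Return True if the indexed CUDA device reports enough free memory."""
    if device.type != "cuda" or not torch.cuda.is_available():
        return False
    # mem_get_info() returns (free_bytes, total_bytes) for the given device.
    free_bytes, _total_bytes = torch.cuda.mem_get_info(device)
    return needed_bytes <= free_bytes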

View File

@@ -34,6 +34,7 @@ class ModelLocker(ModelLockerBase):
# NOTE that the model has to have the to() method in order for this code to move it into GPU!
self._cache_entry.lock()
try:
if self._cache.lazy_offloading:
self._cache.offload_unlocked_models(self._cache_entry.size)
@@ -50,7 +51,6 @@ class ModelLocker(ModelLockerBase):
except Exception:
self._cache_entry.unlock()
raise
return self.model
def unlock(self) -> None:

View File

@@ -7,13 +7,19 @@ from typing import Optional
import torch
from invokeai.backend.ip_adapter.ip_adapter import build_ip_adapter
from invokeai.backend.model_manager import AnyModel, AnyModelConfig, BaseModelType, ModelFormat, ModelType, SubModelType
from invokeai.backend.model_manager import (
AnyModel,
AnyModelConfig,
BaseModelType,
ModelFormat,
ModelType,
SubModelType,
)
from invokeai.backend.model_manager.load import ModelLoader, ModelLoaderRegistry
from invokeai.backend.raw_model import RawModel
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.IPAdapter, format=ModelFormat.InvokeAI)
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.IPAdapter, format=ModelFormat.Checkpoint)
class IPAdapterInvokeAILoader(ModelLoader):
"""Class to load IP Adapter diffusers models."""
@@ -26,7 +32,7 @@ class IPAdapterInvokeAILoader(ModelLoader):
raise ValueError("There are no submodels in an IP-Adapter model.")
model_path = Path(config.path)
model: RawModel = build_ip_adapter(
ip_adapter_ckpt_path=model_path,
ip_adapter_ckpt_path=str(model_path / "ip_adapter.bin"),
device=torch.device("cpu"),
dtype=self._torch_dtype,
)

View File

@@ -230,10 +230,9 @@ class ModelProbe(object):
return ModelType.LoRA
elif any(key.startswith(v) for v in {"controlnet", "control_model", "input_blocks"}):
return ModelType.ControlNet
elif any(key.startswith(v) for v in {"image_proj.", "ip_adapter."}):
return ModelType.IPAdapter
elif key in {"emb_params", "string_to_param"}:
return ModelType.TextualInversion
else:
# diffusers-ti
if len(ckpt) < 10 and all(isinstance(v, torch.Tensor) for v in ckpt.values()):
@@ -528,25 +527,8 @@ class ControlNetCheckpointProbe(CheckpointProbeBase):
class IPAdapterCheckpointProbe(CheckpointProbeBase):
"""Class for probing IP Adapters"""
def get_base_type(self) -> BaseModelType:
checkpoint = self.checkpoint
for key in checkpoint.keys():
if not key.startswith(("image_proj.", "ip_adapter.")):
continue
cross_attention_dim = checkpoint["ip_adapter.1.to_k_ip.weight"].shape[-1]
if cross_attention_dim == 768:
return BaseModelType.StableDiffusion1
elif cross_attention_dim == 1024:
return BaseModelType.StableDiffusion2
elif cross_attention_dim == 2048:
return BaseModelType.StableDiffusionXL
else:
raise InvalidModelConfigException(
f"IP-Adapter had unexpected cross-attention dimension: {cross_attention_dim}."
)
raise InvalidModelConfigException(f"{self.model_path}: Unable to determine base type")
raise NotImplementedError()
class CLIPVisionCheckpointProbe(CheckpointProbeBase):
@@ -786,7 +768,7 @@ class T2IAdapterFolderProbe(FolderProbeBase):
)
# Register probe classes
############## register probe classes ######
ModelProbe.register_probe("diffusers", ModelType.Main, PipelineFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.VAE, VaeFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.LoRA, LoRAFolderProbe)

View File

@@ -94,7 +94,6 @@
"reactflow": "^11.10.4",
"redux-dynamic-middlewares": "^2.2.0",
"redux-remember": "^5.1.0",
"rfdc": "^1.3.1",
"roarr": "^7.21.1",
"serialize-error": "^11.0.3",
"socket.io-client": "^4.7.5",

View File

@@ -137,9 +137,6 @@ dependencies:
redux-remember:
specifier: ^5.1.0
version: 5.1.0(redux@5.0.1)
rfdc:
specifier: ^1.3.1
version: 1.3.1
roarr:
specifier: ^7.21.1
version: 7.21.1
@@ -12131,10 +12128,6 @@ packages:
resolution: {integrity: sha512-/x8uIPdTafBqakK0TmPNJzgkLP+3H+yxpUJhCQHsLBg1rYEVNR2D8BRYNWQhVBjyOd7oo1dZRVzIkwMY2oqfYQ==}
dev: true
/rfdc@1.3.1:
resolution: {integrity: sha512-r5a3l5HzYlIC68TpmYKlxWjmOP6wiPJ1vWv2HeLhNsRZMrCkxeqxiHlQ21oXmQ4F3SiryXBHhAD7JZqvOJjFmg==}
dev: false
/rimraf@2.6.3:
resolution: {integrity: sha512-mwqeW5XsA2qAejG46gYdENaxXjx9onRNCfn7L0duuP4hCuTIi/QO7PDK07KJfp1d+izWPrzEJDcSqBa0OZQriA==}
hasBin: true

View File

@@ -291,6 +291,7 @@
"canvasMerged": "تم دمج الخط",
"sentToImageToImage": "تم إرسال إلى صورة إلى صورة",
"sentToUnifiedCanvas": "تم إرسال إلى لوحة موحدة",
"parametersSet": "تم تعيين المعلمات",
"parametersNotSet": "لم يتم تعيين المعلمات",
"metadataLoadFailed": "فشل تحميل البيانات الوصفية"
},

View File

@@ -4,7 +4,7 @@
"reportBugLabel": "Fehler melden",
"settingsLabel": "Einstellungen",
"img2img": "Bild zu Bild",
"nodes": "Arbeitsabläufe",
"nodes": "Knoten Editor",
"upload": "Hochladen",
"load": "Laden",
"statusDisconnected": "Getrennt",
@@ -74,9 +74,7 @@
"updated": "Aktualisiert",
"copy": "Kopieren",
"aboutHeading": "Nutzen Sie Ihre kreative Energie",
"toResolve": "Lösen",
"add": "Hinzufügen",
"loglevel": "Protokoll Stufe"
"toResolve": "Lösen"
},
"gallery": {
"galleryImageSize": "Bildgröße",
@@ -106,16 +104,11 @@
"dropToUpload": "$t(gallery.drop) zum hochladen",
"dropOrUpload": "$t(gallery.drop) oder hochladen",
"drop": "Ablegen",
"problemDeletingImages": "Problem beim Löschen der Bilder",
"bulkDownloadRequested": "Download vorbereiten",
"bulkDownloadRequestedDesc": "Dein Download wird vorbereitet. Dies kann ein paar Momente dauern.",
"bulkDownloadRequestFailed": "Problem beim Download vorbereiten",
"bulkDownloadFailed": "Download fehlgeschlagen",
"alwaysShowImageSizeBadge": "Zeige immer Bilder Größe Abzeichen"
"problemDeletingImages": "Problem beim Löschen der Bilder"
},
"hotkeys": {
"keyboardShortcuts": "Tastenkürzel",
"appHotkeys": "App",
"appHotkeys": "App-Tastenkombinationen",
"generalHotkeys": "Allgemein",
"galleryHotkeys": "Galerie",
"unifiedCanvasHotkeys": "Leinwand",
@@ -389,14 +382,7 @@
"vaePrecision": "VAE-Präzision",
"variant": "Variante",
"modelDeleteFailed": "Modell konnte nicht gelöscht werden",
"noModelSelected": "Kein Modell ausgewählt",
"huggingFace": "HuggingFace",
"defaultSettings": "Standardeinstellungen",
"edit": "Bearbeiten",
"cancel": "Stornieren",
"defaultSettingsSaved": "Standardeinstellungen gespeichert",
"addModels": "Model hinzufügen",
"deleteModelImage": "Lösche Model Bild"
"noModelSelected": "Kein Modell ausgewählt"
},
"parameters": {
"images": "Bilder",
@@ -480,6 +466,7 @@
"canvasMerged": "Leinwand zusammengeführt",
"sentToImageToImage": "Gesendet an Bild zu Bild",
"sentToUnifiedCanvas": "Gesendet an Leinwand",
"parametersSet": "Parameter festlegen",
"parametersNotSet": "Parameter nicht festgelegt",
"metadataLoadFailed": "Metadaten konnten nicht geladen werden",
"setCanvasInitialImage": "Ausgangsbild setzen",
@@ -684,8 +671,7 @@
"body": "Körper",
"hands": "Hände",
"dwOpenpose": "DW Openpose",
"dwOpenposeDescription": "Posenschätzung mit DW Openpose",
"selectCLIPVisionModel": "Wähle ein CLIP Vision Model aus"
"dwOpenposeDescription": "Posenschätzung mit DW Openpose"
},
"queue": {
"status": "Status",
@@ -771,12 +757,7 @@
"scheduler": "Planer",
"noRecallParameters": "Es wurden keine Parameter zum Abrufen gefunden",
"recallParameters": "Parameter wiederherstellen",
"cfgRescaleMultiplier": "$t(parameters.cfgRescaleMultiplier)",
"allPrompts": "Alle Prompts",
"imageDimensions": "Bilder Auslösungen",
"parameterSet": "Parameter {{parameter}} setzen",
"recallParameter": "{{label}} Abrufen",
"parsingFailed": "Parsing Fehlgeschlagen"
"cfgRescaleMultiplier": "$t(parameters.cfgRescaleMultiplier)"
},
"popovers": {
"noiseUseCPU": {
@@ -1041,8 +1022,7 @@
"title": "Bild"
},
"advanced": {
"title": "Erweitert",
"options": "$t(accordions.advanced.title) Optionen"
"title": "Erweitert"
},
"control": {
"title": "Kontrolle"
@@ -1088,10 +1068,5 @@
},
"dynamicPrompts": {
"showDynamicPrompts": "Dynamische Prompts anzeigen"
},
"prompt": {
"noMatchingTriggers": "Keine passenden Auslöser",
"addPromptTrigger": "Auslöse Text hinzufügen",
"compatibleEmbeddings": "Kompatible Einbettungen"
}
}

View File

@@ -217,7 +217,6 @@
"saveControlImage": "Save Control Image",
"scribble": "scribble",
"selectModel": "Select a model",
"selectCLIPVisionModel": "Select a CLIP Vision model",
"setControlImageDimensions": "Set Control Image Dimensions To W/H",
"showAdvanced": "Show Advanced",
"small": "Small",
@@ -656,7 +655,6 @@
"install": "Install",
"installAll": "Install All",
"installRepo": "Install Repo",
"ipAdapters": "IP Adapters",
"load": "Load",
"localOnly": "local only",
"manual": "Manual",
@@ -684,7 +682,6 @@
"noModelsInstalled": "No Models Installed",
"noModelsInstalledDesc1": "Install models with the",
"noModelSelected": "No Model Selected",
"noMatchingModels": "No matching Models",
"none": "none",
"path": "Path",
"pathToConfig": "Path To Config",
@@ -888,11 +885,6 @@
"imageFit": "Fit Initial Image To Output Size",
"images": "Images",
"infillMethod": "Infill Method",
"infillMosaicTileWidth": "Tile Width",
"infillMosaicTileHeight": "Tile Height",
"infillMosaicMinColor": "Min Color",
"infillMosaicMaxColor": "Max Color",
"infillColorValue": "Fill Color",
"info": "Info",
"invoke": {
"addingImagesTo": "Adding images to",
@@ -1041,10 +1033,10 @@
"metadataLoadFailed": "Failed to load metadata",
"modelAddedSimple": "Model Added to Queue",
"modelImportCanceled": "Model Import Canceled",
"parameters": "Parameters",
"parameterNotSet": "{{parameter}} not set",
"parameterSet": "{{parameter}} set",
"parametersNotSet": "Parameters Not Set",
"parametersSet": "Parameters Set",
"problemCopyingCanvas": "Problem Copying Canvas",
"problemCopyingCanvasDesc": "Unable to export base layer",
"problemCopyingImage": "Unable to Copy Image",
@@ -1423,7 +1415,6 @@
"eraseBoundingBox": "Erase Bounding Box",
"eraser": "Eraser",
"fillBoundingBox": "Fill Bounding Box",
"initialFitImageSize": "Fit Image Size on Drop",
"invertBrushSizeScrollDirection": "Invert Scroll for Brush Size",
"layer": "Layer",
"limitStrokesToBox": "Limit Strokes to Box",

View File

@@ -363,6 +363,7 @@
"canvasMerged": "Lienzo consolidado",
"sentToImageToImage": "Enviar hacia Imagen a Imagen",
"sentToUnifiedCanvas": "Enviar hacia Lienzo Consolidado",
"parametersSet": "Parámetros establecidos",
"parametersNotSet": "Parámetros no establecidos",
"metadataLoadFailed": "Error al cargar metadatos",
"serverError": "Error en el servidor",

View File

@@ -298,6 +298,7 @@
"canvasMerged": "Canvas fusionné",
"sentToImageToImage": "Envoyé à Image à Image",
"sentToUnifiedCanvas": "Envoyé à Canvas unifié",
"parametersSet": "Paramètres définis",
"parametersNotSet": "Paramètres non définis",
"metadataLoadFailed": "Échec du chargement des métadonnées"
},

View File

@@ -306,6 +306,7 @@
"canvasMerged": "קנבס מוזג",
"sentToImageToImage": "נשלח לתמונה לתמונה",
"sentToUnifiedCanvas": "נשלח אל קנבס מאוחד",
"parametersSet": "הגדרת פרמטרים",
"parametersNotSet": "פרמטרים לא הוגדרו",
"metadataLoadFailed": "טעינת מטא-נתונים נכשלה"
},

View File

@@ -73,8 +73,7 @@
"ai": "ia",
"file": "File",
"toResolve": "Da risolvere",
"add": "Aggiungi",
"loglevel": "Livello di log"
"add": "Aggiungi"
},
"gallery": {
"galleryImageSize": "Dimensione dell'immagine",
@@ -366,7 +365,7 @@
"modelConverted": "Modello convertito",
"alpha": "Alpha",
"convertToDiffusersHelpText1": "Questo modello verrà convertito nel formato 🧨 Diffusori.",
"convertToDiffusersHelpText3": "Il file del modello su disco verrà eliminato se si trova nella cartella principale di InvokeAI. Se si trova invece in una posizione personalizzata, NON verrà eliminato.",
"convertToDiffusersHelpText3": "Il file Checkpoint su disco verrà eliminato se si trova nella cartella principale di InvokeAI. Se si trova invece in una posizione personalizzata, NON verrà eliminato.",
"v2_base": "v2 (512px)",
"v2_768": "v2 (768px)",
"none": "nessuno",
@@ -443,8 +442,7 @@
"noModelsInstalled": "Nessun modello installato",
"hfTokenInvalidErrorMessage2": "Aggiornalo in ",
"main": "Principali",
"noModelsInstalledDesc1": "Installa i modelli con",
"ipAdapters": "Adattatori IP"
"noModelsInstalledDesc1": "Installa i modelli con"
},
"parameters": {
"images": "Immagini",
@@ -569,6 +567,7 @@
"canvasMerged": "Tela unita",
"sentToImageToImage": "Inviato a Immagine a Immagine",
"sentToUnifiedCanvas": "Inviato a Tela Unificata",
"parametersSet": "Parametri impostati",
"parametersNotSet": "Parametri non impostati",
"metadataLoadFailed": "Impossibile caricare i metadati",
"serverError": "Errore del Server",
@@ -935,10 +934,7 @@
"base": "Base",
"lineart": "Linea",
"controlnet": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.controlNet))",
"mediapipeFace": "Mediapipe Volto",
"ip_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.ipAdapter))",
"t2i_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.t2iAdapter))",
"selectCLIPVisionModel": "Seleziona un modello CLIP Vision"
"mediapipeFace": "Mediapipe Volto"
},
"queue": {
"queueFront": "Aggiungi all'inizio della coda",
@@ -1494,8 +1490,7 @@
"title": "Generazione"
},
"advanced": {
"title": "Avanzate",
"options": "Opzioni $t(accordions.advanced.title)"
"title": "Avanzate"
},
"image": {
"title": "Immagine"

View File

@@ -420,6 +420,7 @@
"canvasMerged": "Canvas samengevoegd",
"sentToImageToImage": "Gestuurd naar Afbeelding naar afbeelding",
"sentToUnifiedCanvas": "Gestuurd naar Centraal canvas",
"parametersSet": "Parameters ingesteld",
"parametersNotSet": "Parameters niet ingesteld",
"metadataLoadFailed": "Fout bij laden metagegevens",
"serverError": "Serverfout",

View File

@@ -267,6 +267,7 @@
"canvasMerged": "Scalono widoczne warstwy",
"sentToImageToImage": "Wysłano do Obraz na obraz",
"sentToUnifiedCanvas": "Wysłano do trybu uniwersalnego",
"parametersSet": "Ustawiono parametry",
"parametersNotSet": "Nie ustawiono parametrów",
"metadataLoadFailed": "Błąd wczytywania metadanych"
},

View File

@@ -310,6 +310,7 @@
"canvasMerged": "Tela Fundida",
"sentToImageToImage": "Mandar Para Imagem Para Imagem",
"sentToUnifiedCanvas": "Enviada para a Tela Unificada",
"parametersSet": "Parâmetros Definidos",
"parametersNotSet": "Parâmetros Não Definidos",
"metadataLoadFailed": "Falha ao tentar carregar metadados"
},

View File

@@ -307,6 +307,7 @@
"canvasMerged": "Tela Fundida",
"sentToImageToImage": "Mandar Para Imagem Para Imagem",
"sentToUnifiedCanvas": "Enviada para a Tela Unificada",
"parametersSet": "Parâmetros Definidos",
"parametersNotSet": "Parâmetros Não Definidos",
"metadataLoadFailed": "Falha ao tentar carregar metadados"
},

View File

@@ -75,8 +75,7 @@
"copy": "Копировать",
"localSystem": "Локальная система",
"aboutDesc": "Используя Invoke для работы? Проверьте это:",
"add": "Добавить",
"loglevel": "Уровень логов"
"add": "Добавить"
},
"gallery": {
"galleryImageSize": "Размер изображений",
@@ -575,6 +574,7 @@
"canvasMerged": "Холст объединен",
"sentToImageToImage": "Отправить в img2img",
"sentToUnifiedCanvas": "Отправлено на Единый холст",
"parametersSet": "Параметры заданы",
"parametersNotSet": "Параметры не заданы",
"metadataLoadFailed": "Не удалось загрузить метаданные",
"serverError": "Ошибка сервера",
@@ -1505,8 +1505,7 @@
"title": "Генерация"
},
"advanced": {
"title": "Расширенные",
"options": "Опции $t(accordions.advanced.title)"
"title": "Расширенные"
},
"image": {
"title": "Изображение"

View File

@@ -315,6 +315,7 @@
"canvasMerged": "Полотно об'єднане",
"sentToImageToImage": "Надіслати до img2img",
"sentToUnifiedCanvas": "Надіслати на полотно",
"parametersSet": "Параметри задані",
"parametersNotSet": "Параметри не задані",
"metadataLoadFailed": "Не вдалося завантажити метадані",
"serverError": "Помилка сервера",

View File

@@ -487,6 +487,7 @@
"canvasMerged": "画布已合并",
"sentToImageToImage": "已发送到图生图",
"sentToUnifiedCanvas": "已发送到统一画布",
"parametersSet": "参数已设定",
"parametersNotSet": "参数未设定",
"metadataLoadFailed": "加载元数据失败",
"uploadFailedInvalidUploadDesc": "必须是单张的 PNG 或 JPEG 图片",

View File

@@ -1,7 +1,7 @@
import type { UnknownAction } from '@reduxjs/toolkit';
import { deepClone } from 'common/util/deepClone';
import { isAnyGraphBuilt } from 'features/nodes/store/actions';
import { nodeTemplatesBuilt } from 'features/nodes/store/nodesSlice';
import { cloneDeep } from 'lodash-es';
import { appInfoApi } from 'services/api/endpoints/appInfo';
import type { Graph } from 'services/api/types';
import { socketGeneratorProgress } from 'services/events/actions';
@@ -33,7 +33,7 @@ export const actionSanitizer = <A extends UnknownAction>(action: A): A => {
}
if (socketGeneratorProgress.match(action)) {
const sanitized = deepClone(action);
const sanitized = cloneDeep(action);
if (sanitized.payload.data.progress_image) {
sanitized.payload.data.progress_image.dataURL = '<Progress image omitted>';
}

View File

@@ -43,7 +43,6 @@ export const addModelInstallEventListener = (startAppListening: AppStartListenin
})
);
dispatch(api.util.invalidateTags([{ type: 'ModelConfig', id: LIST_TAG }]));
dispatch(api.util.invalidateTags([{ type: 'ModelScanFolderResults', id: LIST_TAG }]));
},
});

View File

@@ -1,5 +1,4 @@
import { deepClone } from 'common/util/deepClone';
import { merge } from 'lodash-es';
import { cloneDeep, merge } from 'lodash-es';
import { ClickScrollPlugin, OverlayScrollbars } from 'overlayscrollbars';
import type { UseOverlayScrollbarsParams } from 'overlayscrollbars-react';
@@ -23,7 +22,7 @@ export const getOverlayScrollbarsParams = (
overflowX: 'hidden' | 'scroll' = 'hidden',
overflowY: 'hidden' | 'scroll' = 'scroll'
) => {
const params = deepClone(overlayScrollbarsParams);
const params = cloneDeep(overlayScrollbarsParams);
merge(params, { options: { overflow: { y: overflowY, x: overflowX } } });
return params;
};

View File

@@ -1,15 +0,0 @@
import rfdc from 'rfdc';
const _rfdc = rfdc();
/**
* Deep-clones an object using Really Fast Deep Clone.
* This is the fastest deep clone library on Chrome, but not the fastest on FF. Still, it's much faster than lodash
* and structuredClone, so it's the best all-around choice.
*
* Simple Benchmark: https://www.measurethat.net/Benchmarks/Show/30358/0/lodash-clonedeep-vs-jsonparsejsonstringify-vs-recursive
* Repo: https://github.com/davidmarkclements/rfdc
*
* @param obj The object to deep-clone
* @returns The cloned object
*/
export const deepClone = <T>(obj: T): T => _rfdc(obj);

View File

@@ -18,7 +18,6 @@ import {
setShouldAutoSave,
setShouldCropToBoundingBoxOnSave,
setShouldDarkenOutsideBoundingBox,
setShouldFitImageSize,
setShouldInvertBrushSizeScrollDirection,
setShouldRestrictStrokesToBox,
setShouldShowCanvasDebugInfo,
@@ -49,7 +48,6 @@ const IAICanvasSettingsButtonPopover = () => {
const shouldSnapToGrid = useAppSelector((s) => s.canvas.shouldSnapToGrid);
const shouldRestrictStrokesToBox = useAppSelector((s) => s.canvas.shouldRestrictStrokesToBox);
const shouldAntialias = useAppSelector((s) => s.canvas.shouldAntialias);
const shouldFitImageSize = useAppSelector((s) => s.canvas.shouldFitImageSize);
useHotkeys(
['n'],
@@ -104,10 +102,6 @@ const IAICanvasSettingsButtonPopover = () => {
(e: ChangeEvent<HTMLInputElement>) => dispatch(setShouldAntialias(e.target.checked)),
[dispatch]
);
const handleChangeShouldFitImageSize = useCallback(
(e: ChangeEvent<HTMLInputElement>) => dispatch(setShouldFitImageSize(e.target.checked)),
[dispatch]
);
return (
<Popover>
@@ -171,10 +165,6 @@ const IAICanvasSettingsButtonPopover = () => {
<FormLabel>{t('unifiedCanvas.antialiasing')}</FormLabel>
<Checkbox isChecked={shouldAntialias} onChange={handleChangeShouldAntialias} />
</FormControl>
<FormControl>
<FormLabel>{t('unifiedCanvas.initialFitImageSize')}</FormLabel>
<Checkbox isChecked={shouldFitImageSize} onChange={handleChangeShouldFitImageSize} />
</FormControl>
</FormControlGroup>
<ClearCanvasHistoryButtonModal />
</Flex>

View File

@@ -1,7 +1,6 @@
import type { PayloadAction } from '@reduxjs/toolkit';
import { createSlice } from '@reduxjs/toolkit';
import type { PersistConfig, RootState } from 'app/store/store';
import { deepClone } from 'common/util/deepClone';
import { roundDownToMultiple, roundToMultiple } from 'common/util/roundDownToMultiple';
import calculateCoordinates from 'features/canvas/util/calculateCoordinates';
import calculateScale from 'features/canvas/util/calculateScale';
@@ -14,7 +13,7 @@ import { modelChanged } from 'features/parameters/store/generationSlice';
import type { PayloadActionWithOptimalDimension } from 'features/parameters/store/types';
import { getIsSizeOptimal, getOptimalDimension } from 'features/parameters/util/optimalDimension';
import type { IRect, Vector2d } from 'konva/lib/types';
import { clamp } from 'lodash-es';
import { clamp, cloneDeep } from 'lodash-es';
import type { RgbaColor } from 'react-colorful';
import { queueApi } from 'services/api/endpoints/queue';
import type { ImageDTO } from 'services/api/types';
@@ -37,7 +36,7 @@ import { CANVAS_GRID_SIZE_FINE } from './constants';
/**
* The maximum history length to keep in the past/future layer states.
*/
const MAX_HISTORY = 100;
const MAX_HISTORY = 128;
const initialLayerState: CanvasLayerState = {
objects: [],
@@ -66,7 +65,6 @@ const initialCanvasState: CanvasState = {
shouldAutoSave: false,
shouldCropToBoundingBoxOnSave: false,
shouldDarkenOutsideBoundingBox: false,
shouldFitImageSize: true,
shouldInvertBrushSizeScrollDirection: false,
shouldLockBoundingBox: false,
shouldPreserveMaskedArea: false,
@@ -123,7 +121,7 @@ export const canvasSlice = createSlice({
state.brushSize = action.payload;
},
clearMask: (state) => {
pushToPrevLayerStates(state);
state.pastLayerStates.push(cloneDeep(state.layerState));
state.layerState.objects = state.layerState.objects.filter((obj) => !isCanvasMaskLine(obj));
state.futureLayerStates = [];
state.shouldPreserveMaskedArea = false;
@@ -145,20 +143,12 @@ export const canvasSlice = createSlice({
reducer: (state, action: PayloadActionWithOptimalDimension<ImageDTO>) => {
const { width, height, image_name } = action.payload;
const { optimalDimension } = action.meta;
const { stageDimensions, shouldFitImageSize } = state;
const { stageDimensions } = state;
const newBoundingBoxDimensions = shouldFitImageSize
? {
width: roundDownToMultiple(width, CANVAS_GRID_SIZE_FINE),
height: roundDownToMultiple(height, CANVAS_GRID_SIZE_FINE),
}
: {
width: roundDownToMultiple(clamp(width, CANVAS_GRID_SIZE_FINE, optimalDimension), CANVAS_GRID_SIZE_FINE),
height: roundDownToMultiple(
clamp(height, CANVAS_GRID_SIZE_FINE, optimalDimension),
CANVAS_GRID_SIZE_FINE
),
};
const newBoundingBoxDimensions = {
width: roundDownToMultiple(clamp(width, CANVAS_GRID_SIZE_FINE, optimalDimension), CANVAS_GRID_SIZE_FINE),
height: roundDownToMultiple(clamp(height, CANVAS_GRID_SIZE_FINE, optimalDimension), CANVAS_GRID_SIZE_FINE),
};
const newBoundingBoxCoordinates = {
x: roundToMultiple(width / 2 - newBoundingBoxDimensions.width / 2, CANVAS_GRID_SIZE_FINE),
@@ -173,10 +163,10 @@ export const canvasSlice = createSlice({
state.boundingBoxDimensions = newBoundingBoxDimensions;
state.boundingBoxCoordinates = newBoundingBoxCoordinates;
pushToPrevLayerStates(state);
state.pastLayerStates.push(cloneDeep(state.layerState));
state.layerState = {
...deepClone(initialLayerState),
...cloneDeep(initialLayerState),
objects: [
{
kind: 'image',
@@ -271,7 +261,11 @@ export const canvasSlice = createSlice({
return;
}
pushToPrevLayerStates(state);
state.pastLayerStates.push(cloneDeep(state.layerState));
if (state.pastLayerStates.length > MAX_HISTORY) {
state.pastLayerStates.shift();
}
state.layerState.stagingArea.images.push({
kind: 'image',
@@ -285,9 +279,13 @@ export const canvasSlice = createSlice({
state.futureLayerStates = [];
},
discardStagedImages: (state) => {
pushToPrevLayerStates(state);
state.pastLayerStates.push(cloneDeep(state.layerState));
state.layerState.stagingArea = deepClone(initialLayerState.stagingArea);
if (state.pastLayerStates.length > MAX_HISTORY) {
state.pastLayerStates.shift();
}
state.layerState.stagingArea = cloneDeep(initialLayerState).stagingArea;
state.futureLayerStates = [];
state.shouldShowStagingOutline = true;
@@ -296,21 +294,18 @@ export const canvasSlice = createSlice({
},
discardStagedImage: (state) => {
const { images, selectedImageIndex } = state.layerState.stagingArea;
pushToPrevLayerStates(state);
state.pastLayerStates.push(cloneDeep(state.layerState));
if (state.pastLayerStates.length > MAX_HISTORY) {
state.pastLayerStates.shift();
}
if (!images.length) {
return;
}
images.splice(selectedImageIndex, 1);
if (images.length === 0) {
pushToPrevLayerStates(state);
state.layerState.stagingArea = deepClone(initialLayerState.stagingArea);
state.futureLayerStates = [];
state.shouldShowStagingOutline = true;
state.shouldShowStagingImage = true;
state.batchIds = [];
}
if (selectedImageIndex >= images.length) {
state.layerState.stagingArea.selectedImageIndex = images.length - 1;
}
@@ -325,7 +320,11 @@ export const canvasSlice = createSlice({
addFillRect: (state) => {
const { boundingBoxCoordinates, boundingBoxDimensions, brushColor } = state;
pushToPrevLayerStates(state);
state.pastLayerStates.push(cloneDeep(state.layerState));
if (state.pastLayerStates.length > MAX_HISTORY) {
state.pastLayerStates.shift();
}
state.layerState.objects.push({
kind: 'fillRect',
@@ -340,7 +339,11 @@ export const canvasSlice = createSlice({
addEraseRect: (state) => {
const { boundingBoxCoordinates, boundingBoxDimensions } = state;
pushToPrevLayerStates(state);
state.pastLayerStates.push(cloneDeep(state.layerState));
if (state.pastLayerStates.length > MAX_HISTORY) {
state.pastLayerStates.shift();
}
state.layerState.objects.push({
kind: 'eraseRect',
@@ -364,7 +367,11 @@ export const canvasSlice = createSlice({
// set & then spread this to only conditionally add the "color" key
const newColor = layer === 'base' && tool === 'brush' ? { color: brushColor } : {};
pushToPrevLayerStates(state);
state.pastLayerStates.push(cloneDeep(state.layerState));
if (state.pastLayerStates.length > MAX_HISTORY) {
state.pastLayerStates.shift();
}
const newLine: CanvasMaskLine | CanvasBaseLine = {
kind: 'line',
@@ -402,7 +409,11 @@ export const canvasSlice = createSlice({
return;
}
pushToFutureLayerStates(state);
state.futureLayerStates.unshift(cloneDeep(state.layerState));
if (state.futureLayerStates.length > MAX_HISTORY) {
state.futureLayerStates.pop();
}
state.layerState = targetState;
},
@@ -413,7 +424,11 @@ export const canvasSlice = createSlice({
return;
}
pushToPrevLayerStates(state);
state.pastLayerStates.push(cloneDeep(state.layerState));
if (state.pastLayerStates.length > MAX_HISTORY) {
state.pastLayerStates.shift();
}
state.layerState = targetState;
},
@@ -430,8 +445,8 @@ export const canvasSlice = createSlice({
state.shouldShowIntermediates = action.payload;
},
resetCanvas: (state) => {
pushToPrevLayerStates(state);
state.layerState = deepClone(initialLayerState);
state.pastLayerStates.push(cloneDeep(state.layerState));
state.layerState = cloneDeep(initialLayerState);
state.futureLayerStates = [];
state.batchIds = [];
state.boundingBoxCoordinates = {
@@ -525,7 +540,11 @@ export const canvasSlice = createSlice({
const { images, selectedImageIndex } = state.layerState.stagingArea;
pushToPrevLayerStates(state);
state.pastLayerStates.push(cloneDeep(state.layerState));
if (state.pastLayerStates.length > MAX_HISTORY) {
state.pastLayerStates.shift();
}
const imageToCommit = images[selectedImageIndex];
@@ -534,7 +553,7 @@ export const canvasSlice = createSlice({
...imageToCommit,
});
}
state.layerState.stagingArea = deepClone(initialLayerState.stagingArea);
state.layerState.stagingArea = cloneDeep(initialLayerState).stagingArea;
state.futureLayerStates = [];
state.shouldShowStagingOutline = true;
@@ -591,9 +610,6 @@ export const canvasSlice = createSlice({
setShouldAntialias: (state, action: PayloadAction<boolean>) => {
state.shouldAntialias = action.payload;
},
setShouldFitImageSize: (state, action: PayloadAction<boolean>) => {
state.shouldFitImageSize = action.payload;
},
setShouldCropToBoundingBoxOnSave: (state, action: PayloadAction<boolean>) => {
state.shouldCropToBoundingBoxOnSave = action.payload;
},
@@ -607,7 +623,7 @@ export const canvasSlice = createSlice({
};
},
setMergedCanvas: (state, action: PayloadAction<CanvasImage>) => {
pushToPrevLayerStates(state);
state.pastLayerStates.push(cloneDeep(state.layerState));
state.futureLayerStates = [];
@@ -704,7 +720,6 @@ export const {
setShouldRestrictStrokesToBox,
stagingAreaInitialized,
setShouldAntialias,
setShouldFitImageSize,
canvasResized,
canvasBatchIdAdded,
canvasBatchIdsReset,
@@ -728,17 +743,3 @@ export const canvasPersistConfig: PersistConfig<CanvasState> = {
migrate: migrateCanvasState,
persistDenylist: [],
};
const pushToPrevLayerStates = (state: CanvasState) => {
state.pastLayerStates.push(deepClone(state.layerState));
if (state.pastLayerStates.length > MAX_HISTORY) {
state.pastLayerStates = state.pastLayerStates.slice(-MAX_HISTORY);
}
};
const pushToFutureLayerStates = (state: CanvasState) => {
state.futureLayerStates.unshift(deepClone(state.layerState));
if (state.futureLayerStates.length > MAX_HISTORY) {
state.futureLayerStates = state.futureLayerStates.slice(0, MAX_HISTORY);
}
};

View File

@@ -120,7 +120,6 @@ export interface CanvasState {
shouldAutoSave: boolean;
shouldCropToBoundingBoxOnSave: boolean;
shouldDarkenOutsideBoundingBox: boolean;
shouldFitImageSize: boolean;
shouldInvertBrushSizeScrollDirection: boolean;
shouldLockBoundingBox: boolean;
shouldPreserveMaskedArea: boolean;

View File

@@ -1,18 +1,12 @@
import type { ComboboxOnChange, ComboboxOption } from '@invoke-ai/ui-library';
import { Combobox, Flex, FormControl, Tooltip } from '@invoke-ai/ui-library';
import { Combobox, FormControl, Tooltip } from '@invoke-ai/ui-library';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useGroupedModelCombobox } from 'common/hooks/useGroupedModelCombobox';
import { useControlAdapterCLIPVisionModel } from 'features/controlAdapters/hooks/useControlAdapterCLIPVisionModel';
import { useControlAdapterIsEnabled } from 'features/controlAdapters/hooks/useControlAdapterIsEnabled';
import { useControlAdapterModel } from 'features/controlAdapters/hooks/useControlAdapterModel';
import { useControlAdapterModels } from 'features/controlAdapters/hooks/useControlAdapterModels';
import { useControlAdapterType } from 'features/controlAdapters/hooks/useControlAdapterType';
import {
controlAdapterCLIPVisionModelChanged,
controlAdapterModelChanged,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import type { CLIPVisionModel } from 'features/controlAdapters/store/types';
import { controlAdapterModelChanged } from 'features/controlAdapters/store/controlAdaptersSlice';
import { selectGenerationSlice } from 'features/parameters/store/generationSlice';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
@@ -35,7 +29,6 @@ const ParamControlAdapterModel = ({ id }: ParamControlAdapterModelProps) => {
const { modelConfig } = useControlAdapterModel(id);
const dispatch = useAppDispatch();
const currentBaseModel = useAppSelector((s) => s.generation.model?.base);
const currentCLIPVisionModel = useControlAdapterCLIPVisionModel(id);
const mainModel = useAppSelector(selectMainModel);
const { t } = useTranslation();
@@ -56,16 +49,6 @@ const ParamControlAdapterModel = ({ id }: ParamControlAdapterModelProps) => {
[dispatch, id]
);
const onCLIPVisionModelChange = useCallback<ComboboxOnChange>(
(v) => {
if (!v?.value) {
return;
}
dispatch(controlAdapterCLIPVisionModelChanged({ id, clipVisionModel: v.value as CLIPVisionModel }));
},
[dispatch, id]
);
const selectedModel = useMemo(
() => (modelConfig && controlAdapterType ? { ...modelConfig, model_type: controlAdapterType } : null),
[controlAdapterType, modelConfig]
@@ -88,51 +71,18 @@ const ParamControlAdapterModel = ({ id }: ParamControlAdapterModelProps) => {
isLoading,
});
const clipVisionOptions = useMemo<ComboboxOption[]>(
() => [
{ label: 'ViT-H', value: 'ViT-H' },
{ label: 'ViT-G', value: 'ViT-G' },
],
[]
);
const clipVisionModel = useMemo(
() => clipVisionOptions.find((o) => o.value === currentCLIPVisionModel),
[clipVisionOptions, currentCLIPVisionModel]
);
return (
<Flex sx={{ gap: 2 }}>
<Tooltip label={value?.description}>
<FormControl
isDisabled={!isEnabled}
isInvalid={!value || mainModel?.base !== modelConfig?.base}
sx={{ width: '100%' }}
>
<Combobox
options={options}
placeholder={t('controlnet.selectModel')}
value={value}
onChange={onChange}
noOptionsMessage={noOptionsMessage}
/>
</FormControl>
</Tooltip>
{modelConfig?.type === 'ip_adapter' && modelConfig.format === 'checkpoint' && (
<FormControl
isDisabled={!isEnabled}
isInvalid={!value || mainModel?.base !== modelConfig?.base}
sx={{ width: 'max-content', minWidth: 28 }}
>
<Combobox
options={clipVisionOptions}
placeholder={t('controlnet.selectCLIPVisionModel')}
value={clipVisionModel}
onChange={onCLIPVisionModelChange}
/>
</FormControl>
)}
</Flex>
<Tooltip label={value?.description}>
<FormControl isDisabled={!isEnabled} isInvalid={!value || mainModel?.base !== modelConfig?.base}>
<Combobox
options={options}
placeholder={t('controlnet.selectModel')}
value={value}
onChange={onChange}
noOptionsMessage={noOptionsMessage}
/>
</FormControl>
</Tooltip>
);
};

View File

@@ -1,24 +0,0 @@
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import {
selectControlAdapterById,
selectControlAdaptersSlice,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { useMemo } from 'react';
export const useControlAdapterCLIPVisionModel = (id: string) => {
const selector = useMemo(
() =>
createMemoizedSelector(selectControlAdaptersSlice, (controlAdapters) => {
const cn = selectControlAdapterById(controlAdapters, id);
if (cn && cn?.type === 'ip_adapter') {
return cn.clipVisionModel;
}
}),
[id]
);
const clipVisionModel = useAppSelector(selector);
return clipVisionModel;
};

View File

@@ -2,11 +2,10 @@ import type { PayloadAction, Update } from '@reduxjs/toolkit';
import { createEntityAdapter, createSlice, isAnyOf } from '@reduxjs/toolkit';
import { getSelectorsOptions } from 'app/store/createMemoizedSelector';
import type { PersistConfig, RootState } from 'app/store/store';
import { deepClone } from 'common/util/deepClone';
import { buildControlAdapter } from 'features/controlAdapters/util/buildControlAdapter';
import { buildControlAdapterProcessor } from 'features/controlAdapters/util/buildControlAdapterProcessor';
import { zModelIdentifierField } from 'features/nodes/types/common';
import { merge, uniq } from 'lodash-es';
import { cloneDeep, merge, uniq } from 'lodash-es';
import type { ControlNetModelConfig, IPAdapterModelConfig, T2IAdapterModelConfig } from 'services/api/types';
import { socketInvocationError } from 'services/events/actions';
import { v4 as uuidv4 } from 'uuid';
@@ -14,7 +13,6 @@ import { v4 as uuidv4 } from 'uuid';
import { controlAdapterImageProcessed } from './actions';
import { CONTROLNET_PROCESSORS } from './constants';
import type {
CLIPVisionModel,
ControlAdapterConfig,
ControlAdapterProcessorType,
ControlAdaptersState,
@@ -116,7 +114,7 @@ export const controlAdaptersSlice = createSlice({
if (!controlAdapter) {
return;
}
const newControlAdapter = merge(deepClone(controlAdapter), {
const newControlAdapter = merge(cloneDeep(controlAdapter), {
id: newId,
isEnabled: true,
});
@@ -245,13 +243,6 @@ export const controlAdaptersSlice = createSlice({
}
caAdapter.updateOne(state, { id, changes: { controlMode } });
},
controlAdapterCLIPVisionModelChanged: (
state,
action: PayloadAction<{ id: string; clipVisionModel: CLIPVisionModel }>
) => {
const { id, clipVisionModel } = action.payload;
caAdapter.updateOne(state, { id, changes: { clipVisionModel } });
},
controlAdapterResizeModeChanged: (
state,
action: PayloadAction<{
@@ -279,7 +270,7 @@ export const controlAdaptersSlice = createSlice({
return;
}
const processorNode = merge(deepClone(cn.processorNode), params);
const processorNode = merge(cloneDeep(cn.processorNode), params);
caAdapter.updateOne(state, {
id,
@@ -302,7 +293,7 @@ export const controlAdaptersSlice = createSlice({
return;
}
const processorNode = deepClone(
const processorNode = cloneDeep(
CONTROLNET_PROCESSORS[processorType].buildDefaults(cn.model?.base)
) as RequiredControlAdapterProcessorNode;
@@ -342,7 +333,7 @@ export const controlAdaptersSlice = createSlice({
caAdapter.updateOne(state, update);
},
controlAdaptersReset: () => {
return deepClone(initialControlAdaptersState);
return cloneDeep(initialControlAdaptersState);
},
pendingControlImagesCleared: (state) => {
state.pendingControlImages = [];
@@ -389,7 +380,6 @@ export const {
controlAdapterProcessedImageChanged,
controlAdapterIsEnabledChanged,
controlAdapterModelChanged,
controlAdapterCLIPVisionModelChanged,
controlAdapterWeightChanged,
controlAdapterBeginStepPctChanged,
controlAdapterEndStepPctChanged,
@@ -416,7 +406,7 @@ const migrateControlAdaptersState = (state: any): any => {
state._version = 1;
}
if (state._version === 1) {
state = deepClone(initialControlAdaptersState);
state = cloneDeep(initialControlAdaptersState);
}
return state;
};

View File

@@ -243,15 +243,12 @@ export type T2IAdapterConfig = {
shouldAutoConfig: boolean;
};
export type CLIPVisionModel = 'ViT-H' | 'ViT-G';
export type IPAdapterConfig = {
type: 'ip_adapter';
id: string;
isEnabled: boolean;
controlImage: string | null;
model: ParameterIPAdapterModel | null;
clipVisionModel: CLIPVisionModel;
weight: number;
beginStepPct: number;
endStepPct: number;

View File

@@ -1,4 +1,3 @@
import { deepClone } from 'common/util/deepClone';
import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants';
import type {
ControlAdapterConfig,
@@ -8,7 +7,7 @@ import type {
RequiredCannyImageProcessorInvocation,
T2IAdapterConfig,
} from 'features/controlAdapters/store/types';
import { merge } from 'lodash-es';
import { cloneDeep, merge } from 'lodash-es';
export const initialControlNet: Omit<ControlNetConfig, 'id'> = {
type: 'controlnet',
@@ -46,7 +45,6 @@ export const initialIPAdapter: Omit<IPAdapterConfig, 'id'> = {
isEnabled: true,
controlImage: null,
model: null,
clipVisionModel: 'ViT-H',
weight: 1,
beginStepPct: 0,
endStepPct: 1,
@@ -59,11 +57,11 @@ export const buildControlAdapter = (
): ControlAdapterConfig => {
switch (type) {
case 'controlnet':
return merge(deepClone(initialControlNet), { id, ...overrides });
return merge(cloneDeep(initialControlNet), { id, ...overrides });
case 't2i_adapter':
return merge(deepClone(initialT2IAdapter), { id, ...overrides });
return merge(cloneDeep(initialT2IAdapter), { id, ...overrides });
case 'ip_adapter':
return merge(deepClone(initialIPAdapter), { id, ...overrides });
return merge(cloneDeep(initialIPAdapter), { id, ...overrides });
default:
throw new Error(`Unknown control adapter type: ${type}`);
}

View File

@@ -33,7 +33,6 @@ const ImageMetadataActions = (props: Props) => {
<MetadataItem metadata={metadata} handlers={handlers.scheduler} />
<MetadataItem metadata={metadata} handlers={handlers.cfgScale} />
<MetadataItem metadata={metadata} handlers={handlers.cfgRescaleMultiplier} />
<MetadataItem metadata={metadata} handlers={handlers.initialImage} />
<MetadataItem metadata={metadata} handlers={handlers.strength} />
<MetadataItem metadata={metadata} handlers={handlers.hrfEnabled} />
<MetadataItem metadata={metadata} handlers={handlers.hrfMethod} />

View File

@@ -1,9 +1,9 @@
import type { PayloadAction } from '@reduxjs/toolkit';
import { createSlice } from '@reduxjs/toolkit';
import type { PersistConfig, RootState } from 'app/store/store';
import { deepClone } from 'common/util/deepClone';
import { zModelIdentifierField } from 'features/nodes/types/common';
import type { ParameterLoRAModel } from 'features/parameters/types/parameterSchemas';
import { cloneDeep } from 'lodash-es';
import type { LoRAModelConfig } from 'services/api/types';
export type LoRA = {
@@ -58,7 +58,7 @@ export const loraSlice = createSlice({
}
lora.isEnabled = isEnabled;
},
lorasReset: () => deepClone(initialLoraState),
lorasReset: () => cloneDeep(initialLoraState),
},
});
@@ -74,7 +74,7 @@ const migrateLoRAState = (state: any): any => {
}
if (state._version === 1) {
// Model type has changed, so we need to reset the state - too risky to migrate
state = deepClone(initialLoraState);
state = cloneDeep(initialLoraState);
}
return state;
};

View File

@@ -189,12 +189,6 @@ export const handlers = {
recaller: recallers.cfgScale,
}),
height: buildHandlers({ getLabel: () => t('metadata.height'), parser: parsers.height, recaller: recallers.height }),
initialImage: buildHandlers({
getLabel: () => t('metadata.initImage'),
parser: parsers.initialImage,
recaller: recallers.initialImage,
renderValue: async (imageDTO) => imageDTO.image_name,
}),
negativePrompt: buildHandlers({
getLabel: () => t('metadata.negativePrompt'),
parser: parsers.negativePrompt,
@@ -411,6 +405,6 @@ export const parseAndRecallAllMetadata = async (metadata: unknown, skip: (keyof
})
);
if (results.some((result) => result.status === 'fulfilled')) {
parameterSetToast(t('toast.parameters'));
parameterSetToast(t('toast.parametersSet'));
}
};

View File

@@ -1,4 +1,3 @@
import { getStore } from 'app/store/nanostores/store';
import {
initialControlNet,
initialIPAdapter,
@@ -58,8 +57,6 @@ import {
isParameterWidth,
} from 'features/parameters/types/parameterSchemas';
import { get, isArray, isString } from 'lodash-es';
import { imagesApi } from 'services/api/endpoints/images';
import type { ImageDTO } from 'services/api/types';
import {
isControlNetModelConfig,
isIPAdapterModelConfig,
@@ -138,14 +135,6 @@ const parseCFGRescaleMultiplier: MetadataParseFunc<ParameterCFGRescaleMultiplier
const parseScheduler: MetadataParseFunc<ParameterScheduler> = (metadata) =>
getProperty(metadata, 'scheduler', isParameterScheduler);
const parseInitialImage: MetadataParseFunc<ImageDTO> = async (metadata) => {
const imageName = await getProperty(metadata, 'init_image', isString);
const imageDTORequest = getStore().dispatch(imagesApi.endpoints.getImageDTO.initiate(imageName));
const imageDTO = await imageDTORequest.unwrap();
imageDTORequest.unsubscribe();
return imageDTO;
};
const parseWidth: MetadataParseFunc<ParameterWidth> = (metadata) => getProperty(metadata, 'width', isParameterWidth);
const parseHeight: MetadataParseFunc<ParameterHeight> = (metadata) =>
@@ -383,7 +372,6 @@ const parseIPAdapter: MetadataParseFunc<IPAdapterConfigMetadata> = async (metada
type: 'ip_adapter',
isEnabled: true,
model: zModelIdentifierField.parse(ipAdapterModel),
clipVisionModel: 'ViT-H',
controlImage: image?.image_name ?? null,
weight: weight ?? initialIPAdapter.weight,
beginStepPct: begin_step_percent ?? initialIPAdapter.beginStepPct,
@@ -413,7 +401,6 @@ export const parsers = {
cfgScale: parseCFGScale,
cfgRescaleMultiplier: parseCFGRescaleMultiplier,
scheduler: parseScheduler,
initialImage: parseInitialImage,
width: parseWidth,
height: parseHeight,
steps: parseSteps,
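
The `parseInitialImage` parser removed in this hunk fetched an `ImageDTO` imperatively through RTK Query. A sketch of that initiate/unwrap/unsubscribe pattern, using the same imports that appear in the hunk:

```ts
import { getStore } from 'app/store/nanostores/store';
import { imagesApi } from 'services/api/endpoints/images';
import type { ImageDTO } from 'services/api/types';

// Dispatch the query, unwrap the result, then unsubscribe so the cache
// entry can be released once the DTO has been read.
const fetchImageDTO = async (imageName: string): Promise<ImageDTO> => {
  const request = getStore().dispatch(imagesApi.endpoints.getImageDTO.initiate(imageName));
  try {
    return await request.unwrap();
  } finally {
    request.unsubscribe();
  }
};
```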

View File

@@ -17,7 +17,6 @@ import type {
import { modelSelected } from 'features/parameters/store/actions';
import {
heightRecalled,
initialImageChanged,
setCfgRescaleMultiplier,
setCfgScale,
setImg2imgStrength,
@@ -62,7 +61,6 @@ import {
setRefinerStart,
setRefinerSteps,
} from 'features/sdxl/store/sdxlSlice';
import type { ImageDTO } from 'services/api/types';
const recallPositivePrompt: MetadataRecallFunc<ParameterPositivePrompt> = (positivePrompt) => {
getStore().dispatch(setPositivePrompt(positivePrompt));
@@ -96,10 +94,6 @@ const recallScheduler: MetadataRecallFunc<ParameterScheduler> = (scheduler) => {
getStore().dispatch(setScheduler(scheduler));
};
const recallInitialImage: MetadataRecallFunc<ImageDTO> = async (imageDTO) => {
getStore().dispatch(initialImageChanged(imageDTO));
};
const recallWidth: MetadataRecallFunc<ParameterWidth> = (width) => {
getStore().dispatch(widthRecalled(width));
};
@@ -241,7 +235,6 @@ export const recallers = {
cfgScale: recallCFGScale,
cfgRescaleMultiplier: recallCFGRescaleMultiplier,
scheduler: recallScheduler,
initialImage: recallInitialImage,
width: recallWidth,
height: recallHeight,
steps: recallSteps,

View File

@@ -3,7 +3,7 @@ import { createSlice } from '@reduxjs/toolkit';
import type { PersistConfig } from 'app/store/store';
import type { ModelType } from 'services/api/types';
export type FilterableModelType = Exclude<ModelType, 'onnx' | 'clip_vision'> | 'refiner';
export type FilterableModelType = Exclude<ModelType, 'onnx' | 'clip_vision'>;
type ModelManagerState = {
_version: 1;
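
`FilterableModelType` narrows the backend `ModelType` union and, in one side of this hunk, layers a UI-only `'refiner'` filter on top of it. A simplified, self-contained illustration; the `ModelType` union below is a stand-in for the generated API type:

```ts
// Simplified stand-in for the generated ModelType union.
type ModelType = 'main' | 'lora' | 'embedding' | 'controlnet' | 'onnx' | 'clip_vision';

// Drop types that never appear in the UI filter, then add a UI-only 'refiner' entry.
type FilterableModelType = Exclude<ModelType, 'onnx' | 'clip_vision'> | 'refiner';

const refinerFilter: FilterableModelType = 'refiner'; // valid for the filter, not a real ModelType
```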

View File

@@ -87,10 +87,6 @@ export const ModelInstallQueueItem = (props: ModelListItemProps) => {
}, [installJob.source]);
const progressValue = useMemo(() => {
if (installJob.status === 'completed' || installJob.status === 'error' || installJob.status === 'cancelled') {
return 100;
}
if (isNil(installJob.bytes) || isNil(installJob.total_bytes)) {
return null;
}
@@ -100,7 +96,7 @@ export const ModelInstallQueueItem = (props: ModelListItemProps) => {
}
return (installJob.bytes / installJob.total_bytes) * 100;
}, [installJob.bytes, installJob.status, installJob.total_bytes]);
}, [installJob.bytes, installJob.total_bytes]);
return (
<Flex gap={3} w="full" alignItems="center">
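
The `progressValue` memo above derives a percentage from the install job's byte counts, with missing counts rendered as an indeterminate bar. A self-contained sketch of that calculation; the job type is a simplified stand-in for the API's install-job model:

```ts
type InstallJobLike = { bytes?: number | null; total_bytes?: number | null };

// Unknown byte counts yield null, which the UI treats as indeterminate progress.
const progressOf = (job: InstallJobLike): number | null => {
  if (job.bytes == null || job.total_bytes == null) {
    return null;
  }
  return (job.bytes / job.total_bytes) * 100;
};
```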

View File

@@ -1,19 +1,48 @@
import { Badge, Box, Flex, IconButton, Text } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiPlusBold } from 'react-icons/pi';
import type { ScanFolderResponse } from 'services/api/endpoints/models';
import { useInstallModelMutation } from 'services/api/endpoints/models';
type Props = {
result: ScanFolderResponse[number];
installModel: (source: string) => void;
};
export const ScanModelResultItem = ({ result, installModel }: Props) => {
export const ScanModelResultItem = ({ result }: Props) => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const handleInstall = useCallback(() => {
installModel(result.path);
}, [installModel, result]);
const [installModel] = useInstallModelMutation();
const handleQuickAdd = useCallback(() => {
installModel({ source: result.path })
.unwrap()
.then((_) => {
dispatch(
addToast(
makeToast({
title: t('toast.modelAddedSimple'),
status: 'success',
})
)
);
})
.catch((error) => {
if (error) {
dispatch(
addToast(
makeToast({
title: `${error.data.detail} `,
status: 'error',
})
)
);
}
});
}, [installModel, result, dispatch, t]);
return (
<Flex alignItems="center" justifyContent="space-between" w="100%" gap={3}>
@@ -25,7 +54,7 @@ export const ScanModelResultItem = ({ result, installModel }: Props) => {
{result.is_installed ? (
<Badge>{t('common.installed')}</Badge>
) : (
<IconButton aria-label={t('modelManager.install')} icon={<PiPlusBold />} onClick={handleInstall} size="sm" />
<IconButton aria-label={t('modelManager.install')} icon={<PiPlusBold />} onClick={handleQuickAdd} size="sm" />
)}
</Box>
</Flex>
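
The quick-add handler above submits the mutation, unwraps it, and maps success or failure onto a toast. A sketch of that pattern with the toast dispatch abstracted behind a hypothetical `notify` callback:

```ts
type ToastStatus = 'success' | 'error';

// `install` mirrors an RTK Query mutation trigger; `notify` stands in for
// the dispatch(addToast(makeToast(...))) calls in the component above.
const quickInstall = async (
  install: (arg: { source: string }) => { unwrap: () => Promise<unknown> },
  source: string,
  notify: (title: string, status: ToastStatus) => void
): Promise<void> => {
  try {
    await install({ source }).unwrap();
    notify('Model added', 'success');
  } catch (error) {
    const detail = (error as { data?: { detail?: string } })?.data?.detail;
    notify(detail ?? 'Install failed', 'error');
  }
};
```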

View File

@@ -1,10 +1,7 @@
import {
Button,
Checkbox,
Divider,
Flex,
FormControl,
FormLabel,
Heading,
IconButton,
Input,
@@ -15,7 +12,7 @@ import { useAppDispatch } from 'app/store/storeHooks';
import ScrollableContent from 'common/components/OverlayScrollbars/ScrollableContent';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import type { ChangeEvent, ChangeEventHandler } from 'react';
import type { ChangeEventHandler } from 'react';
import { useCallback, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { PiXBold } from 'react-icons/pi';
@@ -31,7 +28,7 @@ export const ScanModelsResults = ({ results }: ScanModelResultsProps) => {
const { t } = useTranslation();
const [searchTerm, setSearchTerm] = useState('');
const dispatch = useAppDispatch();
const [inplace, setInplace] = useState(true);
const [installModel] = useInstallModelMutation();
const filteredResults = useMemo(() => {
@@ -45,10 +42,6 @@ export const ScanModelsResults = ({ results }: ScanModelResultsProps) => {
setSearchTerm(e.target.value.trim());
}, []);
const onChangeInplace = useCallback((e: ChangeEvent<HTMLInputElement>) => {
setInplace(e.target.checked);
}, []);
const clearSearch = useCallback(() => {
setSearchTerm('');
}, []);
@@ -58,7 +51,7 @@ export const ScanModelsResults = ({ results }: ScanModelResultsProps) => {
if (result.is_installed) {
continue;
}
installModel({ source: result.path, inplace })
installModel({ source: result.path })
.unwrap()
.then((_) => {
dispatch(
@@ -83,37 +76,7 @@ export const ScanModelsResults = ({ results }: ScanModelResultsProps) => {
}
});
}
}, [filteredResults, installModel, inplace, dispatch, t]);
const handleInstallOne = useCallback(
(source: string) => {
installModel({ source, inplace })
.unwrap()
.then((_) => {
dispatch(
addToast(
makeToast({
title: t('toast.modelAddedSimple'),
status: 'success',
})
)
);
})
.catch((error) => {
if (error) {
dispatch(
addToast(
makeToast({
title: `${error.data.detail} `,
status: 'error',
})
)
);
}
});
},
[installModel, inplace, dispatch, t]
);
}, [installModel, filteredResults, dispatch, t]);
return (
<>
@@ -122,10 +85,6 @@ export const ScanModelsResults = ({ results }: ScanModelResultsProps) => {
<Flex justifyContent="space-between" alignItems="center">
<Heading size="sm">{t('modelManager.scanResults')}</Heading>
<Flex alignItems="center" gap={3}>
<FormControl w="min-content">
<FormLabel m={0}>{t('modelManager.inplaceInstall')}</FormLabel>
<Checkbox isChecked={inplace} onChange={onChangeInplace} size="md" />
</FormControl>
<Button size="sm" onClick={handleAddAll} isDisabled={filteredResults.length === 0}>
{t('modelManager.installAll')}
</Button>
@@ -157,7 +116,7 @@ export const ScanModelsResults = ({ results }: ScanModelResultsProps) => {
<ScrollableContent>
<Flex flexDir="column" gap={3}>
{filteredResults.map((result) => (
<ScanModelResultItem key={result.path} result={result} installModel={handleInstallOne} />
<ScanModelResultItem key={result.path} result={result} />
))}
</Flex>
</ScrollableContent>
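
`handleAddAll` above walks the filtered scan results, skips anything already installed, and submits the rest to the install mutation. A self-contained sketch of that loop; the result type is a simplified stand-in for `ScanFolderResponse` entries:

```ts
type ScanResultLike = { path: string; is_installed: boolean };

const installAll = (results: ScanResultLike[], install: (arg: { source: string }) => void): void => {
  for (const result of results) {
    if (result.is_installed) {
      continue; // already registered, nothing to do
    }
    install({ source: result.path });
  }
};
```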

View File

@@ -1,7 +1,6 @@
import { Flex, Text } from '@invoke-ai/ui-library';
import { Flex } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import ScrollableContent from 'common/components/OverlayScrollbars/ScrollableContent';
import type { FilterableModelType } from 'features/modelManagerV2/store/modelManagerV2Slice';
import { memo, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import {
@@ -10,11 +9,10 @@ import {
useIPAdapterModels,
useLoRAModels,
useMainModels,
useRefinerModels,
useT2IAdapterModels,
useVAEModels,
} from 'services/api/hooks/modelsByType';
import type { AnyModelConfig } from 'services/api/types';
import type { AnyModelConfig, ModelType } from 'services/api/types';
import { FetchingModelsLoader } from './FetchingModelsLoader';
import { ModelListWrapper } from './ModelListWrapper';
@@ -29,12 +27,6 @@ const ModelList = () => {
[mainModels, searchTerm, filteredModelType]
);
const [refinerModels, { isLoading: isLoadingRefinerModels }] = useRefinerModels();
const filteredRefinerModels = useMemo(
() => modelsFilter(refinerModels, searchTerm, filteredModelType),
[refinerModels, searchTerm, filteredModelType]
);
const [loraModels, { isLoading: isLoadingLoRAModels }] = useLoRAModels();
const filteredLoRAModels = useMemo(
() => modelsFilter(loraModels, searchTerm, filteredModelType),
@@ -71,28 +63,6 @@ const ModelList = () => {
[vaeModels, searchTerm, filteredModelType]
);
const totalFilteredModels = useMemo(() => {
return (
filteredMainModels.length +
filteredRefinerModels.length +
filteredLoRAModels.length +
filteredEmbeddingModels.length +
filteredControlNetModels.length +
filteredT2IAdapterModels.length +
filteredIPAdapterModels.length +
filteredVAEModels.length
);
}, [
filteredControlNetModels.length,
filteredEmbeddingModels.length,
filteredIPAdapterModels.length,
filteredLoRAModels.length,
filteredMainModels.length,
filteredRefinerModels.length,
filteredT2IAdapterModels.length,
filteredVAEModels.length,
]);
return (
<ScrollableContent>
<Flex flexDirection="column" w="full" h="full" gap={4}>
@@ -101,11 +71,6 @@ const ModelList = () => {
{!isLoadingMainModels && filteredMainModels.length > 0 && (
<ModelListWrapper title={t('modelManager.main')} modelList={filteredMainModels} key="main" />
)}
{/* Refiner Model List */}
{isLoadingRefinerModels && <FetchingModelsLoader loadingMessage="Loading Refiner Models..." />}
{!isLoadingRefinerModels && filteredRefinerModels.length > 0 && (
<ModelListWrapper title={t('sdxl.refiner')} modelList={filteredRefinerModels} key="refiner" />
)}
{/* LoRAs List */}
{isLoadingLoRAModels && <FetchingModelsLoader loadingMessage="Loading LoRAs..." />}
{!isLoadingLoRAModels && filteredLoRAModels.length > 0 && (
@@ -143,11 +108,6 @@ const ModelList = () => {
{!isLoadingT2IAdapterModels && filteredT2IAdapterModels.length > 0 && (
<ModelListWrapper title={t('common.t2iAdapter')} modelList={filteredT2IAdapterModels} key="t2i-adapters" />
)}
{totalFilteredModels === 0 && (
<Flex w="full" h="full" alignItems="center" justifyContent="center">
<Text>{t('modelManager.noMatchingModels')}</Text>
</Flex>
)}
</Flex>
</ScrollableContent>
);
@@ -158,24 +118,12 @@ export default memo(ModelList);
const modelsFilter = <T extends AnyModelConfig>(
data: T[],
nameFilter: string,
filteredModelType: FilterableModelType | null
filteredModelType: ModelType | null
): T[] => {
return data.filter((model) => {
const matchesFilter = model.name.toLowerCase().includes(nameFilter.toLowerCase());
const matchesType = getMatchesType(model, filteredModelType);
const matchesType = filteredModelType ? model.type === filteredModelType : true;
return matchesFilter && matchesType;
});
};
const getMatchesType = (modelConfig: AnyModelConfig, filteredModelType: FilterableModelType | null): boolean => {
if (filteredModelType === 'refiner') {
return modelConfig.base === 'sdxl-refiner';
}
if (filteredModelType === 'main' && modelConfig.base === 'sdxl-refiner') {
return false;
}
return filteredModelType ? modelConfig.type === filteredModelType : true;
};
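
`getMatchesType` in the hunk above resolves the UI-only `'refiner'` filter against the model's base rather than its type, and keeps refiners out of the plain `'main'` list. A simplified, self-contained version of that check:

```ts
type ModelConfigLike = { type: string; base: string };

const matchesType = (model: ModelConfigLike, filter: string | null): boolean => {
  if (filter === 'refiner') {
    return model.base === 'sdxl-refiner';
  }
  if (filter === 'main' && model.base === 'sdxl-refiner') {
    return false;
  }
  return filter ? model.type === filter : true;
};

matchesType({ type: 'main', base: 'sdxl-refiner' }, 'refiner'); // true
matchesType({ type: 'main', base: 'sdxl-refiner' }, 'main'); // false
```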

View File

@@ -90,13 +90,11 @@ const ModelListItem = (props: ModelListItemProps) => {
cursor="pointer"
onClick={handleSelectModel}
>
<Flex gap={2} w="full" h="full" minW={0}>
<Flex gap={2} w="full" h="full">
<ModelImage image_url={model.cover_image} />
<Flex gap={1} alignItems="flex-start" flexDir="column" w="full" minW={0}>
<Flex gap={1} alignItems="flex-start" flexDir="column" w="full">
<Flex gap={2} w="full" alignItems="flex-start">
<Text fontWeight="semibold" noOfLines={1} wordBreak="break-all">
{model.name}
</Text>
<Text fontWeight="semibold">{model.name}</Text>
<Spacer />
</Flex>
<Text variant="subtext" noOfLines={1}>

View File

@@ -13,7 +13,6 @@ export const ModelTypeFilter = () => {
const MODEL_TYPE_LABELS: Record<FilterableModelType, string> = useMemo(
() => ({
main: t('modelManager.main'),
refiner: t('sdxl.refiner'),
lora: 'LoRA',
embedding: t('modelManager.textualInversions'),
controlnet: 'ControlNet',

View File

@@ -87,9 +87,9 @@ export const Model = () => {
<Flex flexDir="column" gap={4}>
<Flex alignItems="flex-start" gap={4}>
<ModelImageUpload model_key={selectedModelKey} model_image={data.cover_image} />
<Flex flexDir="column" gap={1} flexGrow={1} minW={0}>
<Flex flexDir="column" gap={1} flexGrow={1}>
<Flex gap={2}>
<Heading as="h2" fontSize="lg" noOfLines={1} wordBreak="break-all">
<Heading as="h2" fontSize="lg">
{data.name}
</Heading>
<Spacer />
@@ -114,7 +114,7 @@ export const Model = () => {
)}
</Flex>
{data.source && (
<Text variant="subtext" noOfLines={1} wordBreak="break-all">
<Text variant="subtext">
{t('modelManager.source')}: {data?.source}
</Text>
)}

View File

@@ -9,9 +9,7 @@ export const ModelAttrView = ({ label, value }: Props) => {
return (
<FormControl flexDir="column" alignItems="flex-start" gap={0}>
<FormLabel>{label}</FormLabel>
<Text fontSize="md" noOfLines={1} wordBreak="break-all">
{value || '-'}
</Text>
<Text fontSize="md">{value || '-'}</Text>
</FormControl>
);
};

View File

@@ -53,7 +53,7 @@ export const ModelView = () => {
</>
)}
{data.type === 'ip_adapter' && data.format === 'invokeai' && (
{data.type === 'ip_adapter' && (
<Flex gap={2}>
<ModelAttrView label={t('modelManager.imageEncoderModelId')} value={data.image_encoder_model_id} />
</Flex>

View File

@@ -1,7 +1,6 @@
import type { PayloadAction } from '@reduxjs/toolkit';
import { createSlice, isAnyOf } from '@reduxjs/toolkit';
import type { PersistConfig, RootState } from 'app/store/store';
import { deepClone } from 'common/util/deepClone';
import { workflowLoaded } from 'features/nodes/store/actions';
import { SHARED_NODE_PROPERTIES } from 'features/nodes/types/constants';
import type {
@@ -45,7 +44,7 @@ import {
} from 'features/nodes/types/field';
import type { AnyNode, InvocationTemplate, NodeExecutionState } from 'features/nodes/types/invocation';
import { isInvocationNode, isNotesNode, zNodeStatus } from 'features/nodes/types/invocation';
import { forEach } from 'lodash-es';
import { cloneDeep, forEach } from 'lodash-es';
import type {
Connection,
Edge,
@@ -572,23 +571,8 @@ export const nodesSlice = createSlice({
);
},
selectionCopied: (state) => {
const nodesToCopy: AnyNode[] = [];
const edgesToCopy: Edge[] = [];
for (const node of state.nodes) {
if (node.selected) {
nodesToCopy.push(deepClone(node));
}
}
for (const edge of state.edges) {
if (edge.selected) {
edgesToCopy.push(deepClone(edge));
}
}
state.nodesToCopy = nodesToCopy;
state.edgesToCopy = edgesToCopy;
state.nodesToCopy = state.nodes.filter((n) => n.selected).map(cloneDeep);
state.edgesToCopy = state.edges.filter((e) => e.selected).map(cloneDeep);
if (state.nodesToCopy.length > 0) {
const averagePosition = { x: 0, y: 0 };
@@ -610,21 +594,11 @@ export const nodesSlice = createSlice({
},
selectionPasted: (state, action: PayloadAction<{ cursorPosition?: XYPosition }>) => {
const { cursorPosition } = action.payload;
const newNodes: AnyNode[] = [];
for (const node of state.nodesToCopy) {
newNodes.push(deepClone(node));
}
const newNodes = state.nodesToCopy.map(cloneDeep);
const oldNodeIds = newNodes.map((n) => n.data.id);
const newEdges: Edge[] = [];
for (const edge of state.edgesToCopy) {
if (oldNodeIds.includes(edge.source) && oldNodeIds.includes(edge.target)) {
newEdges.push(deepClone(edge));
}
}
const newEdges = state.edgesToCopy
.filter((e) => oldNodeIds.includes(e.source) && oldNodeIds.includes(e.target))
.map(cloneDeep);
newEdges.forEach((e) => (e.selected = true));

View File

@@ -1,7 +1,6 @@
import type { PayloadAction } from '@reduxjs/toolkit';
import { createSlice } from '@reduxjs/toolkit';
import type { PersistConfig, RootState } from 'app/store/store';
import { deepClone } from 'common/util/deepClone';
import { workflowLoaded } from 'features/nodes/store/actions';
import { isAnyNodeOrEdgeMutation, nodeEditorReset, nodesChanged, nodesDeleted } from 'features/nodes/store/nodesSlice';
import type {
@@ -12,7 +11,7 @@ import type {
import type { FieldIdentifier } from 'features/nodes/types/field';
import { isInvocationNode } from 'features/nodes/types/invocation';
import type { WorkflowCategory, WorkflowV3 } from 'features/nodes/types/workflow';
import { isEqual, omit, uniqBy } from 'lodash-es';
import { cloneDeep, isEqual, omit, uniqBy } from 'lodash-es';
const blankWorkflow: Omit<WorkflowV3, 'nodes' | 'edges'> = {
name: '',
@@ -132,8 +131,8 @@ export const workflowSlice = createSlice({
});
return {
...deepClone(initialWorkflowState),
...deepClone(workflowExtra),
...cloneDeep(initialWorkflowState),
...cloneDeep(workflowExtra),
originalExposedFieldValues,
mode: state.mode,
};
@@ -145,7 +144,7 @@ export const workflowSlice = createSlice({
});
});
builder.addCase(nodeEditorReset, () => deepClone(initialWorkflowState));
builder.addCase(nodeEditorReset, () => cloneDeep(initialWorkflowState));
builder.addCase(nodesChanged, (state, action) => {
// Not all changes to nodes should result in the workflow being marked touched

View File

@@ -48,7 +48,7 @@ export const addIPAdapterToLinearGraph = async (
if (!ipAdapter.model) {
return;
}
const { id, weight, model, clipVisionModel, beginStepPct, endStepPct, controlImage } = ipAdapter;
const { id, weight, model, beginStepPct, endStepPct, controlImage } = ipAdapter;
assert(controlImage, 'IP Adapter image is required');
@@ -58,7 +58,6 @@ export const addIPAdapterToLinearGraph = async (
is_intermediate: true,
weight: weight,
ip_adapter_model: model,
clip_vision_model: clipVisionModel,
begin_step_percent: beginStepPct,
end_step_percent: endStepPct,
image: {
@@ -84,7 +83,7 @@ export const addIPAdapterToLinearGraph = async (
};
const buildIPAdapterMetadata = (ipAdapter: IPAdapterConfig): S['IPAdapterMetadataField'] => {
const { controlImage, beginStepPct, endStepPct, model, clipVisionModel, weight } = ipAdapter;
const { controlImage, beginStepPct, endStepPct, model, weight } = ipAdapter;
assert(model, 'IP Adapter model is required');
@@ -100,7 +99,6 @@ const buildIPAdapterMetadata = (ipAdapter: IPAdapterConfig): S['IPAdapterMetadat
return {
ip_adapter_model: model,
clip_vision_model: clipVisionModel,
weight,
begin_step_percent: beginStepPct,
end_step_percent: endStepPct,

View File

@@ -65,11 +65,6 @@ export const buildCanvasOutpaintGraph = async (
infillTileSize,
infillPatchmatchDownscaleSize,
infillMethod,
// infillMosaicTileWidth,
// infillMosaicTileHeight,
// infillMosaicMinColor,
// infillMosaicMaxColor,
infillColorValue,
clipSkip,
seamlessXAxis,
seamlessYAxis,
@@ -361,28 +356,6 @@ export const buildCanvasOutpaintGraph = async (
};
}
// TODO: add mosaic back
// if (infillMethod === 'mosaic') {
// graph.nodes[INPAINT_INFILL] = {
// type: 'infill_mosaic',
// id: INPAINT_INFILL,
// is_intermediate,
// tile_width: infillMosaicTileWidth,
// tile_height: infillMosaicTileHeight,
// min_color: infillMosaicMinColor,
// max_color: infillMosaicMaxColor,
// };
// }
if (infillMethod === 'color') {
graph.nodes[INPAINT_INFILL] = {
type: 'infill_rgba',
id: INPAINT_INFILL,
color: infillColorValue,
is_intermediate,
};
}
// Handle Scale Before Processing
if (isUsingScaledDimensions) {
const scaledWidth: number = scaledBoundingBoxDimensions.width;

View File

@@ -66,11 +66,6 @@ export const buildCanvasSDXLOutpaintGraph = async (
infillTileSize,
infillPatchmatchDownscaleSize,
infillMethod,
// infillMosaicTileWidth,
// infillMosaicTileHeight,
// infillMosaicMinColor,
// infillMosaicMaxColor,
infillColorValue,
seamlessXAxis,
seamlessYAxis,
canvasCoherenceMode,
@@ -370,28 +365,6 @@ export const buildCanvasSDXLOutpaintGraph = async (
};
}
// TODO: add mosaic back
// if (infillMethod === 'mosaic') {
// graph.nodes[INPAINT_INFILL] = {
// type: 'infill_mosaic',
// id: INPAINT_INFILL,
// is_intermediate,
// tile_width: infillMosaicTileWidth,
// tile_height: infillMosaicTileHeight,
// min_color: infillMosaicMinColor,
// max_color: infillMosaicMaxColor,
// };
// }
if (infillMethod === 'color') {
graph.nodes[INPAINT_INFILL] = {
type: 'infill_rgba',
id: INPAINT_INFILL,
is_intermediate,
color: infillColorValue,
};
}
// Handle Scale Before Processing
if (isUsingScaledDimensions) {
const scaledWidth: number = scaledBoundingBoxDimensions.width;

View File

@@ -1,9 +1,8 @@
import { deepClone } from 'common/util/deepClone';
import { satisfies } from 'compare-versions';
import { NodeUpdateError } from 'features/nodes/types/error';
import type { InvocationNode, InvocationTemplate } from 'features/nodes/types/invocation';
import { zParsedSemver } from 'features/nodes/types/semver';
import { defaultsDeep, keys, pick } from 'lodash-es';
import { cloneDeep, defaultsDeep, keys, pick } from 'lodash-es';
import { buildInvocationNode } from './buildInvocationNode';
@@ -51,7 +50,7 @@ export const updateNode = (node: InvocationNode, template: InvocationTemplate):
// The updateability of a node, via semver comparison, relies on this kind of recursive merge
// being valid. We rely on the template's major version to be majorly incremented if this kind of
// merge would result in an invalid node.
const clone = deepClone(node);
const clone = cloneDeep(node);
clone.data.version = template.version;
defaultsDeep(clone, defaults); // mutates!
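
The comment above notes that node updateability depends on a recursive defaults merge being safe. A sketch of that clone-then-`defaultsDeep` step with simplified types; the real node and template shapes are not reproduced here:

```ts
import { cloneDeep, defaultsDeep } from 'lodash-es';

type NodeLike = { data: { version: string; inputs: Record<string, unknown> } };

// Clone the stale node, stamp the template's version, then fill in any
// missing fields from freshly built defaults without overwriting values
// the user has already set.
const updateWithDefaults = (node: NodeLike, defaults: NodeLike, templateVersion: string): NodeLike => {
  const clone = cloneDeep(node);
  clone.data.version = templateVersion;
  defaultsDeep(clone, defaults); // mutates clone; only missing keys are filled, recursively
  return clone;
};
```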

View File

@@ -1,12 +1,11 @@
import { logger } from 'app/logging/logger';
import { deepClone } from 'common/util/deepClone';
import { parseify } from 'common/util/serialize';
import type { NodesState, WorkflowsState } from 'features/nodes/store/types';
import { isInvocationNode, isNotesNode } from 'features/nodes/types/invocation';
import type { WorkflowV3 } from 'features/nodes/types/workflow';
import { zWorkflowV3 } from 'features/nodes/types/workflow';
import i18n from 'i18n';
import { pick } from 'lodash-es';
import { cloneDeep, pick } from 'lodash-es';
import { fromZodError } from 'zod-validation-error';
export type BuildWorkflowArg = {
@@ -31,7 +30,7 @@ const workflowKeys = [
type BuildWorkflowFunction = (arg: BuildWorkflowArg) => WorkflowV3;
export const buildWorkflowFast: BuildWorkflowFunction = ({ nodes, edges, workflow }: BuildWorkflowArg): WorkflowV3 => {
const clonedWorkflow = pick(deepClone(workflow), workflowKeys);
const clonedWorkflow = pick(cloneDeep(workflow), workflowKeys);
const newWorkflow: WorkflowV3 = {
...clonedWorkflow,
@@ -44,14 +43,14 @@ export const buildWorkflowFast: BuildWorkflowFunction = ({ nodes, edges, workflo
newWorkflow.nodes.push({
id: node.id,
type: node.type,
data: deepClone(node.data),
data: cloneDeep(node.data),
position: { ...node.position },
});
} else if (isNotesNode(node) && node.type) {
newWorkflow.nodes.push({
id: node.id,
type: node.type,
data: deepClone(node.data),
data: cloneDeep(node.data),
position: { ...node.position },
});
}

View File

@@ -1,5 +1,4 @@
import { $store } from 'app/store/nanostores/store';
import { deepClone } from 'common/util/deepClone';
import { WorkflowMigrationError, WorkflowVersionError } from 'features/nodes/types/error';
import type { FieldType } from 'features/nodes/types/field';
import type { InvocationNodeData } from 'features/nodes/types/invocation';
@@ -12,7 +11,7 @@ import { zWorkflowV2 } from 'features/nodes/types/v2/workflow';
import type { WorkflowV3 } from 'features/nodes/types/workflow';
import { zWorkflowV3 } from 'features/nodes/types/workflow';
import { t } from 'i18next';
import { forEach } from 'lodash-es';
import { cloneDeep, forEach } from 'lodash-es';
import { z } from 'zod';
/**
@@ -90,7 +89,7 @@ export const parseAndMigrateWorkflow = (data: unknown): WorkflowV3 => {
throw new WorkflowVersionError(t('nodes.unableToGetWorkflowVersion'));
}
let workflow = deepClone(data) as WorkflowV1 | WorkflowV2 | WorkflowV3;
let workflow = cloneDeep(data) as WorkflowV1 | WorkflowV2 | WorkflowV3;
if (workflow.meta.version === '1.0.0') {
const v1 = zWorkflowV1.parse(workflow);

View File

@@ -1,46 +0,0 @@
import { Box, Flex, FormControl, FormLabel } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIColorPicker from 'common/components/IAIColorPicker';
import { selectGenerationSlice, setInfillColorValue } from 'features/parameters/store/generationSlice';
import { memo, useCallback, useMemo } from 'react';
import type { RgbaColor } from 'react-colorful';
import { useTranslation } from 'react-i18next';
const ParamInfillColorOptions = () => {
const dispatch = useAppDispatch();
const selector = useMemo(
() =>
createSelector(selectGenerationSlice, (generation) => ({
infillColor: generation.infillColorValue,
})),
[]
);
const { infillColor } = useAppSelector(selector);
const infillMethod = useAppSelector((s) => s.generation.infillMethod);
const { t } = useTranslation();
const handleInfillColor = useCallback(
(v: RgbaColor) => {
dispatch(setInfillColorValue(v));
},
[dispatch]
);
return (
<Flex flexDir="column" gap={4}>
<FormControl isDisabled={infillMethod !== 'color'}>
<FormLabel>{t('parameters.infillColorValue')}</FormLabel>
<Box w="full" pt={2} pb={2}>
<IAIColorPicker color={infillColor} onChange={handleInfillColor} />
</Box>
</FormControl>
</Flex>
);
};
export default memo(ParamInfillColorOptions);

View File

@@ -1,127 +0,0 @@
import { Box, CompositeNumberInput, CompositeSlider, Flex, FormControl, FormLabel } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIColorPicker from 'common/components/IAIColorPicker';
import {
selectGenerationSlice,
setInfillMosaicMaxColor,
setInfillMosaicMinColor,
setInfillMosaicTileHeight,
setInfillMosaicTileWidth,
} from 'features/parameters/store/generationSlice';
import { memo, useCallback, useMemo } from 'react';
import type { RgbaColor } from 'react-colorful';
import { useTranslation } from 'react-i18next';
const ParamInfillMosaicTileSize = () => {
const dispatch = useAppDispatch();
const selector = useMemo(
() =>
createSelector(selectGenerationSlice, (generation) => ({
infillMosaicTileWidth: generation.infillMosaicTileWidth,
infillMosaicTileHeight: generation.infillMosaicTileHeight,
infillMosaicMinColor: generation.infillMosaicMinColor,
infillMosaicMaxColor: generation.infillMosaicMaxColor,
})),
[]
);
const { infillMosaicTileWidth, infillMosaicTileHeight, infillMosaicMinColor, infillMosaicMaxColor } =
useAppSelector(selector);
const infillMethod = useAppSelector((s) => s.generation.infillMethod);
const { t } = useTranslation();
const handleInfillMosaicTileWidthChange = useCallback(
(v: number) => {
dispatch(setInfillMosaicTileWidth(v));
},
[dispatch]
);
const handleInfillMosaicTileHeightChange = useCallback(
(v: number) => {
dispatch(setInfillMosaicTileHeight(v));
},
[dispatch]
);
const handleInfillMosaicMinColor = useCallback(
(v: RgbaColor) => {
dispatch(setInfillMosaicMinColor(v));
},
[dispatch]
);
const handleInfillMosaicMaxColor = useCallback(
(v: RgbaColor) => {
dispatch(setInfillMosaicMaxColor(v));
},
[dispatch]
);
return (
<Flex flexDir="column" gap={4}>
<FormControl isDisabled={infillMethod !== 'mosaic'}>
<FormLabel>{t('parameters.infillMosaicTileWidth')}</FormLabel>
<CompositeSlider
min={8}
max={256}
value={infillMosaicTileWidth}
defaultValue={64}
onChange={handleInfillMosaicTileWidthChange}
step={8}
fineStep={8}
marks
/>
<CompositeNumberInput
min={8}
max={1024}
value={infillMosaicTileWidth}
defaultValue={64}
onChange={handleInfillMosaicTileWidthChange}
step={8}
fineStep={8}
/>
</FormControl>
<FormControl isDisabled={infillMethod !== 'mosaic'}>
<FormLabel>{t('parameters.infillMosaicTileHeight')}</FormLabel>
<CompositeSlider
min={8}
max={256}
value={infillMosaicTileHeight}
defaultValue={64}
onChange={handleInfillMosaicTileHeightChange}
step={8}
fineStep={8}
marks
/>
<CompositeNumberInput
min={8}
max={1024}
value={infillMosaicTileHeight}
defaultValue={64}
onChange={handleInfillMosaicTileHeightChange}
step={8}
fineStep={8}
/>
</FormControl>
<FormControl isDisabled={infillMethod !== 'mosaic'}>
<FormLabel>{t('parameters.infillMosaicMinColor')}</FormLabel>
<Box w="full" pt={2} pb={2}>
<IAIColorPicker color={infillMosaicMinColor} onChange={handleInfillMosaicMinColor} />
</Box>
</FormControl>
<FormControl isDisabled={infillMethod !== 'mosaic'}>
<FormLabel>{t('parameters.infillMosaicMaxColor')}</FormLabel>
<Box w="full" pt={2} pb={2}>
<IAIColorPicker color={infillMosaicMaxColor} onChange={handleInfillMosaicMaxColor} />
</Box>
</FormControl>
</Flex>
);
};
export default memo(ParamInfillMosaicTileSize);

View File

@@ -1,8 +1,6 @@
import { useAppSelector } from 'app/store/storeHooks';
import { memo } from 'react';
import ParamInfillColorOptions from './ParamInfillColorOptions';
import ParamInfillMosaicOptions from './ParamInfillMosaicOptions';
import ParamInfillPatchmatchDownscaleSize from './ParamInfillPatchmatchDownscaleSize';
import ParamInfillTilesize from './ParamInfillTilesize';
@@ -16,14 +14,6 @@ const ParamInfillOptions = () => {
return <ParamInfillPatchmatchDownscaleSize />;
}
if (infillMethod === 'mosaic') {
return <ParamInfillMosaicOptions />;
}
if (infillMethod === 'color') {
return <ParamInfillColorOptions />;
}
return null;
};

View File

@@ -19,7 +19,6 @@ import type {
import { getIsSizeOptimal, getOptimalDimension } from 'features/parameters/util/optimalDimension';
import { configChanged } from 'features/system/store/configSlice';
import { clamp } from 'lodash-es';
import type { RgbaColor } from 'react-colorful';
import type { ImageDTO } from 'services/api/types';
import type { GenerationState } from './types';
@@ -44,6 +43,8 @@ const initialGenerationState: GenerationState = {
shouldFitToWidthHeight: true,
shouldRandomizeSeed: true,
steps: 50,
infillTileSize: 32,
infillPatchmatchDownscaleSize: 1,
width: 512,
model: null,
vae: null,
@@ -54,13 +55,6 @@ const initialGenerationState: GenerationState = {
shouldUseCpuNoise: true,
shouldShowAdvancedOptions: false,
aspectRatio: { ...initialAspectRatioState },
infillTileSize: 32,
infillPatchmatchDownscaleSize: 1,
infillMosaicTileWidth: 64,
infillMosaicTileHeight: 64,
infillMosaicMinColor: { r: 0, g: 0, b: 0, a: 1 },
infillMosaicMaxColor: { r: 255, g: 255, b: 255, a: 1 },
infillColorValue: { r: 0, g: 0, b: 0, a: 1 },
};
export const generationSlice = createSlice({
@@ -122,6 +116,15 @@ export const generationSlice = createSlice({
setCanvasCoherenceMinDenoise: (state, action: PayloadAction<number>) => {
state.canvasCoherenceMinDenoise = action.payload;
},
setInfillMethod: (state, action: PayloadAction<string>) => {
state.infillMethod = action.payload;
},
setInfillTileSize: (state, action: PayloadAction<number>) => {
state.infillTileSize = action.payload;
},
setInfillPatchmatchDownscaleSize: (state, action: PayloadAction<number>) => {
state.infillPatchmatchDownscaleSize = action.payload;
},
initialImageChanged: (state, action: PayloadAction<ImageDTO>) => {
const { image_name, width, height } = action.payload;
state.initialImage = { imageName: image_name, width, height };
@@ -203,30 +206,6 @@ export const generationSlice = createSlice({
aspectRatioChanged: (state, action: PayloadAction<AspectRatioState>) => {
state.aspectRatio = action.payload;
},
setInfillMethod: (state, action: PayloadAction<string>) => {
state.infillMethod = action.payload;
},
setInfillTileSize: (state, action: PayloadAction<number>) => {
state.infillTileSize = action.payload;
},
setInfillPatchmatchDownscaleSize: (state, action: PayloadAction<number>) => {
state.infillPatchmatchDownscaleSize = action.payload;
},
setInfillMosaicTileWidth: (state, action: PayloadAction<number>) => {
state.infillMosaicTileWidth = action.payload;
},
setInfillMosaicTileHeight: (state, action: PayloadAction<number>) => {
state.infillMosaicTileHeight = action.payload;
},
setInfillMosaicMinColor: (state, action: PayloadAction<RgbaColor>) => {
state.infillMosaicMinColor = action.payload;
},
setInfillMosaicMaxColor: (state, action: PayloadAction<RgbaColor>) => {
state.infillMosaicMaxColor = action.payload;
},
setInfillColorValue: (state, action: PayloadAction<RgbaColor>) => {
state.infillColorValue = action.payload;
},
},
extraReducers: (builder) => {
builder.addCase(configChanged, (state, action) => {
@@ -270,6 +249,8 @@ export const {
setShouldFitToWidthHeight,
setShouldRandomizeSeed,
setSteps,
setInfillTileSize,
setInfillPatchmatchDownscaleSize,
initialImageChanged,
modelChanged,
vaeSelected,
@@ -283,13 +264,6 @@ export const {
heightChanged,
widthRecalled,
heightRecalled,
setInfillTileSize,
setInfillPatchmatchDownscaleSize,
setInfillMosaicTileWidth,
setInfillMosaicTileHeight,
setInfillMosaicMinColor,
setInfillMosaicMaxColor,
setInfillColorValue,
} = generationSlice.actions;
export const { selectOptimalDimension } = generationSlice.selectors;
@@ -306,7 +280,6 @@ const migrateGenerationState = (state: any): GenerationState => {
// The signature of the model has changed, so we need to reset it
state._version = 2;
state.model = null;
state.canvasCoherenceMode = initialGenerationState.canvasCoherenceMode;
}
return state;
};
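
`migrateGenerationState` above is the slice's `_version`-gated persistence migration: when the stored shape is out of date, the version is bumped and any field whose shape changed is reset. A minimal sketch of the pattern; the reset field is illustrative, and the `any` parameter follows the signature shown above:

```ts
const migrateGenerationStateSketch = (state: any): any => {
  if (state._version === 1) {
    state._version = 2;
    state.model = null; // model identifier shape changed; safest to reset rather than convert
  }
  return state;
};
```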

Some files were not shown because too many files have changed in this diff.