Compare commits

..

1 Commit

Author SHA1 Message Date
psychedelicious
75624e9158 Update invocation_cache_memory.py
Remove extra arg
2023-09-26 14:30:20 +10:00
80 changed files with 1553 additions and 1963 deletions

View File

@@ -1,11 +1,13 @@
---
title: Control Adapters
title: ControlNet
---
# :material-loupe: Control Adapters
# :material-loupe: ControlNet
## ControlNet
ControlNet
ControlNet is a powerful set of features developed by the open-source
community (notably, Stanford researcher
[**@lllyasviel**](https://github.com/lllyasviel)) that allows you to
@@ -18,7 +20,7 @@ towards generating images that better fit your desired style or
outcome.
#### How it works
### How it works
ControlNet works by analyzing an input image, pre-processing that
image to identify relevant information that can be interpreted by each
@@ -28,7 +30,7 @@ composition, or other aspects of the image to better achieve a
specific result.
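The pre-processing step described above typically reduces the input image to a single structural hint, such as an edge map, depth map, or pose skeleton. As a minimal sketch of that idea (assuming OpenCV; the helper name and thresholds are illustrative, not InvokeAI's internals):

```python
# Sketch: deriving a Canny edge control image, the kind of pre-processed
# hint a ControlNet consumes. Thresholds are illustrative defaults.
import cv2
import numpy as np
from PIL import Image

def canny_control_image(path: str, low: int = 100, high: int = 200) -> Image.Image:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # 8-bit single channel
    edges = cv2.Canny(gray, low, high)             # binary edge map
    rgb = np.stack([edges] * 3, axis=-1)           # replicate to 3 channels
    return Image.fromarray(rgb)                    # usable as a control image
```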
#### Models
### Models
InvokeAI provides access to a series of ControlNet models that provide
different effects or styles in your generated images. Currently
@@ -94,8 +96,6 @@ A model that generates normal maps from input images, allowing for more realisti
**Image Segmentation**:
A model that divides input images into segments or regions, each of which corresponds to a different object or part of the image. (More details coming soon)
**QR Code Monster**:
A model that helps generate creative QR codes that still scan. Can also be used to create images with text, logos or shapes within them.
**Openpose**:
The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image.
@@ -120,7 +120,7 @@ With Pix2Pix, you can input an image into the controlnet, and then "instruct" th
Each of these models can be adjusted and combined with other ControlNet models to achieve different results, giving you even more control over your image generation process.
### Using ControlNet
## Using ControlNet
To use ControlNet, you can simply select the desired model and adjust both the ControlNet and Pre-processor settings to achieve the desired result. You can also use multiple ControlNet models at the same time, allowing you to achieve even more complex effects or styles in your generated images.
@@ -132,31 +132,3 @@ Weight - Strength of the Controlnet model applied to the generation for the sect
Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls what steps during the generation process have the ControlNet applied.
Additionally, each ControlNet section can be expanded in order to manipulate settings for the image pre-processor that adjusts your uploaded image before it is used when you Invoke.
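As a small sketch of how the 0-1 Start/End fractions map onto concrete denoising steps (the rounding convention here is an assumption, not necessarily InvokeAI's):

```python
# Sketch: mapping Start/End fractions (0 = first step, 1 = last step)
# onto the steps where a control adapter is applied.
def applied_steps(total_steps: int, start: float, end: float) -> range:
    first = round(start * (total_steps - 1))
    last = round(end * (total_steps - 1))
    return range(first, last + 1)

# e.g. 30 steps with start=0.0, end=0.5 guides steps 0..14
assert list(applied_steps(30, 0.0, 0.5)) == list(range(15))
```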
## IP-Adapter
[IP-Adapter](https://ip-adapter.github.io) is a tool that adds image prompt capabilities to text-to-image diffusion models. IP-Adapter works by analyzing the given image prompt to extract features, then passing those features to the UNet along with any other conditioning provided.
![IP-Adapter + T2I](https://github.com/tencent-ailab/IP-Adapter/raw/main/assets/demo/ip_adpter_plus_multi.jpg)
![IP-Adapter + IMG2IMG](https://github.com/tencent-ailab/IP-Adapter/blob/main/assets/demo/image-to-image.jpg)
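A minimal sketch of the flow just described, using the `transformers` CLIP classes that appear later in this diff; the checkpoint name is an assumption, and the final injection step is summarized in comments only:

```python
# Sketch: encode the image prompt with a CLIP vision tower; the resulting
# embeddings are what IP-Adapter passes on toward the UNet's cross-attention.
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

encoder = CLIPVisionModelWithProjection.from_pretrained(
    "openai/clip-vit-large-patch14"  # assumption: any compatible image encoder
)
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")

image = Image.open("prompt.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    image_embeds = encoder(**inputs).image_embeds  # shape: (1, projection_dim)

# A projection/resampler model then turns these embeddings into extra context
# tokens that decorated attention processors concatenate with the text tokens
# inside each UNet cross-attention layer.
```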
#### Installation
There are several ways to install IP-Adapter models with an existing InvokeAI installation:
1. Through the command line interface launched from the invoke.sh / invoke.bat scripts, using option [5] to download models.
2. Through the Model Manager UI with models from the *Tools* section of [www.models.invoke.ai](www.models.invoke.ai). To do this, copy the repo ID from the desired model page and paste it in the Add Model field of the model manager. **Note:** Both the IP-Adapter and the Image Encoder must be installed for IP-Adapter to work. For example, the [SD 1.5 IP-Adapter](https://models.invoke.ai/InvokeAI/ip_adapter_plus_sd15) and [SD1.5 Image Encoder](https://models.invoke.ai/InvokeAI/ip_adapter_sd_image_encoder) must be installed to use IP-Adapter with SD1.5 based models.
3. **Advanced -- Not recommended:** Manually download the IP-Adapter and Image Encoder files. Image Encoder folders should be placed in the `models\any\clip_vision` folder. IP-Adapter model folders should be placed in the `ip-adapter` folder under the relevant base model folder of the Invoke root directory. For example, for the SDXL IP-Adapter, files should be added to the `model/sdxl/ip_adapter/` folder.
#### Using IP-Adapter
IP-Adapter can be used by navigating to the *Control Adapters* options and enabling IP-Adapter.
IP-Adapter requires an image to be used as the Image Prompt. It can also be used in conjunction with text prompts, Image-to-Image, Inpainting, Outpainting, ControlNets and LoRAs.
Each IP-Adapter has two settings:
* Weight - Strength of the IP-Adapter model applied to the generation for the section, defined by start/end
* Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls what steps during the generation process have the IP-Adapter applied.

View File

@@ -4,12 +4,12 @@ The workflow editor is a blank canvas allowing for the use of individual functio
If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](../help/diffusion.md) Understanding how diffusion works will enable you to more easily use the Workflow Editor and build workflows to suit your needs.
## Features
## UI Features
### Linear View
The Workflow Editor allows you to create a UI for your workflow, to make it easier to iterate on your generations.
To add an input to the Linear UI, right click on the input label and select "Add to Linear View".
To add an input to the Linear UI, right click on the input and select "Add to Linear View".
The Linear UI View will also be part of the saved workflow, allowing you to share workflows and enabling others to use them, regardless of complexity.
@@ -25,10 +25,6 @@ Any node or input field can be renamed in the workflow editor. If the input fiel
* Backspace/Delete to delete a node
* Shift+Click to drag and select multiple nodes
### Node Caching
Nodes have a "Use Cache" option in their footer. This allows for performance improvements by using the previously cached values during the workflow processing.
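As a minimal sketch of the kind of cache this toggle consults (the class and key derivation are assumptions, not InvokeAI's actual API):

```python
# Sketch: a thread-safe in-memory LRU cache keyed by a hash of an
# invocation's type and inputs; cached outputs are returned on a hit.
from collections import OrderedDict
from threading import Lock

class MemoryInvocationCache:
    def __init__(self, max_size: int = 512):
        self._cache: "OrderedDict[int, object]" = OrderedDict()
        self._max_size = max_size
        self._lock = Lock()

    def get(self, key: int):
        with self._lock:
            if key not in self._cache:
                return None  # cache miss: the node runs normally
            self._cache.move_to_end(key)  # mark as most recently used
            return self._cache[key]

    def save(self, key: int, output: object) -> None:
        with self._lock:
            self._cache[key] = output
            self._cache.move_to_end(key)
            while len(self._cache) > self._max_size:
                self._cache.popitem(last=False)  # evict least recently used
```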
## Important Concepts

View File

@@ -332,7 +332,6 @@ class InvokeAiInstance:
Configure the InvokeAI runtime directory
"""
auto_install = False
# set sys.argv to a consistent state
new_argv = [sys.argv[0]]
for i in range(1, len(sys.argv)):
@@ -341,17 +340,13 @@ class InvokeAiInstance:
new_argv.append(el)
new_argv.append(sys.argv[i + 1])
elif el in ["-y", "--yes", "--yes-to-all"]:
auto_install = True
new_argv.append(el)
sys.argv = new_argv
import messages
import requests # to catch download exceptions
from messages import introduction
auto_install = auto_install or messages.user_wants_auto_configuration()
if auto_install:
sys.argv.append("--yes")
else:
messages.introduction()
introduction()
from invokeai.frontend.install.invokeai_configure import invokeai_configure

View File

@@ -7,7 +7,7 @@ import os
import platform
from pathlib import Path
from prompt_toolkit import HTML, prompt
from prompt_toolkit import prompt
from prompt_toolkit.completion import PathCompleter
from prompt_toolkit.validation import Validator
from rich import box, print
@@ -65,50 +65,17 @@ def confirm_install(dest: Path) -> bool:
if dest.exists():
print(f":exclamation: Directory {dest} already exists :exclamation:")
dest_confirmed = Confirm.ask(
":stop_sign: (re)install in this location?",
":stop_sign: Are you sure you want to (re)install in this location?",
default=False,
)
else:
print(f"InvokeAI will be installed in {dest}")
dest_confirmed = Confirm.ask("Use this location?", default=True)
dest_confirmed = not Confirm.ask("Would you like to pick a different location?", default=False)
console.line()
return dest_confirmed
def user_wants_auto_configuration() -> bool:
"""Prompt the user to choose between manual and auto configuration."""
console.rule("InvokeAI Configuration Section")
console.print(
Panel(
Group(
"\n".join(
[
"Libraries are installed and InvokeAI will now set up its root directory and configuration. Choose between:",
"",
" * AUTOMATIC configuration: install reasonable defaults and a minimal set of starter models.",
" * MANUAL configuration: manually inspect and adjust configuration options and pick from a larger set of starter models.",
"",
"Later you can fine tune your configuration by selecting option [6] 'Change InvokeAI startup options' from the invoke.bat/invoke.sh launcher script.",
]
),
),
box=box.MINIMAL,
padding=(1, 1),
)
)
choice = (
prompt(
HTML("Choose <b>&lt;a&gt;</b>utomatic or <b>&lt;m&gt;</b>anual configuration [a/m] (a): "),
validator=Validator.from_callable(
lambda n: n == "" or n.startswith(("a", "A", "m", "M")), error_message="Please select 'a' or 'm'"
),
)
or "a"
)
return choice.lower().startswith("a")
def dest_path(dest=None) -> Path:
"""
Prompt the user for the destination path and create the path

View File

@@ -91,9 +91,6 @@ class FieldDescriptions:
board = "The board to save the image to"
image = "The image to process"
tile_size = "Tile size"
inclusive_low = "The inclusive low value"
exclusive_high = "The exclusive high value"
decimal_places = "The number of decimal places to round to"
class Input(str, Enum):

View File

@@ -65,27 +65,13 @@ class DivideInvocation(BaseInvocation):
class RandomIntInvocation(BaseInvocation):
"""Outputs a single random integer."""
low: int = InputField(default=0, description=FieldDescriptions.inclusive_low)
high: int = InputField(default=np.iinfo(np.int32).max, description=FieldDescriptions.exclusive_high)
low: int = InputField(default=0, description="The inclusive low value")
high: int = InputField(default=np.iinfo(np.int32).max, description="The exclusive high value")
def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(value=np.random.randint(self.low, self.high))
@invocation("rand_float", title="Random Float", tags=["math", "float", "random"], category="math", version="1.0.0")
class RandomFloatInvocation(BaseInvocation):
"""Outputs a single random float"""
low: float = InputField(default=0.0, description=FieldDescriptions.inclusive_low)
high: float = InputField(default=1.0, description=FieldDescriptions.exclusive_high)
decimals: int = InputField(default=2, description=FieldDescriptions.decimal_places)
def invoke(self, context: InvocationContext) -> FloatOutput:
random_float = np.random.uniform(self.low, self.high)
rounded_float = round(random_float, self.decimals)
return FloatOutput(value=rounded_float)
@invocation(
"float_to_int",
title="Float To Integer",

View File

@@ -241,7 +241,7 @@ class InvokeAIAppConfig(InvokeAISettings):
version : bool = Field(default=False, description="Show InvokeAI version and exit", category="Other")
# CACHE
ram : Union[float, Literal["auto"]] = Field(default=7.5, gt=0, description="Maximum memory amount used by model cache for rapid switching (floating point number or 'auto')", category="Model Cache", )
ram : Union[float, Literal["auto"]] = Field(default=6.0, gt=0, description="Maximum memory amount used by model cache for rapid switching (floating point number or 'auto')", category="Model Cache", )
vram : Union[float, Literal["auto"]] = Field(default=0.25, ge=0, description="Amount of VRAM reserved for model storage (floating point number or 'auto')", category="Model Cache", )
lazy_offload : bool = Field(default=True, description="Keep models in VRAM until their space is needed", category="Model Cache", )
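A brief sketch of reading these cache fields at runtime, via the `get_config()` accessor used elsewhere in this diff (the import path is an assumption):

```python
# Sketch: inspecting the model-cache settings defined above.
from invokeai.app.services.config import InvokeAIAppConfig  # assumed import path

config = InvokeAIAppConfig.get_config()
print(config.ram)           # RAM cache budget in GB for rapid model switching
print(config.vram)          # VRAM reserved for model storage
print(config.lazy_offload)  # keep models in VRAM until their space is needed
```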

View File

@@ -1,6 +1,7 @@
from collections import OrderedDict
from dataclasses import dataclass, field
from threading import Lock
from time import time
from typing import Optional, Union
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput

View File

@@ -70,6 +70,7 @@ def get_literal_fields(field) -> list[Any]:
config = InvokeAIAppConfig.get_config()
Model_dir = "models"
Default_config_file = config.model_conf_path
SD_Configs = config.legacy_conf_path
@@ -457,7 +458,7 @@ Use cursor arrows to make a checkbox selection, and space to toggle.
)
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Model RAM cache size (GB). Make this at least large enough to hold a single full model (2GB for SD-1, 6GB for SDXL).",
name="Model RAM cache size (GB). Make this at least large enough to hold a single full model.",
begin_entry_at=0,
editable=False,
color="CONTROL",
@@ -650,19 +651,8 @@ def edit_opts(program_opts: Namespace, invokeai_opts: Namespace) -> argparse.Nam
return editApp.new_opts()
def default_ramcache() -> float:
"""Run a heuristic for the default RAM cache based on installed RAM."""
# Note that on my 64 GB machine, psutil.virtual_memory().total gives 62 GB,
# So we adjust everything down a bit.
return (
15.0 if MAX_RAM >= 60 else 7.5 if MAX_RAM >= 30 else 4 if MAX_RAM >= 14 else 2.1
) # 2.1 is just large enough for sd 1.5 ;-)
def default_startup_options(init_file: Path) -> Namespace:
opts = InvokeAIAppConfig.get_config()
opts.ram = default_ramcache()
return opts

View File

@@ -9,8 +9,6 @@ from diffusers.models import UNet2DConditionModel
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection
from invokeai.backend.model_management.models.base import calc_model_size_by_data
from .attention_processor import AttnProcessor2_0, IPAttnProcessor2_0
from .resampler import Resampler
@@ -89,20 +87,6 @@ class IPAdapter:
if self._attn_processors is not None:
torch.nn.ModuleList(self._attn_processors.values()).to(device=self.device, dtype=self.dtype)
def calc_size(self):
if self._state_dict is not None:
image_proj_size = sum(
[tensor.nelement() * tensor.element_size() for tensor in self._state_dict["image_proj"].values()]
)
ip_adapter_size = sum(
[tensor.nelement() * tensor.element_size() for tensor in self._state_dict["ip_adapter"].values()]
)
return image_proj_size + ip_adapter_size
else:
return calc_model_size_by_data(self._image_proj_model) + calc_model_size_by_data(
torch.nn.ModuleList(self._attn_processors.values())
)
def _init_image_proj_model(self, state_dict):
return ImageProjModel.from_state_dict(state_dict, self._num_tokens).to(self.device, dtype=self.dtype)
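The removed `calc_size` sums raw tensor storage; a standalone illustration of that byte math (shape and dtype are illustrative):

```python
# Each tensor contributes nelement() * element_size() bytes.
import torch

t = torch.zeros(1024, 1024, dtype=torch.float16)
print(t.nelement())                     # 1048576 elements
print(t.element_size())                 # 2 bytes per fp16 element
print(t.nelement() * t.element_size())  # 2097152 bytes = 2 MiB
```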

View File

@@ -13,7 +13,6 @@ from invokeai.backend.model_management.models.base import (
ModelConfigBase,
ModelType,
SubModelType,
calc_model_size_by_fs,
classproperty,
)
@@ -31,7 +30,7 @@ class IPAdapterModel(ModelBase):
assert model_type == ModelType.IPAdapter
super().__init__(model_path, base_model, model_type)
self.model_size = calc_model_size_by_fs(self.model_path)
self.model_size = os.path.getsize(self.model_path)
@classmethod
def detect_format(cls, path: str) -> str:
@@ -64,13 +63,10 @@ class IPAdapterModel(ModelBase):
if child_type is not None:
raise ValueError("There are no child models in an IP-Adapter model.")
model = build_ip_adapter(
return build_ip_adapter(
ip_adapter_ckpt_path=os.path.join(self.model_path, "ip_adapter.bin"), device="cpu", dtype=torch_dtype
)
self.model_size = model.calc_size()
return model
@classmethod
def convert_if_required(
cls,

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -12,7 +12,7 @@
margin: 0;
}
</style>
<script type="module" crossorigin src="./assets/index-375621ca.js"></script>
<script type="module" crossorigin src="./assets/index-f6c3f475.js"></script>
</head>
<body dir="ltr">

View File

@@ -13,15 +13,14 @@
"reset": "Reset",
"rotateClockwise": "Rotate Clockwise",
"rotateCounterClockwise": "Rotate Counter-Clockwise",
"showGalleryPanel": "Show Gallery Panel",
"showGallery": "Show Gallery",
"showOptionsPanel": "Show Side Panel",
"toggleAutoscroll": "Toggle autoscroll",
"toggleLogViewer": "Toggle Log Viewer",
"uploadImage": "Upload Image",
"useThisParameter": "Use this parameter",
"zoomIn": "Zoom In",
"zoomOut": "Zoom Out",
"loadMore": "Load More"
"zoomOut": "Zoom Out"
},
"boards": {
"addBoard": "Add Board",
@@ -58,7 +57,6 @@
"githubLabel": "Github",
"hotkeysLabel": "Hotkeys",
"imagePrompt": "Image Prompt",
"imageFailedToLoad": "Unable to Load Image",
"img2img": "Image To Image",
"langArabic": "العربية",
"langBrPortuguese": "Português do Brasil",
@@ -82,7 +80,6 @@
"load": "Load",
"loading": "Loading",
"loadingInvokeAI": "Loading Invoke AI",
"learnMore": "Learn More",
"modelManager": "Model Manager",
"nodeEditor": "Node Editor",
"nodes": "Workflow Editor",
@@ -113,7 +110,6 @@
"statusModelChanged": "Model Changed",
"statusModelConverted": "Model Converted",
"statusPreparing": "Preparing",
"statusProcessing": "Processing",
"statusProcessingCanceled": "Processing Canceled",
"statusProcessingComplete": "Processing Complete",
"statusRestoringFaces": "Restoring Faces",
@@ -137,8 +133,6 @@
"bgth": "bg_th",
"canny": "Canny",
"cannyDescription": "Canny edge detection",
"colorMap": "Color",
"colorMapDescription": "Generates a color map from the image",
"coarse": "Coarse",
"contentShuffle": "Content Shuffle",
"contentShuffleDescription": "Shuffles the content in an image",
@@ -162,7 +156,6 @@
"hideAdvanced": "Hide Advanced",
"highThreshold": "High Threshold",
"imageResolution": "Image Resolution",
"colorMapTileSize": "Tile Size",
"importImageFromCanvas": "Import Image From Canvas",
"importMaskFromCanvas": "Import Mask From Canvas",
"incompatibleBaseModel": "Incompatible base model:",
@@ -210,81 +203,6 @@
"incompatibleModel": "Incompatible base model:",
"noMatchingEmbedding": "No matching Embeddings"
},
"queue": {
"queue": "Queue",
"queueFront": "Add to Front of Queue",
"queueBack": "Add to Queue",
"queueCountPrediction": "Add {{predicted}} to Queue",
"queueMaxExceeded": "Max of {{max_queue_size}} exceeded, would skip {{skip}}",
"queuedCount": "{{pending}} Pending",
"queueTotal": "{{total}} Total",
"queueEmpty": "Queue Empty",
"enqueueing": "Queueing Batch",
"resume": "Resume",
"resumeTooltip": "Resume Processor",
"resumeSucceeded": "Processor Resumed",
"resumeFailed": "Problem Resuming Processor",
"pause": "Pause",
"pauseTooltip": "Pause Processor",
"pauseSucceeded": "Processor Paused",
"pauseFailed": "Problem Pausing Processor",
"cancel": "Cancel",
"cancelTooltip": "Cancel Current Item",
"cancelSucceeded": "Item Canceled",
"cancelFailed": "Problem Canceling Item",
"prune": "Prune",
"pruneTooltip": "Prune {{item_count}} Completed Items",
"pruneSucceeded": "Pruned {{item_count}} Completed Items from Queue",
"pruneFailed": "Problem Pruning Queue",
"clear": "Clear",
"clearTooltip": "Cancel and Clear All Items",
"clearSucceeded": "Queue Cleared",
"clearFailed": "Problem Clearing Queue",
"cancelBatch": "Cancel Batch",
"cancelItem": "Cancel Item",
"cancelBatchSucceeded": "Batch Canceled",
"cancelBatchFailed": "Problem Canceling Batch",
"clearQueueAlertDialog": "Clearing the queue immediately cancels any processing items and clears the queue entirely.",
"clearQueueAlertDialog2": "Are you sure you want to clear the queue?",
"current": "Current",
"next": "Next",
"status": "Status",
"total": "Total",
"pending": "Pending",
"in_progress": "In Progress",
"completed": "Completed",
"failed": "Failed",
"canceled": "Canceled",
"completedIn": "Completed in",
"batch": "Batch",
"item": "Item",
"session": "Session",
"batchValues": "Batch Values",
"notReady": "Unable to Queue",
"batchQueued": "Batch Queued",
"batchQueuedDesc": "Added {{item_count}} sessions to {{direction}} of queue",
"front": "front",
"back": "back",
"batchFailedToQueue": "Failed to Queue Batch",
"graphQueued": "Graph queued",
"graphFailedToQueue": "Failed to queue graph"
},
"invocationCache": {
"invocationCache": "Invocation Cache",
"cacheSize": "Cache Size",
"maxCacheSize": "Max Cache Size",
"hits": "Cache Hits",
"misses": "Cache Misses",
"clear": "Clear",
"clearSucceeded": "Invocation Cache Cleared",
"clearFailed": "Problem Clearing Invocation Cache",
"enable": "Enable",
"enableSucceeded": "Invocation Cache Enabled",
"enableFailed": "Problem Enabling Invocation Cache",
"disable": "Disable",
"disableSucceeded": "Invocation Cache Disabled",
"disableFailed": "Problem Disabling Invocation Cache"
},
"gallery": {
"allImagesLoaded": "All Images Loaded",
"assets": "Assets",
@@ -706,8 +624,6 @@
"addNodeToolTip": "Add Node (Shift+A, Space)",
"animatedEdges": "Animated Edges",
"animatedEdgesHelp": "Animate selected edges and edges connected to selected nodes",
"boardField": "Board",
"boardFieldDescription": "A gallery board",
"boolean": "Booleans",
"booleanCollection": "Boolean Collection",
"booleanCollectionDescription": "A collection of booleans.",
@@ -717,7 +633,6 @@
"cannotConnectInputToInput": "Cannot connect input to input",
"cannotConnectOutputToOutput": "Cannot connect output to output",
"cannotConnectToSelf": "Cannot connect to self",
"cannotDuplicateConnection": "Cannot create duplicate connections",
"clipField": "Clip",
"clipFieldDescription": "Tokenizer and text_encoder submodels.",
"collection": "Collection",
@@ -726,8 +641,7 @@
"collectionItemDescription": "TODO",
"colorCodeEdges": "Color-Code Edges",
"colorCodeEdgesHelp": "Color-code edges according to their connected fields",
"colorCollection": "A collection of colors.",
"colorCollectionDescription": "TODO",
"colorCollectionDescription": "A collection of colors.",
"colorField": "Color",
"colorFieldDescription": "A RGBA color.",
"colorPolymorphic": "Color Polymorphic",
@@ -774,8 +688,7 @@
"imageFieldDescription": "Images may be passed between nodes.",
"imagePolymorphic": "Image Polymorphic",
"imagePolymorphicDescription": "A collection of images.",
"inputField": "Input Field",
"inputFields": "Input Fields",
"inputFields": "Input Feilds",
"inputMayOnlyHaveOneConnection": "Input may only have one connection",
"inputNode": "Input Node",
"integer": "Integer",
@@ -793,7 +706,6 @@
"latentsPolymorphicDescription": "Latents may be passed between nodes.",
"loadingNodes": "Loading Nodes...",
"loadWorkflow": "Load Workflow",
"noWorkflow": "No Workflow",
"loRAModelField": "LoRA",
"loRAModelFieldDescription": "TODO",
"mainModelField": "Model",
@@ -815,15 +727,14 @@
"noImageFoundState": "No initial image found in state",
"noMatchingNodes": "No matching nodes",
"noNodeSelected": "No node selected",
"nodeOpacity": "Node Opacity",
"noOpacity": "Node Opacity",
"noOutputRecorded": "No outputs recorded",
"noOutputSchemaName": "No output schema name found in ref object",
"notes": "Notes",
"notesDescription": "Add notes about your workflow",
"oNNXModelField": "ONNX Model",
"oNNXModelFieldDescription": "ONNX model field.",
"outputField": "Output Field",
"outputFields": "Output Fields",
"outputFields": "Output Feilds",
"outputNode": "Output node",
"outputSchemaNotFound": "Output schema not found",
"pickOne": "Pick One",
@@ -872,7 +783,6 @@
"unknownNode": "Unknown Node",
"unknownTemplate": "Unknown Template",
"unkownInvocation": "Unknown Invocation type",
"updateNode": "Update Node",
"updateApp": "Update App",
"vaeField": "Vae",
"vaeFieldDescription": "Vae submodel.",
@@ -896,7 +806,7 @@
"zoomOutNodes": "Zoom Out"
},
"parameters": {
"aspectRatio": "Aspect Ratio",
"aspectRatio": "Ratio",
"boundingBoxHeader": "Bounding Box",
"boundingBoxHeight": "Bounding Box Height",
"boundingBoxWidth": "Bounding Box Width",
@@ -909,7 +819,6 @@
},
"cfgScale": "CFG Scale",
"clipSkip": "CLIP Skip",
"clipSkipWithLayerCount": "CLIP Skip {{layerCount}}",
"closeViewer": "Close Viewer",
"codeformerFidelity": "Fidelity",
"coherenceMode": "Mode",
@@ -948,7 +857,6 @@
"noInitialImageSelected": "No initial image selected",
"noModelForControlNet": "ControlNet {{index}} has no model selected.",
"noModelSelected": "No model selected",
"noPrompts": "No prompts generated",
"noNodesInGraph": "No nodes in graph",
"readyToInvoke": "Ready to Invoke",
"systemBusy": "System busy",
@@ -967,12 +875,7 @@
"perlinNoise": "Perlin Noise",
"positivePromptPlaceholder": "Positive Prompt",
"randomizeSeed": "Randomize Seed",
"manualSeed": "Manual Seed",
"randomSeed": "Random Seed",
"restoreFaces": "Restore Faces",
"iterations": "Iterations",
"iterationsWithCount_one": "{{count}} Iteration",
"iterationsWithCount_other": "{{count}} Iterations",
"scale": "Scale",
"scaleBeforeProcessing": "Scale Before Processing",
"scaledHeight": "Scaled H",
@@ -983,17 +886,13 @@
"seamlessTiling": "Seamless Tiling",
"seamlessXAxis": "X Axis",
"seamlessYAxis": "Y Axis",
"seamlessX": "Seamless X",
"seamlessY": "Seamless Y",
"seamlessX&Y": "Seamless X & Y",
"seamLowThreshold": "Low",
"seed": "Seed",
"seedWeights": "Seed Weights",
"imageActions": "Image Actions",
"sendTo": "Send to",
"sendToImg2Img": "Send to Image to Image",
"sendToUnifiedCanvas": "Send To Unified Canvas",
"showOptionsPanel": "Show Side Panel (O or T)",
"showOptionsPanel": "Show Options Panel",
"showPreview": "Show Preview",
"shuffle": "Shuffle Seed",
"steps": "Steps",
@@ -1002,13 +901,11 @@
"tileSize": "Tile Size",
"toggleLoopback": "Toggle Loopback",
"type": "Type",
"upscale": "Upscale (Shift + U)",
"upscale": "Upscale",
"upscaleImage": "Upscale Image",
"upscaling": "Upscaling",
"useAll": "Use All",
"useCpuNoise": "Use CPU Noise",
"cpuNoise": "CPU Noise",
"gpuNoise": "GPU Noise",
"useInitImg": "Use Initial Image",
"usePrompt": "Use Prompt",
"useSeed": "Use Seed",
@@ -1017,20 +914,11 @@
"vSymmetryStep": "V Symmetry Step",
"width": "Width"
},
"dynamicPrompts": {
"prompt": {
"combinatorial": "Combinatorial Generation",
"dynamicPrompts": "Dynamic Prompts",
"enableDynamicPrompts": "Enable Dynamic Prompts",
"maxPrompts": "Max Prompts",
"promptsWithCount_one": "{{count}} Prompt",
"promptsWithCount_other": "{{count}} Prompts",
"seedBehaviour": {
"label": "Seed Behaviour",
"perIterationLabel": "Seed per Iteration",
"perIterationDesc": "Use a different seed for each iteration",
"perPromptLabel": "Seed per Image",
"perPromptDesc": "Use a different seed for each image"
}
"maxPrompts": "Max Prompts"
},
"sdxl": {
"cfgScale": "CFG Scale",
@@ -1178,210 +1066,6 @@
"variations": "Try a variation with a value between 0.1 and 1.0 to change the result for a given seed. Interesting variations of the seed are between 0.1 and 0.3."
}
},
"popovers": {
"clipSkip": {
"heading": "CLIP Skip",
"paragraphs": [
"Choose how many layers of the CLIP model to skip.",
"Some models work better with certain CLIP Skip settings.",
"A higher value typically results in a less detailed image."
]
},
"paramNegativeConditioning": {
"heading": "Negative Prompt",
"paragraphs": [
"The generation process avoids the concepts in the negative prompt. Use this to exclude qualities or objects from the output.",
"Supports Compel syntax and embeddings."
]
},
"paramPositiveConditioning": {
"heading": "Positive Prompt",
"paragraphs": [
"Guides the generation process. You may use any words or phrases.",
"Compel and Dynamic Prompts syntaxes and embeddings."
]
},
"paramScheduler": {
"heading": "Scheduler",
"paragraphs": [
"Scheduler defines how to iteratively add noise to an image or how to update a sample based on a model's output."
]
},
"compositingBlur": {
"heading": "Blur",
"paragraphs": ["The blur radius of the mask."]
},
"compositingBlurMethod": {
"heading": "Blur Method",
"paragraphs": ["The method of blur applied to the masked area."]
},
"compositingCoherencePass": {
"heading": "Coherence Pass",
"paragraphs": [
"A second round of denoising helps to composite the Inpainted/Outpainted image."
]
},
"compositingCoherenceMode": {
"heading": "Mode",
"paragraphs": ["The mode of the Coherence Pass."]
},
"compositingCoherenceSteps": {
"heading": "Steps",
"paragraphs": [
"Number of denoising steps used in the Coherence Pass.",
"Same as the main Steps parameter."
]
},
"compositingStrength": {
"heading": "Strength",
"paragraphs": [
"Denoising strength for the Coherence Pass.",
"Same as the Image to Image Denoising Strength parameter."
]
},
"compositingMaskAdjustments": {
"heading": "Mask Adjustments",
"paragraphs": ["Adjust the mask."]
},
"controlNetBeginEnd": {
"heading": "Begin / End Step Percentage",
"paragraphs": [
"Which steps of the denoising process will have the ControlNet applied.",
"ControlNets applied at the beginning of the process guide composition, and ControlNets applied at the end guide details."
]
},
"controlNetControlMode": {
"heading": "Control Mode",
"paragraphs": [
"Lends more weight to either the prompt or ControlNet."
]
},
"controlNetResizeMode": {
"heading": "Resize Mode",
"paragraphs": [
"How the ControlNet image will be fit to the image output size."
]
},
"controlNet": {
"heading": "ControlNet",
"paragraphs": [
"ControlNets provide guidance to the generation process, helping create images with controlled composition, structure, or style, depending on the model selected."
]
},
"controlNetWeight": {
"heading": "Weight",
"paragraphs": [
"How strongly the ControlNet will impact the generated image."
]
},
"dynamicPrompts": {
"heading": "Dynamic Prompts",
"paragraphs": [
"Dynamic Prompts parses a single prompt into many.",
"The basic syntax is \"a {red|green|blue} ball\". This will produce three prompts: \"a red ball\", \"a green ball\" and \"a blue ball\".",
"You can use the syntax as many times as you like in a single prompt, but be sure to keep the number of prompts generated in check with the Max Prompts setting."
]
},
"dynamicPromptsMaxPrompts": {
"heading": "Max Prompts",
"paragraphs": [
"Limits the number of prompts that can be generated by Dynamic Prompts."
]
},
"dynamicPromptsSeedBehaviour": {
"heading": "Seed Behaviour",
"paragraphs": [
"Controls how the seed is used when generating prompts.",
"Per Iteration will use a unique seed for each iteration. Use this to explore prompt variations on a single seed.",
"For example, if you have 5 prompts, each image will use the same seed.",
"Per Image will use a unique seed for each image. This provides more variation."
]
},
"infillMethod": {
"heading": "Infill Method",
"paragraphs": ["Method to infill the selected area."]
},
"lora": {
"heading": "LoRA Weight",
"paragraphs": [
"Higher LoRA weight will lead to larger impacts on the final image."
]
},
"noiseUseCPU": {
"heading": "Use CPU Noise",
"paragraphs": [
"Controls whether noise is generated on the CPU or GPU.",
"With CPU Noise enabled, a particular seed will produce the same image on any machine.",
"There is no performance impact to enabling CPU Noise."
]
},
"paramCFGScale": {
"heading": "CFG Scale",
"paragraphs": [
"Controls how much your prompt influences the generation process."
]
},
"paramDenoisingStrength": {
"heading": "Denoising Strength",
"paragraphs": [
"How much noise is added to the input image.",
"0 will result in an identical image, while 1 will result in a completely new image."
]
},
"paramIterations": {
"heading": "Iterations",
"paragraphs": [
"The number of images to generate.",
"If Dynamic Prompts is enabled, each of the prompts will be generated this many times."
]
},
"paramModel": {
"heading": "Model",
"paragraphs": [
"Model used for the denoising steps.",
"Different models are typically trained to specialize in producing particular aesthetic results and content."
]
},
"paramRatio": {
"heading": "Aspect Ratio",
"paragraphs": [
"The aspect ratio of the dimensions of the image generated.",
"An image size (in number of pixels) equivalent to 512x512 is recommended for SD1.5 models and a size equivalent to 1024x1024 is recommended for SDXL models."
]
},
"paramSeed": {
"heading": "Seed",
"paragraphs": [
"Controls the starting noise used for generation.",
"Disable “Random Seed” to produce identical results with the same generation settings."
]
},
"paramSteps": {
"heading": "Steps",
"paragraphs": [
"Number of steps that will be performed in each generation.",
"Higher step counts will typically create better images but will require more generation time."
]
},
"paramVAE": {
"heading": "VAE",
"paragraphs": [
"Model used for translating AI output into the final image."
]
},
"paramVAEPrecision": {
"heading": "VAE Precision",
"paragraphs": [
"The precision used during VAE encoding and decoding. FP16/half precision is more efficient, at the expense of minor image variations."
]
},
"scaleBeforeProcessing": {
"heading": "Scale Before Processing",
"paragraphs": [
"Scales the selected area to the size best suited for the model before the image generation process."
]
}
},
"ui": {
"hideProgressImages": "Hide Progress Images",
"lockRatio": "Lock Ratio",
@@ -1444,8 +1128,6 @@
"showCanvasDebugInfo": "Show Additional Canvas Info",
"showGrid": "Show Grid",
"showHide": "Show/Hide",
"showResultsOn": "Show Results (On)",
"showResultsOff": "Show Results (Off)",
"showIntermediates": "Show Intermediates",
"snapToGrid": "Snap to Grid",
"undo": "Undo"

View File

@@ -58,7 +58,6 @@
"githubLabel": "Github",
"hotkeysLabel": "Hotkeys",
"imagePrompt": "Image Prompt",
"imageFailedToLoad": "Unable to Load Image",
"img2img": "Image To Image",
"langArabic": "العربية",
"langBrPortuguese": "Português do Brasil",
@@ -80,7 +79,7 @@
"lightMode": "Light Mode",
"linear": "Linear",
"load": "Load",
"loading": "Loading",
"loading": "Loading $t({{noun}})...",
"loadingInvokeAI": "Loading Invoke AI",
"learnMore": "Learn More",
"modelManager": "Model Manager",
@@ -717,7 +716,6 @@
"cannotConnectInputToInput": "Cannot connect input to input",
"cannotConnectOutputToOutput": "Cannot connect output to output",
"cannotConnectToSelf": "Cannot connect to self",
"cannotDuplicateConnection": "Cannot create duplicate connections",
"clipField": "Clip",
"clipFieldDescription": "Tokenizer and text_encoder submodels.",
"collection": "Collection",
@@ -1444,8 +1442,6 @@
"showCanvasDebugInfo": "Show Additional Canvas Info",
"showGrid": "Show Grid",
"showHide": "Show/Hide",
"showResultsOn": "Show Results (On)",
"showResultsOff": "Show Results (Off)",
"showIntermediates": "Show Intermediates",
"snapToGrid": "Snap to Grid",
"undo": "Undo"

View File

@@ -1,8 +1,6 @@
import { Flex, Grid } from '@chakra-ui/react';
import { useStore } from '@nanostores/react';
import { useLogger } from 'app/logging/useLogger';
import { appStarted } from 'app/store/middleware/listenerMiddleware/listeners/appStarted';
import { $headerComponent } from 'app/store/nanostores/headerComponent';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { PartialAppConfig } from 'app/types/invokeai';
import ImageUploader from 'common/components/ImageUploader';
@@ -16,10 +14,12 @@ import i18n from 'i18n';
import { size } from 'lodash-es';
import { memo, useCallback, useEffect } from 'react';
import { ErrorBoundary } from 'react-error-boundary';
import { usePreselectedImage } from '../../features/parameters/hooks/usePreselectedImage';
import AppErrorBoundaryFallback from './AppErrorBoundaryFallback';
import GlobalHotkeys from './GlobalHotkeys';
import PreselectedImage from './PreselectedImage';
import Toaster from './Toaster';
import { useStore } from '@nanostores/react';
import { $headerComponent } from 'app/store/nanostores/headerComponent';
const DEFAULT_CONFIG = {};
@@ -36,7 +36,8 @@ const App = ({ config = DEFAULT_CONFIG, selectedImage }: Props) => {
const logger = useLogger('system');
const dispatch = useAppDispatch();
const { handleSendToCanvas, handleSendToImg2Img, handleUseAllMetadata } =
usePreselectedImage(selectedImage?.imageName);
const handleReset = useCallback(() => {
localStorage.clear();
location.reload();
@@ -58,6 +59,24 @@ const App = ({ config = DEFAULT_CONFIG, selectedImage }: Props) => {
dispatch(appStarted());
}, [dispatch]);
useEffect(() => {
if (selectedImage && selectedImage.action === 'sendToCanvas') {
handleSendToCanvas();
}
}, [selectedImage, handleSendToCanvas]);
useEffect(() => {
if (selectedImage && selectedImage.action === 'sendToImg2Img') {
handleSendToImg2Img();
}
}, [selectedImage, handleSendToImg2Img]);
useEffect(() => {
if (selectedImage && selectedImage.action === 'useAllParameters') {
handleUseAllMetadata();
}
}, [selectedImage, handleUseAllMetadata]);
const headerComponent = useStore($headerComponent);
return (
@@ -93,7 +112,6 @@ const App = ({ config = DEFAULT_CONFIG, selectedImage }: Props) => {
<ChangeBoardModal />
<Toaster />
<GlobalHotkeys />
<PreselectedImage selectedImage={selectedImage} />
</ErrorBoundary>
);
};

View File

@@ -1,16 +0,0 @@
import { usePreselectedImage } from 'features/parameters/hooks/usePreselectedImage';
import { memo } from 'react';
type Props = {
selectedImage?: {
imageName: string;
action: 'sendToImg2Img' | 'sendToCanvas' | 'useAllParameters';
};
};
const PreselectedImage = (props: Props) => {
usePreselectedImage(props.selectedImage);
return null;
};
export default memo(PreselectedImage);

View File

@@ -5,7 +5,7 @@ import {
} from '@chakra-ui/react';
import { ReactNode, memo, useEffect, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { TOAST_OPTIONS, theme as invokeAITheme } from 'theme/theme';
import { theme as invokeAITheme } from 'theme/theme';
import '@fontsource-variable/inter';
import { MantineProvider } from '@mantine/core';
@@ -39,11 +39,7 @@ function ThemeLocaleProvider({ children }: ThemeLocaleProviderProps) {
return (
<MantineProvider theme={mantineTheme}>
<ChakraProvider
theme={theme}
colorModeManager={manager}
toastOptions={TOAST_OPTIONS}
>
<ChakraProvider theme={theme} colorModeManager={manager}>
{children}
</ChakraProvider>
</MantineProvider>

View File

@@ -54,6 +54,21 @@ import { addModelSelectedListener } from './listeners/modelSelected';
import { addModelsLoadedListener } from './listeners/modelsLoaded';
import { addDynamicPromptsListener } from './listeners/promptChanged';
import { addReceivedOpenAPISchemaListener } from './listeners/receivedOpenAPISchema';
import {
addSessionCanceledFulfilledListener,
addSessionCanceledPendingListener,
addSessionCanceledRejectedListener,
} from './listeners/sessionCanceled';
import {
addSessionCreatedFulfilledListener,
addSessionCreatedPendingListener,
addSessionCreatedRejectedListener,
} from './listeners/sessionCreated';
import {
addSessionInvokedFulfilledListener,
addSessionInvokedPendingListener,
addSessionInvokedRejectedListener,
} from './listeners/sessionInvoked';
import { addSocketConnectedEventListener as addSocketConnectedListener } from './listeners/socketio/socketConnected';
import { addSocketDisconnectedEventListener as addSocketDisconnectedListener } from './listeners/socketio/socketDisconnected';
import { addGeneratorProgressEventListener as addGeneratorProgressListener } from './listeners/socketio/socketGeneratorProgress';
@@ -71,7 +86,6 @@ import { addStagingAreaImageSavedListener } from './listeners/stagingAreaImageSa
import { addTabChangedListener } from './listeners/tabChanged';
import { addUpscaleRequestedListener } from './listeners/upscaleRequested';
import { addWorkflowLoadedListener } from './listeners/workflowLoaded';
import { addBatchEnqueuedListener } from './listeners/batchEnqueued';
export const listenerMiddleware = createListenerMiddleware();
@@ -122,7 +136,6 @@ addEnqueueRequestedCanvasListener();
addEnqueueRequestedNodes();
addEnqueueRequestedLinear();
addAnyEnqueuedListener();
addBatchEnqueuedListener();
// Canvas actions
addCanvasSavedToGalleryListener();
@@ -162,6 +175,21 @@ addSessionRetrievalErrorEventListener();
addInvocationRetrievalErrorEventListener();
addSocketQueueItemStatusChangedEventListener();
// Session Created
addSessionCreatedPendingListener();
addSessionCreatedFulfilledListener();
addSessionCreatedRejectedListener();
// Session Invoked
addSessionInvokedPendingListener();
addSessionInvokedFulfilledListener();
addSessionInvokedRejectedListener();
// Session Canceled
addSessionCanceledPendingListener();
addSessionCanceledFulfilledListener();
addSessionCanceledRejectedListener();
// ControlNet
addControlNetImageProcessedListener();
addControlNetAutoProcessListener();

View File

@@ -1,96 +0,0 @@
import { createStandaloneToast } from '@chakra-ui/react';
import { logger } from 'app/logging/logger';
import { parseify } from 'common/util/serialize';
import { zPydanticValidationError } from 'features/system/store/zodSchemas';
import { t } from 'i18next';
import { get, truncate, upperFirst } from 'lodash-es';
import { queueApi } from 'services/api/endpoints/queue';
import { TOAST_OPTIONS, theme } from 'theme/theme';
import { startAppListening } from '..';
const { toast } = createStandaloneToast({
theme: theme,
defaultOptions: TOAST_OPTIONS.defaultOptions,
});
export const addBatchEnqueuedListener = () => {
// success
startAppListening({
matcher: queueApi.endpoints.enqueueBatch.matchFulfilled,
effect: async (action) => {
const response = action.payload;
const arg = action.meta.arg.originalArgs;
logger('queue').debug(
{ enqueueResult: parseify(response) },
'Batch enqueued'
);
if (!toast.isActive('batch-queued')) {
toast({
id: 'batch-queued',
title: t('queue.batchQueued'),
description: t('queue.batchQueuedDesc', {
item_count: response.enqueued,
direction: arg.prepend ? t('queue.front') : t('queue.back'),
}),
duration: 1000,
status: 'success',
});
}
},
});
// error
startAppListening({
matcher: queueApi.endpoints.enqueueBatch.matchRejected,
effect: async (action) => {
const response = action.payload;
const arg = action.meta.arg.originalArgs;
if (!response) {
toast({
title: t('queue.batchFailedToQueue'),
status: 'error',
description: 'Unknown Error',
});
logger('queue').error(
{ batchConfig: parseify(arg), error: parseify(response) },
t('queue.batchFailedToQueue')
);
return;
}
const result = zPydanticValidationError.safeParse(response);
if (result.success) {
result.data.data.detail.map((e) => {
toast({
id: 'batch-failed-to-queue',
title: truncate(upperFirst(e.msg), { length: 128 }),
status: 'error',
description: truncate(
`Path:
${e.loc.join('.')}`,
{ length: 128 }
),
});
});
} else {
let detail = 'Unknown Error';
if (response.status === 403 && 'body' in response) {
detail = get(response, 'body.detail', 'Unknown Error');
} else if (response.status === 403 && 'error' in response) {
detail = get(response, 'error.detail', 'Unknown Error');
}
toast({
title: t('queue.batchFailedToQueue'),
status: 'error',
description: detail,
});
}
logger('queue').error(
{ batchConfig: parseify(arg), error: parseify(response) },
t('queue.batchFailedToQueue')
);
},
});
};

View File

@@ -25,7 +25,7 @@ export const addBoardIdSelectedListener = () => {
const state = getState();
const board_id = boardIdSelected.match(action)
? action.payload.boardId
? action.payload
: state.gallery.selectedBoardId;
const galleryView = galleryViewChanged.match(action)
@@ -55,12 +55,7 @@ export const addBoardIdSelectedListener = () => {
if (boardImagesData) {
const firstImage = imagesSelectors.selectAll(boardImagesData)[0];
const selectedImage = imagesSelectors.selectById(
boardImagesData,
action.payload.selectedImageName
);
dispatch(imageSelected(selectedImage || firstImage || null));
dispatch(imageSelected(firstImage ?? null));
} else {
// board has no images - deselect
dispatch(imageSelected(null));

View File

@@ -3,9 +3,9 @@ import { canvasImageToControlNet } from 'features/canvas/store/actions';
import { getBaseLayerBlob } from 'features/canvas/util/getBaseLayerBlob';
import { controlNetImageChanged } from 'features/controlNet/store/controlNetSlice';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { imagesApi } from 'services/api/endpoints/images';
import { startAppListening } from '..';
import { t } from 'i18next';
export const addCanvasImageToControlNetListener = () => {
startAppListening({
@@ -16,7 +16,7 @@ export const addCanvasImageToControlNetListener = () => {
let blob;
try {
blob = await getBaseLayerBlob(state, true);
blob = await getBaseLayerBlob(state);
} catch (err) {
log.error(String(err));
dispatch(
@@ -36,10 +36,10 @@ export const addCanvasImageToControlNetListener = () => {
file: new File([blob], 'savedCanvas.png', {
type: 'image/png',
}),
image_category: 'control',
image_category: 'mask',
is_intermediate: false,
board_id: autoAddBoardId === 'none' ? undefined : autoAddBoardId,
crop_visible: false,
crop_visible: true,
postUploadAction: {
type: 'TOAST',
toastOptions: { title: t('toast.canvasSentControlnetAssets') },

View File

@@ -3,9 +3,9 @@ import { canvasMaskToControlNet } from 'features/canvas/store/actions';
import { getCanvasData } from 'features/canvas/util/getCanvasData';
import { controlNetImageChanged } from 'features/controlNet/store/controlNetSlice';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { imagesApi } from 'services/api/endpoints/images';
import { startAppListening } from '..';
import { t } from 'i18next';
export const addCanvasMaskToControlNetListener = () => {
startAppListening({
@@ -50,7 +50,7 @@ export const addCanvasMaskToControlNetListener = () => {
image_category: 'mask',
is_intermediate: false,
board_id: autoAddBoardId === 'none' ? undefined : autoAddBoardId,
crop_visible: false,
crop_visible: true,
postUploadAction: {
type: 'TOAST',
toastOptions: { title: t('toast.maskSentControlnetAssets') },

View File

@@ -12,6 +12,8 @@ import { getCanvasGenerationMode } from 'features/canvas/util/getCanvasGeneratio
import { canvasGraphBuilt } from 'features/nodes/store/actions';
import { buildCanvasGraph } from 'features/nodes/util/graphBuilders/buildCanvasGraph';
import { prepareLinearUIBatch } from 'features/nodes/util/graphBuilders/buildLinearBatchConfig';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { imagesApi } from 'services/api/endpoints/images';
import { queueApi } from 'services/api/endpoints/queue';
import { ImageDTO } from 'services/api/types';
@@ -138,6 +140,8 @@ export const addEnqueueRequestedCanvasListener = () => {
const enqueueResult = await req.unwrap();
req.reset();
log.debug({ enqueueResult: parseify(enqueueResult) }, 'Batch enqueued');
const batchId = enqueueResult.batch.batch_id as string; // we know this is a string, backend provides it
// Prep the canvas staging area if it is not yet initialized
@@ -154,8 +158,28 @@ export const addEnqueueRequestedCanvasListener = () => {
// Associate the session with the canvas session ID
dispatch(canvasBatchIdAdded(batchId));
dispatch(
addToast({
title: t('queue.batchQueued'),
description: t('queue.batchQueuedDesc', {
item_count: enqueueResult.enqueued,
direction: prepend ? t('queue.front') : t('queue.back'),
}),
status: 'success',
})
);
} catch {
// no-op
log.error(
{ batchConfig: parseify(batchConfig) },
t('queue.batchFailedToQueue')
);
dispatch(
addToast({
title: t('queue.batchFailedToQueue'),
status: 'error',
})
);
}
},
});

View File

@@ -1,9 +1,13 @@
import { logger } from 'app/logging/logger';
import { enqueueRequested } from 'app/store/actions';
import { parseify } from 'common/util/serialize';
import { prepareLinearUIBatch } from 'features/nodes/util/graphBuilders/buildLinearBatchConfig';
import { buildLinearImageToImageGraph } from 'features/nodes/util/graphBuilders/buildLinearImageToImageGraph';
import { buildLinearSDXLImageToImageGraph } from 'features/nodes/util/graphBuilders/buildLinearSDXLImageToImageGraph';
import { buildLinearSDXLTextToImageGraph } from 'features/nodes/util/graphBuilders/buildLinearSDXLTextToImageGraph';
import { buildLinearTextToImageGraph } from 'features/nodes/util/graphBuilders/buildLinearTextToImageGraph';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { queueApi } from 'services/api/endpoints/queue';
import { startAppListening } from '..';
@@ -14,6 +18,7 @@ export const addEnqueueRequestedLinear = () => {
(action.payload.tabName === 'txt2img' ||
action.payload.tabName === 'img2img'),
effect: async (action, { getState, dispatch }) => {
const log = logger('queue');
const state = getState();
const model = state.generation.model;
const { prepend } = action.payload;
@@ -36,12 +41,38 @@ export const addEnqueueRequestedLinear = () => {
const batchConfig = prepareLinearUIBatch(state, graph, prepend);
const req = dispatch(
queueApi.endpoints.enqueueBatch.initiate(batchConfig, {
fixedCacheKey: 'enqueueBatch',
})
);
req.reset();
try {
const req = dispatch(
queueApi.endpoints.enqueueBatch.initiate(batchConfig, {
fixedCacheKey: 'enqueueBatch',
})
);
const enqueueResult = await req.unwrap();
req.reset();
log.debug({ enqueueResult: parseify(enqueueResult) }, 'Batch enqueued');
dispatch(
addToast({
title: t('queue.batchQueued'),
description: t('queue.batchQueuedDesc', {
item_count: enqueueResult.enqueued,
direction: prepend ? t('queue.front') : t('queue.back'),
}),
status: 'success',
})
);
} catch {
log.error(
{ batchConfig: parseify(batchConfig) },
t('queue.batchFailedToQueue')
);
dispatch(
addToast({
title: t('queue.batchFailedToQueue'),
status: 'error',
})
);
}
},
});
};

View File

@@ -1,5 +1,9 @@
import { logger } from 'app/logging/logger';
import { enqueueRequested } from 'app/store/actions';
import { parseify } from 'common/util/serialize';
import { buildNodesGraph } from 'features/nodes/util/graphBuilders/buildNodesGraph';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { queueApi } from 'services/api/endpoints/queue';
import { BatchConfig } from 'services/api/types';
import { startAppListening } from '..';
@@ -9,7 +13,9 @@ export const addEnqueueRequestedNodes = () => {
predicate: (action): action is ReturnType<typeof enqueueRequested> =>
enqueueRequested.match(action) && action.payload.tabName === 'nodes',
effect: async (action, { getState, dispatch }) => {
const log = logger('queue');
const state = getState();
const { prepend } = action.payload;
const graph = buildNodesGraph(state.nodes);
const batchConfig: BatchConfig = {
batch: {
@@ -19,12 +25,38 @@ export const addEnqueueRequestedNodes = () => {
prepend: action.payload.prepend,
};
const req = dispatch(
queueApi.endpoints.enqueueBatch.initiate(batchConfig, {
fixedCacheKey: 'enqueueBatch',
})
);
req.reset();
try {
const req = dispatch(
queueApi.endpoints.enqueueBatch.initiate(batchConfig, {
fixedCacheKey: 'enqueueBatch',
})
);
const enqueueResult = await req.unwrap();
req.reset();
log.debug({ enqueueResult: parseify(enqueueResult) }, 'Batch enqueued');
dispatch(
addToast({
title: t('queue.batchQueued'),
description: t('queue.batchQueuedDesc', {
item_count: enqueueResult.enqueued,
direction: prepend ? t('queue.front') : t('queue.back'),
}),
status: 'success',
})
);
} catch {
log.error(
{ batchConfig: parseify(batchConfig) },
'Failed to enqueue batch'
);
dispatch(
addToast({
title: t('queue.batchFailedToQueue'),
status: 'error',
})
);
}
},
});
};
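
Taken together, the canvas, linear, and nodes enqueue listeners above converge on one shape: dispatch the `enqueueBatch` mutation with a shared `fixedCacheKey`, `unwrap()` the request so a rejection lands in the `catch`, log and toast on both outcomes, and `reset()` the mutation to release its cached result. A minimal sketch of that shape, using only names visible in the hunks:

const req = dispatch(
  queueApi.endpoints.enqueueBatch.initiate(batchConfig, {
    fixedCacheKey: 'enqueueBatch', // shared key lets other components observe this mutation
  })
);
try {
  const enqueueResult = await req.unwrap(); // throws if the request was rejected
  req.reset(); // release the cached result once we have the payload
  log.debug({ enqueueResult: parseify(enqueueResult) }, 'Batch enqueued');
  dispatch(addToast({ title: t('queue.batchQueued'), status: 'success' }));
} catch {
  log.error({ batchConfig: parseify(batchConfig) }, t('queue.batchFailedToQueue'));
  dispatch(addToast({ title: t('queue.batchFailedToQueue'), status: 'error' }));
}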

View File

@@ -0,0 +1,44 @@
import { logger } from 'app/logging/logger';
import { serializeError } from 'serialize-error';
import { sessionCanceled } from 'services/api/thunks/session';
import { startAppListening } from '..';
export const addSessionCanceledPendingListener = () => {
startAppListening({
actionCreator: sessionCanceled.pending,
effect: () => {
//
},
});
};
export const addSessionCanceledFulfilledListener = () => {
startAppListening({
actionCreator: sessionCanceled.fulfilled,
effect: (action) => {
const log = logger('session');
const { session_id } = action.meta.arg;
log.debug({ session_id }, `Session canceled (${session_id})`);
},
});
};
export const addSessionCanceledRejectedListener = () => {
startAppListening({
actionCreator: sessionCanceled.rejected,
effect: (action) => {
const log = logger('session');
const { session_id } = action.meta.arg;
if (action.payload) {
const { error } = action.payload;
log.error(
{
session_id,
error: serializeError(error),
},
`Problem canceling session`
);
}
},
});
};

View File

@@ -0,0 +1,45 @@
import { logger } from 'app/logging/logger';
import { parseify } from 'common/util/serialize';
import { serializeError } from 'serialize-error';
import { sessionCreated } from 'services/api/thunks/session';
import { startAppListening } from '..';
export const addSessionCreatedPendingListener = () => {
startAppListening({
actionCreator: sessionCreated.pending,
effect: () => {
//
},
});
};
export const addSessionCreatedFulfilledListener = () => {
startAppListening({
actionCreator: sessionCreated.fulfilled,
effect: (action) => {
const log = logger('session');
const session = action.payload;
log.debug(
{ session: parseify(session) },
`Session created (${session.id})`
);
},
});
};
export const addSessionCreatedRejectedListener = () => {
startAppListening({
actionCreator: sessionCreated.rejected,
effect: (action) => {
const log = logger('session');
if (action.payload) {
const { error, status } = action.payload;
const graph = parseify(action.meta.arg);
log.error(
{ graph, status, error: serializeError(error) },
`Problem creating session`
);
}
},
});
};

View File

@@ -0,0 +1,44 @@
import { logger } from 'app/logging/logger';
import { serializeError } from 'serialize-error';
import { sessionInvoked } from 'services/api/thunks/session';
import { startAppListening } from '..';
export const addSessionInvokedPendingListener = () => {
startAppListening({
actionCreator: sessionInvoked.pending,
effect: () => {
//
},
});
};
export const addSessionInvokedFulfilledListener = () => {
startAppListening({
actionCreator: sessionInvoked.fulfilled,
effect: (action) => {
const log = logger('session');
const { session_id } = action.meta.arg;
log.debug({ session_id }, `Session invoked (${session_id})`);
},
});
};
export const addSessionInvokedRejectedListener = () => {
startAppListening({
actionCreator: sessionInvoked.rejected,
effect: (action) => {
const log = logger('session');
const { session_id } = action.meta.arg;
if (action.payload) {
const { error } = action.payload;
log.error(
{
session_id,
error: serializeError(error),
},
`Problem invoking session`
);
}
},
});
};
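
The three new session listener files (canceled, created, invoked) are deliberately uniform: a no-op `pending` listener kept as an extension point, a `fulfilled` listener that logs the identifier from `action.meta.arg` or the payload, and a `rejected` listener that runs the error through `serializeError` before logging. Condensed to its core, for a hypothetical thunk:

// `someSessionThunk` is a placeholder for sessionCanceled / sessionCreated / sessionInvoked
startAppListening({
  actionCreator: someSessionThunk.rejected,
  effect: (action) => {
    const log = logger('session');
    if (action.payload) {
      // serializeError converts the Error into a plain, loggable object
      log.error({ error: serializeError(action.payload.error) }, 'Problem with session');
    }
  },
});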

View File

@@ -81,32 +81,9 @@ export const addInvocationCompleteEventListener = () => {
// If auto-switch is enabled, select the new image
if (shouldAutoSwitch) {
// if auto-add is enabled, switch the gallery view and board if needed as the image comes in
if (gallery.galleryView !== 'images') {
dispatch(galleryViewChanged('images'));
}
if (
imageDTO.board_id &&
imageDTO.board_id !== gallery.selectedBoardId
) {
dispatch(
boardIdSelected({
boardId: imageDTO.board_id,
selectedImageName: imageDTO.image_name,
})
);
}
if (!imageDTO.board_id && gallery.selectedBoardId !== 'none') {
dispatch(
boardIdSelected({
boardId: 'none',
selectedImageName: imageDTO.image_name,
})
);
}
// if auto-add is enabled, switch the board as the image comes in
dispatch(galleryViewChanged('images'));
dispatch(boardIdSelected(imageDTO.board_id ?? 'none'));
dispatch(imageSelected(imageDTO));
}
}

View File

@@ -35,7 +35,6 @@ export const addSocketQueueItemStatusChangedEventListener = () => {
queueApi.util.invalidateTags([
'CurrentSessionQueueItem',
'NextSessionQueueItem',
'InvocationCacheStatus',
{ type: 'SessionQueueItem', id: item_id },
{ type: 'SessionQueueItemDTO', id: item_id },
{ type: 'BatchStatus', id: queue_batch_id },

View File

@@ -0,0 +1,54 @@
import { logger } from 'app/logging/logger';
import { AppThunkDispatch } from 'app/store/store';
import { parseify } from 'common/util/serialize';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { queueApi } from 'services/api/endpoints/queue';
import { BatchConfig } from 'services/api/types';
export const enqueueBatch = async (
batchConfig: BatchConfig,
dispatch: AppThunkDispatch
) => {
const log = logger('session');
const { prepend } = batchConfig;
try {
const req = dispatch(
queueApi.endpoints.enqueueBatch.initiate(batchConfig, {
fixedCacheKey: 'enqueueBatch',
})
);
const enqueueResult = await req.unwrap();
req.reset();
dispatch(
queueApi.endpoints.resumeProcessor.initiate(undefined, {
fixedCacheKey: 'resumeProcessor',
})
);
log.debug({ enqueueResult: parseify(enqueueResult) }, 'Batch enqueued');
dispatch(
addToast({
title: t('queue.batchQueued'),
description: t('queue.batchQueuedDesc', {
item_count: enqueueResult.enqueued,
direction: prepend ? t('queue.front') : t('queue.back'),
}),
status: 'success',
})
);
} catch {
log.error(
{ batchConfig: parseify(batchConfig) },
t('queue.batchFailedToQueue')
);
dispatch(
addToast({
title: t('queue.batchFailedToQueue'),
status: 'error',
})
);
}
};
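
Unlike the per-tab listeners, this helper also kicks the queue processor back on via `resumeProcessor` after a successful enqueue. Callers only supply the batch config and the thunk dispatch; toasts and logging are owned here. A usage sketch (the surrounding listener context is an assumption):

// inside a listener effect, after building `batchConfig`
await enqueueBatch(batchConfig, dispatch);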

View File

@@ -1,9 +1,18 @@
import { Box, ChakraProps } from '@chakra-ui/react';
import { chakra, ChakraProps } from '@chakra-ui/react';
import { memo } from 'react';
import { RgbaColorPicker } from 'react-colorful';
import { ColorPickerBaseProps, RgbaColor } from 'react-colorful/dist/types';
type IAIColorPickerProps = ColorPickerBaseProps<RgbaColor>;
type IAIColorPickerProps = Omit<ColorPickerBaseProps<RgbaColor>, 'color'> &
ChakraProps & {
pickerColor: RgbaColor;
styleClass?: string;
};
const ChakraRgbaColorPicker = chakra(RgbaColorPicker, {
baseStyle: { paddingInline: 4 },
shouldForwardProp: (prop) => !['pickerColor'].includes(prop),
});
const colorPickerStyles: NonNullable<ChakraProps['sx']> = {
width: 6,
@@ -11,17 +20,19 @@ const colorPickerStyles: NonNullable<ChakraProps['sx']> = {
borderColor: 'base.100',
};
const sx = {
'.react-colorful__hue-pointer': colorPickerStyles,
'.react-colorful__saturation-pointer': colorPickerStyles,
'.react-colorful__alpha-pointer': colorPickerStyles,
};
const IAIColorPicker = (props: IAIColorPickerProps) => {
const { styleClass = '', ...rest } = props;
return (
<Box sx={sx}>
<RgbaColorPicker {...props} />
</Box>
<ChakraRgbaColorPicker
sx={{
'.react-colorful__hue-pointer': colorPickerStyles,
'.react-colorful__saturation-pointer': colorPickerStyles,
'.react-colorful__alpha-pointer': colorPickerStyles,
}}
className={styleClass}
{...rest}
/>
);
};
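
The rewrite wraps `react-colorful`'s `RgbaColorPicker` in Chakra's `chakra()` factory, so the picker accepts `sx` and other `ChakraProps` directly, while `shouldForwardProp` keeps the custom `pickerColor` prop from being forwarded down to the wrapped picker. Call sites then shrink from a styled `Box` wrapper to a single element, as the later hunks show:

<IAIColorPicker
  sx={{ width: '100%', paddingTop: 2, paddingBottom: 2 }}
  pickerColor={brushColor}
  onChange={(newColor) => dispatch(setBrushColor(newColor))}
/>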

View File

@@ -139,11 +139,6 @@ const IAICanvas = () => {
const { handleDragStart, handleDragMove, handleDragEnd } =
useCanvasDragMove();
const handleContextMenu = useCallback(
(e: KonvaEventObject<MouseEvent>) => e.evt.preventDefault(),
[]
);
useEffect(() => {
if (!containerRef.current) {
return;
@@ -210,7 +205,9 @@ const IAICanvas = () => {
onDragStart={handleDragStart}
onDragMove={handleDragMove}
onDragEnd={handleDragEnd}
onContextMenu={handleContextMenu}
onContextMenu={(e: KonvaEventObject<MouseEvent>) =>
e.evt.preventDefault()
}
onWheel={handleWheel}
draggable={(tool === 'move' || isStaging) && !isModifyingBoundingBox}
>
@@ -226,11 +223,7 @@ const IAICanvas = () => {
>
<IAICanvasObjectRenderer />
</Layer>
<Layer
id="mask"
visible={isMaskEnabled && !isStaging}
listening={false}
>
<Layer id="mask" visible={isMaskEnabled} listening={false}>
<IAICanvasMaskLines visible={true} listening={false} />
<IAICanvasMaskCompositer listening={false} />
</Layer>

View File

@@ -1,27 +1,26 @@
import { skipToken } from '@reduxjs/toolkit/dist/query';
import { memo } from 'react';
import { Image } from 'react-konva';
import { $authToken } from 'services/api/client';
import { Image, Rect } from 'react-konva';
import { useGetImageDTOQuery } from 'services/api/endpoints/images';
import useImage from 'use-image';
import { CanvasImage } from '../store/canvasTypes';
import IAICanvasImageErrorFallback from './IAICanvasImageErrorFallback';
import { $authToken } from 'services/api/client';
import { memo } from 'react';
type IAICanvasImageProps = {
canvasImage: CanvasImage;
};
const IAICanvasImage = (props: IAICanvasImageProps) => {
const { x, y, imageName } = props.canvasImage;
const { width, height, x, y, imageName } = props.canvasImage;
const { currentData: imageDTO, isError } = useGetImageDTOQuery(
imageName ?? skipToken
);
const [image, status] = useImage(
const [image] = useImage(
imageDTO?.image_url ?? '',
$authToken.get() ? 'use-credentials' : 'anonymous'
);
if (isError || status === 'failed') {
return <IAICanvasImageErrorFallback canvasImage={props.canvasImage} />;
if (isError) {
return <Rect x={x} y={y} width={width} height={height} fill="red" />;
}
return <Image x={x} y={y} image={image} listening={false} />;

View File

@@ -1,44 +0,0 @@
import { useColorModeValue, useToken } from '@chakra-ui/react';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { Group, Rect, Text } from 'react-konva';
import { CanvasImage } from '../store/canvasTypes';
type IAICanvasImageErrorFallbackProps = {
canvasImage: CanvasImage;
};
const IAICanvasImageErrorFallback = ({
canvasImage,
}: IAICanvasImageErrorFallbackProps) => {
const [errorColorLight, errorColorDark, fontColorLight, fontColorDark] =
useToken('colors', ['base.400', 'base.500', 'base.700', 'base.900']);
const errorColor = useColorModeValue(errorColorLight, errorColorDark);
const fontColor = useColorModeValue(fontColorLight, fontColorDark);
const { t } = useTranslation();
return (
<Group>
<Rect
x={canvasImage.x}
y={canvasImage.y}
width={canvasImage.width}
height={canvasImage.height}
fill={errorColor}
/>
<Text
x={canvasImage.x}
y={canvasImage.y}
width={canvasImage.width}
height={canvasImage.height}
align="center"
verticalAlign="middle"
fontFamily='"Inter Variable", sans-serif'
fontSize={canvasImage.width / 16}
fontStyle="600"
text={t('common.imageFailedToLoad')}
fill={fontColor}
/>
</Group>
);
};
export default memo(IAICanvasImageErrorFallback);

View File

@@ -3,9 +3,10 @@ import { useAppSelector } from 'app/store/storeHooks';
import { canvasSelector } from 'features/canvas/store/canvasSelectors';
import { GroupConfig } from 'konva/lib/Group';
import { isEqual } from 'lodash-es';
import { memo } from 'react';
import { Group, Rect } from 'react-konva';
import IAICanvasImage from './IAICanvasImage';
import { memo } from 'react';
const selector = createSelector(
[canvasSelector],
@@ -14,11 +15,11 @@ const selector = createSelector(
layerState,
shouldShowStagingImage,
shouldShowStagingOutline,
boundingBoxCoordinates: stageBoundingBoxCoordinates,
boundingBoxDimensions: stageBoundingBoxDimensions,
boundingBoxCoordinates: { x, y },
boundingBoxDimensions: { width, height },
} = canvas;
const { selectedImageIndex, images, boundingBox } = layerState.stagingArea;
const { selectedImageIndex, images } = layerState.stagingArea;
return {
currentStagingAreaImage:
@@ -29,10 +30,10 @@ const selector = createSelector(
isOnLastImage: selectedImageIndex === images.length - 1,
shouldShowStagingImage,
shouldShowStagingOutline,
x: boundingBox?.x ?? stageBoundingBoxCoordinates.x,
y: boundingBox?.y ?? stageBoundingBoxCoordinates.y,
width: boundingBox?.width ?? stageBoundingBoxDimensions.width,
height: boundingBox?.height ?? stageBoundingBoxDimensions.height,
x,
y,
width,
height,
};
},
{

View File

@@ -14,7 +14,6 @@ import {
import { skipToken } from '@reduxjs/toolkit/dist/query';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIButton from 'common/components/IAIButton';
import { memo, useCallback } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { useTranslation } from 'react-i18next';
@@ -24,8 +23,8 @@ import {
FaCheck,
FaEye,
FaEyeSlash,
FaPlus,
FaSave,
FaTimes,
} from 'react-icons/fa';
import { useGetImageDTOQuery } from 'services/api/endpoints/images';
import { stagingAreaImageSaved } from '../store/actions';
@@ -42,10 +41,10 @@ const selector = createSelector(
} = canvas;
return {
currentIndex: selectedImageIndex,
total: images.length,
currentStagingAreaImage:
images.length > 0 ? images[selectedImageIndex] : undefined,
isOnFirstImage: selectedImageIndex === 0,
isOnLastImage: selectedImageIndex === images.length - 1,
shouldShowStagingImage,
shouldShowStagingOutline,
};
@@ -56,10 +55,10 @@ const selector = createSelector(
const IAICanvasStagingAreaToolbar = () => {
const dispatch = useAppDispatch();
const {
isOnFirstImage,
isOnLastImage,
currentStagingAreaImage,
shouldShowStagingImage,
currentIndex,
total,
} = useAppSelector(selector);
const { t } = useTranslation();
@@ -72,6 +71,39 @@ const IAICanvasStagingAreaToolbar = () => {
dispatch(setShouldShowStagingOutline(false));
}, [dispatch]);
useHotkeys(
['left'],
() => {
handlePrevImage();
},
{
enabled: () => true,
preventDefault: true,
}
);
useHotkeys(
['right'],
() => {
handleNextImage();
},
{
enabled: () => true,
preventDefault: true,
}
);
useHotkeys(
['enter'],
() => {
handleAccept();
},
{
enabled: () => true,
preventDefault: true,
}
);
const handlePrevImage = useCallback(
() => dispatch(prevStagingAreaImage()),
[dispatch]
@@ -87,45 +119,10 @@ const IAICanvasStagingAreaToolbar = () => {
[dispatch]
);
useHotkeys(['left'], handlePrevImage, {
enabled: () => true,
preventDefault: true,
});
useHotkeys(['right'], handleNextImage, {
enabled: () => true,
preventDefault: true,
});
useHotkeys(['enter'], () => handleAccept, {
enabled: () => true,
preventDefault: true,
});
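// Note: `() => handleAccept` returns the handler without calling it, so this
// Enter binding is a no-op as written. If Enter is meant to accept the staged
// image, the callback would invoke the handler, e.g.:
//   useHotkeys(['enter'], handleAccept, { enabled: () => true, preventDefault: true });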
const { data: imageDTO } = useGetImageDTOQuery(
currentStagingAreaImage?.imageName ?? skipToken
);
const handleToggleShouldShowStagingImage = useCallback(() => {
dispatch(setShouldShowStagingImage(!shouldShowStagingImage));
}, [dispatch, shouldShowStagingImage]);
const handleSaveToGallery = useCallback(() => {
if (!imageDTO) {
return;
}
dispatch(
stagingAreaImageSaved({
imageDTO,
})
);
}, [dispatch, imageDTO]);
const handleDiscardStagingArea = useCallback(() => {
dispatch(discardStagedImages());
}, [dispatch]);
if (!currentStagingAreaImage) {
return null;
}
@@ -134,12 +131,11 @@ const IAICanvasStagingAreaToolbar = () => {
<Flex
pos="absolute"
bottom={4}
gap={2}
w="100%"
align="center"
justify="center"
onMouseEnter={handleMouseOver}
onMouseLeave={handleMouseOut}
onMouseOver={handleMouseOver}
onMouseOut={handleMouseOut}
>
<ButtonGroup isAttached borderRadius="base" shadow="dark-lg">
<IAIIconButton
@@ -148,29 +144,16 @@ const IAICanvasStagingAreaToolbar = () => {
icon={<FaArrowLeft />}
onClick={handlePrevImage}
colorScheme="accent"
isDisabled={!shouldShowStagingImage}
isDisabled={isOnFirstImage}
/>
<IAIButton
colorScheme="accent"
pointerEvents="none"
isDisabled={!shouldShowStagingImage}
sx={{
background: 'base.600',
_dark: {
background: 'base.800',
},
}}
>{`${currentIndex + 1}/${total}`}</IAIButton>
<IAIIconButton
tooltip={`${t('unifiedCanvas.next')} (Right)`}
aria-label={`${t('unifiedCanvas.next')} (Right)`}
icon={<FaArrowRight />}
onClick={handleNextImage}
colorScheme="accent"
isDisabled={!shouldShowStagingImage}
isDisabled={isOnLastImage}
/>
</ButtonGroup>
<ButtonGroup isAttached borderRadius="base" shadow="dark-lg">
<IAIIconButton
tooltip={`${t('unifiedCanvas.accept')} (Enter)`}
aria-label={`${t('unifiedCanvas.accept')} (Enter)`}
@@ -179,19 +162,13 @@ const IAICanvasStagingAreaToolbar = () => {
colorScheme="accent"
/>
<IAIIconButton
tooltip={
shouldShowStagingImage
? t('unifiedCanvas.showResultsOn')
: t('unifiedCanvas.showResultsOff')
}
aria-label={
shouldShowStagingImage
? t('unifiedCanvas.showResultsOn')
: t('unifiedCanvas.showResultsOff')
}
tooltip={t('unifiedCanvas.showHide')}
aria-label={t('unifiedCanvas.showHide')}
data-alert={!shouldShowStagingImage}
icon={shouldShowStagingImage ? <FaEye /> : <FaEyeSlash />}
onClick={handleToggleShouldShowStagingImage}
onClick={() =>
dispatch(setShouldShowStagingImage(!shouldShowStagingImage))
}
colorScheme="accent"
/>
<IAIIconButton
@@ -199,14 +176,24 @@ const IAICanvasStagingAreaToolbar = () => {
aria-label={t('unifiedCanvas.saveToGallery')}
isDisabled={!imageDTO || !imageDTO.is_intermediate}
icon={<FaSave />}
onClick={handleSaveToGallery}
onClick={() => {
if (!imageDTO) {
return;
}
dispatch(
stagingAreaImageSaved({
imageDTO,
})
);
}}
colorScheme="accent"
/>
<IAIIconButton
tooltip={t('unifiedCanvas.discardAll')}
aria-label={t('unifiedCanvas.discardAll')}
icon={<FaTimes />}
onClick={handleDiscardStagingArea}
icon={<FaPlus style={{ transform: 'rotate(45deg)' }} />}
onClick={() => dispatch(discardStagedImages())}
colorScheme="error"
fontSize={20}
/>

View File

@@ -213,45 +213,45 @@ const IAICanvasBoundingBox = (props: IAICanvasBoundingBoxPreviewProps) => {
[scaledStep]
);
const handleStartedTransforming = useCallback(() => {
const handleStartedTransforming = () => {
dispatch(setIsTransformingBoundingBox(true));
}, [dispatch]);
};
const handleEndedTransforming = useCallback(() => {
const handleEndedTransforming = () => {
dispatch(setIsTransformingBoundingBox(false));
dispatch(setIsMovingBoundingBox(false));
dispatch(setIsMouseOverBoundingBox(false));
setIsMouseOverBoundingBoxOutline(false);
}, [dispatch]);
};
const handleStartedMoving = useCallback(() => {
const handleStartedMoving = () => {
dispatch(setIsMovingBoundingBox(true));
}, [dispatch]);
};
const handleEndedModifying = useCallback(() => {
const handleEndedModifying = () => {
dispatch(setIsTransformingBoundingBox(false));
dispatch(setIsMovingBoundingBox(false));
dispatch(setIsMouseOverBoundingBox(false));
setIsMouseOverBoundingBoxOutline(false);
}, [dispatch]);
};
const handleMouseOver = useCallback(() => {
const handleMouseOver = () => {
setIsMouseOverBoundingBoxOutline(true);
}, []);
};
const handleMouseOut = useCallback(() => {
const handleMouseOut = () => {
!isTransformingBoundingBox &&
!isMovingBoundingBox &&
setIsMouseOverBoundingBoxOutline(false);
}, [isMovingBoundingBox, isTransformingBoundingBox]);
};
const handleMouseEnterBoundingBox = useCallback(() => {
const handleMouseEnterBoundingBox = () => {
dispatch(setIsMouseOverBoundingBox(true));
}, [dispatch]);
};
const handleMouseLeaveBoundingBox = useCallback(() => {
const handleMouseLeaveBoundingBox = () => {
dispatch(setIsMouseOverBoundingBox(false));
}, [dispatch]);
};
return (
<Group {...rest}>

View File

@@ -1,4 +1,4 @@
import { Box, ButtonGroup, Flex } from '@chakra-ui/react';
import { ButtonGroup, Flex } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIButton from 'common/components/IAIButton';
@@ -135,12 +135,11 @@ const IAICanvasMaskOptions = () => {
dispatch(setShouldPreserveMaskedArea(e.target.checked))
}
/>
<Box sx={{ paddingTop: 2, paddingBottom: 2 }}>
<IAIColorPicker
color={maskColor}
onChange={(newColor) => dispatch(setMaskColor(newColor))}
/>
</Box>
<IAIColorPicker
sx={{ paddingTop: 2, paddingBottom: 2 }}
pickerColor={maskColor}
onChange={(newColor) => dispatch(setMaskColor(newColor))}
/>
<IAIButton size="sm" leftIcon={<FaSave />} onClick={handleSaveMask}>
Save Mask
</IAIButton>

View File

@@ -1,4 +1,4 @@
import { ButtonGroup, Flex, Box } from '@chakra-ui/react';
import { ButtonGroup, Flex } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
@@ -237,18 +237,15 @@ const IAICanvasToolChooserOptions = () => {
sliderNumberInputProps={{ max: 500 }}
/>
</Flex>
<Box
<IAIColorPicker
sx={{
width: '100%',
paddingTop: 2,
paddingBottom: 2,
}}
>
<IAIColorPicker
color={brushColor}
onChange={(newColor) => dispatch(setBrushColor(newColor))}
/>
</Box>
pickerColor={brushColor}
onChange={(newColor) => dispatch(setBrushColor(newColor))}
/>
</Flex>
</IAIPopover>
</ButtonGroup>

View File

@@ -6,7 +6,7 @@ export const canvasSelector = (state: RootState): CanvasState => state.canvas;
export const isStagingSelector = createSelector(
[stateSelector],
({ canvas }) => canvas.batchIds.length > 0
({ canvas }) => canvas.layerState.stagingArea.images.length > 0
);
export const initialCanvasImageSelector = (

View File

@@ -8,6 +8,7 @@ import { setAspectRatio } from 'features/parameters/store/generationSlice';
import { IRect, Vector2d } from 'konva/lib/types';
import { clamp, cloneDeep } from 'lodash-es';
import { RgbaColor } from 'react-colorful';
import { sessionCanceled } from 'services/api/thunks/session';
import { ImageDTO } from 'services/api/types';
import calculateCoordinates from '../util/calculateCoordinates';
import calculateScale from '../util/calculateScale';
@@ -186,7 +187,7 @@ export const canvasSlice = createSlice({
state.pastLayerStates.push(cloneDeep(state.layerState));
state.layerState = {
...cloneDeep(initialLayerState),
...initialLayerState,
objects: [
{
kind: 'image',
@@ -200,7 +201,6 @@ export const canvasSlice = createSlice({
],
};
state.futureLayerStates = [];
state.batchIds = [];
const newScale = calculateScale(
stageDimensions.width,
@@ -350,14 +350,11 @@ export const canvasSlice = createSlice({
state.pastLayerStates.shift();
}
state.layerState.stagingArea = cloneDeep(
cloneDeep(initialLayerState)
).stagingArea;
state.layerState.stagingArea = { ...initialLayerState.stagingArea };
state.futureLayerStates = [];
state.shouldShowStagingOutline = true;
state.shouldShowStagingImage = true;
state.batchIds = [];
state.shouldShowStagingOutline = true;
},
addFillRect: (state) => {
const { boundingBoxCoordinates, boundingBoxDimensions, brushColor } =
@@ -494,9 +491,8 @@ export const canvasSlice = createSlice({
resetCanvas: (state) => {
state.pastLayerStates.push(cloneDeep(state.layerState));
state.layerState = cloneDeep(initialLayerState);
state.layerState = initialLayerState;
state.futureLayerStates = [];
state.batchIds = [];
},
canvasResized: (
state,
@@ -621,22 +617,25 @@ export const canvasSlice = createSlice({
return;
}
const nextIndex = state.layerState.stagingArea.selectedImageIndex + 1;
const lastIndex = state.layerState.stagingArea.images.length - 1;
const currentIndex = state.layerState.stagingArea.selectedImageIndex;
const length = state.layerState.stagingArea.images.length;
state.layerState.stagingArea.selectedImageIndex =
nextIndex > lastIndex ? 0 : nextIndex;
state.layerState.stagingArea.selectedImageIndex = Math.min(
currentIndex + 1,
length - 1
);
},
prevStagingAreaImage: (state) => {
if (!state.layerState.stagingArea.images.length) {
return;
}
const prevIndex = state.layerState.stagingArea.selectedImageIndex - 1;
const lastIndex = state.layerState.stagingArea.images.length - 1;
const currentIndex = state.layerState.stagingArea.selectedImageIndex;
state.layerState.stagingArea.selectedImageIndex =
prevIndex < 0 ? lastIndex : prevIndex;
state.layerState.stagingArea.selectedImageIndex = Math.max(
currentIndex - 1,
0
);
},
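// The two variants above differ at the ends of the staging queue: the ternary
// forms wrap around (with 3 images, "next" at index 2 returns to 0 and "prev"
// at index 0 jumps to 2), while the Math.min / Math.max forms clamp and stop
// at the first or last image.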
commitStagingAreaImage: (state) => {
if (!state.layerState.stagingArea.images.length) {
@@ -658,12 +657,13 @@ export const canvasSlice = createSlice({
...imageToCommit,
});
}
state.layerState.stagingArea = cloneDeep(initialLayerState).stagingArea;
state.layerState.stagingArea = {
...initialLayerState.stagingArea,
};
state.futureLayerStates = [];
state.shouldShowStagingOutline = true;
state.shouldShowStagingImage = true;
state.batchIds = [];
},
fitBoundingBoxToStage: (state) => {
const {
@@ -786,6 +786,11 @@ export const canvasSlice = createSlice({
},
},
extraReducers: (builder) => {
builder.addCase(sessionCanceled.pending, (state) => {
if (!state.layerState.stagingArea.images.length) {
state.layerState.stagingArea = initialLayerState.stagingArea;
}
});
builder.addCase(setAspectRatio, (state, action) => {
const ratio = action.payload;
if (ratio) {

View File

@@ -1,14 +1,11 @@
import { RootState } from 'app/store/store';
import { getCanvasBaseLayer } from './konvaInstanceProvider';
import { RootState } from 'app/store/store';
import { konvaNodeToBlob } from './konvaNodeToBlob';
/**
* Get the canvas base layer blob, with or without bounding box according to `shouldCropToBoundingBoxOnSave`
*/
export const getBaseLayerBlob = async (
state: RootState,
alwaysUseBoundingBox: boolean = false
) => {
export const getBaseLayerBlob = async (state: RootState) => {
const canvasBaseLayer = getCanvasBaseLayer();
if (!canvasBaseLayer) {
@@ -27,15 +24,14 @@ export const getBaseLayerBlob = async (
const absPos = clonedBaseLayer.getAbsolutePosition();
const boundingBox =
shouldCropToBoundingBoxOnSave || alwaysUseBoundingBox
? {
x: boundingBoxCoordinates.x + absPos.x,
y: boundingBoxCoordinates.y + absPos.y,
width: boundingBoxDimensions.width,
height: boundingBoxDimensions.height,
}
: clonedBaseLayer.getClientRect();
const boundingBox = shouldCropToBoundingBoxOnSave
? {
x: boundingBoxCoordinates.x + absPos.x,
y: boundingBoxCoordinates.y + absPos.y,
width: boundingBoxDimensions.width,
height: boundingBoxDimensions.height,
}
: clonedBaseLayer.getClientRect();
return konvaNodeToBlob(clonedBaseLayer, boundingBox);
};

View File

@@ -6,6 +6,7 @@ import {
import { cloneDeep, forEach } from 'lodash-es';
import { imagesApi } from 'services/api/endpoints/images';
import { components } from 'services/api/schema';
import { isAnySessionRejected } from 'services/api/thunks/session';
import { ImageDTO } from 'services/api/types';
import { appSocketInvocationError } from 'services/events/actions';
import { controlNetImageProcessed } from './actions';
@@ -98,9 +99,6 @@ export const controlNetSlice = createSlice({
isControlNetEnabledToggled: (state) => {
state.isEnabled = !state.isEnabled;
},
controlNetEnabled: (state) => {
state.isEnabled = true;
},
controlNetAdded: (
state,
action: PayloadAction<{
@@ -114,12 +112,6 @@ export const controlNetSlice = createSlice({
controlNetId,
};
},
controlNetRecalled: (state, action: PayloadAction<ControlNetConfig>) => {
const controlNet = action.payload;
state.controlNets[controlNet.controlNetId] = {
...controlNet,
};
},
controlNetDuplicated: (
state,
action: PayloadAction<{
@@ -426,6 +418,10 @@ export const controlNetSlice = createSlice({
state.pendingControlImages = [];
});
builder.addMatcher(isAnySessionRejected, (state) => {
state.pendingControlImages = [];
});
builder.addMatcher(
imagesApi.endpoints.deleteImage.matchFulfilled,
(state, action) => {
@@ -448,9 +444,7 @@ export const controlNetSlice = createSlice({
export const {
isControlNetEnabledToggled,
controlNetEnabled,
controlNetAdded,
controlNetRecalled,
controlNetDuplicated,
controlNetAddedFromImage,
controlNetRemoved,

View File

@@ -93,7 +93,7 @@ const GalleryBoard = ({
const [localBoardName, setLocalBoardName] = useState(board_name);
const handleSelectBoard = useCallback(() => {
dispatch(boardIdSelected({ boardId: board_id }));
dispatch(boardIdSelected(board_id));
if (autoAssignBoardOnClick) {
dispatch(autoAddBoardIdChanged(board_id));
}

View File

@@ -34,7 +34,7 @@ const NoBoardBoard = memo(({ isSelected }: Props) => {
const { autoAddBoardId, autoAssignBoardOnClick } = useAppSelector(selector);
const boardName = useBoardName('none');
const handleSelectBoard = useCallback(() => {
dispatch(boardIdSelected({ boardId: 'none' }));
dispatch(boardIdSelected('none'));
if (autoAssignBoardOnClick) {
dispatch(autoAddBoardIdChanged('none'));
}

View File

@@ -32,7 +32,7 @@ const SystemBoardButton = ({ board_id }: Props) => {
const boardName = useBoardName(board_id);
const handleClick = useCallback(() => {
dispatch(boardIdSelected({ boardId: board_id }));
dispatch(boardIdSelected(board_id));
}, [board_id, dispatch]);
return (

View File

@@ -1,15 +1,8 @@
import {
ControlNetMetadataItem,
CoreMetadata,
LoRAMetadataItem,
} from 'features/nodes/types/types';
import { CoreMetadata, LoRAMetadataItem } from 'features/nodes/types/types';
import { useRecallParameters } from 'features/parameters/hooks/useRecallParameters';
import { memo, useMemo, useCallback } from 'react';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import {
isValidControlNetModel,
isValidLoRAModel,
} from '../../../parameters/types/parameterSchemas';
import { isValidLoRAModel } from '../../../parameters/types/parameterSchemas';
import ImageMetadataItem from './ImageMetadataItem';
type Props = {
@@ -33,7 +26,6 @@ const ImageMetadataActions = (props: Props) => {
recallHeight,
recallStrength,
recallLoRA,
recallControlNet,
} = useRecallParameters();
const handleRecallPositivePrompt = useCallback(() => {
@@ -83,21 +75,6 @@ const ImageMetadataActions = (props: Props) => {
[recallLoRA]
);
const handleRecallControlNet = useCallback(
(controlnet: ControlNetMetadataItem) => {
recallControlNet(controlnet);
},
[recallControlNet]
);
const validControlNets: ControlNetMetadataItem[] = useMemo(() => {
return metadata?.controlnets
? metadata.controlnets.filter((controlnet) =>
isValidControlNetModel(controlnet.control_model)
)
: [];
}, [metadata?.controlnets]);
if (!metadata || Object.keys(metadata).length === 0) {
return null;
}
@@ -203,14 +180,6 @@ const ImageMetadataActions = (props: Props) => {
);
}
})}
{validControlNets.map((controlnet, index) => (
<ImageMetadataItem
key={index}
label="ControlNet"
value={`${controlnet.control_model?.model_name} - ${controlnet.control_weight}`}
onClick={() => handleRecallControlNet(controlnet)}
/>
))}
</>
);
};

View File

@@ -35,11 +35,8 @@ export const gallerySlice = createSlice({
autoAssignBoardOnClickChanged: (state, action: PayloadAction<boolean>) => {
state.autoAssignBoardOnClick = action.payload;
},
boardIdSelected: (
state,
action: PayloadAction<{ boardId: BoardId; selectedImageName?: string }>
) => {
state.selectedBoardId = action.payload.boardId;
boardIdSelected: (state, action: PayloadAction<BoardId>) => {
state.selectedBoardId = action.payload;
state.galleryView = 'images';
},
autoAddBoardIdChanged: (state, action: PayloadAction<BoardId>) => {

View File

@@ -12,7 +12,6 @@ import {
OnConnect,
OnConnectEnd,
OnConnectStart,
OnEdgeUpdateFunc,
OnEdgesChange,
OnEdgesDelete,
OnInit,
@@ -22,7 +21,6 @@ import {
OnSelectionChangeFunc,
ProOptions,
ReactFlow,
ReactFlowProps,
XYPosition,
} from 'reactflow';
import { useIsValidConnection } from '../../hooks/useIsValidConnection';
@@ -30,8 +28,6 @@ import {
connectionEnded,
connectionMade,
connectionStarted,
edgeAdded,
edgeDeleted,
edgesChanged,
edgesDeleted,
nodesChanged,
@@ -171,63 +167,6 @@ export const Flow = () => {
}
}, []);
// #region Updatable Edges
/**
* Adapted from https://reactflow.dev/docs/examples/edges/updatable-edge/
* and https://reactflow.dev/docs/examples/edges/delete-edge-on-drop/
*
* - Edges can be dragged from one handle to another.
* - If the user drags the edge away from the node and drops it, delete the edge.
* - Do not delete the edge if the cursor didn't move (resolves annoying behaviour
* where the edge is deleted if you click it accidentally).
*/
// We have a ref for cursor position, but it is the *projected* cursor position.
// Easiest to just keep track of the last mouse event for this particular feature
const edgeUpdateMouseEvent = useRef<MouseEvent>();
const onEdgeUpdateStart: NonNullable<ReactFlowProps['onEdgeUpdateStart']> =
useCallback(
(e, edge, _handleType) => {
// update mouse event
edgeUpdateMouseEvent.current = e;
// always delete the edge when starting an update
dispatch(edgeDeleted(edge.id));
},
[dispatch]
);
const onEdgeUpdate: OnEdgeUpdateFunc = useCallback(
(_oldEdge, newConnection) => {
// instead of updating the edge (we deleted it earlier), we instead create
// a new one.
dispatch(connectionMade(newConnection));
},
[dispatch]
);
const onEdgeUpdateEnd: NonNullable<ReactFlowProps['onEdgeUpdateEnd']> =
useCallback(
(e, edge, _handleType) => {
// Handle the case where the user begins a drag but didn't move the cursor -
// because we deleted the edge, we need to add it back
if (
// ignore touch events
!('touches' in e) &&
edgeUpdateMouseEvent.current?.clientX === e.clientX &&
edgeUpdateMouseEvent.current?.clientY === e.clientY
) {
dispatch(edgeAdded(edge));
}
// reset mouse event
edgeUpdateMouseEvent.current = undefined;
},
[dispatch]
);
// #endregion
useHotkeys(['Ctrl+c', 'Meta+c'], (e) => {
e.preventDefault();
dispatch(selectionCopied());
@@ -257,9 +196,6 @@ export const Flow = () => {
onNodesChange={onNodesChange}
onEdgesChange={onEdgesChange}
onEdgesDelete={onEdgesDelete}
onEdgeUpdate={onEdgeUpdate}
onEdgeUpdateStart={onEdgeUpdateStart}
onEdgeUpdateEnd={onEdgeUpdateEnd}
onNodesDelete={onNodesDelete}
onConnectStart={onConnectStart}
onConnect={onConnect}

View File

@@ -53,12 +53,13 @@ export const useIsValidConnection = () => {
}
if (
edges.find((edge) => {
edge.target === target &&
edge.targetHandle === targetHandle &&
edge.source === source &&
edge.sourceHandle === sourceHandle;
})
edges
.filter((edge) => {
return edge.target === target && edge.targetHandle === targetHandle;
})
.find((edge) => {
edge.source === source && edge.sourceHandle === sourceHandle;
})
) {
// We already have a connection from this source to this target
return false;
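// Caution: both predicate bodies above use a braced arrow function without a
// `return`, so the comparison result is discarded, `find` gets undefined for
// every edge, and this duplicate check can never fire. A form that actually
// detects the duplicate (an assumption about the intended behavior):
//   const isDuplicate = edges.some(
//     (edge) =>
//       edge.source === source &&
//       edge.sourceHandle === sourceHandle &&
//       edge.target === target &&
//       edge.targetHandle === targetHandle
//   );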

View File

@@ -15,7 +15,6 @@ import {
NodeChange,
OnConnectStartParams,
SelectionMode,
updateEdge,
Viewport,
XYPosition,
} from 'reactflow';
@@ -183,16 +182,6 @@ const nodesSlice = createSlice({
edgesChanged: (state, action: PayloadAction<EdgeChange[]>) => {
state.edges = applyEdgeChanges(action.payload, state.edges);
},
edgeAdded: (state, action: PayloadAction<Edge>) => {
state.edges = addEdge(action.payload, state.edges);
},
edgeUpdated: (
state,
action: PayloadAction<{ oldEdge: Edge; newConnection: Connection }>
) => {
const { oldEdge, newConnection } = action.payload;
state.edges = updateEdge(oldEdge, newConnection, state.edges);
},
connectionStarted: (state, action: PayloadAction<OnConnectStartParams>) => {
state.connectionStartParams = action.payload;
const { nodeId, handleId, handleType } = action.payload;
@@ -377,7 +366,6 @@ const nodesSlice = createSlice({
target: edge.target,
type: 'collapsed',
data: { count: 1 },
updatable: false,
});
}
}
@@ -400,7 +388,6 @@ const nodesSlice = createSlice({
target: edge.target,
type: 'collapsed',
data: { count: 1 },
updatable: false,
});
}
}
@@ -413,9 +400,6 @@ const nodesSlice = createSlice({
}
}
},
edgeDeleted: (state, action: PayloadAction<string>) => {
state.edges = state.edges.filter((e) => e.id !== action.payload);
},
edgesDeleted: (state, action: PayloadAction<Edge[]>) => {
const edges = action.payload;
const collapsedEdges = edges.filter((e) => e.type === 'collapsed');
@@ -906,72 +890,69 @@ const nodesSlice = createSlice({
});
export const {
addNodePopoverClosed,
addNodePopoverOpened,
addNodePopoverToggled,
connectionEnded,
nodesChanged,
edgesChanged,
nodeAdded,
nodesDeleted,
connectionMade,
connectionStarted,
edgeDeleted,
edgesChanged,
edgesDeleted,
edgeUpdated,
connectionEnded,
shouldShowFieldTypeLegendChanged,
shouldShowMinimapPanelChanged,
nodeTemplatesBuilt,
nodeEditorReset,
imageCollectionFieldValueChanged,
fieldStringValueChanged,
fieldNumberValueChanged,
fieldBoardValueChanged,
fieldBooleanValueChanged,
fieldColorValueChanged,
fieldControlNetModelValueChanged,
fieldEnumModelValueChanged,
fieldImageValueChanged,
fieldIPAdapterModelValueChanged,
fieldLabelChanged,
fieldLoRAModelValueChanged,
fieldColorValueChanged,
fieldMainModelValueChanged,
fieldNumberValueChanged,
fieldVaeModelValueChanged,
fieldLoRAModelValueChanged,
fieldEnumModelValueChanged,
fieldControlNetModelValueChanged,
fieldIPAdapterModelValueChanged,
fieldRefinerModelValueChanged,
fieldSchedulerValueChanged,
fieldStringValueChanged,
fieldVaeModelValueChanged,
imageCollectionFieldValueChanged,
mouseOverFieldChanged,
mouseOverNodeChanged,
nodeAdded,
nodeEditorReset,
nodeEmbedWorkflowChanged,
nodeExclusivelySelected,
nodeIsIntermediateChanged,
nodeIsOpenChanged,
nodeLabelChanged,
nodeNotesChanged,
nodeOpacityChanged,
nodesChanged,
nodesDeleted,
nodeTemplatesBuilt,
nodeUseCacheChanged,
notesNodeValueChanged,
selectedAll,
selectedEdgesChanged,
selectedNodesChanged,
selectionCopied,
selectionModeChanged,
selectionPasted,
shouldAnimateEdgesChanged,
shouldColorEdgesChanged,
shouldShowFieldTypeLegendChanged,
shouldShowMinimapPanelChanged,
shouldSnapToGridChanged,
edgesDeleted,
shouldValidateGraphChanged,
viewportChanged,
workflowAuthorChanged,
workflowContactChanged,
shouldAnimateEdgesChanged,
nodeOpacityChanged,
shouldSnapToGridChanged,
shouldColorEdgesChanged,
selectedNodesChanged,
selectedEdgesChanged,
workflowNameChanged,
workflowDescriptionChanged,
workflowTagsChanged,
workflowAuthorChanged,
workflowNotesChanged,
workflowVersionChanged,
workflowContactChanged,
workflowLoaded,
notesNodeValueChanged,
workflowExposedFieldAdded,
workflowExposedFieldRemoved,
workflowLoaded,
workflowNameChanged,
workflowNotesChanged,
workflowTagsChanged,
workflowVersionChanged,
edgeAdded,
fieldLabelChanged,
viewportChanged,
mouseOverFieldChanged,
selectionCopied,
selectionPasted,
selectedAll,
addNodePopoverOpened,
addNodePopoverClosed,
addNodePopoverToggled,
selectionModeChanged,
nodeEmbedWorkflowChanged,
nodeIsIntermediateChanged,
mouseOverNodeChanged,
nodeExclusivelySelected,
nodeUseCacheChanged,
} = nodesSlice.actions;
export default nodesSlice.reducer;

View File

@@ -55,29 +55,9 @@ export const makeConnectionErrorSelector = (
return i18n.t('nodes.cannotConnectInputToInput');
}
// we have to figure out which is the target and which is the source
const target = handleType === 'target' ? nodeId : connectionNodeId;
const targetHandle =
handleType === 'target' ? fieldName : connectionFieldName;
const source = handleType === 'source' ? nodeId : connectionNodeId;
const sourceHandle =
handleType === 'source' ? fieldName : connectionFieldName;
if (
edges.find((edge) => {
edge.target === target &&
edge.targetHandle === targetHandle &&
edge.source === source &&
edge.sourceHandle === sourceHandle;
})
) {
// We already have a connection from this source to this target
return i18n.t('nodes.cannotDuplicateConnection');
}
if (
edges.find((edge) => {
return edge.target === target && edge.targetHandle === targetHandle;
return edge.target === nodeId && edge.targetHandle === fieldName;
}) &&
// except CollectionItem inputs can have multiples
targetType !== 'CollectionItem'

View File

@@ -1141,10 +1141,6 @@ const zLoRAMetadataItem = z.object({
export type LoRAMetadataItem = z.infer<typeof zLoRAMetadataItem>;
const zControlNetMetadataItem = zControlField.deepPartial();
export type ControlNetMetadataItem = z.infer<typeof zControlNetMetadataItem>;
export const zCoreMetadata = z
.object({
app_version: z.string().nullish().catch(null),
@@ -1226,7 +1222,6 @@ export const zInvocationNodeData = z.object({
notes: z.string(),
embedWorkflow: z.boolean(),
isIntermediate: z.boolean(),
useCache: z.boolean().optional(),
version: zSemVer.optional(),
});

View File

@@ -32,8 +32,7 @@ export const addSDXLRefinerToGraph = (
graph: NonNullableGraph,
baseNodeId: string,
modelLoaderNodeId?: string,
canvasInitImage?: ImageDTO,
canvasMaskImage?: ImageDTO
canvasInitImage?: ImageDTO
): void => {
const {
refinerModel,
@@ -258,30 +257,8 @@ export const addSDXLRefinerToGraph = (
};
}
if (graph.id === SDXL_CANVAS_INPAINT_GRAPH) {
if (isUsingScaledDimensions) {
graph.edges.push({
source: {
node_id: MASK_RESIZE_UP,
field: 'image',
},
destination: {
node_id: SDXL_REFINER_INPAINT_CREATE_MASK,
field: 'mask',
},
});
} else {
graph.nodes[SDXL_REFINER_INPAINT_CREATE_MASK] = {
...(graph.nodes[
SDXL_REFINER_INPAINT_CREATE_MASK
] as CreateDenoiseMaskInvocation),
mask: canvasMaskImage,
};
}
}
if (graph.id === SDXL_CANVAS_OUTPAINT_GRAPH) {
graph.edges.push({
graph.edges.push(
{
source: {
node_id: isUsingScaledDimensions ? MASK_RESIZE_UP : MASK_COMBINE,
field: 'image',
@@ -290,19 +267,18 @@ export const addSDXLRefinerToGraph = (
node_id: SDXL_REFINER_INPAINT_CREATE_MASK,
field: 'mask',
},
});
}
graph.edges.push({
source: {
node_id: SDXL_REFINER_INPAINT_CREATE_MASK,
field: 'denoise_mask',
},
destination: {
node_id: SDXL_REFINER_DENOISE_LATENTS,
field: 'denoise_mask',
},
});
{
source: {
node_id: SDXL_REFINER_INPAINT_CREATE_MASK,
field: 'denoise_mask',
},
destination: {
node_id: SDXL_REFINER_DENOISE_LATENTS,
field: 'denoise_mask',
},
}
);
}
if (

View File

@@ -663,8 +663,7 @@ export const buildCanvasSDXLInpaintGraph = (
graph,
CANVAS_COHERENCE_DENOISE_LATENTS,
modelLoaderNodeId,
canvasInitImage,
canvasMaskImage
canvasInitImage
);
if (seamlessXAxis || seamlessYAxis) {
modelLoaderNodeId = SDXL_REFINER_SEAMLESS;

View File

@@ -1,7 +1,7 @@
import { skipToken } from '@reduxjs/toolkit/dist/query';
import { CoreMetadata } from 'features/nodes/types/types';
import { t } from 'i18next';
import { useCallback, useEffect } from 'react';
import { useCallback } from 'react';
import { useAppToaster } from '../../../app/components/Toaster';
import { useAppDispatch } from '../../../app/store/storeHooks';
import {
@@ -13,21 +13,18 @@ import { setActiveTab } from '../../ui/store/uiSlice';
import { initialImageSelected } from '../store/actions';
import { useRecallParameters } from './useRecallParameters';
export const usePreselectedImage = (selectedImage?: {
imageName: string;
action: 'sendToImg2Img' | 'sendToCanvas' | 'useAllParameters';
}) => {
export const usePreselectedImage = (imageName?: string) => {
const dispatch = useAppDispatch();
const { recallAllParameters } = useRecallParameters();
const toaster = useAppToaster();
const { currentData: selectedImageDto } = useGetImageDTOQuery(
selectedImage?.imageName ?? skipToken
imageName ?? skipToken
);
const { currentData: selectedImageMetadata } = useGetImageMetadataQuery(
selectedImage?.imageName ?? skipToken
imageName ?? skipToken
);
const handleSendToCanvas = useCallback(() => {
@@ -57,23 +54,5 @@ export const usePreselectedImage = (selectedImage?: {
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [selectedImageMetadata]);
useEffect(() => {
if (selectedImage && selectedImage.action === 'sendToCanvas') {
handleSendToCanvas();
}
}, [selectedImage, handleSendToCanvas]);
useEffect(() => {
if (selectedImage && selectedImage.action === 'sendToImg2Img') {
handleSendToImg2Img();
}
}, [selectedImage, handleSendToImg2Img]);
useEffect(() => {
if (selectedImage && selectedImage.action === 'useAllParameters') {
handleUseAllMetadata();
}
}, [selectedImage, handleUseAllMetadata]);
return { handleSendToCanvas, handleSendToImg2Img, handleUseAllMetadata };
};
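
One side of this compare fires the chosen action automatically from a `useEffect`; the other returns the handlers and leaves the timing to the caller. A usage sketch of the handler-returning form (the `imageName` prop is an assumption about the host):

const { handleSendToCanvas, handleSendToImg2Img, handleUseAllMetadata } =
  usePreselectedImage(imageName);
// wire one of these to a button, or call it once the image DTO has loaded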

View File

@@ -2,11 +2,7 @@ import { createSelector } from '@reduxjs/toolkit';
import { useAppToaster } from 'app/components/Toaster';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import {
CoreMetadata,
LoRAMetadataItem,
ControlNetMetadataItem,
} from 'features/nodes/types/types';
import { CoreMetadata, LoRAMetadataItem } from 'features/nodes/types/types';
import {
refinerModelChanged,
setNegativeStylePromptSDXL,
@@ -22,18 +18,9 @@ import { useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { ImageDTO } from 'services/api/types';
import {
controlNetModelsAdapter,
loraModelsAdapter,
useGetControlNetModelsQuery,
useGetLoRAModelsQuery,
} from '../../../services/api/endpoints/models';
import {
ControlNetConfig,
controlNetEnabled,
controlNetRecalled,
controlNetReset,
initialControlNet,
} from '../../controlNet/store/controlNetSlice';
import { loraRecalled, lorasCleared } from '../../lora/store/loraSlice';
import { initialImageSelected, modelSelected } from '../store/actions';
import {
@@ -51,7 +38,6 @@ import {
isValidCfgScale,
isValidHeight,
isValidLoRAModel,
isValidControlNetModel,
isValidMainModel,
isValidNegativePrompt,
isValidPositivePrompt,
@@ -67,11 +53,6 @@ import {
isValidStrength,
isValidWidth,
} from '../types/parameterSchemas';
import { v4 as uuidv4 } from 'uuid';
import {
CONTROLNET_PROCESSORS,
CONTROLNET_MODEL_DEFAULT_PROCESSORS,
} from 'features/controlNet/store/constants';
const selector = createSelector(stateSelector, ({ generation }) => {
const { model } = generation;
@@ -409,121 +390,6 @@ export const useRecallParameters = () => {
[prepareLoRAMetadataItem, dispatch, parameterSetToast, parameterNotSetToast]
);
/**
* Recall ControlNet with toast
*/
const { controlnets } = useGetControlNetModelsQuery(undefined, {
selectFromResult: (result) => ({
controlnets: result.data
? controlNetModelsAdapter.getSelectors().selectAll(result.data)
: [],
}),
});
const prepareControlNetMetadataItem = useCallback(
(controlnetMetadataItem: ControlNetMetadataItem) => {
if (!isValidControlNetModel(controlnetMetadataItem.control_model)) {
return { controlnet: null, error: 'Invalid ControlNet model' };
}
const {
image,
control_model,
control_weight,
begin_step_percent,
end_step_percent,
control_mode,
resize_mode,
} = controlnetMetadataItem;
const matchingControlNetModel = controlnets.find(
(c) =>
c.base_model === control_model.base_model &&
c.model_name === control_model.model_name
);
if (!matchingControlNetModel) {
return { controlnet: null, error: 'ControlNet model is not installed' };
}
const isCompatibleBaseModel =
matchingControlNetModel?.base_model === model?.base_model;
if (!isCompatibleBaseModel) {
return {
controlnet: null,
error: 'ControlNet incompatible with currently-selected model',
};
}
const controlNetId = uuidv4();
let processorType = initialControlNet.processorType;
for (const modelSubstring in CONTROLNET_MODEL_DEFAULT_PROCESSORS) {
if (matchingControlNetModel.model_name.includes(modelSubstring)) {
processorType =
CONTROLNET_MODEL_DEFAULT_PROCESSORS[modelSubstring] ||
initialControlNet.processorType;
break;
}
}
const processorNode = CONTROLNET_PROCESSORS[processorType].default;
const controlnet: ControlNetConfig = {
isEnabled: true,
model: matchingControlNetModel,
weight:
typeof control_weight === 'number'
? control_weight
: initialControlNet.weight,
beginStepPct: begin_step_percent || initialControlNet.beginStepPct,
endStepPct: end_step_percent || initialControlNet.endStepPct,
controlMode: control_mode || initialControlNet.controlMode,
resizeMode: resize_mode || initialControlNet.resizeMode,
controlImage: image?.image_name || null,
processedControlImage: image?.image_name || null,
processorType,
processorNode:
processorNode.type !== 'none'
? processorNode
: initialControlNet.processorNode,
shouldAutoConfig: true,
controlNetId,
};
return { controlnet, error: null };
},
[controlnets, model?.base_model]
);
const recallControlNet = useCallback(
(controlnetMetadataItem: ControlNetMetadataItem) => {
const result = prepareControlNetMetadataItem(controlnetMetadataItem);
if (!result.controlnet) {
parameterNotSetToast(result.error);
return;
}
dispatch(
controlNetRecalled({
...result.controlnet,
})
);
dispatch(controlNetEnabled());
parameterSetToast();
},
[
prepareControlNetMetadataItem,
dispatch,
parameterSetToast,
parameterNotSetToast,
]
);
/*
* Sets image as initial image with toast
*/
@@ -562,7 +428,6 @@ export const useRecallParameters = () => {
refiner_negative_aesthetic_score,
refiner_start,
loras,
controlnets,
} = metadata;
if (isValidCfgScale(cfg_scale)) {
@@ -652,15 +517,6 @@ export const useRecallParameters = () => {
}
});
dispatch(controlNetReset());
dispatch(controlNetEnabled());
controlnets?.forEach((controlnet) => {
const result = prepareControlNetMetadataItem(controlnet);
if (result.controlnet) {
dispatch(controlNetRecalled(result.controlnet));
}
});
allParameterSetToast();
},
[
@@ -668,7 +524,6 @@ export const useRecallParameters = () => {
allParameterSetToast,
dispatch,
prepareLoRAMetadataItem,
prepareControlNetMetadataItem,
]
);
@@ -687,7 +542,6 @@ export const useRecallParameters = () => {
recallHeight,
recallStrength,
recallLoRA,
recallControlNet,
recallAllParameters,
sendToImageToImage,
};

View File

@@ -1,7 +1,9 @@
import { ButtonGroup } from '@chakra-ui/react';
import { useAppSelector } from 'app/store/storeHooks';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { useGetInvocationCacheStatusQuery } from 'services/api/endpoints/appInfo';
import { useGetQueueStatusQuery } from 'services/api/endpoints/queue';
import ClearInvocationCacheButton from './ClearInvocationCacheButton';
import ToggleInvocationCacheButton from './ToggleInvocationCacheButton';
import StatusStatGroup from './common/StatusStatGroup';
@@ -9,7 +11,16 @@ import StatusStatItem from './common/StatusStatItem';
const InvocationCacheStatus = () => {
const { t } = useTranslation();
const { data: cacheStatus } = useGetInvocationCacheStatusQuery(undefined);
const isConnected = useAppSelector((state) => state.system.isConnected);
const { data: queueStatus } = useGetQueueStatusQuery(undefined);
const { data: cacheStatus } = useGetInvocationCacheStatusQuery(undefined, {
pollingInterval:
isConnected &&
queueStatus?.processor.is_started &&
queueStatus?.queue.pending > 0
? 5000
: 0,
});
return (
<StatusStatGroup>

View File

@@ -1,6 +1,5 @@
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { addToast } from 'features/system/store/systemSlice';
import { isNil } from 'lodash-es';
import { useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import {
@@ -41,7 +40,7 @@ export const useCancelCurrentQueueItem = () => {
}, [currentQueueItemId, dispatch, t, trigger]);
const isDisabled = useMemo(
() => !isConnected || isNil(currentQueueItemId),
() => !isConnected || !currentQueueItemId,
[isConnected, currentQueueItemId]
);

View File

@@ -1,8 +1,9 @@
import { UseToastOptions } from '@chakra-ui/react';
import { PayloadAction, createSlice, isAnyOf } from '@reduxjs/toolkit';
import { t } from 'i18next';
import { startCase } from 'lodash-es';
import { get, startCase, truncate, upperFirst } from 'lodash-es';
import { LogLevelName } from 'roarr';
import { isAnySessionRejected } from 'services/api/thunks/session';
import {
appSocketConnected,
appSocketDisconnected,
@@ -19,7 +20,8 @@ import {
} from 'services/events/actions';
import { calculateStepPercentage } from '../util/calculateStepPercentage';
import { makeToast } from '../util/makeToast';
import { LANGUAGES, SystemState } from './types';
import { SystemState, LANGUAGES } from './types';
import { zPydanticValidationError } from './zodSchemas';
export const initialSystemState: SystemState = {
isInitialized: false,
@@ -173,6 +175,50 @@ export const systemSlice = createSlice({
// *** Matchers - must be after all cases ***
/**
* Session Invoked - REJECTED
* Session Created - REJECTED
*/
builder.addMatcher(isAnySessionRejected, (state, action) => {
let errorDescription = undefined;
const duration = 5000;
if (action.payload?.status === 422) {
const result = zPydanticValidationError.safeParse(action.payload);
if (result.success) {
result.data.error.detail.map((e) => {
state.toastQueue.push(
makeToast({
title: truncate(upperFirst(e.msg), { length: 128 }),
status: 'error',
description: truncate(
`Path:
${e.loc.join('.')}`,
{ length: 128 }
),
duration,
})
);
});
return;
}
} else if (action.payload?.error) {
errorDescription = action.payload?.error;
}
state.toastQueue.push(
makeToast({
title: t('toast.serverError'),
status: 'error',
description: truncate(
get(errorDescription, 'detail', 'Unknown Error'),
{ length: 128 }
),
duration,
})
);
});
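// zPydanticValidationError (see the zodSchemas hunk below) narrows this to a
// 422 payload carrying a `detail` array of `{ loc: string[], msg: string }`
// entries; because safeParse is used, any other shape falls through to the
// generic t('toast.serverError') toast that follows.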
/**
* Any server error
*/

View File

@@ -2,7 +2,7 @@ import { z } from 'zod';
export const zPydanticValidationError = z.object({
status: z.literal(422),
data: z.object({
error: z.object({
detail: z.array(
z.object({
loc: z.array(z.string()),

View File

@@ -14,7 +14,7 @@ import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import ImageGalleryContent from 'features/gallery/components/ImageGalleryContent';
import NodeEditorPanelGroup from 'features/nodes/components/sidePanel/NodeEditorPanelGroup';
import { InvokeTabName } from 'features/ui/store/tabMap';
import { InvokeTabName, tabMap } from 'features/ui/store/tabMap';
import { setActiveTab } from 'features/ui/store/uiSlice';
import { ResourceKey } from 'i18next';
import { isEqual } from 'lodash-es';
@@ -110,7 +110,7 @@ export const NO_GALLERY_TABS: InvokeTabName[] = ['modelManager', 'queue'];
export const NO_SIDE_PANEL_TABS: InvokeTabName[] = ['modelManager', 'queue'];
const InvokeTabs = () => {
const activeTabIndex = useAppSelector(activeTabIndexSelector);
const activeTab = useAppSelector(activeTabIndexSelector);
const activeTabName = useAppSelector(activeTabNameSelector);
const enabledTabs = useAppSelector(enabledTabsSelector);
const { t } = useTranslation();
@@ -150,13 +150,13 @@ const InvokeTabs = () => {
const handleTabChange = useCallback(
(index: number) => {
const tab = enabledTabs[index];
if (!tab) {
const activeTabName = tabMap[index];
if (!activeTabName) {
return;
}
dispatch(setActiveTab(tab.id));
dispatch(setActiveTab(activeTabName));
},
[dispatch, enabledTabs]
[dispatch]
);
const {
@@ -216,8 +216,8 @@ const InvokeTabs = () => {
return (
<Tabs
variant="appTabs"
defaultIndex={activeTabIndex}
index={activeTabIndex}
defaultIndex={activeTab}
index={activeTab}
onChange={handleTabChange}
sx={{
flexGrow: 1,

View File

@@ -95,32 +95,26 @@ export default function UnifiedCanvasColorPicker() {
>
<Flex minWidth={60} direction="column" gap={4} width="100%">
{layer === 'base' && (
<Box
<IAIColorPicker
sx={{
width: '100%',
paddingTop: 2,
paddingBottom: 2,
}}
>
<IAIColorPicker
color={brushColor}
onChange={(newColor) => dispatch(setBrushColor(newColor))}
/>
</Box>
pickerColor={brushColor}
onChange={(newColor) => dispatch(setBrushColor(newColor))}
/>
)}
{layer === 'mask' && (
<Box
<IAIColorPicker
sx={{
width: '100%',
paddingTop: 2,
paddingBottom: 2,
}}
>
<IAIColorPicker
color={maskColor}
onChange={(newColor) => dispatch(setMaskColor(newColor))}
/>
</Box>
pickerColor={maskColor}
onChange={(newColor) => dispatch(setMaskColor(newColor))}
/>
)}
</Flex>
</IAIPopover>

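The change above drops the wrapping `<Box>` and passes `sx` and `pickerColor` straight to `IAIColorPicker`, implying the wrapper now handles its own layout. A hedged sketch of such a component follows; the `react-colorful` dependency and the prop forwarding are assumptions inferred from the call sites:

```tsx
import { Box, ChakraProps } from '@chakra-ui/react';
import { RgbaColor, RgbaColorPicker } from 'react-colorful';

type IAIColorPickerProps = ChakraProps & {
  pickerColor: RgbaColor;
  onChange: (color: RgbaColor) => void;
};

// Renaming `color` to `pickerColor` avoids clashing with Chakra's own `color`
// style prop; all other Chakra props (including `sx`) land on the Box.
const IAIColorPicker = ({ pickerColor, onChange, ...styleProps }: IAIColorPickerProps) => (
  <Box {...styleProps}>
    <RgbaColorPicker color={pickerColor} onChange={onChange} />
  </Box>
);

export default IAIColorPicker;
```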
View File

@@ -0,0 +1,13 @@
import { InvokeTabName, tabMap } from './tabMap';
import { UIState } from './uiTypes';
export const setActiveTabReducer = (
state: UIState,
newActiveTab: number | InvokeTabName
) => {
if (typeof newActiveTab === 'number') {
state.activeTab = newActiveTab;
} else {
state.activeTab = tabMap.indexOf(newActiveTab);
}
};

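A brief usage note on the helper above: it accepts either an index or a tab name and normalizes both to an index. A quick sketch of the two branches (the partial `UIState` here is illustrative):

```typescript
import { setActiveTabReducer } from './extraReducers';
import { UIState } from './uiTypes';

// Illustrative partial state; the real UIState carries many more fields.
const state = { activeTab: 0 } as UIState;

setActiveTabReducer(state, 2);         // number branch: stored as-is -> 2
setActiveTabReducer(state, 'img2img'); // name branch: tabMap.indexOf -> its index
```

One caveat worth noting: a name missing from `tabMap` would store `-1`, but since the argument is constrained to `InvokeTabName`, that should not occur in practice.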
View File

@@ -1,23 +1,27 @@
import { createSelector } from '@reduxjs/toolkit';
import { RootState } from 'app/store/store';
import { isEqual, isString } from 'lodash-es';
import { tabMap } from './tabMap';
import { isEqual } from 'lodash-es';
import { InvokeTabName, tabMap } from './tabMap';
import { UIState } from './uiTypes';
export const activeTabNameSelector = createSelector(
(state: RootState) => state,
/**
* Previously `activeTab` was an integer, but now it's a string.
* Default to first tab in case user has integer.
*/
({ ui }) => (isString(ui.activeTab) ? ui.activeTab : 'txt2img')
(state: RootState) => state.ui,
(ui: UIState) => tabMap[ui.activeTab] as InvokeTabName,
{
memoizeOptions: {
equalityCheck: isEqual,
},
}
);
export const activeTabIndexSelector = createSelector(
(state: RootState) => state,
({ ui, config }) => {
const tabs = tabMap.filter((t) => !config.disabledTabs.includes(t));
const idx = tabs.indexOf(ui.activeTab);
return idx === -1 ? 0 : idx;
(state: RootState) => state.ui,
(ui: UIState) => ui.activeTab,
{
memoizeOptions: {
equalityCheck: isEqual,
},
}
);

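The refactor above also narrows each selector's input from the whole `state` to the `ui` slice, which matters for memoization. A minimal sketch of the difference, reusing the identifiers already imported in this file:

```typescript
// With `(state: RootState) => state`, every dispatch produces a new root
// reference, so the result function re-runs on each store update; selecting
// only `state.ui` re-computes only when the ui slice itself changes.
const selectUi = (state: RootState) => state.ui;

export const exampleActiveTabName = createSelector(
  selectUi,
  (ui: UIState) => tabMap[ui.activeTab]
);
```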
View File

@@ -2,11 +2,12 @@ import type { PayloadAction } from '@reduxjs/toolkit';
import { createSlice } from '@reduxjs/toolkit';
import { initialImageChanged } from 'features/parameters/store/generationSlice';
import { SchedulerParam } from 'features/parameters/types/parameterSchemas';
import { setActiveTabReducer } from './extraReducers';
import { InvokeTabName } from './tabMap';
import { UIState } from './uiTypes';
export const initialUIState: UIState = {
activeTab: 'txt2img',
activeTab: 0,
shouldShowImageDetails: false,
shouldUseCanvasBetaLayout: false,
shouldShowExistingModelsInSearch: false,
@@ -25,7 +26,7 @@ export const uiSlice = createSlice({
initialState: initialUIState,
reducers: {
setActiveTab: (state, action: PayloadAction<InvokeTabName>) => {
state.activeTab = action.payload;
setActiveTabReducer(state, action.payload);
},
setShouldShowImageDetails: (state, action: PayloadAction<boolean>) => {
state.shouldShowImageDetails = action.payload;
@@ -72,7 +73,7 @@ export const uiSlice = createSlice({
},
extraReducers(builder) {
builder.addCase(initialImageChanged, (state) => {
state.activeTab = 'img2img';
setActiveTabReducer(state, 'img2img');
});
},
});

View File

@@ -1,5 +1,4 @@
import { SchedulerParam } from 'features/parameters/types/parameterSchemas';
import { InvokeTabName } from './tabMap';
export type Coordinates = {
x: number;
@@ -14,7 +13,7 @@ export type Dimensions = {
export type Rect = Coordinates & Dimensions;
export interface UIState {
activeTab: InvokeTabName;
activeTab: number;
shouldShowImageDetails: boolean;
shouldUseCanvasBetaLayout: boolean;
shouldShowExistingModelsInSearch: boolean;

View File

@@ -0,0 +1,184 @@
import { createAsyncThunk, isAnyOf } from '@reduxjs/toolkit';
import { $queueId } from 'features/queue/store/queueNanoStore';
import { isObject } from 'lodash-es';
import { $client } from 'services/api/client';
import { paths } from 'services/api/schema';
import { O } from 'ts-toolbelt';
type CreateSessionArg = {
graph: NonNullable<
paths['/api/v1/sessions/']['post']['requestBody']
>['content']['application/json'];
};
type CreateSessionResponse = O.Required<
NonNullable<
paths['/api/v1/sessions/']['post']['requestBody']
>['content']['application/json'],
'id'
>;
type CreateSessionThunkConfig = {
rejectValue: { arg: CreateSessionArg; status: number; error: unknown };
};
/**
* `SessionsService.createSession()` thunk
*/
export const sessionCreated = createAsyncThunk<
CreateSessionResponse,
CreateSessionArg,
CreateSessionThunkConfig
>('api/sessionCreated', async (arg, { rejectWithValue }) => {
const { graph } = arg;
const { POST } = $client.get();
const { data, error, response } = await POST('/api/v1/sessions/', {
body: graph,
params: { query: { queue_id: $queueId.get() } },
});
if (error) {
return rejectWithValue({ arg, status: response.status, error });
}
return data;
});
type InvokedSessionArg = {
session_id: paths['/api/v1/sessions/{session_id}/invoke']['put']['parameters']['path']['session_id'];
};
type InvokedSessionResponse =
paths['/api/v1/sessions/{session_id}/invoke']['put']['responses']['200']['content']['application/json'];
type InvokedSessionThunkConfig = {
rejectValue: {
arg: InvokedSessionArg;
error: unknown;
status: number;
};
};
const isErrorWithStatus = (error: unknown): error is { status: number } =>
isObject(error) && 'status' in error;
const isErrorWithDetail = (error: unknown): error is { detail: string } =>
isObject(error) && 'detail' in error;
/**
* `SessionsService.invokeSession()` thunk
*/
export const sessionInvoked = createAsyncThunk<
InvokedSessionResponse,
InvokedSessionArg,
InvokedSessionThunkConfig
>('api/sessionInvoked', async (arg, { rejectWithValue }) => {
const { session_id } = arg;
const { PUT } = $client.get();
const { error, response } = await PUT(
'/api/v1/sessions/{session_id}/invoke',
{
params: {
query: { queue_id: $queueId.get(), all: true },
path: { session_id },
},
}
);
if (error) {
if (isErrorWithStatus(error) && error.status === 403) {
return rejectWithValue({
arg,
status: response.status,
// eslint-disable-next-line @typescript-eslint/no-explicit-any
error: (error as any).body.detail,
});
}
if (isErrorWithDetail(error) && response.status === 403) {
return rejectWithValue({
arg,
status: response.status,
error: error.detail,
});
}
// the enclosing `if (error)` already guarantees an error is present
return rejectWithValue({ arg, status: response.status, error });
}
});
type CancelSessionArg =
paths['/api/v1/sessions/{session_id}/invoke']['delete']['parameters']['path'];
type CancelSessionResponse =
paths['/api/v1/sessions/{session_id}/invoke']['delete']['responses']['200']['content']['application/json'];
type CancelSessionThunkConfig = {
rejectValue: {
arg: CancelSessionArg;
error: unknown;
};
};
/**
* `SessionsService.cancelSession()` thunk
*/
export const sessionCanceled = createAsyncThunk<
CancelSessionResponse,
CancelSessionArg,
CancelSessionThunkConfig
>('api/sessionCanceled', async (arg, { rejectWithValue }) => {
const { session_id } = arg;
const { DELETE } = $client.get();
const { data, error } = await DELETE('/api/v1/sessions/{session_id}/invoke', {
params: {
path: { session_id },
},
});
if (error) {
return rejectWithValue({ arg, error });
}
return data;
});
type ListSessionsArg = {
params: paths['/api/v1/sessions/']['get']['parameters'];
};
type ListSessionsResponse =
paths['/api/v1/sessions/']['get']['responses']['200']['content']['application/json'];
type ListSessionsThunkConfig = {
rejectValue: {
arg: ListSessionsArg;
error: unknown;
};
};
/**
* `SessionsService.listSessions()` thunk
*/
export const listedSessions = createAsyncThunk<
ListSessionsResponse,
ListSessionsArg,
ListSessionsThunkConfig
>('api/listSessions', async (arg, { rejectWithValue }) => {
const { params } = arg;
const { GET } = $client.get();
const { data, error } = await GET('/api/v1/sessions/', {
params,
});
if (error) {
return rejectWithValue({ arg, error });
}
return data;
});
export const isAnySessionRejected = isAnyOf(
sessionCreated.rejected,
sessionInvoked.rejected
);

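Taken together, these thunks support a create-then-invoke flow. A hedged sketch of the typical sequence follows; the module path `./session` and the loosely typed `dispatch` parameter are assumptions standing in for the app's typed dispatch, which this file does not show:

```typescript
import { sessionCreated, sessionInvoked } from './session';

// `.unwrap()` re-throws the rejectWithValue payloads that
// isAnySessionRejected routes to the toast matcher in systemSlice.
export const invokeGraph = async (
  // eslint-disable-next-line @typescript-eslint/no-explicit-any
  dispatch: (action: any) => any,
  graph: Parameters<typeof sessionCreated>[0]['graph']
) => {
  const session = await dispatch(sessionCreated({ graph })).unwrap();
  await dispatch(sessionInvoked({ session_id: session.id })).unwrap();
};
```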
View File

@@ -1,4 +1,5 @@
import { ThemeOverride, ToastProviderProps } from '@chakra-ui/react';
import { ThemeOverride } from '@chakra-ui/react';
import { InvokeAIColors } from './colors/colors';
import { accordionTheme } from './components/accordion';
import { buttonTheme } from './components/button';
@@ -148,7 +149,3 @@ export const theme: ThemeOverride = {
Tooltip: tooltipTheme,
},
};
export const TOAST_OPTIONS: ToastProviderProps = {
defaultOptions: { isClosable: true },
};

View File

@@ -1 +1 @@
__version__ = "3.2.0rc3"
__version__ = "3.1.1"

View File

@@ -127,12 +127,12 @@ nav:
- Manual Installation on Windows: 'installation/deprecated_documentation/INSTALL_WINDOWS.md'
- Installing Invoke with pip: 'installation/deprecated_documentation/INSTALL_PCP.md'
- Source Installer: 'installation/deprecated_documentation/INSTALL_SOURCE.md'
- Workflows & Nodes:
- Nodes:
- Community Nodes: 'nodes/communityNodes.md'
- Example Workflows: 'nodes/exampleWorkflows.md'
- Nodes Overview: 'nodes/overview.md'
- List of Default Nodes: 'nodes/defaultNodes.md'
- Workflow Editor Usage: 'nodes/NODES.md'
- Node Editor Usage: 'nodes/NODES.md'
- ComfyUI to InvokeAI: 'nodes/comfyToInvoke.md'
- Contributing Nodes: 'nodes/contributingNodes.md'
- Features:
@@ -140,7 +140,7 @@ nav:
- New to InvokeAI?: 'help/gettingStartedWithAI.md'
- Concepts: 'features/CONCEPTS.md'
- Configuration: 'features/CONFIGURATION.md'
- Control Adapters: 'features/CONTROLNET.md'
- ControlNet: 'features/CONTROLNET.md'
- Image-to-Image: 'features/IMG2IMG.md'
- Controlling Logging: 'features/LOGGING.md'
- Model Merging: 'features/MODEL_MERGING.md'