mirror of https://github.com/invoke-ai/InvokeAI.git
synced 2026-01-23 08:17:54 -05:00

Compare commits (15 commits): feat/contr... to v3.2.0rc3
| Author | SHA1 | Date |
|---|---|---|
| | 6261c0e709 | |
| | 81140be718 | |
| | 676ccd8ebb | |
| | a263a4f4cc | |
| | ef0754cdec | |
| | 8158124679 | |
| | 164877b610 | |
| | fc9a7320eb | |
| | 7c0a083b13 | |
| | f35dfa06bb | |
| | 407bca5063 | |
| | c8b306d9f8 | |
| | edd2c54b9e | |
| | 727cc0dafe | |
| | 4530bd46dc | |
@@ -1,13 +1,11 @@
 ---
-title: ControlNet
+title: Control Adapters
 ---
 
-# :material-loupe: ControlNet
+# :material-loupe: Control Adapters
 
+## ControlNet
 
 ControlNet is a powerful set of features developed by the open-source
 community (notably, Stanford researcher
 [**@lllyasviel**](https://github.com/lllyasviel)) that allows you to
@@ -20,7 +18,7 @@ towards generating images that better fit your desired style or
 outcome.
 
-### How it works
+#### How it works
 
 ControlNet works by analyzing an input image, pre-processing that
 image to identify relevant information that can be interpreted by each
@@ -30,7 +28,7 @@ composition, or other aspects of the image to better achieve a
 specific result.
 
-### Models
+#### Models
 
 InvokeAI provides access to a series of ControlNet models that provide
 different effects or styles in your generated images. Currently
@@ -96,6 +94,8 @@ A model that generates normal maps from input images, allowing for more realistic
 **Image Segmentation**:
 A model that divides input images into segments or regions, each of which corresponds to a different object or part of the image. (More details coming soon)
 
+**QR Code Monster**:
+A model that helps generate creative QR codes that still scan. Can also be used to create images with text, logos or shapes within them.
+
 **Openpose**:
 The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image.
@@ -120,7 +120,7 @@ With Pix2Pix, you can input an image into the controlnet, and then "instruct" the
 Each of these models can be adjusted and combined with other ControlNet models to achieve different results, giving you even more control over your image generation process.
 
-## Using ControlNet
+### Using ControlNet
 
 To use ControlNet, you can simply select the desired model and adjust both the ControlNet and Pre-processor settings to achieve the desired result. You can also use multiple ControlNet models at the same time, allowing you to achieve even more complex effects or styles in your generated images.
@@ -132,3 +132,31 @@ Weight - Strength of the ControlNet model applied to the generation for the section
 Start/End - 0 represents the start of the generation, 1 represents the end. The Start/End setting controls which steps during the generation process have the ControlNet applied.
 
 Additionally, each ControlNet section can be expanded in order to adjust the settings of the image pre-processor, which prepares your uploaded image before it is used when you Invoke.
 
+## IP-Adapter
+
+[IP-Adapter](https://ip-adapter.github.io) is a tool that adds image-prompt capabilities to text-to-image diffusion models. IP-Adapter works by analyzing the given image prompt to extract features, then passing those features to the UNet along with any other conditioning provided.
+
+![IP-Adapter + T2I](https://github.com/tencent-ailab/IP-Adapter/raw/main/assets/demo/ip_adpter_plus_multi.jpg)
+
+![IP-Adapter + IMG2IMG](https://github.com/tencent-ailab/IP-Adapter/blob/main/assets/demo/image-to-image.jpg)
+
+#### Installation
+
+There are several ways to install IP-Adapter models with an existing InvokeAI installation:
+
+1. Through the command line interface launched from the invoke.sh / invoke.bat scripts, option [5] to download models.
+2. Through the Model Manager UI, with models from the *Tools* section of [www.models.invoke.ai](https://www.models.invoke.ai). To do this, copy the repo ID from the desired model page and paste it into the Add Model field of the Model Manager. **Note:** both the IP-Adapter and the Image Encoder must be installed for IP-Adapter to work. For example, the [SD 1.5 IP-Adapter](https://models.invoke.ai/InvokeAI/ip_adapter_plus_sd15) and the [SD 1.5 Image Encoder](https://models.invoke.ai/InvokeAI/ip_adapter_sd_image_encoder) must both be installed to use IP-Adapter with SD 1.5-based models.
+3. **Advanced -- Not recommended:** manually download the IP-Adapter and Image Encoder files. Image Encoder folders should be placed in the `models/any/clip_vision` folder. IP-Adapter model folders should be placed in the `ip_adapter` folder of the relevant base-model folder of the InvokeAI root directory. For example, for the SDXL IP-Adapter, files should be added to the `models/sdxl/ip_adapter/` folder.
+
+#### Using IP-Adapter
+
+IP-Adapter can be used by navigating to the *Control Adapters* options and enabling IP-Adapter.
+
+IP-Adapter requires an image to be used as the Image Prompt. It can also be used in conjunction with text prompts, Image-to-Image, Inpainting, Outpainting, ControlNets and LoRAs.
+
+Each IP-Adapter has two settings:
+
+* Weight - Strength of the IP-Adapter model applied to the generation for the section, defined by Start/End.
+* Start/End - 0 represents the start of the generation, 1 represents the end. The Start/End setting controls which steps during the generation process have the IP-Adapter applied.
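The Weight and Start/End settings above apply to ControlNets and IP-Adapters alike, so they are worth pinning down. A minimal TypeScript sketch of the semantics; the type and function names are illustrative only, not InvokeAI's actual API:

```ts
// Illustrative only: how a Start/End range (0..1) maps onto concrete
// denoising steps, and how Weight scales the adapter's contribution.
interface AdapterSettings {
  weight: number; // strength of the adapter's influence on the generation
  start: number;  // 0 = beginning of generation
  end: number;    // 1 = end of generation
}

// True if the adapter is active at `step` of a `totalSteps` generation.
function adapterAppliesAt(step: number, totalSteps: number, s: AdapterSettings): boolean {
  const progress = totalSteps > 1 ? step / (totalSteps - 1) : 0;
  return progress >= s.start && progress <= s.end;
}

// Example: start=0, end=0.5 guides roughly the first half of a 30-step
// generation (composition), then drops out so details can form freely.
const settings: AdapterSettings = { weight: 0.75, start: 0, end: 0.5 };
const activeSteps = Array.from({ length: 30 }, (_, i) => i)
  .filter((step) => adapterAppliesAt(step, 30, settings));
console.log(activeSteps.length); // 15 of the 30 steps have the adapter applied
```

As the popover copy later in this diff puts it: adapters applied at the beginning of the process guide composition, and adapters applied at the end guide details.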
@@ -4,12 +4,12 @@ The workflow editor is a blank canvas allowing for the use of individual functions
 
 If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](../help/diffusion.md) Understanding how diffusion works will enable you to more easily use the Workflow Editor and build workflows to suit your needs.
 
-## UI Features
+## Features
 
 ### Linear View
 The Workflow Editor allows you to create a UI for your workflow, to make it easier to iterate on your generations.
 
-To add an input to the Linear UI, right click on the input and select "Add to Linear View".
+To add an input to the Linear UI, right click on the input label and select "Add to Linear View".
 
 The Linear UI View will also be part of the saved workflow, allowing you to share workflows and enabling others to use them, regardless of complexity.
 
@@ -25,6 +25,10 @@ Any node or input field can be renamed in the workflow editor. If the input field
 * Backspace/Delete to delete a node
 * Shift+Click to drag and select multiple nodes
 
+### Node Caching
+
+Nodes have a "Use Cache" option in their footer. This allows for performance improvements by reusing previously cached values during workflow processing.
+
 
 ## Important Concepts
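The node cache added above can be pictured as memoization keyed by a node's type and inputs; the Invocation Cache strings that appear later in this diff (cache hits, misses, max size) surface the same machinery in the UI. A hedged TypeScript sketch of the idea, not InvokeAI's actual implementation:

```ts
// Hypothetical sketch: memoize node outputs keyed by node type + inputs,
// so re-running a workflow skips nodes whose inputs have not changed.
type NodeInputs = Record<string, unknown>;

class InvocationCache<Output> {
  private store = new Map<string, Output>();

  constructor(private maxSize = 512) {}

  // A stable key: node type plus the inputs serialized with sorted keys.
  private key(nodeType: string, inputs: NodeInputs): string {
    const sorted = Object.keys(inputs)
      .sort()
      .map((k) => `${k}=${JSON.stringify(inputs[k])}`)
      .join('&');
    return `${nodeType}?${sorted}`;
  }

  run(nodeType: string, inputs: NodeInputs, execute: () => Output, useCache = true): Output {
    const k = this.key(nodeType, inputs);
    if (useCache && this.store.has(k)) {
      return this.store.get(k)!; // cache hit: the previous value is reused
    }
    const output = execute(); // cache miss: the node actually runs
    if (this.store.size >= this.maxSize) {
      // naive eviction: drop the oldest entry once the cache is full
      const oldest = this.store.keys().next().value;
      if (oldest !== undefined) this.store.delete(oldest);
    }
    this.store.set(k, output);
    return output;
  }
}
```

Disabling "Use Cache" on a node corresponds to passing `useCache = false`: the node re-executes every time, and its fresh output replaces the cached one.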
169 invokeai/frontend/web/dist/assets/App-c1f82b8c.js vendored Normal file
File diff suppressed because one or more lines are too long

169 invokeai/frontend/web/dist/assets/App-dbf8f111.js vendored
File diff suppressed because one or more lines are too long

1 invokeai/frontend/web/dist/assets/MantineProvider-f12d896d.js vendored Normal file
File diff suppressed because one or more lines are too long

280 invokeai/frontend/web/dist/assets/ThemeLocaleProvider-6a462dce.js vendored Normal file
File diff suppressed because one or more lines are too long
158 invokeai/frontend/web/dist/assets/index-375621ca.js vendored Normal file
File diff suppressed because one or more lines are too long

128 invokeai/frontend/web/dist/assets/index-f6c3f475.js vendored
File diff suppressed because one or more lines are too long

2 invokeai/frontend/web/dist/index.html vendored
@@ -12,7 +12,7 @@
       margin: 0;
     }
   </style>
-  <script type="module" crossorigin src="./assets/index-f6c3f475.js"></script>
+  <script type="module" crossorigin src="./assets/index-375621ca.js"></script>
 </head>
 
 <body dir="ltr">
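The one-line change above swaps in a new content-hashed bundle name. This is how Vite-style builds bust browser caches: new bundle contents produce a new hash, hence a new filename that browsers must fetch. A sketch of the naming scheme, illustrative rather than the actual build code:

```ts
import { createHash } from 'node:crypto';

// Content-hashed asset name, in the spirit of `index-<hash>.js` dist output:
// changing the bundle contents changes the hash, forcing a fresh download.
function hashedAssetName(base: string, contents: string): string {
  const hash = createHash('sha256').update(contents).digest('hex').slice(0, 8);
  return `${base}-${hash}.js`;
}

console.log(hashedAssetName('index', 'console.log("new build")'));
// e.g. "index-3f5a0c1d.js" (the hash depends on the contents)
```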
340 invokeai/frontend/web/dist/locales/en.json vendored
@@ -13,14 +13,15 @@
     "reset": "Reset",
     "rotateClockwise": "Rotate Clockwise",
     "rotateCounterClockwise": "Rotate Counter-Clockwise",
     "showGallery": "Show Gallery",
     "showGalleryPanel": "Show Gallery Panel",
     "showOptionsPanel": "Show Side Panel",
     "toggleAutoscroll": "Toggle autoscroll",
     "toggleLogViewer": "Toggle Log Viewer",
     "uploadImage": "Upload Image",
     "useThisParameter": "Use this parameter",
     "zoomIn": "Zoom In",
-    "zoomOut": "Zoom Out"
+    "zoomOut": "Zoom Out",
+    "loadMore": "Load More"
   },
   "boards": {
     "addBoard": "Add Board",
@@ -57,6 +58,7 @@
     "githubLabel": "Github",
     "hotkeysLabel": "Hotkeys",
+    "imagePrompt": "Image Prompt",
     "imageFailedToLoad": "Unable to Load Image",
     "img2img": "Image To Image",
     "langArabic": "العربية",
     "langBrPortuguese": "Português do Brasil",
@@ -80,6 +82,7 @@
     "load": "Load",
     "loading": "Loading",
     "loadingInvokeAI": "Loading Invoke AI",
+    "learnMore": "Learn More",
     "modelManager": "Model Manager",
     "nodeEditor": "Node Editor",
     "nodes": "Workflow Editor",
@@ -110,6 +113,7 @@
     "statusModelChanged": "Model Changed",
     "statusModelConverted": "Model Converted",
     "statusPreparing": "Preparing",
     "statusProcessing": "Processing",
     "statusProcessingCanceled": "Processing Canceled",
     "statusProcessingComplete": "Processing Complete",
     "statusRestoringFaces": "Restoring Faces",
@@ -133,6 +137,8 @@
     "bgth": "bg_th",
     "canny": "Canny",
     "cannyDescription": "Canny edge detection",
+    "colorMap": "Color",
+    "colorMapDescription": "Generates a color map from the image",
     "coarse": "Coarse",
     "contentShuffle": "Content Shuffle",
     "contentShuffleDescription": "Shuffles the content in an image",
@@ -156,6 +162,7 @@
     "hideAdvanced": "Hide Advanced",
     "highThreshold": "High Threshold",
     "imageResolution": "Image Resolution",
+    "colorMapTileSize": "Tile Size",
     "importImageFromCanvas": "Import Image From Canvas",
     "importMaskFromCanvas": "Import Mask From Canvas",
     "incompatibleBaseModel": "Incompatible base model:",
@@ -203,6 +210,81 @@
     "incompatibleModel": "Incompatible base model:",
     "noMatchingEmbedding": "No matching Embeddings"
   },
+  "queue": {
+    "queue": "Queue",
+    "queueFront": "Add to Front of Queue",
+    "queueBack": "Add to Queue",
+    "queueCountPrediction": "Add {{predicted}} to Queue",
+    "queueMaxExceeded": "Max of {{max_queue_size}} exceeded, would skip {{skip}}",
+    "queuedCount": "{{pending}} Pending",
+    "queueTotal": "{{total}} Total",
+    "queueEmpty": "Queue Empty",
+    "enqueueing": "Queueing Batch",
+    "resume": "Resume",
+    "resumeTooltip": "Resume Processor",
+    "resumeSucceeded": "Processor Resumed",
+    "resumeFailed": "Problem Resuming Processor",
+    "pause": "Pause",
+    "pauseTooltip": "Pause Processor",
+    "pauseSucceeded": "Processor Paused",
+    "pauseFailed": "Problem Pausing Processor",
+    "cancel": "Cancel",
+    "cancelTooltip": "Cancel Current Item",
+    "cancelSucceeded": "Item Canceled",
+    "cancelFailed": "Problem Canceling Item",
+    "prune": "Prune",
+    "pruneTooltip": "Prune {{item_count}} Completed Items",
+    "pruneSucceeded": "Pruned {{item_count}} Completed Items from Queue",
+    "pruneFailed": "Problem Pruning Queue",
+    "clear": "Clear",
+    "clearTooltip": "Cancel and Clear All Items",
+    "clearSucceeded": "Queue Cleared",
+    "clearFailed": "Problem Clearing Queue",
+    "cancelBatch": "Cancel Batch",
+    "cancelItem": "Cancel Item",
+    "cancelBatchSucceeded": "Batch Canceled",
+    "cancelBatchFailed": "Problem Canceling Batch",
+    "clearQueueAlertDialog": "Clearing the queue immediately cancels any processing items and clears the queue entirely.",
+    "clearQueueAlertDialog2": "Are you sure you want to clear the queue?",
+    "current": "Current",
+    "next": "Next",
+    "status": "Status",
+    "total": "Total",
+    "pending": "Pending",
+    "in_progress": "In Progress",
+    "completed": "Completed",
+    "failed": "Failed",
+    "canceled": "Canceled",
+    "completedIn": "Completed in",
+    "batch": "Batch",
+    "item": "Item",
+    "session": "Session",
+    "batchValues": "Batch Values",
+    "notReady": "Unable to Queue",
+    "batchQueued": "Batch Queued",
+    "batchQueuedDesc": "Added {{item_count}} sessions to {{direction}} of queue",
+    "front": "front",
+    "back": "back",
+    "batchFailedToQueue": "Failed to Queue Batch",
+    "graphQueued": "Graph queued",
+    "graphFailedToQueue": "Failed to queue graph"
+  },
+  "invocationCache": {
+    "invocationCache": "Invocation Cache",
+    "cacheSize": "Cache Size",
+    "maxCacheSize": "Max Cache Size",
+    "hits": "Cache Hits",
+    "misses": "Cache Misses",
+    "clear": "Clear",
+    "clearSucceeded": "Invocation Cache Cleared",
+    "clearFailed": "Problem Clearing Invocation Cache",
+    "enable": "Enable",
+    "enableSucceeded": "Invocation Cache Enabled",
+    "enableFailed": "Problem Enabling Invocation Cache",
+    "disable": "Disable",
+    "disableSucceeded": "Invocation Cache Disabled",
+    "disableFailed": "Problem Disabling Invocation Cache"
+  },
   "gallery": {
     "allImagesLoaded": "All Images Loaded",
     "assets": "Assets",
@@ -624,6 +706,8 @@
     "addNodeToolTip": "Add Node (Shift+A, Space)",
     "animatedEdges": "Animated Edges",
     "animatedEdgesHelp": "Animate selected edges and edges connected to selected nodes",
+    "boardField": "Board",
+    "boardFieldDescription": "A gallery board",
     "boolean": "Booleans",
     "booleanCollection": "Boolean Collection",
     "booleanCollectionDescription": "A collection of booleans.",
@@ -633,6 +717,7 @@
     "cannotConnectInputToInput": "Cannot connect input to input",
     "cannotConnectOutputToOutput": "Cannot connect output to output",
     "cannotConnectToSelf": "Cannot connect to self",
+    "cannotDuplicateConnection": "Cannot create duplicate connections",
     "clipField": "Clip",
     "clipFieldDescription": "Tokenizer and text_encoder submodels.",
     "collection": "Collection",
@@ -641,7 +726,8 @@
     "collectionItemDescription": "TODO",
     "colorCodeEdges": "Color-Code Edges",
     "colorCodeEdgesHelp": "Color-code edges according to their connected fields",
-    "colorCollectionDescription": "A collection of colors.",
+    "colorCollection": "A collection of colors.",
+    "colorCollectionDescription": "TODO",
     "colorField": "Color",
     "colorFieldDescription": "A RGBA color.",
     "colorPolymorphic": "Color Polymorphic",
@@ -688,7 +774,8 @@
     "imageFieldDescription": "Images may be passed between nodes.",
     "imagePolymorphic": "Image Polymorphic",
     "imagePolymorphicDescription": "A collection of images.",
-    "inputFields": "Input Feilds",
+    "inputField": "Input Field",
+    "inputFields": "Input Fields",
     "inputMayOnlyHaveOneConnection": "Input may only have one connection",
     "inputNode": "Input Node",
     "integer": "Integer",
@@ -706,6 +793,7 @@
     "latentsPolymorphicDescription": "Latents may be passed between nodes.",
     "loadingNodes": "Loading Nodes...",
     "loadWorkflow": "Load Workflow",
+    "noWorkflow": "No Workflow",
     "loRAModelField": "LoRA",
     "loRAModelFieldDescription": "TODO",
     "mainModelField": "Model",
@@ -727,14 +815,15 @@
     "noImageFoundState": "No initial image found in state",
     "noMatchingNodes": "No matching nodes",
     "noNodeSelected": "No node selected",
-    "noOpacity": "Node Opacity",
+    "nodeOpacity": "Node Opacity",
     "noOutputRecorded": "No outputs recorded",
     "noOutputSchemaName": "No output schema name found in ref object",
     "notes": "Notes",
     "notesDescription": "Add notes about your workflow",
     "oNNXModelField": "ONNX Model",
     "oNNXModelFieldDescription": "ONNX model field.",
-    "outputFields": "Output Feilds",
+    "outputField": "Output Field",
+    "outputFields": "Output Fields",
     "outputNode": "Output node",
     "outputSchemaNotFound": "Output schema not found",
     "pickOne": "Pick One",
@@ -783,6 +872,7 @@
     "unknownNode": "Unknown Node",
     "unknownTemplate": "Unknown Template",
     "unkownInvocation": "Unknown Invocation type",
     "updateNode": "Update Node",
     "updateApp": "Update App",
     "vaeField": "Vae",
     "vaeFieldDescription": "Vae submodel.",
@@ -806,7 +896,7 @@
     "zoomOutNodes": "Zoom Out"
   },
   "parameters": {
-    "aspectRatio": "Ratio",
+    "aspectRatio": "Aspect Ratio",
     "boundingBoxHeader": "Bounding Box",
     "boundingBoxHeight": "Bounding Box Height",
     "boundingBoxWidth": "Bounding Box Width",
@@ -819,6 +909,7 @@
     },
     "cfgScale": "CFG Scale",
     "clipSkip": "CLIP Skip",
+    "clipSkipWithLayerCount": "CLIP Skip {{layerCount}}",
     "closeViewer": "Close Viewer",
     "codeformerFidelity": "Fidelity",
     "coherenceMode": "Mode",
@@ -857,6 +948,7 @@
     "noInitialImageSelected": "No initial image selected",
     "noModelForControlNet": "ControlNet {{index}} has no model selected.",
     "noModelSelected": "No model selected",
+    "noPrompts": "No prompts generated",
     "noNodesInGraph": "No nodes in graph",
     "readyToInvoke": "Ready to Invoke",
     "systemBusy": "System busy",
@@ -875,7 +967,12 @@
     "perlinNoise": "Perlin Noise",
     "positivePromptPlaceholder": "Positive Prompt",
     "randomizeSeed": "Randomize Seed",
+    "manualSeed": "Manual Seed",
+    "randomSeed": "Random Seed",
     "restoreFaces": "Restore Faces",
+    "iterations": "Iterations",
+    "iterationsWithCount_one": "{{count}} Iteration",
+    "iterationsWithCount_other": "{{count}} Iterations",
     "scale": "Scale",
     "scaleBeforeProcessing": "Scale Before Processing",
     "scaledHeight": "Scaled H",
@@ -886,13 +983,17 @@
     "seamlessTiling": "Seamless Tiling",
     "seamlessXAxis": "X Axis",
     "seamlessYAxis": "Y Axis",
+    "seamlessX": "Seamless X",
+    "seamlessY": "Seamless Y",
+    "seamlessX&Y": "Seamless X & Y",
     "seamLowThreshold": "Low",
     "seed": "Seed",
     "seedWeights": "Seed Weights",
+    "imageActions": "Image Actions",
     "sendTo": "Send to",
     "sendToImg2Img": "Send to Image to Image",
     "sendToUnifiedCanvas": "Send To Unified Canvas",
-    "showOptionsPanel": "Show Options Panel",
+    "showOptionsPanel": "Show Side Panel (O or T)",
     "showPreview": "Show Preview",
     "shuffle": "Shuffle Seed",
     "steps": "Steps",
@@ -901,11 +1002,13 @@
     "tileSize": "Tile Size",
     "toggleLoopback": "Toggle Loopback",
     "type": "Type",
-    "upscale": "Upscale",
+    "upscale": "Upscale (Shift + U)",
     "upscaleImage": "Upscale Image",
     "upscaling": "Upscaling",
     "useAll": "Use All",
     "useCpuNoise": "Use CPU Noise",
+    "cpuNoise": "CPU Noise",
+    "gpuNoise": "GPU Noise",
     "useInitImg": "Use Initial Image",
     "usePrompt": "Use Prompt",
     "useSeed": "Use Seed",
@@ -914,11 +1017,20 @@
     "vSymmetryStep": "V Symmetry Step",
     "width": "Width"
   },
-  "prompt": {
+  "dynamicPrompts": {
     "combinatorial": "Combinatorial Generation",
     "dynamicPrompts": "Dynamic Prompts",
     "enableDynamicPrompts": "Enable Dynamic Prompts",
-    "maxPrompts": "Max Prompts"
+    "maxPrompts": "Max Prompts",
+    "promptsWithCount_one": "{{count}} Prompt",
+    "promptsWithCount_other": "{{count}} Prompts",
+    "seedBehaviour": {
+      "label": "Seed Behaviour",
+      "perIterationLabel": "Seed per Iteration",
+      "perIterationDesc": "Use a different seed for each iteration",
+      "perPromptLabel": "Seed per Image",
+      "perPromptDesc": "Use a different seed for each image"
+    }
   },
   "sdxl": {
     "cfgScale": "CFG Scale",
@@ -1066,6 +1178,210 @@
       "variations": "Try a variation with a value between 0.1 and 1.0 to change the result for a given seed. Interesting variations of the seed are between 0.1 and 0.3."
     }
   },
+  "popovers": {
+    "clipSkip": {
+      "heading": "CLIP Skip",
+      "paragraphs": [
+        "Choose how many layers of the CLIP model to skip.",
+        "Some models work better with certain CLIP Skip settings.",
+        "A higher value typically results in a less detailed image."
+      ]
+    },
+    "paramNegativeConditioning": {
+      "heading": "Negative Prompt",
+      "paragraphs": [
+        "The generation process avoids the concepts in the negative prompt. Use this to exclude qualities or objects from the output.",
+        "Supports Compel syntax and embeddings."
+      ]
+    },
+    "paramPositiveConditioning": {
+      "heading": "Positive Prompt",
+      "paragraphs": [
+        "Guides the generation process. You may use any words or phrases.",
+        "Compel and Dynamic Prompts syntaxes and embeddings."
+      ]
+    },
+    "paramScheduler": {
+      "heading": "Scheduler",
+      "paragraphs": [
+        "Scheduler defines how to iteratively add noise to an image or how to update a sample based on a model's output."
+      ]
+    },
+    "compositingBlur": {
+      "heading": "Blur",
+      "paragraphs": ["The blur radius of the mask."]
+    },
+    "compositingBlurMethod": {
+      "heading": "Blur Method",
+      "paragraphs": ["The method of blur applied to the masked area."]
+    },
+    "compositingCoherencePass": {
+      "heading": "Coherence Pass",
+      "paragraphs": [
+        "A second round of denoising helps to composite the Inpainted/Outpainted image."
+      ]
+    },
+    "compositingCoherenceMode": {
+      "heading": "Mode",
+      "paragraphs": ["The mode of the Coherence Pass."]
+    },
+    "compositingCoherenceSteps": {
+      "heading": "Steps",
+      "paragraphs": [
+        "Number of denoising steps used in the Coherence Pass.",
+        "Same as the main Steps parameter."
+      ]
+    },
+    "compositingStrength": {
+      "heading": "Strength",
+      "paragraphs": [
+        "Denoising strength for the Coherence Pass.",
+        "Same as the Image to Image Denoising Strength parameter."
+      ]
+    },
+    "compositingMaskAdjustments": {
+      "heading": "Mask Adjustments",
+      "paragraphs": ["Adjust the mask."]
+    },
+    "controlNetBeginEnd": {
+      "heading": "Begin / End Step Percentage",
+      "paragraphs": [
+        "Which steps of the denoising process will have the ControlNet applied.",
+        "ControlNets applied at the beginning of the process guide composition, and ControlNets applied at the end guide details."
+      ]
+    },
+    "controlNetControlMode": {
+      "heading": "Control Mode",
+      "paragraphs": ["Lends more weight to either the prompt or ControlNet."]
+    },
+    "controlNetResizeMode": {
+      "heading": "Resize Mode",
+      "paragraphs": ["How the ControlNet image will be fit to the image output size."]
+    },
+    "controlNet": {
+      "heading": "ControlNet",
+      "paragraphs": [
+        "ControlNets provide guidance to the generation process, helping create images with controlled composition, structure, or style, depending on the model selected."
+      ]
+    },
+    "controlNetWeight": {
+      "heading": "Weight",
+      "paragraphs": ["How strongly the ControlNet will impact the generated image."]
+    },
+    "dynamicPrompts": {
+      "heading": "Dynamic Prompts",
+      "paragraphs": [
+        "Dynamic Prompts parses a single prompt into many.",
+        "The basic syntax is \"a {red|green|blue} ball\". This will produce three prompts: \"a red ball\", \"a green ball\" and \"a blue ball\".",
+        "You can use the syntax as many times as you like in a single prompt, but be sure to keep the number of prompts generated in check with the Max Prompts setting."
+      ]
+    },
+    "dynamicPromptsMaxPrompts": {
+      "heading": "Max Prompts",
+      "paragraphs": [
+        "Limits the number of prompts that can be generated by Dynamic Prompts."
+      ]
+    },
+    "dynamicPromptsSeedBehaviour": {
+      "heading": "Seed Behaviour",
+      "paragraphs": [
+        "Controls how the seed is used when generating prompts.",
+        "Per Iteration will use a unique seed for each iteration. Use this to explore prompt variations on a single seed.",
+        "For example, if you have 5 prompts, each image will use the same seed.",
+        "Per Image will use a unique seed for each image. This provides more variation."
+      ]
+    },
+    "infillMethod": {
+      "heading": "Infill Method",
+      "paragraphs": ["Method to infill the selected area."]
+    },
+    "lora": {
+      "heading": "LoRA Weight",
+      "paragraphs": [
+        "Higher LoRA weight will lead to larger impacts on the final image."
+      ]
+    },
+    "noiseUseCPU": {
+      "heading": "Use CPU Noise",
+      "paragraphs": [
+        "Controls whether noise is generated on the CPU or GPU.",
+        "With CPU Noise enabled, a particular seed will produce the same image on any machine.",
+        "There is no performance impact to enabling CPU Noise."
+      ]
+    },
+    "paramCFGScale": {
+      "heading": "CFG Scale",
+      "paragraphs": [
+        "Controls how much your prompt influences the generation process."
+      ]
+    },
+    "paramDenoisingStrength": {
+      "heading": "Denoising Strength",
+      "paragraphs": [
+        "How much noise is added to the input image.",
+        "0 will result in an identical image, while 1 will result in a completely new image."
+      ]
+    },
+    "paramIterations": {
+      "heading": "Iterations",
+      "paragraphs": [
+        "The number of images to generate.",
+        "If Dynamic Prompts is enabled, each of the prompts will be generated this many times."
+      ]
+    },
+    "paramModel": {
+      "heading": "Model",
+      "paragraphs": [
+        "Model used for the denoising steps.",
+        "Different models are typically trained to specialize in producing particular aesthetic results and content."
+      ]
+    },
+    "paramRatio": {
+      "heading": "Aspect Ratio",
+      "paragraphs": [
+        "The aspect ratio of the dimensions of the image generated.",
+        "An image size (in number of pixels) equivalent to 512x512 is recommended for SD1.5 models and a size equivalent to 1024x1024 is recommended for SDXL models."
+      ]
+    },
+    "paramSeed": {
+      "heading": "Seed",
+      "paragraphs": [
+        "Controls the starting noise used for generation.",
+        "Disable “Random Seed” to produce identical results with the same generation settings."
+      ]
+    },
+    "paramSteps": {
+      "heading": "Steps",
+      "paragraphs": [
+        "Number of steps that will be performed in each generation.",
+        "Higher step counts will typically create better images but will require more generation time."
+      ]
+    },
+    "paramVAE": {
+      "heading": "VAE",
+      "paragraphs": [
+        "Model used for translating AI output into the final image."
+      ]
+    },
+    "paramVAEPrecision": {
+      "heading": "VAE Precision",
+      "paragraphs": [
+        "The precision used during VAE encoding and decoding. FP16/half precision is more efficient, at the expense of minor image variations."
+      ]
+    },
+    "scaleBeforeProcessing": {
+      "heading": "Scale Before Processing",
+      "paragraphs": [
+        "Scales the selected area to the size best suited for the model before the image generation process."
+      ]
+    }
+  },
   "ui": {
     "hideProgressImages": "Hide Progress Images",
     "lockRatio": "Lock Ratio",
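The Dynamic Prompts popover above documents the `{a|b|c}` syntax with a worked example. A small TypeScript sketch of the expansion it describes, a simplified stand-in for the real Dynamic Prompts parser, including the Max Prompts cap:

```ts
// Simplified sketch of Dynamic Prompts expansion: each "{a|b|c}" group is
// replaced by every one of its options, combinatorially, capped by maxPrompts.
function expandDynamicPrompt(prompt: string, maxPrompts = 100): string[] {
  const match = prompt.match(/\{([^{}]+)\}/); // first {…|…} group, if any
  if (!match) {
    return [prompt]; // no groups left: the prompt is fully expanded
  }
  const options = match[1].split('|');
  const results: string[] = [];
  for (const option of options) {
    const expanded = prompt.replace(match[0], option);
    for (const p of expandDynamicPrompt(expanded, maxPrompts)) {
      if (results.length >= maxPrompts) return results; // Max Prompts cap
      results.push(p);
    }
  }
  return results;
}

// The documented example: one prompt becomes three.
console.log(expandDynamicPrompt('a {red|green|blue} ball'));
// -> ['a red ball', 'a green ball', 'a blue ball']
```

Two groups in one prompt multiply: `a {red|blue} {ball|cube}` yields four prompts, which is why the popover advises keeping the count in check with Max Prompts.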
@@ -1128,6 +1444,8 @@
     "showCanvasDebugInfo": "Show Additional Canvas Info",
     "showGrid": "Show Grid",
     "showHide": "Show/Hide",
+    "showResultsOn": "Show Results (On)",
+    "showResultsOff": "Show Results (Off)",
     "showIntermediates": "Show Intermediates",
     "snapToGrid": "Snap to Grid",
     "undo": "Undo"
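A note on the locale keys added above: the `{{…}}` placeholders are i18next interpolation variables, and the `_one`/`_other` suffixes are i18next plural forms, selected automatically from a `count` option. A minimal, self-contained TypeScript sketch of how such keys resolve:

```ts
import i18next from 'i18next';

i18next
  .init({
    lng: 'en',
    resources: {
      en: {
        translation: {
          // plural forms: i18next picks `_one` or `_other` based on `count`
          iterationsWithCount_one: '{{count}} Iteration',
          iterationsWithCount_other: '{{count}} Iterations',
          // plain interpolation
          batchQueuedDesc: 'Added {{item_count}} sessions to {{direction}} of queue',
        },
      },
    },
  })
  .then(() => {
    console.log(i18next.t('iterationsWithCount', { count: 1 })); // "1 Iteration"
    console.log(i18next.t('iterationsWithCount', { count: 8 })); // "8 Iterations"
    console.log(i18next.t('batchQueuedDesc', { item_count: 4, direction: 'front' }));
    // "Added 4 sessions to front of queue"
  });
```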
@@ -1,6 +1,8 @@
 import { Flex, Grid } from '@chakra-ui/react';
+import { useStore } from '@nanostores/react';
 import { useLogger } from 'app/logging/useLogger';
 import { appStarted } from 'app/store/middleware/listenerMiddleware/listeners/appStarted';
+import { $headerComponent } from 'app/store/nanostores/headerComponent';
 import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
 import { PartialAppConfig } from 'app/types/invokeai';
 import ImageUploader from 'common/components/ImageUploader';
@@ -14,12 +16,10 @@ import i18n from 'i18n';
 import { size } from 'lodash-es';
 import { memo, useCallback, useEffect } from 'react';
 import { ErrorBoundary } from 'react-error-boundary';
-import { usePreselectedImage } from '../../features/parameters/hooks/usePreselectedImage';
 import AppErrorBoundaryFallback from './AppErrorBoundaryFallback';
 import GlobalHotkeys from './GlobalHotkeys';
+import PreselectedImage from './PreselectedImage';
 import Toaster from './Toaster';
-import { useStore } from '@nanostores/react';
-import { $headerComponent } from 'app/store/nanostores/headerComponent';
 
 const DEFAULT_CONFIG = {};
 
@@ -36,8 +36,7 @@ const App = ({ config = DEFAULT_CONFIG, selectedImage }: Props) => {
 
   const logger = useLogger('system');
   const dispatch = useAppDispatch();
-  const { handleSendToCanvas, handleSendToImg2Img, handleUseAllMetadata } =
-    usePreselectedImage(selectedImage?.imageName);
 
   const handleReset = useCallback(() => {
     localStorage.clear();
     location.reload();
@@ -59,24 +58,6 @@ const App = ({ config = DEFAULT_CONFIG, selectedImage }: Props) => {
     dispatch(appStarted());
   }, [dispatch]);
 
-  useEffect(() => {
-    if (selectedImage && selectedImage.action === 'sendToCanvas') {
-      handleSendToCanvas();
-    }
-  }, [selectedImage, handleSendToCanvas]);
-
-  useEffect(() => {
-    if (selectedImage && selectedImage.action === 'sendToImg2Img') {
-      handleSendToImg2Img();
-    }
-  }, [selectedImage, handleSendToImg2Img]);
-
-  useEffect(() => {
-    if (selectedImage && selectedImage.action === 'useAllParameters') {
-      handleUseAllMetadata();
-    }
-  }, [selectedImage, handleUseAllMetadata]);
-
   const headerComponent = useStore($headerComponent);
 
   return (
@@ -112,6 +93,7 @@ const App = ({ config = DEFAULT_CONFIG, selectedImage }: Props) => {
         <ChangeBoardModal />
         <Toaster />
         <GlobalHotkeys />
+        <PreselectedImage selectedImage={selectedImage} />
       </ErrorBoundary>
   );
 };
@@ -0,0 +1,16 @@
+import { usePreselectedImage } from 'features/parameters/hooks/usePreselectedImage';
+import { memo } from 'react';
+
+type Props = {
+  selectedImage?: {
+    imageName: string;
+    action: 'sendToImg2Img' | 'sendToCanvas' | 'useAllParameters';
+  };
+};
+
+const PreselectedImage = (props: Props) => {
+  usePreselectedImage(props.selectedImage);
+  return null;
+};
+
+export default memo(PreselectedImage);
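The new component above renders nothing: it exists so the `usePreselectedImage` effects (moved into the hook in the next file) run inside App's `ErrorBoundary` without cluttering `App` itself. A hypothetical usage sketch; the image name here is invented purely for illustration:

```tsx
import PreselectedImage from './PreselectedImage';

// Hypothetical parent: mounting the render-null component lets the hook's
// effects dispatch the requested action ('sendToCanvas') once the image loads.
const Example = () => (
  <PreselectedImage
    selectedImage={{ imageName: 'example-image.png', action: 'sendToCanvas' }}
  />
);

export default Example;
```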
@@ -1,7 +1,7 @@
 import { skipToken } from '@reduxjs/toolkit/dist/query';
 import { CoreMetadata } from 'features/nodes/types/types';
 import { t } from 'i18next';
-import { useCallback } from 'react';
+import { useCallback, useEffect } from 'react';
 import { useAppToaster } from '../../../app/components/Toaster';
 import { useAppDispatch } from '../../../app/store/storeHooks';
 import {
@@ -13,18 +13,21 @@ import { setActiveTab } from '../../ui/store/uiSlice';
 import { initialImageSelected } from '../store/actions';
 import { useRecallParameters } from './useRecallParameters';
 
-export const usePreselectedImage = (imageName?: string) => {
+export const usePreselectedImage = (selectedImage?: {
+  imageName: string;
+  action: 'sendToImg2Img' | 'sendToCanvas' | 'useAllParameters';
+}) => {
   const dispatch = useAppDispatch();
 
   const { recallAllParameters } = useRecallParameters();
   const toaster = useAppToaster();
 
   const { currentData: selectedImageDto } = useGetImageDTOQuery(
-    imageName ?? skipToken
+    selectedImage?.imageName ?? skipToken
   );
 
   const { currentData: selectedImageMetadata } = useGetImageMetadataQuery(
-    imageName ?? skipToken
+    selectedImage?.imageName ?? skipToken
   );
 
   const handleSendToCanvas = useCallback(() => {
@@ -54,5 +57,23 @@ export const usePreselectedImage = (imageName?: string) => {
   // eslint-disable-next-line react-hooks/exhaustive-deps
   }, [selectedImageMetadata]);
 
+  useEffect(() => {
+    if (selectedImage && selectedImage.action === 'sendToCanvas') {
+      handleSendToCanvas();
+    }
+  }, [selectedImage, handleSendToCanvas]);
+
+  useEffect(() => {
+    if (selectedImage && selectedImage.action === 'sendToImg2Img') {
+      handleSendToImg2Img();
+    }
+  }, [selectedImage, handleSendToImg2Img]);
+
+  useEffect(() => {
+    if (selectedImage && selectedImage.action === 'useAllParameters') {
+      handleUseAllMetadata();
+    }
+  }, [selectedImage, handleUseAllMetadata]);
+
   return { handleSendToCanvas, handleSendToImg2Img, handleUseAllMetadata };
 };
@@ -1 +1 @@
-__version__ = "3.1.1"
+__version__ = "3.2.0rc3"
@@ -127,12 +127,12 @@ nav:
   - Manual Installation on Windows: 'installation/deprecated_documentation/INSTALL_WINDOWS.md'
   - Installing Invoke with pip: 'installation/deprecated_documentation/INSTALL_PCP.md'
   - Source Installer: 'installation/deprecated_documentation/INSTALL_SOURCE.md'
-  - Nodes:
+  - Workflows & Nodes:
     - Community Nodes: 'nodes/communityNodes.md'
     - Example Workflows: 'nodes/exampleWorkflows.md'
     - Nodes Overview: 'nodes/overview.md'
     - List of Default Nodes: 'nodes/defaultNodes.md'
-    - Node Editor Usage: 'nodes/NODES.md'
+    - Workflow Editor Usage: 'nodes/NODES.md'
     - ComfyUI to InvokeAI: 'nodes/comfyToInvoke.md'
     - Contributing Nodes: 'nodes/contributingNodes.md'
   - Features:
@@ -140,7 +140,7 @@ nav:
   - New to InvokeAI?: 'help/gettingStartedWithAI.md'
   - Concepts: 'features/CONCEPTS.md'
   - Configuration: 'features/CONFIGURATION.md'
-  - ControlNet: 'features/CONTROLNET.md'
+  - Control Adapters: 'features/CONTROLNET.md'
   - Image-to-Image: 'features/IMG2IMG.md'
   - Controlling Logging: 'features/LOGGING.md'
   - Model Merging: 'features/MODEL_MERGING.md'