Compare commits

...

119 Commits

Author SHA1 Message Date
psychedelicious
13703d8f55 chore: bump version to v5.4.3rc2 2024-12-02 15:02:30 -08:00
psychedelicious
60d838d0a5 chore(ui): update whats new copy 2024-12-02 15:02:30 -08:00
Riccardo Giovanetti
2a157a44bf translationBot(ui): update translation (Italian)
Currently translated at 99.3% (1633 of 1643 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-12-02 14:52:05 -08:00
James Reynolds
d61b5833c2 Fix documentation broken links and remove whitespace at end of lines 2024-12-02 14:49:53 -08:00
Jonathan
c094838c6a Update model_util.py 2024-12-02 14:35:02 -08:00
Hosted Weblate
2d334c8dd8 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-12-02 14:05:51 -08:00
Mary Hipp
a6be26e174 fix(worker): only apply processor cancel logic if cancel event is for current queue item 2024-12-02 14:03:05 -08:00
psychedelicious
f8c7adddd0 feat(ui): add vietnamese to language picker
Closes #7384
2024-12-02 08:12:14 -05:00
psychedelicious
17da1d92e9 fix(ui): remove "adding to" text on Invoke tooltip on Workflows/Upscaling tabs
The "adding to" text indicates if images are going to the gallery or staging area. This info is relevant only to the canvas tab, but was displayed on Upscaling and Workflows tabs. Removed it from those tabs.
2024-12-02 08:08:16 -05:00
psychedelicious
1cc57a4854 chore(ui): lint 2024-12-02 07:59:12 -05:00
psychedelicious
3993fae331 fix(ui): unable to invoke w/ empty inpaint mask or raster layer
Removed the empty state checks for these layer types - it's always OK to invoke when they are empty.
2024-12-02 07:59:12 -05:00
psychedelicious
1446526d55 tidy(ui): translation keys for canvas layer warnings 2024-12-02 07:59:12 -05:00
psychedelicious
62c024e725 feat(ui): add gallery image ctx menu items to create ref image from image
Appears these actions disappeared at some point. Restoring them.
2024-12-02 07:52:58 -05:00
psychedelicious
1e92bb4e94 fix(ui): ref image defaults to prev ref image's image selection
A redux selector is used to get the "default" IP Adapter. The selector uses the model list query result to select an IP Adapter model to be preset by default.

The selector is memoized, so if we mutate the returned default IP Adapter state, it mutates the result of the selector for all consumers.

For example, the `image` property of the default IP Adapter selector result is `null`. When we set the `image` property of the selector result while creating an IP Adapter, this does not trigger the selector to recompute its result. We end up setting the image for the selector result directly, and all other consumers now have that same image set.

Solution - we need to clone the selector result everywhere it is used. This was missed in a few spots, causing the issue.
2024-12-02 07:48:39 -05:00
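A minimal Python analogue of the memoized-selector pitfall described in the commit above (the actual fix lives in the TypeScript UI code; the function and field names below are illustrative only): a cached factory returns a shared mutable object, so one caller's mutation silently becomes everyone's default unless the result is cloned first.

```python
from functools import lru_cache
import copy

@lru_cache(maxsize=1)
def get_default_ip_adapter() -> dict:
    # The cached "default" state, analogous to the memoized selector result.
    return {"model": "some-ip-adapter", "image": None}

def create_ip_adapter_buggy(image_name: str) -> dict:
    adapter = get_default_ip_adapter()
    adapter["image"] = image_name  # mutates the cached object for every consumer
    return adapter

def create_ip_adapter_fixed(image_name: str) -> dict:
    adapter = copy.deepcopy(get_default_ip_adapter())  # clone before mutating
    adapter["image"] = image_name
    return adapter

create_ip_adapter_buggy("cat.png")
assert get_default_ip_adapter()["image"] == "cat.png"  # the shared default is now polluted

get_default_ip_adapter.cache_clear()
create_ip_adapter_fixed("dog.png")
assert get_default_ip_adapter()["image"] is None        # the default stays clean
```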
psychedelicious
db6398fdf6 feat(ui): less confusing empty state for rg ref images
It was easy to misunderstand the empty state for a regional guidance reference image. There was no label, so it seemed like it was the whole region that was empty.

This small change adds the "Reference Image" heading to the empty state, so it's clear that the empty state messaging refers to this reference image, not the whole regional guidance layer.
2024-12-02 07:46:10 -05:00
Riccardo Giovanetti
ebd73a2ac2 translationBot(ui): update translation (Italian)
Currently translated at 98.7% (1622 of 1643 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-12-02 02:13:51 -08:00
Hosted Weblate
8ee95cab00 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-12-02 02:13:51 -08:00
Linos
d1184201a8 translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (1643 of 1643 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 100.0% (1638 of 1638 strings)

Co-authored-by: Linos <linos.coding@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
2024-12-02 02:13:51 -08:00
Nik Nikovsky
5887891654 translationBot(ui): update translation (Polish)
Currently translated at 4.9% (81 of 1638 strings)

Co-authored-by: Nik Nikovsky <zejdzztegomaila@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pl/
Translation: InvokeAI/Web UI
2024-12-02 02:13:51 -08:00
Riku
765ca4e004 translationBot(ui): update translation (German)
Currently translated at 69.7% (1142 of 1638 strings)

Co-authored-by: Riku <riku.block@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-12-02 02:13:51 -08:00
Riku
159b00a490 fix(app): adjust session queue api type 2024-12-01 20:06:05 -08:00
Riku
3fbf6f2d2a chore(ui): update typegen schema 2024-12-01 19:56:09 -08:00
Riku
931fca7cd1 fix(ui): call cancel instead of clear queue 2024-12-01 19:53:12 -08:00
Riku
db84a3a5d4 refactor(ui): move clear queue hook to separate file 2024-12-01 19:42:25 -08:00
psychedelicious
ca8313e805 feat(ui): add new layer from image menu items for staging area
The layers are disabled when created so as to not interfere with the canvas state.
2024-12-01 19:37:49 -08:00
psychedelicious
df849035ee feat(ui): allow setting isEnabled, isLocked and name in createNewCanvasEntityFromImage util 2024-12-01 19:37:49 -08:00
psychedelicious
8d97fe69ca feat(ui): use imageDTOToFile in staging area save to gallery button 2024-12-01 19:37:49 -08:00
psychedelicious
9044e53a9b feat(ui): add imageDTOToFile util 2024-12-01 19:37:49 -08:00
Jonathan
6012b0f912 Update flux_text_encoder.py
Updated version number for FLUX Text Encoding.
2024-11-30 08:29:21 -05:00
Jonathan
bb0ed5dc8a Update flux_denoise.py
Updated node version for FLUX Denoise.
2024-11-30 08:29:21 -05:00
Ryan Dick
021552fd81 Avoid unnecessary dtype conversions with rope encodings. 2024-11-29 12:32:50 -05:00
Ryan Dick
be73dbba92 Use view() instead of rearrange() for better performance. 2024-11-29 12:32:50 -05:00
Ryan Dick
db9c0cad7c Replace custom RMSNorm implementation with torch.nn.functional.rms_norm(...) for improved speed. 2024-11-29 12:32:50 -05:00
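The commit above swaps a hand-rolled RMSNorm for the fused `torch.nn.functional.rms_norm(...)`. A quick equivalence check, assuming a PyTorch version where the fused op is available (shapes here are arbitrary):

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 8, 64)
weight = torch.randn(64)
eps = 1e-6

# Hand-rolled RMSNorm: x / sqrt(mean(x^2) + eps) * weight
manual = x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps) * weight
# Fused implementation
fused = F.rms_norm(x, normalized_shape=(64,), weight=weight, eps=eps)
assert torch.allclose(manual, fused, atol=1e-5)
```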
Ryan Dick
54b7f9a063 FLUX Regional Prompting (#7388)
## Summary

This PR adds support for regional prompting with FLUX.

### Example 1
Global prompt: `An architecture rendering of the reception area of a
corporate office with modern decor.`
<img width="1386" alt="image"
src="https://github.com/user-attachments/assets/c8169bdb-49a9-44bc-bd9e-58d98e09094b">

![image](https://github.com/user-attachments/assets/4a426be9-9d7a-4527-b27c-2d2514ee73fe)

## QA Instructions

- [x] Test that there is no slowdown in the base case with a single
global prompt.
- [x] Test image fully covered by regional masks.
- [x] Test image covered by region masks with small gaps.
- [x] Test region masks with large unmasked ‘background’ regions
- [x] Test region masks with significant overlap
- [x] Test multiple global prompts.
- [x] Test no global prompt.
- [x] Test regional negative prompts (It runs... but results are not
great. Needs more tuning to be useful.)
- Test compatibility with:
    - [x] ControlNet
    - [x] LoRA
    - [x] IP-Adapter

## Remaining TODO

- [x] Disable the following UI features for FLUX prompt regions:
negative prompts, reference images, auto-negative.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2024-11-29 08:56:42 -05:00
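For readers unfamiliar with how regional prompting constrains attention, the sketch below shows one plausible way to build a joint text+image attention mask over the concatenated `[txt, img]` token sequence, following the behaviour described in this PR and the related commits further down (restricted within-region attention, background pixels attending to themselves and to the entire text embedding). It is a simplified illustration, not the repository's `RegionalPromptingExtension`; all names are made up.

```python
import torch

def build_regional_attn_mask(
    region_masks: list[torch.Tensor],  # one (img_seq_len,) bool mask per regional prompt
    txt_seq_lens: list[int],           # number of text tokens per regional prompt
) -> torch.Tensor:
    """Boolean attention mask over the concatenated [txt, img] token sequence."""
    txt_seq_len = sum(txt_seq_lens)
    img_seq_len = region_masks[0].shape[0]
    n = txt_seq_len + img_seq_len
    mask = torch.zeros(n, n, dtype=torch.bool)

    txt_start = 0
    for region_mask, t_len in zip(region_masks, txt_seq_lens):
        txt_sl = slice(txt_start, txt_start + t_len)
        img_idx = txt_seq_len + region_mask.nonzero(as_tuple=True)[0]

        mask[txt_sl, txt_sl] = True                              # prompt attends to itself
        mask[txt_sl, img_idx] = True                             # prompt -> its region
        mask[img_idx, txt_sl] = True                             # region -> its prompt
        mask[img_idx.unsqueeze(1), img_idx.unsqueeze(0)] = True  # within-region image self-attention
        txt_start += t_len

    # Background pixels (covered by no region) attend to themselves and to the
    # entire text embedding, matching the behaviour noted in the commits below.
    bg_idx = txt_seq_len + (~torch.stack(region_masks).any(dim=0)).nonzero(as_tuple=True)[0]
    mask[bg_idx.unsqueeze(1), bg_idx.unsqueeze(0)] = True
    mask[bg_idx, :txt_seq_len] = True
    return mask
```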
psychedelicious
7d488a5352 feat(ui): add delete button to regional ref image empty state 2024-11-29 15:51:24 +10:00
psychedelicious
4d7667f63d fix(ui): add missing translations 2024-11-29 15:43:49 +10:00
psychedelicious
08704ee8ec feat(ui): use canvas layer validators in control/ip adapter graph builders 2024-11-29 15:32:48 +10:00
psychedelicious
5910892c33 Merge remote-tracking branch 'origin/main' into ryan/flux-regional-prompting 2024-11-29 15:19:39 +10:00
psychedelicious
46a09d9e90 feat(ui): format warnings tooltip 2024-11-29 13:32:51 +10:00
psychedelicious
df0c7d73f3 feat(ui): use regional guidance validation utils in graph builders 2024-11-29 13:26:09 +10:00
psychedelicious
3905c97e32 feat(ui): return translation keys from validation utils instead of translated strings 2024-11-29 13:25:09 +10:00
psychedelicious
0be796a808 feat(ui): use layer validation utils in invoke readiness utils 2024-11-29 13:14:26 +10:00
psychedelicious
7dd33b0f39 feat(ui): add indicator to canvas layer headers, displaying validation warnings
If there are any issues with the layer, the icon is displayed. If the layer is disabled, the icon is greyed out but still visible.
2024-11-29 13:13:47 +10:00
psychedelicious
484aaf1595 feat(ui): add canvas layer validation utils
These helpers consolidate layer validation checks. For example, checking that the layer has content drawn, is compatible with the selected main model, has valid reference images, etc.
2024-11-29 13:12:32 +10:00
psychedelicious
c276b60af9 tidy(ui): use object for addRegions graph builder util arg 2024-11-29 08:49:41 +10:00
Ryan Dick
5d8dd6e26e Fix FLUX regional negative prompts. 2024-11-28 18:49:29 +00:00
Emmanuel Ferdman
5bca68d873 docs: update code of conduct reference
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2024-11-27 17:38:33 -08:00
Ryan Dick
64364e7911 Short-circuit if there are no region masks in FLUX and don't apply attention masking. 2024-11-27 22:40:10 +00:00
Ryan Dick
6565cea039 Comment unused _prepare_unrestricted_attn_mask(...) for future reference. 2024-11-27 22:16:44 +00:00
Ryan Dick
3ebd8d6c07 Delete outdated TODO comment. 2024-11-27 22:13:25 +00:00
Ryan Dick
e970185161 Tweak flux regional prompting attention scheme based on latest experimentation. 2024-11-27 22:13:07 +00:00
Ryan Dick
fa5653cdf7 Remove unused 'denoise' param to addRegions(). 2024-11-27 17:08:42 +00:00
Ryan Dick
9a7b000995 Update frontend to support regional prompts with FLUX in the canvas. 2024-11-27 17:04:43 +00:00
Ryan Dick
3a27242838 Bump transformers. The main motivation for this bump is to ingest a fix for DepthAnything postprocessing artifacts. 2024-11-27 07:46:16 -08:00
Ryan Dick
8cfb032051 Add utility ImagePanelLayoutInvocation for working with In-Context LoRA workflows. 2024-11-26 20:58:31 -08:00
Ryan Dick
06a9d4e2b2 Use a Textarea component for the FluxTextEncoderInvocation prompt field. 2024-11-26 20:58:31 -08:00
Brandon Rising
ed46acee79 fix: Fail scan on InvalidMagicError in picklescan, update default for read_checkpoint_meta to scan unless explicitly told not to 2024-11-26 16:17:12 -05:00
Ryan Dick
b54463d294 Allow regional prompting background regions to attend to themselves and to the entire txt embedding. 2024-11-26 17:57:31 +00:00
Ryan Dick
faee79dc95 Distinguish between restricted and unrestricted attn masks in FLUX regional prompting. 2024-11-26 16:55:52 +00:00
Mary Hipp
965cd76e33 lint fix 2024-11-26 11:25:53 -05:00
Mary Hipp
e5e8cbf34c shorten reference image mode descriptions; 2024-11-26 11:25:53 -05:00
Mary Hipp
3412a52594 (ui): updates various informational tooltips, adds descriptions to IP adapter method options 2024-11-26 11:25:53 -05:00
Ryan Dick
e01f66b026 Apply regional attention masks in the single stream blocks in addition to the double stream blocks. 2024-11-25 22:40:08 +00:00
Ryan Dick
53abdde242 Update Flux RegionalPromptingExtension to prepare both a mask with restricted image self-attention and a mask with unrestricted image self attention. 2024-11-25 22:04:23 +00:00
Ryan Dick
94c088300f Be smarter about selecting the global CLIP embedding for FLUX regional prompting. 2024-11-25 20:15:04 +00:00
Ryan Dick
3741a6f5e0 Fix device handling for regional masks and apply the attention mask in the FLUX double stream block. 2024-11-25 16:02:03 +00:00
Kent Keirsey
059336258f Create SECURITY.md 2024-11-25 04:10:03 -08:00
Ryan Dick
2c23b8414c Use a single global CLIP embedding for FLUX regional guidance. 2024-11-22 23:01:43 +00:00
Mary Hipp
271cc52c80 fix(ui): use token for download if its in store 2024-11-22 12:08:05 -05:00
Ryan Dick
20356c0746 Fixup the logic for preparing FLUX regional prompt attention masks. 2024-11-21 22:46:25 +00:00
psychedelicious
e44458609f chore: bump version to v5.4.3rc1 2024-11-21 10:32:43 -08:00
psychedelicious
69d86a7696 feat(ui): address feedback 2024-11-21 09:54:35 -08:00
Hippalectryon
56db1a9292 Use proxyrect and setEntityPosition to sync transformer position 2024-11-21 09:54:35 -08:00
Hippalectryon
cf50e5eeee Make sure the canvas is focused 2024-11-21 09:54:35 -08:00
Hippalectryon
c9c07968d2 lint 2024-11-21 09:54:35 -08:00
Hippalectryon
97d0757176 use $isInteractable instead of $isDisabled 2024-11-21 09:54:35 -08:00
Hippalectryon
0f51b677a9 refactor 2024-11-21 09:54:35 -08:00
Hippalectryon
56ca94c3a9 Don't move if the layer is disabled
Lint
2024-11-21 09:54:35 -08:00
Hippalectryon
28d169f859 Allow moving layers using the keyboard 2024-11-21 09:54:35 -08:00
psychedelicious
92f71d99ee tweak(ui): use X icon for rg ref image delete button 2024-11-21 08:50:39 -08:00
psychedelicious
0764c02b1d tweak(ui): code style 2024-11-21 08:50:39 -08:00
psychedelicious
081c7569fe feat(ui): add global ref image empty state 2024-11-21 08:50:39 -08:00
psychedelicious
20f6532ee8 feat(ui): add empty state for regional guidance ref image 2024-11-21 08:50:39 -08:00
Mary Hipp
b9e8910478 feat(ui): add actions for video modal clicks 2024-11-21 11:15:55 -05:00
Mary Hipp
ded8391e3c use nanostore for schema parsed instead 2024-11-20 20:13:31 -05:00
Mary Hipp
e9dd2c396a limit to one hook 2024-11-20 20:13:31 -05:00
Mary Hipp
0d86de0cb5 fix(ui): make sure schema has loaded before trying to load any workflows 2024-11-20 20:13:31 -05:00
Ryan Dick
bad1149504 WIP - add rough logic for preparing the FLUX regional prompting attention mask. 2024-11-20 22:29:36 +00:00
Ryan Dick
fda7aaa7ca Pass RegionalPromptingExtension down to the CustomDoubleStreamBlockProcessor in FLUX. 2024-11-20 19:48:04 +00:00
Ryan Dick
85c616fa34 WIP - Pass prompt masks to FLUX model during denoising. 2024-11-20 18:51:43 +00:00
psychedelicious
549f4e9794 feat(ui): set default infill method to lama 2024-11-20 11:19:17 -05:00
psychedelicious
ef8ededd2f fix(ui): disable width and height output on image batch output
There's a technical challenge with outputting these values directly. `ImageField` does not store them, so the batch's `ImageField` collection does not have width and height for each image.

In order to set up the batch and pass along width and height for each image, we'd need to make a network request for each image when the user clicks Invoke. It would often be cached, but this will eventually create a scaling issue and poor user experience.

As a very simple workaround, users can pass the batch image output into an `Image Primitive` node to access the width and height.

This change is implemented by adding some simple special handling when parsing the output fields for the `image_batch` node.

I'll keep this situation in mind when extending the batching system to other field types.
2024-11-20 11:16:54 -05:00
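A simplified stand-in for the constraint described above (not the actual `ImageField` model; it is shown here only to make the missing-dimensions point concrete):

```python
from dataclasses import dataclass

@dataclass
class ImageField:
    image_name: str  # only the image identity is stored; no width/height

def build_batch(image_names: list[str]) -> list[ImageField]:
    # Width and height are unknown here; resolving them would require a
    # metadata request per image at Invoke time, which is the scaling concern
    # described in the commit message.
    return [ImageField(image_name=n) for n in image_names]
```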
Mary Hipp
1948ffe106 make sure Soft Edge Detection has preprocessor applied 2024-11-20 08:46:02 -05:00
psychedelicious
c70f4404c4 fix(ui): special node icon tooltip 2024-11-19 14:29:09 -08:00
psychedelicious
b157ae928c chore(ui): update what's new copy 2024-11-19 14:29:09 -08:00
psychedelicious
7a0871992d chore: bump version to v5.4.2 2024-11-19 14:29:09 -08:00
Hosted Weblate
b38e2e14f4 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-11-19 14:12:00 -08:00
psychedelicious
7c0e70ec84 tweak(ui): "Watch on YouTube" -> "Watch" 2024-11-19 14:02:11 -08:00
psychedelicious
a89ae9d2bf feat(ui): add links to studio sessions/discord 2024-11-19 14:02:11 -08:00
psychedelicious
ad1fcb3f07 chore(ui): bump @invoke-ai/ui-library
Brings in a fix for `ExternalLink`
2024-11-19 14:02:11 -08:00
psychedelicious
87d74b910b feat(ui): support videos modal 2024-11-19 14:02:11 -08:00
psychedelicious
7ad1c297a4 feat(ui): add actions for reset canvas layers / generation settings to session menus 2024-11-19 13:55:16 -08:00
psychedelicious
fbc629faa6 feat(ui): change reset canvas button to new session menu 2024-11-19 13:55:16 -08:00
psychedelicious
7baa6b3c09 feat(ui): split up new from image into submenus
- `New Canvas from Image` -> `As Raster Layer`, `As Raster Layer (Resize)`, `As Control Layer`, `As Control Layer (Resize)`
- `New Layer from Image` -> (each layer type)
2024-11-19 10:34:00 -08:00
psychedelicious
53d482bade feat(ui): add image ctx menu new canvas without resize option 2024-11-19 10:34:00 -08:00
psychedelicious
5aca04b51b feat(ui): change reset canvas icon to "empty" 2024-11-19 09:56:25 -08:00
psychedelicious
ea8787c8ff feat(ui): update invoke button tooltip for batching
- Split up logic to determine reason why the user cannot invoke for each tab.
- Fix issue where the workflows tab would show reasons related to canvas/upscale tab. The tooltip now only shows information relevant to the current tab.
- Add calculation for batch size to the queue count prediction.
- Use a constant for the enqueue mutation's fixed cache key, instead of a string. Just some typo protection.
2024-11-19 09:53:59 -08:00
psychedelicious
cead2c4445 feat(ui): split up selector utils for useIsReadyToEnqueue 2024-11-19 09:53:59 -08:00
Mary Hipp
f76ac1808c fix(ui): simplify logic for non-local invocation progress alerts 2024-11-19 12:40:40 -05:00
psychedelicious
f01210861b chore: ruff 2024-11-19 07:02:37 -08:00
psychedelicious
f757f23ef0 chore(ui): typegen 2024-11-19 07:02:37 -08:00
psychedelicious
872a6ef209 tidy(nodes): extract slerp from lblend to util fn 2024-11-19 07:02:37 -08:00
psychedelicious
4267e5ffc4 tidy(nodes): bring masked blend latents masking logic into invoke core 2024-11-19 07:02:37 -08:00
Brandon Rising
a69c5ff9ef Add copyright notice for CIELab_to_UPLab.icc 2024-11-19 07:02:37 -08:00
Brandon Rising
3ebd8d7d1b Fix .icc asset file in pyproject.toml 2024-11-19 07:02:37 -08:00
Brandon Rising
1fd80d54a4 Run Ruff 2024-11-19 07:02:37 -08:00
Brandon Rising
991f63e455 Store CIELab_to_UPLab.icc within the repo 2024-11-19 07:02:37 -08:00
Brandon Rising
6a1efd3527 Add validation to some of the node inputs 2024-11-19 07:02:37 -08:00
Brandon Rising
0eadc0dd9e feat: Support a subset of composition nodes within base invokeai 2024-11-19 07:02:37 -08:00
120 changed files with 7090 additions and 1438 deletions

SECURITY.md (new file, 14 lines)

@@ -0,0 +1,14 @@
# Security Policy
## Supported Versions
Only the latest version of Invoke will receive security updates.
We do not currently maintain multiple versions of the application with updates.
## Reporting a Vulnerability
To report a vulnerability, contact the Invoke team directly at security@invoke.ai
At this time, we do not maintain a formal bug bounty program.
You can also share identified security issues with our team on huntr.com

View File

@@ -50,7 +50,7 @@ Applications are built on top of the invoke framework. They should construct `in
### Web UI
The Web UI is built on top of an HTTP API built with [FastAPI](https://fastapi.tiangolo.com/) and [Socket.IO](https://socket.io/). The frontend code is found in `/frontend` and the backend code is found in `/ldm/invoke/app/api_app.py` and `/ldm/invoke/app/api/`. The code is further organized as such:
The Web UI is built on top of an HTTP API built with [FastAPI](https://fastapi.tiangolo.com/) and [Socket.IO](https://socket.io/). The frontend code is found in `/invokeai/frontend` and the backend code is found in `/invokeai/app/api_app.py` and `/invokeai/app/api/`. The code is further organized as such:
| Component | Description |
| --- | --- |
@@ -62,7 +62,7 @@ The Web UI is built on top of an HTTP API built with [FastAPI](https://fastapi.t
### CLI
The CLI is built automatically from invocation metadata, and also supports invocation piping and auto-linking. Code is available in `/ldm/invoke/app/cli_app.py`.
The CLI is built automatically from invocation metadata, and also supports invocation piping and auto-linking. Code is available in `/invokeai/frontend/cli`.
## Invoke
@@ -70,7 +70,7 @@ The Invoke framework provides the interface to the underlying AI systems and is
### Invoker
The invoker (`/ldm/invoke/app/services/invoker.py`) is the primary interface through which applications interact with the framework. Its primary purpose is to create, manage, and invoke sessions. It also maintains two sets of services:
The invoker (`/invokeai/app/services/invoker.py`) is the primary interface through which applications interact with the framework. Its primary purpose is to create, manage, and invoke sessions. It also maintains two sets of services:
- **invocation services**, which are used by invocations to interact with core functionality.
- **invoker services**, which are used by the invoker to manage sessions and manage the invocation queue.
@@ -82,12 +82,12 @@ The session graph does not support looping. This is left as an application probl
### Invocations
Invocations represent individual units of execution, with inputs and outputs. All invocations are located in `/ldm/invoke/app/invocations`, and are all automatically discovered and made available in the applications. These are the primary way to expose new functionality in Invoke.AI, and the [implementation guide](INVOCATIONS.md) explains how to add new invocations.
Invocations represent individual units of execution, with inputs and outputs. All invocations are located in `/invokeai/app/invocations`, and are all automatically discovered and made available in the applications. These are the primary way to expose new functionality in Invoke.AI, and the [implementation guide](INVOCATIONS.md) explains how to add new invocations.
### Services
Services provide invocations access AI Core functionality and other necessary functionality (e.g. image storage). These are available in `/ldm/invoke/app/services`. As a general rule, new services should provide an interface as an abstract base class, and may provide a lightweight local implementation by default in their module. The goal for all services should be to enable the usage of different implementations (e.g. using cloud storage for image storage), but should not load any module dependencies unless that implementation has been used (i.e. don't import anything that won't be used, especially if it's expensive to import).
Services provide invocations access AI Core functionality and other necessary functionality (e.g. image storage). These are available in `/invokeai/app/services`. As a general rule, new services should provide an interface as an abstract base class, and may provide a lightweight local implementation by default in their module. The goal for all services should be to enable the usage of different implementations (e.g. using cloud storage for image storage), but should not load any module dependencies unless that implementation has been used (i.e. don't import anything that won't be used, especially if it's expensive to import).
## AI Core
The AI Core is represented by the rest of the code base (i.e. the code outside of `/ldm/invoke/app/`).
The AI Core is represented by the rest of the code base (i.e. the code outside of `/invokeai/app/`).

View File

@@ -287,8 +287,8 @@ new Invocation ready to be used.
Once you've created a Node, the next step is to share it with the community! The
best way to do this is to submit a Pull Request to add the Node to the
[Community Nodes](nodes/communityNodes) list. If you're not sure how to do that,
take a look a at our [contributing nodes overview](contributingNodes).
[Community Nodes](../nodes/communityNodes.md) list. If you're not sure how to do that,
take a look a at our [contributing nodes overview](../nodes/contributingNodes.md).
## Advanced

View File

@@ -9,20 +9,20 @@ model. These are the:
configuration information. Among other things, the record service
tracks the type of the model, its provenance, and where it can be
found on disk.
* _ModelInstallServiceBase_ A service for installing models to
disk. It uses `DownloadQueueServiceBase` to download models and
their metadata, and `ModelRecordServiceBase` to store that
information. It is also responsible for managing the InvokeAI
`models` directory and its contents.
* _DownloadQueueServiceBase_
A multithreaded downloader responsible
for downloading models from a remote source to disk. The download
queue has special methods for downloading repo_id folders from
Hugging Face, as well as discriminating among model versions in
Civitai, but can be used for arbitrary content.
* _ModelLoadServiceBase_
Responsible for loading a model from disk
into RAM and VRAM and getting it ready for inference.
@@ -207,9 +207,9 @@ for use in the InvokeAI web server. Its signature is:
```
def open(
cls,
config: InvokeAIAppConfig,
conn: Optional[sqlite3.Connection] = None,
cls,
config: InvokeAIAppConfig,
conn: Optional[sqlite3.Connection] = None,
lock: Optional[threading.Lock] = None
) -> Union[ModelRecordServiceSQL, ModelRecordServiceFile]:
```
@@ -363,7 +363,7 @@ functionality:
* Registering a model config record for a model already located on the
local filesystem, without moving it or changing its path.
* Installing a model alreadiy located on the local filesystem, by
moving it into the InvokeAI root directory under the
`models` folder (or wherever config parameter `models_dir`
@@ -371,21 +371,21 @@ functionality:
* Probing of models to determine their type, base type and other key
information.
* Interface with the InvokeAI event bus to provide status updates on
the download, installation and registration process.
* Downloading a model from an arbitrary URL and installing it in
`models_dir`.
* Special handling for HuggingFace repo_ids to recursively download
the contents of the repository, paying attention to alternative
variants such as fp16.
* Saving tags and other metadata about the model into the invokeai database
when fetching from a repo that provides that type of information,
(currently only HuggingFace).
### Initializing the installer
A default installer is created at InvokeAI api startup time and stored
@@ -461,7 +461,7 @@ revision.
`config` is an optional dict of values that will override the
autoprobed values for model type, base, scheduler prediction type, and
so forth. See [Model configuration and
probing](#Model-configuration-and-probing) for details.
probing](#model-configuration-and-probing) for details.
`access_token` is an optional access token for accessing resources
that need authentication.
@@ -494,7 +494,7 @@ source8 = URLModelSource(url='https://civitai.com/api/download/models/63006', ac
for source in [source1, source2, source3, source4, source5, source6, source7]:
install_job = installer.install_model(source)
source2job = installer.wait_for_installs(timeout=120)
for source in sources:
job = source2job[source]
@@ -504,7 +504,7 @@ for source in sources:
print(f"{source} installed as {model_key}")
elif job.errored:
print(f"{source}: {job.error_type}.\nStack trace:\n{job.error}")
```
As shown here, the `import_model()` method accepts a variety of

View File

@@ -1,6 +1,6 @@
# InvokeAI Backend Tests
We use `pytest` to run the backend python tests. (See [pyproject.toml](/pyproject.toml) for the default `pytest` options.)
We use `pytest` to run the backend python tests. (See [pyproject.toml](https://github.com/invoke-ai/InvokeAI/blob/main/pyproject.toml) for the default `pytest` options.)
## Fast vs. Slow
All tests are categorized as either 'fast' (no test annotation) or 'slow' (annotated with the `@pytest.mark.slow` decorator).
@@ -33,7 +33,7 @@ pytest tests -m ""
## Test Organization
All backend tests are in the [`tests/`](/tests/) directory. This directory mirrors the organization of the `invokeai/` directory. For example, tests for `invokeai/model_management/model_manager.py` would be found in `tests/model_management/test_model_manager.py`.
All backend tests are in the [`tests/`](https://github.com/invoke-ai/InvokeAI/tree/main/tests) directory. This directory mirrors the organization of the `invokeai/` directory. For example, tests for `invokeai/model_management/model_manager.py` would be found in `tests/model_management/test_model_manager.py`.
TODO: The above statement is aspirational. A re-organization of legacy tests is required to make it true.

View File

@@ -2,7 +2,7 @@
## **What do I need to know to help?**
If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential.
If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential.
## **Get Started**
@@ -12,7 +12,7 @@ To get started, take a look at our [new contributors checklist](newContributorCh
Once you're setup, for more information, you can review the documentation specific to your area of interest:
* #### [InvokeAI Architecure](../ARCHITECTURE.md)
* #### [Frontend Documentation](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web)
* #### [Frontend Documentation](../frontend/index.md)
* #### [Node Documentation](../INVOCATIONS.md)
* #### [Local Development](../LOCAL_DEVELOPMENT.md)
@@ -20,15 +20,15 @@ Once you're setup, for more information, you can review the documentation specif
If you don't feel ready to make a code contribution yet, no problem! You can also help out in other ways, such as [documentation](documentation.md), [translation](translation.md) or helping support other users and triage issues as they're reported in GitHub.
There are two paths to making a development contribution:
There are two paths to making a development contribution:
1. Choosing an open issue to address. Open issues can be found in the [Issues](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen) section of the InvokeAI repository. These are tagged by the issue type (bug, enhancement, etc.) along with the “good first issues” tag denoting if they are suitable for first time contributors.
1. Additional items can be found on our [roadmap](https://github.com/orgs/invoke-ai/projects/7). The roadmap is organized in terms of priority, and contains features of varying size and complexity. If there is an inflight item youd like to help with, reach out to the contributor assigned to the item to see how you can help.
1. Additional items can be found on our [roadmap](https://github.com/orgs/invoke-ai/projects/7). The roadmap is organized in terms of priority, and contains features of varying size and complexity. If there is an inflight item youd like to help with, reach out to the contributor assigned to the item to see how you can help.
2. Opening a new issue or feature to add. **Please make sure you have searched through existing issues before creating new ones.**
*Regardless of what you choose, please post in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord before you start development in order to confirm that the issue or feature is aligned with the current direction of the project. We value our contributors time and effort and want to ensure that no ones time is being misspent.*
## Best Practices:
## Best Practices:
* Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
* Comments! Commenting your code helps reviewers easily understand your contribution
* Use Python and Typescripts typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
@@ -38,7 +38,7 @@ There are two paths to making a development contribution:
If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
For frontend related work, **@psychedelicious** is the best person to reach out to.
For frontend related work, **@psychedelicious** is the best person to reach out to.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@psychedelicious**.

View File

@@ -22,15 +22,15 @@ Before starting these steps, ensure you have your local environment [configured
2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
3. Clone the repository to your local machine using:
```bash
git clone https://github.com/your-GitHub-username/InvokeAI.git
```
```bash
git clone https://github.com/your-GitHub-username/InvokeAI.git
```
If you're unfamiliar with using Git through the commandline, [GitHub Desktop](https://desktop.github.com) is a easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface. 4. Create a new branch for your fix using:
```bash
git checkout -b branch-name-here
```
```bash
git checkout -b branch-name-here
```
5. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:

View File

@@ -27,9 +27,9 @@ If you just want to use Invoke, you should use the [installer][installer link].
5. Activate the venv (you'll need to do this every time you want to run the app):
```sh
source .venv/bin/activate
```
```sh
source .venv/bin/activate
```
6. Install the repo as an [editable install][editable install link]:
@@ -37,7 +37,7 @@ If you just want to use Invoke, you should use the [installer][installer link].
pip install -e ".[dev,test,xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
Refer to the [manual installation][manual install link]] instructions for more determining the correct install options. `xformers` is optional, but `dev` and `test` are not.
Refer to the [manual installation][manual install link] instructions for more determining the correct install options. `xformers` is optional, but `dev` and `test` are not.
7. Install the frontend dev toolchain:

View File

@@ -34,11 +34,11 @@ Please reach out to @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy)
## Contributors
This project is a combined effort of dedicated people from across the world. [Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort.
This project is a combined effort of dedicated people from across the world. [Check out the list of all these amazing people](contributors.md). We thank them for their time, hard work and effort.
## Code of Conduct
The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](https://github.com/invoke-ai/InvokeAI/blob/main/CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment.
The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](../CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment.
By making a contribution to this project, you certify that:

View File

@@ -110,7 +110,7 @@ async def cancel_by_batch_ids(
@session_queue_router.put(
"/{queue_id}/cancel_by_destination",
operation_id="cancel_by_destination",
responses={200: {"model": CancelByBatchIDsResult}},
responses={200: {"model": CancelByDestinationResult}},
)
async def cancel_by_destination(
queue_id: str = Path(description="The queue id to perform this operation on"),

View File

@@ -1,98 +1,120 @@
from typing import Any, Union
from typing import Optional, Union
import numpy as np
import numpy.typing as npt
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import resize as tv_resize
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, LatentsField
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, Input, InputField, LatentsField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
from invokeai.backend.util.devices import TorchDevice
def slerp(
t: Union[float, np.ndarray],
v0: Union[torch.Tensor, np.ndarray],
v1: Union[torch.Tensor, np.ndarray],
device: torch.device,
DOT_THRESHOLD: float = 0.9995,
):
"""
Spherical linear interpolation
Args:
t (float/np.ndarray): Float value between 0.0 and 1.0
v0 (np.ndarray): Starting vector
v1 (np.ndarray): Final vector
DOT_THRESHOLD (float): Threshold for considering the two vectors as
colineal. Not recommended to alter this.
Returns:
v2 (np.ndarray): Interpolation vector between v0 and v1
"""
inputs_are_torch = False
if not isinstance(v0, np.ndarray):
inputs_are_torch = True
v0 = v0.detach().cpu().numpy()
if not isinstance(v1, np.ndarray):
inputs_are_torch = True
v1 = v1.detach().cpu().numpy()
dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
if np.abs(dot) > DOT_THRESHOLD:
v2 = (1 - t) * v0 + t * v1
else:
theta_0 = np.arccos(dot)
sin_theta_0 = np.sin(theta_0)
theta_t = theta_0 * t
sin_theta_t = np.sin(theta_t)
s0 = np.sin(theta_0 - theta_t) / sin_theta_0
s1 = sin_theta_t / sin_theta_0
v2 = s0 * v0 + s1 * v1
if inputs_are_torch:
v2 = torch.from_numpy(v2).to(device)
return v2
@invocation(
"lblend",
title="Blend Latents",
tags=["latents", "blend"],
tags=["latents", "blend", "mask"],
category="latents",
version="1.0.3",
version="1.1.0",
)
class BlendLatentsInvocation(BaseInvocation):
"""Blend two latents using a given alpha. Latents must have same size."""
"""Blend two latents using a given alpha. If a mask is provided, the second latents will be masked before blending.
Latents must have same size. Masking functionality added by @dwringer."""
latents_a: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
latents_b: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
alpha: float = InputField(default=0.5, description=FieldDescriptions.blend_alpha)
latents_a: LatentsField = InputField(description=FieldDescriptions.latents, input=Input.Connection)
latents_b: LatentsField = InputField(description=FieldDescriptions.latents, input=Input.Connection)
mask: Optional[ImageField] = InputField(default=None, description="Mask for blending in latents B")
alpha: float = InputField(ge=0, default=0.5, description=FieldDescriptions.blend_alpha)
def prep_mask_tensor(self, mask_image: Image.Image) -> torch.Tensor:
if mask_image.mode != "L":
mask_image = mask_image.convert("L")
mask_tensor = image_resized_to_grid_as_tensor(mask_image, normalize=False)
if mask_tensor.dim() == 3:
mask_tensor = mask_tensor.unsqueeze(0)
return mask_tensor
def replace_tensor_from_masked_tensor(
self, tensor: torch.Tensor, other_tensor: torch.Tensor, mask_tensor: torch.Tensor
):
output = tensor.clone()
mask_tensor = mask_tensor.expand(output.shape)
if output.dtype != torch.float16:
output = torch.add(output, mask_tensor * torch.sub(other_tensor, tensor))
else:
output = torch.add(output, mask_tensor.half() * torch.sub(other_tensor, tensor))
return output
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents_a = context.tensors.load(self.latents_a.latents_name)
latents_b = context.tensors.load(self.latents_b.latents_name)
if self.mask is None:
mask_tensor = torch.zeros(latents_a.shape[-2:])
else:
mask_tensor = self.prep_mask_tensor(context.images.get_pil(self.mask.image_name))
mask_tensor = tv_resize(mask_tensor, latents_a.shape[-2:], T.InterpolationMode.BILINEAR, antialias=False)
latents_b = self.replace_tensor_from_masked_tensor(latents_b, latents_a, mask_tensor)
if latents_a.shape != latents_b.shape:
raise Exception("Latents to blend must be the same size.")
raise ValueError("Latents to blend must be the same size.")
device = TorchDevice.choose_torch_device()
def slerp(
t: Union[float, npt.NDArray[Any]], # FIXME: maybe use np.float32 here?
v0: Union[torch.Tensor, npt.NDArray[Any]],
v1: Union[torch.Tensor, npt.NDArray[Any]],
DOT_THRESHOLD: float = 0.9995,
) -> Union[torch.Tensor, npt.NDArray[Any]]:
"""
Spherical linear interpolation
Args:
t (float/np.ndarray): Float value between 0.0 and 1.0
v0 (np.ndarray): Starting vector
v1 (np.ndarray): Final vector
DOT_THRESHOLD (float): Threshold for considering the two vectors as
colineal. Not recommended to alter this.
Returns:
v2 (np.ndarray): Interpolation vector between v0 and v1
"""
inputs_are_torch = False
if not isinstance(v0, np.ndarray):
inputs_are_torch = True
v0 = v0.detach().cpu().numpy()
if not isinstance(v1, np.ndarray):
inputs_are_torch = True
v1 = v1.detach().cpu().numpy()
dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
if np.abs(dot) > DOT_THRESHOLD:
v2 = (1 - t) * v0 + t * v1
else:
theta_0 = np.arccos(dot)
sin_theta_0 = np.sin(theta_0)
theta_t = theta_0 * t
sin_theta_t = np.sin(theta_t)
s0 = np.sin(theta_0 - theta_t) / sin_theta_0
s1 = sin_theta_t / sin_theta_0
v2 = s0 * v0 + s1 * v1
if inputs_are_torch:
v2_torch: torch.Tensor = torch.from_numpy(v2).to(device)
return v2_torch
else:
assert isinstance(v2, np.ndarray)
return v2
# blend
bl = slerp(self.alpha, latents_a, latents_b)
assert isinstance(bl, torch.Tensor)
blended_latents: torch.Tensor = bl # for type checking convenience
blended_latents = slerp(self.alpha, latents_a, latents_b, device)
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
blended_latents = blended_latents.to("cpu")
TorchDevice.empty_cache()
torch.cuda.empty_cache()
name = context.tensors.save(tensor=blended_latents)
return LatentsOutput.build(latents_name=name, latents=blended_latents, seed=self.latents_a.seed)
return LatentsOutput.build(latents_name=name, latents=blended_latents)
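As a standalone sanity check of the spherical interpolation used above, here is a trimmed-down, numpy-only restatement of `slerp` applied to two orthogonal unit vectors; unlike a plain linear blend, the result stays on the unit circle:

```python
import numpy as np

def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
    dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
    if np.abs(dot) > DOT_THRESHOLD:
        return (1 - t) * v0 + t * v1  # nearly colinear: fall back to lerp
    theta_0 = np.arccos(dot)
    theta_t = theta_0 * t
    s0 = np.sin(theta_0 - theta_t) / np.sin(theta_0)
    s1 = np.sin(theta_t) / np.sin(theta_0)
    return s0 * v0 + s1 * v1

v0 = np.array([1.0, 0.0])
v1 = np.array([0.0, 1.0])
mid = slerp(0.5, v0, v1)
print(mid, np.linalg.norm(mid))  # [0.7071 0.7071] 1.0
```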

File diff suppressed because it is too large.

View File

@@ -250,6 +250,11 @@ class FluxConditioningField(BaseModel):
"""A conditioning tensor primitive value"""
conditioning_name: str = Field(description="The name of conditioning tensor")
mask: Optional[TensorField] = Field(
default=None,
description="The mask associated with this conditioning tensor. Excluded regions should be set to False, "
"included regions should be set to True.",
)
class SD3ConditioningField(BaseModel):

View File

@@ -30,6 +30,7 @@ from invokeai.backend.flux.controlnet.xlabs_controlnet_flux import XLabsControlN
from invokeai.backend.flux.denoise import denoise
from invokeai.backend.flux.extensions.inpaint_extension import InpaintExtension
from invokeai.backend.flux.extensions.instantx_controlnet_extension import InstantXControlNetExtension
from invokeai.backend.flux.extensions.regional_prompting_extension import RegionalPromptingExtension
from invokeai.backend.flux.extensions.xlabs_controlnet_extension import XLabsControlNetExtension
from invokeai.backend.flux.extensions.xlabs_ip_adapter_extension import XLabsIPAdapterExtension
from invokeai.backend.flux.ip_adapter.xlabs_ip_adapter_flux import XlabsIpAdapterFlux
@@ -42,6 +43,7 @@ from invokeai.backend.flux.sampling_utils import (
pack,
unpack,
)
from invokeai.backend.flux.text_conditioning import FluxTextConditioning
from invokeai.backend.lora.conversions.flux_lora_constants import FLUX_LORA_TRANSFORMER_PREFIX
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.lora.lora_patcher import LoRAPatcher
@@ -56,7 +58,7 @@ from invokeai.backend.util.devices import TorchDevice
title="FLUX Denoise",
tags=["image", "flux"],
category="image",
version="3.2.1",
version="3.2.2",
classification=Classification.Prototype,
)
class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
@@ -87,10 +89,10 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
input=Input.Connection,
title="Transformer",
)
positive_text_conditioning: FluxConditioningField = InputField(
positive_text_conditioning: FluxConditioningField | list[FluxConditioningField] = InputField(
description=FieldDescriptions.positive_cond, input=Input.Connection
)
negative_text_conditioning: FluxConditioningField | None = InputField(
negative_text_conditioning: FluxConditioningField | list[FluxConditioningField] | None = InputField(
default=None,
description="Negative conditioning tensor. Can be None if cfg_scale is 1.0.",
input=Input.Connection,
@@ -139,36 +141,12 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)
def _load_text_conditioning(
self, context: InvocationContext, conditioning_name: str, dtype: torch.dtype
) -> Tuple[torch.Tensor, torch.Tensor]:
# Load the conditioning data.
cond_data = context.conditioning.load(conditioning_name)
assert len(cond_data.conditionings) == 1
flux_conditioning = cond_data.conditionings[0]
assert isinstance(flux_conditioning, FLUXConditioningInfo)
flux_conditioning = flux_conditioning.to(dtype=dtype)
t5_embeddings = flux_conditioning.t5_embeds
clip_embeddings = flux_conditioning.clip_embeds
return t5_embeddings, clip_embeddings
def _run_diffusion(
self,
context: InvocationContext,
):
inference_dtype = torch.bfloat16
# Load the conditioning data.
pos_t5_embeddings, pos_clip_embeddings = self._load_text_conditioning(
context, self.positive_text_conditioning.conditioning_name, inference_dtype
)
neg_t5_embeddings: torch.Tensor | None = None
neg_clip_embeddings: torch.Tensor | None = None
if self.negative_text_conditioning is not None:
neg_t5_embeddings, neg_clip_embeddings = self._load_text_conditioning(
context, self.negative_text_conditioning.conditioning_name, inference_dtype
)
# Load the input latents, if provided.
init_latents = context.tensors.load(self.latents.latents_name) if self.latents else None
if init_latents is not None:
@@ -183,15 +161,45 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
dtype=inference_dtype,
seed=self.seed,
)
b, _c, latent_h, latent_w = noise.shape
packed_h = latent_h // 2
packed_w = latent_w // 2
# Load the conditioning data.
pos_text_conditionings = self._load_text_conditioning(
context=context,
cond_field=self.positive_text_conditioning,
packed_height=packed_h,
packed_width=packed_w,
dtype=inference_dtype,
device=TorchDevice.choose_torch_device(),
)
neg_text_conditionings: list[FluxTextConditioning] | None = None
if self.negative_text_conditioning is not None:
neg_text_conditionings = self._load_text_conditioning(
context=context,
cond_field=self.negative_text_conditioning,
packed_height=packed_h,
packed_width=packed_w,
dtype=inference_dtype,
device=TorchDevice.choose_torch_device(),
)
pos_regional_prompting_extension = RegionalPromptingExtension.from_text_conditioning(
pos_text_conditionings, img_seq_len=packed_h * packed_w
)
neg_regional_prompting_extension = (
RegionalPromptingExtension.from_text_conditioning(neg_text_conditionings, img_seq_len=packed_h * packed_w)
if neg_text_conditionings
else None
)
transformer_info = context.models.load(self.transformer.transformer)
is_schnell = "schnell" in transformer_info.config.config_path
# Calculate the timestep schedule.
image_seq_len = noise.shape[-1] * noise.shape[-2] // 4
timesteps = get_schedule(
num_steps=self.num_steps,
image_seq_len=image_seq_len,
image_seq_len=packed_h * packed_w,
shift=not is_schnell,
)
@@ -228,28 +236,17 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
inpaint_mask = self._prep_inpaint_mask(context, x)
b, _c, latent_h, latent_w = x.shape
img_ids = generate_img_ids(h=latent_h, w=latent_w, batch_size=b, device=x.device, dtype=x.dtype)
pos_bs, pos_t5_seq_len, _ = pos_t5_embeddings.shape
pos_txt_ids = torch.zeros(
pos_bs, pos_t5_seq_len, 3, dtype=inference_dtype, device=TorchDevice.choose_torch_device()
)
neg_txt_ids: torch.Tensor | None = None
if neg_t5_embeddings is not None:
neg_bs, neg_t5_seq_len, _ = neg_t5_embeddings.shape
neg_txt_ids = torch.zeros(
neg_bs, neg_t5_seq_len, 3, dtype=inference_dtype, device=TorchDevice.choose_torch_device()
)
# Pack all latent tensors.
init_latents = pack(init_latents) if init_latents is not None else None
inpaint_mask = pack(inpaint_mask) if inpaint_mask is not None else None
noise = pack(noise)
x = pack(x)
# Now that we have 'packed' the latent tensors, verify that we calculated the image_seq_len correctly.
assert image_seq_len == x.shape[1]
# Now that we have 'packed' the latent tensors, verify that we calculated the image_seq_len, packed_h, and
# packed_w correctly.
assert packed_h * packed_w == x.shape[1]
# Prepare inpaint extension.
inpaint_extension: InpaintExtension | None = None
@@ -338,12 +335,8 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
model=transformer,
img=x,
img_ids=img_ids,
txt=pos_t5_embeddings,
txt_ids=pos_txt_ids,
vec=pos_clip_embeddings,
neg_txt=neg_t5_embeddings,
neg_txt_ids=neg_txt_ids,
neg_vec=neg_clip_embeddings,
pos_regional_prompting_extension=pos_regional_prompting_extension,
neg_regional_prompting_extension=neg_regional_prompting_extension,
timesteps=timesteps,
step_callback=self._build_step_callback(context),
guidance=self.guidance,
@@ -357,6 +350,43 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
x = unpack(x.float(), self.height, self.width)
return x
def _load_text_conditioning(
self,
context: InvocationContext,
cond_field: FluxConditioningField | list[FluxConditioningField],
packed_height: int,
packed_width: int,
dtype: torch.dtype,
device: torch.device,
) -> list[FluxTextConditioning]:
"""Load text conditioning data from a FluxConditioningField or a list of FluxConditioningFields."""
# Normalize to a list of FluxConditioningFields.
cond_list = [cond_field] if isinstance(cond_field, FluxConditioningField) else cond_field
text_conditionings: list[FluxTextConditioning] = []
for cond_field in cond_list:
# Load the text embeddings.
cond_data = context.conditioning.load(cond_field.conditioning_name)
assert len(cond_data.conditionings) == 1
flux_conditioning = cond_data.conditionings[0]
assert isinstance(flux_conditioning, FLUXConditioningInfo)
flux_conditioning = flux_conditioning.to(dtype=dtype, device=device)
t5_embeddings = flux_conditioning.t5_embeds
clip_embeddings = flux_conditioning.clip_embeds
# Load the mask, if provided.
mask: Optional[torch.Tensor] = None
if cond_field.mask is not None:
mask = context.tensors.load(cond_field.mask.tensor_name)
mask = mask.to(device=device)
mask = RegionalPromptingExtension.preprocess_regional_prompt_mask(
mask, packed_height, packed_width, dtype, device
)
text_conditionings.append(FluxTextConditioning(t5_embeddings, clip_embeddings, mask))
return text_conditionings
@classmethod
def prep_cfg_scale(
cls, cfg_scale: float | list[float], timesteps: list[float], cfg_scale_start_step: int, cfg_scale_end_step: int

View File

@@ -1,11 +1,18 @@
from contextlib import ExitStack
from typing import Iterator, Literal, Tuple
from typing import Iterator, Literal, Optional, Tuple
import torch
from transformers import CLIPTextModel, CLIPTokenizer, T5EncoderModel, T5Tokenizer
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField
from invokeai.app.invocations.fields import (
FieldDescriptions,
FluxConditioningField,
Input,
InputField,
TensorField,
UIComponent,
)
from invokeai.app.invocations.model import CLIPField, T5EncoderField
from invokeai.app.invocations.primitives import FluxConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
@@ -22,7 +29,7 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import Condit
title="FLUX Text Encoding",
tags=["prompt", "conditioning", "flux"],
category="conditioning",
version="1.1.0",
version="1.1.1",
classification=Classification.Prototype,
)
class FluxTextEncoderInvocation(BaseInvocation):
@@ -41,7 +48,10 @@ class FluxTextEncoderInvocation(BaseInvocation):
t5_max_seq_len: Literal[256, 512] = InputField(
description="Max sequence length for the T5 encoder. Expected to be 256 for FLUX schnell models and 512 for FLUX dev models."
)
prompt: str = InputField(description="Text prompt to encode.")
prompt: str = InputField(description="Text prompt to encode.", ui_component=UIComponent.Textarea)
mask: Optional[TensorField] = InputField(
default=None, description="A mask defining the region that this conditioning prompt applies to."
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> FluxConditioningOutput:
@@ -54,7 +64,9 @@ class FluxTextEncoderInvocation(BaseInvocation):
)
conditioning_name = context.conditioning.save(conditioning_data)
return FluxConditioningOutput.build(conditioning_name)
return FluxConditioningOutput(
conditioning=FluxConditioningField(conditioning_name=conditioning_name, mask=self.mask)
)
def _t5_encode(self, context: InvocationContext) -> torch.Tensor:
t5_tokenizer_info = context.models.load(self.t5_encoder.tokenizer)

View File

@@ -0,0 +1,59 @@
from pydantic import ValidationInfo, field_validator
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import InputField, OutputField
from invokeai.app.services.shared.invocation_context import InvocationContext
@invocation_output("image_panel_coordinate_output")
class ImagePanelCoordinateOutput(BaseInvocationOutput):
x_left: int = OutputField(description="The left x-coordinate of the panel.")
y_top: int = OutputField(description="The top y-coordinate of the panel.")
width: int = OutputField(description="The width of the panel.")
height: int = OutputField(description="The height of the panel.")
@invocation(
"image_panel_layout",
title="Image Panel Layout",
tags=["image", "panel", "layout"],
category="image",
version="1.0.0",
classification=Classification.Prototype,
)
class ImagePanelLayoutInvocation(BaseInvocation):
"""Get the coordinates of a single panel in a grid. (If the full image shape cannot be divided evenly into panels,
then the grid may not cover the entire image.)
"""
width: int = InputField(description="The width of the entire grid.")
height: int = InputField(description="The height of the entire grid.")
num_cols: int = InputField(ge=1, default=1, description="The number of columns in the grid.")
num_rows: int = InputField(ge=1, default=1, description="The number of rows in the grid.")
panel_col_idx: int = InputField(ge=0, default=0, description="The column index of the panel to be processed.")
panel_row_idx: int = InputField(ge=0, default=0, description="The row index of the panel to be processed.")
@field_validator("panel_col_idx")
def validate_panel_col_idx(cls, v: int, info: ValidationInfo) -> int:
if v < 0 or v >= info.data["num_cols"]:
raise ValueError(f"panel_col_idx must be between 0 and {info.data['num_cols'] - 1}")
return v
@field_validator("panel_row_idx")
def validate_panel_row_idx(cls, v: int, info: ValidationInfo) -> int:
if v < 0 or v >= info.data["num_rows"]:
raise ValueError(f"panel_row_idx must be between 0 and {info.data['num_rows'] - 1}")
return v
def invoke(self, context: InvocationContext) -> ImagePanelCoordinateOutput:
x_left = self.panel_col_idx * (self.width // self.num_cols)
y_top = self.panel_row_idx * (self.height // self.num_rows)
width = self.width // self.num_cols
height = self.height // self.num_rows
return ImagePanelCoordinateOutput(x_left=x_left, y_top=y_top, width=width, height=height)
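As a quick check of the panel arithmetic above, here is a minimal standalone sketch (hypothetical values, outside the node/pydantic machinery): a 1024x768 grid split into 2 columns and 3 rows puts the panel at column 1, row 2 at (512, 512) with size 512x256.

# Standalone sketch of the panel-coordinate math used by ImagePanelLayoutInvocation.
def panel_coordinates(width: int, height: int, num_cols: int, num_rows: int,
                      panel_col_idx: int, panel_row_idx: int) -> tuple[int, int, int, int]:
    panel_width = width // num_cols
    panel_height = height // num_rows
    return panel_col_idx * panel_width, panel_row_idx * panel_height, panel_width, panel_height

assert panel_coordinates(1024, 768, 2, 3, 1, 2) == (512, 512, 512, 256)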

View File

@@ -86,7 +86,7 @@ class ModelLoadService(ModelLoadServiceBase):
def torch_load_file(checkpoint: Path) -> AnyModel:
scan_result = scan_file_path(checkpoint)
if scan_result.infected_files != 0:
if scan_result.infected_files != 0 or scan_result.scan_err:
raise Exception("The model at {checkpoint} is potentially infected by malware. Aborting load.")
result = torch_load(checkpoint, map_location="cpu")
return result

View File

@@ -378,6 +378,9 @@ class DefaultSessionProcessor(SessionProcessorBase):
self._poll_now()
async def _on_queue_item_status_changed(self, event: FastAPIEvent[QueueItemStatusChangedEvent]) -> None:
# Make sure the cancel event is for the currently processing queue item
if self._queue_item and self._queue_item.item_id != event[1].item_id:
return
if self._queue_item and event[1].status in ["completed", "failed", "canceled"]:
# When the queue item is canceled via HTTP, the queue item status is set to `"canceled"` and this event is
# emitted. We need to respond to this event and stop graph execution. This is done by setting the cancel

View File

@@ -1,9 +1,10 @@
import einops
import torch
from invokeai.backend.flux.extensions.regional_prompting_extension import RegionalPromptingExtension
from invokeai.backend.flux.extensions.xlabs_ip_adapter_extension import XLabsIPAdapterExtension
from invokeai.backend.flux.math import attention
from invokeai.backend.flux.modules.layers import DoubleStreamBlock
from invokeai.backend.flux.modules.layers import DoubleStreamBlock, SingleStreamBlock
class CustomDoubleStreamBlockProcessor:
@@ -13,7 +14,12 @@ class CustomDoubleStreamBlockProcessor:
@staticmethod
def _double_stream_block_forward(
block: DoubleStreamBlock, img: torch.Tensor, txt: torch.Tensor, vec: torch.Tensor, pe: torch.Tensor
block: DoubleStreamBlock,
img: torch.Tensor,
txt: torch.Tensor,
vec: torch.Tensor,
pe: torch.Tensor,
attn_mask: torch.Tensor | None = None,
) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""This function is a direct copy of DoubleStreamBlock.forward(), but it returns some of the intermediate
values.
@@ -40,7 +46,7 @@ class CustomDoubleStreamBlockProcessor:
k = torch.cat((txt_k, img_k), dim=2)
v = torch.cat((txt_v, img_v), dim=2)
attn = attention(q, k, v, pe=pe)
attn = attention(q, k, v, pe=pe, attn_mask=attn_mask)
txt_attn, img_attn = attn[:, : txt.shape[1]], attn[:, txt.shape[1] :]
# calculate the img bloks
@@ -63,11 +69,15 @@ class CustomDoubleStreamBlockProcessor:
vec: torch.Tensor,
pe: torch.Tensor,
ip_adapter_extensions: list[XLabsIPAdapterExtension],
regional_prompting_extension: RegionalPromptingExtension,
) -> tuple[torch.Tensor, torch.Tensor]:
"""A custom implementation of DoubleStreamBlock.forward() with additional features:
- IP-Adapter support
"""
img, txt, img_q = CustomDoubleStreamBlockProcessor._double_stream_block_forward(block, img, txt, vec, pe)
attn_mask = regional_prompting_extension.get_double_stream_attn_mask(block_index)
img, txt, img_q = CustomDoubleStreamBlockProcessor._double_stream_block_forward(
block, img, txt, vec, pe, attn_mask=attn_mask
)
# Apply IP-Adapter conditioning.
for ip_adapter_extension in ip_adapter_extensions:
@@ -81,3 +91,48 @@ class CustomDoubleStreamBlockProcessor:
)
return img, txt
class CustomSingleStreamBlockProcessor:
"""A class containing a custom implementation of SingleStreamBlock.forward() with additional features (masking,
etc.)
"""
@staticmethod
def _single_stream_block_forward(
block: SingleStreamBlock,
x: torch.Tensor,
vec: torch.Tensor,
pe: torch.Tensor,
attn_mask: torch.Tensor | None = None,
) -> torch.Tensor:
"""This function is a direct copy of SingleStreamBlock.forward()."""
mod, _ = block.modulation(vec)
x_mod = (1 + mod.scale) * block.pre_norm(x) + mod.shift
qkv, mlp = torch.split(block.linear1(x_mod), [3 * block.hidden_size, block.mlp_hidden_dim], dim=-1)
q, k, v = einops.rearrange(qkv, "B L (K H D) -> K B H L D", K=3, H=block.num_heads)
q, k = block.norm(q, k, v)
# compute attention
attn = attention(q, k, v, pe=pe, attn_mask=attn_mask)
# compute activation in mlp stream, cat again and run second linear layer
output = block.linear2(torch.cat((attn, block.mlp_act(mlp)), 2))
return x + mod.gate * output
@staticmethod
def custom_single_block_forward(
timestep_index: int,
total_num_timesteps: int,
block_index: int,
block: SingleStreamBlock,
img: torch.Tensor,
vec: torch.Tensor,
pe: torch.Tensor,
regional_prompting_extension: RegionalPromptingExtension,
) -> torch.Tensor:
"""A custom implementation of SingleStreamBlock.forward() with additional features:
- Masking
"""
attn_mask = regional_prompting_extension.get_single_stream_attn_mask(block_index)
return CustomSingleStreamBlockProcessor._single_stream_block_forward(block, img, vec, pe, attn_mask=attn_mask)

View File

@@ -7,6 +7,7 @@ from tqdm import tqdm
from invokeai.backend.flux.controlnet.controlnet_flux_output import ControlNetFluxOutput, sum_controlnet_flux_outputs
from invokeai.backend.flux.extensions.inpaint_extension import InpaintExtension
from invokeai.backend.flux.extensions.instantx_controlnet_extension import InstantXControlNetExtension
from invokeai.backend.flux.extensions.regional_prompting_extension import RegionalPromptingExtension
from invokeai.backend.flux.extensions.xlabs_controlnet_extension import XLabsControlNetExtension
from invokeai.backend.flux.extensions.xlabs_ip_adapter_extension import XLabsIPAdapterExtension
from invokeai.backend.flux.model import Flux
@@ -18,14 +19,8 @@ def denoise(
# model input
img: torch.Tensor,
img_ids: torch.Tensor,
# positive text conditioning
txt: torch.Tensor,
txt_ids: torch.Tensor,
vec: torch.Tensor,
# negative text conditioning
neg_txt: torch.Tensor | None,
neg_txt_ids: torch.Tensor | None,
neg_vec: torch.Tensor | None,
pos_regional_prompting_extension: RegionalPromptingExtension,
neg_regional_prompting_extension: RegionalPromptingExtension | None,
# sampling parameters
timesteps: list[float],
step_callback: Callable[[PipelineIntermediateState], None],
@@ -61,9 +56,9 @@ def denoise(
total_num_timesteps=total_steps,
img=img,
img_ids=img_ids,
txt=txt,
txt_ids=txt_ids,
y=vec,
txt=pos_regional_prompting_extension.regional_text_conditioning.t5_embeddings,
txt_ids=pos_regional_prompting_extension.regional_text_conditioning.t5_txt_ids,
y=pos_regional_prompting_extension.regional_text_conditioning.clip_embeddings,
timesteps=t_vec,
guidance=guidance_vec,
)
@@ -78,9 +73,9 @@ def denoise(
pred = model(
img=img,
img_ids=img_ids,
txt=txt,
txt_ids=txt_ids,
y=vec,
txt=pos_regional_prompting_extension.regional_text_conditioning.t5_embeddings,
txt_ids=pos_regional_prompting_extension.regional_text_conditioning.t5_txt_ids,
y=pos_regional_prompting_extension.regional_text_conditioning.clip_embeddings,
timesteps=t_vec,
guidance=guidance_vec,
timestep_index=step_index,
@@ -88,6 +83,7 @@ def denoise(
controlnet_double_block_residuals=merged_controlnet_residuals.double_block_residuals,
controlnet_single_block_residuals=merged_controlnet_residuals.single_block_residuals,
ip_adapter_extensions=pos_ip_adapter_extensions,
regional_prompting_extension=pos_regional_prompting_extension,
)
step_cfg_scale = cfg_scale[step_index]
@@ -97,15 +93,15 @@ def denoise(
# TODO(ryand): Add option to run positive and negative predictions in a single batch for better performance
# on systems with sufficient VRAM.
if neg_txt is None or neg_txt_ids is None or neg_vec is None:
if neg_regional_prompting_extension is None:
raise ValueError("Negative text conditioning is required when cfg_scale is not 1.0.")
neg_pred = model(
img=img,
img_ids=img_ids,
txt=neg_txt,
txt_ids=neg_txt_ids,
y=neg_vec,
txt=neg_regional_prompting_extension.regional_text_conditioning.t5_embeddings,
txt_ids=neg_regional_prompting_extension.regional_text_conditioning.t5_txt_ids,
y=neg_regional_prompting_extension.regional_text_conditioning.clip_embeddings,
timesteps=t_vec,
guidance=guidance_vec,
timestep_index=step_index,
@@ -113,6 +109,7 @@ def denoise(
controlnet_double_block_residuals=None,
controlnet_single_block_residuals=None,
ip_adapter_extensions=neg_ip_adapter_extensions,
regional_prompting_extension=neg_regional_prompting_extension,
)
pred = neg_pred + step_cfg_scale * (pred - neg_pred)
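The final line above is the standard classifier-free guidance blend. A minimal sketch of that combination with hypothetical tensors (not the real model outputs):

import torch

def cfg_blend(pos_pred: torch.Tensor, neg_pred: torch.Tensor, cfg_scale: float) -> torch.Tensor:
    # Move away from the negative prediction toward the positive one by cfg_scale.
    return neg_pred + cfg_scale * (pos_pred - neg_pred)

pos, neg = torch.randn(1, 4), torch.randn(1, 4)
assert torch.allclose(cfg_blend(pos, neg, 1.0), pos)  # cfg_scale == 1.0 reduces to the positive prediction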

View File

@@ -0,0 +1,276 @@
from typing import Optional
import torch
import torchvision
from invokeai.backend.flux.text_conditioning import FluxRegionalTextConditioning, FluxTextConditioning
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import Range
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.mask import to_standard_float_mask
class RegionalPromptingExtension:
"""A class for managing regional prompting with FLUX.
This implementation is inspired by https://arxiv.org/pdf/2411.02395 (though there are significant differences).
"""
def __init__(
self,
regional_text_conditioning: FluxRegionalTextConditioning,
restricted_attn_mask: torch.Tensor | None = None,
):
self.regional_text_conditioning = regional_text_conditioning
self.restricted_attn_mask = restricted_attn_mask
def get_double_stream_attn_mask(self, block_index: int) -> torch.Tensor | None:
order = [self.restricted_attn_mask, None]
return order[block_index % len(order)]
def get_single_stream_attn_mask(self, block_index: int) -> torch.Tensor | None:
order = [self.restricted_attn_mask, None]
return order[block_index % len(order)]
@classmethod
def from_text_conditioning(cls, text_conditioning: list[FluxTextConditioning], img_seq_len: int):
"""Create a RegionalPromptingExtension from a list of text conditionings.
Args:
text_conditioning (list[FluxTextConditioning]): The text conditionings to use for regional prompting.
img_seq_len (int): The image sequence length (i.e. packed_height * packed_width).
"""
regional_text_conditioning = cls._concat_regional_text_conditioning(text_conditioning)
attn_mask_with_restricted_img_self_attn = cls._prepare_restricted_attn_mask(
regional_text_conditioning, img_seq_len
)
return cls(
regional_text_conditioning=regional_text_conditioning,
restricted_attn_mask=attn_mask_with_restricted_img_self_attn,
)
# Keeping _prepare_unrestricted_attn_mask for reference as an alternative masking strategy:
#
# @classmethod
# def _prepare_unrestricted_attn_mask(
# cls,
# regional_text_conditioning: FluxRegionalTextConditioning,
# img_seq_len: int,
# ) -> torch.Tensor:
# """Prepare an 'unrestricted' attention mask. In this context, 'unrestricted' means that:
# - img self-attention is not masked.
# - img regions attend to both txt within their own region and to global prompts.
# """
# device = TorchDevice.choose_torch_device()
# # Infer txt_seq_len from the t5_embeddings tensor.
# txt_seq_len = regional_text_conditioning.t5_embeddings.shape[1]
# # In the attention blocks, the txt seq and img seq are concatenated and then attention is applied.
# # Concatenation happens in the following order: [txt_seq, img_seq].
# # There are 4 portions of the attention mask to consider as we prepare it:
# # 1. txt attends to itself
# # 2. txt attends to corresponding regional img
# # 3. regional img attends to corresponding txt
# # 4. regional img attends to itself
# # Initialize empty attention mask.
# regional_attention_mask = torch.zeros(
# (txt_seq_len + img_seq_len, txt_seq_len + img_seq_len), device=device, dtype=torch.float16
# )
# for image_mask, t5_embedding_range in zip(
# regional_text_conditioning.image_masks, regional_text_conditioning.t5_embedding_ranges, strict=True
# ):
# # 1. txt attends to itself
# regional_attention_mask[
# t5_embedding_range.start : t5_embedding_range.end, t5_embedding_range.start : t5_embedding_range.end
# ] = 1.0
# # 2. txt attends to corresponding regional img
# # Note that we reshape to (1, img_seq_len) to ensure broadcasting works as desired.
# fill_value = image_mask.view(1, img_seq_len) if image_mask is not None else 1.0
# regional_attention_mask[t5_embedding_range.start : t5_embedding_range.end, txt_seq_len:] = fill_value
# # 3. regional img attends to corresponding txt
# # Note that we reshape to (img_seq_len, 1) to ensure broadcasting works as desired.
# fill_value = image_mask.view(img_seq_len, 1) if image_mask is not None else 1.0
# regional_attention_mask[txt_seq_len:, t5_embedding_range.start : t5_embedding_range.end] = fill_value
# # 4. regional img attends to itself
# # Allow unrestricted img self attention.
# regional_attention_mask[txt_seq_len:, txt_seq_len:] = 1.0
# # Convert attention mask to boolean.
# regional_attention_mask = regional_attention_mask > 0.5
# return regional_attention_mask
@classmethod
def _prepare_restricted_attn_mask(
cls,
regional_text_conditioning: FluxRegionalTextConditioning,
img_seq_len: int,
) -> torch.Tensor | None:
"""Prepare a 'restricted' attention mask. In this context, 'restricted' means that:
- img self-attention is only allowed within regions.
- img regions only attend to txt within their own region, not to global prompts.
"""
# Identify background region. I.e. the region that is not covered by any region masks.
background_region_mask: None | torch.Tensor = None
for image_mask in regional_text_conditioning.image_masks:
if image_mask is not None:
if background_region_mask is None:
background_region_mask = torch.ones_like(image_mask)
background_region_mask *= 1 - image_mask
if background_region_mask is None:
# There are no region masks, short-circuit and return None.
# TODO(ryand): We could restrict txt-txt attention across multiple global prompts, but this is a rare use
# case and would make the logic here significantly more complicated.
return None
device = TorchDevice.choose_torch_device()
# Infer txt_seq_len from the t5_embeddings tensor.
txt_seq_len = regional_text_conditioning.t5_embeddings.shape[1]
# In the attention blocks, the txt seq and img seq are concatenated and then attention is applied.
# Concatenation happens in the following order: [txt_seq, img_seq].
# There are 4 portions of the attention mask to consider as we prepare it:
# 1. txt attends to itself
# 2. txt attends to corresponding regional img
# 3. regional img attends to corresponding txt
# 4. regional img attends to itself
# Initialize empty attention mask.
regional_attention_mask = torch.zeros(
(txt_seq_len + img_seq_len, txt_seq_len + img_seq_len), device=device, dtype=torch.float16
)
for image_mask, t5_embedding_range in zip(
regional_text_conditioning.image_masks, regional_text_conditioning.t5_embedding_ranges, strict=True
):
# 1. txt attends to itself
regional_attention_mask[
t5_embedding_range.start : t5_embedding_range.end, t5_embedding_range.start : t5_embedding_range.end
] = 1.0
if image_mask is not None:
# 2. txt attends to corresponding regional img
# Note that we reshape to (1, img_seq_len) to ensure broadcasting works as desired.
regional_attention_mask[t5_embedding_range.start : t5_embedding_range.end, txt_seq_len:] = (
image_mask.view(1, img_seq_len)
)
# 3. regional img attends to corresponding txt
# Note that we reshape to (img_seq_len, 1) to ensure broadcasting works as desired.
regional_attention_mask[txt_seq_len:, t5_embedding_range.start : t5_embedding_range.end] = (
image_mask.view(img_seq_len, 1)
)
# 4. regional img attends to itself
image_mask = image_mask.view(img_seq_len, 1)
regional_attention_mask[txt_seq_len:, txt_seq_len:] += image_mask @ image_mask.T
else:
# We don't allow attention between non-background image regions and global prompts. This helps to ensure
# that regions focus on their local prompts. We do, however, allow attention between background regions
# and global prompts. If we didn't do this, then the background regions would not attend to any txt
# embeddings, which we found experimentally to cause artifacts.
# 2. global txt attends to background region
# Note that we reshape to (1, img_seq_len) to ensure broadcasting works as desired.
regional_attention_mask[t5_embedding_range.start : t5_embedding_range.end, txt_seq_len:] = (
background_region_mask.view(1, img_seq_len)
)
# 3. background region attends to global txt
# Note that we reshape to (img_seq_len, 1) to ensure broadcasting works as desired.
regional_attention_mask[txt_seq_len:, t5_embedding_range.start : t5_embedding_range.end] = (
background_region_mask.view(img_seq_len, 1)
)
# Allow background regions to attend to themselves.
regional_attention_mask[txt_seq_len:, txt_seq_len:] += background_region_mask.view(img_seq_len, 1)
regional_attention_mask[txt_seq_len:, txt_seq_len:] += background_region_mask.view(1, img_seq_len)
# Convert attention mask to boolean.
regional_attention_mask = regional_attention_mask > 0.5
return regional_attention_mask
@classmethod
def _concat_regional_text_conditioning(
cls,
text_conditionings: list[FluxTextConditioning],
) -> FluxRegionalTextConditioning:
"""Concatenate regional text conditioning data into a single conditioning tensor (with associated masks)."""
concat_t5_embeddings: list[torch.Tensor] = []
concat_t5_embedding_ranges: list[Range] = []
image_masks: list[torch.Tensor | None] = []
# Choose global CLIP embedding.
# Use the first global prompt's CLIP embedding as the global CLIP embedding. If there is no global prompt, use
# the first prompt's CLIP embedding.
global_clip_embedding: torch.Tensor = text_conditionings[0].clip_embeddings
for text_conditioning in text_conditionings:
if text_conditioning.mask is None:
global_clip_embedding = text_conditioning.clip_embeddings
break
cur_t5_embedding_len = 0
for text_conditioning in text_conditionings:
concat_t5_embeddings.append(text_conditioning.t5_embeddings)
concat_t5_embedding_ranges.append(
Range(start=cur_t5_embedding_len, end=cur_t5_embedding_len + text_conditioning.t5_embeddings.shape[1])
)
image_masks.append(text_conditioning.mask)
cur_t5_embedding_len += text_conditioning.t5_embeddings.shape[1]
t5_embeddings = torch.cat(concat_t5_embeddings, dim=1)
# Initialize the txt_ids tensor.
pos_bs, pos_t5_seq_len, _ = t5_embeddings.shape
t5_txt_ids = torch.zeros(
pos_bs, pos_t5_seq_len, 3, dtype=t5_embeddings.dtype, device=TorchDevice.choose_torch_device()
)
return FluxRegionalTextConditioning(
t5_embeddings=t5_embeddings,
clip_embeddings=global_clip_embedding,
t5_txt_ids=t5_txt_ids,
image_masks=image_masks,
t5_embedding_ranges=concat_t5_embedding_ranges,
)
@staticmethod
def preprocess_regional_prompt_mask(
mask: Optional[torch.Tensor], packed_height: int, packed_width: int, dtype: torch.dtype, device: torch.device
) -> torch.Tensor:
"""Preprocess a regional prompt mask to match the target height and width.
If mask is None, returns a mask of all ones with the target height and width.
If mask is not None, resizes the mask to the target height and width using 'nearest' interpolation.
packed_height and packed_width are the target height and width of the mask in the 'packed' latent space.
Returns:
torch.Tensor: The processed mask. shape: (1, 1, packed_height * packed_width).
"""
if mask is None:
return torch.ones((1, 1, packed_height * packed_width), dtype=dtype, device=device)
mask = to_standard_float_mask(mask, out_dtype=dtype)
tf = torchvision.transforms.Resize(
(packed_height, packed_width), interpolation=torchvision.transforms.InterpolationMode.NEAREST
)
# Add a batch dimension to the mask, because torchvision expects shape (batch, channels, h, w).
mask = mask.unsqueeze(0) # Shape: (1, h, w) -> (1, 1, h, w)
resized_mask = tf(mask)
# Flatten the height and width dimensions into a single image_seq_len dimension.
return resized_mask.flatten(start_dim=2)
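To make the four-part mask layout described above concrete, here is a toy sketch (not the InvokeAI API) for a single regional prompt and no global prompt: 2 txt tokens, 4 img tokens, with the region covering the first two img tokens.

import torch

txt_seq_len, img_seq_len = 2, 4
region = torch.tensor([1.0, 1.0, 0.0, 0.0])  # img tokens belonging to the region

mask = torch.zeros(txt_seq_len + img_seq_len, txt_seq_len + img_seq_len)
mask[:txt_seq_len, :txt_seq_len] = 1.0                                      # 1. txt attends to itself
mask[:txt_seq_len, txt_seq_len:] = region.view(1, -1)                       # 2. txt attends to its img region
mask[txt_seq_len:, :txt_seq_len] = region.view(-1, 1)                       # 3. img region attends to its txt
mask[txt_seq_len:, txt_seq_len:] = region.view(-1, 1) @ region.view(1, -1)  # 4. img region attends to itself
restricted_attn_mask = mask > 0.5  # boolean mask, True = attention allowed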

View File

@@ -5,10 +5,10 @@ from einops import rearrange
from torch import Tensor
def attention(q: Tensor, k: Tensor, v: Tensor, pe: Tensor) -> Tensor:
def attention(q: Tensor, k: Tensor, v: Tensor, pe: Tensor, attn_mask: Tensor | None = None) -> Tensor:
q, k = apply_rope(q, k, pe)
x = torch.nn.functional.scaled_dot_product_attention(q, k, v)
x = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
x = rearrange(x, "B H L D -> B L (H D)")
return x
@@ -24,12 +24,12 @@ def rope(pos: Tensor, dim: int, theta: int) -> Tensor:
out = torch.einsum("...n,d->...nd", pos, omega)
out = torch.stack([torch.cos(out), -torch.sin(out), torch.sin(out), torch.cos(out)], dim=-1)
out = rearrange(out, "b n d (i j) -> b n d i j", i=2, j=2)
return out.float()
return out.to(dtype=pos.dtype, device=pos.device)
def apply_rope(xq: Tensor, xk: Tensor, freqs_cis: Tensor) -> tuple[Tensor, Tensor]:
xq_ = xq.float().reshape(*xq.shape[:-1], -1, 1, 2)
xk_ = xk.float().reshape(*xk.shape[:-1], -1, 1, 2)
xq_ = xq.view(*xq.shape[:-1], -1, 1, 2)
xk_ = xk.view(*xk.shape[:-1], -1, 1, 2)
xq_out = freqs_cis[..., 0] * xq_[..., 0] + freqs_cis[..., 1] * xq_[..., 1]
xk_out = freqs_cis[..., 0] * xk_[..., 0] + freqs_cis[..., 1] * xk_[..., 1]
return xq_out.reshape(*xq.shape).type_as(xq), xk_out.reshape(*xk.shape).type_as(xk)
return xq_out.view(*xq.shape), xk_out.view(*xk.shape)
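A minimal check (hypothetical shapes) that torch's scaled_dot_product_attention accepts the boolean attn_mask now threaded through attention() above, where True marks positions that may be attended to:

import torch

B, H, L, D = 1, 2, 6, 8
q, k, v = (torch.randn(B, H, L, D) for _ in range(3))
attn_mask = torch.ones(L, L, dtype=torch.bool)  # True = attention allowed
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
assert out.shape == (B, H, L, D)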

View File

@@ -5,7 +5,11 @@ from dataclasses import dataclass
import torch
from torch import Tensor, nn
from invokeai.backend.flux.custom_block_processor import CustomDoubleStreamBlockProcessor
from invokeai.backend.flux.custom_block_processor import (
CustomDoubleStreamBlockProcessor,
CustomSingleStreamBlockProcessor,
)
from invokeai.backend.flux.extensions.regional_prompting_extension import RegionalPromptingExtension
from invokeai.backend.flux.extensions.xlabs_ip_adapter_extension import XLabsIPAdapterExtension
from invokeai.backend.flux.modules.layers import (
DoubleStreamBlock,
@@ -95,6 +99,7 @@ class Flux(nn.Module):
controlnet_double_block_residuals: list[Tensor] | None,
controlnet_single_block_residuals: list[Tensor] | None,
ip_adapter_extensions: list[XLabsIPAdapterExtension],
regional_prompting_extension: RegionalPromptingExtension,
) -> Tensor:
if img.ndim != 3 or txt.ndim != 3:
raise ValueError("Input img and txt tensors must have 3 dimensions.")
@@ -117,7 +122,6 @@ class Flux(nn.Module):
assert len(controlnet_double_block_residuals) == len(self.double_blocks)
for block_index, block in enumerate(self.double_blocks):
assert isinstance(block, DoubleStreamBlock)
img, txt = CustomDoubleStreamBlockProcessor.custom_double_block_forward(
timestep_index=timestep_index,
total_num_timesteps=total_num_timesteps,
@@ -128,6 +132,7 @@ class Flux(nn.Module):
vec=vec,
pe=pe,
ip_adapter_extensions=ip_adapter_extensions,
regional_prompting_extension=regional_prompting_extension,
)
if controlnet_double_block_residuals is not None:
@@ -140,7 +145,17 @@ class Flux(nn.Module):
assert len(controlnet_single_block_residuals) == len(self.single_blocks)
for block_index, block in enumerate(self.single_blocks):
img = block(img, vec=vec, pe=pe)
assert isinstance(block, SingleStreamBlock)
img = CustomSingleStreamBlockProcessor.custom_single_block_forward(
timestep_index=timestep_index,
total_num_timesteps=total_num_timesteps,
block_index=block_index,
block=block,
img=img,
vec=vec,
pe=pe,
regional_prompting_extension=regional_prompting_extension,
)
if controlnet_single_block_residuals is not None:
img[:, txt.shape[1] :, ...] += controlnet_single_block_residuals[block_index]

View File

@@ -66,10 +66,7 @@ class RMSNorm(torch.nn.Module):
self.scale = nn.Parameter(torch.ones(dim))
def forward(self, x: Tensor):
x_dtype = x.dtype
x = x.float()
rrms = torch.rsqrt(torch.mean(x**2, dim=-1, keepdim=True) + 1e-6)
return (x * rrms).to(dtype=x_dtype) * self.scale
return torch.nn.functional.rms_norm(x, self.scale.shape, self.scale, eps=1e-6)
class QKNorm(torch.nn.Module):
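A quick equivalence sketch for the RMSNorm change above: the fused torch.nn.functional.rms_norm (available in recent PyTorch releases) matches the previous manual implementation up to numerical precision, assuming a float32 input here.

import torch

x = torch.randn(2, 8, dtype=torch.float32)
scale = torch.ones(8)

manual = x * torch.rsqrt(torch.mean(x**2, dim=-1, keepdim=True) + 1e-6) * scale
fused = torch.nn.functional.rms_norm(x, scale.shape, scale, eps=1e-6)
assert torch.allclose(manual, fused, atol=1e-5)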

View File

@@ -0,0 +1,36 @@
from dataclasses import dataclass
import torch
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import Range
@dataclass
class FluxTextConditioning:
t5_embeddings: torch.Tensor
clip_embeddings: torch.Tensor
# If mask is None, the prompt is a global prompt.
mask: torch.Tensor | None
@dataclass
class FluxRegionalTextConditioning:
# Concatenated text embeddings.
# Shape: (1, concatenated_txt_seq_len, 4096)
t5_embeddings: torch.Tensor
# Shape: (1, concatenated_txt_seq_len, 3)
t5_txt_ids: torch.Tensor
# Global CLIP embeddings.
# Shape: (1, 768)
clip_embeddings: torch.Tensor
# A binary mask indicating the regions of the image that the prompt should be applied to. If None, the prompt is a
# global prompt.
# image_masks[i] is the mask for the ith prompt.
# image_masks[i] has shape (1, image_seq_len) and dtype torch.bool.
image_masks: list[torch.Tensor | None]
# List of ranges that represent the embedding ranges for each mask.
# t5_embedding_ranges[i] contains the range of the t5 embeddings that correspond to image_masks[i].
t5_embedding_ranges: list[Range]
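For shape orientation, a toy construction matching the field comments above (hypothetical sequence lengths; real values come from the T5/CLIP encoders), with two prompts where the second is global, so its mask is None:

import torch

t5_regional = torch.randn(1, 256, 4096)  # regional prompt T5 embeddings
t5_global = torch.randn(1, 512, 4096)    # global prompt T5 embeddings
t5_embeddings = torch.cat([t5_regional, t5_global], dim=1)    # (1, 768, 4096)
t5_txt_ids = torch.zeros(1, t5_embeddings.shape[1], 3)        # (1, 768, 3)
clip_embeddings = torch.randn(1, 768)                         # global CLIP embedding
t5_embedding_ranges = [(0, 256), (256, 768)]                  # per-prompt ranges into the concatenated seq
image_masks = [torch.zeros(1, 4096, dtype=torch.bool), None]  # None marks the global prompt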

Binary file not shown.

File diff suppressed because it is too large Load Diff

View File

@@ -469,7 +469,7 @@ class ModelProbe(object):
"""
# scan model
scan_result = scan_file_path(checkpoint)
if scan_result.infected_files != 0:
if scan_result.infected_files != 0 or scan_result.scan_err:
raise Exception("The model {model_name} is potentially infected by malware. Aborting import.")
@@ -485,6 +485,7 @@ MODEL_NAME_TO_PREPROCESSOR = {
"lineart anime": "lineart_anime_image_processor",
"lineart_anime": "lineart_anime_image_processor",
"lineart": "lineart_image_processor",
"soft": "hed_image_processor",
"softedge": "hed_image_processor",
"hed": "hed_image_processor",
"shuffle": "content_shuffle_image_processor",

View File

@@ -44,7 +44,7 @@ def _fast_safetensors_reader(path: str) -> Dict[str, torch.Tensor]:
return checkpoint
def read_checkpoint_meta(path: Union[str, Path], scan: bool = False) -> Dict[str, torch.Tensor]:
def read_checkpoint_meta(path: Union[str, Path], scan: bool = True) -> Dict[str, torch.Tensor]:
if str(path).endswith(".safetensors"):
try:
path_str = path.as_posix() if isinstance(path, Path) else path
@@ -52,16 +52,15 @@ def read_checkpoint_meta(path: Union[str, Path], scan: bool = False) -> Dict[str
except Exception:
# TODO: create issue for support "meta"?
checkpoint = safetensors.torch.load_file(path, device="cpu")
elif str(path).endswith(".gguf"):
# The GGUF reader used here uses numpy memmap, so these tensors are not loaded into memory during this function
checkpoint = gguf_sd_loader(Path(path), compute_dtype=torch.float32)
else:
if scan:
scan_result = scan_file_path(path)
if scan_result.infected_files != 0:
if scan_result.infected_files != 0 or scan_result.scan_err:
raise Exception(f'The model file "{path}" is potentially infected by malware. Aborting import.')
if str(path).endswith(".gguf"):
# The GGUF reader used here uses numpy memmap, so these tensors are not loaded into memory during this function
checkpoint = gguf_sd_loader(Path(path), compute_dtype=torch.float32)
else:
checkpoint = torch.load(path, map_location=torch.device("meta"))
checkpoint = torch.load(path, map_location=torch.device("meta"))
return checkpoint
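The scan-before-load pattern tightened in these hunks can be summarized in a standalone sketch (assuming picklescan's scan_file_path and the infected_files/scan_err fields used above; safe_torch_load is a hypothetical helper):

from pathlib import Path
from typing import Union

import torch
from picklescan.scanner import scan_file_path

def safe_torch_load(path: Union[str, Path]):
    # Refuse to load a pickle-based checkpoint if the scan finds infected files or fails outright.
    scan_result = scan_file_path(str(path))
    if scan_result.infected_files != 0 or scan_result.scan_err:
        raise Exception(f'The model file "{path}" is potentially infected by malware. Aborting load.')
    return torch.load(path, map_location="cpu")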

View File

@@ -1,3 +1,3 @@
# Invoke UI
<https://invoke-ai.github.io/InvokeAI/contributing/frontend/OVERVIEW/>
<https://invoke-ai.github.io/InvokeAI/contributing/frontend/>

View File

@@ -58,7 +58,7 @@
"@dagrejs/dagre": "^1.1.4",
"@dagrejs/graphlib": "^2.2.4",
"@fontsource-variable/inter": "^5.1.0",
"@invoke-ai/ui-library": "^0.0.43",
"@invoke-ai/ui-library": "^0.0.44",
"@nanostores/react": "^0.7.3",
"@reduxjs/toolkit": "2.2.3",
"@roarr/browser-log-writer": "^1.3.0",

View File

@@ -24,8 +24,8 @@ dependencies:
specifier: ^5.1.0
version: 5.1.0
'@invoke-ai/ui-library':
specifier: ^0.0.43
version: 0.0.43(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1)
specifier: ^0.0.44
version: 0.0.44(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1)
'@nanostores/react':
specifier: ^0.7.3
version: 0.7.3(nanostores@0.11.3)(react@18.3.1)
@@ -515,8 +515,8 @@ packages:
resolution: {integrity: sha512-MV6D4VLRIHr4PkW4zMyqfrNS1mPlCTiCXwvYGtDFQYr+xHFfonhAuf9WjsSc0nyp2m0OdkSLnzmVKkZFLo25Tg==}
dev: false
/@chakra-ui/anatomy@2.3.4:
resolution: {integrity: sha512-fFIYN7L276gw0Q7/ikMMlZxP7mvnjRaWJ7f3Jsf9VtDOi6eAYIBRrhQe6+SZ0PGmoOkRaBc7gSE5oeIbgFFyrw==}
/@chakra-ui/anatomy@2.3.5:
resolution: {integrity: sha512-3im33cUOxCbISjaBlINE2u8BOwJSCdzpjCX0H+0JxK2xz26UaVA5xeI3NYHUoxDnr/QIrgfrllGxS0szYwOcyg==}
dev: false
/@chakra-ui/breakpoint-utils@2.0.8:
@@ -573,12 +573,12 @@ packages:
react: 18.3.1
dev: false
/@chakra-ui/hooks@2.4.2(react@18.3.1):
resolution: {integrity: sha512-LRKiVE1oA7afT5tbbSKAy7Uas2xFHE6IkrQdbhWCHmkHBUtPvjQQDgwtnd4IRZPmoEfNGwoJ/MQpwOM/NRTTwA==}
/@chakra-ui/hooks@2.4.3(react@18.3.1):
resolution: {integrity: sha512-Sr2zsoTZw3p7HbrUy4aLpTIkE2XXUelAUgg3NGwMzrmx75bE0qVyiuuTFOuyEzGxYVV2Fe8QtcKKilm6RwzTGg==}
peerDependencies:
react: '>=18'
dependencies:
'@chakra-ui/utils': 2.2.2(react@18.3.1)
'@chakra-ui/utils': 2.2.3(react@18.3.1)
'@zag-js/element-size': 0.31.1
copy-to-clipboard: 3.3.3
framesync: 6.1.2
@@ -596,13 +596,13 @@ packages:
react: 18.3.1
dev: false
/@chakra-ui/icons@2.2.4(@chakra-ui/react@2.10.2)(react@18.3.1):
/@chakra-ui/icons@2.2.4(@chakra-ui/react@2.10.4)(react@18.3.1):
resolution: {integrity: sha512-l5QdBgwrAg3Sc2BRqtNkJpfuLw/pWRDwwT58J6c4PqQT6wzXxyNa8Q0PForu1ltB5qEiFb1kxr/F/HO1EwNa6g==}
peerDependencies:
'@chakra-ui/react': '>=2.0.0'
react: '>=18'
dependencies:
'@chakra-ui/react': 2.10.2(@emotion/react@11.13.3)(@emotion/styled@11.13.0)(@types/react@18.3.11)(framer-motion@11.10.0)(react-dom@18.3.1)(react@18.3.1)
'@chakra-ui/react': 2.10.4(@emotion/react@11.13.3)(@emotion/styled@11.13.0)(@types/react@18.3.11)(framer-motion@11.10.0)(react-dom@18.3.1)(react@18.3.1)
react: 18.3.1
dev: false
@@ -825,8 +825,8 @@ packages:
react: 18.3.1
dev: false
/@chakra-ui/react@2.10.2(@emotion/react@11.13.3)(@emotion/styled@11.13.0)(@types/react@18.3.11)(framer-motion@11.10.0)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-TfIHTqTlxTHYJZBtpiR5EZasPUrLYKJxdbHkdOJb5G1OQ+2c5kKl5XA7c2pMtsEptzb7KxAAIB62t3hxdfWp1w==}
/@chakra-ui/react@2.10.4(@emotion/react@11.13.3)(@emotion/styled@11.13.0)(@types/react@18.3.11)(framer-motion@11.10.0)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-XyRWnuZ1Uw7Mlj5pKUGO5/WhnIHP/EOrpy6lGZC1yWlkd0eIfIpYMZ1ALTZx4KPEdbBaes48dgiMT2ROCqLhkA==}
peerDependencies:
'@emotion/react': '>=11'
'@emotion/styled': '>=11'
@@ -834,10 +834,10 @@ packages:
react: '>=18'
react-dom: '>=18'
dependencies:
'@chakra-ui/hooks': 2.4.2(react@18.3.1)
'@chakra-ui/styled-system': 2.11.2(react@18.3.1)
'@chakra-ui/theme': 3.4.6(@chakra-ui/styled-system@2.11.2)(react@18.3.1)
'@chakra-ui/utils': 2.2.2(react@18.3.1)
'@chakra-ui/hooks': 2.4.3(react@18.3.1)
'@chakra-ui/styled-system': 2.12.1(react@18.3.1)
'@chakra-ui/theme': 3.4.7(@chakra-ui/styled-system@2.12.1)(react@18.3.1)
'@chakra-ui/utils': 2.2.3(react@18.3.1)
'@emotion/react': 11.13.3(@types/react@18.3.11)(react@18.3.1)
'@emotion/styled': 11.13.0(@emotion/react@11.13.3)(@types/react@18.3.11)(react@18.3.1)
'@popperjs/core': 2.11.8
@@ -868,10 +868,10 @@ packages:
react: 18.3.1
dev: false
/@chakra-ui/styled-system@2.11.2(react@18.3.1):
resolution: {integrity: sha512-y++z2Uop+hjfZX9mbH88F1ikazPv32asD2er56zMJBemUAzweXnHTpiCQbluEDSUDhqmghVZAdb+5L4XLbsRxA==}
/@chakra-ui/styled-system@2.12.1(react@18.3.1):
resolution: {integrity: sha512-DQph1nDiCPtgze7nDe0a36530ByXb5VpPosKGyWMvKocVeZJcDtYG6XM0+V5a0wKuFBXsViBBRIFUTiUesJAcg==}
dependencies:
'@chakra-ui/utils': 2.2.2(react@18.3.1)
'@chakra-ui/utils': 2.2.3(react@18.3.1)
csstype: 3.1.3
transitivePeerDependencies:
- react
@@ -915,14 +915,14 @@ packages:
color2k: 2.0.3
dev: false
/@chakra-ui/theme-tools@2.2.6(@chakra-ui/styled-system@2.11.2)(react@18.3.1):
resolution: {integrity: sha512-3UhKPyzKbV3l/bg1iQN9PBvffYp+EBOoYMUaeTUdieQRPFzo2jbYR0lNCxqv8h5aGM/k54nCHU2M/GStyi9F2A==}
/@chakra-ui/theme-tools@2.2.7(@chakra-ui/styled-system@2.12.1)(react@18.3.1):
resolution: {integrity: sha512-K/VJd0QcnKik7m+qZTkggqNLep6+MPUu8IP5TUpHsnSM5R/RVjsJIR7gO8IZVAIMIGLLTIhGshHxeMekqv6LcQ==}
peerDependencies:
'@chakra-ui/styled-system': '>=2.0.0'
dependencies:
'@chakra-ui/anatomy': 2.3.4
'@chakra-ui/styled-system': 2.11.2(react@18.3.1)
'@chakra-ui/utils': 2.2.2(react@18.3.1)
'@chakra-ui/anatomy': 2.3.5
'@chakra-ui/styled-system': 2.12.1(react@18.3.1)
'@chakra-ui/utils': 2.2.3(react@18.3.1)
color2k: 2.0.3
transitivePeerDependencies:
- react
@@ -948,15 +948,15 @@ packages:
'@chakra-ui/theme-tools': 2.1.2(@chakra-ui/styled-system@2.9.2)
dev: false
/@chakra-ui/theme@3.4.6(@chakra-ui/styled-system@2.11.2)(react@18.3.1):
resolution: {integrity: sha512-ZwFBLfiMC3URwaO31ONXoKH9k0TX0OW3UjdPF3EQkQpYyrk/fm36GkkzajjtdpWEd7rzDLRsQjPmvwNaSoNDtg==}
/@chakra-ui/theme@3.4.7(@chakra-ui/styled-system@2.12.1)(react@18.3.1):
resolution: {integrity: sha512-pfewthgZTFNUYeUwGvhPQO/FTIyf375cFV1AT8N1y0aJiw4KDe7YTGm7p0aFy4AwAjH2ydMgeEx/lua4tx8qyQ==}
peerDependencies:
'@chakra-ui/styled-system': '>=2.8.0'
dependencies:
'@chakra-ui/anatomy': 2.3.4
'@chakra-ui/styled-system': 2.11.2(react@18.3.1)
'@chakra-ui/theme-tools': 2.2.6(@chakra-ui/styled-system@2.11.2)(react@18.3.1)
'@chakra-ui/utils': 2.2.2(react@18.3.1)
'@chakra-ui/anatomy': 2.3.5
'@chakra-ui/styled-system': 2.12.1(react@18.3.1)
'@chakra-ui/theme-tools': 2.2.7(@chakra-ui/styled-system@2.12.1)(react@18.3.1)
'@chakra-ui/utils': 2.2.3(react@18.3.1)
transitivePeerDependencies:
- react
dev: false
@@ -981,8 +981,8 @@ packages:
lodash.mergewith: 4.6.2
dev: false
/@chakra-ui/utils@2.2.2(react@18.3.1):
resolution: {integrity: sha512-jUPLT0JzRMWxpdzH6c+t0YMJYrvc5CLericgITV3zDSXblkfx3DsYXqU11DJTSGZI9dUKzM1Wd0Wswn4eJwvFQ==}
/@chakra-ui/utils@2.2.3(react@18.3.1):
resolution: {integrity: sha512-cldoCQuexZ6e07/9hWHKD4l1QXXlM1Nax9tuQOBvVf/EgwNZt3nZu8zZRDFlhAOKCTQDkmpLTTu+eXXjChNQOw==}
peerDependencies:
react: '>=16.8.0'
dependencies:
@@ -1675,20 +1675,20 @@ packages:
prettier: 3.3.3
dev: true
/@invoke-ai/ui-library@0.0.43(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-t3fPYyks07ue3dEBPJuTHbeDLnDckDCOrtvc07mMDbLOnlPEZ0StaeiNGH+oO8qLzAuMAlSTdswgHfzTc2MmPw==}
/@invoke-ai/ui-library@0.0.44(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-PDseHmdr8oi8cmrpx3UwIYHn4NduAJX2R0pM0pyM54xrCMPMgYiCbC/eOs8Gt4fBc2ziiPZ9UGoW4evnE3YJsg==}
peerDependencies:
'@fontsource-variable/inter': ^5.0.16
react: ^18.2.0
react-dom: ^18.2.0
dependencies:
'@chakra-ui/anatomy': 2.3.4
'@chakra-ui/icons': 2.2.4(@chakra-ui/react@2.10.2)(react@18.3.1)
'@chakra-ui/anatomy': 2.2.2
'@chakra-ui/icons': 2.2.4(@chakra-ui/react@2.10.4)(react@18.3.1)
'@chakra-ui/layout': 2.3.1(@chakra-ui/system@2.6.2)(react@18.3.1)
'@chakra-ui/portal': 2.1.0(react-dom@18.3.1)(react@18.3.1)
'@chakra-ui/react': 2.10.2(@emotion/react@11.13.3)(@emotion/styled@11.13.0)(@types/react@18.3.11)(framer-motion@11.10.0)(react-dom@18.3.1)(react@18.3.1)
'@chakra-ui/styled-system': 2.11.2(react@18.3.1)
'@chakra-ui/theme-tools': 2.2.6(@chakra-ui/styled-system@2.11.2)(react@18.3.1)
'@chakra-ui/react': 2.10.4(@emotion/react@11.13.3)(@emotion/styled@11.13.0)(@types/react@18.3.11)(framer-motion@11.10.0)(react-dom@18.3.1)(react@18.3.1)
'@chakra-ui/styled-system': 2.9.2
'@chakra-ui/theme-tools': 2.1.2(@chakra-ui/styled-system@2.9.2)
'@emotion/react': 11.13.3(@types/react@18.3.11)(react@18.3.1)
'@emotion/styled': 11.13.0(@emotion/react@11.13.3)(@types/react@18.3.11)(react@18.3.1)
'@fontsource-variable/inter': 5.1.0

View File

@@ -96,7 +96,9 @@
"new": "Neu",
"ok": "OK",
"close": "Schließen",
"clipboard": "Zwischenablage"
"clipboard": "Zwischenablage",
"generating": "Generieren",
"loadingModel": "Lade Modell"
},
"gallery": {
"galleryImageSize": "Bildgröße",
@@ -591,7 +593,15 @@
"loraTriggerPhrases": "LoRA-Auslösephrasen",
"installingBundle": "Bündel wird installiert",
"triggerPhrases": "Auslösephrasen",
"mainModelTriggerPhrases": "Hauptmodell-Auslösephrasen"
"mainModelTriggerPhrases": "Hauptmodell-Auslösephrasen",
"noDefaultSettings": "Für dieses Modell sind keine Standardeinstellungen konfiguriert. Besuchen Sie den Modell-Manager, um Standardeinstellungen hinzuzufügen.",
"defaultSettingsOutOfSync": "Einige Einstellungen stimmen nicht mit den Standardeinstellungen des Modells überein:",
"clipLEmbed": "CLIP-L einbetten",
"clipGEmbed": "CLIP-G einbetten",
"hfTokenLabel": "HuggingFace-Token (für einige Modelle erforderlich)",
"hfTokenHelperText": "Für die Nutzung einiger Modelle ist ein HF-Token erforderlich. Klicken Sie hier, um Ihr Token zu erstellen oder zu erhalten.",
"hfForbidden": "Sie haben keinen Zugriff auf dieses HF-Modell",
"hfTokenInvalid": "Ungültiges oder fehlendes HF-Token"
},
"parameters": {
"images": "Bilder",
@@ -632,12 +642,6 @@
"remixImage": "Remix des Bilds erstellen",
"imageActions": "Weitere Bildaktionen",
"invoke": {
"layer": {
"t2iAdapterIncompatibleBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, Bbox-Breite ist {{width}}",
"t2iAdapterIncompatibleScaledBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, Skalierte Bbox-Breite ist {{width}}",
"t2iAdapterIncompatibleScaledBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, Skalierte Bbox-Höhe ist {{height}}",
"t2iAdapterIncompatibleBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, Bbox-Höhe ist {{height}}"
},
"fluxModelIncompatibleScaledBboxWidth": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), Skalierte Bbox-Breite ist {{width}}",
"fluxModelIncompatibleScaledBboxHeight": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), Skalierte Bbox-Höhe ist {{height}}",
"fluxModelIncompatibleBboxWidth": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), Bbox-Breite ist {{width}}",
@@ -841,7 +845,8 @@
"upscaling": "Hochskalierung",
"canvas": "Leinwand",
"prompts_one": "Prompt",
"prompts_other": "Prompts"
"prompts_other": "Prompts",
"batchSize": "Stapelgröße"
},
"metadata": {
"negativePrompt": "Negativ Beschreibung",
@@ -1081,6 +1086,21 @@
},
"patchmatchDownScaleSize": {
"heading": "Herunterskalieren"
},
"paramHeight": {
"heading": "Höhe",
"paragraphs": [
"Höhe des generierten Bildes. Muss ein Vielfaches von 8 sein."
]
},
"paramUpscaleMethod": {
"heading": "Vergrößerungsmethode",
"paragraphs": [
"Methode zum Hochskalieren des Bildes für High Resolution Fix."
]
},
"paramHrf": {
"heading": "High Resolution Fix aktivieren"
}
},
"invocationCache": {
@@ -1443,7 +1463,6 @@
"deleteReferenceImage": "Referenzbild löschen",
"referenceImage": "Referenzbild",
"opacity": "Opazität",
"resetCanvas": "Leinwand zurücksetzen",
"removeBookmark": "Lesezeichen entfernen",
"rasterLayer": "Raster-Ebene",
"rasterLayers_withCount_visible": "Raster-Ebenen ({{count}})",

View File

@@ -176,7 +176,8 @@
"reset": "Reset",
"none": "None",
"new": "New",
"generating": "Generating"
"generating": "Generating",
"warnings": "Warnings"
},
"hrf": {
"hrf": "High Resolution Fix",
@@ -263,7 +264,8 @@
"iterations_one": "Iteration",
"iterations_other": "Iterations",
"generations_one": "Generation",
"generations_other": "Generations"
"generations_other": "Generations",
"batchSize": "Batch Size"
},
"invocationCache": {
"invocationCache": "Invocation Cache",
@@ -977,6 +979,8 @@
"zoomOutNodes": "Zoom Out",
"betaDesc": "This invocation is in beta. Until it is stable, it may have breaking changes during app updates. We plan to support this invocation long-term.",
"prototypeDesc": "This invocation is a prototype. It may have breaking changes during app updates and may be removed at any time.",
"internalDesc": "This invocation is used internally by Invoke. It may have breaking changes during app updates and may be removed at any time.",
"specialDesc": "This invocation some special handling in the app. For example, Batch nodes are used to queue multiple graphs from a single workflow.",
"imageAccessError": "Unable to find image {{image_name}}, resetting to default",
"boardAccessError": "Unable to find board {{board_id}}, resetting to default",
"modelAccessError": "Unable to find model {{key}}, resetting to default",
@@ -1035,20 +1039,7 @@
"canvasIsSelectingObject": "Canvas is busy (selecting object)",
"noPrompts": "No prompts generated",
"noNodesInGraph": "No nodes in graph",
"systemDisconnected": "System disconnected",
"layer": {
"controlAdapterNoModelSelected": "no Control Adapter model selected",
"controlAdapterIncompatibleBaseModel": "incompatible Control Adapter base model",
"t2iAdapterIncompatibleBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, bbox width is {{width}}",
"t2iAdapterIncompatibleBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, bbox height is {{height}}",
"t2iAdapterIncompatibleScaledBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, scaled bbox width is {{width}}",
"t2iAdapterIncompatibleScaledBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, scaled bbox height is {{height}}",
"ipAdapterNoModelSelected": "no IP adapter selected",
"ipAdapterIncompatibleBaseModel": "incompatible IP Adapter base model",
"ipAdapterNoImageSelected": "no IP Adapter image selected",
"rgNoPromptsOrIPAdapters": "no text prompts or IP Adapters",
"rgNoRegion": "no region selected"
}
"systemDisconnected": "System disconnected"
},
"maskBlur": "Mask Blur",
"negativePromptPlaceholder": "Negative Prompt",
@@ -1316,8 +1307,9 @@
"controlNetBeginEnd": {
"heading": "Begin / End Step Percentage",
"paragraphs": [
"The part of the of the denoising process that will have the Control Adapter applied.",
"Generally, Control Adapters applied at the start of the process guide composition, and Control Adapters applied at the end guide details."
"This setting determines which portion of the denoising (generation) process incorporates the guidance from this layer.",
"• Start Step (%): Specifies when to begin applying the guidance from this layer during the generation process.",
"• End Step (%): Specifies when to stop applying this layer's guidance and revert general guidance from the model and other settings."
]
},
"controlNetControlMode": {
@@ -1335,13 +1327,15 @@
"paragraphs": ["Method to fit Control Adapter's input image size to the output generation size."]
},
"ipAdapterMethod": {
"heading": "Method",
"paragraphs": ["Method by which to apply the current IP Adapter."]
"heading": "Mode",
"paragraphs": ["The mode defines how the reference image will guide the generation process."]
},
"controlNetWeight": {
"heading": "Weight",
"paragraphs": [
"Weight of the Control Adapter. Higher weight will lead to larger impacts on the final image."
"Adjusts how strongly the layer influences the generation process",
"• Higher Weight (.75-2): Creates a more significant impact on the final result.",
"• Lower Weight (0-.75): Creates a smaller impact on the final result."
]
},
"dynamicPrompts": {
@@ -1663,7 +1657,6 @@
"newControlLayerError": "Problem Creating Control Layer",
"newRasterLayerOk": "Created Raster Layer",
"newRasterLayerError": "Problem Creating Raster Layer",
"newFromImage": "New from Image",
"pullBboxIntoLayerOk": "Bbox Pulled Into Layer",
"pullBboxIntoLayerError": "Problem Pulling BBox Into Layer",
"pullBboxIntoReferenceImageOk": "Bbox Pulled Into ReferenceImage",
@@ -1676,7 +1669,7 @@
"mergingLayers": "Merging layers",
"clearHistory": "Clear History",
"bboxOverlay": "Show Bbox Overlay",
"resetCanvas": "Reset Canvas",
"newSession": "New Session",
"clearCaches": "Clear Caches",
"recalculateRects": "Recalculate Rects",
"clipToBbox": "Clip Strokes to Bbox",
@@ -1708,8 +1701,12 @@
"controlLayer": "Control Layer",
"inpaintMask": "Inpaint Mask",
"regionalGuidance": "Regional Guidance",
"canvasAsRasterLayer": "$t(controlLayers.canvas) as $t(controlLayers.rasterLayer)",
"canvasAsControlLayer": "$t(controlLayers.canvas) as $t(controlLayers.controlLayer)",
"referenceImageRegional": "Reference Image (Regional)",
"referenceImageGlobal": "Reference Image (Global)",
"asRasterLayer": "As $t(controlLayers.rasterLayer)",
"asRasterLayerResize": "As $t(controlLayers.rasterLayer) (Resize)",
"asControlLayer": "As $t(controlLayers.controlLayer)",
"asControlLayerResize": "As $t(controlLayers.controlLayer) (Resize)",
"referenceImage": "Reference Image",
"regionalReferenceImage": "Regional Reference Image",
"globalReferenceImage": "Global Reference Image",
@@ -1777,6 +1774,7 @@
"pullBboxIntoLayer": "Pull Bbox into Layer",
"pullBboxIntoReferenceImage": "Pull Bbox into Reference Image",
"showProgressOnCanvas": "Show Progress on Canvas",
"useImage": "Use Image",
"prompt": "Prompt",
"negativePrompt": "Negative Prompt",
"beginEndStepPercentShort": "Begin/End %",
@@ -1785,8 +1783,26 @@
"newGallerySessionDesc": "This will clear the canvas and all settings except for your model selection. Generations will be sent to the gallery.",
"newCanvasSession": "New Canvas Session",
"newCanvasSessionDesc": "This will clear the canvas and all settings except for your model selection. Generations will be staged on the canvas.",
"resetCanvasLayers": "Reset Canvas Layers",
"resetGenerationSettings": "Reset Generation Settings",
"replaceCurrent": "Replace Current",
"controlLayerEmptyState": "<UploadButton>Upload an image</UploadButton>, drag an image from the <GalleryButton>gallery</GalleryButton> onto this layer, or draw on the canvas to get started.",
"referenceImageEmptyState": "<UploadButton>Upload an image</UploadButton> or drag an image from the <GalleryButton>gallery</GalleryButton> onto this layer to get started.",
"warnings": {
"problemsFound": "Problems found",
"unsupportedModel": "layer not supported for selected base model",
"controlAdapterNoModelSelected": "no Control Layer model selected",
"controlAdapterIncompatibleBaseModel": "incompatible Control Layer base model",
"controlAdapterNoControl": "no control selected/drawn",
"ipAdapterNoModelSelected": "no Reference Image model selected",
"ipAdapterIncompatibleBaseModel": "incompatible Reference Image base model",
"ipAdapterNoImageSelected": "no Reference Image image selected",
"rgNoPromptsOrIPAdapters": "no text prompts or Reference Images",
"rgNegativePromptNotSupported": "Negative Prompt not supported for selected base model",
"rgReferenceImagesNotSupported": "regional Reference Images not supported for selected base model",
"rgAutoNegativeNotSupported": "Auto-Negative not supported for selected base model",
"rgNoRegion": "no region drawn"
},
"controlMode": {
"controlMode": "Control Mode",
"balanced": "Balanced (recommended)",
@@ -1795,10 +1811,13 @@
"megaControl": "Mega Control"
},
"ipAdapterMethod": {
"ipAdapterMethod": "IP Adapter Method",
"ipAdapterMethod": "Mode",
"full": "Style and Composition",
"fullDesc": "Applies visual style (colors, textures) & composition (layout, structure).",
"style": "Style Only",
"composition": "Composition Only"
"styleDesc": "Applies visual style (colors, textures) without considering its layout.",
"composition": "Composition Only",
"compositionDesc": "Replicates layout & structure while ignoring the reference's style."
},
"fill": {
"fillColor": "Fill Color",
@@ -2114,11 +2133,73 @@
"whatsNew": {
"whatsNewInInvoke": "What's New in Invoke",
"items": [
"<StrongComponent>SD 3.5</StrongComponent>: Support for SD 3.5 Medium and Large.",
"<StrongComponent>Canvas</StrongComponent>: Streamlined Control Layer processing and improved default Control settings."
"<StrongComponent>FLUX Regional Guidance (beta)</StrongComponent>: Our beta release of FLUX Regional Guidance is live for regional prompt control.",
"<StrongComponent>Various UX Improvements</StrongComponent>: A number of small UX and Quality of Life improvements throughout the app."
],
"readReleaseNotes": "Read Release Notes",
"watchRecentReleaseVideos": "Watch Recent Release Videos",
"watchUiUpdatesOverview": "Watch UI Updates Overview"
},
"supportVideos": {
"supportVideos": "Support Videos",
"gettingStarted": "Getting Started",
"controlCanvas": "Control Canvas",
"watch": "Watch",
"studioSessionsDesc1": "Check out the <StudioSessionsPlaylistLink /> for Invoke deep dives.",
"studioSessionsDesc2": "Join our <DiscordLink /> to participate in the live sessions and ask questions. Sessions are uploaded to the playlist the following week.",
"videos": {
"creatingYourFirstImage": {
"title": "Creating Your First Image",
"description": "Introduction to creating an image from scratch using Invoke's tools."
},
"usingControlLayersAndReferenceGuides": {
"title": "Using Control Layers and Reference Guides",
"description": "Learn how to guide your image creation with control layers and reference images."
},
"understandingImageToImageAndDenoising": {
"title": "Understanding Image-to-Image and Denoising",
"description": "Overview of image-to-image transformations and denoising in Invoke."
},
"exploringAIModelsAndConceptAdapters": {
"title": "Exploring AI Models and Concept Adapters",
"description": "Dive into AI models and how to use concept adapters for creative control."
},
"creatingAndComposingOnInvokesControlCanvas": {
"title": "Creating and Composing on Invoke's Control Canvas",
"description": "Learn to compose images using Invoke's control canvas."
},
"upscaling": {
"title": "Upscaling",
"description": "How to upscale images with Invoke's tools to enhance resolution."
},
"howDoIGenerateAndSaveToTheGallery": {
"title": "How Do I Generate and Save to the Gallery?",
"description": "Steps to generate and save images to the gallery."
},
"howDoIEditOnTheCanvas": {
"title": "How Do I Edit on the Canvas?",
"description": "Guide to editing images directly on the canvas."
},
"howDoIDoImageToImageTransformation": {
"title": "How Do I Do Image-to-Image Transformation?",
"description": "Tutorial on performing image-to-image transformations in Invoke."
},
"howDoIUseControlNetsAndControlLayers": {
"title": "How Do I Use Control Nets and Control Layers?",
"description": "Learn to apply control layers and controlnets to your images."
},
"howDoIUseGlobalIPAdaptersAndReferenceImages": {
"title": "How Do I Use Global IP Adapters and Reference Images?",
"description": "Introduction to adding reference images and global IP adapters."
},
"howDoIUseInpaintMasks": {
"title": "How Do I Use Inpaint Masks?",
"description": "How to apply inpaint masks for image correction and variation."
},
"howDoIOutpaint": {
"title": "How Do I Outpaint?",
"description": "Guide to outpainting beyond the original image borders."
}
}
}
}

View File

@@ -317,19 +317,6 @@
"info": "Info",
"showOptionsPanel": "Afficher le panneau latéral (O ou T)",
"invoke": {
"layer": {
"rgNoPromptsOrIPAdapters": "aucun prompts ou IP Adapters",
"t2iAdapterIncompatibleScaledBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, la largeur de la bounding box mise à l'échelle est {{width}}",
"t2iAdapterIncompatibleScaledBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, la hauteur de la bounding box mise à l'échelle est {{height}}",
"ipAdapterNoModelSelected": "aucun IP adapter sélectionné",
"ipAdapterNoImageSelected": "aucune image d'IP adapter sélectionnée",
"controlAdapterIncompatibleBaseModel": "modèle de base de Control Adapter incompatible",
"t2iAdapterIncompatibleBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, la hauteur de la bounding box est {{height}}",
"t2iAdapterIncompatibleBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, la largeur de la bounding box est {{width}}",
"ipAdapterIncompatibleBaseModel": "modèle de base d'IP adapter incompatible",
"rgNoRegion": "aucune zone sélectionnée",
"controlAdapterNoModelSelected": "aucun modèle de Control Adapter sélectionné"
},
"noPrompts": "Aucun prompts généré",
"missingInputForField": "{{nodeLabel}} -> {{fieldLabel}} entrée manquante",
"missingFieldTemplate": "Modèle de champ manquant",
@@ -1985,7 +1972,6 @@
"inpaintMask_withCount_many": "Remplir les masques",
"inpaintMask_withCount_other": "Remplir les masques",
"newImg2ImgCanvasFromImage": "Nouvelle Img2Img à partir de l'image",
"resetCanvas": "Réinitialiser la Toile",
"bboxOverlay": "Afficher la superposition des Bounding Box",
"moveToFront": "Déplacer vers le permier plan",
"moveToBack": "Déplacer vers l'arrière plan",
@@ -2034,7 +2020,6 @@
"help2": "Commencez par un point <Bold>Inclure</Bold> au sein de l'objet cible. Ajoutez d'autres points pour affiner la sélection. Moins de points produisent généralement de meilleurs résultats.",
"help3": "Inversez la sélection pour sélectionner tout sauf l'objet cible."
},
"canvasAsControlLayer": "$t(controlLayers.canvas) en tant que $t(controlLayers.controlLayer)",
"convertRegionalGuidanceTo": "Convertir $t(controlLayers.regionalGuidance) vers",
"copyRasterLayerTo": "Copier $t(controlLayers.rasterLayer) vers",
"newControlLayer": "Nouveau $t(controlLayers.controlLayer)",
@@ -2044,8 +2029,7 @@
"convertInpaintMaskTo": "Convertir $t(controlLayers.inpaintMask) vers",
"copyControlLayerTo": "Copier $t(controlLayers.controlLayer) vers",
"newInpaintMask": "Nouveau $t(controlLayers.inpaintMask)",
"newRasterLayer": "Nouveau $t(controlLayers.rasterLayer)",
"canvasAsRasterLayer": "$t(controlLayers.canvas) en tant que $t(controlLayers.rasterLayer)"
"newRasterLayer": "Nouveau $t(controlLayers.rasterLayer)"
},
"upscaling": {
"exceedsMaxSizeDetails": "La limite maximale d'agrandissement est de {{maxUpscaleDimension}}x{{maxUpscaleDimension}} pixels. Veuillez essayer une image plus petite ou réduire votre sélection d'échelle.",

View File

@@ -96,7 +96,8 @@
"clipboard": "Appunti",
"ok": "Ok",
"generating": "Generazione",
"loadingModel": "Caricamento del modello"
"loadingModel": "Caricamento del modello",
"warnings": "Avvisi"
},
"gallery": {
"galleryImageSize": "Dimensione dell'immagine",
@@ -662,21 +663,8 @@
"addingImagesTo": "Aggiungi immagini a",
"systemDisconnected": "Sistema disconnesso",
"missingNodeTemplate": "Modello di nodo mancante",
"missingInputForField": "{{nodeLabel}} -> {{fieldLabel}} ingresso mancante",
"missingInputForField": "{{nodeLabel}} -> {{fieldLabel}}: ingresso mancante",
"missingFieldTemplate": "Modello di campo mancante",
"layer": {
"controlAdapterNoModelSelected": "Nessun modello di adattatore di controllo selezionato",
"controlAdapterIncompatibleBaseModel": "Il modello base dell'adattatore di controllo non è compatibile",
"ipAdapterNoModelSelected": "Nessun adattatore IP selezionato",
"ipAdapterIncompatibleBaseModel": "Il modello base dell'adattatore IP non è compatibile",
"ipAdapterNoImageSelected": "Nessuna immagine dell'adattatore IP selezionata",
"rgNoPromptsOrIPAdapters": "Nessun prompt o adattatore IP",
"rgNoRegion": "Nessuna regione selezionata",
"t2iAdapterIncompatibleBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, larghezza riquadro è {{width}}",
"t2iAdapterIncompatibleBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, altezza riquadro è {{height}}",
"t2iAdapterIncompatibleScaledBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, larghezza del riquadro scalato {{width}}",
"t2iAdapterIncompatibleScaledBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, altezza del riquadro scalato {{height}}"
},
"fluxModelIncompatibleBboxHeight": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), altezza riquadro è {{height}}",
"fluxModelIncompatibleBboxWidth": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), larghezza riquadro è {{width}}",
"fluxModelIncompatibleScaledBboxWidth": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), larghezza del riquadro scalato è {{width}}",
@@ -684,10 +672,14 @@
"noT5EncoderModelSelected": "Nessun modello di encoder T5 selezionato per la generazione con FLUX",
"noCLIPEmbedModelSelected": "Nessun modello CLIP Embed selezionato per la generazione con FLUX",
"noFLUXVAEModelSelected": "Nessun modello VAE selezionato per la generazione con FLUX",
"canvasIsTransforming": "La tela sta trasformando",
"canvasIsRasterizing": "La tela sta rasterizzando",
"canvasIsCompositing": "La tela è in fase di composizione",
"canvasIsFiltering": "La tela sta filtrando"
"canvasIsTransforming": "La tela è occupata (sta trasformando)",
"canvasIsRasterizing": "La tela è occupata (sta rasterizzando)",
"canvasIsCompositing": "La tela è occupata (in composizione)",
"canvasIsFiltering": "La tela è occupata (sta filtrando)",
"collectionTooManyItems": "{{nodeLabel}} -> {{fieldLabel}}: troppi elementi, massimo {{maxItems}}",
"canvasIsSelectingObject": "La tela è occupata (selezione dell'oggetto)",
"collectionTooFewItems": "{{nodeLabel}} -> {{fieldLabel}}: troppi pochi elementi, minimo {{minItems}}",
"collectionEmpty": "{{nodeLabel}} -> {{fieldLabel}} raccolta vuota"
},
"useCpuNoise": "Usa la CPU per generare rumore",
"iterations": "Iterazioni",
@@ -972,7 +964,9 @@
"saveToGallery": "Salva nella Galleria",
"noMatchingWorkflows": "Nessun flusso di lavoro corrispondente",
"noWorkflows": "Nessun flusso di lavoro",
"workflowHelpText": "Hai bisogno di aiuto? Consulta la nostra guida <LinkComponent>Introduzione ai flussi di lavoro</LinkComponent>."
"workflowHelpText": "Hai bisogno di aiuto? Consulta la nostra guida <LinkComponent>Introduzione ai flussi di lavoro</LinkComponent>.",
"specialDesc": "Questa invocazione comporta una gestione speciale nell'applicazione. Ad esempio, i nodi Lotto vengono utilizzati per mettere in coda più grafici da un singolo flusso di lavoro.",
"internalDesc": "Questa invocazione è utilizzata internamente da Invoke. Potrebbe subire modifiche significative durante gli aggiornamenti dell'app e potrebbe essere rimossa in qualsiasi momento."
},
"boards": {
"autoAddBoard": "Aggiungi automaticamente bacheca",
@@ -1093,7 +1087,8 @@
"workflows": "Flussi di lavoro",
"generation": "Generazione",
"other": "Altro",
"gallery": "Galleria"
"gallery": "Galleria",
"batchSize": "Dimensione del lotto"
},
"models": {
"noMatchingModels": "Nessun modello corrispondente",
@@ -1195,8 +1190,9 @@
"controlNetBeginEnd": {
"heading": "Percentuale passi Inizio / Fine",
"paragraphs": [
"La parte del processo di rimozione del rumore in cui verrà applicato l'adattatore di controllo.",
"In genere, gli adattatori di controllo applicati all'inizio del processo guidano la composizione, mentre quelli applicati alla fine guidano i dettagli."
"Questa impostazione determina quale parte del processo di rimozione del rumore (generazione) incorpora la guida da questo livello.",
"• Passo iniziale (%): specifica quando iniziare ad applicare la guida da questo livello durante il processo di generazione.",
"• Passo finale (%): specifica quando interrompere l'applicazione della guida di questo livello e ripristinare la guida generale dal modello e altre impostazioni."
]
},
"noiseUseCPU": {
@@ -1300,7 +1296,9 @@
"controlNetWeight": {
"heading": "Peso",
"paragraphs": [
"Peso dell'adattatore di controllo. Un peso maggiore porterà a impatti maggiori sull'immagine finale."
"Regola la forza con cui il livello influenza il processo di generazione",
"• Peso maggiore (0.75-2): crea un impatto più significativo sul risultato finale.",
"• Peso inferiore (0-0.75): crea un impatto minore sul risultato finale."
]
},
"paramCFGScale": {
@@ -1477,9 +1475,9 @@
]
},
"ipAdapterMethod": {
"heading": "Metodo",
"heading": "Modalità",
"paragraphs": [
"Metodo con cui applicare l'adattatore IP corrente."
"La modalità definisce il modo in cui l'immagine di riferimento guiderà il processo di generazione."
]
},
"scale": {
@@ -1750,7 +1748,6 @@
"newRegionalReferenceImageError": "Problema nella creazione dell'immagine di riferimento regionale",
"newControlLayerOk": "Livello di controllo creato",
"bboxOverlay": "Mostra sovrapposizione riquadro",
"resetCanvas": "Reimposta la tela",
"outputOnlyMaskedRegions": "In uscita solo le regioni generate",
"enableAutoNegative": "Abilita Auto Negativo",
"disableAutoNegative": "Disabilita Auto Negativo",
@@ -1802,7 +1799,10 @@
"full": "Stile e Composizione",
"style": "Solo Stile",
"composition": "Solo Composizione",
"ipAdapterMethod": "Metodo Adattatore IP"
"ipAdapterMethod": "Modalità",
"fullDesc": "Applica lo stile visivo (colori, texture) e la composizione (disposizione, struttura).",
"styleDesc": "Applica lo stile visivo (colori, texture) senza considerare la disposizione.",
"compositionDesc": "Replica disposizione e struttura ignorando lo stile di riferimento."
},
"showingType": "Mostra {{type}}",
"dynamicGrid": "Griglia dinamica",
@@ -2036,8 +2036,6 @@
"convertControlLayerTo": "Converti $t(controlLayers.controlLayer) in",
"newRasterLayer": "Nuovo $t(controlLayers.rasterLayer)",
"newRegionalGuidance": "Nuova $t(controlLayers.regionalGuidance)",
"canvasAsRasterLayer": "$t(controlLayers.canvas) come $t(controlLayers.rasterLayer)",
"canvasAsControlLayer": "$t(controlLayers.canvas) come $t(controlLayers.controlLayer)",
"convertInpaintMaskTo": "Converti $t(controlLayers.inpaintMask) in",
"copyRegionalGuidanceTo": "Copia $t(controlLayers.regionalGuidance) in",
"convertRasterLayerTo": "Converti $t(controlLayers.rasterLayer) in",
@@ -2046,9 +2044,34 @@
"newInpaintMask": "Nuova $t(controlLayers.inpaintMask)",
"replaceCurrent": "Sostituisci corrente",
"mergeDown": "Unire in basso",
"newFromImage": "Nuovo da Immagine",
"mergingLayers": "Unione dei livelli",
"controlLayerEmptyState": "<UploadButton>Carica un'immagine</UploadButton>, trascina un'immagine dalla <GalleryButton>galleria</GalleryButton> su questo livello oppure disegna sulla tela per iniziare."
"controlLayerEmptyState": "<UploadButton>Carica un'immagine</UploadButton>, trascina un'immagine dalla <GalleryButton>galleria</GalleryButton> su questo livello oppure disegna sulla tela per iniziare.",
"useImage": "Usa immagine",
"resetGenerationSettings": "Ripristina impostazioni di generazione",
"referenceImageEmptyState": "Per iniziare, <UploadButton>carica un'immagine</UploadButton> oppure trascina un'immagine dalla <GalleryButton>galleria</GalleryButton> su questo livello.",
"asRasterLayer": "Come $t(controlLayers.rasterLayer)",
"asRasterLayerResize": "Come $t(controlLayers.rasterLayer) (Ridimensiona)",
"asControlLayer": "Come $t(controlLayers.controlLayer)",
"asControlLayerResize": "Come $t(controlLayers.controlLayer) (Ridimensiona)",
"newSession": "Nuova sessione",
"resetCanvasLayers": "Ripristina livelli Tela",
"referenceImageRegional": "Immagine di riferimento (regionale)",
"referenceImageGlobal": "Immagine di riferimento (globale)",
"warnings": {
"controlAdapterNoModelSelected": "nessun modello selezionato per il livello di controllo",
"controlAdapterNoControl": "nessun controllo selezionato/disegnato",
"ipAdapterNoModelSelected": "nessun modello di immagine di riferimento selezionato",
"rgNoPromptsOrIPAdapters": "nessun prompt testuale o immagini di riferimento",
"rgReferenceImagesNotSupported": "Immagini di riferimento regionali non supportate per il modello base selezionato",
"rgNoRegion": "nessuna regione disegnata",
"problemsFound": "Problemi riscontrati",
"unsupportedModel": "livello non supportato per il modello base selezionato",
"controlAdapterIncompatibleBaseModel": "modello di base del livello di controllo incompatibile",
"rgNegativePromptNotSupported": "Prompt negativo non supportato per il modello base selezionato",
"ipAdapterIncompatibleBaseModel": "modello base dell'immagine di riferimento incompatibile",
"ipAdapterNoImageSelected": "nessuna immagine di riferimento selezionata",
"rgAutoNegativeNotSupported": "Auto-Negativo non supportato per il modello base selezionato"
}
},
"ui": {
"tabs": {
@@ -2148,8 +2171,8 @@
"watchRecentReleaseVideos": "Guarda i video su questa versione",
"watchUiUpdatesOverview": "Guarda le novità dell'interfaccia",
"items": [
"<StrongComponent>SD 3.5</StrongComponent>: supporto per SD 3.5 Medium e Large.",
"<StrongComponent>Tela</StrongComponent>: elaborazione semplificata del livello di controllo e impostazioni di controllo predefinite migliorate."
"<StrongComponent>Flussi di lavoro</StrongComponent>: esegui un flusso di lavoro per una raccolta di immagini utilizzando il nuovo nodo <StrongComponent>Lotto di immagini</StrongComponent>.",
"<StrongComponent>FLUX</StrongComponent>: Supporto per XLabs IP Adapter v2."
]
},
"system": {
@@ -2176,5 +2199,67 @@
"logNamespaces": "Elementi del registro"
},
"enableLogging": "Abilita la registrazione"
},
"supportVideos": {
"gettingStarted": "Iniziare",
"supportVideos": "Video di supporto",
"videos": {
"usingControlLayersAndReferenceGuides": {
"title": "Utilizzo di livelli di controllo e guide di riferimento",
"description": "Scopri come guidare la creazione delle tue immagini con livelli di controllo e immagini di riferimento."
},
"creatingYourFirstImage": {
"description": "Introduzione alla creazione di un'immagine da zero utilizzando gli strumenti di Invoke.",
"title": "Creazione della tua prima immagine"
},
"understandingImageToImageAndDenoising": {
"description": "Panoramica delle trasformazioni immagine-a-immagine e della riduzione del rumore in Invoke.",
"title": "Comprendere immagine-a-immagine e riduzione del rumore"
},
"howDoIDoImageToImageTransformation": {
"description": "Tutorial su come eseguire trasformazioni da immagine a immagine in Invoke.",
"title": "Come si esegue la trasformazione da immagine-a-immagine?"
},
"howDoIUseInpaintMasks": {
"title": "Come si usano le maschere Inpaint?",
"description": "Come applicare maschere inpaint per la correzione e la variazione delle immagini."
},
"howDoIOutpaint": {
"description": "Guida all'outpainting oltre i confini dell'immagine originale.",
"title": "Come posso eseguire l'outpainting?"
},
"exploringAIModelsAndConceptAdapters": {
"description": "Approfondisci i modelli di intelligenza artificiale e scopri come utilizzare gli adattatori concettuali per il controllo creativo.",
"title": "Esplorazione dei modelli di IA e degli adattatori concettuali"
},
"upscaling": {
"title": "Ampliamento",
"description": "Come ampliare le immagini con gli strumenti di Invoke per migliorarne la risoluzione."
},
"creatingAndComposingOnInvokesControlCanvas": {
"description": "Impara a comporre immagini utilizzando la tela di controllo di Invoke.",
"title": "Creare e comporre sulla tela di controllo di Invoke"
},
"howDoIGenerateAndSaveToTheGallery": {
"description": "Passaggi per generare e salvare le immagini nella galleria.",
"title": "Come posso generare e salvare nella Galleria?"
},
"howDoIEditOnTheCanvas": {
"title": "Come posso apportare modifiche sulla tela?",
"description": "Guida alla modifica delle immagini direttamente sulla tela."
},
"howDoIUseControlNetsAndControlLayers": {
"title": "Come posso utilizzare le Reti di Controllo e i Livelli di Controllo?",
"description": "Impara ad applicare livelli di controllo e reti di controllo alle tue immagini."
},
"howDoIUseGlobalIPAdaptersAndReferenceImages": {
"title": "Come si utilizzano gli adattatori IP globali e le immagini di riferimento?",
"description": "Introduzione all'aggiunta di immagini di riferimento e adattatori IP globali."
}
},
"controlCanvas": "Tela di Controllo",
"watch": "Guarda",
"studioSessionsDesc1": "Dai un'occhiata a <StudioSessionsPlaylistLink /> per approfondimenti su Invoke.",
"studioSessionsDesc2": "Unisciti al nostro <DiscordLink /> per partecipare alle sessioni live e fare domande. Le sessioni vengono caricate sulla playlist la settimana successiva."
}
}

View File

@@ -637,7 +637,6 @@
"cancel": "キャンセル",
"reset": "リセット"
},
"resetCanvas": "キャンバスをリセット",
"cropLayerToBbox": "レイヤーをバウンディングボックスでクロップ",
"convertInpaintMaskTo": "$t(controlLayers.inpaintMask)を変換",
"regionalGuidance_withCount_other": "領域ガイダンス",

View File

@@ -230,16 +230,7 @@
"systemDisconnected": "Systeem is niet verbonden",
"missingNodeTemplate": "Knooppuntsjabloon ontbreekt",
"missingFieldTemplate": "Veldsjabloon ontbreekt",
"addingImagesTo": "Bezig met toevoegen van afbeeldingen aan",
"layer": {
"controlAdapterNoModelSelected": "geen controle-adaptermodel geselecteerd",
"controlAdapterIncompatibleBaseModel": "niet-compatibele basismodel voor controle-adapter",
"ipAdapterIncompatibleBaseModel": "niet-compatibele basismodel voor IP-adapter",
"ipAdapterNoImageSelected": "geen afbeelding voor IP-adapter geselecteerd",
"rgNoRegion": "geen gebied geselecteerd",
"rgNoPromptsOrIPAdapters": "geen tekstprompts of IP-adapters",
"ipAdapterNoModelSelected": "geen IP-adapter geselecteerd"
}
"addingImagesTo": "Bezig met toevoegen van afbeeldingen aan"
},
"patchmatchDownScaleSize": "Verklein",
"useCpuNoise": "Gebruik CPU-ruis",

View File

@@ -10,7 +10,24 @@
"load": "Załaduj",
"statusDisconnected": "Odłączono od serwera",
"githubLabel": "GitHub",
"discordLabel": "Discord"
"discordLabel": "Discord",
"clipboard": "Schowek",
"aboutDesc": "Wykorzystujesz Invoke do pracy? Sprawdź:",
"ai": "SI",
"areYouSure": "Czy jesteś pewien?",
"copyError": "$t(gallery.copy) Błąd",
"apply": "Zastosuj",
"copy": "Kopiuj",
"or": "albo",
"add": "Dodaj",
"off": "Wyłączony",
"accept": "Zaakceptuj",
"cancel": "Anuluj",
"advanced": "Zawansowane",
"back": "Do tyłu",
"auto": "Automatyczny",
"beta": "Beta",
"close": "Wyjdź"
},
"gallery": {
"galleryImageSize": "Rozmiar obrazów",
@@ -65,6 +82,42 @@
"uploadImage": "Wgrywanie obrazu",
"previousImage": "Poprzedni obraz",
"nextImage": "Następny obraz",
"menu": "Menu"
"menu": "Menu",
"mode": "Tryb"
},
"boards": {
"cancel": "Anuluj",
"noBoards": "Brak tablic typu {{boardType}}",
"imagesWithCount_one": "{{count}} zdjęcie",
"imagesWithCount_few": "{{count}} zdjęcia",
"imagesWithCount_many": "{{count}} zdjęcia",
"private": "Prywatne tablice",
"updateBoardError": "Błąd aktualizacji tablicy",
"uncategorized": "Nieskategoryzowane",
"selectBoard": "Wybierz tablicę",
"downloadBoard": "Pobierz tablice",
"loading": "Ładowanie...",
"move": "Przenieś",
"noMatching": "Brak pasujących tablic"
},
"accordions": {
"compositing": {
"title": "Kompozycja",
"infillTab": "Inskrypcja",
"coherenceTab": "Przebieg Koherencji"
},
"generation": {
"title": "Generowanie"
},
"image": {
"title": "Zdjęcie"
},
"advanced": {
"options": "$t(accordions.advanced.title) Opcje",
"title": "Zaawansowane"
},
"control": {
"title": "Kontrola"
}
}
}

View File

@@ -648,19 +648,6 @@
"missingFieldTemplate": "Отсутствует шаблон поля",
"addingImagesTo": "Добавление изображений в",
"invoke": "Создать",
"layer": {
"ipAdapterNoModelSelected": "IP адаптер не выбран",
"controlAdapterNoModelSelected": "не выбрана модель адаптера контроля",
"controlAdapterIncompatibleBaseModel": "несовместимая базовая модель адаптера контроля",
"rgNoRegion": "регион не выбран",
"rgNoPromptsOrIPAdapters": "нет текстовых запросов или IP-адаптеров",
"ipAdapterIncompatibleBaseModel": "несовместимая базовая модель IP-адаптера",
"ipAdapterNoImageSelected": "изображение IP-адаптера не выбрано",
"t2iAdapterIncompatibleScaledBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, масштабированная ширина рамки {{width}}",
"t2iAdapterIncompatibleBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, высота рамки {{height}}",
"t2iAdapterIncompatibleBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, ширина рамки {{width}}",
"t2iAdapterIncompatibleScaledBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, масштабированная высота рамки {{height}}"
},
"fluxModelIncompatibleBboxWidth": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), ширина рамки {{width}}",
"fluxModelIncompatibleBboxHeight": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), высота рамки {{height}}",
"fluxModelIncompatibleScaledBboxHeight": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), масштабированная высота рамки {{height}}",
@@ -1660,7 +1647,6 @@
"clearCaches": "Очистить кэши",
"recalculateRects": "Пересчитать прямоугольники",
"saveBboxToGallery": "Сохранить рамку в галерею",
"resetCanvas": "Сбросить холст",
"canvas": "Холст",
"global": "Глобальный",
"newGlobalReferenceImageError": "Проблема с созданием глобального эталонного изображения",

View File

@@ -217,7 +217,10 @@
"direction": "Phương Hướng",
"unknownError": "Lỗi Không Rõ",
"selected": "Đã chọn",
"tab": "Tab"
"tab": "Tab",
"loadingModel": "Đang Tải Model",
"generating": "Đang Tạo Sinh",
"warnings": "Cảnh Báo"
},
"prompt": {
"addPromptTrigger": "Thêm Prompt Trigger",
@@ -290,7 +293,8 @@
"cancelSucceeded": "Mục Đã Huỷ Bỏ",
"completedIn": "Hoàn tất trong",
"graphQueued": "Đồ Thị Đã Vào Hàng",
"batchQueuedDesc_other": "Thêm {{count}} phiên vào {{direction}} của hàng"
"batchQueuedDesc_other": "Thêm {{count}} phiên vào {{direction}} của hàng",
"batchSize": "Kích Thước Vùng Hàng Loạt"
},
"hotkeys": {
"canvas": {
@@ -733,7 +737,9 @@
"textualInversions": "Bộ Đảo Ngược Văn Bản",
"loraTriggerPhrases": "Từ Ngữ Kích Hoạt Cho LoRA",
"width": "Chiều Rộng",
"starterModelsInModelManager": "Model khởi đầu có thể tìm thấy ở Trình Quản Lý Model"
"starterModelsInModelManager": "Model khởi đầu có thể tìm thấy ở Trình Quản Lý Model",
"clipLEmbed": "CLIP-L Embed",
"clipGEmbed": "CLIP-G Embed"
},
"metadata": {
"guidance": "Hướng Dẫn",
@@ -905,7 +911,7 @@
"unknownNode": "Node Không Rõ",
"unknownNodeType": "Loại Node Không Rõ",
"unknownTemplate": "Mẫu Trình Bày Không Rõ",
"cannotConnectOutputToOutput": "Không thế kết nối đầu ra với đầu vào",
"cannotConnectOutputToOutput": "Không thế kết nối đầu ra với đầu ra",
"cannotConnectToSelf": "Không thể kết nối với chính nó",
"workflow": "Workflow",
"addNodeToolTip": "Thêm Node (Shift+A, Space)",
@@ -952,7 +958,9 @@
"executionStateInProgress": "Đang Xử Lý",
"showLegendNodes": "Hiển Thị Vùng Nhập",
"outputFieldTypeParseError": "Không thể phân tích loại dữ liệu đầu ra của {{node}}.{{field}} ({{message}})",
"modelAccessError": "Không thể tìm thấy model {{key}}, chuyển về mặc định"
"modelAccessError": "Không thể tìm thấy model {{key}}, chuyển về mặc định",
"internalDesc": "Trình kích hoạt này được dùng bên trong bởi Invoke. Nó có thể phá hỏng thay đổi trong khi cập nhật ứng dụng và có thể bị xoá bất cứ lúc nào.",
"specialDesc": "Trình kích hoạt này có một số xử lý đặc biệt trong ứng dụng. Ví dụ, Node Hàng Loạt được dùng để xếp vào nhiều đồ thị từ một workflow."
},
"popovers": {
"paramCFGRescaleMultiplier": {
@@ -1105,7 +1113,9 @@
},
"controlNetWeight": {
"paragraphs": [
"Trọng lượng của Control Adapter. Trọng lượng càng cao sẽ dẫn đến tác động càng lớn lên ảnh cuối cùng."
"Điều chỉnh mức độ layer ảnh hưởng đến quá trình xử lý tạo sinh.",
"• Trọng Lượng Lớn Hơn (.75-2): Gây ra ảnh hưởng lớn hơn lên kết quả cuối cùng.",
"• Trọng Lượng Nhỏ Hơn (0-.75): Gây ra ảnh hưởng nhỏ hơn lên kết quả cuối cùng."
],
"heading": "Trọng Lượng"
},
@@ -1149,7 +1159,7 @@
},
"ipAdapterMethod": {
"paragraphs": [
"Cách thức dùng để áp dụng IP Adapter hiện tại."
"Phương thức định nghĩa cách ảnh mẫu sẽ chỉ dẫn quá trình xử lý tạo sinh."
],
"heading": "Cách Thức"
},
@@ -1196,8 +1206,9 @@
},
"controlNetBeginEnd": {
"paragraphs": [
"Một phần trong quá trình xử lý khử nhiễu mà sẽ được Control Adapter áp dụng.",
"Nói chung, Control Adapter áp dụng vào lúc bắt đầu của quá trình hướng dẫn thành phần, và cũng áp dụng vào lúc kết thúc hướng dẫn chi tiết."
"Cài đặt này xác định phần xử lý khử nhiễu (trong khi tạo sinh) kết hợp với chỉ dẫn từ layer này.",
"• Bước Bắt Đầu (%): Chỉ định lúc bắt đầu áp dụng chỉ dẫn từ layer này trong quá trình tạo sinh.",
"• Bước Kết Thúc (%): Chỉ định lúc dừng áp dụng chỉ dẫn của layer này và trở về chỉ dẫn chung từ model và các thiết lập khác."
],
"heading": "Phần Trăm Tham Số Bước Khi Bắt Đầu/Kết Thúc"
},
@@ -1399,26 +1410,13 @@
"processImage": "Xử Lý Hình Ảnh",
"useSize": "Dùng Kích Thước",
"invoke": {
"layer": {
"t2iAdapterIncompatibleScaledBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, tỉ lệ chiều dài hộp giới hạn là {{height}}",
"rgNoRegion": "không có vùng được chọn",
"ipAdapterNoModelSelected": "không có IP Adapter được lựa chọn",
"ipAdapterNoImageSelected": "không có ảnh IP Adapter được lựa chọn",
"t2iAdapterIncompatibleBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, chiều dài hộp giới hạn là {{height}}",
"t2iAdapterIncompatibleScaledBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, tỉ lệ chiều rộng hộp giới hạn là {{width}}",
"t2iAdapterIncompatibleBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, chiều rộng hộp giới hạn là {{width}}",
"rgNoPromptsOrIPAdapters": "không có lệnh chữ hoặc IP Adapter",
"controlAdapterIncompatibleBaseModel": "model cơ sở của Control Adapter không tương thích",
"ipAdapterIncompatibleBaseModel": "dạng model cơ sở của IP Adapter không tương thích",
"controlAdapterNoModelSelected": "không có model Control Adapter được chọn"
},
"fluxModelIncompatibleBboxWidth": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), chiều rộng hộp giới hạn là {{width}}",
"noModelSelected": "Không có model được lựa chọn",
"fluxModelIncompatibleScaledBboxHeight": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), tỉ lệ chiều dài hộp giới hạn là {{height}}",
"canvasIsFiltering": "Canvas đang được lọc",
"canvasIsRasterizing": "Canvas đang được raster hoá",
"canvasIsTransforming": "Canvas đang được biến đổi",
"canvasIsCompositing": "Canvas đang được kết hợp",
"canvasIsFiltering": "Canvas đang bận (đang lọc)",
"canvasIsRasterizing": "Canvas đang bận (đang raster hoá)",
"canvasIsTransforming": "Canvas đang bận (đang biến đổi)",
"canvasIsCompositing": "Canvas đang bận (đang kết hợp)",
"noPrompts": "Không có lệnh được tạo",
"noNodesInGraph": "Không có node trong đồ thị",
"addingImagesTo": "Thêm ảnh vào",
@@ -1430,8 +1428,12 @@
"missingNodeTemplate": "Thiếu mẫu trình bày node",
"fluxModelIncompatibleBboxHeight": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), chiều dài hộp giới hạn là {{height}}",
"fluxModelIncompatibleScaledBboxWidth": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), tỉ lệ chiều rộng hộp giới hạn là {{width}}",
"missingInputForField": "{{nodeLabel}} -> {{fieldLabel}} thiếu đầu ra",
"missingFieldTemplate": "Thiếu vùng mẫu trình bày"
"missingInputForField": "{{nodeLabel}} -> {{fieldLabel}}: thiếu đầu vào",
"missingFieldTemplate": "Thiếu vùng mẫu trình bày",
"collectionEmpty": "{{nodeLabel}} -> {{fieldLabel}} tài nguyên trống",
"collectionTooFewItems": "{{nodeLabel}} -> {{fieldLabel}}: quá ít mục, tối thiểu {{minItems}}",
"collectionTooManyItems": "{{nodeLabel}} -> {{fieldLabel}}: quá nhiều mục, tối đa {{maxItems}}",
"canvasIsSelectingObject": "Canvas đang bận (đang chọn đồ vật)"
},
"cfgScale": "Thước Đo CFG",
"useSeed": "Dùng Tham Số Hạt Giống",
@@ -1542,7 +1544,8 @@
"resetWebUIDesc2": "Nếu ảnh không được xuất hiện trong thư viện hoặc điều gì đó không ổn đang diễn ra, hãy thử khởi động lại trước khi báo lỗi trên Github.",
"displayInProgress": "Hiển Thị Hình Ảnh Đang Xử Lý",
"intermediatesClearedFailed": "Có Vấn Đề Khi Dọn Sạch Sản Phẩm Trung Gian",
"enableInvisibleWatermark": "Bật Chế Độ Ẩn Watermark"
"enableInvisibleWatermark": "Bật Chế Độ Ẩn Watermark",
"showDetailedInvocationProgress": "Hiện Dữ Liệu Xử Lý"
},
"sdxl": {
"loading": "Đang Tải...",
@@ -1594,26 +1597,27 @@
"pullBboxIntoLayerError": "Có Vấn Đề Khi Chuyển Hộp Giới Hạn Thành Layer",
"pullBboxIntoReferenceImageOk": "Chuyển Hộp Giới Hạn Thành Ảnh Mẫu",
"clearCaches": "Xoá Bộ Nhớ Đệm",
"outputOnlyMaskedRegions": "Chỉ Xuất Đầu Ra Ở Vùng Phủ",
"outputOnlyMaskedRegions": "Chỉ Xuất Đầu Ra Ở Vùng Tạo Sinh",
"addLayer": "Thêm Layer",
"regional": "Khu Vực",
"regionIsEmpty": "Vùng được chọn trống",
"bookmark": "Đánh Dấu Để Đổi Nhanh",
"saveCanvasToGallery": "Lưu Canvas Vào Thư Viện",
"cropLayerToBbox": "Xén Layer Vào Hộp Giới Hạn",
"newFromImage": "Mới Từ Ảnh",
"mergeDown": "Gộp Xuống",
"mergeVisibleError": "Lỗi khi gộp layer",
"bboxOverlay": "Hiển Thị Lớp Phủ Trên Hộp Giới Hạn",
"resetCanvas": "Khởi Động Lại Canvas",
"duplicate": "Nhân Bản",
"moveForward": "Chuyển Lên Đầu",
"fitBboxToLayers": "Xếp Vừa Hộp Giới Hạn Vào Layer",
"ipAdapterMethod": {
"full": "Đầy Đủ",
"full": "Phong Cách Và Thành Phần",
"style": "Chỉ Lấy Phong Cách",
"composition": "Chỉ Lấy Thành Phần",
"ipAdapterMethod": "Cách Thức IP Adapter"
"ipAdapterMethod": "Cách Thức",
"compositionDesc": "Áp dụng cách trình bày và bỏ qua phong cách mẫu.",
"fullDesc": "Áp dụng phong cách trực quan (màu, cấu tạo) & thành phần (cách trình bày).",
"styleDesc": "Áp dụng phong cách trực quan (màu, cấu tạo) và bỏ qua cách trình bày."
},
"deletePrompt": "Xoá Lệnh",
"rasterLayer": "Layer Dạng Raster",
@@ -1643,7 +1647,6 @@
"replaceCurrent": "Thay Đổi Cái Hiện Tại",
"controlLayers_withCount_visible": "Layer Điều Khiển Được ({{count}})",
"hidingType": "Ẩn {{type}}",
"canvasAsRasterLayer": "Biến $t(controlLayers.canvas) Thành $t(controlLayers.rasterLayer)",
"newImg2ImgCanvasFromImage": "Chuyển Đổi Ảnh Sang Ảnh Mới Từ Ảnh",
"copyToClipboard": "Sao Chép Vào Clipboard",
"logDebugInfo": "Thông Tin Log Gỡ Lỗi",
@@ -1670,7 +1673,6 @@
"sendToGallery": "Chuyển Tới Thư Viện",
"unlocked": "Mở Khoá",
"addReferenceImage": "Thêm $t(controlLayers.referenceImage)",
"canvasAsControlLayer": "Biến $t(controlLayers.canvas) Thành $t(controlLayers.controlLayer)",
"sendingToCanvas": "Chuyển Ảnh Tạo Sinh Vào Canvas",
"sendingToGallery": "Chuyển Ảnh Tạo Sinh Vào Thư Viện",
"viewProgressOnCanvas": "Xem quá trình xử lý và ảnh đầu ra trong <Btn>Canvas</Btn>.",
@@ -1903,7 +1905,16 @@
"colorPicker": "Chọn Màu"
},
"mergingLayers": "Đang gộp layer",
"controlLayerEmptyState": "<UploadButton>Tải lên ảnh</UploadButton>, kéo thả ảnh từ <GalleryButton>thư viện</GalleryButton> vào layer này, hoặc vẽ trên canvas để bắt đầu."
"controlLayerEmptyState": "<UploadButton>Tải lên ảnh</UploadButton>, kéo thả ảnh từ <GalleryButton>thư viện</GalleryButton> vào layer này, hoặc vẽ trên canvas để bắt đầu.",
"referenceImageEmptyState": "<UploadButton>Tải lên ảnh</UploadButton> hoặc kéo thả ảnh từ <GalleryButton>thư viện</GalleryButton> vào layer này để bắt đầu.",
"useImage": "Dùng Hình Ảnh",
"resetCanvasLayers": "Khởi Động Lại Layer Canvas",
"asRasterLayer": "Như $t(controlLayers.rasterLayer)",
"asRasterLayerResize": "Như $t(controlLayers.rasterLayer) (Thay Đổi Kích Thước)",
"asControlLayer": "Như $t(controlLayers.controlLayer)",
"asControlLayerResize": "Như $t(controlLayers.controlLayer) (Thay Đổi Kích Thước)",
"newSession": "Phiên Làm Việc Mới",
"resetGenerationSettings": "Khởi Động Lại Cài Đặt Tạo Sinh"
},
"stylePresets": {
"negativePrompt": "Lệnh Tiêu Cực",
@@ -2128,8 +2139,8 @@
"watchRecentReleaseVideos": "Xem Video Phát Hành Mới Nhất",
"watchUiUpdatesOverview": "Xem Tổng Quan Về Những Cập Nhật Cho Giao Diện Người Dùng",
"items": [
"<StrongComponent>SD 3.5</StrongComponent>: Hỗ trợ cho Từ ngữ Sang Hình Ảnh trong Workflow với phiên bản SD 3.5 Medium hoặc Large.",
"<StrongComponent>Canvas</StrongComponent>: Hợp lý hoá cách xử lý Layer Điều Khiển Được và cải thiện thiết lập điều khiển mặc định."
"<StrongComponent>Workflows</StrongComponent>: Chạy một workflow cho nhiều ảnh bằng node <StrongComponent>Ảnh Hàng Loạt</StrongComponent> mới.",
"<StrongComponent>FLUX</StrongComponent>: Hỗ trợ cho XLabs IP Adapter v2."
]
},
"upsell": {
@@ -2137,5 +2148,67 @@
"inviteTeammates": "Thêm Đồng Đội",
"shareAccess": "Chia Sẻ Quyền Truy Cập",
"professionalUpsell": "Không có sẵn Phiên Bản Chuyên Nghiệp cho Invoke. Bấm vào đây hoặc đến invoke.com/pricing để thêm chi tiết."
},
"supportVideos": {
"supportVideos": "Video Hỗ Trợ",
"gettingStarted": "Bắt Đầu Làm Quen",
"studioSessionsDesc1": "Xem thử <StudioSessionsPlaylistLink /> để hiểu rõ Invoke hơn.",
"studioSessionsDesc2": "Đến <DiscordLink /> để tham gia vào phiên trực tiếp và hỏi câu hỏi. Các phiên được tải lên danh sách phát vào các tuần.",
"videos": {
"howDoIDoImageToImageTransformation": {
"title": "Làm Sao Để Tôi Dùng Trình Biến Đổi Hình Ảnh Sang Hình Ảnh?",
"description": "Hướng dẫn cách thực hiện biến đổi ảnh sang ảnh trong Invoke."
},
"howDoIUseGlobalIPAdaptersAndReferenceImages": {
"description": "Giới thiệu về ảnh mẫu và IP adapter toàn vùng.",
"title": "Làm Sao Để Tôi Dùng IP Adapter Toàn Vùng Và Ảnh Mẫu?"
},
"creatingAndComposingOnInvokesControlCanvas": {
"description": "Học cách sáng tạo ảnh bằng trình điều khiển canvas của Invoke.",
"title": "Sáng Tạo Trong Trình Kiểm Soát Canvas Của Invoke"
},
"upscaling": {
"description": "Cách upscale ảnh bằng bộ công cụ của Invoke để nâng cấp độ phân giải.",
"title": "Upscale (Nâng Cấp Chất Lượng Hình Ảnh)"
},
"howDoIGenerateAndSaveToTheGallery": {
"title": "Làm Sao Để Tôi Tạo Sinh Và Lưu Vào Thư Viện?",
"description": "Các bước để tạo sinh và lưu ảnh vào thư viện."
},
"howDoIEditOnTheCanvas": {
"description": "Hướng dẫn chỉnh sửa ảnh trực tiếp trên canvas.",
"title": "Làm Sao Để Tôi Chỉnh Sửa Trên Canvas?"
},
"howDoIUseControlNetsAndControlLayers": {
"title": "Làm Sao Để Tôi Dùng ControlNet và Layer Điều Khiển Được?",
"description": "Học cách áp dụng layer điều khiển được và controlnet vào ảnh của bạn."
},
"howDoIUseInpaintMasks": {
"title": "Làm Sao Để Tôi Dùng Lớp Phủ Inpaint?",
"description": "Cách áp dụng lớp phủ inpaint vào chỉnh sửa và thay đổi ảnh."
},
"howDoIOutpaint": {
"title": "Làm Sao Để Tôi Outpaint?",
"description": "Hướng dẫn outpaint bên ngoài viền ảnh gốc."
},
"creatingYourFirstImage": {
"description": "Giới thiệu về cách tạo ảnh từ ban đầu bằng công cụ Invoke.",
"title": "Tạo Hình Ảnh Đầu Tiên Của Bạn"
},
"usingControlLayersAndReferenceGuides": {
"description": "Học cách chỉ dẫn ảnh được tạo ra bằng layer điều khiển được và ảnh mẫu.",
"title": "Dùng Layer Điều Khiển Được và Chỉ Dẫn Mẫu"
},
"understandingImageToImageAndDenoising": {
"title": "Hiểu Rõ Trình Hình Ảnh Sang Hình Ảnh Và Trình Khử Nhiễu",
"description": "Tổng quan về trình biến đổi ảnh sang ảnh và trình khử nhiễu trong Invoke."
},
"exploringAIModelsAndConceptAdapters": {
"title": "Khám Phá Model AI Và Khái Niệm Về Adapter",
"description": "Đào sâu vào model AI và cách dùng những adapter để điều khiển một cách sáng tạo."
}
},
"controlCanvas": "Điều Khiển Canvas",
"watch": "Xem"
}
}

View File

@@ -661,19 +661,6 @@
"missingFieldTemplate": "缺失模板",
"addingImagesTo": "添加图像到",
"noPrompts": "没有已生成的提示词",
"layer": {
"ipAdapterNoModelSelected": "未选择IP adapter",
"controlAdapterNoModelSelected": "未选择Control Adapter模型",
"rgNoPromptsOrIPAdapters": "无文本提示或IP Adapters",
"controlAdapterIncompatibleBaseModel": "Control Adapter的基础模型不兼容",
"ipAdapterIncompatibleBaseModel": "IP Adapter的基础模型不兼容",
"ipAdapterNoImageSelected": "未选择IP Adapter图像",
"rgNoRegion": "未选择区域",
"t2iAdapterIncompatibleBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}},边界框宽度为 {{width}}",
"t2iAdapterIncompatibleScaledBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}},缩放后的边界框高度为 {{height}}",
"t2iAdapterIncompatibleBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}},边界框高度为 {{height}}",
"t2iAdapterIncompatibleScaledBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}},缩放后的边界框宽度为 {{width}}"
},
"canvasIsFiltering": "画布正在过滤",
"fluxModelIncompatibleScaledBboxHeight": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16),缩放后的边界框高度为 {{height}}",
"noCLIPEmbedModelSelected": "未为FLUX生成选择CLIP嵌入模型",
@@ -1720,8 +1707,6 @@
"sendToCanvas": "发送到画布",
"controlLayers_withCount_visible": "控制图层({{count}} 个)",
"rasterLayers_withCount_visible": "栅格图层({{count}} 个)",
"canvasAsRasterLayer": "将 $t(controlLayers.canvas) 转换为 $t(controlLayers.rasterLayer)",
"canvasAsControlLayer": "将 $t(controlLayers.canvas) 转换为 $t(controlLayers.controlLayer)",
"convertRegionalGuidanceTo": "将 $t(controlLayers.regionalGuidance) 转换为",
"newInpaintMask": "新建 $t(controlLayers.inpaintMask)",
"regionIsEmpty": "选定区域为空",
@@ -1760,11 +1745,9 @@
"pullBboxIntoLayerError": "将边界框导入图层时出现问题",
"pullBboxIntoLayerOk": "边界框已导入到图层",
"sendToCanvasDesc": "按下“Invoke”按钮会将您的工作进度暂存到画布上。",
"resetCanvas": "重置画布",
"sendToGallery": "发送到图库",
"sendToGalleryDesc": "按下“Invoke”键会生成并保存一张唯一的图像到您的图库中。",
"rasterLayer_withCount_other": "栅格图层",
"newFromImage": "从图像创建新内容",
"mergeDown": "向下合并",
"clearCaches": "清除缓存",
"recalculateRects": "重新计算矩形",

View File

@@ -27,6 +27,7 @@ import { ClearQueueConfirmationsAlertDialog } from 'features/queue/components/Cl
import { DeleteStylePresetDialog } from 'features/stylePresets/components/DeleteStylePresetDialog';
import { StylePresetModal } from 'features/stylePresets/components/StylePresetForm/StylePresetModal';
import RefreshAfterResetModal from 'features/system/components/SettingsModal/RefreshAfterResetModal';
import { VideosModal } from 'features/system/components/VideosModal/VideosModal';
import { configChanged } from 'features/system/store/configSlice';
import { selectLanguage } from 'features/system/store/systemSelectors';
import { AppContent } from 'features/ui/components/AppContent';
@@ -108,6 +109,7 @@ const App = ({ config = DEFAULT_CONFIG, studioInitAction }: Props) => {
<NewCanvasSessionDialog />
<ImageContextMenu />
<FullscreenDropzone />
<VideosModal />
</ErrorBoundary>
);
};

View File

@@ -1,3 +1,4 @@
import { useStore } from '@nanostores/react';
import { useAppStore } from 'app/store/storeHooks';
import { useAssertSingleton } from 'common/hooks/useAssertSingleton';
import { withResultAsync } from 'common/util/result';
@@ -9,6 +10,7 @@ import { imageDTOToImageObject } from 'features/controlLayers/store/util';
import { $imageViewer } from 'features/gallery/components/ImageViewer/useImageViewer';
import { sentImageToCanvas } from 'features/gallery/store/actions';
import { parseAndRecallAllMetadata } from 'features/metadata/util/handlers';
import { $hasTemplates } from 'features/nodes/store/nodesSlice';
import { $isWorkflowListMenuIsOpen } from 'features/nodes/store/workflowListMenu';
import { $isStylePresetsMenuOpen, activeStylePresetIdChanged } from 'features/stylePresets/store/stylePresetSlice';
import { toast } from 'features/toast/toast';
@@ -51,6 +53,7 @@ export const useStudioInitAction = (action?: StudioInitAction) => {
const { t } = useTranslation();
// Use a ref to ensure that we only perform the action once
const didInit = useRef(false);
const didParseOpenAPISchema = useStore($hasTemplates);
const store = useAppStore();
const { getAndLoadWorkflow } = useGetAndLoadLibraryWorkflow();
@@ -174,7 +177,7 @@ export const useStudioInitAction = (action?: StudioInitAction) => {
);
useEffect(() => {
if (didInit.current || !action) {
if (didInit.current || !action || !didParseOpenAPISchema) {
return;
}
@@ -187,22 +190,29 @@ export const useStudioInitAction = (action?: StudioInitAction) => {
case 'selectStylePreset':
handleSelectStylePreset(action.data.stylePresetId);
break;
case 'sendToCanvas':
handleSendToCanvas(action.data.imageName);
break;
case 'useAllParameters':
handleUseAllMetadata(action.data.imageName);
break;
case 'goToDestination':
handleGoToDestination(action.data.destination);
break;
default:
break;
}
}, [
handleSendToCanvas,
handleUseAllMetadata,
action,
handleLoadWorkflow,
handleSelectStylePreset,
handleGoToDestination,
handleLoadWorkflow,
didParseOpenAPISchema,
]);
};

View File

@@ -4,7 +4,7 @@ import type { AppStartListening } from 'app/store/middleware/listenerMiddleware'
import { buildAdHocPostProcessingGraph } from 'features/nodes/util/graph/buildAdHocPostProcessingGraph';
import { toast } from 'features/toast/toast';
import { t } from 'i18next';
import { queueApi } from 'services/api/endpoints/queue';
import { enqueueMutationFixedCacheKeyOptions, queueApi } from 'services/api/endpoints/queue';
import type { BatchConfig, ImageDTO } from 'services/api/types';
import type { JsonObject } from 'type-fest';
@@ -32,9 +32,7 @@ export const addAdHocPostProcessingRequestedListener = (startAppListening: AppSt
try {
const req = dispatch(
queueApi.endpoints.enqueueBatch.initiate(enqueueBatchArg, {
fixedCacheKey: 'enqueueBatch',
})
queueApi.endpoints.enqueueBatch.initiate(enqueueBatchArg, enqueueMutationFixedCacheKeyOptions)
);
const enqueueResult = await req.unwrap();
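This listener and the three enqueue listeners that follow all make the same substitution: the inline `{ fixedCacheKey: 'enqueueBatch' }` argument is replaced by a shared `enqueueMutationFixedCacheKeyOptions` constant imported from 'services/api/endpoints/queue'. The constant's definition is not shown in this compare view; based on the inline object it replaces, it presumably looks like the following sketch (the exact export is an assumption, not part of the diff):

// Presumed definition in services/api/endpoints/queue (not shown in this compare view):
// centralizes the fixed cache key previously inlined at each enqueueBatch call site.
export const enqueueMutationFixedCacheKeyOptions = {
  fixedCacheKey: 'enqueueBatch',
} as const;

In RTK Query, a shared fixedCacheKey makes separate mutation triggers reuse one cache entry, so enqueue state can be observed consistently no matter which listener dispatched the request; extracting the options object keeps that key in a single place.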

View File

@@ -13,7 +13,7 @@ import { buildSDXLGraph } from 'features/nodes/util/graph/generation/buildSDXLGr
import type { Graph } from 'features/nodes/util/graph/generation/Graph';
import { toast } from 'features/toast/toast';
import { serializeError } from 'serialize-error';
import { queueApi } from 'services/api/endpoints/queue';
import { enqueueMutationFixedCacheKeyOptions, queueApi } from 'services/api/endpoints/queue';
import type { Invocation } from 'services/api/types';
import { assert, AssertionError } from 'tsafe';
import type { JsonObject } from 'type-fest';
@@ -91,9 +91,7 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
}
const req = dispatch(
queueApi.endpoints.enqueueBatch.initiate(prepareBatchResult.value, {
fixedCacheKey: 'enqueueBatch',
})
queueApi.endpoints.enqueueBatch.initiate(prepareBatchResult.value, enqueueMutationFixedCacheKeyOptions)
);
req.reset();

View File

@@ -6,7 +6,7 @@ import { isImageFieldCollectionInputInstance } from 'features/nodes/types/field'
import { isInvocationNode } from 'features/nodes/types/invocation';
import { buildNodesGraph } from 'features/nodes/util/graph/buildNodesGraph';
import { buildWorkflowWithValidation } from 'features/nodes/util/workflow/buildWorkflow';
import { queueApi } from 'services/api/endpoints/queue';
import { enqueueMutationFixedCacheKeyOptions, queueApi } from 'services/api/endpoints/queue';
import type { Batch, BatchConfig } from 'services/api/types';
const log = logger('workflows');
@@ -70,11 +70,7 @@ export const addEnqueueRequestedNodes = (startAppListening: AppStartListening) =
prepend: action.payload.prepend,
};
const req = dispatch(
queueApi.endpoints.enqueueBatch.initiate(batchConfig, {
fixedCacheKey: 'enqueueBatch',
})
);
const req = dispatch(queueApi.endpoints.enqueueBatch.initiate(batchConfig, enqueueMutationFixedCacheKeyOptions));
try {
await req.unwrap();
} finally {

View File

@@ -2,7 +2,7 @@ import { enqueueRequested } from 'app/store/actions';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { prepareLinearUIBatch } from 'features/nodes/util/graph/buildLinearBatchConfig';
import { buildMultidiffusionUpscaleGraph } from 'features/nodes/util/graph/buildMultidiffusionUpscaleGraph';
import { queueApi } from 'services/api/endpoints/queue';
import { enqueueMutationFixedCacheKeyOptions, queueApi } from 'services/api/endpoints/queue';
export const addEnqueueRequestedUpscale = (startAppListening: AppStartListening) => {
startAppListening({
@@ -16,11 +16,7 @@ export const addEnqueueRequestedUpscale = (startAppListening: AppStartListening)
const batchConfig = prepareLinearUIBatch(state, g, prepend, noise, posCond, 'upscaling', 'gallery');
const req = dispatch(
queueApi.endpoints.enqueueBatch.initiate(batchConfig, {
fixedCacheKey: 'enqueueBatch',
})
);
const req = dispatch(queueApi.endpoints.enqueueBatch.initiate(batchConfig, enqueueMutationFixedCacheKeyOptions));
try {
await req.unwrap();
} finally {

View File

@@ -25,9 +25,7 @@ export type AppFeature =
| 'invocationCache'
| 'bulkDownload'
| 'starterModels'
| 'hfToken'
| 'invocationProgressAlert';
| 'hfToken';
/**
* A disable-able Stable Diffusion feature
*/

View File

@@ -0,0 +1,42 @@
import { MenuItem } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import {
useNewCanvasSession,
useNewGallerySession,
} from 'features/controlLayers/components/NewSessionConfirmationAlertDialog';
import { canvasReset } from 'features/controlLayers/store/actions';
import { paramsReset } from 'features/controlLayers/store/paramsSlice';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiArrowsCounterClockwiseBold, PiFilePlusBold } from 'react-icons/pi';
export const SessionMenuItems = memo(() => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const { newGallerySessionWithDialog } = useNewGallerySession();
const { newCanvasSessionWithDialog } = useNewCanvasSession();
const resetCanvasLayers = useCallback(() => {
dispatch(canvasReset());
}, [dispatch]);
const resetGenerationSettings = useCallback(() => {
dispatch(paramsReset());
}, [dispatch]);
return (
<>
<MenuItem icon={<PiFilePlusBold />} onClick={newGallerySessionWithDialog}>
{t('controlLayers.newGallerySession')}
</MenuItem>
<MenuItem icon={<PiFilePlusBold />} onClick={newCanvasSessionWithDialog}>
{t('controlLayers.newCanvasSession')}
</MenuItem>
<MenuItem icon={<PiArrowsCounterClockwiseBold />} onClick={resetCanvasLayers}>
{t('controlLayers.resetCanvasLayers')}
</MenuItem>
<MenuItem icon={<PiArrowsCounterClockwiseBold />} onClick={resetGenerationSettings}>
{t('controlLayers.resetGenerationSettings')}
</MenuItem>
</>
);
});
SessionMenuItems.displayName = 'SessionMenuItems';
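SessionMenuItems is a new shared component, and this compare view does not show where it is consumed. A minimal usage sketch follows, assuming @invoke-ai/ui-library re-exports Chakra-style Menu primitives alongside the MenuItem used above; the menu wrapper, import path, and consumer component are assumptions for illustration only:

import { Button, Menu, MenuButton, MenuList } from '@invoke-ai/ui-library'; // assumed exports
import { SessionMenuItems } from 'features/controlLayers/components/SessionMenuItems'; // hypothetical path

// Hypothetical consumer: renders the shared new-session / reset actions inside a menu.
export const SessionMenu = () => (
  <Menu>
    <MenuButton as={Button}>Session</MenuButton>
    <MenuList>
      <SessionMenuItems />
    </MenuList>
  </Menu>
);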

View File

@@ -46,7 +46,7 @@ const REGION_TARGETS: Record<FocusRegionName, Set<HTMLElement>> = {
/**
* The currently-focused region or `null` if no region is focused.
*/
const $focusedRegion = atom<FocusRegionName | null>(null);
export const $focusedRegion = atom<FocusRegionName | null>(null);
/**
* A map of focus regions to atoms that indicate if that region is focused.

View File

@@ -1,6 +1,6 @@
import { useAppDispatch } from 'app/store/storeHooks';
import { useClearQueue } from 'features/queue/components/ClearQueueConfirmationAlertDialog';
import { useCancelCurrentQueueItem } from 'features/queue/hooks/useCancelCurrentQueueItem';
import { useClearQueue } from 'features/queue/hooks/useClearQueue';
import { useInvoke } from 'features/queue/hooks/useInvoke';
import { useRegisteredHotkeys } from 'features/system/components/HotkeysModal/useHotkeyData';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';

View File

@@ -1,387 +0,0 @@
import { useStore } from '@nanostores/react';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { $true } from 'app/store/nanostores/util';
import { useAppSelector } from 'app/store/storeHooks';
import { useCanvasManagerSafe } from 'features/controlLayers/contexts/CanvasManagerProviderGate';
import { selectParamsSlice } from 'features/controlLayers/store/paramsSlice';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import { selectDynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { getShouldProcessPrompt } from 'features/dynamicPrompts/util/getShouldProcessPrompt';
import { $templates } from 'features/nodes/store/nodesSlice';
import { selectNodesSlice } from 'features/nodes/store/selectors';
import type { Templates } from 'features/nodes/store/types';
import { selectWorkflowSettingsSlice } from 'features/nodes/store/workflowSettingsSlice';
import { isImageFieldCollectionInputInstance, isImageFieldCollectionInputTemplate } from 'features/nodes/types/field';
import { isInvocationNode } from 'features/nodes/types/invocation';
import { selectUpscaleSlice } from 'features/parameters/store/upscaleSlice';
import { selectConfigSlice } from 'features/system/store/configSlice';
import { selectSystemSlice } from 'features/system/store/systemSlice';
import { selectActiveTab } from 'features/ui/store/uiSelectors';
import i18n from 'i18next';
import { forEach, upperFirst } from 'lodash-es';
import { useMemo } from 'react';
import { getConnectedEdges } from 'reactflow';
import { $isConnected } from 'services/events/stores';
const LAYER_TYPE_TO_TKEY = {
reference_image: 'controlLayers.referenceImage',
inpaint_mask: 'controlLayers.inpaintMask',
regional_guidance: 'controlLayers.regionalGuidance',
raster_layer: 'controlLayers.rasterLayer',
control_layer: 'controlLayers.controlLayer',
} as const;
const createSelector = (arg: {
templates: Templates;
isConnected: boolean;
canvasIsFiltering: boolean;
canvasIsTransforming: boolean;
canvasIsRasterizing: boolean;
canvasIsCompositing: boolean;
canvasIsSelectingObject: boolean;
}) => {
const {
templates,
isConnected,
canvasIsFiltering,
canvasIsTransforming,
canvasIsRasterizing,
canvasIsCompositing,
canvasIsSelectingObject,
} = arg;
return createMemoizedSelector(
[
selectSystemSlice,
selectNodesSlice,
selectWorkflowSettingsSlice,
selectDynamicPromptsSlice,
selectCanvasSlice,
selectParamsSlice,
selectUpscaleSlice,
selectConfigSlice,
selectActiveTab,
],
(system, nodes, workflowSettings, dynamicPrompts, canvas, params, upscale, config, activeTabName) => {
const { bbox } = canvas;
const { model, positivePrompt } = params;
const reasons: { prefix?: string; content: string }[] = [];
// Cannot generate if not connected
if (!isConnected) {
reasons.push({ content: i18n.t('parameters.invoke.systemDisconnected') });
}
if (activeTabName === 'workflows') {
if (workflowSettings.shouldValidateGraph) {
if (!nodes.nodes.length) {
reasons.push({ content: i18n.t('parameters.invoke.noNodesInGraph') });
}
nodes.nodes.forEach((node) => {
if (!isInvocationNode(node)) {
return;
}
const nodeTemplate = templates[node.data.type];
if (!nodeTemplate) {
// Node type not found
reasons.push({ content: i18n.t('parameters.invoke.missingNodeTemplate') });
return;
}
const connectedEdges = getConnectedEdges([node], nodes.edges);
forEach(node.data.inputs, (field) => {
const fieldTemplate = nodeTemplate.inputs[field.name];
const hasConnection = connectedEdges.some(
(edge) => edge.target === node.id && edge.targetHandle === field.name
);
if (!fieldTemplate) {
reasons.push({ content: i18n.t('parameters.invoke.missingFieldTemplate') });
return;
}
const baseTKeyOptions = {
nodeLabel: node.data.label || nodeTemplate.title,
fieldLabel: field.label || fieldTemplate.title,
};
if (fieldTemplate.required && field.value === undefined && !hasConnection) {
reasons.push({ content: i18n.t('parameters.invoke.missingInputForField', baseTKeyOptions) });
return;
} else if (
field.value &&
isImageFieldCollectionInputInstance(field) &&
isImageFieldCollectionInputTemplate(fieldTemplate)
) {
// Image collections may have min or max items to validate
// TODO(psyche): generalize this to other collection types
if (fieldTemplate.minItems !== undefined && fieldTemplate.minItems > 0 && field.value.length === 0) {
reasons.push({ content: i18n.t('parameters.invoke.collectionEmpty', baseTKeyOptions) });
return;
}
if (fieldTemplate.minItems !== undefined && field.value.length < fieldTemplate.minItems) {
reasons.push({
content: i18n.t('parameters.invoke.collectionTooFewItems', {
...baseTKeyOptions,
size: field.value.length,
minItems: fieldTemplate.minItems,
}),
});
return;
}
if (fieldTemplate.maxItems !== undefined && field.value.length > fieldTemplate.maxItems) {
reasons.push({
content: i18n.t('parameters.invoke.collectionTooManyItems', {
...baseTKeyOptions,
size: field.value.length,
maxItems: fieldTemplate.maxItems,
}),
});
return;
}
}
});
});
}
} else if (activeTabName === 'upscaling') {
if (!upscale.upscaleInitialImage) {
reasons.push({ content: i18n.t('upscaling.missingUpscaleInitialImage') });
} else if (config.maxUpscaleDimension) {
const { width, height } = upscale.upscaleInitialImage;
const { scale } = upscale;
const maxPixels = config.maxUpscaleDimension ** 2;
const upscaledPixels = width * scale * height * scale;
if (upscaledPixels > maxPixels) {
reasons.push({ content: i18n.t('upscaling.exceedsMaxSize') });
}
}
if (model && !['sd-1', 'sdxl'].includes(model.base)) {
// When we are using an unsupported model, do not add the other warnings
reasons.push({ content: i18n.t('upscaling.incompatibleBaseModel') });
} else {
// Using a compatible model, add all warnings
if (!model) {
reasons.push({ content: i18n.t('parameters.invoke.noModelSelected') });
}
if (!upscale.upscaleModel) {
reasons.push({ content: i18n.t('upscaling.missingUpscaleModel') });
}
if (!upscale.tileControlnetModel) {
reasons.push({ content: i18n.t('upscaling.missingTileControlNetModel') });
}
}
} else {
if (canvasIsFiltering) {
reasons.push({ content: i18n.t('parameters.invoke.canvasIsFiltering') });
}
if (canvasIsTransforming) {
reasons.push({ content: i18n.t('parameters.invoke.canvasIsTransforming') });
}
if (canvasIsRasterizing) {
reasons.push({ content: i18n.t('parameters.invoke.canvasIsRasterizing') });
}
if (canvasIsCompositing) {
reasons.push({ content: i18n.t('parameters.invoke.canvasIsCompositing') });
}
if (canvasIsSelectingObject) {
reasons.push({ content: i18n.t('parameters.invoke.canvasIsSelectingObject') });
}
if (dynamicPrompts.prompts.length === 0 && getShouldProcessPrompt(positivePrompt)) {
reasons.push({ content: i18n.t('parameters.invoke.noPrompts') });
}
if (!model) {
reasons.push({ content: i18n.t('parameters.invoke.noModelSelected') });
}
if (model?.base === 'flux') {
if (!params.t5EncoderModel) {
reasons.push({ content: i18n.t('parameters.invoke.noT5EncoderModelSelected') });
}
if (!params.clipEmbedModel) {
reasons.push({ content: i18n.t('parameters.invoke.noCLIPEmbedModelSelected') });
}
if (!params.fluxVAE) {
reasons.push({ content: i18n.t('parameters.invoke.noFLUXVAEModelSelected') });
}
if (bbox.scaleMethod === 'none') {
if (bbox.rect.width % 16 !== 0) {
reasons.push({
content: i18n.t('parameters.invoke.fluxModelIncompatibleBboxWidth', { width: bbox.rect.width }),
});
}
if (bbox.rect.height % 16 !== 0) {
reasons.push({
content: i18n.t('parameters.invoke.fluxModelIncompatibleBboxHeight', { height: bbox.rect.height }),
});
}
} else {
if (bbox.scaledSize.width % 16 !== 0) {
reasons.push({
content: i18n.t('parameters.invoke.fluxModelIncompatibleScaledBboxWidth', {
width: bbox.scaledSize.width,
}),
});
}
if (bbox.scaledSize.height % 16 !== 0) {
reasons.push({
content: i18n.t('parameters.invoke.fluxModelIncompatibleScaledBboxHeight', {
height: bbox.scaledSize.height,
}),
});
}
}
}
canvas.controlLayers.entities
.filter((controlLayer) => controlLayer.isEnabled)
.forEach((controlLayer, i) => {
const layerLiteral = i18n.t('controlLayers.layer_one');
const layerNumber = i + 1;
const layerType = i18n.t(LAYER_TYPE_TO_TKEY['control_layer']);
const prefix = `${layerLiteral} #${layerNumber} (${layerType})`;
const problems: string[] = [];
// Must have model
if (!controlLayer.controlAdapter.model) {
problems.push(i18n.t('parameters.invoke.layer.controlAdapterNoModelSelected'));
}
// Model base must match
if (controlLayer.controlAdapter.model?.base !== model?.base) {
problems.push(i18n.t('parameters.invoke.layer.controlAdapterIncompatibleBaseModel'));
}
if (problems.length) {
const content = upperFirst(problems.join(', '));
reasons.push({ prefix, content });
}
});
canvas.referenceImages.entities
.filter((entity) => entity.isEnabled)
.forEach((entity, i) => {
const layerLiteral = i18n.t('controlLayers.layer_one');
const layerNumber = i + 1;
const layerType = i18n.t(LAYER_TYPE_TO_TKEY[entity.type]);
const prefix = `${layerLiteral} #${layerNumber} (${layerType})`;
const problems: string[] = [];
// Must have model
if (!entity.ipAdapter.model) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterNoModelSelected'));
}
// Model base must match
if (entity.ipAdapter.model?.base !== model?.base) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterIncompatibleBaseModel'));
}
// Must have an image
if (!entity.ipAdapter.image) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterNoImageSelected'));
}
if (problems.length) {
const content = upperFirst(problems.join(', '));
reasons.push({ prefix, content });
}
});
canvas.regionalGuidance.entities
.filter((entity) => entity.isEnabled)
.forEach((entity, i) => {
const layerLiteral = i18n.t('controlLayers.layer_one');
const layerNumber = i + 1;
const layerType = i18n.t(LAYER_TYPE_TO_TKEY[entity.type]);
const prefix = `${layerLiteral} #${layerNumber} (${layerType})`;
const problems: string[] = [];
// Must have a region
if (entity.objects.length === 0) {
problems.push(i18n.t('parameters.invoke.layer.rgNoRegion'));
}
// Must have at least 1 prompt or IP Adapter
if (
entity.positivePrompt === null &&
entity.negativePrompt === null &&
entity.referenceImages.length === 0
) {
problems.push(i18n.t('parameters.invoke.layer.rgNoPromptsOrIPAdapters'));
}
entity.referenceImages.forEach(({ ipAdapter }) => {
// Must have model
if (!ipAdapter.model) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterNoModelSelected'));
}
// Model base must match
if (ipAdapter.model?.base !== model?.base) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterIncompatibleBaseModel'));
}
// Must have an image
if (!ipAdapter.image) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterNoImageSelected'));
}
});
if (problems.length) {
const content = upperFirst(problems.join(', '));
reasons.push({ prefix, content });
}
});
canvas.rasterLayers.entities
.filter((entity) => entity.isEnabled)
.forEach((entity, i) => {
const layerLiteral = i18n.t('controlLayers.layer_one');
const layerNumber = i + 1;
const layerType = i18n.t(LAYER_TYPE_TO_TKEY[entity.type]);
const prefix = `${layerLiteral} #${layerNumber} (${layerType})`;
const problems: string[] = [];
if (problems.length) {
const content = upperFirst(problems.join(', '));
reasons.push({ prefix, content });
}
});
}
return { isReady: !reasons.length, reasons };
}
);
};
export const useIsReadyToEnqueue = () => {
const templates = useStore($templates);
const isConnected = useStore($isConnected);
const canvasManager = useCanvasManagerSafe();
const canvasIsFiltering = useStore(canvasManager?.stateApi.$isFiltering ?? $true);
const canvasIsTransforming = useStore(canvasManager?.stateApi.$isTransforming ?? $true);
const canvasIsRasterizing = useStore(canvasManager?.stateApi.$isRasterizing ?? $true);
const canvasIsSelectingObject = useStore(canvasManager?.stateApi.$isSegmenting ?? $true);
const canvasIsCompositing = useStore(canvasManager?.compositor.$isBusy ?? $true);
const selector = useMemo(
() =>
createSelector({
templates,
isConnected,
canvasIsFiltering,
canvasIsTransforming,
canvasIsRasterizing,
canvasIsCompositing,
canvasIsSelectingObject,
}),
[
templates,
isConnected,
canvasIsFiltering,
canvasIsTransforming,
canvasIsRasterizing,
canvasIsCompositing,
canvasIsSelectingObject,
]
);
const value = useAppSelector(selector);
return value;
};

View File

@@ -63,7 +63,7 @@ export const CanvasAddEntityButtons = memo(() => {
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addRegionalGuidance}
isDisabled={isFLUX || isSD3}
isDisabled={isSD3}
>
{t('controlLayers.regionalGuidance')}
</Button>

View File

@@ -2,7 +2,6 @@ import { Alert, AlertDescription, AlertIcon, AlertTitle } from '@invoke-ai/ui-li
import { useStore } from '@nanostores/react';
import { useAppSelector } from 'app/store/storeHooks';
import { useDeferredModelLoadingInvocationProgressMessage } from 'features/controlLayers/hooks/useDeferredModelLoadingInvocationProgressMessage';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import { selectIsLocal } from 'features/system/store/configSlice';
import { selectSystemShouldShowInvocationProgressDetail } from 'features/system/store/systemSlice';
import { memo } from 'react';
@@ -44,20 +43,14 @@ const CanvasAlertsInvocationProgressContentCommercial = memo(() => {
CanvasAlertsInvocationProgressContentCommercial.displayName = 'CanvasAlertsInvocationProgressContentCommercial';
export const CanvasAlertsInvocationProgress = memo(() => {
const isProgressMessageAlertEnabled = useFeatureStatus('invocationProgressAlert');
const shouldShowInvocationProgressDetail = useAppSelector(selectSystemShouldShowInvocationProgressDetail);
const isLocal = useAppSelector(selectIsLocal);
// The alert is disabled at the system level
if (!isProgressMessageAlertEnabled) {
return null;
}
if (!isLocal) {
return <CanvasAlertsInvocationProgressContentCommercial />;
}
// The alert is disabled at the user level
// OSS user setting
if (!shouldShowInvocationProgressDetail) {
return null;
}

View File

@@ -49,7 +49,7 @@ export const EntityListGlobalActionBarAddLayerMenu = memo(() => {
<MenuItem icon={<PiPlusBold />} onClick={addInpaintMask}>
{t('controlLayers.inpaintMask')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addRegionalGuidance} isDisabled={isFLUX || isSD3}>
<MenuItem icon={<PiPlusBold />} onClick={addRegionalGuidance} isDisabled={isSD3}>
{t('controlLayers.regionalGuidance')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addRegionalReferenceImage} isDisabled={isFLUX || isSD3}>

View File

@@ -1,8 +1,10 @@
import type { ComboboxOnChange } from '@invoke-ai/ui-library';
import { Combobox, FormControl, FormLabel } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { InformationalPopover } from 'common/components/InformationalPopover/InformationalPopover';
import type { IPMethodV2 } from 'features/controlLayers/store/types';
import { isIPMethodV2 } from 'features/controlLayers/store/types';
import { selectSystemShouldEnableModelDescriptions } from 'features/system/store/systemSlice';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { assert } from 'tsafe';
@@ -14,13 +16,27 @@ type Props = {
export const IPAdapterMethod = memo(({ method, onChange }: Props) => {
const { t } = useTranslation();
const shouldShowModelDescriptions = useAppSelector(selectSystemShouldEnableModelDescriptions);
const options: { label: string; value: IPMethodV2 }[] = useMemo(
() => [
{ label: t('controlLayers.ipAdapterMethod.full'), value: 'full' },
{ label: t('controlLayers.ipAdapterMethod.style'), value: 'style' },
{ label: t('controlLayers.ipAdapterMethod.composition'), value: 'composition' },
{
label: t('controlLayers.ipAdapterMethod.full'),
value: 'full',
description: shouldShowModelDescriptions ? t('controlLayers.ipAdapterMethod.fullDesc') : undefined,
},
{
label: t('controlLayers.ipAdapterMethod.style'),
value: 'style',
description: shouldShowModelDescriptions ? t('controlLayers.ipAdapterMethod.styleDesc') : undefined,
},
{
label: t('controlLayers.ipAdapterMethod.composition'),
value: 'composition',
description: shouldShowModelDescriptions ? t('controlLayers.ipAdapterMethod.compositionDesc') : undefined,
},
],
[t]
[t, shouldShowModelDescriptions]
);
const _onChange = useCallback<ComboboxOnChange>(
(v) => {

View File

@@ -5,6 +5,7 @@ import { BeginEndStepPct } from 'features/controlLayers/components/common/BeginE
import { CanvasEntitySettingsWrapper } from 'features/controlLayers/components/common/CanvasEntitySettingsWrapper';
import { Weight } from 'features/controlLayers/components/common/Weight';
import { IPAdapterMethod } from 'features/controlLayers/components/IPAdapter/IPAdapterMethod';
import { IPAdapterSettingsEmptyState } from 'features/controlLayers/components/IPAdapter/IPAdapterSettingsEmptyState';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import { usePullBboxIntoGlobalReferenceImage } from 'features/controlLayers/hooks/saveCanvasHooks';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
@@ -17,7 +18,7 @@ import {
referenceImageIPAdapterWeightChanged,
} from 'features/controlLayers/store/canvasSlice';
import { selectIsFLUX } from 'features/controlLayers/store/paramsSlice';
import { selectCanvasSlice, selectEntityOrThrow } from 'features/controlLayers/store/selectors';
import { selectCanvasSlice, selectEntity, selectEntityOrThrow } from 'features/controlLayers/store/selectors';
import type { CanvasEntityIdentifier, CLIPVisionModelV2, IPMethodV2 } from 'features/controlLayers/store/types';
import type { SetGlobalReferenceImageDndTargetData } from 'features/dnd/dnd';
import { setGlobalReferenceImageDndTarget } from 'features/dnd/dnd';
@@ -35,7 +36,7 @@ const buildSelectIPAdapter = (entityIdentifier: CanvasEntityIdentifier<'referenc
(canvas) => selectEntityOrThrow(canvas, entityIdentifier, 'IPAdapterSettings').ipAdapter
);
export const IPAdapterSettings = memo(() => {
const IPAdapterSettingsContent = memo(() => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const entityIdentifier = useEntityIdentifierContext('reference_image');
@@ -134,4 +135,25 @@ export const IPAdapterSettings = memo(() => {
);
});
IPAdapterSettingsContent.displayName = 'IPAdapterSettingsContent';
const buildSelectIPAdapterHasImage = (entityIdentifier: CanvasEntityIdentifier<'reference_image'>) =>
createSelector(selectCanvasSlice, (canvas) => {
const referenceImage = selectEntity(canvas, entityIdentifier);
return !!referenceImage && referenceImage.ipAdapter.image !== null;
});
export const IPAdapterSettings = memo(() => {
const entityIdentifier = useEntityIdentifierContext('reference_image');
const selectIPAdapterHasImage = useMemo(() => buildSelectIPAdapterHasImage(entityIdentifier), [entityIdentifier]);
const hasImage = useAppSelector(selectIPAdapterHasImage);
if (!hasImage) {
return <IPAdapterSettingsEmptyState />;
}
return <IPAdapterSettingsContent />;
});
IPAdapterSettings.displayName = 'IPAdapterSettings';

View File

@@ -0,0 +1,64 @@
import { Button, Flex, Text } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { useImageUploadButton } from 'common/hooks/useImageUploadButton';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
import type { SetGlobalReferenceImageDndTargetData } from 'features/dnd/dnd';
import { setGlobalReferenceImageDndTarget } from 'features/dnd/dnd';
import { DndDropTarget } from 'features/dnd/DndDropTarget';
import { setGlobalReferenceImage } from 'features/imageActions/actions';
import { activeTabCanvasRightPanelChanged } from 'features/ui/store/uiSlice';
import { memo, useCallback, useMemo } from 'react';
import { Trans, useTranslation } from 'react-i18next';
import type { ImageDTO } from 'services/api/types';
export const IPAdapterSettingsEmptyState = memo(() => {
const { t } = useTranslation();
const entityIdentifier = useEntityIdentifierContext('reference_image');
const dispatch = useAppDispatch();
const isBusy = useCanvasIsBusy();
const onUpload = useCallback(
(imageDTO: ImageDTO) => {
setGlobalReferenceImage({ imageDTO, entityIdentifier, dispatch });
},
[dispatch, entityIdentifier]
);
const uploadApi = useImageUploadButton({ onUpload, allowMultiple: false });
const onClickGalleryButton = useCallback(() => {
dispatch(activeTabCanvasRightPanelChanged('gallery'));
}, [dispatch]);
const dndTargetData = useMemo<SetGlobalReferenceImageDndTargetData>(
() => setGlobalReferenceImageDndTarget.getData({ entityIdentifier }),
[entityIdentifier]
);
const components = useMemo(
() => ({
UploadButton: (
<Button isDisabled={isBusy} size="sm" variant="link" color="base.300" {...uploadApi.getUploadButtonProps()} />
),
GalleryButton: (
<Button onClick={onClickGalleryButton} isDisabled={isBusy} size="sm" variant="link" color="base.300" />
),
}),
[isBusy, onClickGalleryButton, uploadApi]
);
return (
<Flex flexDir="column" gap={3} position="relative" w="full" p={4}>
<Text textAlign="center" color="base.300">
<Trans i18nKey="controlLayers.referenceImageEmptyState" components={components} />
</Text>
<input {...uploadApi.getUploadInputProps()} />
<DndDropTarget
dndTarget={setGlobalReferenceImageDndTarget}
dndTargetData={dndTargetData}
label={t('controlLayers.useImage')}
isDisabled={isBusy}
/>
</Flex>
);
});
IPAdapterSettingsEmptyState.displayName = 'IPAdapterSettingsEmptyState';
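A note on the <Trans> usage above: i18next replaces the named tags inside the controlLayers.referenceImageEmptyState string with the matching elements from the components map, so the upload and gallery actions render inline in the localized sentence. The real copy lives in the app's translation files; a hypothetical shape, purely for illustration:
// Hypothetical translation entry (wording is illustrative, not the actual string):
const en = {
  controlLayers: {
    referenceImageEmptyState:
      '<UploadButton>Upload an image</UploadButton> or <GalleryButton>pick one from the gallery</GalleryButton> to use as a reference image.',
  },
};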

View File

@@ -1,27 +1,28 @@
import { IconButton, Tooltip } from '@invoke-ai/ui-library';
import type { IconButtonProps } from '@invoke-ai/ui-library';
import { IconButton } from '@invoke-ai/ui-library';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiTrashSimpleFill } from 'react-icons/pi';
import { PiXBold } from 'react-icons/pi';
type Props = {
type Props = Omit<IconButtonProps, 'aria-label'> & {
onDelete: () => void;
};
export const RegionalGuidanceDeletePromptButton = memo(({ onDelete }: Props) => {
export const RegionalGuidanceDeletePromptButton = memo(({ onDelete, ...rest }: Props) => {
const { t } = useTranslation();
return (
<Tooltip label={t('controlLayers.deletePrompt')}>
<IconButton
variant="link"
aria-label={t('controlLayers.deletePrompt')}
icon={<PiTrashSimpleFill />}
onClick={onDelete}
flexGrow={0}
size="sm"
p={0}
colorScheme="error"
/>
</Tooltip>
<IconButton
tooltip={t('common.delete')}
variant="link"
aria-label={t('common.delete')}
icon={<PiXBold />}
onClick={onDelete}
flexGrow={0}
size="sm"
p={0}
colorScheme="error"
{...rest}
/>
);
});

View File

@@ -6,6 +6,7 @@ import { Weight } from 'features/controlLayers/components/common/Weight';
import { IPAdapterImagePreview } from 'features/controlLayers/components/IPAdapter/IPAdapterImagePreview';
import { IPAdapterMethod } from 'features/controlLayers/components/IPAdapter/IPAdapterMethod';
import { IPAdapterModel } from 'features/controlLayers/components/IPAdapter/IPAdapterModel';
import { RegionalGuidanceIPAdapterSettingsEmptyState } from 'features/controlLayers/components/RegionalGuidance/RegionalGuidanceIPAdapterSettingsEmptyState';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import { usePullBboxIntoRegionalGuidanceReferenceImage } from 'features/controlLayers/hooks/saveCanvasHooks';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
@@ -19,12 +20,12 @@ import {
rgIPAdapterWeightChanged,
} from 'features/controlLayers/store/canvasSlice';
import { selectCanvasSlice, selectRegionalGuidanceReferenceImage } from 'features/controlLayers/store/selectors';
import type { CLIPVisionModelV2, IPMethodV2 } from 'features/controlLayers/store/types';
import type { CanvasEntityIdentifier, CLIPVisionModelV2, IPMethodV2 } from 'features/controlLayers/store/types';
import type { SetRegionalGuidanceReferenceImageDndTargetData } from 'features/dnd/dnd';
import { setRegionalGuidanceReferenceImageDndTarget } from 'features/dnd/dnd';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiBoundingBoxBold, PiTrashSimpleFill } from 'react-icons/pi';
import { PiBoundingBoxBold, PiXBold } from 'react-icons/pi';
import type { ImageDTO, IPAdapterModelConfig } from 'services/api/types';
import { assert } from 'tsafe';
@@ -32,7 +33,7 @@ type Props = {
referenceImageId: string;
};
export const RegionalGuidanceIPAdapterSettings = memo(({ referenceImageId }: Props) => {
const RegionalGuidanceIPAdapterSettingsContent = memo(({ referenceImageId }: Props) => {
const entityIdentifier = useEntityIdentifierContext('regional_guidance');
const { t } = useTranslation();
const dispatch = useAppDispatch();
@@ -115,7 +116,7 @@ export const RegionalGuidanceIPAdapterSettings = memo(({ referenceImageId }: Pro
size="sm"
variant="link"
alignSelf="stretch"
icon={<PiTrashSimpleFill />}
icon={<PiXBold />}
tooltip={t('controlLayers.deleteReferenceImage')}
aria-label={t('controlLayers.deleteReferenceImage')}
onClick={onDeleteIPAdapter}
@@ -161,4 +162,31 @@ export const RegionalGuidanceIPAdapterSettings = memo(({ referenceImageId }: Pro
);
});
RegionalGuidanceIPAdapterSettingsContent.displayName = 'RegionalGuidanceIPAdapterSettingsContent';
const buildSelectIPAdapterHasImage = (
entityIdentifier: CanvasEntityIdentifier<'regional_guidance'>,
referenceImageId: string
) =>
createSelector(selectCanvasSlice, (canvas) => {
const referenceImage = selectRegionalGuidanceReferenceImage(canvas, entityIdentifier, referenceImageId);
return !!referenceImage && referenceImage.ipAdapter.image !== null;
});
export const RegionalGuidanceIPAdapterSettings = memo(({ referenceImageId }: Props) => {
const entityIdentifier = useEntityIdentifierContext('regional_guidance');
const selectIPAdapterHasImage = useMemo(
() => buildSelectIPAdapterHasImage(entityIdentifier, referenceImageId),
[entityIdentifier, referenceImageId]
);
const hasImage = useAppSelector(selectIPAdapterHasImage);
if (!hasImage) {
return <RegionalGuidanceIPAdapterSettingsEmptyState referenceImageId={referenceImageId} />;
}
return <RegionalGuidanceIPAdapterSettingsContent referenceImageId={referenceImageId} />;
});
RegionalGuidanceIPAdapterSettings.displayName = 'RegionalGuidanceIPAdapterSettings';

View File

@@ -0,0 +1,99 @@
import { Button, Flex, IconButton, Spacer, Text } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { useImageUploadButton } from 'common/hooks/useImageUploadButton';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
import { rgIPAdapterDeleted } from 'features/controlLayers/store/canvasSlice';
import type { SetRegionalGuidanceReferenceImageDndTargetData } from 'features/dnd/dnd';
import { setRegionalGuidanceReferenceImageDndTarget } from 'features/dnd/dnd';
import { DndDropTarget } from 'features/dnd/DndDropTarget';
import { setRegionalGuidanceReferenceImage } from 'features/imageActions/actions';
import { activeTabCanvasRightPanelChanged } from 'features/ui/store/uiSlice';
import { memo, useCallback, useMemo } from 'react';
import { Trans, useTranslation } from 'react-i18next';
import { PiXBold } from 'react-icons/pi';
import type { ImageDTO } from 'services/api/types';
type Props = {
referenceImageId: string;
};
export const RegionalGuidanceIPAdapterSettingsEmptyState = memo(({ referenceImageId }: Props) => {
const { t } = useTranslation();
const entityIdentifier = useEntityIdentifierContext('regional_guidance');
const dispatch = useAppDispatch();
const isBusy = useCanvasIsBusy();
const onUpload = useCallback(
(imageDTO: ImageDTO) => {
setRegionalGuidanceReferenceImage({ imageDTO, entityIdentifier, referenceImageId, dispatch });
},
[dispatch, entityIdentifier, referenceImageId]
);
const uploadApi = useImageUploadButton({ onUpload, allowMultiple: false });
const onClickGalleryButton = useCallback(() => {
dispatch(activeTabCanvasRightPanelChanged('gallery'));
}, [dispatch]);
const onDeleteIPAdapter = useCallback(() => {
dispatch(rgIPAdapterDeleted({ entityIdentifier, referenceImageId }));
}, [dispatch, entityIdentifier, referenceImageId]);
const dndTargetData = useMemo<SetRegionalGuidanceReferenceImageDndTargetData>(
() =>
setRegionalGuidanceReferenceImageDndTarget.getData({
entityIdentifier,
referenceImageId,
}),
[entityIdentifier, referenceImageId]
);
return (
<Flex flexDir="column" gap={2} position="relative" w="full">
<Flex alignItems="center" gap={2}>
<Text fontWeight="semibold" color="base.400">
{t('controlLayers.referenceImage')}
</Text>
<Spacer />
<IconButton
size="sm"
variant="link"
alignSelf="stretch"
icon={<PiXBold />}
tooltip={t('controlLayers.deleteReferenceImage')}
aria-label={t('controlLayers.deleteReferenceImage')}
onClick={onDeleteIPAdapter}
colorScheme="error"
/>
</Flex>
<Flex alignItems="center" gap={2} p={4}>
<Text textAlign="center" color="base.300">
<Trans
i18nKey="controlLayers.referenceImageEmptyState"
components={{
UploadButton: (
<Button
isDisabled={isBusy}
size="sm"
variant="link"
color="base.300"
{...uploadApi.getUploadButtonProps()}
/>
),
GalleryButton: (
<Button onClick={onClickGalleryButton} isDisabled={isBusy} size="sm" variant="link" color="base.300" />
),
}}
/>
</Text>
</Flex>
<input {...uploadApi.getUploadInputProps()} />
<DndDropTarget
dndTarget={setRegionalGuidanceReferenceImageDndTarget}
dndTargetData={dndTargetData}
label={t('controlLayers.useImage')}
isDisabled={isBusy}
/>
</Flex>
);
});
RegionalGuidanceIPAdapterSettingsEmptyState.displayName = 'RegionalGuidanceIPAdapterSettingsEmptyState';

View File

@@ -5,6 +5,7 @@ import { StagingAreaToolbarDiscardSelectedButton } from 'features/controlLayers/
import { StagingAreaToolbarImageCountButton } from 'features/controlLayers/components/StagingArea/StagingAreaToolbarImageCountButton';
import { StagingAreaToolbarNextButton } from 'features/controlLayers/components/StagingArea/StagingAreaToolbarNextButton';
import { StagingAreaToolbarPrevButton } from 'features/controlLayers/components/StagingArea/StagingAreaToolbarPrevButton';
import { StagingAreaToolbarSaveAsMenu } from 'features/controlLayers/components/StagingArea/StagingAreaToolbarSaveAsMenu';
import { StagingAreaToolbarSaveSelectedToGalleryButton } from 'features/controlLayers/components/StagingArea/StagingAreaToolbarSaveSelectedToGalleryButton';
import { StagingAreaToolbarToggleShowResultsButton } from 'features/controlLayers/components/StagingArea/StagingAreaToolbarToggleShowResultsButton';
import { memo } from 'react';
@@ -21,6 +22,7 @@ export const StagingAreaToolbar = memo(() => {
<StagingAreaToolbarAcceptButton />
<StagingAreaToolbarToggleShowResultsButton />
<StagingAreaToolbarSaveSelectedToGalleryButton />
<StagingAreaToolbarSaveAsMenu />
<StagingAreaToolbarDiscardSelectedButton />
<StagingAreaToolbarDiscardAllButton />
</ButtonGroup>

View File

@@ -0,0 +1,136 @@
import { IconButton, Menu, MenuButton, MenuItem, MenuList } from '@invoke-ai/ui-library';
import { useAppStore } from 'app/store/nanostores/store';
import { useAppSelector } from 'app/store/storeHooks';
import { NewLayerIcon } from 'features/controlLayers/components/common/icons';
import { selectSelectedImage } from 'features/controlLayers/store/canvasStagingAreaSlice';
import { createNewCanvasEntityFromImage } from 'features/imageActions/actions';
import { toast } from 'features/toast/toast';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiDotsThreeBold } from 'react-icons/pi';
import { imageDTOToFile, uploadImage } from 'services/api/endpoints/images';
const uploadImageArg = { image_category: 'general', is_intermediate: true, silent: true } as const;
export const StagingAreaToolbarSaveAsMenu = memo(() => {
const { t } = useTranslation();
const selectedImage = useAppSelector(selectSelectedImage);
const store = useAppStore();
const onClickNewRasterLayerFromImage = useCallback(async () => {
if (!selectedImage) {
return;
}
const { dispatch, getState } = store;
const file = await imageDTOToFile(selectedImage.imageDTO);
const imageDTO = await uploadImage({ file, ...uploadImageArg });
createNewCanvasEntityFromImage({
imageDTO,
type: 'raster_layer',
dispatch,
getState,
overrides: { isEnabled: false }, // We are adding the layer while staging, so it should be disabled by default
});
toast({
id: 'SENT_TO_CANVAS',
title: t('toast.sentToCanvas'),
status: 'success',
});
}, [selectedImage, store, t]);
const onClickNewControlLayerFromImage = useCallback(async () => {
if (!selectedImage) {
return;
}
const { dispatch, getState } = store;
const file = await imageDTOToFile(selectedImage.imageDTO);
const imageDTO = await uploadImage({ file, ...uploadImageArg });
createNewCanvasEntityFromImage({
imageDTO,
type: 'control_layer',
dispatch,
getState,
overrides: { isEnabled: false }, // We are adding the layer while staging, so it should be disabled by default
});
toast({
id: 'SENT_TO_CANVAS',
title: t('toast.sentToCanvas'),
status: 'success',
});
}, [selectedImage, store, t]);
const onClickNewInpaintMaskFromImage = useCallback(async () => {
if (!selectedImage) {
return;
}
const { dispatch, getState } = store;
const file = await imageDTOToFile(selectedImage.imageDTO);
const imageDTO = await uploadImage({ file, ...uploadImageArg });
createNewCanvasEntityFromImage({
imageDTO,
type: 'inpaint_mask',
dispatch,
getState,
overrides: { isEnabled: false }, // We are adding the layer while staging, so it should be disabled by default
});
toast({
id: 'SENT_TO_CANVAS',
title: t('toast.sentToCanvas'),
status: 'success',
});
}, [selectedImage, store, t]);
const onClickNewRegionalGuidanceFromImage = useCallback(async () => {
if (!selectedImage) {
return;
}
const { dispatch, getState } = store;
const file = await imageDTOToFile(selectedImage.imageDTO);
const imageDTO = await uploadImage({ file, ...uploadImageArg });
createNewCanvasEntityFromImage({
imageDTO,
type: 'regional_guidance',
dispatch,
getState,
overrides: { isEnabled: false }, // We are adding the layer while staging, so it should be disabled by default
});
toast({
id: 'SENT_TO_CANVAS',
title: t('toast.sentToCanvas'),
status: 'success',
});
}, [selectedImage, store, t]);
return (
<Menu>
<MenuButton
as={IconButton}
aria-label={t('controlLayers.newLayerFromImage')}
tooltip={t('controlLayers.newLayerFromImage')}
icon={<PiDotsThreeBold />}
colorScheme="invokeBlue"
isDisabled={!selectedImage}
/>
<MenuList>
<MenuItem icon={<NewLayerIcon />} onClickCapture={onClickNewInpaintMaskFromImage} isDisabled={!selectedImage}>
{t('controlLayers.inpaintMask')}
</MenuItem>
<MenuItem
icon={<NewLayerIcon />}
onClickCapture={onClickNewRegionalGuidanceFromImage}
isDisabled={!selectedImage}
>
{t('controlLayers.regionalGuidance')}
</MenuItem>
<MenuItem icon={<NewLayerIcon />} onClickCapture={onClickNewControlLayerFromImage} isDisabled={!selectedImage}>
{t('controlLayers.controlLayer')}
</MenuItem>
<MenuItem icon={<NewLayerIcon />} onClickCapture={onClickNewRasterLayerFromImage} isDisabled={!selectedImage}>
{t('controlLayers.rasterLayer')}
</MenuItem>
</MenuList>
</Menu>
);
});
StagingAreaToolbarSaveAsMenu.displayName = 'StagingAreaToolbarSaveAsMenu';
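The four callbacks above differ only in the entity type passed to createNewCanvasEntityFromImage. If the duplication grows, a shared hook along these lines would collapse them; everything referenced here is already imported in the file above, and only the helper itself is new (a sketch, not part of the diff):
// Sketch: one callback factory instead of four near-identical handlers.
const useNewEntityFromStagingImage = (
  type: 'inpaint_mask' | 'regional_guidance' | 'control_layer' | 'raster_layer'
) => {
  const { t } = useTranslation();
  const store = useAppStore();
  const selectedImage = useAppSelector(selectSelectedImage);
  return useCallback(async () => {
    if (!selectedImage) {
      return;
    }
    const { dispatch, getState } = store;
    // Re-upload the staged image so the new layer owns an independent copy of it.
    const file = await imageDTOToFile(selectedImage.imageDTO);
    const imageDTO = await uploadImage({ file, ...uploadImageArg });
    createNewCanvasEntityFromImage({ imageDTO, type, dispatch, getState, overrides: { isEnabled: false } });
    toast({ id: 'SENT_TO_CANVAS', title: t('toast.sentToCanvas'), status: 'success' });
  }, [selectedImage, store, t, type]);
};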

View File

@@ -7,7 +7,7 @@ import { toast } from 'features/toast/toast';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiFloppyDiskBold } from 'react-icons/pi';
import { uploadImage } from 'services/api/endpoints/images';
import { imageDTOToFile, uploadImage } from 'services/api/endpoints/images';
const TOAST_ID = 'SAVE_STAGING_AREA_IMAGE_TO_GALLERY';
@@ -25,11 +25,8 @@ export const StagingAreaToolbarSaveSelectedToGalleryButton = memo(() => {
// To save the image to the gallery, we will download it and re-upload it. This allows the user to delete the image
// from the gallery without borking the canvas, which may need this image to exist.
const result = await withResultAsync(async () => {
// Download the image
const res = await fetch(selectedImage.imageDTO.image_url);
const blob = await res.blob();
// Create a new file with the same name, which we will upload
const file = new File([blob], `copy_of_${selectedImage.imageDTO.image_name}`, { type: 'image/png' });
const file = await imageDTOToFile(selectedImage.imageDTO);
await uploadImage({
file,

View File
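The imageDTOToFile helper adopted in the hunk above replaces the inlined download-and-wrap code that was removed. Its implementation is not shown in this diff; a plausible sketch, inferred from the lines it replaces:
import type { ImageDTO } from 'services/api/types';
// Sketch only: download the image and wrap it in a File so it can be re-uploaded under a new name.
export const imageDTOToFile = async (imageDTO: ImageDTO): Promise<File> => {
  const res = await fetch(imageDTO.image_url);
  const blob = await res.blob();
  return new File([blob], `copy_of_${imageDTO.image_name}`, { type: 'image/png' });
};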

@@ -4,8 +4,8 @@ import { CanvasSettingsPopover } from 'features/controlLayers/components/Setting
import { ToolColorPicker } from 'features/controlLayers/components/Tool/ToolFillColorPicker';
import { ToolSettings } from 'features/controlLayers/components/Tool/ToolSettings';
import { CanvasToolbarFitBboxToLayersButton } from 'features/controlLayers/components/Toolbar/CanvasToolbarFitBboxToLayersButton';
import { CanvasToolbarNewSessionMenuButton } from 'features/controlLayers/components/Toolbar/CanvasToolbarNewSessionMenuButton';
import { CanvasToolbarRedoButton } from 'features/controlLayers/components/Toolbar/CanvasToolbarRedoButton';
import { CanvasToolbarResetCanvasButton } from 'features/controlLayers/components/Toolbar/CanvasToolbarResetCanvasButton';
import { CanvasToolbarResetViewButton } from 'features/controlLayers/components/Toolbar/CanvasToolbarResetViewButton';
import { CanvasToolbarSaveToGalleryButton } from 'features/controlLayers/components/Toolbar/CanvasToolbarSaveToGalleryButton';
import { CanvasToolbarScale } from 'features/controlLayers/components/Toolbar/CanvasToolbarScale';
@@ -43,7 +43,7 @@ export const CanvasToolbar = memo(() => {
<CanvasToolbarSaveToGalleryButton />
<CanvasToolbarUndoButton />
<CanvasToolbarRedoButton />
<CanvasToolbarResetCanvasButton />
<CanvasToolbarNewSessionMenuButton />
<CanvasSettingsPopover />
</Flex>
</Flex>

View File

@@ -0,0 +1,25 @@
import { IconButton, Menu, MenuButton, MenuList } from '@invoke-ai/ui-library';
import { SessionMenuItems } from 'common/components/SessionMenuItems';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiFilePlusBold } from 'react-icons/pi';
export const CanvasToolbarNewSessionMenuButton = memo(() => {
const { t } = useTranslation();
return (
<Menu placement="bottom-end">
<MenuButton
as={IconButton}
aria-label={t('controlLayers.newSession')}
icon={<PiFilePlusBold />}
variant="link"
alignSelf="stretch"
/>
<MenuList>
<SessionMenuItems />
</MenuList>
</Menu>
);
});
CanvasToolbarNewSessionMenuButton.displayName = 'CanvasToolbarNewSessionMenuButton';

View File

@@ -1,30 +0,0 @@
import { IconButton } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { useCanvasManager } from 'features/controlLayers/contexts/CanvasManagerProviderGate';
import { canvasReset } from 'features/controlLayers/store/actions';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiTrashBold } from 'react-icons/pi';
export const CanvasToolbarResetCanvasButton = memo(() => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const canvasManager = useCanvasManager();
const onClick = useCallback(() => {
dispatch(canvasReset());
canvasManager.stage.fitLayersToStage();
}, [canvasManager.stage, dispatch]);
return (
<IconButton
aria-label={t('controlLayers.resetCanvas')}
tooltip={t('controlLayers.resetCanvas')}
onClick={onClick}
colorScheme="error"
icon={<PiTrashBold />}
variant="link"
alignSelf="stretch"
/>
);
});
CanvasToolbarResetCanvasButton.displayName = 'CanvasToolbarResetCanvasButton';

View File

@@ -1,6 +1,7 @@
import { Flex } from '@invoke-ai/ui-library';
import { CanvasEntityDeleteButton } from 'features/controlLayers/components/common/CanvasEntityDeleteButton';
import { CanvasEntityEnabledToggle } from 'features/controlLayers/components/common/CanvasEntityEnabledToggle';
import { CanvasEntityHeaderWarnings } from 'features/controlLayers/components/common/CanvasEntityHeaderWarnings';
import { CanvasEntityIsBookmarkedForQuickSwitchToggle } from 'features/controlLayers/components/common/CanvasEntityIsBookmarkedForQuickSwitchToggle';
import { CanvasEntityIsLockedToggle } from 'features/controlLayers/components/common/CanvasEntityIsLockedToggle';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
@@ -11,6 +12,7 @@ export const CanvasEntityHeaderCommonActions = memo(() => {
return (
<Flex alignSelf="stretch">
<CanvasEntityHeaderWarnings />
<CanvasEntityIsBookmarkedForQuickSwitchToggle />
{entityIdentifier.type !== 'reference_image' && <CanvasEntityIsLockedToggle />}
<CanvasEntityEnabledToggle />

View File

@@ -0,0 +1,101 @@
import { Flex, IconButton, ListItem, Text, UnorderedList } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { EMPTY_ARRAY } from 'app/store/constants';
import { useAppSelector } from 'app/store/storeHooks';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import { useEntityIsEnabled } from 'features/controlLayers/hooks/useEntityIsEnabled';
import { selectModel } from 'features/controlLayers/store/paramsSlice';
import { selectCanvasSlice, selectEntityOrThrow } from 'features/controlLayers/store/selectors';
import type { CanvasEntityIdentifier } from 'features/controlLayers/store/types';
import {
getControlLayerWarnings,
getGlobalReferenceImageWarnings,
getInpaintMaskWarnings,
getRasterLayerWarnings,
getRegionalGuidanceWarnings,
} from 'features/controlLayers/store/validators';
import type { TFunction } from 'i18next';
import { upperFirst } from 'lodash-es';
import { memo, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiWarningBold } from 'react-icons/pi';
import type { Equals } from 'tsafe';
import { assert } from 'tsafe';
const buildSelectWarnings = (entityIdentifier: CanvasEntityIdentifier, t: TFunction) => {
return createSelector(selectCanvasSlice, selectModel, (canvas, model) => {
// This component is used within a <CanvasEntityStateGate /> so we can safely assume that the entity exists.
// Should never throw.
const entity = selectEntityOrThrow(canvas, entityIdentifier, 'CanvasEntityHeaderWarnings');
let warnings: string[] = [];
const entityType = entity.type;
if (entityType === 'control_layer') {
warnings = getControlLayerWarnings(entity, model);
} else if (entityType === 'regional_guidance') {
warnings = getRegionalGuidanceWarnings(entity, model);
} else if (entityType === 'inpaint_mask') {
warnings = getInpaintMaskWarnings(entity, model);
} else if (entityType === 'raster_layer') {
warnings = getRasterLayerWarnings(entity, model);
} else if (entityType === 'reference_image') {
warnings = getGlobalReferenceImageWarnings(entity, model);
} else {
assert<Equals<typeof entityType, never>>(false, 'Unexpected entity type');
}
// Return a stable reference if there are no warnings
if (warnings.length === 0) {
return EMPTY_ARRAY;
}
return warnings.map((w) => t(w)).map(upperFirst);
});
};
export const CanvasEntityHeaderWarnings = memo(() => {
const entityIdentifier = useEntityIdentifierContext();
const { t } = useTranslation();
const isEnabled = useEntityIsEnabled(entityIdentifier);
const selectWarnings = useMemo(() => buildSelectWarnings(entityIdentifier, t), [entityIdentifier, t]);
const warnings = useAppSelector(selectWarnings);
if (warnings.length === 0) {
return null;
}
return (
// Using IconButton here because it matches the styling of the actual buttons in the header without any finagling,
// but it's not a button
<IconButton
as="span"
size="sm"
variant="link"
alignSelf="stretch"
aria-label="warnings"
tooltip={<TooltipContent warnings={warnings} />}
icon={<PiWarningBold />}
colorScheme="warning"
isDisabled={!isEnabled}
/>
);
});
CanvasEntityHeaderWarnings.displayName = 'CanvasEntityHeaderWarnings';
const TooltipContent = memo((props: { warnings: string[] }) => {
const { t } = useTranslation();
return (
<Flex flexDir="column">
<Text>{t('controlLayers.warnings.problemsFound')}:</Text>
<UnorderedList>
{props.warnings.map((warning, index) => (
<ListItem key={index}>{warning}</ListItem>
))}
</UnorderedList>
</Flex>
);
});
TooltipContent.displayName = 'TooltipContent';
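The EMPTY_ARRAY early return above is a deliberate stable-reference optimization: useAppSelector re-renders the component when the selected value changes by reference, so returning a fresh array for the common "no warnings" case would re-render the header on every canvas state change. A small sketch of the idea (EMPTY_ARRAY is assumed to be a shared module-level constant from app/store/constants; computeWarnings is a hypothetical stand-in for the per-type branches above):
// Always return the same constant when there is nothing to report.
export const EMPTY_ARRAY: never[] = [];
const selectWarnings = createSelector(selectCanvasSlice, (canvas) => {
  const warnings = computeWarnings(canvas); // hypothetical helper for illustration
  return warnings.length === 0 ? EMPTY_ARRAY : warnings;
});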

View File

@@ -29,7 +29,13 @@ import { modelConfigsAdapterSelectors, selectModelConfigsQuery } from 'services/
import type { ControlNetModelConfig, IPAdapterModelConfig, T2IAdapterModelConfig } from 'services/api/types';
import { isControlNetOrT2IAdapterModelConfig, isIPAdapterModelConfig } from 'services/api/types';
/** @knipignore */
/**
* Selects the default control adapter configuration based on the model configurations and the base.
*
* Be sure to clone the output of this selector before modifying it!
*
* @knipignore
*/
export const selectDefaultControlAdapter = createSelector(
selectModelConfigsQuery,
selectBase,
@@ -52,6 +58,11 @@ export const selectDefaultControlAdapter = createSelector(
}
);
/**
* Selects the default IP adapter configuration based on the model configurations and the base.
*
* Be sure to clone the output of this selector before modifying it!
*/
export const selectDefaultIPAdapter = createSelector(
selectModelConfigsQuery,
selectBase,
@@ -117,7 +128,9 @@ export const useAddRegionalReferenceImage = () => {
const func = useCallback(() => {
const overrides: Partial<CanvasRegionalGuidanceState> = {
referenceImages: [{ id: getPrefixedId('regional_guidance_reference_image'), ipAdapter: defaultIPAdapter }],
referenceImages: [
{ id: getPrefixedId('regional_guidance_reference_image'), ipAdapter: deepClone(defaultIPAdapter) },
],
};
dispatch(rgAdded({ isSelected: true, overrides }));
}, [defaultIPAdapter, dispatch]);
@@ -129,7 +142,7 @@ export const useAddGlobalReferenceImage = () => {
const dispatch = useAppDispatch();
const defaultIPAdapter = useAppSelector(selectDefaultIPAdapter);
const func = useCallback(() => {
const overrides = { ipAdapter: defaultIPAdapter };
const overrides = { ipAdapter: deepClone(defaultIPAdapter) };
dispatch(referenceImageAdded({ isSelected: true, overrides }));
}, [defaultIPAdapter, dispatch]);
@@ -140,7 +153,7 @@ export const useAddRegionalGuidanceIPAdapter = (entityIdentifier: CanvasEntityId
const dispatch = useAppDispatch();
const defaultIPAdapter = useAppSelector(selectDefaultIPAdapter);
const func = useCallback(() => {
dispatch(rgIPAdapterAdded({ entityIdentifier, overrides: { ipAdapter: defaultIPAdapter } }));
dispatch(rgIPAdapterAdded({ entityIdentifier, overrides: { ipAdapter: deepClone(defaultIPAdapter) } }));
}, [defaultIPAdapter, dispatch, entityIdentifier]);
return func;
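The deepClone calls added throughout this hunk, and the "clone before modifying" docstrings above, address the same hazard: createSelector memoizes its result, so every consumer of selectDefaultIPAdapter receives the same object reference, and mutating it in one place silently changes the default for everyone until the selector's inputs change. A minimal illustration (getState, imageDTO and imageDTOToImageWithDims are the same names used elsewhere in this PR; values are arbitrary):
// BAD: mutates the memoized selector result shared by all consumers.
const shared = selectDefaultIPAdapter(getState());
shared.image = imageDTOToImageWithDims(imageDTO);
// GOOD: clone first, then mutate the copy - this is what the hunk above switches to.
const ipAdapter = deepClone(selectDefaultIPAdapter(getState()));
ipAdapter.image = imageDTOToImageWithDims(imageDTO);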

View File

@@ -9,6 +9,7 @@ import {
getEmptyRect,
getKonvaNodeDebugAttrs,
getPrefixedId,
offsetCoord,
} from 'features/controlLayers/konva/util';
import { selectSelectedEntityIdentifier } from 'features/controlLayers/store/selectors';
import type { Coordinate, Rect, RectWithRotation } from 'features/controlLayers/store/types';
@@ -558,6 +559,25 @@ export class CanvasEntityTransformer extends CanvasModuleBase {
this.manager.stateApi.setEntityPosition({ entityIdentifier: this.parent.entityIdentifier, position });
};
nudgeBy = (offset: Coordinate) => {
// We can immediately move both the proxy rect and layer objects so we don't have to wait for a redux round-trip,
// which can take up to 2ms in my testing. This is optional, but can make the interaction feel more responsive,
// especially on lower-end devices.
// Get the relative position of the layer's objects, according to konva
const position = this.konva.proxyRect.position();
// Offset the position by the nudge amount
const newPosition = offsetCoord(position, offset);
// Set the new position of the proxy rect - this doesn't move the layer objects - only the outline rect
this.konva.proxyRect.setAttrs(newPosition);
// Sync the layer objects with the proxy rect - moves them to the new position
this.syncObjectGroupWithProxyRect();
// Push to redux. The state change will do a round-trip, and eventually make it back to the canvas classes, at
// which point the layer will be moved to the new position.
this.manager.stateApi.moveEntityBy({ entityIdentifier: this.parent.entityIdentifier, offset });
this.log.trace({ offset }, 'Nudged');
};
syncObjectGroupWithProxyRect = () => {
this.parent.renderer.konva.objectGroup.setAttrs({
x: this.konva.proxyRect.x(),

View File
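offsetCoord, newly imported in the nudgeBy hunk above, is assumed to be simple component-wise addition of a coordinate and an offset. A sketch of that assumed behaviour (not the canonical implementation in konva/util):
import type { Coordinate } from 'features/controlLayers/store/types';
export const offsetCoord = (coord: Coordinate, offset: Coordinate): Coordinate => ({
  x: coord.x + offset.x,
  y: coord.y + offset.y,
});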

@@ -20,7 +20,8 @@ import {
controlLayerAdded,
entityBrushLineAdded,
entityEraserLineAdded,
entityMoved,
entityMovedBy,
entityMovedTo,
entityRasterized,
entityRectAdded,
entityReset,
@@ -40,7 +41,8 @@ import type {
EntityBrushLineAddedPayload,
EntityEraserLineAddedPayload,
EntityIdentifierPayload,
EntityMovedPayload,
EntityMovedByPayload,
EntityMovedToPayload,
EntityRasterizedPayload,
EntityRectAddedPayload,
Rect,
@@ -51,7 +53,7 @@ import type { Graph } from 'features/nodes/util/graph/generation/Graph';
import { atom, computed } from 'nanostores';
import type { Logger } from 'roarr';
import { getImageDTO } from 'services/api/endpoints/images';
import { queueApi } from 'services/api/endpoints/queue';
import { enqueueMutationFixedCacheKeyOptions, queueApi } from 'services/api/endpoints/queue';
import type { BatchConfig, ImageDTO, S } from 'services/api/types';
import { QueueError } from 'services/events/errors';
import type { Param0 } from 'tsafe';
@@ -139,8 +141,15 @@ export class CanvasStateApiModule extends CanvasModuleBase {
/**
* Updates an entity's position, pushing state to redux.
*/
setEntityPosition = (arg: EntityMovedPayload) => {
this.store.dispatch(entityMoved(arg));
setEntityPosition = (arg: EntityMovedToPayload) => {
this.store.dispatch(entityMovedTo(arg));
};
/**
* Moves an entity by the given offset, pushing state to redux.
*/
moveEntityBy = (arg: EntityMovedByPayload) => {
this.store.dispatch(entityMovedBy(arg));
};
/**
@@ -402,7 +411,7 @@ export class CanvasStateApiModule extends CanvasModuleBase {
queueApi.endpoints.enqueueBatch.initiate(batch, {
// Use the same cache key for all enqueueBatch requests, so that all consumers of this query get the same status
// updates.
fixedCacheKey: 'enqueueBatch',
...enqueueMutationFixedCacheKeyOptions,
// We do not need RTK to track this request in the store
track: false,
})
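The enqueueMutationFixedCacheKeyOptions spread replaces the hard-coded fixedCacheKey string the comment describes. Judging from the line it replaces, the shared constant exported from services/api/endpoints/queue presumably looks something like this (a sketch, not the verified source):
export const enqueueMutationFixedCacheKeyOptions = {
  fixedCacheKey: 'enqueueBatch',
} as const;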

View File

@@ -1,9 +1,24 @@
import { $focusedRegion } from 'common/hooks/focus';
import type { CanvasManager } from 'features/controlLayers/konva/CanvasManager';
import { CanvasModuleBase } from 'features/controlLayers/konva/CanvasModuleBase';
import type { CanvasToolModule } from 'features/controlLayers/konva/CanvasTool/CanvasToolModule';
import { getPrefixedId } from 'features/controlLayers/konva/util';
import type { Coordinate } from 'features/controlLayers/store/types';
import type { Logger } from 'roarr';
type CanvasMoveToolModuleConfig = {
/**
* The number of pixels to nudge the entity by when moving with the arrow keys.
*/
NUDGE_PX: number;
};
const DEFAULT_CONFIG: CanvasMoveToolModuleConfig = {
NUDGE_PX: 1,
};
type NudgeKey = 'ArrowLeft' | 'ArrowRight' | 'ArrowUp' | 'ArrowDown';
export class CanvasMoveToolModule extends CanvasModuleBase {
readonly type = 'move_tool';
readonly id: string;
@@ -12,6 +27,9 @@ export class CanvasMoveToolModule extends CanvasModuleBase {
readonly manager: CanvasManager;
readonly log: Logger;
config: CanvasMoveToolModuleConfig = DEFAULT_CONFIG;
nudgeOffsets: Record<NudgeKey, Coordinate>;
constructor(parent: CanvasToolModule) {
super();
this.id = getPrefixedId(this.type);
@@ -19,8 +37,18 @@ export class CanvasMoveToolModule extends CanvasModuleBase {
this.manager = this.parent.manager;
this.path = this.manager.buildPath(this);
this.log = this.manager.buildLogger(this);
this.log.debug('Creating module');
this.nudgeOffsets = {
ArrowLeft: { x: -this.config.NUDGE_PX, y: 0 },
ArrowRight: { x: this.config.NUDGE_PX, y: 0 },
ArrowUp: { x: 0, y: -this.config.NUDGE_PX },
ArrowDown: { x: 0, y: this.config.NUDGE_PX },
};
}
isNudgeKey(key: string): key is NudgeKey {
return this.nudgeOffsets[key as NudgeKey] !== undefined;
}
syncCursorStyle = () => {
@@ -32,4 +60,45 @@ export class CanvasMoveToolModule extends CanvasModuleBase {
selectedEntity.transformer.syncCursorStyle();
}
};
nudge = (nudgeKey: NudgeKey) => {
if ($focusedRegion.get() !== 'canvas') {
return;
}
const selectedEntity = this.manager.stateApi.getSelectedEntityAdapter();
if (!selectedEntity) {
return;
}
if (
selectedEntity.$isDisabled.get() ||
selectedEntity.$isEmpty.get() ||
selectedEntity.$isLocked.get() ||
selectedEntity.$isEntityTypeHidden.get()
) {
return;
}
const isBusy = this.manager.$isBusy.get();
const isMoveToolSelected = this.parent.$tool.get() === 'move';
const isThisEntityTransforming = this.manager.stateApi.$transformingAdapter.get() === selectedEntity;
if (isBusy) {
// When the canvas is busy, we shouldn't allow nudging - except when the canvas is busy transforming the selected
// entity. Nudging is allowed during transformation, regardless of the selected tool.
if (!isThisEntityTransforming) {
return;
}
} else {
// Otherwise, the canvas is not busy, and we should only allow nudging when the move tool is selected.
if (!isMoveToolSelected) {
return;
}
}
const offset = this.nudgeOffsets[nudgeKey];
selectedEntity.transformer.nudgeBy(offset);
};
}

View File

@@ -528,11 +528,16 @@ export class CanvasToolModule extends CanvasModuleBase {
};
onKeyDown = (e: KeyboardEvent) => {
- if (e.repeat) {
+ if (e.target instanceof HTMLInputElement || e.target instanceof HTMLTextAreaElement) {
return;
}
- if (e.target instanceof HTMLInputElement || e.target instanceof HTMLTextAreaElement) {
+ // Handle nudging - must be before repeat, as we may want to catch repeating keys
+ if (this.tools.move.isNudgeKey(e.key)) {
+ this.tools.move.nudge(e.key);
+ }
+ if (e.repeat) {
return;
}

View File

@@ -18,6 +18,7 @@ import type {
CanvasEntityType,
CanvasInpaintMaskState,
CanvasMetadata,
EntityMovedByPayload,
FillStyle,
RegionalGuidanceReferenceImageState,
RgbColor,
@@ -51,7 +52,7 @@ import type {
EntityBrushLineAddedPayload,
EntityEraserLineAddedPayload,
EntityIdentifierPayload,
EntityMovedPayload,
EntityMovedToPayload,
EntityRasterizedPayload,
EntityRectAddedPayload,
IPMethodV2,
@@ -1201,7 +1202,7 @@ export const canvasSlice = createSlice({
}
entity.fill.style = style;
},
entityMoved: (state, action: PayloadAction<EntityMovedPayload>) => {
entityMovedTo: (state, action: PayloadAction<EntityMovedToPayload>) => {
const { entityIdentifier, position } = action.payload;
const entity = selectEntity(state, entityIdentifier);
if (!entity) {
@@ -1212,6 +1213,20 @@ export const canvasSlice = createSlice({
entity.position = position;
}
},
entityMovedBy: (state, action: PayloadAction<EntityMovedByPayload>) => {
const { entityIdentifier, offset } = action.payload;
const entity = selectEntity(state, entityIdentifier);
if (!entity) {
return;
}
if (!isRenderableEntity(entity)) {
return;
}
entity.position.x += offset.x;
entity.position.y += offset.y;
},
entityRasterized: (state, action: PayloadAction<EntityRasterizedPayload>) => {
const { entityIdentifier, imageObject, position, replaceObjects, isSelected } = action.payload;
const entity = selectEntity(state, entityIdentifier);
@@ -1505,7 +1520,8 @@ export const {
entityIsLockedToggled,
entityFillColorChanged,
entityFillStyleChanged,
entityMoved,
entityMovedTo,
entityMovedBy,
entityDuplicated,
entityRasterized,
entityBrushLineAdded,

View File
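The canvasSlice hunk above splits the old entityMoved action into entityMovedTo (absolute position, used by the transformer's setEntityPosition) and entityMovedBy (relative offset, used by arrow-key nudging via moveEntityBy). Illustrative dispatches of each, with arbitrary values and the entityIdentifier taken from context:
// Absolute: place the entity at an exact position.
dispatch(entityMovedTo({ entityIdentifier, position: { x: 128, y: 64 } }));
// Relative: shift the entity by an offset, e.g. a one-pixel nudge.
dispatch(entityMovedBy({ entityIdentifier, offset: { x: 1, y: 0 } }));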

@@ -83,7 +83,7 @@ const initialState: ParamsState = {
canvasCoherenceMode: 'Gaussian Blur',
canvasCoherenceMinDenoise: 0,
canvasCoherenceEdgeSize: 16,
infillMethod: 'patchmatch',
infillMethod: 'lama',
infillTileSize: 32,
infillPatchmatchDownscaleSize: 1,
infillColorValue: { r: 0, g: 0, b: 0, a: 1 },
@@ -273,24 +273,27 @@ export const paramsSlice = createSlice({
setCanvasCoherenceMinDenoise: (state, action: PayloadAction<number>) => {
state.canvasCoherenceMinDenoise = action.payload;
},
paramsReset: (state) => resetState(state),
},
extraReducers(builder) {
builder.addMatcher(newSessionRequested, (state) => {
// When a new session is requested, we need to keep the current model selections, plus dependent state
// like VAE precision. Everything else gets reset to default.
const newState = deepClone(initialState);
newState.model = state.model;
newState.vae = state.vae;
newState.fluxVAE = state.fluxVAE;
newState.vaePrecision = state.vaePrecision;
newState.t5EncoderModel = state.t5EncoderModel;
newState.clipEmbedModel = state.clipEmbedModel;
newState.refinerModel = state.refinerModel;
return newState;
});
builder.addMatcher(newSessionRequested, (state) => resetState(state));
},
});
const resetState = (state: ParamsState): ParamsState => {
// When a new session is requested, we need to keep the current model selections, plus dependent state
// like VAE precision. Everything else gets reset to default.
const newState = deepClone(initialState);
newState.model = state.model;
newState.vae = state.vae;
newState.fluxVAE = state.fluxVAE;
newState.vaePrecision = state.vaePrecision;
newState.t5EncoderModel = state.t5EncoderModel;
newState.clipEmbedModel = state.clipEmbedModel;
newState.refinerModel = state.refinerModel;
return newState;
};
export const {
setInfillMethod,
setInfillTileSize,
@@ -334,6 +337,7 @@ export const {
setRefinerNegativeAestheticScore,
setRefinerStart,
modelChanged,
paramsReset,
} = paramsSlice.actions;
/* eslint-disable-next-line @typescript-eslint/no-explicit-any */

View File

@@ -439,7 +439,8 @@ export type EntityIdentifierPayload<
entityIdentifier: CanvasEntityIdentifier<U>;
} & T;
export type EntityMovedPayload = EntityIdentifierPayload<{ position: Coordinate }>;
export type EntityMovedToPayload = EntityIdentifierPayload<{ position: Coordinate }>;
export type EntityMovedByPayload = EntityIdentifierPayload<{ offset: Coordinate }>;
export type EntityBrushLineAddedPayload = EntityIdentifierPayload<{
brushLine: CanvasBrushLineState | CanvasBrushLineWithPressureState;
}>;

View File

@@ -0,0 +1,153 @@
import type {
CanvasControlLayerState,
CanvasInpaintMaskState,
CanvasRasterLayerState,
CanvasReferenceImageState,
CanvasRegionalGuidanceState,
} from 'features/controlLayers/store/types';
import type { ParameterModel } from 'features/parameters/types/parameterSchemas';
const WARNINGS = {
UNSUPPORTED_MODEL: 'controlLayers.warnings.unsupportedModel',
RG_NO_PROMPTS_OR_IP_ADAPTERS: 'controlLayers.warnings.rgNoPromptsOrIPAdapters',
RG_NEGATIVE_PROMPT_NOT_SUPPORTED: 'controlLayers.warnings.rgNegativePromptNotSupported',
RG_REFERENCE_IMAGES_NOT_SUPPORTED: 'controlLayers.warnings.rgReferenceImagesNotSupported',
RG_AUTO_NEGATIVE_NOT_SUPPORTED: 'controlLayers.warnings.rgAutoNegativeNotSupported',
RG_NO_REGION: 'controlLayers.warnings.rgNoRegion',
IP_ADAPTER_NO_MODEL_SELECTED: 'controlLayers.warnings.ipAdapterNoModelSelected',
IP_ADAPTER_INCOMPATIBLE_BASE_MODEL: 'controlLayers.warnings.ipAdapterIncompatibleBaseModel',
IP_ADAPTER_NO_IMAGE_SELECTED: 'controlLayers.warnings.ipAdapterNoImageSelected',
CONTROL_ADAPTER_NO_MODEL_SELECTED: 'controlLayers.warnings.controlAdapterNoModelSelected',
CONTROL_ADAPTER_INCOMPATIBLE_BASE_MODEL: 'controlLayers.warnings.controlAdapterIncompatibleBaseModel',
CONTROL_ADAPTER_NO_CONTROL: 'controlLayers.warnings.controlAdapterNoControl',
} as const;
type WarningTKey = (typeof WARNINGS)[keyof typeof WARNINGS];
export const getRegionalGuidanceWarnings = (
entity: CanvasRegionalGuidanceState,
model: ParameterModel | null
): WarningTKey[] => {
const warnings: WarningTKey[] = [];
if (entity.objects.length === 0) {
// Layer is in empty state
warnings.push(WARNINGS.RG_NO_REGION);
}
if (entity.positivePrompt === null && entity.negativePrompt === null && entity.referenceImages.length === 0) {
// Must have at least 1 prompt or IP Adapter
warnings.push(WARNINGS.RG_NO_PROMPTS_OR_IP_ADAPTERS);
}
if (model) {
if (model.base === 'sd-3' || model.base === 'sd-2') {
// Unsupported model architecture
warnings.push(WARNINGS.UNSUPPORTED_MODEL);
} else if (model.base === 'flux') {
// Some features are not supported for flux models
if (entity.negativePrompt !== null) {
warnings.push(WARNINGS.RG_NEGATIVE_PROMPT_NOT_SUPPORTED);
}
if (entity.referenceImages.length > 0) {
warnings.push(WARNINGS.RG_REFERENCE_IMAGES_NOT_SUPPORTED);
}
if (entity.autoNegative) {
warnings.push(WARNINGS.RG_AUTO_NEGATIVE_NOT_SUPPORTED);
}
} else {
entity.referenceImages.forEach(({ ipAdapter }) => {
if (!ipAdapter.model) {
// No model selected
warnings.push(WARNINGS.IP_ADAPTER_NO_MODEL_SELECTED);
} else if (ipAdapter.model.base !== model.base) {
// Supported model architecture but doesn't match
warnings.push(WARNINGS.IP_ADAPTER_INCOMPATIBLE_BASE_MODEL);
}
if (!ipAdapter.image) {
// No image selected
warnings.push(WARNINGS.IP_ADAPTER_NO_IMAGE_SELECTED);
}
});
}
}
return warnings;
};
export const getGlobalReferenceImageWarnings = (
entity: CanvasReferenceImageState,
model: ParameterModel | null
): WarningTKey[] => {
const warnings: WarningTKey[] = [];
if (!entity.ipAdapter.model) {
// No model selected
warnings.push(WARNINGS.IP_ADAPTER_NO_MODEL_SELECTED);
} else if (model) {
if (model.base === 'sd-3' || model.base === 'sd-2') {
// Unsupported model architecture
warnings.push(WARNINGS.UNSUPPORTED_MODEL);
} else if (entity.ipAdapter.model.base !== model.base) {
// Supported model architecture but doesn't match
warnings.push(WARNINGS.IP_ADAPTER_INCOMPATIBLE_BASE_MODEL);
}
}
if (!entity.ipAdapter.image) {
// No image selected
warnings.push(WARNINGS.IP_ADAPTER_NO_IMAGE_SELECTED);
}
return warnings;
};
export const getControlLayerWarnings = (
entity: CanvasControlLayerState,
model: ParameterModel | null
): WarningTKey[] => {
const warnings: WarningTKey[] = [];
if (entity.objects.length === 0) {
// Layer is in empty state
warnings.push(WARNINGS.CONTROL_ADAPTER_NO_CONTROL);
}
if (!entity.controlAdapter.model) {
// No model selected
warnings.push(WARNINGS.CONTROL_ADAPTER_NO_MODEL_SELECTED);
} else if (model) {
if (model.base === 'sd-3' || model.base === 'sd-2') {
// Unsupported model architecture
warnings.push(WARNINGS.UNSUPPORTED_MODEL);
} else if (entity.controlAdapter.model.base !== model.base) {
// Supported model architecture but doesn't match
warnings.push(WARNINGS.CONTROL_ADAPTER_INCOMPATIBLE_BASE_MODEL);
}
}
return warnings;
};
export const getRasterLayerWarnings = (
_entity: CanvasRasterLayerState,
_model: ParameterModel | null
): WarningTKey[] => {
const warnings: WarningTKey[] = [];
// There are no warnings at the moment for raster layers.
return warnings;
};
export const getInpaintMaskWarnings = (
_entity: CanvasInpaintMaskState,
_model: ParameterModel | null
): WarningTKey[] => {
const warnings: WarningTKey[] = [];
// There are no warnings at the moment for inpaint masks.
return warnings;
};

View File

@@ -7,7 +7,7 @@ const zSeedBehaviour = z.enum(['PER_ITERATION', 'PER_PROMPT']);
type SeedBehaviour = z.infer<typeof zSeedBehaviour>;
export const isSeedBehaviour = (v: unknown): v is SeedBehaviour => zSeedBehaviour.safeParse(v).success;
interface DynamicPromptsState {
export interface DynamicPromptsState {
_version: 1;
maxPrompts: number;
combinatorial: boolean;

View File

@@ -0,0 +1,110 @@
import { Menu, MenuButton, MenuItem, MenuList } from '@invoke-ai/ui-library';
import { useAppStore } from 'app/store/nanostores/store';
import { useAppSelector } from 'app/store/storeHooks';
import { SubMenuButtonContent, useSubMenu } from 'common/hooks/useSubMenu';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
import { selectIsSD3 } from 'features/controlLayers/store/paramsSlice';
import { useImageViewer } from 'features/gallery/components/ImageViewer/useImageViewer';
import { useImageDTOContext } from 'features/gallery/contexts/ImageDTOContext';
import { newCanvasFromImage } from 'features/imageActions/actions';
import { toast } from 'features/toast/toast';
import { setActiveTab } from 'features/ui/store/uiSlice';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiFileBold, PiPlusBold } from 'react-icons/pi';
export const ImageMenuItemNewCanvasFromImageSubMenu = memo(() => {
const { t } = useTranslation();
const subMenu = useSubMenu();
const store = useAppStore();
const imageDTO = useImageDTOContext();
const imageViewer = useImageViewer();
const isBusy = useCanvasIsBusy();
const isSD3 = useAppSelector(selectIsSD3);
const onClickNewCanvasWithRasterLayerFromImage = useCallback(() => {
const { dispatch, getState } = store;
newCanvasFromImage({ imageDTO, withResize: false, type: 'raster_layer', dispatch, getState });
dispatch(setActiveTab('canvas'));
imageViewer.close();
toast({
id: 'SENT_TO_CANVAS',
title: t('toast.sentToCanvas'),
status: 'success',
});
}, [imageDTO, imageViewer, store, t]);
const onClickNewCanvasWithControlLayerFromImage = useCallback(() => {
const { dispatch, getState } = store;
newCanvasFromImage({ imageDTO, withResize: false, type: 'control_layer', dispatch, getState });
dispatch(setActiveTab('canvas'));
imageViewer.close();
toast({
id: 'SENT_TO_CANVAS',
title: t('toast.sentToCanvas'),
status: 'success',
});
}, [imageDTO, imageViewer, store, t]);
const onClickNewCanvasWithRasterLayerFromImageWithResize = useCallback(() => {
const { dispatch, getState } = store;
newCanvasFromImage({ imageDTO, withResize: true, type: 'raster_layer', dispatch, getState });
dispatch(setActiveTab('canvas'));
imageViewer.close();
toast({
id: 'SENT_TO_CANVAS',
title: t('toast.sentToCanvas'),
status: 'success',
});
}, [imageDTO, imageViewer, store, t]);
const onClickNewCanvasWithControlLayerFromImageWithResize = useCallback(() => {
const { dispatch, getState } = store;
newCanvasFromImage({ imageDTO, withResize: true, type: 'control_layer', dispatch, getState });
dispatch(setActiveTab('canvas'));
imageViewer.close();
toast({
id: 'SENT_TO_CANVAS',
title: t('toast.sentToCanvas'),
status: 'success',
});
}, [imageDTO, imageViewer, store, t]);
return (
<MenuItem {...subMenu.parentMenuItemProps} icon={<PiPlusBold />}>
<Menu {...subMenu.menuProps}>
<MenuButton {...subMenu.menuButtonProps}>
<SubMenuButtonContent label={t('controlLayers.newCanvasFromImage')} />
</MenuButton>
<MenuList {...subMenu.menuListProps}>
<MenuItem icon={<PiFileBold />} onClickCapture={onClickNewCanvasWithRasterLayerFromImage} isDisabled={isBusy}>
{t('controlLayers.asRasterLayer')}
</MenuItem>
<MenuItem
icon={<PiFileBold />}
onClickCapture={onClickNewCanvasWithRasterLayerFromImageWithResize}
isDisabled={isBusy}
>
{t('controlLayers.asRasterLayerResize')}
</MenuItem>
<MenuItem
icon={<PiFileBold />}
onClickCapture={onClickNewCanvasWithControlLayerFromImage}
isDisabled={isBusy || isSD3}
>
{t('controlLayers.asControlLayer')}
</MenuItem>
<MenuItem
icon={<PiFileBold />}
onClickCapture={onClickNewCanvasWithControlLayerFromImageWithResize}
isDisabled={isBusy || isSD3}
>
{t('controlLayers.asControlLayerResize')}
</MenuItem>
</MenuList>
</Menu>
</MenuItem>
);
});
ImageMenuItemNewCanvasFromImageSubMenu.displayName = 'ImageMenuItemNewCanvasFromImageSubMenu';

View File

@@ -8,14 +8,14 @@ import { selectIsFLUX, selectIsSD3 } from 'features/controlLayers/store/paramsSl
import { useImageViewer } from 'features/gallery/components/ImageViewer/useImageViewer';
import { useImageDTOContext } from 'features/gallery/contexts/ImageDTOContext';
import { sentImageToCanvas } from 'features/gallery/store/actions';
import { createNewCanvasEntityFromImage, newCanvasFromImage } from 'features/imageActions/actions';
import { createNewCanvasEntityFromImage } from 'features/imageActions/actions';
import { toast } from 'features/toast/toast';
import { setActiveTab } from 'features/ui/store/uiSlice';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiFileBold, PiPlusBold } from 'react-icons/pi';
import { PiPlusBold } from 'react-icons/pi';
export const ImageMenuItemNewFromImageSubMenu = memo(() => {
export const ImageMenuItemNewLayerFromImageSubMenu = memo(() => {
const { t } = useTranslation();
const subMenu = useSubMenu();
const store = useAppStore();
@@ -25,30 +25,6 @@ export const ImageMenuItemNewFromImageSubMenu = memo(() => {
const isFLUX = useAppSelector(selectIsFLUX);
const isSD3 = useAppSelector(selectIsSD3);
const onClickNewCanvasWithRasterLayerFromImage = useCallback(() => {
const { dispatch, getState } = store;
newCanvasFromImage({ imageDTO, type: 'raster_layer', dispatch, getState });
dispatch(setActiveTab('canvas'));
imageViewer.close();
toast({
id: 'SENT_TO_CANVAS',
title: t('toast.sentToCanvas'),
status: 'success',
});
}, [imageDTO, imageViewer, store, t]);
const onClickNewCanvasWithControlLayerFromImage = useCallback(() => {
const { dispatch, getState } = store;
newCanvasFromImage({ imageDTO, type: 'control_layer', dispatch, getState });
dispatch(setActiveTab('canvas'));
imageViewer.close();
toast({
id: 'SENT_TO_CANVAS',
title: t('toast.sentToCanvas'),
status: 'success',
});
}, [imageDTO, imageViewer, store, t]);
const onClickNewRasterLayerFromImage = useCallback(() => {
const { dispatch, getState } = store;
createNewCanvasEntityFromImage({ imageDTO, type: 'raster_layer', dispatch, getState });
@@ -101,23 +77,39 @@ export const ImageMenuItemNewFromImageSubMenu = memo(() => {
});
}, [imageDTO, imageViewer, store, t]);
const onClickNewRegionalReferenceImageFromImage = useCallback(() => {
const { dispatch, getState } = store;
createNewCanvasEntityFromImage({ imageDTO, type: 'regional_guidance_with_reference_image', dispatch, getState });
dispatch(sentImageToCanvas());
dispatch(setActiveTab('canvas'));
imageViewer.close();
toast({
id: 'SENT_TO_CANVAS',
title: t('toast.sentToCanvas'),
status: 'success',
});
}, [imageDTO, imageViewer, store, t]);
const onClickNewGlobalReferenceImageFromImage = useCallback(() => {
const { dispatch, getState } = store;
createNewCanvasEntityFromImage({ imageDTO, type: 'reference_image', dispatch, getState });
dispatch(sentImageToCanvas());
dispatch(setActiveTab('canvas'));
imageViewer.close();
toast({
id: 'SENT_TO_CANVAS',
title: t('toast.sentToCanvas'),
status: 'success',
});
}, [imageDTO, imageViewer, store, t]);
return (
<MenuItem {...subMenu.parentMenuItemProps} icon={<PiPlusBold />}>
<Menu {...subMenu.menuProps}>
<MenuButton {...subMenu.menuButtonProps}>
<SubMenuButtonContent label={t('controlLayers.newFromImage')} />
<SubMenuButtonContent label={t('controlLayers.newLayerFromImage')} />
</MenuButton>
<MenuList {...subMenu.menuListProps}>
<MenuItem icon={<PiFileBold />} onClickCapture={onClickNewCanvasWithRasterLayerFromImage} isDisabled={isBusy}>
{t('controlLayers.canvasAsRasterLayer')}
</MenuItem>
<MenuItem
icon={<PiFileBold />}
onClickCapture={onClickNewCanvasWithControlLayerFromImage}
isDisabled={isBusy || isSD3}
>
{t('controlLayers.canvasAsControlLayer')}
</MenuItem>
<MenuItem icon={<NewLayerIcon />} onClickCapture={onClickNewInpaintMaskFromImage} isDisabled={isBusy}>
{t('controlLayers.inpaintMask')}
</MenuItem>
@@ -138,10 +130,24 @@ export const ImageMenuItemNewFromImageSubMenu = memo(() => {
<MenuItem icon={<NewLayerIcon />} onClickCapture={onClickNewRasterLayerFromImage} isDisabled={isBusy}>
{t('controlLayers.rasterLayer')}
</MenuItem>
<MenuItem
icon={<NewLayerIcon />}
onClickCapture={onClickNewRegionalReferenceImageFromImage}
isDisabled={isBusy}
>
{t('controlLayers.referenceImageRegional')}
</MenuItem>
<MenuItem
icon={<NewLayerIcon />}
onClickCapture={onClickNewGlobalReferenceImageFromImage}
isDisabled={isBusy}
>
{t('controlLayers.referenceImageGlobal')}
</MenuItem>
</MenuList>
</Menu>
</MenuItem>
);
});
ImageMenuItemNewFromImageSubMenu.displayName = 'ImageMenuItemNewFromImageSubMenu';
ImageMenuItemNewLayerFromImageSubMenu.displayName = 'ImageMenuItemNewLayerFromImageSubMenu';

View File

@@ -7,7 +7,8 @@ import { ImageMenuItemDelete } from 'features/gallery/components/ImageContextMen
import { ImageMenuItemDownload } from 'features/gallery/components/ImageContextMenu/ImageMenuItemDownload';
import { ImageMenuItemLoadWorkflow } from 'features/gallery/components/ImageContextMenu/ImageMenuItemLoadWorkflow';
import { ImageMenuItemMetadataRecallActions } from 'features/gallery/components/ImageContextMenu/ImageMenuItemMetadataRecallActions';
import { ImageMenuItemNewFromImageSubMenu } from 'features/gallery/components/ImageContextMenu/ImageMenuItemNewFromImageSubMenu';
import { ImageMenuItemNewCanvasFromImageSubMenu } from 'features/gallery/components/ImageContextMenu/ImageMenuItemNewCanvasFromImageSubMenu';
import { ImageMenuItemNewLayerFromImageSubMenu } from 'features/gallery/components/ImageContextMenu/ImageMenuItemNewLayerFromImageSubMenu';
import { ImageMenuItemOpenInNewTab } from 'features/gallery/components/ImageContextMenu/ImageMenuItemOpenInNewTab';
import { ImageMenuItemOpenInViewer } from 'features/gallery/components/ImageContextMenu/ImageMenuItemOpenInViewer';
import { ImageMenuItemSelectForCompare } from 'features/gallery/components/ImageContextMenu/ImageMenuItemSelectForCompare';
@@ -38,7 +39,8 @@ const SingleSelectionMenuItems = ({ imageDTO }: SingleSelectionMenuItemsProps) =
<MenuDivider />
<ImageMenuItemSendToUpscale />
<CanvasManagerProviderGate>
<ImageMenuItemNewFromImageSubMenu />
<ImageMenuItemNewCanvasFromImageSubMenu />
<ImageMenuItemNewLayerFromImageSubMenu />
</CanvasManagerProviderGate>
<MenuDivider />
<ImageMenuItemChangeBoard />

View File

@@ -20,6 +20,7 @@ import { selectBboxModelBase, selectBboxRect } from 'features/controlLayers/stor
import type {
CanvasControlLayerState,
CanvasEntityIdentifier,
CanvasEntityState,
CanvasEntityType,
CanvasInpaintMaskState,
CanvasRasterLayerState,
@@ -134,14 +135,16 @@ export const createNewCanvasEntityFromImage = (arg: {
type: CanvasEntityType | 'regional_guidance_with_reference_image';
dispatch: AppDispatch;
getState: () => RootState;
overrides?: Partial<Pick<CanvasEntityState, 'isEnabled' | 'isLocked' | 'name'>>;
}) => {
const { type, imageDTO, dispatch, getState } = arg;
const { type, imageDTO, dispatch, getState, overrides: _overrides } = arg;
const state = getState();
const imageObject = imageDTOToImageObject(imageDTO);
const { x, y } = selectBboxRect(state);
const overrides = {
objects: [imageObject],
position: { x, y },
..._overrides,
};
switch (type) {
case 'raster_layer': {
@@ -166,13 +169,13 @@ export const createNewCanvasEntityFromImage = (arg: {
break;
}
case 'reference_image': {
const ipAdapter = selectDefaultIPAdapter(getState());
const ipAdapter = deepClone(selectDefaultIPAdapter(getState()));
ipAdapter.image = imageDTOToImageWithDims(imageDTO);
dispatch(referenceImageAdded({ overrides: { ipAdapter }, isSelected: true }));
break;
}
case 'regional_guidance_with_reference_image': {
const ipAdapter = selectDefaultIPAdapter(getState());
const ipAdapter = deepClone(selectDefaultIPAdapter(getState()));
ipAdapter.image = imageDTOToImageWithDims(imageDTO);
const referenceImages = [{ id: getPrefixedId('regional_guidance_reference_image'), ipAdapter }];
dispatch(rgAdded({ overrides: { referenceImages }, isSelected: true }));
@@ -182,21 +185,24 @@ export const createNewCanvasEntityFromImage = (arg: {
};
/**
* Creates a new canvas with the given image as the initial image, replicating the img2img flow:
* Creates a new canvas with the given image as the only layer:
* - Reset the canvas
* - Resize the bbox to the image's aspect ratio at the optimal size for the selected model
* - Add the image as a raster layer
* - Resizes the layer to fit the bbox using the 'fill' strategy
* - Add the image as a layer of the given type
* - If `withResize`: Resizes the layer to fit the bbox using the 'fill' strategy
*
* This allows the user to immediately generate a new image from the given image without any additional steps.
*
* Using 'raster_layer' for the type and enabling `withResize` replicates the common img2img flow.
*/
export const newCanvasFromImage = (arg: {
imageDTO: ImageDTO;
type: CanvasEntityType | 'regional_guidance_with_reference_image';
withResize: boolean;
dispatch: AppDispatch;
getState: () => RootState;
}) => {
const { type, imageDTO, dispatch, getState } = arg;
const { type, imageDTO, withResize, dispatch, getState } = arg;
const state = getState();
const base = selectBboxModelBase(state);
@@ -229,7 +235,9 @@ export const newCanvasFromImage = (arg: {
objects: [imageObject],
position: { x, y },
} satisfies Partial<CanvasRasterLayerState>;
addInitCallback(overrides.id);
if (withResize) {
addInitCallback(overrides.id);
}
dispatch(canvasReset());
// The `bboxChangedFromCanvas` reducer does no validation! Careful!
dispatch(bboxChangedFromCanvas({ x: 0, y: 0, width, height }));
@@ -243,7 +251,9 @@ export const newCanvasFromImage = (arg: {
position: { x, y },
controlAdapter: deepClone(initialControlNet),
} satisfies Partial<CanvasControlLayerState>;
addInitCallback(overrides.id);
if (withResize) {
addInitCallback(overrides.id);
}
dispatch(canvasReset());
// The `bboxChangedFromCanvas` reducer does no validation! Careful!
dispatch(bboxChangedFromCanvas({ x: 0, y: 0, width, height }));
@@ -256,7 +266,9 @@ export const newCanvasFromImage = (arg: {
objects: [imageObject],
position: { x, y },
} satisfies Partial<CanvasInpaintMaskState>;
addInitCallback(overrides.id);
if (withResize) {
addInitCallback(overrides.id);
}
dispatch(canvasReset());
// The `bboxChangedFromCanvas` reducer does no validation! Careful!
dispatch(bboxChangedFromCanvas({ x: 0, y: 0, width, height }));
@@ -269,7 +281,9 @@ export const newCanvasFromImage = (arg: {
objects: [imageObject],
position: { x, y },
} satisfies Partial<CanvasRegionalGuidanceState>;
addInitCallback(overrides.id);
if (withResize) {
addInitCallback(overrides.id);
}
dispatch(canvasReset());
// The `bboxChangedFromCanvas` reducer does no validation! Careful!
dispatch(bboxChangedFromCanvas({ x: 0, y: 0, width, height }));
@@ -277,14 +291,14 @@ export const newCanvasFromImage = (arg: {
break;
}
case 'reference_image': {
const ipAdapter = selectDefaultIPAdapter(getState());
const ipAdapter = deepClone(selectDefaultIPAdapter(getState()));
ipAdapter.image = imageDTOToImageWithDims(imageDTO);
dispatch(canvasReset());
dispatch(referenceImageAdded({ overrides: { ipAdapter }, isSelected: true }));
break;
}
case 'regional_guidance_with_reference_image': {
const ipAdapter = selectDefaultIPAdapter(getState());
const ipAdapter = deepClone(selectDefaultIPAdapter(getState()));
ipAdapter.image = imageDTOToImageWithDims(imageDTO);
const referenceImages = [{ id: getPrefixedId('regional_guidance_reference_image'), ipAdapter }];
dispatch(canvasReset());
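
A minimal usage sketch of the helpers changed above. The module path for `createNewCanvasEntityFromImage` / `newCanvasFromImage` is not visible in this diff and is assumed here, and both handler functions are hypothetical; the remaining imports appear elsewhere in this diff. Passing `withResize: true` with `type: 'raster_layer'` replicates the img2img flow described in the doc comment.

```ts
// Sketch only -- the import path for the two helpers is assumed, not shown in this diff.
import { createNewCanvasEntityFromImage, newCanvasFromImage } from 'features/imageActions/actions';
import type { AppDispatch, RootState } from 'app/store/store';
import { setActiveTab } from 'features/ui/store/uiSlice';
import type { ImageDTO } from 'services/api/types';

type StoreApi = { dispatch: AppDispatch; getState: () => RootState };

// Hypothetical handler: start a fresh canvas from an image, replicating img2img.
// `withResize: true` fits the new raster layer to the bbox using the 'fill' strategy.
export const sendToNewCanvasAsImg2Img = (imageDTO: ImageDTO, { dispatch, getState }: StoreApi) => {
  newCanvasFromImage({ imageDTO, type: 'raster_layer', withResize: true, dispatch, getState });
  dispatch(setActiveTab('canvas'));
};

// Hypothetical handler: add the image to the existing canvas as a locked, named raster layer,
// using the new `overrides` argument of createNewCanvasEntityFromImage.
export const addAsLockedRasterLayer = (imageDTO: ImageDTO, { dispatch, getState }: StoreApi) => {
  createNewCanvasEntityFromImage({
    imageDTO,
    type: 'raster_layer',
    dispatch,
    getState,
    overrides: { isLocked: true, name: 'Imported image' },
  });
};
```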

View File

@@ -41,7 +41,11 @@ const ClassificationTooltipContent = memo(({ classification }: { classification:
}
if (classification === 'internal') {
return t('nodes.prototypeDesc');
return t('nodes.internalDesc');
}
if (classification === 'special') {
return t('nodes.specialDesc');
}
return null;

View File

@@ -4,7 +4,7 @@ import type { PersistConfig, RootState } from 'app/store/store';
import type { Selector } from 'react-redux';
import { SelectionMode } from 'reactflow';
type WorkflowSettingsState = {
export type WorkflowSettingsState = {
_version: 1;
shouldShowMinimapPanel: boolean;
shouldValidateGraph: boolean;

View File

@@ -1,35 +1,41 @@
import { logger } from 'app/logging/logger';
import { withResultAsync } from 'common/util/result';
import type { CanvasManager } from 'features/controlLayers/konva/CanvasManager';
import type {
CanvasControlLayerState,
ControlNetConfig,
Rect,
T2IAdapterConfig,
} from 'features/controlLayers/store/types';
import type { CanvasControlLayerState, Rect } from 'features/controlLayers/store/types';
import { getControlLayerWarnings } from 'features/controlLayers/store/validators';
import type { Graph } from 'features/nodes/util/graph/generation/Graph';
import type { ParameterModel } from 'features/parameters/types/parameterSchemas';
import { serializeError } from 'serialize-error';
import type { BaseModelType, ImageDTO, Invocation } from 'services/api/types';
import type { ImageDTO, Invocation } from 'services/api/types';
import { assert } from 'tsafe';
const log = logger('system');
type AddControlNetsArg = {
manager: CanvasManager;
entities: CanvasControlLayerState[];
g: Graph;
rect: Rect;
collector: Invocation<'collect'>;
model: ParameterModel;
};
type AddControlNetsResult = {
addedControlNets: number;
};
export const addControlNets = async (
manager: CanvasManager,
layers: CanvasControlLayerState[],
g: Graph,
rect: Rect,
collector: Invocation<'collect'>,
base: BaseModelType
): Promise<AddControlNetsResult> => {
const validControlLayers = layers
.filter((layer) => layer.isEnabled)
.filter((layer) => isValidControlAdapter(layer.controlAdapter, base))
.filter((layer) => layer.controlAdapter.type === 'controlnet');
export const addControlNets = async ({
manager,
entities,
g,
rect,
collector,
model,
}: AddControlNetsArg): Promise<AddControlNetsResult> => {
const validControlLayers = entities
.filter((entity) => entity.isEnabled)
.filter((entity) => entity.controlAdapter.type === 'controlnet')
.filter((entity) => getControlLayerWarnings(entity, model).length === 0);
const result: AddControlNetsResult = {
addedControlNets: 0,
@@ -54,22 +60,31 @@ export const addControlNets = async (
return result;
};
type AddT2IAdaptersArg = {
manager: CanvasManager;
entities: CanvasControlLayerState[];
g: Graph;
rect: Rect;
collector: Invocation<'collect'>;
model: ParameterModel;
};
type AddT2IAdaptersResult = {
addedT2IAdapters: number;
};
export const addT2IAdapters = async (
manager: CanvasManager,
layers: CanvasControlLayerState[],
g: Graph,
rect: Rect,
collector: Invocation<'collect'>,
base: BaseModelType
): Promise<AddT2IAdaptersResult> => {
const validControlLayers = layers
.filter((layer) => layer.isEnabled)
.filter((layer) => isValidControlAdapter(layer.controlAdapter, base))
.filter((layer) => layer.controlAdapter.type === 't2i_adapter');
export const addT2IAdapters = async ({
manager,
entities,
g,
rect,
collector,
model,
}: AddT2IAdaptersArg): Promise<AddT2IAdaptersResult> => {
const validControlLayers = entities
.filter((entity) => entity.isEnabled)
.filter((entity) => entity.controlAdapter.type === 't2i_adapter')
.filter((entity) => getControlLayerWarnings(entity, model).length === 0);
const result: AddT2IAdaptersResult = {
addedT2IAdapters: 0,
@@ -145,11 +160,3 @@ const addT2IAdapterToGraph = (
g.addEdge(t2iAdapter, 't2i_adapter', collector, 'item');
};
const isValidControlAdapter = (controlAdapter: ControlNetConfig | T2IAdapterConfig, base: BaseModelType): boolean => {
// Must have a model
const hasModel = Boolean(controlAdapter.model);
// Model must match the current base model
const modelMatchesBase = controlAdapter.model?.base === base;
return hasModel && modelMatchesBase;
};
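
The positional signatures of `addControlNets` and `addT2IAdapters` are replaced by a single options object, and the local `isValidControlAdapter` check gives way to the shared `getControlLayerWarnings` helper, so graph building and the canvas UI filter layers by the same rules. A minimal sketch of the new call shape, mirroring the graph-builder changes further down; `wireControlNets` is an illustrative wrapper (not in the source) and the import path for `addControlNets` is assumed from the sibling modules named in this diff.

```ts
import type { CanvasManager } from 'features/controlLayers/konva/CanvasManager';
import { getPrefixedId } from 'features/controlLayers/konva/util';
import type { CanvasControlLayerState, Rect } from 'features/controlLayers/store/types';
// Path assumed -- sibling of the addRegions module imported by the graph builders below.
import { addControlNets } from 'features/nodes/util/graph/generation/addControlNets';
import type { Graph } from 'features/nodes/util/graph/generation/Graph';
import type { ParameterModel } from 'features/parameters/types/parameterSchemas';
import type { Invocation } from 'services/api/types';

// Illustrative wrapper: collect the valid ControlNet layers and wire them into the denoise node.
// (The FLUX builder passes its 'flux_denoise' node instead of 'denoise_latents'.)
export const wireControlNets = async (
  manager: CanvasManager,
  entities: CanvasControlLayerState[],
  g: Graph,
  rect: Rect,
  model: ParameterModel,
  denoise: Invocation<'denoise_latents'>
) => {
  const collector = g.addNode({ type: 'collect', id: getPrefixedId('control_net_collector') });
  const { addedControlNets } = await addControlNets({ manager, entities, g, rect, collector, model });
  if (addedControlNets > 0) {
    g.addEdge(collector, 'collection', denoise, 'control');
  } else {
    // No ControlNet layers passed validation -- drop the now-orphaned collector node.
    g.deleteNode(collector.id);
  }
};
```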

View File

@@ -1,19 +1,23 @@
import type { CanvasReferenceImageState } from 'features/controlLayers/store/types';
import { getGlobalReferenceImageWarnings } from 'features/controlLayers/store/validators';
import type { Graph } from 'features/nodes/util/graph/generation/Graph';
import type { BaseModelType, Invocation } from 'services/api/types';
import type { ParameterModel } from 'features/parameters/types/parameterSchemas';
import type { Invocation } from 'services/api/types';
import { assert } from 'tsafe';
type AddIPAdaptersResult = {
addedIPAdapters: number;
};
export const addIPAdapters = (
ipAdapters: CanvasReferenceImageState[],
g: Graph,
collector: Invocation<'collect'>,
base: BaseModelType
): AddIPAdaptersResult => {
const validIPAdapters = ipAdapters.filter((entity) => isValidIPAdapter(entity, base));
type AddIPAdaptersArg = {
entities: CanvasReferenceImageState[];
g: Graph;
collector: Invocation<'collect'>;
model: ParameterModel;
};
export const addIPAdapters = ({ entities, g, collector, model }: AddIPAdaptersArg): AddIPAdaptersResult => {
const validIPAdapters = entities.filter((entity) => getGlobalReferenceImageWarnings(entity, model).length === 0);
const result: AddIPAdaptersResult = {
addedIPAdapters: 0,
@@ -76,11 +80,3 @@ const addIPAdapter = (entity: CanvasReferenceImageState, g: Graph, collector: In
g.addEdge(ipAdapterNode, 'ip_adapter', collector, 'item');
};
const isValidIPAdapter = ({ isEnabled, ipAdapter }: CanvasReferenceImageState, base: BaseModelType): boolean => {
// Must have a model that matches the current base and must have a control image
const hasModel = Boolean(ipAdapter.model);
const modelMatchesBase = ipAdapter.model?.base === base;
const hasImage = Boolean(ipAdapter.image);
return isEnabled && hasModel && modelMatchesBase && hasImage;
};

View File

@@ -3,15 +3,12 @@ import { deepClone } from 'common/util/deepClone';
import { withResultAsync } from 'common/util/result';
import type { CanvasManager } from 'features/controlLayers/konva/CanvasManager';
import { getPrefixedId } from 'features/controlLayers/konva/util';
import type {
CanvasRegionalGuidanceState,
IPAdapterConfig,
Rect,
RegionalGuidanceReferenceImageState,
} from 'features/controlLayers/store/types';
import type { CanvasRegionalGuidanceState, Rect } from 'features/controlLayers/store/types';
import { getRegionalGuidanceWarnings } from 'features/controlLayers/store/validators';
import type { Graph } from 'features/nodes/util/graph/generation/Graph';
import type { ParameterModel } from 'features/parameters/types/parameterSchemas';
import { serializeError } from 'serialize-error';
import type { BaseModelType, Invocation } from 'services/api/types';
import type { Invocation } from 'services/api/types';
import { assert } from 'tsafe';
const log = logger('system');
@@ -23,19 +20,26 @@ type AddedRegionResult = {
addedIPAdapters: number;
};
const isValidRegion = (rg: CanvasRegionalGuidanceState, base: BaseModelType) => {
const isEnabled = rg.isEnabled;
const hasTextPrompt = Boolean(rg.positivePrompt || rg.negativePrompt);
const hasIPAdapter = rg.referenceImages.filter(({ ipAdapter }) => isValidIPAdapter(ipAdapter, base)).length > 0;
return isEnabled && (hasTextPrompt || hasIPAdapter);
type AddRegionsArg = {
manager: CanvasManager;
regions: CanvasRegionalGuidanceState[];
g: Graph;
bbox: Rect;
model: ParameterModel;
posCond: Invocation<'compel' | 'sdxl_compel_prompt' | 'flux_text_encoder'>;
negCond: Invocation<'compel' | 'sdxl_compel_prompt' | 'flux_text_encoder'> | null;
posCondCollect: Invocation<'collect'>;
negCondCollect: Invocation<'collect'> | null;
ipAdapterCollect: Invocation<'collect'>;
};
/**
* Adds regional guidance to the graph
* @param manager The canvas manager
* @param regions Array of regions to add
* @param g The graph to add the layers to
* @param base The base model type
* @param denoise The main denoise node
* @param bbox The bounding box
* @param model The main model
* @param posCond The positive conditioning node
* @param negCond The negative conditioning node
* @param posCondCollect The positive conditioning collector
@@ -44,22 +48,28 @@ const isValidRegion = (rg: CanvasRegionalGuidanceState, base: BaseModelType) =>
* @returns A promise that resolves to the regions that were successfully added to the graph
*/
export const addRegions = async (
manager: CanvasManager,
regions: CanvasRegionalGuidanceState[],
g: Graph,
bbox: Rect,
base: BaseModelType,
denoise: Invocation<'denoise_latents'>,
posCond: Invocation<'compel'> | Invocation<'sdxl_compel_prompt'>,
negCond: Invocation<'compel'> | Invocation<'sdxl_compel_prompt'>,
posCondCollect: Invocation<'collect'>,
negCondCollect: Invocation<'collect'>,
ipAdapterCollect: Invocation<'collect'>
): Promise<AddedRegionResult[]> => {
const isSDXL = base === 'sdxl';
export const addRegions = async ({
manager,
regions,
g,
bbox,
model,
posCond,
negCond,
posCondCollect,
negCondCollect,
ipAdapterCollect,
}: AddRegionsArg): Promise<AddedRegionResult[]> => {
const isSDXL = model.base === 'sdxl';
const isFLUX = model.base === 'flux';
const validRegions = regions.filter((rg) => {
if (!rg.isEnabled) {
return false;
}
return getRegionalGuidanceWarnings(rg, model).length === 0;
});
const validRegions = regions.filter((rg) => isValidRegion(rg, base));
const results: AddedRegionResult[] = [];
for (const region of validRegions) {
@@ -94,20 +104,27 @@ export const addRegions = async (
if (region.positivePrompt) {
// The main positive conditioning node
result.addedPositivePrompt = true;
const regionalPosCond = g.addNode(
isSDXL
? {
type: 'sdxl_compel_prompt',
id: getPrefixedId('prompt_region_positive_cond'),
prompt: region.positivePrompt,
style: region.positivePrompt, // TODO: Should we put the positive prompt in both fields?
}
: {
type: 'compel',
id: getPrefixedId('prompt_region_positive_cond'),
prompt: region.positivePrompt,
}
);
let regionalPosCond: Invocation<'compel' | 'sdxl_compel_prompt' | 'flux_text_encoder'>;
if (isSDXL) {
regionalPosCond = g.addNode({
type: 'sdxl_compel_prompt',
id: getPrefixedId('prompt_region_positive_cond'),
prompt: region.positivePrompt,
style: region.positivePrompt, // TODO: Should we put the positive prompt in both fields?
});
} else if (isFLUX) {
regionalPosCond = g.addNode({
type: 'flux_text_encoder',
id: getPrefixedId('prompt_region_positive_cond'),
prompt: region.positivePrompt,
});
} else {
regionalPosCond = g.addNode({
type: 'compel',
id: getPrefixedId('prompt_region_positive_cond'),
prompt: region.positivePrompt,
});
}
// Connect the mask to the conditioning
g.addEdge(maskToTensor, 'mask', regionalPosCond, 'mask');
// Connect the conditioning to the collector
@@ -115,38 +132,55 @@ export const addRegions = async (
// Copy the connections to the "global" positive conditioning node to the regional cond
if (posCond.type === 'compel') {
for (const edge of g.getEdgesTo(posCond, ['clip', 'mask'])) {
// Clone the edge, but change the destination node to the regional conditioning node
const clone = deepClone(edge);
clone.destination.node_id = regionalPosCond.id;
g.addEdgeFromObj(clone);
}
} else if (posCond.type === 'sdxl_compel_prompt') {
for (const edge of g.getEdgesTo(posCond, ['clip', 'clip2', 'mask'])) {
const clone = deepClone(edge);
clone.destination.node_id = regionalPosCond.id;
g.addEdgeFromObj(clone);
}
} else if (posCond.type === 'flux_text_encoder') {
for (const edge of g.getEdgesTo(posCond, ['clip', 't5_encoder', 't5_max_seq_len', 'mask'])) {
const clone = deepClone(edge);
clone.destination.node_id = regionalPosCond.id;
g.addEdgeFromObj(clone);
}
} else {
for (const edge of g.getEdgesTo(posCond, ['clip', 'clip2', 'mask'])) {
// Clone the edge, but change the destination node to the regional conditioning node
const clone = deepClone(edge);
clone.destination.node_id = regionalPosCond.id;
g.addEdgeFromObj(clone);
}
assert(false, 'Unsupported positive conditioning node type.');
}
}
if (region.negativePrompt) {
result.addedNegativePrompt = true;
assert(negCond, 'Negative conditioning node is required if there is a negative prompt');
assert(negCondCollect, 'Negative conditioning collector is required if there is a negative prompt');
// The main negative conditioning node
const regionalNegCond = g.addNode(
isSDXL
? {
type: 'sdxl_compel_prompt',
id: getPrefixedId('prompt_region_negative_cond'),
prompt: region.negativePrompt,
style: region.negativePrompt,
}
: {
type: 'compel',
id: getPrefixedId('prompt_region_negative_cond'),
prompt: region.negativePrompt,
}
);
result.addedNegativePrompt = true;
let regionalNegCond: Invocation<'compel' | 'sdxl_compel_prompt' | 'flux_text_encoder'>;
if (isSDXL) {
regionalNegCond = g.addNode({
type: 'sdxl_compel_prompt',
id: getPrefixedId('prompt_region_negative_cond'),
prompt: region.negativePrompt,
style: region.negativePrompt,
});
} else if (isFLUX) {
regionalNegCond = g.addNode({
type: 'flux_text_encoder',
id: getPrefixedId('prompt_region_negative_cond'),
prompt: region.negativePrompt,
});
} else {
regionalNegCond = g.addNode({
type: 'compel',
id: getPrefixedId('prompt_region_negative_cond'),
prompt: region.negativePrompt,
});
}
// Connect the mask to the conditioning
g.addEdge(maskToTensor, 'mask', regionalNegCond, 'mask');
// Connect the conditioning to the collector
@@ -158,17 +192,27 @@ export const addRegions = async (
clone.destination.node_id = regionalNegCond.id;
g.addEdgeFromObj(clone);
}
} else {
} else if (negCond.type === 'sdxl_compel_prompt') {
for (const edge of g.getEdgesTo(negCond, ['clip', 'clip2', 'mask'])) {
const clone = deepClone(edge);
clone.destination.node_id = regionalNegCond.id;
g.addEdgeFromObj(clone);
}
} else if (negCond.type === 'flux_text_encoder') {
for (const edge of g.getEdgesTo(negCond, ['clip', 't5_encoder', 't5_max_seq_len', 'mask'])) {
const clone = deepClone(edge);
clone.destination.node_id = regionalNegCond.id;
g.addEdgeFromObj(clone);
}
} else {
assert(false, 'Unsupported negative conditioning node type.');
}
}
// If we are using the "invert" auto-negative setting, we need to add an additional negative conditioning node
if (region.autoNegative && region.positivePrompt) {
assert(negCondCollect, 'Negative conditioning collector is required if there is an auto-negative setting');
result.addedAutoNegativePositivePrompt = true;
// We re-use the mask image, but invert it when converting to tensor
const invertTensorMask = g.addNode({
@@ -178,20 +222,27 @@ export const addRegions = async (
// Connect the OG mask image to the inverted mask-to-tensor node
g.addEdge(maskToTensor, 'mask', invertTensorMask, 'mask');
// Create the conditioning node. It's going to be connected to the negative cond collector, but it uses the positive prompt
const regionalPosCondInverted = g.addNode(
isSDXL
? {
type: 'sdxl_compel_prompt',
id: getPrefixedId('prompt_region_positive_cond_inverted'),
prompt: region.positivePrompt,
style: region.positivePrompt,
}
: {
type: 'compel',
id: getPrefixedId('prompt_region_positive_cond_inverted'),
prompt: region.positivePrompt,
}
);
let regionalPosCondInverted: Invocation<'compel' | 'sdxl_compel_prompt' | 'flux_text_encoder'>;
if (isSDXL) {
regionalPosCondInverted = g.addNode({
type: 'sdxl_compel_prompt',
id: getPrefixedId('prompt_region_positive_cond_inverted'),
prompt: region.positivePrompt,
style: region.positivePrompt,
});
} else if (isFLUX) {
regionalPosCondInverted = g.addNode({
type: 'flux_text_encoder',
id: getPrefixedId('prompt_region_positive_cond_inverted'),
prompt: region.positivePrompt,
});
} else {
regionalPosCondInverted = g.addNode({
type: 'compel',
id: getPrefixedId('prompt_region_positive_cond_inverted'),
prompt: region.positivePrompt,
});
}
// Connect the inverted mask to the conditioning
g.addEdge(invertTensorMask, 'mask', regionalPosCondInverted, 'mask');
// Connect the conditioning to the negative collector
@@ -203,20 +254,26 @@ export const addRegions = async (
clone.destination.node_id = regionalPosCondInverted.id;
g.addEdgeFromObj(clone);
}
} else {
} else if (posCond.type === 'sdxl_compel_prompt') {
for (const edge of g.getEdgesTo(posCond, ['clip', 'clip2', 'mask'])) {
const clone = deepClone(edge);
clone.destination.node_id = regionalPosCondInverted.id;
g.addEdgeFromObj(clone);
}
} else if (posCond.type === 'flux_text_encoder') {
for (const edge of g.getEdgesTo(posCond, ['clip', 't5_encoder', 't5_max_seq_len', 'mask'])) {
const clone = deepClone(edge);
clone.destination.node_id = regionalPosCondInverted.id;
g.addEdgeFromObj(clone);
}
} else {
assert(false, 'Unsupported positive conditioning node type.');
}
}
const validRGIPAdapters: RegionalGuidanceReferenceImageState[] = region.referenceImages.filter(({ ipAdapter }) =>
isValidIPAdapter(ipAdapter, base)
);
for (const { id, ipAdapter } of region.referenceImages) {
assert(!isFLUX, 'Regional IP adapters are not supported for FLUX.');
for (const { id, ipAdapter } of validRGIPAdapters) {
result.addedIPAdapters++;
const { weight, model, clipVisionModel, method, beginEndStepPct, image } = ipAdapter;
assert(model, 'IP Adapter model is required');
@@ -248,11 +305,3 @@ export const addRegions = async (
return results;
};
const isValidIPAdapter = (ipAdapter: IPAdapterConfig, base: BaseModelType): boolean => {
// Must have a model that matches the current base and must have a control image
const hasModel = Boolean(ipAdapter.model);
const modelMatchesBase = ipAdapter.model?.base === base;
const hasImage = Boolean(ipAdapter.image);
return hasModel && modelMatchesBase && hasImage;
};
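
The same per-base branching now appears three times in this file (regional positive, negative, and auto-negative conditioning). A condensed sketch of that pattern, not code from the source, showing how the conditioning node type is chosen for SDXL, FLUX, and SD1/SD2; the mask-to-tensor wiring and edge copying that follow each branch in the diff are unchanged by the node type.

```ts
import { getPrefixedId } from 'features/controlLayers/konva/util';
import type { Graph } from 'features/nodes/util/graph/generation/Graph';
import type { Invocation } from 'services/api/types';

type RegionalCondNode = Invocation<'compel' | 'sdxl_compel_prompt' | 'flux_text_encoder'>;

// Illustrative helper condensing the branching used above for regional conditioning nodes.
export const addRegionalCondNode = (g: Graph, base: string, prompt: string, idPrefix: string): RegionalCondNode => {
  if (base === 'sdxl') {
    // SDXL puts the same text in both the prompt and style fields, as in the diff above.
    return g.addNode({ type: 'sdxl_compel_prompt', id: getPrefixedId(idPrefix), prompt, style: prompt });
  }
  if (base === 'flux') {
    return g.addNode({ type: 'flux_text_encoder', id: getPrefixedId(idPrefix), prompt });
  }
  // SD1/SD2 fall back to the plain compel node.
  return g.addNode({ type: 'compel', id: getPrefixedId(idPrefix), prompt });
};
```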

View File

@@ -11,6 +11,7 @@ import { addImageToImage } from 'features/nodes/util/graph/generation/addImageTo
import { addInpaint } from 'features/nodes/util/graph/generation/addInpaint';
import { addNSFWChecker } from 'features/nodes/util/graph/generation/addNSFWChecker';
import { addOutpaint } from 'features/nodes/util/graph/generation/addOutpaint';
import { addRegions } from 'features/nodes/util/graph/generation/addRegions';
import { addTextToImage } from 'features/nodes/util/graph/generation/addTextToImage';
import { addWatermarker } from 'features/nodes/util/graph/generation/addWatermarker';
import { Graph } from 'features/nodes/util/graph/generation/Graph';
@@ -79,7 +80,10 @@ export const buildFLUXGraph = async (
id: getPrefixedId('flux_text_encoder'),
prompt: positivePrompt,
});
const posCondCollect = g.addNode({
type: 'collect',
id: getPrefixedId('pos_cond_collect'),
});
const denoise = g.addNode({
type: 'flux_denoise',
id: getPrefixedId('flux_denoise'),
@@ -104,13 +108,12 @@ export const buildFLUXGraph = async (
g.addEdge(modelLoader, 'clip', posCond, 'clip');
g.addEdge(modelLoader, 't5_encoder', posCond, 't5_encoder');
g.addEdge(modelLoader, 'max_seq_len', posCond, 't5_max_seq_len');
g.addEdge(posCond, 'conditioning', posCondCollect, 'item');
g.addEdge(posCondCollect, 'collection', denoise, 'positive_text_conditioning');
g.addEdge(denoise, 'latents', l2i, 'latents');
addFLUXLoRAs(state, g, denoise, modelLoader, posCond);
g.addEdge(posCond, 'conditioning', denoise, 'positive_text_conditioning');
g.addEdge(denoise, 'latents', l2i, 'latents');
const modelConfig = await fetchModelConfigWithTypeGuard(model.key, isNonRefinerMainModelConfig);
assert(modelConfig.base === 'flux');
@@ -196,31 +199,50 @@ export const buildFLUXGraph = async (
type: 'collect',
id: getPrefixedId('control_net_collector'),
});
const controlNetResult = await addControlNets(
const controlNetResult = await addControlNets({
manager,
canvas.controlLayers.entities,
entities: canvas.controlLayers.entities,
g,
canvas.bbox.rect,
controlNetCollector,
modelConfig.base
);
rect: canvas.bbox.rect,
collector: controlNetCollector,
model: modelConfig,
});
if (controlNetResult.addedControlNets > 0) {
g.addEdge(controlNetCollector, 'collection', denoise, 'control');
} else {
g.deleteNode(controlNetCollector.id);
}
const ipAdapterCollector = g.addNode({
const ipAdapterCollect = g.addNode({
type: 'collect',
id: getPrefixedId('ip_adapter_collector'),
});
const ipAdapterResult = addIPAdapters(canvas.referenceImages.entities, g, ipAdapterCollector, modelConfig.base);
const ipAdapterResult = addIPAdapters({
entities: canvas.referenceImages.entities,
g,
collector: ipAdapterCollect,
model: modelConfig,
});
const totalIPAdaptersAdded = ipAdapterResult.addedIPAdapters;
const regionsResult = await addRegions({
manager,
regions: canvas.regionalGuidance.entities,
g,
bbox: canvas.bbox.rect,
model: modelConfig,
posCond,
negCond: null,
posCondCollect,
negCondCollect: null,
ipAdapterCollect,
});
const totalIPAdaptersAdded =
ipAdapterResult.addedIPAdapters + regionsResult.reduce((acc, r) => acc + r.addedIPAdapters, 0);
if (totalIPAdaptersAdded > 0) {
g.addEdge(ipAdapterCollector, 'collection', denoise, 'ip_adapter');
g.addEdge(ipAdapterCollect, 'collection', denoise, 'ip_adapter');
} else {
g.deleteNode(ipAdapterCollector.id);
g.deleteNode(ipAdapterCollect.id);
}
if (state.system.shouldUseNSFWChecker) {

View File

@@ -227,14 +227,14 @@ export const buildSD1Graph = async (
type: 'collect',
id: getPrefixedId('control_net_collector'),
});
const controlNetResult = await addControlNets(
const controlNetResult = await addControlNets({
manager,
canvas.controlLayers.entities,
entities: canvas.controlLayers.entities,
g,
canvas.bbox.rect,
controlNetCollector,
modelConfig.base
);
rect: canvas.bbox.rect,
collector: controlNetCollector,
model: modelConfig,
});
if (controlNetResult.addedControlNets > 0) {
g.addEdge(controlNetCollector, 'collection', denoise, 'control');
} else {
@@ -245,46 +245,50 @@ export const buildSD1Graph = async (
type: 'collect',
id: getPrefixedId('t2i_adapter_collector'),
});
const t2iAdapterResult = await addT2IAdapters(
const t2iAdapterResult = await addT2IAdapters({
manager,
canvas.controlLayers.entities,
entities: canvas.controlLayers.entities,
g,
canvas.bbox.rect,
t2iAdapterCollector,
modelConfig.base
);
rect: canvas.bbox.rect,
collector: t2iAdapterCollector,
model: modelConfig,
});
if (t2iAdapterResult.addedT2IAdapters > 0) {
g.addEdge(t2iAdapterCollector, 'collection', denoise, 't2i_adapter');
} else {
g.deleteNode(t2iAdapterCollector.id);
}
const ipAdapterCollector = g.addNode({
const ipAdapterCollect = g.addNode({
type: 'collect',
id: getPrefixedId('ip_adapter_collector'),
});
const ipAdapterResult = addIPAdapters(canvas.referenceImages.entities, g, ipAdapterCollector, modelConfig.base);
const regionsResult = await addRegions(
manager,
canvas.regionalGuidance.entities,
const ipAdapterResult = addIPAdapters({
entities: canvas.referenceImages.entities,
g,
canvas.bbox.rect,
modelConfig.base,
denoise,
collector: ipAdapterCollect,
model: modelConfig,
});
const regionsResult = await addRegions({
manager,
regions: canvas.regionalGuidance.entities,
g,
bbox: canvas.bbox.rect,
model: modelConfig,
posCond,
negCond,
posCondCollect,
negCondCollect,
ipAdapterCollector
);
ipAdapterCollect,
});
const totalIPAdaptersAdded =
ipAdapterResult.addedIPAdapters + regionsResult.reduce((acc, r) => acc + r.addedIPAdapters, 0);
if (totalIPAdaptersAdded > 0) {
g.addEdge(ipAdapterCollector, 'collection', denoise, 'ip_adapter');
g.addEdge(ipAdapterCollect, 'collection', denoise, 'ip_adapter');
} else {
g.deleteNode(ipAdapterCollector.id);
g.deleteNode(ipAdapterCollect.id);
}
if (state.system.shouldUseNSFWChecker) {

View File

@@ -232,14 +232,14 @@ export const buildSDXLGraph = async (
type: 'collect',
id: getPrefixedId('control_net_collector'),
});
const controlNetResult = await addControlNets(
const controlNetResult = await addControlNets({
manager,
canvas.controlLayers.entities,
entities: canvas.controlLayers.entities,
g,
canvas.bbox.rect,
controlNetCollector,
modelConfig.base
);
rect: canvas.bbox.rect,
collector: controlNetCollector,
model: modelConfig,
});
if (controlNetResult.addedControlNets > 0) {
g.addEdge(controlNetCollector, 'collection', denoise, 'control');
} else {
@@ -250,46 +250,50 @@ export const buildSDXLGraph = async (
type: 'collect',
id: getPrefixedId('t2i_adapter_collector'),
});
const t2iAdapterResult = await addT2IAdapters(
const t2iAdapterResult = await addT2IAdapters({
manager,
canvas.controlLayers.entities,
entities: canvas.controlLayers.entities,
g,
canvas.bbox.rect,
t2iAdapterCollector,
modelConfig.base
);
rect: canvas.bbox.rect,
collector: t2iAdapterCollector,
model: modelConfig,
});
if (t2iAdapterResult.addedT2IAdapters > 0) {
g.addEdge(t2iAdapterCollector, 'collection', denoise, 't2i_adapter');
} else {
g.deleteNode(t2iAdapterCollector.id);
}
const ipAdapterCollector = g.addNode({
const ipAdapterCollect = g.addNode({
type: 'collect',
id: getPrefixedId('ip_adapter_collector'),
});
const ipAdapterResult = addIPAdapters(canvas.referenceImages.entities, g, ipAdapterCollector, modelConfig.base);
const regionsResult = await addRegions(
manager,
canvas.regionalGuidance.entities,
const ipAdapterResult = addIPAdapters({
entities: canvas.referenceImages.entities,
g,
canvas.bbox.rect,
modelConfig.base,
denoise,
collector: ipAdapterCollect,
model: modelConfig,
});
const regionsResult = await addRegions({
manager,
regions: canvas.regionalGuidance.entities,
g,
bbox: canvas.bbox.rect,
model: modelConfig,
posCond,
negCond,
posCondCollect,
negCondCollect,
ipAdapterCollector
);
ipAdapterCollect,
});
const totalIPAdaptersAdded =
ipAdapterResult.addedIPAdapters + regionsResult.reduce((acc, r) => acc + r.addedIPAdapters, 0);
if (totalIPAdaptersAdded > 0) {
g.addEdge(ipAdapterCollector, 'collection', denoise, 'ip_adapter');
g.addEdge(ipAdapterCollect, 'collection', denoise, 'ip_adapter');
} else {
g.deleteNode(ipAdapterCollector.id);
g.deleteNode(ipAdapterCollect.id);
}
if (state.system.shouldUseNSFWChecker) {

View File

@@ -58,6 +58,9 @@ const isAllowedOutputField = (nodeType: string, fieldName: string) => {
if (RESERVED_OUTPUT_FIELD_NAMES.includes(fieldName)) {
return false;
}
if (nodeType === 'image_batch' && fieldName !== 'image') {
return false;
}
return true;
};

View File

@@ -4,13 +4,13 @@ import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiTrashSimpleFill } from 'react-icons/pi';
import { useClearQueue } from './ClearQueueConfirmationAlertDialog';
import { useClearQueueDialog } from './ClearQueueConfirmationAlertDialog';
type Props = ButtonProps;
const ClearQueueButton = (props: Props) => {
const { t } = useTranslation();
const clearQueue = useClearQueue();
const clearQueue = useClearQueueDialog();
return (
<>

View File

@@ -1,51 +1,15 @@
import { ConfirmationAlertDialog, Text } from '@invoke-ai/ui-library';
import { useStore } from '@nanostores/react';
import { useAppDispatch } from 'app/store/storeHooks';
import { useAssertSingleton } from 'common/hooks/useAssertSingleton';
import { buildUseBoolean } from 'common/hooks/useBoolean';
import { listCursorChanged, listPriorityChanged } from 'features/queue/store/queueSlice';
import { toast } from 'features/toast/toast';
import { memo, useCallback, useMemo } from 'react';
import { useClearQueue } from 'features/queue/hooks/useClearQueue';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { useClearQueueMutation, useGetQueueStatusQuery } from 'services/api/endpoints/queue';
import { $isConnected } from 'services/events/stores';
const [useClearQueueConfirmationAlertDialog] = buildUseBoolean(false);
export const useClearQueue = () => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
export const useClearQueueDialog = () => {
const dialog = useClearQueueConfirmationAlertDialog();
const { data: queueStatus } = useGetQueueStatusQuery();
const isConnected = useStore($isConnected);
const [trigger, { isLoading }] = useClearQueueMutation({
fixedCacheKey: 'clearQueue',
});
const clearQueue = useCallback(async () => {
if (!queueStatus?.queue.total) {
return;
}
try {
await trigger().unwrap();
toast({
id: 'QUEUE_CLEAR_SUCCEEDED',
title: t('queue.clearSucceeded'),
status: 'success',
});
dispatch(listCursorChanged(undefined));
dispatch(listPriorityChanged(undefined));
} catch {
toast({
id: 'QUEUE_CLEAR_FAILED',
title: t('queue.clearFailed'),
status: 'error',
});
}
}, [queueStatus?.queue.total, trigger, dispatch, t]);
const isDisabled = useMemo(() => !isConnected || !queueStatus?.queue.total, [isConnected, queueStatus?.queue.total]);
const { clearQueue, isLoading, isDisabled, queueStatus } = useClearQueue();
return {
clearQueue,
@@ -61,7 +25,7 @@ export const useClearQueue = () => {
export const ClearQueueConfirmationsAlertDialog = memo(() => {
useAssertSingleton('ClearQueueConfirmationsAlertDialog');
const { t } = useTranslation();
const clearQueue = useClearQueue();
const clearQueue = useClearQueueDialog();
return (
<ConfirmationAlertDialog
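
The extracted hook now lives in 'features/queue/hooks/useClearQueue' (imported above), but that new file is not included in this diff. A plausible reconstruction, based only on the logic removed from this file and on the values destructured from the hook above:

```ts
import { useStore } from '@nanostores/react';
import { useAppDispatch } from 'app/store/storeHooks';
import { listCursorChanged, listPriorityChanged } from 'features/queue/store/queueSlice';
import { toast } from 'features/toast/toast';
import { useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { useClearQueueMutation, useGetQueueStatusQuery } from 'services/api/endpoints/queue';
import { $isConnected } from 'services/events/stores';

// Reconstruction sketch -- the real file is not shown in this diff.
export const useClearQueue = () => {
  const { t } = useTranslation();
  const dispatch = useAppDispatch();
  const { data: queueStatus } = useGetQueueStatusQuery();
  const isConnected = useStore($isConnected);
  const [trigger, { isLoading }] = useClearQueueMutation({ fixedCacheKey: 'clearQueue' });

  const clearQueue = useCallback(async () => {
    if (!queueStatus?.queue.total) {
      return;
    }
    try {
      await trigger().unwrap();
      toast({ id: 'QUEUE_CLEAR_SUCCEEDED', title: t('queue.clearSucceeded'), status: 'success' });
      // Reset list pagination so the cleared queue renders from the top.
      dispatch(listCursorChanged(undefined));
      dispatch(listPriorityChanged(undefined));
    } catch {
      toast({ id: 'QUEUE_CLEAR_FAILED', title: t('queue.clearFailed'), status: 'error' });
    }
  }, [queueStatus?.queue.total, trigger, dispatch, t]);

  const isDisabled = useMemo(() => !isConnected || !queueStatus?.queue.total, [isConnected, queueStatus?.queue.total]);

  return { clearQueue, isLoading, isDisabled, queueStatus };
};
```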

View File

@@ -4,11 +4,11 @@ import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiTrashSimpleBold, PiXBold } from 'react-icons/pi';
import { useClearQueue } from './ClearQueueConfirmationAlertDialog';
import { useClearQueueDialog } from './ClearQueueConfirmationAlertDialog';
export const ClearQueueIconButton = memo((_) => {
const { t } = useTranslation();
const clearQueue = useClearQueue();
const clearQueue = useClearQueueDialog();
const cancelCurrentQueueItem = useCancelCurrentQueueItem();
// Show the single item clear button when shift is pressed

View File

@@ -0,0 +1,306 @@
import type { TooltipProps } from '@invoke-ai/ui-library';
import { Divider, Flex, ListItem, Text, Tooltip, UnorderedList } from '@invoke-ai/ui-library';
import { useStore } from '@nanostores/react';
import { $true } from 'app/store/nanostores/util';
import { useAppSelector } from 'app/store/storeHooks';
import { useCanvasManagerSafe } from 'features/controlLayers/contexts/CanvasManagerProviderGate';
import { selectSendToCanvas } from 'features/controlLayers/store/canvasSettingsSlice';
import { selectIterations } from 'features/controlLayers/store/paramsSlice';
import { selectDynamicPromptsIsLoading } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { selectAutoAddBoardId } from 'features/gallery/store/gallerySelectors';
import { $templates } from 'features/nodes/store/nodesSlice';
import type { Reason } from 'features/queue/store/readiness';
import {
buildSelectIsReadyToEnqueueCanvasTab,
buildSelectIsReadyToEnqueueUpscaleTab,
buildSelectIsReadyToEnqueueWorkflowsTab,
buildSelectReasonsWhyCannotEnqueueCanvasTab,
buildSelectReasonsWhyCannotEnqueueUpscaleTab,
buildSelectReasonsWhyCannotEnqueueWorkflowsTab,
selectPromptsCount,
selectWorkflowsBatchSize,
} from 'features/queue/store/readiness';
import { selectActiveTab } from 'features/ui/store/uiSelectors';
import type { PropsWithChildren } from 'react';
import { memo, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { enqueueMutationFixedCacheKeyOptions, useEnqueueBatchMutation } from 'services/api/endpoints/queue';
import { useBoardName } from 'services/api/hooks/useBoardName';
import { $isConnected } from 'services/events/stores';
type Props = TooltipProps & {
prepend?: boolean;
};
export const InvokeButtonTooltip = ({ prepend, children, ...rest }: PropsWithChildren<Props>) => {
return (
<Tooltip label={<TooltipContent prepend={prepend} />} maxW={512} {...rest}>
{children}
</Tooltip>
);
};
const TooltipContent = memo(({ prepend = false }: { prepend?: boolean }) => {
const activeTab = useAppSelector(selectActiveTab);
if (activeTab === 'canvas') {
return <CanvasTabTooltipContent prepend={prepend} />;
}
if (activeTab === 'workflows') {
return <WorkflowsTabTooltipContent prepend={prepend} />;
}
if (activeTab === 'upscaling') {
return <UpscaleTabTooltipContent prepend={prepend} />;
}
return null;
});
TooltipContent.displayName = 'TooltipContent';
const CanvasTabTooltipContent = memo(({ prepend = false }: { prepend?: boolean }) => {
const isConnected = useStore($isConnected);
const canvasManager = useCanvasManagerSafe();
const canvasIsFiltering = useStore(canvasManager?.stateApi.$isFiltering ?? $true);
const canvasIsTransforming = useStore(canvasManager?.stateApi.$isTransforming ?? $true);
const canvasIsRasterizing = useStore(canvasManager?.stateApi.$isRasterizing ?? $true);
const canvasIsSelectingObject = useStore(canvasManager?.stateApi.$isSegmenting ?? $true);
const canvasIsCompositing = useStore(canvasManager?.compositor.$isBusy ?? $true);
const selectIsReady = useMemo(
() =>
buildSelectIsReadyToEnqueueCanvasTab({
isConnected,
canvasIsFiltering,
canvasIsTransforming,
canvasIsRasterizing,
canvasIsSelectingObject,
canvasIsCompositing,
}),
[
isConnected,
canvasIsCompositing,
canvasIsFiltering,
canvasIsRasterizing,
canvasIsSelectingObject,
canvasIsTransforming,
]
);
const selectReasons = useMemo(
() =>
buildSelectReasonsWhyCannotEnqueueCanvasTab({
isConnected,
canvasIsFiltering,
canvasIsTransforming,
canvasIsRasterizing,
canvasIsSelectingObject,
canvasIsCompositing,
}),
[
isConnected,
canvasIsCompositing,
canvasIsFiltering,
canvasIsRasterizing,
canvasIsSelectingObject,
canvasIsTransforming,
]
);
const isReady = useAppSelector(selectIsReady);
const reasons = useAppSelector(selectReasons);
return (
<Flex flexDir="column" gap={1}>
<IsReadyText isReady={isReady} prepend={prepend} />
<QueueCountPredictionCanvasOrUpscaleTab />
{reasons.length > 0 && (
<>
<StyledDivider />
<ReasonsList reasons={reasons} />
</>
)}
<StyledDivider />
<AddingToText />
</Flex>
);
});
CanvasTabTooltipContent.displayName = 'CanvasTabTooltipContent';
const UpscaleTabTooltipContent = memo(({ prepend = false }: { prepend?: boolean }) => {
const isConnected = useStore($isConnected);
const selectIsReady = useMemo(() => buildSelectIsReadyToEnqueueUpscaleTab({ isConnected }), [isConnected]);
const selectReasons = useMemo(() => buildSelectReasonsWhyCannotEnqueueUpscaleTab({ isConnected }), [isConnected]);
const isReady = useAppSelector(selectIsReady);
const reasons = useAppSelector(selectReasons);
return (
<Flex flexDir="column" gap={1}>
<IsReadyText isReady={isReady} prepend={prepend} />
<QueueCountPredictionCanvasOrUpscaleTab />
{reasons.length > 0 && (
<>
<StyledDivider />
<ReasonsList reasons={reasons} />
</>
)}
</Flex>
);
});
UpscaleTabTooltipContent.displayName = 'UpscaleTabTooltipContent';
const WorkflowsTabTooltipContent = memo(({ prepend = false }: { prepend?: boolean }) => {
const isConnected = useStore($isConnected);
const templates = useStore($templates);
const selectIsReady = useMemo(
() => buildSelectIsReadyToEnqueueWorkflowsTab({ isConnected, templates }),
[isConnected, templates]
);
const selectReasons = useMemo(
() => buildSelectReasonsWhyCannotEnqueueWorkflowsTab({ isConnected, templates }),
[isConnected, templates]
);
const isReady = useAppSelector(selectIsReady);
const reasons = useAppSelector(selectReasons);
return (
<Flex flexDir="column" gap={1}>
<IsReadyText isReady={isReady} prepend={prepend} />
<QueueCountPredictionWorkflowsTab />
{reasons.length > 0 && (
<>
<StyledDivider />
<ReasonsList reasons={reasons} />
</>
)}
</Flex>
);
});
WorkflowsTabTooltipContent.displayName = 'WorkflowsTabTooltipContent';
const QueueCountPredictionCanvasOrUpscaleTab = memo(() => {
const { t } = useTranslation();
const promptsCount = useAppSelector(selectPromptsCount);
const iterationsCount = useAppSelector(selectIterations);
const text = useMemo(() => {
const generationCount = Math.min(promptsCount * iterationsCount, 10000);
const prompts = t('queue.prompts', { count: promptsCount });
const iterations = t('queue.iterations', { count: iterationsCount });
const generations = t('queue.generations', { count: generationCount });
return `${promptsCount} ${prompts} \u00d7 ${iterationsCount} ${iterations} -> ${generationCount} ${generations}`.toLowerCase();
}, [iterationsCount, promptsCount, t]);
return <Text>{text}</Text>;
});
QueueCountPredictionCanvasOrUpscaleTab.displayName = 'QueueCountPredictionCanvasOrUpscaleTab';
const QueueCountPredictionWorkflowsTab = memo(() => {
const { t } = useTranslation();
const batchSize = useAppSelector(selectWorkflowsBatchSize);
const iterationsCount = useAppSelector(selectIterations);
const text = useMemo(() => {
const generationCount = Math.min(batchSize * iterationsCount, 10000);
const iterations = t('queue.iterations', { count: iterationsCount });
const generations = t('queue.generations', { count: generationCount });
return `${batchSize} ${t('queue.batchSize')} \u00d7 ${iterationsCount} ${iterations} -> ${generationCount} ${generations}`.toLowerCase();
}, [batchSize, iterationsCount, t]);
return <Text>{text}</Text>;
});
QueueCountPredictionWorkflowsTab.displayName = 'QueueCountPredictionWorkflowsTab';
const IsReadyText = memo(({ isReady, prepend }: { isReady: boolean; prepend: boolean }) => {
const { t } = useTranslation();
const isLoadingDynamicPrompts = useAppSelector(selectDynamicPromptsIsLoading);
const [_, enqueueMutation] = useEnqueueBatchMutation(enqueueMutationFixedCacheKeyOptions);
const text = useMemo(() => {
if (enqueueMutation.isLoading) {
return t('queue.enqueueing');
}
if (isLoadingDynamicPrompts) {
return t('dynamicPrompts.loading');
}
if (isReady) {
if (prepend) {
return t('queue.queueFront');
}
return t('queue.queueBack');
}
return t('queue.notReady');
}, [enqueueMutation.isLoading, isLoadingDynamicPrompts, isReady, prepend, t]);
return <Text fontWeight="semibold">{text}</Text>;
});
IsReadyText.displayName = 'IsReadyText';
const ReasonsList = memo(({ reasons }: { reasons: Reason[] }) => {
return (
<UnorderedList>
{reasons.map((reason, i) => (
<ReasonListItem key={`${reason.content}.${i}`} reason={reason} />
))}
</UnorderedList>
);
});
ReasonsList.displayName = 'ReasonsList';
const ReasonListItem = memo(({ reason }: { reason: Reason }) => {
return (
<ListItem>
<span>
{reason.prefix && (
<Text as="span" fontWeight="semibold">
{reason.prefix}:{' '}
</Text>
)}
<Text as="span">{reason.content}</Text>
</span>
</ListItem>
);
});
ReasonListItem.displayName = 'ReasonListItem';
const StyledDivider = memo(() => <Divider opacity={0.2} borderColor="base.900" />);
StyledDivider.displayName = 'StyledDivider';
const AddingToText = memo(() => {
const { t } = useTranslation();
const sendToCanvas = useAppSelector(selectSendToCanvas);
const autoAddBoardId = useAppSelector(selectAutoAddBoardId);
const autoAddBoardName = useBoardName(autoAddBoardId);
const addingTo = useMemo(() => {
if (sendToCanvas) {
return t('controlLayers.stagingOnCanvas');
}
return t('parameters.invoke.addingImagesTo');
}, [sendToCanvas, t]);
const destination = useMemo(() => {
if (sendToCanvas) {
return t('queue.canvas');
}
if (autoAddBoardName) {
return autoAddBoardName;
}
return t('boards.uncategorized');
}, [autoAddBoardName, sendToCanvas, t]);
return (
<Text fontStyle="oblique 10deg">
{addingTo}{' '}
<Text as="span" fontWeight="semibold">
{destination}
</Text>
</Text>
);
});
AddingToText.displayName = 'AddingToText';

View File

@@ -6,7 +6,7 @@ import { useInvoke } from 'features/queue/hooks/useInvoke';
import { memo } from 'react';
import { PiLightningFill, PiSparkleFill } from 'react-icons/pi';
import { QueueButtonTooltip } from './QueueButtonTooltip';
import { InvokeButtonTooltip } from './InvokeButtonTooltip/InvokeButtonTooltip';
const invoke = 'Invoke';
@@ -18,7 +18,7 @@ export const InvokeButton = memo(() => {
return (
<Flex pos="relative" w="200px">
<QueueIterationsNumberInput />
<QueueButtonTooltip prepend={shift}>
<InvokeButtonTooltip prepend={shift}>
<Button
onClick={shift ? queue.queueFront : queue.queueBack}
isLoading={queue.isLoading || isLoadingDynamicPrompts}
@@ -36,7 +36,7 @@ export const InvokeButton = memo(() => {
<span>{invoke}</span>
<Spacer />
</Button>
</QueueButtonTooltip>
</InvokeButtonTooltip>
</Flex>
);
});

View File

@@ -1,27 +1,16 @@
import { IconButton, Menu, MenuButton, MenuGroup, MenuItem, MenuList } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import {
useNewCanvasSession,
useNewGallerySession,
} from 'features/controlLayers/components/NewSessionConfirmationAlertDialog';
import { useClearQueue } from 'features/queue/components/ClearQueueConfirmationAlertDialog';
import { SessionMenuItems } from 'common/components/SessionMenuItems';
import { useClearQueueDialog } from 'features/queue/components/ClearQueueConfirmationAlertDialog';
import { QueueCountBadge } from 'features/queue/components/QueueCountBadge';
import { useCancelCurrentQueueItem } from 'features/queue/hooks/useCancelCurrentQueueItem';
import { usePauseProcessor } from 'features/queue/hooks/usePauseProcessor';
import { useResumeProcessor } from 'features/queue/hooks/useResumeProcessor';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import { setActiveTab } from 'features/ui/store/uiSlice';
import { memo, useCallback, useRef } from 'react';
import { useTranslation } from 'react-i18next';
import {
PiImageBold,
PiListBold,
PiPaintBrushBold,
PiPauseFill,
PiPlayFill,
PiQueueBold,
PiTrashSimpleBold,
PiXBold,
} from 'react-icons/pi';
import { PiListBold, PiPauseFill, PiPlayFill, PiQueueBold, PiTrashSimpleBold, PiXBold } from 'react-icons/pi';
export const QueueActionsMenuButton = memo(() => {
const ref = useRef<HTMLDivElement>(null);
@@ -29,9 +18,8 @@ export const QueueActionsMenuButton = memo(() => {
const { t } = useTranslation();
const isPauseEnabled = useFeatureStatus('pauseQueue');
const isResumeEnabled = useFeatureStatus('resumeQueue');
const { newGallerySessionWithDialog } = useNewGallerySession();
const { newCanvasSessionWithDialog } = useNewCanvasSession();
const clearQueue = useClearQueue();
const cancelCurrent = useCancelCurrentQueueItem();
const clearQueue = useClearQueueDialog();
const {
resumeProcessor,
isLoading: isLoadingResumeProcessor,
@@ -52,20 +40,15 @@ export const QueueActionsMenuButton = memo(() => {
<MenuButton ref={ref} as={IconButton} size="lg" aria-label="Queue Actions Menu" icon={<PiListBold />} />
<MenuList>
<MenuGroup title={t('common.new')}>
<MenuItem icon={<PiImageBold />} onClick={newGallerySessionWithDialog}>
{t('controlLayers.newGallerySession')}
</MenuItem>
<MenuItem icon={<PiPaintBrushBold />} onClick={newCanvasSessionWithDialog}>
{t('controlLayers.newCanvasSession')}
</MenuItem>
<SessionMenuItems />
</MenuGroup>
<MenuGroup title={t('queue.queue')}>
<MenuItem
isDestructive
icon={<PiXBold />}
onClick={clearQueue.openDialog}
isLoading={clearQueue.isLoading}
isDisabled={clearQueue.isDisabled}
onClick={cancelCurrent.cancelQueueItem}
isLoading={cancelCurrent.isLoading}
isDisabled={cancelCurrent.isDisabled}
>
{t('queue.cancelTooltip')}
</MenuItem>

View File

@@ -1,123 +0,0 @@
import type { TooltipProps } from '@invoke-ai/ui-library';
import { Divider, Flex, ListItem, Text, Tooltip, UnorderedList } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppSelector } from 'app/store/storeHooks';
import { useIsReadyToEnqueue } from 'common/hooks/useIsReadyToEnqueue';
import { selectSendToCanvas } from 'features/controlLayers/store/canvasSettingsSlice';
import { selectIterations, selectParamsSlice } from 'features/controlLayers/store/paramsSlice';
import {
selectDynamicPromptsIsLoading,
selectDynamicPromptsSlice,
} from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { getShouldProcessPrompt } from 'features/dynamicPrompts/util/getShouldProcessPrompt';
import { selectAutoAddBoardId } from 'features/gallery/store/gallerySelectors';
import type { PropsWithChildren } from 'react';
import { memo, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { useEnqueueBatchMutation } from 'services/api/endpoints/queue';
import { useBoardName } from 'services/api/hooks/useBoardName';
const selectPromptsCount = createSelector(selectParamsSlice, selectDynamicPromptsSlice, (params, dynamicPrompts) =>
getShouldProcessPrompt(params.positivePrompt) ? dynamicPrompts.prompts.length : 1
);
type Props = TooltipProps & {
prepend?: boolean;
};
export const QueueButtonTooltip = ({ prepend, children, ...rest }: PropsWithChildren<Props>) => {
return (
<Tooltip label={<TooltipContent prepend={prepend} />} maxW={512} {...rest}>
{children}
</Tooltip>
);
};
const TooltipContent = memo(({ prepend = false }: { prepend?: boolean }) => {
const { t } = useTranslation();
const { isReady, reasons } = useIsReadyToEnqueue();
const sendToCanvas = useAppSelector(selectSendToCanvas);
const isLoadingDynamicPrompts = useAppSelector(selectDynamicPromptsIsLoading);
const promptsCount = useAppSelector(selectPromptsCount);
const iterationsCount = useAppSelector(selectIterations);
const autoAddBoardId = useAppSelector(selectAutoAddBoardId);
const autoAddBoardName = useBoardName(autoAddBoardId);
const [_, { isLoading }] = useEnqueueBatchMutation({
fixedCacheKey: 'enqueueBatch',
});
const queueCountPredictionLabel = useMemo(() => {
const generationCount = Math.min(promptsCount * iterationsCount, 10000);
const prompts = t('queue.prompts', { count: promptsCount });
const iterations = t('queue.iterations', { count: iterationsCount });
const generations = t('queue.generations', { count: generationCount });
return `${promptsCount} ${prompts} \u00d7 ${iterationsCount} ${iterations} -> ${generationCount} ${generations}`.toLowerCase();
}, [iterationsCount, promptsCount, t]);
const label = useMemo(() => {
if (isLoading) {
return t('queue.enqueueing');
}
if (isLoadingDynamicPrompts) {
return t('dynamicPrompts.loading');
}
if (isReady) {
if (prepend) {
return t('queue.queueFront');
}
return t('queue.queueBack');
}
return t('queue.notReady');
}, [isLoading, isLoadingDynamicPrompts, isReady, prepend, t]);
const addingTo = useMemo(() => {
if (sendToCanvas) {
return t('controlLayers.stagingOnCanvas');
}
return t('parameters.invoke.addingImagesTo');
}, [sendToCanvas, t]);
const destination = useMemo(() => {
if (sendToCanvas) {
return t('queue.canvas');
}
if (autoAddBoardName) {
return autoAddBoardName;
}
return t('boards.uncategorized');
}, [autoAddBoardName, sendToCanvas, t]);
return (
<Flex flexDir="column" gap={1}>
<Text fontWeight="semibold">{label}</Text>
<Text>{queueCountPredictionLabel}</Text>
{reasons.length > 0 && (
<>
<Divider opacity={0.2} borderColor="base.900" />
<UnorderedList>
{reasons.map((reason, i) => (
<ListItem key={`${reason.content}.${i}`}>
<span>
{reason.prefix && (
<Text as="span" fontWeight="semibold">
{reason.prefix}:{' '}
</Text>
)}
<Text as="span">{reason.content}</Text>
</span>
</ListItem>
))}
</UnorderedList>
</>
)}
<Divider opacity={0.2} borderColor="base.900" />
<Text fontStyle="oblique 10deg">
{addingTo}{' '}
<Text as="span" fontWeight="semibold">
{destination}
</Text>
</Text>
</Flex>
);
});
TooltipContent.displayName = 'QueueButtonTooltipContent';

Some files were not shown because too many files have changed in this diff.