Compare commits


207 Commits

Author SHA1 Message Date
psychedelicious
ab4dec4aa6 chore(ui): lint 2025-06-16 19:02:01 +10:00
psychedelicious
3ae6714188 feat(ui): rework simple session initial state 2025-06-16 18:59:01 +10:00
psychedelicious
0b461fa8da fix(ui): invoke button tooltip on generate tab 2025-06-16 18:27:29 +10:00
psychedelicious
26a7e342c3 fix(ui): progress image fixes 2025-06-16 18:25:37 +10:00
psychedelicious
55e5f8be1b feat(ui): make autoswitch on/off
When the invocation cache is used, we might skip all progress images. This can prevent auto-switch-on-first-progress from working, as we don't get any of those events.

It's much easier to only support auto-switch on complete.
2025-06-16 17:41:47 +10:00
psychedelicious
5a65121247 feat(ui): refine ref images UI 2025-06-16 17:33:17 +10:00
psychedelicious
c2fe280c38 feat(ui): toggleable negative prompt 2025-06-16 17:03:19 +10:00
psychedelicious
e7dee83dad fix(ui): remove old isSelected from refImageAdded call 2025-06-16 16:34:29 +10:00
psychedelicious
3bf08b6d88 chore: bump version to v6.0.0a2 2025-06-13 21:22:30 +10:00
psychedelicious
f821bc30a0 fix(ui): update queue item preview images on init of queue items context 2025-06-13 17:07:50 +10:00
psychedelicious
bc6f493931 fix(ui): hack to close chakra tooltips on drag 2025-06-13 17:07:42 +10:00
psychedelicious
fa332d1a56 tweak(ui): ref image header 2025-06-13 15:39:57 +10:00
psychedelicious
4452ddf6c6 experiment(ui): add generate tab 2025-06-13 15:38:53 +10:00
psychedelicious
5dec79fde5 refactor(ui): ref images (WIP) 2025-06-13 15:19:02 +10:00
psychedelicious
df833d7563 refactor(ui): ref images (WIP) 2025-06-13 13:08:03 +10:00
psychedelicious
c2556f99bc refactor(ui): refImage.ipAdapter -> refImage.config 2025-06-13 12:22:02 +10:00
psychedelicious
391b883a87 feat(ui): split out ref images into own slice (WIP) 2025-06-12 17:18:06 +10:00
psychedelicious
9a06ffe3f5 feat(ui): simple session initial state cards are buttons 2025-06-11 12:47:46 +10:00
psychedelicious
548273643e chore(ui): dpdm 2025-06-11 12:34:01 +10:00
psychedelicious
d158027565 refactor(ui): async modal pattern; use for deleting images
This was needed for a canvas flow change which is currently paused, but the new API is much much nicer to use, so I am keeping it.
2025-06-11 12:34:01 +10:00
psychedelicious
419773cde0 fix(ui): use imageDTO in staging area 2025-06-11 12:34:01 +10:00
psychedelicious
e64075b913 fix(ui): wait until last queue item deleted before flagging canvas session finished 2025-06-11 12:34:01 +10:00
psychedelicious
c9e48fc195 feat(ui): store output image DTO in session context instead of just the name 2025-06-11 12:34:01 +10:00
psychedelicious
67b11d3e0c feat(ui): add AppGetState type 2025-06-11 12:34:01 +10:00
psychedelicious
b782d8c7cd chore: bump version to v6.0.0a1 2025-06-11 12:34:01 +10:00
psychedelicious
d34256b788 feat(ui): close viewer on escape 2025-06-11 12:33:48 +10:00
psychedelicious
4c9553af51 fix(ui): switch only on first progress image 2025-06-11 12:33:48 +10:00
psychedelicious
dbe7fbea2e feat(ui): add on first progress autoswitch mode 2025-06-11 12:33:48 +10:00
psychedelicious
10ba402437 feat(ui): move canvas-specific staging subscriptions to CanvasStagingAreaModule 2025-06-11 12:33:48 +10:00
psychedelicious
25c67f0c68 chore(ui): lint 2025-06-11 12:33:48 +10:00
psychedelicious
144485aa0b feat(ui): make main panel styling and title consistent 2025-06-11 12:33:48 +10:00
psychedelicious
b4c10509f5 feat(ui): add startover button to canvas toolbar 2025-06-11 12:33:48 +10:00
psychedelicious
1cd4e23072 feat(ui): fiddle w/ staging area header 2025-06-11 12:33:48 +10:00
psychedelicious
8f23c4513d feat(ui): remove technical progress message from full preview 2025-06-11 12:33:48 +10:00
psychedelicious
a430872e60 feat(ui): simple session initial state 2025-06-11 12:33:48 +10:00
psychedelicious
af838e8ebb feat(ui): remove vary and edit as control buttons 2025-06-11 12:33:48 +10:00
psychedelicious
7f5fdcd54c refactor(ui): migrate from canceling queue items to deleting, make queue hook APIs consistent 2025-06-11 12:33:48 +10:00
psychedelicious
ba5fd32f20 fix(ui): mini preview bg color 2025-06-11 12:33:47 +10:00
psychedelicious
9f3d09dc01 fix(ui): hide layers when not on canvas tab 2025-06-11 12:33:47 +10:00
psychedelicious
081942b72e build(ui): temporarily ignore all knip issues 2025-06-11 12:33:47 +10:00
psychedelicious
2b54b32740 feat(ui): finish generation when discarding last item 2025-06-11 12:33:47 +10:00
psychedelicious
1145d67d0d feat(ui): when discarding last item, select new last instead of first 2025-06-11 12:33:47 +10:00
psychedelicious
3d0dd13d8c feat(ui): tweak staging image display 2025-06-11 12:33:47 +10:00
psychedelicious
efb28d55a2 feat(ui): add staging area toolbar to simple session 2025-06-11 12:33:47 +10:00
psychedelicious
e41050359f fix(ui): ensure canvas tool modules are destroyed 2025-06-11 12:33:47 +10:00
psychedelicious
667ed6ab09 fix(ui): reset layers when changing session type 2025-06-11 12:33:47 +10:00
psychedelicious
f5ad063253 feat(ui): improved staging placeholders 2025-06-11 12:33:47 +10:00
psychedelicious
ef2324d72a feat(ui): improved staging placeholders 2025-06-11 12:33:47 +10:00
psychedelicious
26a01d544f feat(ui): more staging fixes 2025-06-11 12:33:46 +10:00
psychedelicious
1f5572cf75 feat(ui): update canvas session state handling for new staging strat 2025-06-11 12:33:46 +10:00
psychedelicious
ad137cdc33 chore(ui): lint (partial cleanup) 2025-06-11 12:33:46 +10:00
psychedelicious
250a834f44 feat(ui): rough out canvas staging area 2025-06-11 12:33:46 +10:00
psychedelicious
0cb3a7c654 feat(app): support deleting queue items by id or destination 2025-06-11 12:33:46 +10:00
psychedelicious
34460984a9 feat(ui): tweak canvas scroll to zoom feel 2025-06-11 12:33:46 +10:00
psychedelicious
4d628c10db docs(ui): add comment about auto-switch not being quite right yet 2025-06-11 12:33:46 +10:00
psychedelicious
f240f1a5d0 feat: canvas flow rework (wip) 2025-06-11 12:33:46 +10:00
psychedelicious
88d2878a11 feat(ui): prevent flicker of image action buttons 2025-06-11 12:33:46 +10:00
psychedelicious
0df8ab51ee feat(ui): move socket events handling into ctx component 2025-06-11 12:33:46 +10:00
psychedelicious
f66f2b3c71 feat(ui): modularize all staging area logic so it can be shared w/ canvas more easily 2025-06-11 12:33:46 +10:00
psychedelicious
f9366ffeff perf(ui): queue actions menu is lazy 2025-06-11 12:33:45 +10:00
psychedelicious
d7fc9604f2 fix(ui): cursor on staging area preview image 2025-06-11 12:33:45 +10:00
psychedelicious
cbda3f1c86 feat(ui): remove clear queue ui components 2025-06-11 12:33:45 +10:00
psychedelicious
973b2a9b45 feat(app): do not prune queue on startup
With the new canvas design, this will result in loss of staging area images.
2025-06-11 12:33:45 +10:00
psychedelicious
5bea0cd431 tidy(ui): component organization 2025-06-11 12:33:45 +10:00
psychedelicious
7a01278537 fix(ui): prevent drag of progress images 2025-06-11 12:33:45 +10:00
psychedelicious
ea42d08bc2 feat: canvas flow rework (wip) 2025-06-11 12:33:45 +10:00
psychedelicious
4d3089f870 feat: canvas flow rework (wip) 2025-06-11 12:33:45 +10:00
psychedelicious
ebd88f59ad chore(ui): typegen 2025-06-11 12:33:45 +10:00
psychedelicious
cce66d90cc feat(api): remove status from list all queue items query 2025-06-11 12:33:45 +10:00
psychedelicious
67c1f900bb tidy(ui): app layout components 2025-06-11 12:33:44 +10:00
psychedelicious
8df45ce671 feat: canvas flow rework (wip) 2025-06-11 12:33:44 +10:00
psychedelicious
cc411fd244 feat: canvas flow rework (wip) 2025-06-11 12:33:44 +10:00
psychedelicious
eae40cae2b feat: canvas flow rework (wip) 2025-06-11 12:33:44 +10:00
psychedelicious
1e739dc003 fix(ui): unstable selector results in lora drop down 2025-06-11 12:33:44 +10:00
psychedelicious
ea63e16b69 feat: canvas flow rework (wip) 2025-06-11 12:33:44 +10:00
psychedelicious
6923a23f31 feat: canvas flow rework (wip) 2025-06-11 12:33:44 +10:00
psychedelicious
cb0e6da5cf wip progress events 2025-06-11 12:33:44 +10:00
psychedelicious
ae35d67c9a refactor(ui): canvas flow (wip) 2025-06-11 12:33:44 +10:00
psychedelicious
7174768152 fix(ui): ref goes undefined in GalleryImage
This appears to be a bug in Chakra UI v2 - use of a fallback component makes the ref passed to an image end up undefined. Had to remove the skeleton loader fallback component.
2025-06-11 12:33:44 +10:00
psychedelicious
d750a2c6c0 fix(ui): merge refs when forwarding in DndImage 2025-06-11 12:33:44 +10:00
psychedelicious
41eafcf47a fix(ui): remove unused sessionId field from type 2025-06-11 12:33:44 +10:00
psychedelicious
4bcb24eb82 fix(ui): ensure all args are passed to handler when creating new canvas from image 2025-06-11 12:33:43 +10:00
psychedelicious
926c29b91d feat(ui): bookmark new inpaint masks 2025-06-11 12:33:43 +10:00
psychedelicious
8dad22ef93 feat(ui): support bookmarking an entity when adding it 2025-06-11 12:33:43 +10:00
psychedelicious
172142ce03 fix(ui): ensure images are added to gallery in simple sessions 2025-06-11 12:33:43 +10:00
psychedelicious
dc31eaa3f9 feat(ui): images always added to gallery in simple session 2025-06-11 12:33:43 +10:00
psychedelicious
19371d70fe wip 2025-06-11 12:33:43 +10:00
psychedelicious
d8d69891c8 refactor(ui): canvas flow (wip) 2025-06-11 12:33:43 +10:00
psychedelicious
168875327b refactor(ui): canvas flow (wip) 2025-06-11 12:33:43 +10:00
psychedelicious
c7fb3d3906 refactor(ui): canvas flow events (wip) 2025-06-11 12:33:43 +10:00
psychedelicious
5aa5ca13ec refactor(ui): canvas flow (wip) 2025-06-11 12:33:43 +10:00
psychedelicious
eb9edff186 refactor(ui): canvas flow (wip) 2025-06-11 12:33:43 +10:00
psychedelicious
839c2e376a refactor(ui): canvas flow (wip) 2025-06-11 12:33:43 +10:00
psychedelicious
1ba3e85e68 refactor(ui): canvas flow (wip) 2025-06-11 12:33:42 +10:00
psychedelicious
28ee1d911a fix(ui): circular import issue 2025-06-11 12:33:42 +10:00
psychedelicious
74a2cb7b77 refactor(ui): params state zodification 2025-06-11 12:33:42 +10:00
psychedelicious
e139158a81 refactor(ui): move params state to big file of canvas zod stuff 2025-06-11 12:33:42 +10:00
psychedelicious
2b383de39c refactor(ui): zod-ify params slice state 2025-06-11 12:33:42 +10:00
psychedelicious
dd136a63a2 refactor(ui): org state in prep for new flow 2025-06-11 12:33:42 +10:00
psychedelicious
325f0a4c5b refactor(ui): image viewer & comparison convolutedness 2025-06-11 12:33:42 +10:00
psychedelicious
7b5ab0d458 feat(ui): default canvas tool is move 2025-06-11 12:33:42 +10:00
psychedelicious
4fa69176cb chore(ui): bump @reduxjs/toolkit to latest 2025-06-11 12:33:42 +10:00
psychedelicious
1418b0546c feat(ui): viewer is a modal (wip) 2025-06-11 12:33:42 +10:00
psychedelicious
85f98ab3eb fix(app): error on upload + resize for unusual image modes 2025-06-11 11:18:08 +10:00
Mary Hipp
dac75685be disable publish and cancel buttons once it begins 2025-06-10 19:50:09 -04:00
psychedelicious
d7b5a8b298 fix: opencv dependency conflict (#8095)
* build: prevent `opencv-python` from being installed

Fixes this error: `AttributeError: module 'cv2.ximgproc' has no attribute 'thinning'`

`opencv-contrib-python` supersedes `opencv-python`, providing the same API + additional features. The two packages should not be installed at the same time to avoid conflicts and/or errors.

The `invisible-watermark` package requires `opencv-python`, but we require the contrib variant.

This change updates `pyproject.toml` to prevent `opencv-python` from ever being installed, using a `uv` feature called dependency overrides.

* feat(ui): data viewer supports disabling wrap

* feat(api): list _all_ pkgs in app deps endpoint

* chore(ui): typegen

* feat(ui): update about modal to display new full deps list

* chore: uv lock
2025-06-10 08:33:41 -04:00
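As an aside on the fix above: both wheels install the same `cv2` module, and only the contrib variant ships the `cv2.ximgproc` submodule, so a quick runtime probe shows which build is actually active. The snippet below is an illustration only, not part of this changeset.

```python
# Illustrative probe (not part of this changeset): opencv-python and
# opencv-contrib-python both install the `cv2` package, but only the contrib
# build ships `cv2.ximgproc.thinning`, which the heuristic resize relies on.
import cv2


def has_contrib_thinning() -> bool:
    """Return True if the contrib-only thinning API is importable."""
    return hasattr(cv2, "ximgproc") and hasattr(cv2.ximgproc, "thinning")


if __name__ == "__main__":
    if has_contrib_thinning():
        print("opencv-contrib-python is active; cv2.ximgproc.thinning is available")
    else:
        print("plain opencv-python appears to be shadowing the contrib build")
```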
Kent Keirsey
d3ecaa740f Add Precise Reference to Starter Models 2025-06-09 22:02:11 +10:00
dunkeroni
b5a6765a3d also search image creation date 2025-06-09 21:54:26 +10:00
psychedelicious
3704573ef8 chore: bump version to v5.14.0 2025-06-06 22:36:32 +10:00
Hiroto N
01fbf2ce4d translationBot(ui): update translation (Japanese)
Currently translated at 76.5% (1467 of 1917 strings)

Co-authored-by: Hiroto N <hironow365@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2025-06-06 20:56:13 +10:00
Riccardo Giovanetti
96e7003449 translationBot(ui): update translation (Italian)
Currently translated at 98.9% (1896 of 1917 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-06-06 20:56:13 +10:00
RyoKoba
80197b8856 translationBot(ui): update translation (Japanese)
Currently translated at 76.1% (1460 of 1917 strings)

Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2025-06-06 20:52:36 +10:00
Hosted Weblate
0187bc671e translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2025-06-06 20:52:36 +10:00
psychedelicious
31584daabe feat(ui): display canvas spinner during compositing operations 2025-06-06 20:50:02 +10:00
psychedelicious
a6cb522fed feat(ui): add bboxUpdated callback to transformer, use it to fit layer to stage when creating new canvas from an image
When a layer is initialized, we do not yet know its bbox, so we cannot fit the stage view to the layer. We have to wait for the bbox calculation to finish. Previously, we had no way to wait until that bbox calculation was complete to take an action.

For example, this means we could not fit the layers to the stage immediately after creating a new layer, because we don't know the dimensions of the layer yet.

This callback lets us do that. When creating a new canvas from an image, we now...
- Register a bbox update callback to fit the layers to stage
- Layer is created
- Canvas initializes the layer's entity adapter module (layer's width and height are set to zero at this point)
- Canvas calculates the bbox
- Bbox is updated (width and height are now correct)
- Callback is run, fitting layer to stage
2025-06-06 20:50:02 +10:00
psychedelicious
f70be1e415 feat(ui): animate stage fit operations (e.g. fit layers to stage) 2025-06-06 20:50:02 +10:00
psychedelicious
a2901f2b46 feat(ui): add method to stage to fit to union of bbox and layers
This ensures that _both_ bbox and layers are visible
2025-06-06 20:50:02 +10:00
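The geometry behind this is straightforward. As a rough illustration (the real implementation lives in the TypeScript canvas modules, so the names below are hypothetical), fitting to the union of the bbox and the layers amounts to unioning two rectangles and choosing a stage scale that contains the result:

```python
# Hypothetical sketch of "fit stage to union of bbox and layers": union two
# rectangles, then pick a stage scale that contains the union with some padding.
from dataclasses import dataclass


@dataclass
class Rect:
    x: float
    y: float
    width: float
    height: float


def union(a: Rect, b: Rect) -> Rect:
    x1, y1 = min(a.x, b.x), min(a.y, b.y)
    x2 = max(a.x + a.width, b.x + b.width)
    y2 = max(a.y + a.height, b.y + b.height)
    return Rect(x1, y1, x2 - x1, y2 - y1)


def fit_scale(target: Rect, stage_width: float, stage_height: float, padding: float = 0.9) -> float:
    """Scale factor that fits `target` inside the stage, leaving a small margin."""
    return padding * min(stage_width / target.width, stage_height / target.height)
```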
psychedelicious
b61c66c3a9 feat(ui): add spinner indicator to canvas during rasterizing operations and while pending rect calculations 2025-06-06 20:50:02 +10:00
psychedelicious
c77f9ec202 feat(ui): add hook to get all entity adapters in array 2025-06-06 20:50:02 +10:00
psychedelicious
2c5c35647f fix(ui): new canvas from image places image in bbox correctly 2025-06-06 20:50:02 +10:00
dunkeroni
bf0fdbd10e Fix: inpaint model mask using wrong tensor name 2025-06-05 11:31:35 -04:00
psychedelicious
731d317a42 chore(ui): update whatsnew 2025-06-04 22:29:37 +10:00
psychedelicious
e81579f752 fix(mm): handle invoke syntax for HF repo ids when fetching HF model metadata
Closes #8074
2025-06-04 22:27:15 +10:00
Linos
9a10e98c0b translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (1918 of 1918 strings)

Co-authored-by: Linos <linos.coding@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
2025-06-04 17:03:06 +10:00
Riccardo Giovanetti
27fdc139b7 translationBot(ui): update translation (Italian)
Currently translated at 98.9% (1897 of 1918 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-06-04 17:03:06 +10:00
psychedelicious
0a00805afc chore: bump version to v5.13.0 2025-06-04 05:55:34 +10:00
psychedelicious
7b38143fbd chore: bump version to v5.13.0rc3 2025-05-30 21:44:21 +10:00
mickr777
4c5ad1b7d7 Ruff Fix 2025-05-30 19:03:43 +10:00
mickr777
d80cc962ad Delay Imports that require torch 2025-05-30 19:03:43 +10:00
RyoKoba
7ccabfa200 translationBot(ui): update translation (Japanese)
Currently translated at 68.0% (1304 of 1915 strings)

Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2025-05-30 14:48:41 +10:00
Riccardo Giovanetti
936d59cc52 translationBot(ui): update translation (Italian)
Currently translated at 98.9% (1894 of 1915 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-05-30 14:48:41 +10:00
psychedelicious
fc16fb6099 chore: bump version to v5.13.0rc2 2025-05-30 14:16:33 +10:00
psychedelicious
c848cbc2e3 feat(app): move output annotation checking to run_app
Also change import order to ensure CLI args are handled correctly. Had to do this because importing `InvocationRegistry` before parsing args resulted in the `--root` CLI arg being ignored.
2025-05-30 14:10:13 +10:00
psychedelicious
66fd0f0d8a feat(ui): warn on unregistered invocation output 2025-05-30 14:10:13 +10:00
psychedelicious
c266f39f06 chore(ui): typegen 2025-05-30 13:36:04 +10:00
psychedelicious
98a44fa4d7 fix(ui): conditional display of message 2025-05-30 13:36:04 +10:00
Mary Hipp
c1d230f961 add support to delete all uncategorized images 2025-05-30 13:36:04 +10:00
Kevin Turner
68108435ae feat(LoRA): allow LoRA layer patcher to continue past unknown layers 2025-05-30 13:29:02 +10:00
psychedelicious
e121bf1f62 feat(ui): persist sizes of all 4 prompt boxes 2025-05-30 12:36:06 +10:00
psychedelicious
4835c344b3 feat(ui): implement generalized textarea size tracking system 2025-05-30 12:36:06 +10:00
Mary Hipp
a589dec122 store positive prompt textarea height in redux so it persists across refresh 2025-05-30 12:36:06 +10:00
dunkeroni
bc67d5c841 add invert logic to grayscale mask composite 2025-05-30 11:19:37 +10:00
Mary Hipp
f3d5691c04 use onClickGoToModelManager for empty model picker 2025-05-29 11:13:55 -04:00
psychedelicious
b98abc2457 chore(ui): typegen 2025-05-29 13:49:07 +10:00
psychedelicious
7e527ccfb7 feat(api): add validation for max resize_to on upload endpoint 2025-05-29 13:49:07 +10:00
psychedelicious
0f0c911845 chore: uv lock 2025-05-29 13:49:07 +10:00
psychedelicious
e4818b967b tidy(api): remove benchmark logging 2025-05-29 13:49:07 +10:00
psychedelicious
ce3eede26f feat(nodes): revised heuristic_resize
better handling for smaller image sizes
2025-05-29 13:49:07 +10:00
psychedelicious
d98725c5e9 feat(nodes): use guo-hall thinning 2025-05-29 13:49:07 +10:00
psychedelicious
31a96d2945 feat(ui): use resize-on-upload functionality when creating new canvas from image 2025-05-29 13:49:07 +10:00
psychedelicious
845a321a43 feat(ui): support resize_to when uploading images 2025-05-29 13:49:07 +10:00
psychedelicious
87a44a28ef chore(ui): typegen 2025-05-29 13:49:07 +10:00
psychedelicious
d5b9c3ee5a feat(api): support resizing image on upload 2025-05-29 13:49:07 +10:00
psychedelicious
91db136cd1 feat(nodes): much faster heuristic resize utility
Add `heuristic_resize_fast`, which does the same thing as `heuristic_resize`, except it's about 20x faster.

This is achieved by using opencv for the binary edge handling instead of Python, and checking only 100k pixels to determine what kind of image we are working with.

Besides being much faster, it results in cleaner lines for resized binary canny edge maps and fewer misidentified segmentation maps.

Tested against normal images, binary canny edge maps, grayscale HED edge maps, and segmentation maps.

Tested resizing up and down for each.

Besides the new utility function, I needed to swap the `opencv-python` dep for `opencv-contrib-python`, which includes `cv2.ximgproc.thinning`. This function accounts for a good chunk of the perf improvement.
2025-05-29 13:49:07 +10:00
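The commit message sketches the approach rather than the code. A rough, hypothetical reading of the 100k-pixel idea (function names and thresholds below are illustrative, not the actual `heuristic_resize_fast` implementation) looks something like this:

```python
# Hypothetical sketch of the sampling idea described above: inspect a bounded
# number of pixels to guess what kind of control image this is, then choose an
# OpenCV interpolation accordingly. Names and thresholds are illustrative only.
import cv2
import numpy as np


def classify_control_image(np_img: np.ndarray, sample_size: int = 100_000) -> str:
    channels = np_img.shape[-1] if np_img.ndim == 3 else 1
    flat = np_img.reshape(-1, channels)
    rng = np.random.default_rng(0)
    idx = rng.choice(flat.shape[0], size=min(sample_size, flat.shape[0]), replace=False)
    sample = flat[idx]
    if np.unique(sample).size <= 2:
        return "binary"      # e.g. a canny edge map
    if channels == 1 or np.ptp(sample, axis=-1).max() == 0:
        return "grayscale"   # e.g. a HED soft edge map
    return "color"


def resize_by_kind(np_img: np.ndarray, width: int, height: int) -> np.ndarray:
    kind = classify_control_image(np_img)
    interp = cv2.INTER_NEAREST if kind == "binary" else cv2.INTER_LANCZOS4
    return cv2.resize(np_img, (width, height), interpolation=interp)
```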
Jonathan
f351ad4b66 Update communityNodes.md
Added some of JPPhoto's nodes.
2025-05-28 07:26:44 +10:00
psychedelicious
fb6fb9abbd gh: update CODEOWNERS
Added myself to everything so we do not get into situations where we need to rely on vic or lincoln to approve
2025-05-27 22:37:44 +10:00
psychedelicious
675c990486 docs: add comments to classifiers stuff 2025-05-27 22:02:48 +10:00
psychedelicious
6ee5cde4bb ci: do not install project when checking classifiers 2025-05-27 22:02:48 +10:00
psychedelicious
c8077f9430 ci: check classifiers in python-checks workflow 2025-05-27 22:02:48 +10:00
psychedelicious
6aabe9959e chore: fix license classifier 2025-05-27 22:02:48 +10:00
psychedelicious
0b58d172d2 build: update build script to check classifiers 2025-05-27 22:02:48 +10:00
psychedelicious
d7c6e293d7 scripts: add script to check pypi classifiers 2025-05-27 22:02:48 +10:00
psychedelicious
c600bc867d chore: bump version to v5.13.0rc1 2025-05-27 13:30:34 +10:00
Riccardo Giovanetti
f4140dd772 translationBot(ui): update translation (Italian)
Currently translated at 98.9% (1890 of 1911 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.9% (1890 of 1911 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-05-27 13:18:06 +10:00
psychedelicious
a2d8261d40 feat(ui): canvas scroll scale snap 2025-05-27 13:10:57 +10:00
psychedelicious
bce88a8873 perf(ui): lazy mount scale slider popover 2025-05-27 13:10:57 +10:00
psychedelicious
b37e1a3ad6 feat(ui): do not round scale
Makes it a lot smoother, don't think it breaks anything...
2025-05-27 13:10:57 +10:00
psychedelicious
35a088e0a6 perf(ui): optimize <CanvasToolbarScale /> 2025-05-27 13:10:57 +10:00
psychedelicious
b936cab039 feat(ui): add computed for stage scale 2025-05-27 13:10:57 +10:00
psychedelicious
34e4093408 fix(ui): revert snapping logic, doesn't work w/ certain input devices 2025-05-27 13:10:57 +10:00
Kent Keirsey
d7f93c3cc0 uv update 2025-05-26 22:54:15 -04:00
Kent Keirsey
d4c4926caa Update Compel to 2.1.1 and apply Sentences Split logic 2025-05-26 22:54:15 -04:00
psychedelicious
558c7db055 chore(ui): knipignore InpaintMaskAddButtons 2025-05-27 07:28:47 +10:00
psychedelicious
2ece59b51b feat(ui): remove unnecessary type casts 2025-05-27 07:28:47 +10:00
psychedelicious
7dbe39957c feat(ui): bbox rect is always defined, no need for fallback logic 2025-05-27 07:28:47 +10:00
psychedelicious
6fa46d35a5 feat(ui): inpaint mask settings layout 2025-05-27 07:28:47 +10:00
psychedelicious
b2a2b38ea8 feat(ui): split inpaint mask setting selectors to avoid manual memoization 2025-05-27 07:28:47 +10:00
dunkeroni
12934da390 Use Optional instead of Nullable for mask settings 2025-05-27 07:28:47 +10:00
dunkeroni
231bc18188 remove buttons, change denoise limit format 2025-05-27 07:28:47 +10:00
dunkeroni
530cd180c5 chore: ruff 2025-05-27 07:28:47 +10:00
dunkeroni
2a92e7b920 Flux/CogView/SD3 compatible with gradient masks 2025-05-27 07:28:47 +10:00
dunkeroni
019e057e29 chore: typegen 2025-05-27 07:28:47 +10:00
dunkeroni
9aa26f883e chore: ruff 2025-05-27 07:28:47 +10:00
dunkeroni
3f727e24b1 change default noise level to 0.15 2025-05-27 07:28:47 +10:00
dunkeroni
9e90bf1b20 fix gradient mask broken with flux gen 2025-05-27 07:28:47 +10:00
dunkeroni
db3964797f clean up comments 2025-05-27 07:28:47 +10:00
dunkeroni
881efbda1b fix: inpaint breaks when scaled processing 2025-05-27 07:28:47 +10:00
dunkeroni
e9ce2ed5f2 inpaint mask sliders compatible with outpainting 2025-05-27 07:28:47 +10:00
dunkeroni
53ac9eafbf reuse inpaint image noise seed for caching 2025-05-27 07:28:47 +10:00
dunkeroni
9e095006a5 remove some AI detritus 2025-05-27 07:28:47 +10:00
dunkeroni
21b24c3ba6 change denoise limit default to 1.0 2025-05-27 07:28:47 +10:00
dunkeroni
139ecc10ce ruff 2025-05-27 07:28:47 +10:00
dunkeroni
78ea143b46 composite masks based on denoise level 2025-05-27 07:28:47 +10:00
dunkeroni
174249ec15 gradient mask node works on greyscale now 2025-05-27 07:28:47 +10:00
dunkeroni
2510ad7431 consolidate code 2025-05-27 07:28:47 +10:00
dunkeroni
ba5e855a60 Correctly composite grey values on white for masks 2025-05-27 07:28:47 +10:00
dunkeroni
23627cf18d compositing in frontend 2025-05-27 07:28:47 +10:00
dunkeroni
5e20c9a1ca mask noise slider option 2025-05-27 07:28:47 +10:00
Kent Keirsey
933cf5f276 update prettier 2025-05-25 23:53:16 -04:00
Kent Keirsey
41316de659 Update order 2025-05-25 23:53:16 -04:00
Kent Keirsey
041ccfd68e Enable 'pull into bounding box' from empty Control Layer 2025-05-25 23:53:16 -04:00
dunkeroni
ad24c203a4 preserve SDXL training values for bounding box 2025-05-25 08:15:37 -04:00
Kent Keirsey
3fd28ce600 Update scaling math to land on 100% consistently. 2025-05-25 07:59:27 -04:00
Mary Hipp
32df3bdf6e typegen 2025-05-22 14:09:10 -04:00
Mary Hipp
ba69e89e8c typegen 2025-05-22 14:09:10 -04:00
Mary Hipp
a8e0c48ddc add new method types to metadata 2025-05-22 14:09:10 -04:00
Jonathan
66f6571086 Update manual installation for v5.12.0 2025-05-22 09:00:58 -04:00
374 changed files with 9512 additions and 6154 deletions

.github/CODEOWNERS (vendored, 24 lines changed)
View File

@@ -1,5 +1,5 @@
# continuous integration
/.github/workflows/ @lstein @blessedcoolant @hipsterusername @ebr @jazzhaiku
/.github/workflows/ @lstein @blessedcoolant @hipsterusername @ebr @jazzhaiku @psychedelicious
# documentation
/docs/ @lstein @blessedcoolant @hipsterusername @psychedelicious
@@ -9,13 +9,13 @@
/invokeai/app/ @blessedcoolant @psychedelicious @hipsterusername @jazzhaiku
# installation and configuration
/pyproject.toml @lstein @blessedcoolant @hipsterusername
/docker/ @lstein @blessedcoolant @hipsterusername @ebr
/scripts/ @ebr @lstein @hipsterusername
/installer/ @lstein @ebr @hipsterusername
/invokeai/assets @lstein @ebr @hipsterusername
/invokeai/configs @lstein @hipsterusername
/invokeai/version @lstein @blessedcoolant @hipsterusername
/pyproject.toml @lstein @blessedcoolant @psychedelicious @hipsterusername
/docker/ @lstein @blessedcoolant @psychedelicious @hipsterusername @ebr
/scripts/ @ebr @lstein @psychedelicious @hipsterusername
/installer/ @lstein @ebr @psychedelicious @hipsterusername
/invokeai/assets @lstein @ebr @psychedelicious @hipsterusername
/invokeai/configs @lstein @psychedelicious @hipsterusername
/invokeai/version @lstein @blessedcoolant @psychedelicious @hipsterusername
# web ui
/invokeai/frontend @blessedcoolant @psychedelicious @lstein @maryhipp @hipsterusername
@@ -24,8 +24,8 @@
/invokeai/backend @lstein @blessedcoolant @hipsterusername @jazzhaiku @psychedelicious @maryhipp
# front ends
/invokeai/frontend/CLI @lstein @hipsterusername
/invokeai/frontend/install @lstein @ebr @hipsterusername
/invokeai/frontend/merge @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/training @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/CLI @lstein @psychedelicious @hipsterusername
/invokeai/frontend/install @lstein @ebr @psychedelicious @hipsterusername
/invokeai/frontend/merge @lstein @blessedcoolant @psychedelicious @hipsterusername
/invokeai/frontend/training @lstein @blessedcoolant @psychedelicious @hipsterusername
/invokeai/frontend/web @psychedelicious @blessedcoolant @maryhipp @hipsterusername

View File

@@ -67,6 +67,10 @@ jobs:
version: '0.6.10'
enable-cache: true
- name: check pypi classifiers
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
run: uv run --no-project scripts/check_classifiers.py ./pyproject.toml
- name: ruff check
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
run: uv tool run ruff@0.11.2 check --output-format=github .
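The classifier-check step added above runs `scripts/check_classifiers.py` against `pyproject.toml`. As a hedged sketch of what such a check might involve (the actual script may differ), one could validate each declared classifier against the known trove classifiers:

```python
# Hypothetical sketch of a classifier check (the real scripts/check_classifiers.py
# may differ): read project.classifiers from pyproject.toml and flag anything
# that is not a recognized trove classifier.
import sys
import tomllib  # Python 3.11+

from trove_classifiers import classifiers as known_classifiers


def main(pyproject_path: str) -> int:
    with open(pyproject_path, "rb") as f:
        data = tomllib.load(f)
    declared = data.get("project", {}).get("classifiers", [])
    unknown = [c for c in declared if c not in known_classifiers]
    for c in unknown:
        print(f"Unknown classifier: {c}")
    return 1 if unknown else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "pyproject.toml"))
```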

View File

@@ -71,7 +71,14 @@ The following commands vary depending on the version of Invoke being installed a
7. Determine the `PyPI` index URL to use for installation, if any. This is necessary to get the right version of torch installed.
=== "Invoke v5.10.0 and later"
=== "Invoke v5.12 and later"
- If you are on Windows or Linux with an Nvidia GPU, use `https://download.pytorch.org/whl/cu128`.
- If you are on Linux with no GPU, use `https://download.pytorch.org/whl/cpu`.
- If you are on Linux with an AMD GPU, use `https://download.pytorch.org/whl/rocm6.2.4`.
- **In all other cases, do not use an index.**
=== "Invoke v5.10.0 to v5.11.0"
- If you are on Windows or Linux with an Nvidia GPU, use `https://download.pytorch.org/whl/cu126`.
- If you are on Linux with no GPU, use `https://download.pytorch.org/whl/cpu`.

View File

@@ -13,6 +13,7 @@ If you'd prefer, you can also just download the whole node folder from the linke
To use a community workflow, download the `.json` node graph file and load it into Invoke AI via the **Load Workflow** button in the Workflow Editor.
- Community Nodes
+ [Anamorphic Tools](#anamorphic-tools)
+ [Adapters-Linked](#adapters-linked-nodes)
+ [Autostereogram](#autostereogram-nodes)
+ [Average Images](#average-images)
@@ -20,9 +21,12 @@ To use a community workflow, download the `.json` node graph file and load it in
+ [Close Color Mask](#close-color-mask)
+ [Clothing Mask](#clothing-mask)
+ [Contrast Limited Adaptive Histogram Equalization](#contrast-limited-adaptive-histogram-equalization)
+ [Curves](#curves)
+ [Depth Map from Wavefront OBJ](#depth-map-from-wavefront-obj)
+ [Enhance Detail](#enhance-detail)
+ [Film Grain](#film-grain)
+ [Flip Pose](#flip-pose)
+ [Flux Ideal Size](#flux-ideal-size)
+ [Generative Grammar-Based Prompt Nodes](#generative-grammar-based-prompt-nodes)
+ [GPT2RandomPromptMaker](#gpt2randompromptmaker)
+ [Grid to Gif](#grid-to-gif)
@@ -61,6 +65,13 @@ To use a community workflow, download the `.json` node graph file and load it in
- [Help](#help)
--------------------------------
### Anamorphic Tools
**Description:** A set of nodes to perform anamorphic modifications to images, like lens blur, streaks, spherical distortion, and vignetting.
**Node Link:** https://github.com/JPPhoto/anamorphic-tools
--------------------------------
### Adapters Linked Nodes
@@ -132,6 +143,13 @@ Node Link: https://github.com/VeyDlin/clahe-node
View:
</br><img src="https://raw.githubusercontent.com/VeyDlin/clahe-node/master/.readme/node.png" width="500" />
--------------------------------
### Curves
**Description:** Adjust an image's curve based on a user-defined string.
**Node Link:** https://github.com/JPPhoto/curves-node
--------------------------------
### Depth Map from Wavefront OBJ
@@ -162,6 +180,20 @@ To be imported, an .obj must use triangulated meshes, so make sure to enable tha
**Node Link:** https://github.com/JPPhoto/film-grain-node
--------------------------------
### Flip Pose
**Description:** This node will flip an openpose image horizontally, recoloring it to make sure that it isn't facing the wrong direction. Note that it does not work with openpose hands.
**Node Link:** https://github.com/JPPhoto/flip-pose-node
--------------------------------
### Flux Ideal Size
**Description:** This node returns an ideal size to use for the first stage of a Flux image generation pipeline. Generating at the right size helps limit duplication and odd subject placement.
**Node Link:** https://github.com/JPPhoto/flux-ideal-size
--------------------------------
### Generative Grammar-Based Prompt Nodes

View File

@@ -1,8 +1,7 @@
import typing
from enum import Enum
from importlib.metadata import PackageNotFoundError, version
from importlib.metadata import distributions
from pathlib import Path
from platform import python_version
from typing import Optional
import torch
@@ -44,24 +43,6 @@ class AppVersion(BaseModel):
highlights: Optional[list[str]] = Field(default=None, description="Highlights of release")
class AppDependencyVersions(BaseModel):
"""App depencency Versions Response"""
accelerate: str = Field(description="accelerate version")
compel: str = Field(description="compel version")
cuda: Optional[str] = Field(description="CUDA version")
diffusers: str = Field(description="diffusers version")
numpy: str = Field(description="Numpy version")
opencv: str = Field(description="OpenCV version")
onnx: str = Field(description="ONNX version")
pillow: str = Field(description="Pillow (PIL) version")
python: str = Field(description="Python version")
torch: str = Field(description="PyTorch version")
torchvision: str = Field(description="PyTorch Vision version")
transformers: str = Field(description="transformers version")
xformers: Optional[str] = Field(description="xformers version")
class AppConfig(BaseModel):
"""App Config Response"""
@@ -76,27 +57,19 @@ async def get_version() -> AppVersion:
return AppVersion(version=__version__)
@app_router.get("/app_deps", operation_id="get_app_deps", status_code=200, response_model=AppDependencyVersions)
async def get_app_deps() -> AppDependencyVersions:
@app_router.get("/app_deps", operation_id="get_app_deps", status_code=200, response_model=dict[str, str])
async def get_app_deps() -> dict[str, str]:
deps: dict[str, str] = {dist.metadata["Name"]: dist.version for dist in distributions()}
try:
xformers = version("xformers")
except PackageNotFoundError:
xformers = None
return AppDependencyVersions(
accelerate=version("accelerate"),
compel=version("compel"),
cuda=torch.version.cuda,
diffusers=version("diffusers"),
numpy=version("numpy"),
opencv=version("opencv-python"),
onnx=version("onnx"),
pillow=version("pillow"),
python=python_version(),
torch=torch.version.__version__,
torchvision=version("torchvision"),
transformers=version("transformers"),
xformers=xformers,
)
cuda = torch.version.cuda or "N/A"
except Exception:
cuda = "N/A"
deps["CUDA"] = cuda
sorted_deps = dict(sorted(deps.items(), key=lambda item: item[0].lower()))
return sorted_deps
@app_router.get("/config", operation_id="get_config", status_code=200, response_model=AppConfig)

View File

@@ -146,7 +146,7 @@ async def list_boards(
response_model=list[str],
)
async def list_all_board_image_names(
board_id: str = Path(description="The id of the board"),
board_id: str = Path(description="The id of the board or 'none' for uncategorized images"),
categories: list[ImageCategory] | None = Query(default=None, description="The categories of image to include."),
is_intermediate: bool | None = Query(default=None, description="Whether to list intermediate images."),
) -> list[str]:

View File

@@ -1,12 +1,13 @@
import io
import json
import traceback
from typing import Optional
from typing import ClassVar, Optional
from fastapi import BackgroundTasks, Body, HTTPException, Path, Query, Request, Response, UploadFile
from fastapi.responses import FileResponse
from fastapi.routing import APIRouter
from PIL import Image
from pydantic import BaseModel, Field
from pydantic import BaseModel, Field, model_validator
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.api.extract_metadata_from_image import extract_metadata_from_image
@@ -19,6 +20,8 @@ from invokeai.app.services.image_records.image_records_common import (
from invokeai.app.services.images.images_common import ImageDTO, ImageUrlsDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
from invokeai.app.util.controlnet_utils import heuristic_resize_fast
from invokeai.backend.image_util.util import np_to_pil, pil_to_np
images_router = APIRouter(prefix="/v1/images", tags=["images"])
@@ -27,6 +30,19 @@ images_router = APIRouter(prefix="/v1/images", tags=["images"])
IMAGE_MAX_AGE = 31536000
class ResizeToDimensions(BaseModel):
width: int = Field(..., gt=0)
height: int = Field(..., gt=0)
MAX_SIZE: ClassVar[int] = 4096 * 4096
@model_validator(mode="after")
def validate_total_output_size(self):
if self.width * self.height > self.MAX_SIZE:
raise ValueError(f"Max total output size for resizing is {self.MAX_SIZE} pixels")
return self
@images_router.post(
"/upload",
operation_id="upload_image",
@@ -46,6 +62,11 @@ async def upload_image(
board_id: Optional[str] = Query(default=None, description="The board to add this image to, if any"),
session_id: Optional[str] = Query(default=None, description="The session ID associated with this upload, if any"),
crop_visible: Optional[bool] = Query(default=False, description="Whether to crop the image"),
resize_to: Optional[str] = Body(
default=None,
description=f"Dimensions to resize the image to, must be stringified tuple of 2 integers. Max total pixel count: {ResizeToDimensions.MAX_SIZE}",
example='"[1024,1024]"',
),
metadata: Optional[str] = Body(
default=None,
description="The metadata to associate with the image, must be a stringified JSON dict",
@@ -59,13 +80,33 @@ async def upload_image(
contents = await file.read()
try:
pil_image = Image.open(io.BytesIO(contents))
if crop_visible:
bbox = pil_image.getbbox()
pil_image = pil_image.crop(bbox)
except Exception:
ApiDependencies.invoker.services.logger.error(traceback.format_exc())
raise HTTPException(status_code=415, detail="Failed to read image")
if crop_visible:
try:
bbox = pil_image.getbbox()
pil_image = pil_image.crop(bbox)
except Exception:
raise HTTPException(status_code=500, detail="Failed to crop image")
if resize_to:
try:
dims = json.loads(resize_to)
resize_dims = ResizeToDimensions(**dims)
except Exception:
raise HTTPException(status_code=400, detail="Invalid resize_to format or size")
try:
# heuristic_resize_fast expects an RGB or RGBA image
pil_rgba = pil_image.convert("RGBA")
np_image = pil_to_np(pil_rgba)
np_image = heuristic_resize_fast(np_image, (resize_dims.width, resize_dims.height))
pil_image = np_to_pil(np_image)
except Exception:
raise HTTPException(status_code=500, detail="Failed to resize image")
extracted_metadata = extract_metadata_from_image(
pil_image=pil_image,
invokeai_metadata_override=metadata,
@@ -356,6 +397,29 @@ async def delete_images_from_list(
raise HTTPException(status_code=500, detail="Failed to delete images")
@images_router.delete(
"/uncategorized", operation_id="delete_uncategorized_images", response_model=DeleteImagesFromListResult
)
async def delete_uncategorized_images() -> DeleteImagesFromListResult:
"""Deletes all images that are uncategorized"""
image_names = ApiDependencies.invoker.services.board_images.get_all_board_image_names_for_board(
board_id="none", categories=None, is_intermediate=None
)
try:
deleted_images: list[str] = []
for image_name in image_names:
try:
ApiDependencies.invoker.services.images.delete(image_name)
deleted_images.append(image_name)
except Exception:
pass
return DeleteImagesFromListResult(deleted_images=deleted_images)
except Exception:
raise HTTPException(status_code=500, detail="Failed to delete images")
class ImagesUpdatedFromListResult(BaseModel):
updated_image_names: list[str] = Field(description="The image names that were updated")

View File

@@ -14,13 +14,14 @@ from invokeai.app.services.session_queue.session_queue_common import (
CancelByBatchIDsResult,
CancelByDestinationResult,
ClearResult,
DeleteAllExceptCurrentResult,
DeleteByDestinationResult,
EnqueueBatchResult,
FieldIdentifier,
PruneResult,
RetryItemsResult,
SessionQueueCountsByDestination,
SessionQueueItem,
SessionQueueItemDTO,
SessionQueueStatus,
)
from invokeai.app.services.shared.pagination import CursorPaginatedResults
@@ -68,7 +69,7 @@ async def enqueue_batch(
"/{queue_id}/list",
operation_id="list_queue_items",
responses={
200: {"model": CursorPaginatedResults[SessionQueueItemDTO]},
200: {"model": CursorPaginatedResults[SessionQueueItem]},
},
)
async def list_queue_items(
@@ -77,11 +78,36 @@ async def list_queue_items(
status: Optional[QUEUE_ITEM_STATUS] = Query(default=None, description="The status of items to fetch"),
cursor: Optional[int] = Query(default=None, description="The pagination cursor"),
priority: int = Query(default=0, description="The pagination cursor priority"),
) -> CursorPaginatedResults[SessionQueueItemDTO]:
"""Gets all queue items (without graphs)"""
destination: Optional[str] = Query(default=None, description="The destination of queue items to fetch"),
) -> CursorPaginatedResults[SessionQueueItem]:
"""Gets cursor-paginated queue items"""
return ApiDependencies.invoker.services.session_queue.list_queue_items(
queue_id=queue_id, limit=limit, status=status, cursor=cursor, priority=priority
queue_id=queue_id,
limit=limit,
status=status,
cursor=cursor,
priority=priority,
destination=destination,
)
@session_queue_router.get(
"/{queue_id}/list_all",
operation_id="list_all_queue_items",
responses={
200: {"model": list[SessionQueueItem]},
},
)
async def list_all_queue_items(
queue_id: str = Path(description="The queue id to perform this operation on"),
destination: Optional[str] = Query(default=None, description="The destination of queue items to fetch"),
) -> list[SessionQueueItem]:
"""Gets all queue items"""
return ApiDependencies.invoker.services.session_queue.list_all_queue_items(
queue_id=queue_id,
destination=destination,
)
@@ -121,6 +147,18 @@ async def cancel_all_except_current(
return ApiDependencies.invoker.services.session_queue.cancel_all_except_current(queue_id=queue_id)
@session_queue_router.put(
"/{queue_id}/delete_all_except_current",
operation_id="delete_all_except_current",
responses={200: {"model": DeleteAllExceptCurrentResult}},
)
async def delete_all_except_current(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> DeleteAllExceptCurrentResult:
"""Immediately deletes all queue items except in-processing items"""
return ApiDependencies.invoker.services.session_queue.delete_all_except_current(queue_id=queue_id)
@session_queue_router.put(
"/{queue_id}/cancel_by_batch_ids",
operation_id="cancel_by_batch_ids",
@@ -269,6 +307,18 @@ async def get_queue_item(
return ApiDependencies.invoker.services.session_queue.get_queue_item(item_id)
@session_queue_router.delete(
"/{queue_id}/i/{item_id}",
operation_id="delete_queue_item",
)
async def delete_queue_item(
queue_id: str = Path(description="The queue id to perform this operation on"),
item_id: int = Path(description="The queue item to delete"),
) -> None:
"""Deletes a queue item"""
ApiDependencies.invoker.services.session_queue.delete_queue_item(item_id)
@session_queue_router.put(
"/{queue_id}/i/{item_id}/cancel",
operation_id="cancel_queue_item",
@@ -298,3 +348,18 @@ async def counts_by_destination(
return ApiDependencies.invoker.services.session_queue.get_counts_by_destination(
queue_id=queue_id, destination=destination
)
@session_queue_router.delete(
"/{queue_id}/d/{destination}",
operation_id="delete_by_destination",
responses={200: {"model": DeleteByDestinationResult}},
)
async def delete_by_destination(
queue_id: str = Path(description="The queue id to query"),
destination: str = Path(description="The destination to query"),
) -> DeleteByDestinationResult:
"""Deletes all items with the given destination"""
return ApiDependencies.invoker.services.session_queue.delete_by_destination(
queue_id=queue_id, destination=destination
)

View File

@@ -643,6 +643,16 @@ def invocation(
fields["type"] = (invocation_type_annotation, invocation_type_field_info)
# Invocation outputs must be registered using the @invocation_output decorator, but it is possible that the
# output is registered _after_ this invocation is registered. It depends on module import ordering.
#
# We can only confirm the output for an invocation is registered after all modules are imported. There's
# only really one good time to do that - during application startup, in `run_app.py`, after loading all
# custom nodes.
#
# We can still do some basic validation here - ensure the invoke method is defined and returns an instance
# of BaseInvocationOutput.
# Validate the `invoke()` method is implemented
if "invoke" in cls.__abstractmethods__:
raise ValueError(f'Invocation "{invocation_type}" must implement the "invoke" method')

View File

@@ -1,7 +1,7 @@
from typing import Iterator, List, Optional, Tuple, Union, cast
import torch
from compel import Compel, ReturnedEmbeddingsType
from compel import Compel, ReturnedEmbeddingsType, SplitLongTextMode
from compel.prompt_parser import Blend, Conjunction, CrossAttentionControlSubstitute, FlattenedPrompt, Fragment
from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
@@ -104,6 +104,7 @@ class CompelInvocation(BaseInvocation):
dtype_for_device_getter=TorchDevice.choose_torch_dtype,
truncate_long_prompts=False,
device=TorchDevice.choose_torch_device(),
split_long_text_mode=SplitLongTextMode.SENTENCES,
)
conjunction = Compel.parse_prompt_string(self.prompt)
@@ -205,6 +206,7 @@ class SDXLPromptInvocationBase:
returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED, # TODO: clip skip
requires_pooled=get_pooled,
device=TorchDevice.choose_torch_device(),
split_long_text_mode=SplitLongTextMode.SENTENCES,
)
conjunction = Compel.parse_prompt_string(prompt)

View File

@@ -22,7 +22,11 @@ from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES, heuristic_resize
from invokeai.app.util.controlnet_utils import (
CONTROLNET_MODE_VALUES,
CONTROLNET_RESIZE_VALUES,
heuristic_resize_fast,
)
from invokeai.backend.image_util.util import np_to_pil, pil_to_np
@@ -109,7 +113,7 @@ class ControlNetInvocation(BaseInvocation):
title="Heuristic Resize",
tags=["image, controlnet"],
category="image",
version="1.0.1",
version="1.1.1",
classification=Classification.Prototype,
)
class HeuristicResizeInvocation(BaseInvocation):
@@ -122,7 +126,7 @@ class HeuristicResizeInvocation(BaseInvocation):
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name, "RGB")
np_img = pil_to_np(image)
np_resized = heuristic_resize(np_img, (self.width, self.height))
np_resized = heuristic_resize_fast(np_img, (self.width, self.height))
resized = np_to_pil(np_resized)
image_dto = context.images.save(image=resized)
return ImageOutput.build(image_dto)

View File

@@ -1,12 +1,14 @@
from typing import Literal, Optional
import cv2
import numpy as np
import torch
import torchvision.transforms as T
from PIL import Image, ImageFilter
from PIL import Image
from torchvision.transforms.functional import resize as tv_resize
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
DenoiseMaskField,
FieldDescriptions,
@@ -42,15 +44,13 @@ class GradientMaskOutput(BaseInvocationOutput):
title="Create Gradient Mask",
tags=["mask", "denoise"],
category="latents",
version="1.2.1",
version="1.3.0",
)
class CreateGradientMaskInvocation(BaseInvocation):
"""Creates mask for denoising model run."""
"""Creates mask for denoising."""
mask: ImageField = InputField(description="Image which will be masked", ui_order=1)
edge_radius: int = InputField(
default=16, ge=0, description="How far to blur/expand the edges of the mask", ui_order=2
)
edge_radius: int = InputField(default=16, ge=0, description="How far to expand the edges of the mask", ui_order=2)
coherence_mode: Literal["Gaussian Blur", "Box Blur", "Staged"] = InputField(default="Gaussian Blur", ui_order=3)
minimum_denoise: float = InputField(
default=0.0, ge=0, le=1, description="Minimum denoise level for the coherence region", ui_order=4
@@ -81,45 +81,110 @@ class CreateGradientMaskInvocation(BaseInvocation):
@torch.no_grad()
def invoke(self, context: InvocationContext) -> GradientMaskOutput:
mask_image = context.images.get_pil(self.mask.image_name, mode="L")
# Resize the mask_image. Makes the filter 64x faster and doesn't hurt quality in latent scale anyway
mask_image = mask_image.resize(
(
mask_image.width // LATENT_SCALE_FACTOR,
mask_image.height // LATENT_SCALE_FACTOR,
),
resample=Image.Resampling.BILINEAR,
)
mask_np_orig = np.array(mask_image, dtype=np.float32)
self.edge_radius = self.edge_radius // LATENT_SCALE_FACTOR # scale the edge radius to match the mask size
if self.edge_radius > 0:
mask_np = 255 - mask_np_orig # invert so 0 is unmasked (higher values = higher denoise strength)
dilated_mask = mask_np.copy()
# Create kernel based on coherence mode
if self.coherence_mode == "Box Blur":
blur_mask = mask_image.filter(ImageFilter.BoxBlur(self.edge_radius))
else: # Gaussian Blur OR Staged
# Gaussian Blur uses standard deviation. 1/2 radius is a good approximation
blur_mask = mask_image.filter(ImageFilter.GaussianBlur(self.edge_radius / 2))
# Create a circular distance kernel that fades from center outward
kernel_size = self.edge_radius * 2 + 1
center = self.edge_radius
kernel = np.zeros((kernel_size, kernel_size), dtype=np.float32)
for i in range(kernel_size):
for j in range(kernel_size):
dist = np.sqrt((i - center) ** 2 + (j - center) ** 2)
if dist <= self.edge_radius:
kernel[i, j] = 1.0 - (dist / self.edge_radius)
else: # Gaussian Blur or Staged
# Create a Gaussian kernel
kernel_size = self.edge_radius * 2 + 1
kernel = cv2.getGaussianKernel(
kernel_size, self.edge_radius / 2.5
) # 2.5 is a magic number (standard deviation capturing)
kernel = kernel * kernel.T # Make 2D gaussian kernel
kernel = kernel / np.max(kernel) # Normalize center to 1.0
blur_tensor: torch.Tensor = image_resized_to_grid_as_tensor(blur_mask, normalize=False)
# Ensure values outside radius are 0
center = self.edge_radius
for i in range(kernel_size):
for j in range(kernel_size):
dist = np.sqrt((i - center) ** 2 + (j - center) ** 2)
if dist > self.edge_radius:
kernel[i, j] = 0
# redistribute blur so that the original edges are 0 and blur outwards to 1
blur_tensor = (blur_tensor - 0.5) * 2
blur_tensor[blur_tensor < 0] = 0.0
# 2D max filter
mask_tensor = torch.tensor(mask_np)
kernel_tensor = torch.tensor(kernel)
dilated_mask = 255 - self.max_filter2D_torch(mask_tensor, kernel_tensor).cpu()
dilated_mask = dilated_mask.numpy()
threshold = 1 - self.minimum_denoise
threshold = (1 - self.minimum_denoise) * 255
if self.coherence_mode == "Staged":
# wherever the blur_tensor is less than fully masked, convert it to threshold
blur_tensor = torch.where((blur_tensor < 1) & (blur_tensor > 0), threshold, blur_tensor)
else:
# wherever the blur_tensor is above threshold but less than 1, drop it to threshold
blur_tensor = torch.where((blur_tensor > threshold) & (blur_tensor < 1), threshold, blur_tensor)
# wherever expanded mask is darker than the original mask but original was above threshold, set it to the threshold
# makes any expansion areas drop to threshold. Raising the minimum across the image happens outside of this if
threshold_mask = (dilated_mask < mask_np_orig) & (mask_np_orig > threshold)
dilated_mask = np.where(threshold_mask, threshold, mask_np_orig)
# wherever expanded mask is less than 255 but greater than threshold, drop it to threshold (minimum denoise)
threshold_mask = (dilated_mask > threshold) & (dilated_mask < 255)
dilated_mask = np.where(threshold_mask, threshold, dilated_mask)
else:
blur_tensor: torch.Tensor = image_resized_to_grid_as_tensor(mask_image, normalize=False)
dilated_mask = mask_np_orig.copy()
mask_name = context.tensors.save(tensor=blur_tensor.unsqueeze(1))
# convert to tensor
dilated_mask = np.clip(dilated_mask, 0, 255).astype(np.uint8)
mask_tensor = torch.tensor(dilated_mask, device=torch.device("cpu"))
# compute a [0, 1] mask from the blur_tensor
expanded_mask = torch.where((blur_tensor < 1), 0, 1)
expanded_mask_image = Image.fromarray((expanded_mask.squeeze(0).numpy() * 255).astype(np.uint8), mode="L")
# binary mask for compositing
expanded_mask = np.where((dilated_mask < 255), 0, 255)
expanded_mask_image = Image.fromarray(expanded_mask.astype(np.uint8), mode="L")
expanded_mask_image = expanded_mask_image.resize(
(
mask_image.width * LATENT_SCALE_FACTOR,
mask_image.height * LATENT_SCALE_FACTOR,
),
resample=Image.Resampling.NEAREST,
)
expanded_image_dto = context.images.save(expanded_mask_image)
# restore the original mask size
dilated_mask = Image.fromarray(dilated_mask.astype(np.uint8))
dilated_mask = dilated_mask.resize(
(
mask_image.width * LATENT_SCALE_FACTOR,
mask_image.height * LATENT_SCALE_FACTOR,
),
resample=Image.Resampling.NEAREST,
)
# stack the mask as a tensor, repeating 4 times on dimension 1
dilated_mask_tensor = image_resized_to_grid_as_tensor(dilated_mask, normalize=False)
mask_name = context.tensors.save(tensor=dilated_mask_tensor.unsqueeze(0))
masked_latents_name = None
if self.unet is not None and self.vae is not None and self.image is not None:
# all three fields must be present at the same time
main_model_config = context.models.get_config(self.unet.unet.key)
assert isinstance(main_model_config, MainConfigBase)
if main_model_config.variant is ModelVariantType.Inpaint:
mask = blur_tensor
mask = dilated_mask_tensor
vae_info: LoadedModel = context.models.load(self.vae.vae)
image = context.images.get_pil(self.image.image_name)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
@@ -137,3 +202,29 @@ class CreateGradientMaskInvocation(BaseInvocation):
denoise_mask=DenoiseMaskField(mask_name=mask_name, masked_latents_name=masked_latents_name, gradient=True),
expanded_mask_area=ImageField(image_name=expanded_image_dto.image_name),
)
def max_filter2D_torch(self, image: torch.Tensor, kernel: torch.Tensor) -> torch.Tensor:
"""
This morphological operation is much faster in torch than numpy or opencv
For reasonable kernel sizes, the overhead of copying the data to the GPU is not worth it.
"""
h, w = kernel.shape
pad_h, pad_w = h // 2, w // 2
padded = torch.nn.functional.pad(image, (pad_w, pad_w, pad_h, pad_h), mode="constant", value=0)
result = torch.zeros_like(image)
# This looks like it's inside out, but it does the same thing and is more efficient
for i in range(h):
for j in range(w):
weight = kernel[i, j]
if weight <= 0:
continue
# Extract the region from padded tensor
region = padded[i : i + image.shape[0], j : j + image.shape[1]]
# Apply weight and update max
result = torch.maximum(result, region * weight)
return result
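For context, the weighted max filter above behaves like a soft dilation: a single masked pixel expands into a region whose falloff follows the kernel weights. A small standalone rerun of the same logic, re-declared as a free function purely for illustration:

```python
# Toy rerun of the weighted 2D max filter above, re-declared as a free function
# purely for illustration: one masked pixel dilates into a soft-edged region
# whose falloff follows the kernel weights.
import torch


def max_filter2d(image: torch.Tensor, kernel: torch.Tensor) -> torch.Tensor:
    h, w = kernel.shape
    pad_h, pad_w = h // 2, w // 2
    padded = torch.nn.functional.pad(image, (pad_w, pad_w, pad_h, pad_h), mode="constant", value=0)
    result = torch.zeros_like(image)
    for i in range(h):
        for j in range(w):
            weight = kernel[i, j]
            if weight <= 0:
                continue
            region = padded[i : i + image.shape[0], j : j + image.shape[1]]
            result = torch.maximum(result, region * weight)
    return result


mask = torch.zeros(5, 5)
mask[2, 2] = 255.0  # a single "masked" pixel
kernel = torch.tensor([[0.0, 0.5, 0.0], [0.5, 1.0, 0.5], [0.0, 0.5, 0.0]])
print(max_filter2d(mask, kernel))  # 255 at the center, 127.5 at its 4-neighbours
```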

View File

@@ -1218,12 +1218,15 @@ class ApplyMaskToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Add Image Noise",
tags=["image", "noise"],
category="image",
version="1.0.1",
version="1.1.0",
)
class ImageNoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Add noise to an image"""
image: ImageField = InputField(description="The image to add noise to")
mask: Optional[ImageField] = InputField(
default=None, description="Optional mask determining where to apply noise (black=noise, white=no noise)"
)
seed: int = InputField(
default=0,
ge=0,
@@ -1267,12 +1270,27 @@ class ImageNoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
noise = Image.fromarray(noise.astype(numpy.uint8), mode="RGB").resize(
(image.width, image.height), Image.Resampling.NEAREST
)
# Create a noisy version of the input image
noisy_image = Image.blend(image.convert("RGB"), noise, self.amount).convert("RGBA")
# Paste back the alpha channel
noisy_image.putalpha(alpha)
# Apply mask if provided
if self.mask is not None:
mask_image = context.images.get_pil(self.mask.image_name, mode="L")
image_dto = context.images.save(image=noisy_image)
if mask_image.size != image.size:
mask_image = mask_image.resize(image.size, Image.Resampling.LANCZOS)
result_image = image.copy()
mask_image = ImageOps.invert(mask_image)
result_image.paste(noisy_image, (0, 0), mask=mask_image)
else:
result_image = noisy_image
# Paste back the alpha channel from the original image
result_image.putalpha(alpha)
image_dto = context.images.save(image=result_image)
return ImageOutput.build(image_dto)

View File

@@ -42,7 +42,9 @@ class IPAdapterMetadataField(BaseModel):
image: ImageField = Field(description="The IP-Adapter image prompt.")
ip_adapter_model: ModelIdentifierField = Field(description="The IP-Adapter model.")
clip_vision_model: Literal["ViT-L", "ViT-H", "ViT-G"] = Field(description="The CLIP Vision model")
method: Literal["full", "style", "composition"] = Field(description="Method to apply IP Weights with")
method: Literal["full", "style", "composition", "style_strong", "style_precise"] = Field(
description="Method to apply IP Weights with"
)
weight: Union[float, list[float]] = Field(description="The weight given to the IP-Adapter")
begin_step_percent: float = Field(description="When the IP-Adapter is first applied (% of total steps)")
end_step_percent: float = Field(description="When the IP-Adapter is last applied (% of total steps)")

View File

@@ -1,12 +1,3 @@
import uvicorn
from invokeai.app.invocations.load_custom_nodes import load_custom_nodes
from invokeai.app.services.config.config_default import get_config
from invokeai.app.util.torch_cuda_allocator import configure_torch_cuda_allocator
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.frontend.cli.arg_parser import InvokeAIArgs
def get_app():
"""Import the app and event loop. We wrap this in a function to more explicitly control when it happens, because
importing from api_app has significant side effects - it's more like calling a function than importing a module.
@@ -18,9 +9,18 @@ def get_app():
def run_app() -> None:
"""The main entrypoint for the app."""
# Parse the CLI arguments.
from invokeai.frontend.cli.arg_parser import InvokeAIArgs
# Parse the CLI arguments before doing anything else, which ensures CLI args correctly override settings from other
# sources like `invokeai.yaml` or env vars.
InvokeAIArgs.parse_args()
import uvicorn
from invokeai.app.services.config.config_default import get_config
from invokeai.app.util.torch_cuda_allocator import configure_torch_cuda_allocator
from invokeai.backend.util.logging import InvokeAILogger
# Load config.
app_config = get_config()
@@ -32,6 +32,8 @@ def run_app() -> None:
configure_torch_cuda_allocator(app_config.pytorch_cuda_alloc_conf, logger)
# This import must happen after configure_torch_cuda_allocator() is called, because the module imports torch.
from invokeai.app.invocations.baseinvocation import InvocationRegistry
from invokeai.app.invocations.load_custom_nodes import load_custom_nodes
from invokeai.backend.util.devices import TorchDevice
torch_device_name = TorchDevice.get_torch_device_name()
@@ -66,6 +68,15 @@ def run_app() -> None:
# core nodes have been imported so that we can catch when a custom node clobbers a core node.
load_custom_nodes(custom_nodes_path=app_config.custom_nodes_path, logger=logger)
# Check all invocations and ensure their outputs are registered.
for invocation in InvocationRegistry.get_invocation_classes():
invocation_type = invocation.get_type()
output_annotation = invocation.get_output_annotation()
if output_annotation not in InvocationRegistry.get_output_classes():
logger.warning(
f'Invocation "{invocation_type}" has unregistered output class "{output_annotation.__name__}"'
)
if app_config.dev_reload:
# load_custom_nodes seems to bypass jurrigged's import sniffer, so be sure to call it *after* they're already
# imported.

View File

@@ -98,9 +98,18 @@ class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
FROM images
LEFT JOIN board_images ON board_images.image_name = images.image_name
WHERE 1=1
"""
# Handle board_id filter
if board_id == "none":
stmt += """--sql
AND board_images.board_id IS NULL
"""
else:
stmt += """--sql
AND board_images.board_id = ?
"""
params.append(board_id)
params.append(board_id)
# Add the category filter
if categories is not None:

View File

@@ -196,9 +196,13 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
# Search term condition
if search_term:
query_conditions += """--sql
AND images.metadata LIKE ?
AND (
images.metadata LIKE ?
OR images.created_at LIKE ?
)
"""
query_params.append(f"%{search_term.lower()}%")
query_params.append(f"%{search_term.lower()}%")
if starred_first:
query_pagination = f"""--sql

View File

@@ -10,6 +10,8 @@ from invokeai.app.services.session_queue.session_queue_common import (
CancelByDestinationResult,
CancelByQueueIDResult,
ClearResult,
DeleteAllExceptCurrentResult,
DeleteByDestinationResult,
EnqueueBatchResult,
IsEmptyResult,
IsFullResult,
@@ -17,7 +19,6 @@ from invokeai.app.services.session_queue.session_queue_common import (
RetryItemsResult,
SessionQueueCountsByDestination,
SessionQueueItem,
SessionQueueItemDTO,
SessionQueueStatus,
)
from invokeai.app.services.shared.graph import GraphExecutionState
@@ -92,6 +93,11 @@ class SessionQueueBase(ABC):
"""Cancels a session queue item"""
pass
@abstractmethod
def delete_queue_item(self, item_id: int) -> None:
"""Deletes a session queue item"""
pass
@abstractmethod
def fail_queue_item(
self, item_id: int, error_type: str, error_message: str, error_traceback: str
@@ -109,6 +115,11 @@ class SessionQueueBase(ABC):
"""Cancels all queue items with the given batch destination"""
pass
@abstractmethod
def delete_by_destination(self, queue_id: str, destination: str) -> DeleteByDestinationResult:
"""Deletes all queue items with the given batch destination"""
pass
@abstractmethod
def cancel_by_queue_id(self, queue_id: str) -> CancelByQueueIDResult:
"""Cancels all queue items with matching queue ID"""
@@ -119,6 +130,11 @@ class SessionQueueBase(ABC):
"""Cancels all queue items except in-progress items"""
pass
@abstractmethod
def delete_all_except_current(self, queue_id: str) -> DeleteAllExceptCurrentResult:
"""Deletes all queue items except in-progress items"""
pass
@abstractmethod
def list_queue_items(
self,
@@ -127,10 +143,20 @@ class SessionQueueBase(ABC):
priority: int,
cursor: Optional[int] = None,
status: Optional[QUEUE_ITEM_STATUS] = None,
) -> CursorPaginatedResults[SessionQueueItemDTO]:
destination: Optional[str] = None,
) -> CursorPaginatedResults[SessionQueueItem]:
"""Gets a page of session queue items"""
pass
@abstractmethod
def list_all_queue_items(
self,
queue_id: str,
destination: Optional[str] = None,
) -> list[SessionQueueItem]:
"""Gets all queue items that match the given parameters"""
pass
@abstractmethod
def get_queue_item(self, item_id: int) -> SessionQueueItem:
"""Gets a session queue item by ID"""

View File

@@ -207,7 +207,7 @@ class FieldIdentifier(BaseModel):
field_name: str = Field(description="The name of the field")
class SessionQueueItemWithoutGraph(BaseModel):
class SessionQueueItem(BaseModel):
"""Session queue item without the full graph. Used for serialization."""
item_id: int = Field(description="The identifier of the session queue item")
@@ -251,42 +251,7 @@ class SessionQueueItemWithoutGraph(BaseModel):
default=None,
description="The ID of the published workflow associated with this queue item",
)
api_input_fields: Optional[list[FieldIdentifier]] = Field(
default=None, description="The fields that were used as input to the API"
)
api_output_fields: Optional[list[FieldIdentifier]] = Field(
default=None, description="The nodes that were used as output from the API"
)
credits: Optional[float] = Field(default=None, description="The total credits used for this queue item")
@classmethod
def queue_item_dto_from_dict(cls, queue_item_dict: dict) -> "SessionQueueItemDTO":
# must parse these manually
queue_item_dict["field_values"] = get_field_values(queue_item_dict)
return SessionQueueItemDTO(**queue_item_dict)
model_config = ConfigDict(
json_schema_extra={
"required": [
"item_id",
"status",
"batch_id",
"queue_id",
"session_id",
"priority",
"session_id",
"created_at",
"updated_at",
]
}
)
class SessionQueueItemDTO(SessionQueueItemWithoutGraph):
pass
class SessionQueueItem(SessionQueueItemWithoutGraph):
session: GraphExecutionState = Field(description="The fully-populated session to be executed")
workflow: Optional[WorkflowWithoutID] = Field(
default=None, description="The workflow associated with this queue item"
@@ -397,6 +362,18 @@ class CancelByDestinationResult(CancelByBatchIDsResult):
pass
class DeleteByDestinationResult(BaseModel):
"""Result of deleting by a destination"""
deleted: int = Field(..., description="Number of queue items deleted")
class DeleteAllExceptCurrentResult(DeleteByDestinationResult):
"""Result of deleting all except current"""
pass
class CancelByQueueIDResult(CancelByBatchIDsResult):
"""Result of canceling by queue id"""

View File

@@ -17,6 +17,8 @@ from invokeai.app.services.session_queue.session_queue_common import (
CancelByDestinationResult,
CancelByQueueIDResult,
ClearResult,
DeleteAllExceptCurrentResult,
DeleteByDestinationResult,
EnqueueBatchResult,
IsEmptyResult,
IsFullResult,
@@ -24,7 +26,6 @@ from invokeai.app.services.session_queue.session_queue_common import (
RetryItemsResult,
SessionQueueCountsByDestination,
SessionQueueItem,
SessionQueueItemDTO,
SessionQueueItemNotFoundError,
SessionQueueStatus,
ValueToInsertTuple,
@@ -46,10 +47,6 @@ class SqliteSessionQueue(SessionQueueBase):
clear_result = self.clear(DEFAULT_QUEUE_ID)
if clear_result.deleted > 0:
self.__invoker.services.logger.info(f"Cleared all {clear_result.deleted} queue items")
else:
prune_result = self.prune(DEFAULT_QUEUE_ID)
if prune_result.deleted > 0:
self.__invoker.services.logger.info(f"Pruned {prune_result.deleted} finished queue items")
def __init__(self, db: SqliteDatabase) -> None:
super().__init__()
@@ -220,6 +217,19 @@ class SqliteSessionQueue(SessionQueueBase):
) -> SessionQueueItem:
try:
cursor = self._conn.cursor()
cursor.execute(
"""--sql
SELECT status FROM session_queue WHERE item_id = ?
""",
(item_id,),
)
row = cursor.fetchone()
if row is None:
raise SessionQueueItemNotFoundError(f"No queue item with id {item_id}")
current_status = row[0]
# Only update if not already finished (completed, failed or canceled)
if current_status in ("completed", "failed", "canceled"):
return self.get_queue_item(item_id)
cursor.execute(
"""--sql
UPDATE session_queue
@@ -331,6 +341,27 @@ class SqliteSessionQueue(SessionQueueBase):
queue_item = self._set_queue_item_status(item_id=item_id, status="canceled")
return queue_item
def delete_queue_item(self, item_id: int) -> None:
"""Deletes a session queue item"""
try:
self.cancel_queue_item(item_id)
except SessionQueueItemNotFoundError:
pass
try:
cursor = self._conn.cursor()
cursor.execute(
"""--sql
DELETE
FROM session_queue
WHERE item_id = ?
""",
(item_id,),
)
self._conn.commit()
except Exception:
self._conn.rollback()
raise
def complete_queue_item(self, item_id: int) -> SessionQueueItem:
queue_item = self._set_queue_item_status(item_id=item_id, status="completed")
return queue_item
@@ -428,6 +459,71 @@ class SqliteSessionQueue(SessionQueueBase):
raise
return CancelByDestinationResult(canceled=count)
def delete_by_destination(self, queue_id: str, destination: str) -> DeleteByDestinationResult:
try:
cursor = self._conn.cursor()
current_queue_item = self.get_current(queue_id)
if current_queue_item is not None and current_queue_item.destination == destination:
self.cancel_queue_item(current_queue_item.item_id)
params = (queue_id, destination)
cursor.execute(
"""--sql
SELECT COUNT(*)
FROM session_queue
WHERE
queue_id = ?
AND destination = ?;
""",
params,
)
count = cursor.fetchone()[0]
cursor.execute(
"""--sql
DELETE
FROM session_queue
WHERE
queue_id = ?
AND destination = ?;
""",
params,
)
self._conn.commit()
except Exception:
self._conn.rollback()
raise
return DeleteByDestinationResult(deleted=count)
def delete_all_except_current(self, queue_id: str) -> DeleteAllExceptCurrentResult:
try:
cursor = self._conn.cursor()
where = """--sql
WHERE
queue_id == ?
AND status == 'pending'
"""
cursor.execute(
f"""--sql
SELECT COUNT(*)
FROM session_queue
{where};
""",
(queue_id,),
)
count = cursor.fetchone()[0]
cursor.execute(
f"""--sql
DELETE
FROM session_queue
{where};
""",
(queue_id,),
)
self._conn.commit()
except Exception:
self._conn.rollback()
raise
return DeleteAllExceptCurrentResult(deleted=count)
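Both new deletion methods share the same count-then-delete shape inside a manual commit/rollback. A minimal, self-contained sketch of that pattern against an in-memory SQLite table (table contents are illustrative):
import sqlite3
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE session_queue (item_id INTEGER PRIMARY KEY, queue_id TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO session_queue (queue_id, status) VALUES (?, ?)",
    [("default", "pending"), ("default", "in_progress"), ("default", "pending")],
)
try:
    cursor = conn.cursor()
    where = "WHERE queue_id = ? AND status = 'pending'"
    cursor.execute(f"SELECT COUNT(*) FROM session_queue {where}", ("default",))
    count = cursor.fetchone()[0]  # rows about to be deleted
    cursor.execute(f"DELETE FROM session_queue {where}", ("default",))
    conn.commit()
except Exception:
    conn.rollback()
    raise
print(count)  # -> 2; the in-progress item is left untouched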
def cancel_by_queue_id(self, queue_id: str) -> CancelByQueueIDResult:
try:
cursor = self._conn.cursor()
@@ -543,26 +639,12 @@ class SqliteSessionQueue(SessionQueueBase):
priority: int,
cursor: Optional[int] = None,
status: Optional[QUEUE_ITEM_STATUS] = None,
) -> CursorPaginatedResults[SessionQueueItemDTO]:
destination: Optional[str] = None,
) -> CursorPaginatedResults[SessionQueueItem]:
cursor_ = self._conn.cursor()
item_id = cursor
query = """--sql
SELECT item_id,
status,
priority,
field_values,
error_type,
error_message,
error_traceback,
created_at,
updated_at,
completed_at,
started_at,
session_id,
batch_id,
queue_id,
origin,
destination
SELECT *
FROM session_queue
WHERE queue_id = ?
"""
@@ -574,6 +656,12 @@ class SqliteSessionQueue(SessionQueueBase):
"""
params.append(status)
if destination is not None:
query += """--sql
AND destination = ?
"""
params.append(destination)
if item_id is not None:
query += """--sql
AND (priority < ?) OR (priority = ? AND item_id > ?)
@@ -589,7 +677,7 @@ class SqliteSessionQueue(SessionQueueBase):
params.append(limit + 1)
cursor_.execute(query, params)
results = cast(list[sqlite3.Row], cursor_.fetchall())
items = [SessionQueueItemDTO.queue_item_dto_from_dict(dict(result)) for result in results]
items = [SessionQueueItem.queue_item_from_dict(dict(result)) for result in results]
has_more = False
if len(items) > limit:
# remove the extra item
@@ -597,6 +685,37 @@ class SqliteSessionQueue(SessionQueueBase):
has_more = True
return CursorPaginatedResults(items=items, limit=limit, has_more=has_more)
def list_all_queue_items(
self,
queue_id: str,
destination: Optional[str] = None,
) -> list[SessionQueueItem]:
"""Gets all queue items that match the given parameters"""
cursor_ = self._conn.cursor()
query = """--sql
SELECT *
FROM session_queue
WHERE queue_id = ?
"""
params: list[Union[str, int]] = [queue_id]
if destination is not None:
query += """--sql
AND destination = ?
"""
params.append(destination)
query += """--sql
ORDER BY
priority DESC,
item_id ASC
;
"""
cursor_.execute(query, params)
results = cast(list[sqlite3.Row], cursor_.fetchall())
items = [SessionQueueItem.queue_item_from_dict(dict(result)) for result in results]
return items
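A hypothetical call to the new list_all_queue_items method; the queue ID and destination values are assumptions for illustration:
# Assumes `session_queue` is a SqliteSessionQueue instance.
canvas_items = session_queue.list_all_queue_items(queue_id="default", destination="canvas")
pending_ids = [item.item_id for item in canvas_items if item.status == "pending"]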
def get_queue_status(self, queue_id: str) -> SessionQueueStatus:
cursor = self._conn.cursor()
cursor.execute(

View File

@@ -7,6 +7,7 @@ from typing import Any, Optional, TypeVar, Union, get_args, get_origin, get_type
import networkx as nx
from pydantic import (
BaseModel,
ConfigDict,
GetCoreSchemaHandler,
GetJsonSchemaHandler,
ValidationError,
@@ -787,6 +788,22 @@ class GraphExecutionState(BaseModel):
default_factory=dict,
)
model_config = ConfigDict(
json_schema_extra={
"required": [
"id",
"graph",
"execution_graph",
"executed",
"executed_history",
"results",
"errors",
"prepared_source_mapping",
"source_prepared_mapping",
]
}
)
@field_validator("graph")
def graph_is_valid(cls, v: Graph):
"""Validates that the graph is valid"""

View File

@@ -230,6 +230,86 @@ def heuristic_resize(np_img: np.ndarray[Any, Any], size: tuple[int, int]) -> np.
return resized
# precompute common kernels
_KERNEL3 = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
# directional masks for NMS
_DIRS = [
np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], np.uint8),
np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], np.uint8),
np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], np.uint8),
np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], np.uint8),
]
def heuristic_resize_fast(np_img: np.ndarray, size: tuple[int, int]) -> np.ndarray:
h, w = np_img.shape[:2]
# early exit
if (w, h) == size:
return np_img
# separate alpha channel
img = np_img
alpha = None
if img.ndim == 3 and img.shape[2] == 4:
alpha, img = img[:, :, 3], img[:, :, :3]
# build a small sample for unique-color & binary detection
flat = img.reshape(-1, img.shape[-1])
N = flat.shape[0]
# include four corners to avoid missing extreme values
corners = np.vstack([img[0, 0], img[0, w - 1], img[h - 1, 0], img[h - 1, w - 1]])
cnt = min(N, 100_000)
samp = np.vstack([corners, flat[np.random.choice(N, cnt, replace=False)]])
uc = np.unique(samp, axis=0).shape[0]
vmin, vmax = samp.min(), samp.max()
# detect binary edge map & one-pixel-edge case
is_binary = uc == 2 and vmin < 16 and vmax > 240
one_pixel_edge = False
if is_binary:
# single gray conversion
gray0 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
grad = cv2.morphologyEx(gray0, cv2.MORPH_GRADIENT, _KERNEL3)
cnt_edge = cv2.countNonZero(grad)
cnt_all = cv2.countNonZero((gray0 > 127).astype(np.uint8))
one_pixel_edge = (2 * cnt_edge) > cnt_all
# choose interp for color/seg/grayscale
area_new, area_old = size[0] * size[1], w * h
if 2 < uc < 200: # segmentation map
interp = cv2.INTER_NEAREST
elif area_new < area_old:
interp = cv2.INTER_AREA
else:
interp = cv2.INTER_CUBIC
# single resize pass on RGB
resized = cv2.resize(img, size, interpolation=interp)
if is_binary:
# convert to gray & apply NMS via C++ dilate
gray_r = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
nms = np.zeros_like(gray_r)
for K in _DIRS:
d = cv2.dilate(gray_r, K)
mask = d == gray_r
nms[mask] = gray_r[mask]
# threshold + thinning if needed
_, bw = cv2.threshold(nms, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
out_bin = cv2.ximgproc.thinning(bw) if one_pixel_edge else bw
# restore 3 channels
resized = np.stack([out_bin] * 3, axis=2)
# restore alpha with same interp as RGB for consistency
if alpha is not None:
am = cv2.resize(alpha, size, interpolation=interp)
am = (am > 127).astype(np.uint8) * 255
resized = np.dstack((resized, am))
return resized
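A quick sketch exercising heuristic_resize_fast on a synthetic RGBA image. The input data and target size are made up, and the function is assumed to be imported from this module:
import numpy as np
# Synthetic 512x768 RGBA image with a fully opaque alpha channel.
rgb = np.random.randint(0, 256, (512, 768, 3), dtype=np.uint8)
alpha = np.full((512, 768), 255, dtype=np.uint8)
np_img = np.dstack((rgb, alpha))
resized = heuristic_resize_fast(np_img, (1024, 683))  # size is (width, height)
assert resized.shape == (683, 1024, 4)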
###########################################################################
# Copied from detectmap_proc method in scripts/detectmap_proc.py in Mikubill/sd-webui-controlnet
# modified for InvokeAI
@@ -244,7 +324,7 @@ def np_img_resize(
np_img = normalize_image_channel_count(np_img)
if resize_mode == "just_resize": # RESIZE
np_img = heuristic_resize(np_img, (w, h))
np_img = heuristic_resize_fast(np_img, (w, h))
np_img = clone_contiguous(np_img)
return np_img_to_torch(np_img, device), np_img
@@ -265,7 +345,7 @@ def np_img_resize(
# Inpaint hijack
high_quality_border_color[3] = 255
high_quality_background = np.tile(high_quality_border_color[None, None], [h, w, 1])
np_img = heuristic_resize(np_img, (safeint(old_w * k), safeint(old_h * k)))
np_img = heuristic_resize_fast(np_img, (safeint(old_w * k), safeint(old_h * k)))
new_h, new_w, _ = np_img.shape
pad_h = max(0, (h - new_h) // 2)
pad_w = max(0, (w - new_w) // 2)
@@ -275,7 +355,7 @@ def np_img_resize(
return np_img_to_torch(np_img, device), np_img
else: # resize_mode == "crop_resize" (INNER_FIT)
k = max(k0, k1)
np_img = heuristic_resize(np_img, (safeint(old_w * k), safeint(old_h * k)))
np_img = heuristic_resize_fast(np_img, (safeint(old_w * k), safeint(old_h * k)))
new_h, new_w, _ = np_img.shape
pad_h = max(0, (new_h - h) // 2)
pad_w = max(0, (new_w - w) // 2)

View File

@@ -12,6 +12,9 @@ from invokeai.app.invocations.fields import InputFieldJSONSchemaExtra, OutputFie
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.events.events_common import EventBase
from invokeai.app.services.session_processor.session_processor_common import ProgressImage
from invokeai.backend.util.logging import InvokeAILogger
logger = InvokeAILogger.get_logger()
def move_defs_to_top_level(openapi_schema: dict[str, Any], component_schema: dict[str, Any]) -> None:

View File

@@ -42,4 +42,5 @@ IP-Adapters:
- [InvokeAI/ip_adapter_plus_sd15](https://huggingface.co/InvokeAI/ip_adapter_plus_sd15)
- [InvokeAI/ip_adapter_plus_face_sd15](https://huggingface.co/InvokeAI/ip_adapter_plus_face_sd15)
- [InvokeAI/ip_adapter_sdxl](https://huggingface.co/InvokeAI/ip_adapter_sdxl)
- [InvokeAI/ip_adapter_sdxl_vit_h](https://huggingface.co/InvokeAI/ip_adapter_sdxl_vit_h)
- [InvokeAI/ip_adapter_sdxl_vit_h](https://huggingface.co/InvokeAI/ip_adapter_sdxl_vit_h)
- [InvokeAI/ip-adapter-plus_sdxl_vit-h](https://huggingface.co/InvokeAI/ip-adapter-plus_sdxl_vit-h)

View File

@@ -62,11 +62,14 @@ class HuggingFaceMetadataFetch(ModelMetadataFetchBase):
# If this too fails, raise exception.
model_info = None
# Handling for our special syntax - we only want the base HF `org/repo` here.
repo_id = id.split("::")[0] or id
while not model_info:
try:
model_info = HfApi().model_info(repo_id=id, files_metadata=True, revision=variant)
model_info = HfApi().model_info(repo_id=repo_id, files_metadata=True, revision=variant)
except RepositoryNotFoundError as excp:
raise UnknownMetadataException(f"'{id}' not found. See trace for details.") from excp
raise UnknownMetadataException(f"'{repo_id}' not found. See trace for details.") from excp
except RevisionNotFoundError:
if variant is None:
raise
@@ -75,14 +78,14 @@ class HuggingFaceMetadataFetch(ModelMetadataFetchBase):
files: list[RemoteModelFile] = []
_, name = id.split("/")
_, name = repo_id.split("/")
for s in model_info.siblings or []:
assert s.rfilename is not None
assert s.size is not None
files.append(
RemoteModelFile(
url=hf_hub_url(id, s.rfilename, revision=variant or "main"),
url=hf_hub_url(repo_id, s.rfilename, revision=variant or "main"),
path=Path(name, s.rfilename),
size=s.size,
sha256=s.lfs.get("sha256") if s.lfs else None,

View File

@@ -297,6 +297,15 @@ ip_adapter_sdxl = StarterModel(
dependencies=[ip_adapter_sdxl_image_encoder],
previous_names=["IP Adapter SDXL"],
)
ip_adapter_plus_sdxl = StarterModel(
name="Precise Reference (IP Adapter Plus ViT-H)",
base=BaseModelType.StableDiffusionXL,
source="https://huggingface.co/InvokeAI/ip-adapter-plus_sdxl_vit-h/resolve/main/ip-adapter-plus_sdxl_vit-h.safetensors",
description="References images with a higher degree of precision.",
type=ModelType.IPAdapter,
dependencies=[ip_adapter_sdxl_image_encoder],
previous_names=["IP Adapter Plus SDXL"],
)
ip_adapter_flux = StarterModel(
name="Standard Reference (XLabs FLUX IP-Adapter v2)",
base=BaseModelType.Flux,
@@ -672,6 +681,7 @@ STARTER_MODELS: list[StarterModel] = [
ip_adapter_plus_sd1,
ip_adapter_plus_face_sd1,
ip_adapter_sdxl,
ip_adapter_plus_sdxl,
ip_adapter_flux,
qr_code_cnet_sd1,
qr_code_cnet_sdxl,
@@ -744,6 +754,7 @@ sdxl_bundle: list[StarterModel] = [
juggernaut_sdxl,
sdxl_fp16_vae_fix,
ip_adapter_sdxl,
ip_adapter_plus_sdxl,
canny_sdxl,
depth_sdxl,
softedge_sdxl,

View File

@@ -1,3 +1,4 @@
import re
from contextlib import contextmanager
from typing import Dict, Iterable, Optional, Tuple
@@ -7,6 +8,7 @@ from invokeai.backend.patches.layers.base_layer_patch import BaseLayerPatch
from invokeai.backend.patches.layers.flux_control_lora_layer import FluxControlLoRALayer
from invokeai.backend.patches.model_patch_raw import ModelPatchRaw
from invokeai.backend.patches.pad_with_zeros import pad_with_zeros
from invokeai.backend.util import InvokeAILogger
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.original_weights_storage import OriginalWeightsStorage
@@ -23,6 +25,7 @@ class LayerPatcher:
cached_weights: Optional[Dict[str, torch.Tensor]] = None,
force_direct_patching: bool = False,
force_sidecar_patching: bool = False,
suppress_warning_layers: Optional[re.Pattern] = None,
):
"""Apply 'smart' model patching that chooses whether to use direct patching or a sidecar wrapper for each
module.
@@ -44,6 +47,7 @@ class LayerPatcher:
dtype=dtype,
force_direct_patching=force_direct_patching,
force_sidecar_patching=force_sidecar_patching,
suppress_warning_layers=suppress_warning_layers,
)
yield
@@ -70,6 +74,7 @@ class LayerPatcher:
dtype: torch.dtype,
force_direct_patching: bool,
force_sidecar_patching: bool,
suppress_warning_layers: Optional[re.Pattern] = None,
):
"""Apply a single LoRA patch to a model using the 'smart' patching strategy that chooses whether to use direct
patching or a sidecar wrapper for each module.
@@ -89,9 +94,17 @@ class LayerPatcher:
if not layer_key.startswith(prefix):
continue
module_key, module = LayerPatcher._get_submodule(
model, layer_key[prefix_len:], layer_key_is_flattened=layer_keys_are_flattened
)
try:
module_key, module = LayerPatcher._get_submodule(
model, layer_key[prefix_len:], layer_key_is_flattened=layer_keys_are_flattened
)
except AttributeError:
if suppress_warning_layers and suppress_warning_layers.search(layer_key):
pass
else:
logger = InvokeAILogger.get_logger(LayerPatcher.__name__)
logger.warning("Failed to find module for LoRA layer key: %s", layer_key)
continue
# Decide whether to use direct patching or a sidecar patch.
# Direct patching is preferred, because it results in better runtime speed.
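A hedged sketch of passing the new suppress_warning_layers argument. The entrypoint name, prefix, and patch list are assumptions based on the signature fragments above, not taken verbatim from the diff:
import re
import torch
# Illustrative pattern: don't warn about LoRA keys that target a text encoder
# the loaded model does not have.
suppress = re.compile(r"text_encoder")
with LayerPatcher.apply_smart_model_patches(  # stand-in name for the context manager above
    model=model,                      # the model being patched
    patches=patches,                  # (patch, weight) pairs prepared by the caller
    prefix="lora_unet_",              # illustrative key prefix
    dtype=torch.float16,
    suppress_warning_layers=suppress,
):
    ...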

View File

@@ -30,18 +30,13 @@ class RectifiedFlowInpaintExtension:
def _apply_mask_gradient_adjustment(self, t_prev: float) -> torch.Tensor:
"""Applies inpaint mask gradient adjustment and returns the inpaint mask to be used at the current timestep."""
# As we progress through the denoising process, we promote gradient regions of the mask to have a full weight of
# 1.0. This helps to produce more coherent seams around the inpainted region. We experimented with a (small)
# number of promotion strategies (e.g. gradual promotion based on timestep), but found that a simple cutoff
# threshold worked well.
# 1.0. This helps to produce more coherent seams around the inpainted region.
# We use a small epsilon to avoid any potential issues with floating point precision.
eps = 1e-4
mask_gradient_t_cutoff = 0.5
if t_prev > mask_gradient_t_cutoff:
# Early in the denoising process, use the inpaint mask as-is.
return self._inpaint_mask
else:
# After the cut-off, promote all non-zero mask values to 1.0.
mask = self._inpaint_mask.where(self._inpaint_mask <= (0.0 + eps), 1.0)
mask = torch.where(self._inpaint_mask >= t_prev + eps, 1.0, 0.0).to(
dtype=self._inpaint_mask.dtype, device=self._inpaint_mask.device
)
return mask
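A small worked sketch of the new thresholding rule, assuming a gradient inpaint mask with values in [0, 1]: any mask value at or above the current timestep (plus epsilon) is promoted to 1.0, and everything else drops to 0.0.
import torch
inpaint_mask = torch.tensor([0.0, 0.2, 0.5, 0.8, 1.0])
eps = 1e-4
for t_prev in (0.9, 0.5, 0.1):
    mask = torch.where(inpaint_mask >= t_prev + eps, 1.0, 0.0)
    print(t_prev, mask.tolist())
# 0.9 -> [0.0, 0.0, 0.0, 0.0, 1.0]
# 0.5 -> [0.0, 0.0, 0.0, 1.0, 1.0]   (0.5 < 0.5 + eps, so it is not yet promoted)
# 0.1 -> [0.0, 1.0, 1.0, 1.0, 1.0]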

View File

@@ -9,7 +9,8 @@ module.exports = {
// https://github.com/qdanik/eslint-plugin-path
'path/no-relative-imports': ['error', { maxDepth: 0 }],
// https://github.com/edvardchen/eslint-plugin-i18next/blob/HEAD/docs/rules/no-literal-string.md
'i18next/no-literal-string': 'error',
// TODO: ENABLE THIS RULE BEFORE v6.0.0
// 'i18next/no-literal-string': 'error',
// https://eslint.org/docs/latest/rules/no-console
'no-console': 'error',
// https://eslint.org/docs/latest/rules/no-promise-executor-return

View File

@@ -3,6 +3,8 @@ import type { KnipConfig } from 'knip';
const config: KnipConfig = {
project: ['src/**/*.{ts,tsx}!'],
ignore: [
// TODO(psyche): temporarily ignored all files for test build purposes
'src/**',
// This file is only used during debugging
'src/app/store/middleware/debugLoggerMiddleware.ts',
// Autogenerated types - shouldn't ever touch these
@@ -14,6 +16,8 @@ const config: KnipConfig = {
'src/features/controlLayers/konva/util.ts',
// TODO(psyche): restore HRF functionality?
'src/features/hrf/**',
// This feature is (temporarily?) disabled
'src/features/controlLayers/components/InpaintMask/InpaintMaskAddButtons.tsx',
],
ignoreBinaries: ['only-allow'],
paths: {

View File

@@ -60,7 +60,7 @@
"@fontsource-variable/inter": "^5.2.5",
"@invoke-ai/ui-library": "^0.0.46",
"@nanostores/react": "^1.0.0",
"@reduxjs/toolkit": "2.7.0",
"@reduxjs/toolkit": "2.8.2",
"@roarr/browser-log-writer": "^1.3.0",
"@xyflow/react": "^12.6.0",
"async-mutex": "^0.5.0",

View File

@@ -30,8 +30,8 @@ dependencies:
specifier: ^1.0.0
version: 1.0.0(nanostores@1.0.1)(react@18.3.1)
'@reduxjs/toolkit':
specifier: 2.7.0
version: 2.7.0(react-redux@9.2.0)(react@18.3.1)
specifier: 2.8.2
version: 2.8.2(react-redux@9.2.0)(react@18.3.1)
'@roarr/browser-log-writer':
specifier: ^1.3.0
version: 1.3.0
@@ -2161,8 +2161,8 @@ packages:
- supports-color
dev: true
/@reduxjs/toolkit@2.7.0(react-redux@9.2.0)(react@18.3.1):
resolution: {integrity: sha512-XVwolG6eTqwV0N8z/oDlN93ITCIGIop6leXlGJI/4EKy+0POYkR+ABHRSdGXY+0MQvJBP8yAzh+EYFxTuvmBiQ==}
/@reduxjs/toolkit@2.8.2(react-redux@9.2.0)(react@18.3.1):
resolution: {integrity: sha512-MYlOhQ0sLdw4ud48FoC5w0dH9VfWQjtCjreKwYTT3l+r427qYC5Y8PihNutepr8XrNaBUDQo9khWUwQxZaqt5A==}
peerDependencies:
react: ^16.9.0 || ^17.0.0 || ^18 || ^19
react-redux: ^7.2.1 || ^8.1.3 || ^9.0.0

View File

@@ -24,15 +24,18 @@
"autoAddBoard": "Auto-Add Board",
"boards": "Boards",
"selectedForAutoAdd": "Selected for Auto-Add",
"bottomMessage": "Deleting this board and its images will reset any features currently using them.",
"bottomMessage": "Deleting images will reset any features currently using them.",
"cancel": "Cancel",
"changeBoard": "Change Board",
"clearSearch": "Clear Search",
"deleteBoard": "Delete Board",
"deleteBoardAndImages": "Delete Board and Images",
"deleteBoardOnly": "Delete Board Only",
"deletedBoardsCannotbeRestored": "Deleted boards cannot be restored. Selecting 'Delete Board Only' will move images to an uncategorized state.",
"deletedPrivateBoardsCannotbeRestored": "Deleted boards cannot be restored. Selecting 'Delete Board Only' will move images to a private uncategorized state for the image's creator.",
"deletedBoardsCannotbeRestored": "Deleted boards and images cannot be restored. Selecting 'Delete Board Only' will move images to an uncategorized state.",
"deletedPrivateBoardsCannotbeRestored": "Deleted boards and images cannot be restored. Selecting 'Delete Board Only' will move images to a private uncategorized state for the image's creator.",
"uncategorizedImages": "Uncategorized Images",
"deleteAllUncategorizedImages": "Delete All Uncategorized Images",
"deletedImagesCannotBeRestored": "Deleted images cannot be restored.",
"hideBoards": "Hide Boards",
"loading": "Loading...",
"menuItemAutoAdd": "Auto-add to this Board",
@@ -46,7 +49,7 @@
"searchBoard": "Search Boards...",
"selectBoard": "Select a Board",
"shared": "Shared Boards",
"topMessage": "This board contains images used in the following features:",
"topMessage": "This selection contains images used in the following features:",
"unarchiveBoard": "Unarchive Board",
"uncategorized": "Uncategorized",
"viewBoards": "View Boards",
@@ -1907,11 +1910,13 @@
"addPositivePrompt": "Add $t(controlLayers.prompt)",
"addNegativePrompt": "Add $t(controlLayers.negativePrompt)",
"addReferenceImage": "Add $t(controlLayers.referenceImage)",
"addImageNoise": "Add $t(controlLayers.imageNoise)",
"addRasterLayer": "Add $t(controlLayers.rasterLayer)",
"addControlLayer": "Add $t(controlLayers.controlLayer)",
"addInpaintMask": "Add $t(controlLayers.inpaintMask)",
"addRegionalGuidance": "Add $t(controlLayers.regionalGuidance)",
"addGlobalReferenceImage": "Add $t(controlLayers.globalReferenceImage)",
"addDenoiseLimit": "Add $t(controlLayers.denoiseLimit)",
"rasterLayer": "Raster Layer",
"controlLayer": "Control Layer",
"inpaintMask": "Inpaint Mask",
@@ -2009,8 +2014,12 @@
"resetCanvasLayers": "Reset Canvas Layers",
"resetGenerationSettings": "Reset Generation Settings",
"replaceCurrent": "Replace Current",
"controlLayerEmptyState": "<UploadButton>Upload an image</UploadButton>, drag an image from the <GalleryButton>gallery</GalleryButton> onto this layer, or draw on the canvas to get started.",
"referenceImageEmptyState": "<UploadButton>Upload an image</UploadButton>, drag an image from the <GalleryButton>gallery</GalleryButton> onto this layer, or <PullBboxButton>pull the bounding box into this layer</PullBboxButton> to get started.",
"controlLayerEmptyState": "<UploadButton>Upload an image</UploadButton>, drag an image from the <GalleryButton>gallery</GalleryButton> onto this layer, <PullBboxButton>pull the bounding box into this layer</PullBboxButton>, or draw on the canvas to get started.",
"referenceImageEmptyStateWithCanvasOptions": "<UploadButton>Upload an image</UploadButton>, drag an image from the <GalleryButton>gallery</GalleryButton> onto this Reference Image or <PullBboxButton>pull the bounding box into this Reference Image</PullBboxButton> to get started.",
"referenceImageEmptyState": "<UploadButton>Upload an image</UploadButton> or drag an image from the <GalleryButton>gallery</GalleryButton> onto this Reference Image to get started.",
"uploadOrDragAnImage": "Drag an image from the gallery or <UploadButton>upload an image</UploadButton>.",
"imageNoise": "Image Noise",
"denoiseLimit": "Denoise Limit",
"warnings": {
"problemsFound": "Problems found",
"unsupportedModel": "layer not supported for selected base model",
@@ -2419,9 +2428,8 @@
"whatsNew": {
"whatsNewInInvoke": "What's New in Invoke",
"items": [
"Nvidia 50xx GPUs: Invoke uses PyTorch 2.7.0, which is required for these GPUs.",
"Model Relationships: Link LoRAs to main models, and the LoRAs will show up first in the list.",
"IP Adapter: New Style (Strong) and Style (Precise) methods for SDXL and SD1.5 models."
"Inpainting: Per-mask noise levels and denoise limits.",
"Canvas: Smarter aspect ratios for SDXL and improved scroll-to-zoom."
],
"readReleaseNotes": "Read Release Notes",
"watchRecentReleaseVideos": "Watch Recent Release Videos",

View File

@@ -883,7 +883,8 @@
"problemUnpublishingWorkflow": "Problema durante l'annullamento della pubblicazione del flusso di lavoro",
"problemUnpublishingWorkflowDescription": "Si è verificato un problema durante l'annullamento della pubblicazione del flusso di lavoro. Riprova.",
"workflowUnpublished": "Flusso di lavoro non pubblicato",
"chatGPT4oIncompatibleGenerationMode": "ChatGPT 4o supporta solo la conversione da testo a immagine e da immagine a immagine. Utilizza altri modelli per le attività di Inpainting e Outpainting."
"chatGPT4oIncompatibleGenerationMode": "ChatGPT 4o supporta solo la conversione da testo a immagine e da immagine a immagine. Utilizza altri modelli per le attività di Inpainting e Outpainting.",
"imagenIncompatibleGenerationMode": "Google {{model}} supporta solo la generazione da testo a immagine. Utilizza altri modelli per le attività di conversione da immagine a immagine, inpainting e outpainting."
},
"accessibility": {
"invokeProgressBar": "Barra di avanzamento generazione",
@@ -1085,11 +1086,11 @@
"menuItemAutoAdd": "Aggiungi automaticamente a questa bacheca",
"cancel": "Annulla",
"addBoard": "Aggiungi Bacheca",
"bottomMessage": "L'eliminazione di questa bacheca e delle sue immagini ripristinerà tutte le funzionalità che le stanno attualmente utilizzando.",
"bottomMessage": "L'eliminazione delle immagini reimposterà tutte le funzionalità che le stanno utilizzando.",
"changeBoard": "Cambia Bacheca",
"loading": "Caricamento in corso ...",
"clearSearch": "Cancella Ricerca",
"topMessage": "Questa bacheca contiene immagini utilizzate nelle seguenti funzionalità:",
"topMessage": "Questa selezione contiene immagini utilizzate nelle seguenti funzionalità:",
"move": "Sposta",
"myBoard": "Bacheca",
"searchBoard": "Cerca bacheche ...",
@@ -1100,7 +1101,7 @@
"deleteBoardOnly": "solo la Bacheca",
"deleteBoard": "Elimina Bacheca",
"deleteBoardAndImages": "Bacheca e Immagini",
"deletedBoardsCannotbeRestored": "Le bacheche eliminate non possono essere ripristinate. Selezionando \"Elimina solo bacheca\" le immagini verranno spostate nella bacheca \"Non categorizzato\".",
"deletedBoardsCannotbeRestored": "Le bacheche e le immagini eliminate non possono essere ripristinate. Selezionando \"Elimina solo bacheca\" le immagini verranno spostate in uno stato non categorizzato.",
"movingImagesToBoard_one": "Spostare {{count}} immagine nella bacheca:",
"movingImagesToBoard_many": "Spostare {{count}} immagini nella bacheca:",
"movingImagesToBoard_other": "Spostare {{count}} immagini nella bacheca:",
@@ -1122,8 +1123,11 @@
"noBoards": "Nessuna bacheca {{boardType}}",
"hideBoards": "Nascondi bacheche",
"viewBoards": "Visualizza bacheche",
"deletedPrivateBoardsCannotbeRestored": "Le bacheche cancellate non possono essere ripristinate. Selezionando 'Cancella solo bacheca', le immagini verranno spostate nella bacheca \"Non categorizzato\" privata dell'autore dell'immagine.",
"updateBoardError": "Errore durante l'aggiornamento della bacheca"
"deletedPrivateBoardsCannotbeRestored": "Le bacheche e le immagini eliminate non possono essere ripristinate. Selezionando \"Elimina solo bacheca\", le immagini verranno spostate in uno stato privato e non categorizzato per l'autore dell'immagine.",
"updateBoardError": "Errore durante l'aggiornamento della bacheca",
"uncategorizedImages": "Immagini non categorizzate",
"deleteAllUncategorizedImages": "Elimina tutte le immagini non categorizzate",
"deletedImagesCannotBeRestored": "Le immagini eliminate non possono essere ripristinate."
},
"queue": {
"queueFront": "Aggiungi all'inizio della coda",
@@ -2295,7 +2299,7 @@
"replaceCurrent": "Sostituisci corrente",
"mergeDown": "Unire in basso",
"mergingLayers": "Unione dei livelli",
"controlLayerEmptyState": "<UploadButton>Carica un'immagine</UploadButton>, trascina un'immagine dalla <GalleryButton>galleria</GalleryButton> su questo livello oppure disegna sulla tela per iniziare.",
"controlLayerEmptyState": "<UploadButton>Carica un'immagine</UploadButton>, trascina un'immagine dalla <GalleryButton>galleria</GalleryButton> su questo livello, <PullBboxButton>trascina il riquadro di delimitazione in questo livello</PullBboxButton> oppure disegna sulla tela per iniziare.",
"useImage": "Usa immagine",
"resetGenerationSettings": "Ripristina impostazioni di generazione",
"referenceImageEmptyState": "Per iniziare, <UploadButton>carica un'immagine</UploadButton>, trascina un'immagine dalla <GalleryButton>galleria</GalleryButton>, oppure <PullBboxButton>trascina il riquadro di delimitazione in questo livello</PullBboxButton> su questo livello.",
@@ -2344,7 +2348,11 @@
"lowest": "Il più basso",
"medium": "Medio",
"highest": "La più alta"
}
},
"denoiseLimit": "Limite di riduzione del rumore",
"addImageNoise": "Aggiungi $t(controlLayers.imageNoise)",
"addDenoiseLimit": "Aggiungi $t(controlLayers.denoiseLimit)",
"imageNoise": "Rumore dell'immagine"
},
"ui": {
"tabs": {
@@ -2444,8 +2452,8 @@
"watchRecentReleaseVideos": "Guarda i video su questa versione",
"watchUiUpdatesOverview": "Guarda le novità dell'interfaccia",
"items": [
"GPU Nvidia 50xx: Invoke utilizza PyTorch 2.7.0, necessario per queste GPU.",
"Relazioni tra modelli: collega i LoRA ai modelli principali e i LoRA verranno visualizzati per primi nell'elenco."
"Inpainting: livelli di rumore per maschera e limiti di denoise.",
"Canvas: proporzioni più intelligenti per SDXL e scorrimento e zoom migliorati."
]
},
"system": {

View File

@@ -392,7 +392,7 @@
"title": "全選択"
},
"addNode": {
"desc": "ノード追加メニューを開く",
"desc": "ノード追加メニューを開く",
"title": "ノードを追加"
},
"pasteSelectionWithEdges": {
@@ -652,7 +652,9 @@
"filterModels": "フィルターモデル",
"modelPickerFallbackNoModelsInstalled": "モデルがインストールされていません.",
"manageModels": "モデル管理",
"hfTokenReset": "ハギングフェイストークンリセット"
"hfTokenReset": "ハギングフェイストークンリセット",
"relatedModels": "関連のあるモデル",
"showOnlyRelatedModels": "関連している"
},
"parameters": {
"images": "画像",
@@ -872,7 +874,8 @@
"problemDeletingWorkflow": "ワークフローが削除された問題",
"imageNotLoadedDesc": "画像を見つけられません",
"parameterNotSetDesc": "{{parameter}}を呼び出せません",
"chatGPT4oIncompatibleGenerationMode": "ChatGPT 4oは,テキストから画像への生成と画像から画像への生成のみをサポートしています.インペインティングおよび,アウトペインティングタスクには他のモデルを使用してください."
"chatGPT4oIncompatibleGenerationMode": "ChatGPT 4oは,テキストから画像への生成と画像から画像への生成のみをサポートしています.インペインティングおよび,アウトペインティングタスクには他のモデルを使用してください.",
"imagenIncompatibleGenerationMode": "Google {{model}} はテキストから画像への変換のみをサポートしています. 画像から画像への変換, インペインティング,アウトペインティングのタスクには他のモデルを使用してください."
},
"accessibility": {
"invokeProgressBar": "進捗バー",
@@ -1153,11 +1156,11 @@
"unknownField": "不明なフィールド",
"unexpectedField_withName": "予期しないフィールド\"{{name}}\"",
"loadingTemplates": "読み込み中 {{name}}",
"validateConnectionsHelp": "無効な接続が行われたり,無効なグラフが呼び出されたりしないようにします.",
"validateConnectionsHelp": "無効な接続が行われたり,無効なグラフが呼び出されたりしないようにします",
"validateConnections": "接続とグラフを確認する",
"saveToGallery": "ギャラリーに保存",
"newWorkflowDesc": "新しいワークフローを作りますか?",
"unknownFieldType": "$t(nodes.unknownField)型:{type}}",
"unknownFieldType": "$t(nodes.unknownField)型: {{type}}",
"unsupportedArrayItemType": "サポートされていない配列項目型です \"{{type}}\"",
"unableToLoadWorkflow": "ワークフローが読み込めません",
"unableToValidateWorkflow": "ワークフローを確認できません",
@@ -1200,13 +1203,13 @@
"downloadBoard": "ボードをダウンロード",
"changeBoard": "ボードを変更",
"loading": "ロード中...",
"topMessage": "このボードには、以下の機能で使用されている画像が含まれています",
"bottomMessage": "このボードおよび画像を削除すると、現在これらを利用している機能はリセットされます。",
"topMessage": "この選択には、以下の機能で使用される画像が含まれています:",
"bottomMessage": "この画像を削除すると、現在利用している機能はリセットされます。",
"clearSearch": "検索をクリア",
"deleteBoard": "ボードの削除",
"deleteBoardAndImages": "ボードと画像の削除",
"deleteBoardOnly": "ボードのみ削除",
"deletedBoardsCannotbeRestored": "削除されたボードは復元できません。\"ボードのみ削除\"を選択すると画像は未分類に移動されます。",
"deletedBoardsCannotbeRestored": "削除されたボードと画像は復元できません。ボードのみ削除を選択すると画像は未分類の状態になります。",
"movingImagesToBoard_other": "{{count}} の画像をボードに移動:",
"hideBoards": "ボードを隠す",
"assetsWithCount_other": "{{count}} のアセット",
@@ -1221,9 +1224,12 @@
"imagesWithCount_other": "{{count}} の画像",
"updateBoardError": "ボード更新エラー",
"selectedForAutoAdd": "自動追加に選択済み",
"deletedPrivateBoardsCannotbeRestored": "削除されたボードは復元できません。\"ボードのみ削除\"を選択すると画像はその作成者のプライベートな未分類に移動されます。",
"deletedPrivateBoardsCannotbeRestored": "削除されたボードと画像は復元できません。ボードのみ削除を選択すると画像は作成者に対して非公開の未分類状態になります。",
"noBoards": "{{boardType}} ボードがありません",
"viewBoards": "ボードを表示"
"viewBoards": "ボードを表示",
"uncategorizedImages": "分類されていない画像",
"deleteAllUncategorizedImages": "分類されていないすべての画像を削除",
"deletedImagesCannotBeRestored": "削除した画像は復元できません."
},
"invocationCache": {
"invocationCache": "呼び出しキャッシュ",
@@ -1246,7 +1252,8 @@
"paramRatio": {
"heading": "縦横比",
"paragraphs": [
"生成された画像の縦横比。"
"生成された画像の縦横比。",
"SD1.5 モデルの場合は 512x512 に相当する画像サイズ (ピクセル数) が推奨され, SDXL モデルの場合は 1024x1024 に相当するサイズが推奨されます."
]
},
"regionalGuidanceAndReferenceImage": {
@@ -1288,25 +1295,49 @@
]
},
"paramUpscaleMethod": {
"heading": "アップスケール手法"
"heading": "アップスケール手法",
"paragraphs": [
"高解像度修正のために画像を拡大するために使用される方法。"
]
},
"upscaleModel": {
"heading": "アップスケールモデル"
"heading": "アップスケールモデル",
"paragraphs": [
"アップスケールモデルは、ディテールを追加する前に画像を出力サイズに合わせて拡大縮小します。サポートされているアップスケールモデルであればどれでも使用できますが、写真や線画など、特定の種類の画像に特化したモデルもあります。"
]
},
"paramAspect": {
"heading": "縦横比"
"heading": "縦横比",
"paragraphs": [
"生成される画像のアスペクト比。比率を変更すると、幅と高さもそれに応じて更新されます。",
"「最適化」は、選択したモデルの幅と高さを最適な寸法に設定します。"
]
},
"refinerSteps": {
"heading": "ステップ"
"heading": "ステップ",
"paragraphs": [
"生成プロセスのリファイナー部分で実行されるステップの数。",
"生成ステップと似ています。"
]
},
"paramVAE": {
"heading": "VAE"
"heading": "VAE",
"paragraphs": [
"AI 出力を最終画像に変換するために使用されるモデル。"
]
},
"scale": {
"heading": "スケール"
"heading": "スケール",
"paragraphs": [
"スケールは出力画像のサイズを制御し、入力画像の解像度の倍数に基づいて決定されます。例えば、1024x1024の画像を2倍に拡大すると、2048x2048の出力が生成されます。"
]
},
"refinerScheduler": {
"heading": "スケジューラー"
"heading": "スケジューラー",
"paragraphs": [
"生成プロセスのリファイナー部分で使用されるスケジューラ。",
"生成スケジューラに似ています。"
]
},
"compositingCoherenceMode": {
"heading": "モード",
@@ -1315,13 +1346,23 @@
]
},
"paramModel": {
"heading": "モデル"
"heading": "モデル",
"paragraphs": [
"生成に使用されるモデル。異なるモデルは、異なる美的結果とコンテンツを生成するように特化するようにトレーニングされています。"
]
},
"paramHeight": {
"heading": "高さ"
"heading": "高さ",
"paragraphs": [
"生成される画像の高さ。8の倍数にする必要があります。"
]
},
"paramSteps": {
"heading": "ステップ"
"heading": "ステップ",
"paragraphs": [
"各生成で実行されるステップの数.",
"通常, ステップ数が多いほど, より高品質な画像が作成されますが生成時間も長くなります."
]
},
"ipAdapterMethod": {
"heading": "モード",
@@ -1330,10 +1371,18 @@
]
},
"paramSeed": {
"heading": "シード"
"heading": "シード",
"paragraphs": [
"生成に使用する始動ノイズを制御します.",
"同じ生成設定で同一の結果を生成するには, 「ランダム」オプションを無効にします."
]
},
"paramIterations": {
"heading": "生成回数"
"heading": "生成回数",
"paragraphs": [
"生成する画像の数。",
"動的プロンプトが有効になっている場合、各プロンプトはこの回数生成されます。"
]
},
"controlNet": {
"heading": "ControlNet",
@@ -1342,16 +1391,29 @@
]
},
"paramWidth": {
"heading": "幅"
"heading": "幅",
"paragraphs": [
"生成される画像の幅。8の倍数にする必要があります。"
]
},
"lora": {
"heading": "LoRA"
"heading": "LoRA",
"paragraphs": [
"ベースモデルと組み合わせて使用する軽量モデル."
]
},
"loraWeight": {
"heading": "重み"
"heading": "重み",
"paragraphs": [
"LoRA の重み. 重みを大きくすると, 最終的な画像への影響が大きくなります."
]
},
"patchmatchDownScaleSize": {
"heading": "Downscale"
"heading": "Downscale",
"paragraphs": [
"埋め込む前にどの程度のダウンスケーリングが行われるか。",
"ダウンスケーリングを大きくするとパフォーマンスは向上しますが、品質は低下します。"
]
},
"controlNetWeight": {
"heading": "重み",
@@ -1437,7 +1499,8 @@
"heading": "ダイナミックプロンプト",
"paragraphs": [
"ダイナミック プロンプトは,単一のプロンプトを複数のプロンプトに解析します.",
"基本的な構文は「{赤|緑|青}のボール」です.これにより,「赤いボール」「緑のボール」「青いボール」という3つのプロンプトが生成されます."
"基本的な構文は「{赤|緑|青}のボール」です.これにより,「赤いボール」「緑のボール」「青いボール」という3つのプロンプトが生成されます.",
"1 つのプロンプト内で構文を何度でも使用できますが, 生成されるプロンプトの数を Max Prompts 設定で制限するようにしてください."
]
},
"controlNetResizeMode": {
@@ -1457,6 +1520,159 @@
"paragraphs": [
"プロンプトまたは コントロールネットのいずれかを重視します."
]
},
"noiseUseCPU": {
"paragraphs": [
"CPU または GPU でノイズを生成するかどうかを制御します.",
"CPU ノイズを有効にすると, 特定のシードによってどのマシンでも同じ画像が生成されます.",
"CPU ノイズを有効にしてもパフォーマンスに影響はありません."
],
"heading": "CPUノイズを使用する"
},
"dynamicPromptsMaxPrompts": {
"heading": "最大プロンプト",
"paragraphs": [
"ダイナミック プロンプトによって生成できるプロンプトの数を制限します."
]
},
"dynamicPromptsSeedBehaviour": {
"paragraphs": [
"プロンプトを生成するときにシードがどのように使用されるかを制御します.",
"反復ごとに固有のシードを使用します. 単一のシードでプロンプトのバリエーションを試す場合に使用します.",
"たとえば, プロンプトが 5 つある場合, 各画像は同じシードを使用します.",
"「画像ごと」では, 画像ごとに固有のシード値が使用されます. これにより、より多くのバリエーションが得られます."
],
"heading": "シード行動"
},
"imageFit": {
"paragraphs": [
"初期画像の幅と高さを出力画像に合わせてサイズ変更します. 有効にすることをお勧めします."
],
"heading": "初期画像を出力サイズに合わせる"
},
"infillMethod": {
"heading": "充填方法",
"paragraphs": [
"アウトペインティングまたはインペインティングのプロセス中に埋め込む方法."
]
},
"paramGuidance": {
"paragraphs": [
"プロンプトが生成プロセスにどの程度影響するかを制御します。",
"ガイダンス値が高すぎると過飽和状態になる可能性があり、ガイダンス値が高すぎるか低すぎると生成結果に歪みが生じる可能性があります。ガイダンスはFLUX DEVモデルにのみ適用されます。"
],
"heading": "ガイダンス"
},
"paramDenoisingStrength": {
"paragraphs": [
"生成されたイメージがラスター レイヤーとどの程度異なるかを制御します。",
"強度が低いほど、結合された表示ラスターレイヤーに近くなります。強度が高いほど、グローバルプロンプトに大きく依存します。",
"表示されるコンテンツを持つラスター レイヤーがない場合、この設定は無視されます。"
],
"heading": "ディノイジングストレングス"
},
"refinerStart": {
"heading": "リファイナースタート",
"paragraphs": [
"生成プロセスのどの時点でリファイナーが使用され始めるか。",
"0 はリファイナーが生成プロセス全体で使用されることを意味し、0.8 は、リファイナーが生成プロセスの最後の 20% で使用されることを意味します。"
]
},
"optimizedDenoising": {
"heading": "イメージtoイメージの最適化",
"paragraphs": [
"「イメージtoイメージを最適化」を有効にすると、Fluxモデルを用いた画像間変換およびインペインティング変換において、より段階的なノイズ除去強度スケールが適用されます。この設定により、画像に適用される変化量を制御する能力が向上しますが、標準のノイズ除去強度スケールを使用したい場合はオフにすることができます。この設定は現在調整中で、ベータ版です。"
]
},
"refinerPositiveAestheticScore": {
"heading": "ポジティブ美的スコア",
"paragraphs": [
"トレーニング データに基づいて、美的スコアの高い画像に類似するように生成を重み付けします。"
]
},
"paramCFGScale": {
"paragraphs": [
"プロンプトが生成プロセスにどの程度影響するかを制御します。",
"CFG スケールの値が高すぎると、飽和しすぎて生成結果が歪む可能性があります。 "
],
"heading": "CFGスケール"
},
"paramVAEPrecision": {
"paragraphs": [
"VAE エンコードおよびデコード時に使用される精度。",
"Fp16/Half 精度は、画像のわずかな変化を犠牲にして、より効率的です。"
],
"heading": "VAE精度"
},
"refinerModel": {
"heading": "リファイナーモデル",
"paragraphs": [
"生成プロセスの精製部分で使用されるモデル。",
"世代モデルに似ています。"
]
},
"refinerCfgScale": {
"heading": "CFGスケール",
"paragraphs": [
"プロンプトが生成プロセスに与える影響を制御する。",
"生成CFG スケールに似ています。"
]
},
"seamlessTilingYAxis": {
"heading": "シームレスタイリングY軸",
"paragraphs": [
"画像を垂直軸に沿ってシームレスに並べます。"
]
},
"scaleBeforeProcessing": {
"heading": "プロセス前のスケール値",
"paragraphs": [
"「自動」は、画像生成プロセスの前に、選択した領域をモデルに最適なサイズに拡大縮小します。",
"「手動」では、画像生成プロセスの前に、選択した領域を拡大縮小する幅と高さを選択できます。"
]
},
"creativity": {
"heading": "クリエイティビティ",
"paragraphs": [
"クリエイティビティは、ディテールを追加する際のモデルに与えられる自由度を制御します。クリエイティビティが低いと元のイメージに近いままになり、クリエイティビティが高いとより多くの変化を加えることができます。プロンプトを使用する場合、クリエイティビティが高いとプロンプトの影響が増します。"
]
},
"paramHrf": {
"heading": "高解像度修正を有効にする",
"paragraphs": [
"モデルに最適な解像度よりも高い解像度で、高品質な画像を生成します。通常、生成された画像内の重複を防ぐために使用されます。"
]
},
"seamlessTilingXAxis": {
"heading": "シームレスタイリングX軸",
"paragraphs": [
"画像を水平軸に沿ってシームレスに並べます。"
]
},
"paramCFGRescaleMultiplier": {
"paragraphs": [
"ゼロ端末 SNR (ztsnr) を使用してトレーニングされたモデルに使用される、CFG ガイダンスのリスケールマルチプライヤー。",
"これらのモデルの場合、推奨値は 0.7 です。"
],
"heading": "CFG リスケールマルチプライヤー"
},
"structure": {
"heading": "ストラクチャ",
"paragraphs": [
"ストラクチャは、出力画像が元のレイアウトにどれだけ忠実に従うかを制御します。低いストラクチャでは大幅な変更が可能ですが、高いストラクチャでは元の構成とレイアウトが厳密に維持されます。"
]
},
"refinerNegativeAestheticScore": {
"paragraphs": [
"トレーニング データに基づいて、美観スコアが低い画像に類似するように生成に重み付けします。"
],
"heading": "ネガティブ美的スコア"
},
"fluxDevLicense": {
"heading": "非商用ライセンス",
"paragraphs": [
"FLUX.1 [dev]モデルは、FLUX [dev]非商用ライセンスに基づいてライセンスされています。Invokeでこのモデルタイプを商用目的で使用する場合は、当社のウェブサイトをご覧ください。"
]
}
},
"accordions": {
@@ -1629,7 +1845,106 @@
"workflows": "ワークフロー",
"ascending": "昇順",
"name": "名前",
"descending": "降順"
"descending": "降順",
"searchPlaceholder": "名前、説明、タグで検索",
"projectWorkflows": "プロジェクトワークフロー",
"searchWorkflows": "ワークフローを検索",
"updated": "アップデート",
"published": "公表",
"builder": {
"label": "ラベル",
"containerPlaceholder": "空のコンテナ",
"showDescription": "説明を表示",
"emptyRootPlaceholderEditMode": "開始するには、フォーム要素またはノード フィールドをここにドラッグします。",
"divider": "仕切り",
"deleteAllElements": "すべてのフォーム要素を削除",
"heading": "見出し",
"nodeField": "ノードフィールド",
"zoomToNode": "ノードにズーム",
"dropdown": "ドロップダウン",
"resetOptions": "オプションをリセット",
"both": "両方",
"builder": "フォームビルダー",
"text": "テキスト",
"row": "行",
"multiLine": "マルチライン",
"resetAllNodeFields": "すべてのノードフィールドをリセット",
"slider": "スライダー",
"layout": "レイアウト",
"addToForm": "フォームに追加",
"headingPlaceholder": "空の見出し",
"nodeFieldTooltip": "ノード フィールドを追加するには、ワークフロー エディターのフィールドにある小さなプラス記号ボタンをクリックするか、フィールド名をフォームにドラッグします。",
"workflowBuilderAlphaWarning": "ワークフロービルダーは現在アルファ版です。安定版リリースまでに互換性に影響する変更が発生する可能性があります。",
"component": "コンポーネント",
"textPlaceholder": "空のテキスト",
"emptyRootPlaceholderViewMode": "このワークフローのフォームの作成を開始するには、[編集] をクリックします。",
"addOption": "オプションを追加",
"singleLine": "単線",
"numberInput": "数値入力",
"column": "列",
"container": "コンテナ",
"containerRowLayout": "コンテナ(行レイアウト)",
"containerColumnLayout": "コンテナ(列レイアウト)",
"maximum": "最大",
"published": "公開済み",
"publishedWorkflowOutputs": "アウトプット",
"minimum": "最小",
"publish": "公開",
"unpublish": "非公開",
"publishedWorkflowInputs": "インプット"
},
"chooseWorkflowFromLibrary": "ライブラリからワークフローを選択",
"unnamedWorkflow": "名前のないワークフロー",
"download": "ダウンロード",
"savingWorkflow": "ワークフローを保存しています...",
"problemSavingWorkflow": "ワークフローの保存に関する問題",
"convertGraph": "グラフを変換",
"downloadWorkflow": "ファイルに保存",
"saveWorkflow": "ワークフローを保存",
"userWorkflows": "ユーザーワークフロー",
"yourWorkflows": "あなたのワークフロー",
"edit": "編集",
"workflowLibrary": "ワークフローライブラリ",
"workflowSaved": "ワークフローが保存されました",
"clearWorkflowSearchFilter": "ワークフロー検索フィルタをクリア",
"workflowCleared": "ワークフローが作成されました",
"autoLayout": "オートレイアウト",
"view": "ビュー",
"saveChanges": "変更を保存",
"noDescription": "説明なし",
"recommended": "あなたへのおすすめ",
"noRecentWorkflows": "最近のワークフローがありません",
"problemLoading": "ワークフローのローディングに関する問題",
"newWorkflowCreated": "新しいワークフローが作成されました",
"noWorkflows": "ワークフローがありません",
"copyShareLink": "共有リンクをコピー",
"copyShareLinkForWorkflow": "ワークフローの共有リンクをコピー",
"workflowThumbnail": "ワークフローサムネイル",
"loadWorkflow": "$t(common.load) ワークフロー",
"shared": "共有",
"openWorkflow": "ワークフローを開く",
"emptyStringPlaceholder": "<空の文字列>",
"browseWorkflows": "ワークフローを閲覧する",
"saveWorkflowAs": "ワークフローとして保存",
"private": "プライベート",
"deselectAll": "すべて選択解除",
"delete": "削除",
"openLibrary": "ライブラリを開く",
"loadMore": "もっと読み込む",
"saveWorkflowToProject": "ワークフローをプロジェクトに保存",
"created": "作成されました",
"workflowEditorMenu": "ワークフローエディターメニュー",
"defaultWorkflows": "デフォルトワークフロー",
"allLoaded": "すべてのワークフローが読み込まれました",
"filterByTags": "タグでフィルター",
"recentlyOpened": "最近開いた",
"opened": "オープン",
"deleteWorkflow": "ワークフローを削除",
"deleteWorkflow2": "このワークフローを削除してもよろしいですか? 元に戻すことはできません。",
"loadFromGraph": "グラフからワークフローをロード",
"workflowName": "ワークフロー名",
"loading": "ワークフローをロードしています",
"uploadWorkflow": "ファイルからロードする"
},
"system": {
"logNamespaces": {

View File

@@ -30,7 +30,7 @@
"boards": "Bảng",
"selectedForAutoAdd": "Đã Chọn Để Tự động thêm",
"myBoard": "Bảng Của Tôi",
"deletedPrivateBoardsCannotbeRestored": "Bảng đã xoá sẽ không thể khôi phục lại. Chọn 'Chỉ Xoá Bảng' sẽ dời ảnh vào trạng thái chưa phân loại riêng cho chủ ảnh.",
"deletedPrivateBoardsCannotbeRestored": "Bảng và ảnh đã xoá sẽ không thể khôi phục lại. Chọn 'Chỉ Xoá Bảng' sẽ dời ảnh vào trạng thái chưa phân loại riêng cho chủ ảnh.",
"changeBoard": "Thay Đổi Bảng",
"clearSearch": "Làm Sạch Thanh Tìm Kiếm",
"updateBoardError": "Lỗi khi cập nhật Bảng",
@@ -41,18 +41,21 @@
"deleteBoard": "Xoá Bảng",
"deleteBoardAndImages": "Xoá Bảng Lẫn Hình ảnh",
"deleteBoardOnly": "Chỉ Xoá Bảng",
"deletedBoardsCannotbeRestored": "Bảng đã xoá sẽ không thể khôi phục lại. Chọn 'Chỉ Xoá Bảng' sẽ dời ảnh vào trạng thái chưa phân loại.",
"bottomMessage": "Xoá bảng này lẫn ảnh của nó sẽ khởi động lại mọi tính năng đang sử dụng chúng.",
"deletedBoardsCannotbeRestored": "Bảng và ảnh đã xoá sẽ không thể khôi phục lại. Chọn 'Chỉ Xoá Bảng' sẽ dời ảnh vào trạng thái chưa phân loại.",
"bottomMessage": "Việc xóa ảnh sẽ khởi động lại mọi tính năng đang sử dụng chúng.",
"menuItemAutoAdd": "Tự động thêm cho Bảng này",
"move": "Di Chuyển",
"topMessage": "Bảng này chứa ảnh được dùng với những tính năng sau:",
"topMessage": "Lựa chọn này chứa ảnh được dùng với những tính năng sau:",
"uncategorized": "Chưa Sắp Xếp",
"archived": "Được Lưu Trữ",
"loading": "Đang Tải...",
"selectBoard": "Chọn Bảng",
"archiveBoard": "Lưu trữ Bảng",
"unarchiveBoard": "Ngừng Lưu Trữ Bảng",
"assetsWithCount_other": "{{count}} tài nguyên"
"assetsWithCount_other": "{{count}} tài nguyên",
"uncategorizedImages": "Ảnh Chưa Sắp Xếp",
"deleteAllUncategorizedImages": "Xoá Tất Cả Ảnh Chưa Sắp Xếp",
"deletedImagesCannotBeRestored": "Ảnh đã xoá không thể phục hồi lại."
},
"gallery": {
"swapImages": "Đổi Hình Ảnh",
@@ -2059,7 +2062,7 @@
"colorPicker": "Chọn Màu"
},
"mergingLayers": "Đang gộp layer",
"controlLayerEmptyState": "<UploadButton>Tải lên ảnh</UploadButton>, kéo thả ảnh từ <GalleryButton>thư viện</GalleryButton> vào layer này, hoặc vẽ trên canvas để bắt đầu.",
"controlLayerEmptyState": "<UploadButton>Tải lên ảnh</UploadButton>, kéo thả ảnh từ <GalleryButton>thư viện</GalleryButton> vào layer này, <PullBboxButton>kéo hộp giới hạn vào layer này</PullBboxButton>, hoặc vẽ trên canvas để bắt đầu.",
"referenceImageEmptyState": "<UploadButton>Tải lên hình ảnh</UploadButton>, kéo ảnh từ <GalleryButton>thư viện ảnh</GalleryButton> vào layer này, hoặc <PullBboxButton>kéo hộp giới hạn vào layer này</PullBboxButton> để bắt đầu.",
"useImage": "Dùng Hình Ảnh",
"resetCanvasLayers": "Khởi Động Lại Layer Canvas",
@@ -2108,7 +2111,11 @@
"imageInfluence": "Ảnh Chi Phối",
"medium": "Vừa",
"highest": "Cao Nhất"
}
},
"addDenoiseLimit": "Thêm $t(controlLayers.denoiseLimit)",
"imageNoise": "Độ Nhiễu Hình Ảnh",
"denoiseLimit": "Giới Hạn Khử Nhiễu",
"addImageNoise": "Thêm $t(controlLayers.imageNoise)"
},
"stylePresets": {
"negativePrompt": "Lệnh Tiêu Cực",
@@ -2249,7 +2256,8 @@
"problemUnpublishingWorkflowDescription": "Có vấn đề khi ngừng đăng tải workflow. Vui lòng thử lại sau.",
"workflowUnpublished": "Workflow Đã Được Ngừng Đăng Tải",
"problemUnpublishingWorkflow": "Có Vấn Đề Khi Ngừng Đăng Tải Workflow",
"chatGPT4oIncompatibleGenerationMode": "ChatGPT 4o chỉ hỗ trợ Từ Ngữ Sang Hình Ảnh và Hình Ảnh Sang Hình Ảnh. Hãy dùng model khác cho các tác vụ Inpaint và Outpaint."
"chatGPT4oIncompatibleGenerationMode": "ChatGPT 4o chỉ hỗ trợ Từ Ngữ Sang Hình Ảnh và Hình Ảnh Sang Hình Ảnh. Hãy dùng model khác cho các tác vụ Inpaint và Outpaint.",
"imagenIncompatibleGenerationMode": "Google {{model}} chỉ hỗ trợ Từ Ngữ Sang Hình Ảnh. Dùng các model khác cho Hình Ảnh Sang Hình Ảnh, Inpaint và Outpaint."
},
"ui": {
"tabs": {

View File

@@ -8,19 +8,25 @@ import { appStarted } from 'app/store/middleware/listenerMiddleware/listeners/ap
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import type { PartialAppConfig } from 'app/types/invokeai';
import { useFocusRegionWatcher } from 'common/hooks/focus';
import { useCloseChakraTooltipsOnDragFix } from 'common/hooks/useCloseChakraTooltipsOnDragFix';
import { useGlobalHotkeys } from 'common/hooks/useGlobalHotkeys';
import { useDynamicPromptsWatcher } from 'features/dynamicPrompts/hooks/useDynamicPromptsWatcher';
import { toggleImageViewer } from 'features/gallery/components/ImageViewer/useImageViewer';
import { useStarterModelsToast } from 'features/modelManagerV2/hooks/useStarterModelsToast';
import { useWorkflowBuilderWatcher } from 'features/nodes/components/sidePanel/workflow/IsolatedWorkflowBuilderWatcher';
import { useReadinessWatcher } from 'features/queue/store/readiness';
import { useRegisteredHotkeys } from 'features/system/components/HotkeysModal/useHotkeyData';
import { configChanged } from 'features/system/store/configSlice';
import { selectLanguage } from 'features/system/store/systemSelectors';
import i18n from 'i18n';
import { size } from 'lodash-es';
import { memo, useEffect } from 'react';
import { useGetOpenAPISchemaQuery } from 'services/api/endpoints/appInfo';
import { useGetQueueCountsByDestinationQuery } from 'services/api/endpoints/queue';
import { useSocketIO } from 'services/events/useSocketIO';
const queueCountArg = { destination: 'canvas' };
/**
* GlobalHookIsolator is a logical component that runs global hooks in an isolated component, so that they do not
* cause needless re-renders of any other components.
@@ -38,6 +44,11 @@ export const GlobalHookIsolator = memo(
useGlobalHotkeys();
useGetOpenAPISchemaQuery();
useSyncLoggingConfig();
useCloseChakraTooltipsOnDragFix();
// Persistent subscription to the queue counts query - canvas relies on this to know if there are pending
// and/or in progress canvas sessions.
useGetQueueCountsByDestinationQuery(queueCountArg);
useEffect(() => {
i18n.changeLanguage(language);
@@ -61,6 +72,12 @@ export const GlobalHookIsolator = memo(
useWorkflowBuilderWatcher();
useDynamicPromptsWatcher();
useRegisteredHotkeys({
id: 'toggleViewer',
category: 'viewer',
callback: toggleImageViewer,
});
return null;
}
);

View File

@@ -6,15 +6,17 @@ import {
NewGallerySessionDialog,
} from 'features/controlLayers/components/NewSessionConfirmationAlertDialog';
import { CanvasManagerProviderGate } from 'features/controlLayers/contexts/CanvasManagerProviderGate';
import DeleteImageModal from 'features/deleteImageModal/components/DeleteImageModal';
import { DeleteImageModal } from 'features/deleteImageModal/components/DeleteImageModal';
import { FullscreenDropzone } from 'features/dnd/FullscreenDropzone';
import { DynamicPromptsModal } from 'features/dynamicPrompts/components/DynamicPromptsPreviewModal';
import DeleteBoardModal from 'features/gallery/components/Boards/DeleteBoardModal';
import { ImageContextMenu } from 'features/gallery/components/ImageContextMenu/ImageContextMenu';
import { ImageViewerModal } from 'features/gallery/components/ImageViewer/ImageViewer';
import { ShareWorkflowModal } from 'features/nodes/components/sidePanel/workflow/WorkflowLibrary/ShareWorkflowModal';
import { WorkflowLibraryModal } from 'features/nodes/components/sidePanel/workflow/WorkflowLibrary/WorkflowLibraryModal';
import { CancelAllExceptCurrentQueueItemConfirmationAlertDialog } from 'features/queue/components/CancelAllExceptCurrentQueueItemConfirmationAlertDialog';
import { ClearQueueConfirmationsAlertDialog } from 'features/queue/components/ClearQueueConfirmationAlertDialog';
import { DeleteAllExceptCurrentQueueItemConfirmationAlertDialog } from 'features/queue/components/DeleteAllExceptCurrentQueueItemConfirmationAlertDialog';
import { DeleteStylePresetDialog } from 'features/stylePresets/components/DeleteStylePresetDialog';
import { StylePresetModal } from 'features/stylePresets/components/StylePresetForm/StylePresetModal';
import RefreshAfterResetModal from 'features/system/components/SettingsModal/RefreshAfterResetModal';
@@ -39,6 +41,7 @@ export const GlobalModalIsolator = memo(() => {
<StylePresetModal />
<WorkflowLibraryModal />
<CancelAllExceptCurrentQueueItemConfirmationAlertDialog />
<DeleteAllExceptCurrentQueueItemConfirmationAlertDialog />
<ClearQueueConfirmationsAlertDialog />
<NewWorkflowConfirmationAlertDialog />
<LoadWorkflowConfirmationAlertDialog />
@@ -58,6 +61,7 @@ export const GlobalModalIsolator = memo(() => {
<CanvasPasteModal />
</CanvasManagerProviderGate>
<LoadWorkflowFromGraphModal />
<ImageViewerModal />
</>
);
});

View File

@@ -3,8 +3,8 @@ import { useAppStore } from 'app/store/storeHooks';
import { useAssertSingleton } from 'common/hooks/useAssertSingleton';
import { withResultAsync } from 'common/util/result';
import { canvasReset } from 'features/controlLayers/store/actions';
import { settingsSendToCanvasChanged } from 'features/controlLayers/store/canvasSettingsSlice';
import { rasterLayerAdded } from 'features/controlLayers/store/canvasSlice';
import { paramsReset } from 'features/controlLayers/store/paramsSlice';
import type { CanvasRasterLayerState } from 'features/controlLayers/store/types';
import { imageDTOToImageObject } from 'features/controlLayers/store/util';
import { $imageViewer } from 'features/gallery/components/ImageViewer/useImageViewer';
@@ -93,8 +93,6 @@ export const useStudioInitAction = (action?: StudioInitAction) => {
};
store.dispatch(canvasReset());
store.dispatch(rasterLayerAdded({ overrides, isSelected: true }));
store.dispatch(settingsSendToCanvasChanged(true));
store.dispatch(setActiveTab('canvas'));
store.dispatch(sentImageToCanvas());
$imageViewer.set(false);
toast({
@@ -118,9 +116,9 @@ export const useStudioInitAction = (action?: StudioInitAction) => {
return;
}
const metadata = getImageMetadataResult.value;
store.dispatch(canvasReset());
// This shows a toast
await parseAndRecallAllMetadata(metadata, true);
store.dispatch(setActiveTab('canvas'));
},
[store, t]
);
@@ -164,15 +162,13 @@ export const useStudioInitAction = (action?: StudioInitAction) => {
switch (destination) {
case 'generation':
// Go to the canvas tab, open the image viewer, and enable send-to-gallery mode
store.dispatch(setActiveTab('canvas'));
store.dispatch(paramsReset());
store.dispatch(activeTabCanvasRightPanelChanged('gallery'));
store.dispatch(settingsSendToCanvasChanged(false));
$imageViewer.set(true);
break;
case 'canvas':
// Go to the canvas tab, close the image viewer, and disable send-to-gallery mode
store.dispatch(setActiveTab('canvas'));
store.dispatch(settingsSendToCanvasChanged(true));
store.dispatch(canvasReset());
$imageViewer.set(false);
break;
case 'workflows':

View File

@@ -1,7 +1,6 @@
import type { TypedStartListening } from '@reduxjs/toolkit';
import { addListener, createListenerMiddleware } from '@reduxjs/toolkit';
import { addAdHocPostProcessingRequestedListener } from 'app/store/middleware/listenerMiddleware/listeners/addAdHocPostProcessingRequestedListener';
import { addStagingListeners } from 'app/store/middleware/listenerMiddleware/listeners/addCommitStagingAreaImageListener';
import { addAnyEnqueuedListener } from 'app/store/middleware/listenerMiddleware/listeners/anyEnqueued';
import { addAppConfigReceivedListener } from 'app/store/middleware/listenerMiddleware/listeners/appConfigReceived';
import { addAppStartedListener } from 'app/store/middleware/listenerMiddleware/listeners/appStarted';
@@ -10,15 +9,14 @@ import { addDeleteBoardAndImagesFulfilledListener } from 'app/store/middleware/l
import { addBoardIdSelectedListener } from 'app/store/middleware/listenerMiddleware/listeners/boardIdSelected';
import { addBulkDownloadListeners } from 'app/store/middleware/listenerMiddleware/listeners/bulkDownload';
import { addEnqueueRequestedLinear } from 'app/store/middleware/listenerMiddleware/listeners/enqueueRequestedLinear';
import { addEnsureImageIsSelectedListener } from 'app/store/middleware/listenerMiddleware/listeners/ensureImageIsSelectedListener';
import { addGalleryImageClickedListener } from 'app/store/middleware/listenerMiddleware/listeners/galleryImageClicked';
import { addGalleryOffsetChangedListener } from 'app/store/middleware/listenerMiddleware/listeners/galleryOffsetChanged';
import { addGetOpenAPISchemaListener } from 'app/store/middleware/listenerMiddleware/listeners/getOpenAPISchema';
import { addImageAddedToBoardFulfilledListener } from 'app/store/middleware/listenerMiddleware/listeners/imageAddedToBoard';
import { addImageDeletionListeners } from 'app/store/middleware/listenerMiddleware/listeners/imageDeletionListeners';
import { addImageRemovedFromBoardFulfilledListener } from 'app/store/middleware/listenerMiddleware/listeners/imageRemovedFromBoard';
import { addImagesStarredListener } from 'app/store/middleware/listenerMiddleware/listeners/imagesStarred';
import { addImagesUnstarredListener } from 'app/store/middleware/listenerMiddleware/listeners/imagesUnstarred';
import { addImageToDeleteSelectedListener } from 'app/store/middleware/listenerMiddleware/listeners/imageToDeleteSelected';
import { addImageUploadedFulfilledListener } from 'app/store/middleware/listenerMiddleware/listeners/imageUploaded';
import { addModelSelectedListener } from 'app/store/middleware/listenerMiddleware/listeners/modelSelected';
import { addModelsLoadedListener } from 'app/store/middleware/listenerMiddleware/listeners/modelsLoaded';
@@ -47,9 +45,7 @@ export const addAppListener = addListener.withTypes<RootState, AppDispatch>();
addImageUploadedFulfilledListener(startAppListening);
// Image deleted
addImageDeletionListeners(startAppListening);
addDeleteBoardAndImagesFulfilledListener(startAppListening);
addImageToDeleteSelectedListener(startAppListening);
// Image starred
addImagesStarredListener(startAppListening);
@@ -65,9 +61,6 @@ addEnqueueRequestedUpscale(startAppListening);
addAnyEnqueuedListener(startAppListening);
addBatchEnqueuedListener(startAppListening);
// Canvas actions
addStagingListeners(startAppListening);
// Socket.IO
addSocketConnectedEventListener(startAppListening);
@@ -95,3 +88,5 @@ addAppConfigReceivedListener(startAppListening);
addAdHocPostProcessingRequestedListener(startAppListening);
addSetDefaultSettingsListener(startAppListening);
addEnsureImageIsSelectedListener(startAppListening);

View File

@@ -1,46 +0,0 @@
import { isAnyOf } from '@reduxjs/toolkit';
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { canvasReset, newSessionRequested } from 'features/controlLayers/store/actions';
import { stagingAreaReset } from 'features/controlLayers/store/canvasStagingAreaSlice';
import { toast } from 'features/toast/toast';
import { t } from 'i18next';
import { queueApi } from 'services/api/endpoints/queue';
const log = logger('canvas');
const matchCanvasOrStagingAreaReset = isAnyOf(stagingAreaReset, canvasReset, newSessionRequested);
export const addStagingListeners = (startAppListening: AppStartListening) => {
startAppListening({
matcher: matchCanvasOrStagingAreaReset,
effect: async (_, { dispatch }) => {
try {
const req = dispatch(
queueApi.endpoints.cancelByBatchDestination.initiate(
{ destination: 'canvas' },
{ fixedCacheKey: 'cancelByBatchOrigin' }
)
);
const { canceled } = await req.unwrap();
req.reset();
if (canceled > 0) {
log.debug(`Canceled ${canceled} canvas batches`);
toast({
id: 'CANCEL_BATCH_SUCCEEDED',
title: t('queue.cancelBatchSucceeded'),
status: 'success',
});
}
} catch {
log.error('Failed to cancel canvas batches');
toast({
id: 'CANCEL_BATCH_FAILED',
title: t('queue.cancelBatchFailed'),
status: 'error',
});
}
},
});
};

View File

@@ -1,6 +1,7 @@
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { selectRefImagesSlice } from 'features/controlLayers/store/refImagesSlice';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import { getImageUsage } from 'features/deleteImageModal/store/selectors';
import { getImageUsage } from 'features/deleteImageModal/store/state';
import { nodeEditorReset } from 'features/nodes/store/nodesSlice';
import { selectNodesSlice } from 'features/nodes/store/selectors';
import { selectUpscaleSlice } from 'features/parameters/store/upscaleSlice';
@@ -20,9 +21,10 @@ export const addDeleteBoardAndImagesFulfilledListener = (startAppListening: AppS
const nodes = selectNodesSlice(state);
const canvas = selectCanvasSlice(state);
const upscale = selectUpscaleSlice(state);
const refImages = selectRefImagesSlice(state);
deleted_images.forEach((image_name) => {
const imageUsage = getImageUsage(nodes, canvas, upscale, image_name);
const imageUsage = getImageUsage(nodes, canvas, upscale, refImages, image_name);
if (imageUsage.isNodesImage && !wasNodeEditorReset) {
dispatch(nodeEditorReset());

View File

@@ -5,6 +5,12 @@ import type { AppStartListening } from 'app/store/middleware/listenerMiddleware'
import { extractMessageFromAssertionError } from 'common/util/extractMessageFromAssertionError';
import { withResult, withResultAsync } from 'common/util/result';
import { parseify } from 'common/util/serialize';
import {
canvasSessionIdCreated,
generateSessionIdCreated,
selectCanvasSessionId,
selectGenerateSessionId,
} from 'features/controlLayers/store/canvasStagingAreaSlice';
import { $canvasManager } from 'features/controlLayers/store/ephemeral';
import { prepareLinearUIBatch } from 'features/nodes/util/graph/buildLinearBatchConfig';
import { buildChatGPT4oGraph } from 'features/nodes/util/graph/generation/buildChatGPT4oGraph';
@@ -17,6 +23,7 @@ import { buildSD3Graph } from 'features/nodes/util/graph/generation/buildSD3Grap
import { buildSDXLGraph } from 'features/nodes/util/graph/generation/buildSDXLGraph';
import { UnsupportedGenerationModeError } from 'features/nodes/util/graph/types';
import { toast } from 'features/toast/toast';
import { selectActiveTab } from 'features/ui/store/uiSelectors';
import { serializeError } from 'serialize-error';
import { enqueueMutationFixedCacheKeyOptions, queueApi } from 'services/api/endpoints/queue';
import { assert, AssertionError } from 'tsafe';
@@ -30,11 +37,34 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
actionCreator: enqueueRequestedCanvas,
effect: async (action, { getState, dispatch }) => {
log.debug('Enqueue requested');
const tab = selectActiveTab(getState());
let sessionId = null;
if (tab === 'generate') {
sessionId = selectGenerateSessionId(getState());
if (!sessionId) {
dispatch(generateSessionIdCreated());
sessionId = selectGenerateSessionId(getState());
}
} else if (tab === 'canvas') {
sessionId = selectCanvasSessionId(getState());
if (!sessionId) {
dispatch(canvasSessionIdCreated());
sessionId = selectCanvasSessionId(getState());
}
} else {
log.warn(`Enqueue requested in unsupported tab ${tab}`);
return;
}
const state = getState();
const destination = sessionId;
assert(destination !== null);
const { prepend } = action.payload;
const manager = $canvasManager.get();
assert(manager, 'No canvas manager');
// assert(manager, 'No canvas manager');
const model = state.params.model;
assert(model, 'No model found in state');
@@ -87,8 +117,6 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
const { g, seedFieldIdentifier, positivePromptFieldIdentifier } = buildGraphResult.value;
const destination = state.canvasSettings.sendToCanvas ? 'canvas' : 'gallery';
const prepareBatchResult = withResult(() =>
prepareLinearUIBatch({
state,

View File

@@ -0,0 +1,16 @@
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { imageSelected } from 'features/gallery/store/gallerySlice';
import { imagesApi } from 'services/api/endpoints/images';
export const addEnsureImageIsSelectedListener = (startAppListening: AppStartListening) => {
// When we list images, if no image is selected, select the first one.
startAppListening({
matcher: imagesApi.endpoints.listImages.matchFulfilled,
effect: (action, { dispatch, getState }) => {
const selection = getState().gallery.selection;
if (selection.length === 0) {
dispatch(imageSelected(action.payload.items[0] ?? null));
}
},
});
};

View File

@@ -1,221 +0,0 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import type { AppDispatch, RootState } from 'app/store/store';
import { entityDeleted, referenceImageIPAdapterImageChanged } from 'features/controlLayers/store/canvasSlice';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import { getEntityIdentifier } from 'features/controlLayers/store/types';
import { imageDeletionConfirmed } from 'features/deleteImageModal/store/actions';
import { isModalOpenChanged } from 'features/deleteImageModal/store/slice';
import { selectListImagesQueryArgs } from 'features/gallery/store/gallerySelectors';
import { imageSelected } from 'features/gallery/store/gallerySlice';
import { fieldImageCollectionValueChanged, fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { isImageFieldCollectionInputInstance, isImageFieldInputInstance } from 'features/nodes/types/field';
import { isInvocationNode } from 'features/nodes/types/invocation';
import { forEach, intersectionBy } from 'lodash-es';
import { imagesApi } from 'services/api/endpoints/images';
import type { ImageDTO } from 'services/api/types';
import type { Param0 } from 'tsafe';
const log = logger('gallery');
//TODO(psyche): handle image deletion (canvas staging area?)
// Some utils to delete images from different parts of the app
const deleteNodesImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
const actions: Param0<typeof dispatch>[] = [];
state.nodes.present.nodes.forEach((node) => {
if (!isInvocationNode(node)) {
return;
}
forEach(node.data.inputs, (input) => {
if (isImageFieldInputInstance(input) && input.value?.image_name === imageDTO.image_name) {
actions.push(
fieldImageValueChanged({
nodeId: node.data.id,
fieldName: input.name,
value: undefined,
})
);
return;
}
if (isImageFieldCollectionInputInstance(input)) {
actions.push(
fieldImageCollectionValueChanged({
nodeId: node.data.id,
fieldName: input.name,
value: input.value?.filter((value) => value?.image_name !== imageDTO.image_name),
})
);
}
});
});
actions.forEach(dispatch);
};
const deleteControlLayerImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
selectCanvasSlice(state).controlLayers.entities.forEach(({ id, objects }) => {
let shouldDelete = false;
for (const obj of objects) {
if (obj.type === 'image' && obj.image.image_name === imageDTO.image_name) {
shouldDelete = true;
break;
}
}
if (shouldDelete) {
dispatch(entityDeleted({ entityIdentifier: { id, type: 'control_layer' } }));
}
});
};
const deleteReferenceImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
selectCanvasSlice(state).referenceImages.entities.forEach((entity) => {
if (entity.ipAdapter.image?.image_name === imageDTO.image_name) {
dispatch(referenceImageIPAdapterImageChanged({ entityIdentifier: getEntityIdentifier(entity), imageDTO: null }));
}
});
};
const deleteRasterLayerImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
selectCanvasSlice(state).rasterLayers.entities.forEach(({ id, objects }) => {
let shouldDelete = false;
for (const obj of objects) {
if (obj.type === 'image' && obj.image.image_name === imageDTO.image_name) {
shouldDelete = true;
break;
}
}
if (shouldDelete) {
dispatch(entityDeleted({ entityIdentifier: { id, type: 'raster_layer' } }));
}
});
};
export const addImageDeletionListeners = (startAppListening: AppStartListening) => {
// Handle single image deletion
startAppListening({
actionCreator: imageDeletionConfirmed,
effect: async (action, { dispatch, getState }) => {
const { imageDTOs, imagesUsage } = action.payload;
if (imageDTOs.length !== 1 || imagesUsage.length !== 1) {
// handle multiples in separate listener
return;
}
const imageDTO = imageDTOs[0];
const imageUsage = imagesUsage[0];
if (!imageDTO || !imageUsage) {
// satisfy noUncheckedIndexedAccess
return;
}
try {
const state = getState();
await dispatch(imagesApi.endpoints.deleteImage.initiate(imageDTO)).unwrap();
if (state.gallery.selection.some((i) => i.image_name === imageDTO.image_name)) {
// The deleted image was a selected image, so we need to select the next image
const newSelection = state.gallery.selection.filter((i) => i.image_name !== imageDTO.image_name);
if (newSelection.length > 0) {
return;
}
// Get the current list of images and select the same index
const baseQueryArgs = selectListImagesQueryArgs(state);
const data = imagesApi.endpoints.listImages.select(baseQueryArgs)(state).data;
if (data) {
const deletedImageIndex = data.items.findIndex((i) => i.image_name === imageDTO.image_name);
const nextImage = data.items[deletedImageIndex + 1] ?? data.items[0] ?? null;
if (nextImage?.image_name === imageDTO.image_name) {
// If the next image is the same as the deleted one, it means it was the last image, so reset the selection
dispatch(imageSelected(null));
} else {
dispatch(imageSelected(nextImage));
}
}
}
deleteNodesImages(state, dispatch, imageDTO);
deleteReferenceImages(state, dispatch, imageDTO);
deleteRasterLayerImages(state, dispatch, imageDTO);
deleteControlLayerImages(state, dispatch, imageDTO);
} catch {
// no-op
} finally {
dispatch(isModalOpenChanged(false));
}
},
});
// Handle multiple image deletion
startAppListening({
actionCreator: imageDeletionConfirmed,
effect: async (action, { dispatch, getState }) => {
const { imageDTOs, imagesUsage } = action.payload;
if (imageDTOs.length <= 1 || imagesUsage.length <= 1) {
// handle singles in separate listener
return;
}
try {
const state = getState();
await dispatch(imagesApi.endpoints.deleteImages.initiate({ imageDTOs })).unwrap();
if (intersectionBy(state.gallery.selection, imageDTOs, 'image_name').length > 0) {
// Some selected images were deleted, need to select the next image
const queryArgs = selectListImagesQueryArgs(state);
const { data } = imagesApi.endpoints.listImages.select(queryArgs)(state);
if (data) {
// When we delete multiple images, we clear the selection. Then, the next time we load images, we will
// select the first one. This is handled below in the listener for `imagesApi.endpoints.listImages.matchFulfilled`.
dispatch(imageSelected(null));
}
}
// We need to reset the features where the image is in use - none of these work if their image(s) don't exist
imageDTOs.forEach((imageDTO) => {
deleteNodesImages(state, dispatch, imageDTO);
deleteControlLayerImages(state, dispatch, imageDTO);
deleteReferenceImages(state, dispatch, imageDTO);
deleteRasterLayerImages(state, dispatch, imageDTO);
});
} catch {
// no-op
} finally {
dispatch(isModalOpenChanged(false));
}
},
});
// When we list images, if no image is selected, select the first one.
startAppListening({
matcher: imagesApi.endpoints.listImages.matchFulfilled,
effect: (action, { dispatch, getState }) => {
const selection = getState().gallery.selection;
if (selection.length === 0) {
dispatch(imageSelected(action.payload.items[0] ?? null));
}
},
});
startAppListening({
matcher: imagesApi.endpoints.deleteImage.matchFulfilled,
effect: (action) => {
log.debug({ imageDTO: action.meta.arg.originalArgs }, 'Image deleted');
},
});
startAppListening({
matcher: imagesApi.endpoints.deleteImage.matchRejected,
effect: (action) => {
log.debug({ imageDTO: action.meta.arg.originalArgs }, 'Unable to delete image');
},
});
};

View File

@@ -1,32 +0,0 @@
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { imageDeletionConfirmed } from 'features/deleteImageModal/store/actions';
import { selectImageUsage } from 'features/deleteImageModal/store/selectors';
import { imagesToDeleteSelected, isModalOpenChanged } from 'features/deleteImageModal/store/slice';
export const addImageToDeleteSelectedListener = (startAppListening: AppStartListening) => {
startAppListening({
actionCreator: imagesToDeleteSelected,
effect: (action, { dispatch, getState }) => {
const imageDTOs = action.payload;
const state = getState();
const { shouldConfirmOnDelete } = state.system;
const imagesUsage = selectImageUsage(getState());
const isImageInUse =
imagesUsage.some((i) => i.isRasterLayerImage) ||
imagesUsage.some((i) => i.isControlLayerImage) ||
imagesUsage.some((i) => i.isReferenceImage) ||
imagesUsage.some((i) => i.isInpaintMaskImage) ||
imagesUsage.some((i) => i.isUpscaleImage) ||
imagesUsage.some((i) => i.isNodesImage) ||
imagesUsage.some((i) => i.isRegionalGuidanceImage);
if (shouldConfirmOnDelete || isImageInUse) {
dispatch(isModalOpenChanged(true));
return;
}
dispatch(imageDeletionConfirmed({ imageDTOs, imagesUsage }));
},
});
};

View File

@@ -1,11 +1,7 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import type { AppDispatch, RootState } from 'app/store/store';
import {
controlLayerModelChanged,
referenceImageIPAdapterModelChanged,
rgIPAdapterModelChanged,
} from 'features/controlLayers/store/canvasSlice';
import { controlLayerModelChanged, rgRefImageModelChanged } from 'features/controlLayers/store/canvasSlice';
import { loraDeleted } from 'features/controlLayers/store/lorasSlice';
import {
clipEmbedModelSelected,
@@ -15,8 +11,9 @@ import {
t5EncoderModelSelected,
vaeSelected,
} from 'features/controlLayers/store/paramsSlice';
import { refImageModelChanged, selectRefImagesSlice } from 'features/controlLayers/store/refImagesSlice';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import { getEntityIdentifier } from 'features/controlLayers/store/types';
import { getEntityIdentifier, isFLUXReduxConfig, isIPAdapterConfig } from 'features/controlLayers/store/types';
import { modelSelected } from 'features/parameters/store/actions';
import { postProcessingModelChanged, upscaleModelChanged } from 'features/parameters/store/upscaleSlice';
import {
@@ -210,12 +207,12 @@ const handleControlAdapterModels: ModelHandler = (models, state, dispatch, log)
const handleIPAdapterModels: ModelHandler = (models, state, dispatch, log) => {
const ipaModels = models.filter(isIPAdapterModelConfig);
selectCanvasSlice(state).referenceImages.entities.forEach((entity) => {
if (entity.ipAdapter.type !== 'ip_adapter') {
selectRefImagesSlice(state).entities.forEach((entity) => {
if (!isIPAdapterConfig(entity.config)) {
return;
}
const selectedIPAdapterModel = entity.ipAdapter.model;
const selectedIPAdapterModel = entity.config.model;
// `null` is a valid IP adapter model - no need to do anything.
if (!selectedIPAdapterModel) {
return;
@@ -225,16 +222,16 @@ const handleIPAdapterModels: ModelHandler = (models, state, dispatch, log) => {
return;
}
log.debug({ selectedIPAdapterModel }, 'Selected IP adapter model is not available, clearing');
dispatch(referenceImageIPAdapterModelChanged({ entityIdentifier: getEntityIdentifier(entity), modelConfig: null }));
dispatch(refImageModelChanged({ id: entity.id, modelConfig: null }));
});
selectCanvasSlice(state).regionalGuidance.entities.forEach((entity) => {
entity.referenceImages.forEach(({ id: referenceImageId, ipAdapter }) => {
if (ipAdapter.type !== 'ip_adapter') {
entity.referenceImages.forEach(({ id: referenceImageId, config }) => {
if (!isIPAdapterConfig(config)) {
return;
}
const selectedIPAdapterModel = ipAdapter.model;
const selectedIPAdapterModel = config.model;
// `null` is a valid IP adapter model - no need to do anything.
if (!selectedIPAdapterModel) {
return;
@@ -245,7 +242,7 @@ const handleIPAdapterModels: ModelHandler = (models, state, dispatch, log) => {
}
log.debug({ selectedIPAdapterModel }, 'Selected IP adapter model is not available, clearing');
dispatch(
rgIPAdapterModelChanged({ entityIdentifier: getEntityIdentifier(entity), referenceImageId, modelConfig: null })
rgRefImageModelChanged({ entityIdentifier: getEntityIdentifier(entity), referenceImageId, modelConfig: null })
);
});
});
@@ -254,11 +251,11 @@ const handleIPAdapterModels: ModelHandler = (models, state, dispatch, log) => {
const handleFLUXReduxModels: ModelHandler = (models, state, dispatch, log) => {
const fluxReduxModels = models.filter(isFluxReduxModelConfig);
selectCanvasSlice(state).referenceImages.entities.forEach((entity) => {
if (entity.ipAdapter.type !== 'flux_redux') {
selectRefImagesSlice(state).entities.forEach((entity) => {
if (!isFLUXReduxConfig(entity.config)) {
return;
}
const selectedFLUXReduxModel = entity.ipAdapter.model;
const selectedFLUXReduxModel = entity.config.model;
// `null` is a valid FLUX Redux model - no need to do anything.
if (!selectedFLUXReduxModel) {
return;
@@ -268,16 +265,16 @@ const handleFLUXReduxModels: ModelHandler = (models, state, dispatch, log) => {
return;
}
log.debug({ selectedFLUXReduxModel }, 'Selected FLUX Redux model is not available, clearing');
dispatch(referenceImageIPAdapterModelChanged({ entityIdentifier: getEntityIdentifier(entity), modelConfig: null }));
dispatch(refImageModelChanged({ id: entity.id, modelConfig: null }));
});
selectCanvasSlice(state).regionalGuidance.entities.forEach((entity) => {
entity.referenceImages.forEach(({ id: referenceImageId, ipAdapter }) => {
if (ipAdapter.type !== 'flux_redux') {
entity.referenceImages.forEach(({ id: referenceImageId, config }) => {
if (!isFLUXReduxConfig(config)) {
return;
}
const selectedFLUXReduxModel = ipAdapter.model;
const selectedFLUXReduxModel = config.model;
// `null` is a valid FLUX Redux model - no need to do anything.
if (!selectedFLUXReduxModel) {
return;
@@ -288,7 +285,7 @@ const handleFLUXReduxModels: ModelHandler = (models, state, dispatch, log) => {
}
log.debug({ selectedFLUXReduxModel }, 'Selected FLUX Redux model is not available, clearing');
dispatch(
rgIPAdapterModelChanged({ entityIdentifier: getEntityIdentifier(entity), referenceImageId, modelConfig: null })
rgRefImageModelChanged({ entityIdentifier: getEntityIdentifier(entity), referenceImageId, modelConfig: null })
);
});
});

View File

@@ -8,12 +8,12 @@ import { changeBoardModalSlice } from 'features/changeBoardModal/store/slice';
import { canvasSettingsPersistConfig, canvasSettingsSlice } from 'features/controlLayers/store/canvasSettingsSlice';
import { canvasPersistConfig, canvasSlice, canvasUndoableConfig } from 'features/controlLayers/store/canvasSlice';
import {
canvasSessionSlice,
canvasStagingAreaPersistConfig,
canvasStagingAreaSlice,
} from 'features/controlLayers/store/canvasStagingAreaSlice';
import { lorasPersistConfig, lorasSlice } from 'features/controlLayers/store/lorasSlice';
import { paramsPersistConfig, paramsSlice } from 'features/controlLayers/store/paramsSlice';
import { deleteImageModalSlice } from 'features/deleteImageModal/store/slice';
import { refImagesPersistConfig, refImagesSlice } from 'features/controlLayers/store/refImagesSlice';
import { dynamicPromptsPersistConfig, dynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { galleryPersistConfig, gallerySlice } from 'features/gallery/store/gallerySlice';
import { hrfPersistConfig, hrfSlice } from 'features/hrf/store/hrfSlice';
@@ -54,7 +54,6 @@ const allReducers = {
[configSlice.name]: configSlice.reducer,
[uiSlice.name]: uiSlice.reducer,
[dynamicPromptsSlice.name]: dynamicPromptsSlice.reducer,
[deleteImageModalSlice.name]: deleteImageModalSlice.reducer,
[changeBoardModalSlice.name]: changeBoardModalSlice.reducer,
[modelManagerV2Slice.name]: modelManagerV2Slice.reducer,
[queueSlice.name]: queueSlice.reducer,
@@ -65,9 +64,10 @@ const allReducers = {
[stylePresetSlice.name]: stylePresetSlice.reducer,
[paramsSlice.name]: paramsSlice.reducer,
[canvasSettingsSlice.name]: canvasSettingsSlice.reducer,
[canvasStagingAreaSlice.name]: canvasStagingAreaSlice.reducer,
[canvasSessionSlice.name]: canvasSessionSlice.reducer,
[lorasSlice.name]: lorasSlice.reducer,
[workflowLibrarySlice.name]: workflowLibrarySlice.reducer,
[refImagesSlice.name]: refImagesSlice.reducer,
};
const rootReducer = combineReducers(allReducers);
@@ -113,6 +113,7 @@ const persistConfigs: { [key in keyof typeof allReducers]?: PersistConfig } = {
[canvasStagingAreaPersistConfig.name]: canvasStagingAreaPersistConfig,
[lorasPersistConfig.name]: lorasPersistConfig,
[workflowLibraryPersistConfig.name]: workflowLibraryPersistConfig,
[refImagesSlice.name]: refImagesPersistConfig,
};
const unserialize: UnserializeFunction = (data, key) => {
@@ -175,6 +176,7 @@ export const createStore = (uniqueStoreKey?: string, persist = true) =>
.concat(api.middleware)
.concat(dynamicMiddlewares)
.concat(authToastMiddleware)
// .concat(getDebugLoggerMiddleware())
.prepend(listenerMiddleware.middleware),
enhancers: (getDefaultEnhancers) => {
const _enhancers = getDefaultEnhancers().concat(autoBatchEnhancer());
@@ -209,3 +211,4 @@ export type RootState = ReturnType<AppStore['getState']>;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export type AppThunkDispatch = ThunkDispatch<RootState, any, UnknownAction>;
export type AppDispatch = ReturnType<typeof createStore>['dispatch'];
export type AppGetState = ReturnType<typeof createStore>['getState'];

View File

@@ -11,13 +11,14 @@ import { memo, useEffect, useMemo, useState } from 'react';
type Props = PropsWithChildren & {
maxHeight?: ChakraProps['maxHeight'];
maxWidth?: ChakraProps['maxWidth'];
overflowX?: 'hidden' | 'scroll';
overflowY?: 'hidden' | 'scroll';
};
const styles: CSSProperties = { position: 'absolute', top: 0, left: 0, right: 0, bottom: 0 };
const ScrollableContent = ({ children, maxHeight, overflowX = 'hidden', overflowY = 'scroll' }: Props) => {
const ScrollableContent = ({ children, maxHeight, maxWidth, overflowX = 'hidden', overflowY = 'scroll' }: Props) => {
const overlayscrollbarsOptions = useMemo(
() => getOverlayScrollbarsParams({ overflowX, overflowY }).options,
[overflowX, overflowY]
@@ -44,7 +45,7 @@ const ScrollableContent = ({ children, maxHeight, overflowX = 'hidden', overflow
}, [os]);
return (
<Flex w="full" h="full" maxHeight={maxHeight} position="relative">
<Flex w="full" h="full" maxHeight={maxHeight} maxWidth={maxWidth} position="relative">
<Box position="absolute" top={0} left={0} right={0} bottom={0}>
<OverlayScrollbarsComponent ref={osRef} style={styles} options={overlayscrollbarsOptions}>
{children}

View File

@@ -73,7 +73,7 @@ export const useBoolean = (initialValue: boolean): UseBoolean => {
};
};
type UseDisclosure = {
export type UseDisclosure = {
isOpen: boolean;
open: () => void;
close: () => void;

View File

@@ -0,0 +1,19 @@
import { useEffect } from 'react';
// Chakra tooltips sometimes open during a drag operation. We can fix it by dispatching an event that chakra listens
// for to close tooltips. It's reaching into the internals but it seems to work.
const closeEventName = 'chakra-ui:close-tooltip';
export const useCloseChakraTooltipsOnDragFix = () => {
useEffect(() => {
const closeTooltips = () => {
document.dispatchEvent(new window.CustomEvent(closeEventName));
};
document.addEventListener('drag', closeTooltips);
return () => {
document.removeEventListener('drag', closeTooltips);
};
}, []);
};

View File

@@ -0,0 +1,165 @@
/* eslint-disable @typescript-eslint/no-explicit-any */
/**
* Adapted from https://github.com/chakra-ui/chakra-ui/blob/v2/packages/hooks/src/use-outside-click.ts
*
* The main change here is to support filtering of outside clicks via a `filter` function.
*
* This lets us work around issues with portals and components like popovers, which typically close on an outside click.
*
* For example, consider a popover that has a custom drop-down component inside it, which uses a portal to render
* the drop-down options. The original outside click handler would close the popover when clicking on the drop-down options,
* because the click is outside the popover - but we expect the popover to stay open in this case.
*
* A filter function like this can fix that:
*
* ```ts
* const filter = (el: HTMLElement) => el.className.includes('chakra-portal') || el.id.includes('react-select')
* ```
*
* This ignores clicks on react-select-based drop-downs and Chakra UI portals and is used as the default filter.
*/
import { useCallback, useEffect, useRef } from 'react';
type FilterFunction = (el: HTMLElement | SVGElement) => boolean;
export function useCallbackRef<T extends (...args: any[]) => any>(
callback: T | undefined,
deps: React.DependencyList = []
) {
const callbackRef = useRef(callback);
useEffect(() => {
callbackRef.current = callback;
});
// eslint-disable-next-line react-hooks/exhaustive-deps
return useCallback(((...args) => callbackRef.current?.(...args)) as T, deps);
}
export interface UseOutsideClickProps {
/**
* Whether the hook is enabled
*/
enabled?: boolean;
/**
* The reference to a DOM element.
*/
ref: React.RefObject<HTMLElement | null>;
/**
* Function invoked when a click is triggered outside the referenced element.
*/
handler?: (e: Event) => void;
/**
* A function that filters out elements whose clicks should not be treated as outside clicks.
*
* If omitted, a default filter function that ignores clicks in Chakra UI portals and react-select components is used.
*/
filter?: FilterFunction;
}
export const DEFAULT_FILTER: FilterFunction = (el) => {
if (el instanceof SVGElement) {
// SVGElement's type appears to be incorrect. Its className is not a string, which causes `includes` to fail.
// Let's assume that SVG elements with a class name are not part of the portal and should not be filtered.
return false;
}
return el.className.includes('chakra-portal') || el.id.includes('react-select');
};
/**
* Example, used in components like Dialogs and Popovers, so they can close
* when a user clicks outside them.
*/
export function useFilterableOutsideClick(props: UseOutsideClickProps) {
const { ref, handler, enabled = true, filter = DEFAULT_FILTER } = props;
const savedHandler = useCallbackRef(handler);
const stateRef = useRef({
isPointerDown: false,
ignoreEmulatedMouseEvents: false,
});
const state = stateRef.current;
useEffect(() => {
if (!enabled) {
return;
}
const onPointerDown: any = (e: PointerEvent) => {
if (isValidEvent(e, ref, filter)) {
state.isPointerDown = true;
}
};
const onMouseUp: any = (event: MouseEvent) => {
if (state.ignoreEmulatedMouseEvents) {
state.ignoreEmulatedMouseEvents = false;
return;
}
if (state.isPointerDown && handler && isValidEvent(event, ref)) {
state.isPointerDown = false;
savedHandler(event);
}
};
const onTouchEnd = (event: TouchEvent) => {
state.ignoreEmulatedMouseEvents = true;
if (handler && state.isPointerDown && isValidEvent(event, ref)) {
state.isPointerDown = false;
savedHandler(event);
}
};
const doc = getOwnerDocument(ref.current);
doc.addEventListener('mousedown', onPointerDown, true);
doc.addEventListener('mouseup', onMouseUp, true);
doc.addEventListener('touchstart', onPointerDown, true);
doc.addEventListener('touchend', onTouchEnd, true);
return () => {
doc.removeEventListener('mousedown', onPointerDown, true);
doc.removeEventListener('mouseup', onMouseUp, true);
doc.removeEventListener('touchstart', onPointerDown, true);
doc.removeEventListener('touchend', onTouchEnd, true);
};
}, [handler, ref, savedHandler, state, enabled, filter]);
}
function isValidEvent(event: Event, ref: React.RefObject<HTMLElement | null>, filter?: FilterFunction): boolean {
const target = (event.composedPath?.()[0] ?? event.target) as HTMLElement;
if (target) {
const doc = getOwnerDocument(target);
if (!doc.contains(target)) {
return false;
}
}
if (ref.current?.contains(target)) {
return false;
}
// This is the main logic change from the original hook.
if (filter) {
// Check if the click is inside an element matching the filter.
// This is used for portal-awareness or other general exclusion cases.
let currentElement: HTMLElement | null = target;
// Traverse up the DOM tree from the target element.
while (currentElement && currentElement !== document.body) {
if (filter(currentElement)) {
return false;
}
currentElement = currentElement.parentElement;
}
}
// If the click is not inside the ref and not inside a portal, it's a valid outside click.
return true;
}
function getOwnerDocument(node?: Element | null): Document {
return node?.ownerDocument ?? document;
}
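A minimal usage sketch (not part of this diff) of how a component might consume useFilterableOutsideClick; the popover component, ref, and onClose handler are hypothetical:
import { useRef } from 'react';
const ExamplePopoverContent = ({ onClose }: { onClose: () => void }) => {
  const ref = useRef<HTMLDivElement>(null);
  // Close on outside clicks, but ignore clicks inside Chakra portals and react-select menus (the default filter).
  useFilterableOutsideClick({ ref, handler: onClose });
  return <div ref={ref}>{/* popover content */}</div>;
};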

View File

@@ -1,6 +1,6 @@
import { useAppDispatch } from 'app/store/storeHooks';
import { useCancelCurrentQueueItem } from 'features/queue/hooks/useCancelCurrentQueueItem';
import { useClearQueue } from 'features/queue/hooks/useClearQueue';
import { useDeleteCurrentQueueItem } from 'features/queue/hooks/useDeleteCurrentQueueItem';
import { useInvoke } from 'features/queue/hooks/useInvoke';
import { useRegisteredHotkeys } from 'features/system/components/HotkeysModal/useHotkeyData';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
@@ -35,34 +35,30 @@ export const useGlobalHotkeys = () => {
dependencies: [queue],
});
const {
cancelQueueItem,
isDisabled: isDisabledCancelQueueItem,
isLoading: isLoadingCancelQueueItem,
} = useCancelCurrentQueueItem();
const deleteCurrentQueueItem = useDeleteCurrentQueueItem();
useRegisteredHotkeys({
id: 'cancelQueueItem',
category: 'app',
callback: cancelQueueItem,
callback: deleteCurrentQueueItem.trigger,
options: {
enabled: !isDisabledCancelQueueItem && !isLoadingCancelQueueItem,
enabled: !deleteCurrentQueueItem.isDisabled && !deleteCurrentQueueItem.isLoading,
preventDefault: true,
},
dependencies: [cancelQueueItem, isDisabledCancelQueueItem, isLoadingCancelQueueItem],
dependencies: [deleteCurrentQueueItem],
});
const { clearQueue, isDisabled: isDisabledClearQueue, isLoading: isLoadingClearQueue } = useClearQueue();
const clearQueue = useClearQueue();
useRegisteredHotkeys({
id: 'clearQueue',
category: 'app',
callback: clearQueue,
callback: clearQueue.trigger,
options: {
enabled: !isDisabledClearQueue && !isLoadingClearQueue,
enabled: !clearQueue.isDisabled && !clearQueue.isLoading,
preventDefault: true,
},
dependencies: [clearQueue, isDisabledClearQueue, isLoadingClearQueue],
dependencies: [clearQueue],
});
useRegisteredHotkeys({

View File

@@ -1,11 +1,11 @@
import type { IconButtonProps, SystemStyleObject } from '@invoke-ai/ui-library';
import { IconButton } from '@invoke-ai/ui-library';
import type { ButtonProps, IconButtonProps, SystemStyleObject } from '@invoke-ai/ui-library';
import { Button, IconButton } from '@invoke-ai/ui-library';
import { logger } from 'app/logging/logger';
import { useAppSelector } from 'app/store/storeHooks';
import { selectAutoAddBoardId } from 'features/gallery/store/gallerySelectors';
import { selectIsClientSideUploadEnabled } from 'features/system/store/configSlice';
import { toast } from 'features/toast/toast';
import { useCallback } from 'react';
import { memo, useCallback } from 'react';
import type { FileRejection } from 'react-dropzone';
import { useDropzone } from 'react-dropzone';
import { useTranslation } from 'react-i18next';
@@ -163,32 +163,63 @@ const sx = {
},
} satisfies SystemStyleObject;
export const UploadImageButton = ({
isDisabled = false,
onUpload,
isError = false,
...rest
}: {
export const UploadImageIconButton = memo(
({
isDisabled = false,
onUpload,
isError = false,
...rest
}: {
onUpload?: (imageDTO: ImageDTO) => void;
isError?: boolean;
} & SetOptional<IconButtonProps, 'aria-label'>) => {
const uploadApi = useImageUploadButton({ isDisabled, allowMultiple: false, onUpload });
return (
<>
<IconButton
aria-label="Upload image"
variant="outline"
sx={sx}
data-error={isError}
icon={<PiUploadBold />}
isLoading={uploadApi.request.isLoading}
{...rest}
{...uploadApi.getUploadButtonProps()}
/>
<input {...uploadApi.getUploadInputProps()} />
</>
);
}
);
UploadImageIconButton.displayName = 'UploadImageIconButton';
type UploadImageButtonProps = {
onUpload?: (imageDTO: ImageDTO) => void;
isError?: boolean;
} & SetOptional<IconButtonProps, 'aria-label'>) => {
} & ButtonProps;
const UploadImageButton = memo((props: UploadImageButtonProps) => {
const { children, isDisabled = false, onUpload, isError = false, ...rest } = props;
const uploadApi = useImageUploadButton({ isDisabled, allowMultiple: false, onUpload });
return (
<>
<IconButton
<Button
aria-label="Upload image"
variant="outline"
sx={sx}
data-error={isError}
icon={<PiUploadBold />}
rightIcon={<PiUploadBold />}
isLoading={uploadApi.request.isLoading}
{...rest}
{...uploadApi.getUploadButtonProps()}
/>
>
{children ?? 'Upload'}
</Button>
<input {...uploadApi.getUploadInputProps()} />
</>
);
};
});
UploadImageButton.displayName = 'UploadImageButton';
export const UploadMultipleImageButton = ({
isDisabled = false,

View File

@@ -0,0 +1,108 @@
import { useAppStore } from 'app/store/nanostores/store';
import type { Dimensions } from 'features/controlLayers/store/types';
import { selectUiSlice, textAreaSizesStateChanged } from 'features/ui/store/uiSlice';
import { debounce } from 'lodash-es';
import { type RefObject, useCallback, useEffect, useMemo } from 'react';
type Options = {
trackWidth: boolean;
trackHeight: boolean;
initialWidth?: number;
initialHeight?: number;
};
/**
* Persists the width and/or height of a text area to redux.
* @param id The unique id of this textarea, used as key to storage
* @param ref A ref to the textarea element
* @param options.trackWidth Whether to track width
* @param options.trackHeight Whether to track height
* @param options.initialWidth An optional initial width in pixels
* @param options.initialHeight An optional initial height in pixels
*/
export const usePersistedTextAreaSize = (id: string, ref: RefObject<HTMLTextAreaElement>, options: Options) => {
const { dispatch, getState } = useAppStore();
const onResize = useCallback(
(size: Partial<Dimensions>) => {
dispatch(textAreaSizesStateChanged({ id, size }));
},
[dispatch, id]
);
const debouncedOnResize = useMemo(() => debounce(onResize, 300), [onResize]);
useEffect(() => {
const el = ref.current;
if (!el) {
return;
}
// Nothing to do here if we are not tracking anything.
if (!options.trackHeight && !options.trackWidth) {
return;
}
// Before registering the observer, grab the stored size from state - we may need to restore the size.
const storedSize = selectUiSlice(getState()).textAreaSizes[id];
// Prefer to restore the stored size, falling back to initial size if it exists
if (storedSize?.width !== undefined) {
el.style.width = `${storedSize.width}px`;
} else if (options.initialWidth !== undefined) {
el.style.width = `${options.initialWidth}px`;
}
if (storedSize?.height !== undefined) {
el.style.height = `${storedSize.height}px`;
} else if (options.initialHeight !== undefined) {
el.style.height = `${options.initialHeight}px`;
}
let currentHeight = el.offsetHeight;
let currentWidth = el.offsetWidth;
const resizeObserver = new ResizeObserver(() => {
// We only want to push the changes if a tracked dimension changes
let didChange = false;
const newSize: Partial<Dimensions> = {};
if (options.trackHeight) {
if (el.offsetHeight !== currentHeight) {
didChange = true;
currentHeight = el.offsetHeight;
}
newSize.height = currentHeight;
}
if (options.trackWidth) {
if (el.offsetWidth !== currentWidth) {
didChange = true;
currentWidth = el.offsetWidth;
}
newSize.width = currentWidth;
}
if (didChange) {
debouncedOnResize(newSize);
}
});
resizeObserver.observe(el);
return () => {
debouncedOnResize.cancel();
resizeObserver.disconnect();
};
}, [
debouncedOnResize,
dispatch,
getState,
id,
options.initialHeight,
options.initialWidth,
options.trackHeight,
options.trackWidth,
ref,
]);
};
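A minimal usage sketch (not part of this diff), assuming a hypothetical prompt textarea; the id string and initial height are illustrative only:
import { useRef } from 'react';
const PromptTextarea = () => {
  const ref = useRef<HTMLTextAreaElement>(null);
  // Persist only the height under the 'prompt' key; fall back to ~120px on first mount if nothing is stored yet.
  usePersistedTextAreaSize('prompt', ref, { trackWidth: false, trackHeight: true, initialHeight: 120 });
  return <textarea ref={ref} style={{ resize: 'vertical' }} />;
};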

View File

@@ -1,12 +1,18 @@
import type { ComboboxOnChange, ComboboxOption } from '@invoke-ai/ui-library';
import { EMPTY_ARRAY } from 'app/store/constants';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import type { GroupBase } from 'chakra-react-select';
import { selectLoRAsSlice } from 'features/controlLayers/store/lorasSlice';
import { selectParamsSlice } from 'features/controlLayers/store/paramsSlice';
import type { ModelIdentifierField } from 'features/nodes/types/common';
import { uniq } from 'lodash-es';
import { useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { useGetRelatedModelIdsBatchQuery } from 'services/api/endpoints/modelRelationships';
import type { AnyModelConfig } from 'services/api/types';
import { useGroupedModelCombobox } from './useGroupedModelCombobox';
import { useRelatedModelKeys } from './useRelatedModelKeys';
import { useSelectedModelKeys } from './useSelectedModelKeys';
type UseRelatedGroupedModelComboboxArg<T extends AnyModelConfig> = {
modelConfigs: T[];
@@ -29,6 +35,32 @@ type UseRelatedGroupedModelComboboxReturn = {
noOptionsMessage: () => string;
};
const selectSelectedModelKeys = createMemoizedSelector(selectParamsSlice, selectLoRAsSlice, (params, loras) => {
const keys: string[] = [];
const main = params.model;
const vae = params.vae;
const refiner = params.refinerModel;
const controlnet = params.controlLora;
if (main) {
keys.push(main.key);
}
if (vae) {
keys.push(vae.key);
}
if (refiner) {
keys.push(refiner.key);
}
if (controlnet) {
keys.push(controlnet.key);
}
for (const { model } of loras.loras) {
keys.push(model.key);
}
return uniq(keys);
});
export function useRelatedGroupedModelCombobox<T extends AnyModelConfig>({
modelConfigs,
selectedModel,
@@ -39,9 +71,15 @@ export function useRelatedGroupedModelCombobox<T extends AnyModelConfig>({
}: UseRelatedGroupedModelComboboxArg<T>): UseRelatedGroupedModelComboboxReturn {
const { t } = useTranslation();
const selectedKeys = useSelectedModelKeys();
const relatedKeys = useRelatedModelKeys(selectedKeys);
const selectedKeys = useAppSelector(selectSelectedModelKeys);
const { relatedKeys } = useGetRelatedModelIdsBatchQuery(selectedKeys, {
selectFromResult: ({ data }) => {
if (!data) {
return { relatedKeys: EMPTY_ARRAY };
}
return { relatedKeys: data };
},
});
// Base grouped options
const base = useGroupedModelCombobox({
@@ -53,40 +91,42 @@ export function useRelatedGroupedModelCombobox<T extends AnyModelConfig>({
groupByType,
});
// If no related models selected, just return base
if (relatedKeys.size === 0) {
return base;
}
const options = useMemo(() => {
if (relatedKeys.length === 0) {
return base.options;
}
const relatedOptions: ComboboxOption[] = [];
const updatedGroups: GroupBase<ComboboxOption>[] = [];
const relatedOptions: ComboboxOption[] = [];
const updatedGroups: GroupBase<ComboboxOption>[] = [];
for (const group of base.options) {
const remainingOptions: ComboboxOption[] = [];
for (const group of base.options) {
const remainingOptions: ComboboxOption[] = [];
for (const option of group.options) {
if (relatedKeys.has(option.value)) {
relatedOptions.push({ ...option, label: `* ${option.label}` });
} else {
remainingOptions.push(option);
for (const option of group.options) {
if (relatedKeys.includes(option.value)) {
relatedOptions.push({ ...option, label: `* ${option.label}` });
} else {
remainingOptions.push(option);
}
}
if (remainingOptions.length > 0) {
updatedGroups.push({
label: group.label,
options: remainingOptions,
});
}
}
if (remainingOptions.length > 0) {
updatedGroups.push({
label: group.label,
options: remainingOptions,
});
if (relatedOptions.length > 0) {
return [{ label: t('modelManager.relatedModels'), options: relatedOptions }, ...updatedGroups];
} else {
return updatedGroups;
}
}
const finalOptions: GroupBase<ComboboxOption>[] =
relatedOptions.length > 0
? [{ label: t('modelManager.relatedModels'), options: relatedOptions }, ...updatedGroups]
: updatedGroups;
}, [base.options, relatedKeys, t]);
return {
...base,
options: finalOptions,
options,
};
}

View File

@@ -1,14 +0,0 @@
import { useMemo } from 'react';
import { useGetRelatedModelIdsBatchQuery } from 'services/api/endpoints/modelRelationships';
/**
* Fetches related model keys for a given set of selected model keys.
* Returns a Set<string> for fast lookup.
*/
export const useRelatedModelKeys = (selectedKeys: Set<string>) => {
const { data: related = [] } = useGetRelatedModelIdsBatchQuery([...selectedKeys], {
skip: selectedKeys.size === 0,
});
return useMemo(() => new Set(related), [related]);
};

View File

@@ -1,34 +0,0 @@
import { useAppSelector } from 'app/store/storeHooks';
/**
* Gathers all currently selected model keys from parameters and loras.
* This includes the main model, VAE, refiner model, controlnet, and loras.
*/
export const useSelectedModelKeys = () => {
return useAppSelector((state) => {
const keys = new Set<string>();
const main = state.params.model;
const vae = state.params.vae;
const refiner = state.params.refinerModel;
const controlnet = state.params.controlLora;
const loras = state.loras.loras.map((l) => l.model);
if (main) {
keys.add(main.key);
}
if (vae) {
keys.add(vae.key);
}
if (refiner) {
keys.add(refiner.key);
}
if (controlnet) {
keys.add(controlnet.key);
}
for (const lora of loras) {
keys.add(lora.key);
}
return keys;
});
};

View File

@@ -0,0 +1,10 @@
import type { z } from 'zod';
/**
* Helper to create a type guard from a zod schema. The type guard will infer the schema's TS type.
* @param schema The zod schema to create a type guard from.
* @returns A type guard function for the schema.
*/
export const buildZodTypeGuard = <T extends z.ZodTypeAny>(schema: T) => {
return (val: unknown): val is z.infer<T> => schema.safeParse(val).success;
};
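A minimal usage sketch (not part of this diff) with a hypothetical schema, showing how the returned guard narrows an unknown value:
import { z } from 'zod';
const zImageRef = z.object({ image_name: z.string() });
const isImageRef = buildZodTypeGuard(zImageRef);
const value: unknown = JSON.parse('{"image_name":"example.png"}');
if (isImageRef(value)) {
  // value is narrowed to { image_name: string } here.
  console.log(value.image_name);
}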

View File

@@ -1,17 +1,10 @@
import {
ContextMenu,
Flex,
IconButton,
Menu,
MenuButton,
MenuList,
type SystemStyleObject,
} from '@invoke-ai/ui-library';
import type { SystemStyleObject } from '@invoke-ai/ui-library';
import { ContextMenu, Divider, Flex, IconButton, Menu, MenuButton, MenuList } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { FocusRegionWrapper } from 'common/components/FocusRegionWrapper';
import { CanvasAlertsInvocationProgress } from 'features/controlLayers/components/CanvasAlerts/CanvasAlertsInvocationProgress';
import { CanvasAlertsPreserveMask } from 'features/controlLayers/components/CanvasAlerts/CanvasAlertsPreserveMask';
import { CanvasAlertsSelectedEntityStatus } from 'features/controlLayers/components/CanvasAlerts/CanvasAlertsSelectedEntityStatus';
import { CanvasAlertsSendingToGallery } from 'features/controlLayers/components/CanvasAlerts/CanvasAlertsSendingTo';
import { CanvasContextMenuGlobalMenuItems } from 'features/controlLayers/components/CanvasContextMenu/CanvasContextMenuGlobalMenuItems';
import { CanvasContextMenuSelectedEntityMenuItems } from 'features/controlLayers/components/CanvasContextMenu/CanvasContextMenuSelectedEntityMenuItems';
import { CanvasDropArea } from 'features/controlLayers/components/CanvasDropArea';
@@ -19,24 +12,22 @@ import { Filter } from 'features/controlLayers/components/Filters/Filter';
import { CanvasHUD } from 'features/controlLayers/components/HUD/CanvasHUD';
import { InvokeCanvasComponent } from 'features/controlLayers/components/InvokeCanvasComponent';
import { SelectObject } from 'features/controlLayers/components/SelectObject/SelectObject';
import { StagingAreaIsStagingGate } from 'features/controlLayers/components/StagingArea/StagingAreaIsStagingGate';
import { CanvasSessionContextProvider } from 'features/controlLayers/components/SimpleSession/context';
import { StagingAreaItemsList } from 'features/controlLayers/components/SimpleSession/StagingAreaItemsList';
import { StagingAreaToolbar } from 'features/controlLayers/components/StagingArea/StagingAreaToolbar';
import { CanvasToolbar } from 'features/controlLayers/components/Toolbar/CanvasToolbar';
import { Transform } from 'features/controlLayers/components/Transform/Transform';
import { CanvasManagerProviderGate } from 'features/controlLayers/contexts/CanvasManagerProviderGate';
import { selectDynamicGrid, selectShowHUD } from 'features/controlLayers/store/canvasSettingsSlice';
import { GatedImageViewer } from 'features/gallery/components/ImageViewer/ImageViewer';
import { memo, useCallback } from 'react';
import { PiDotsThreeOutlineVerticalFill } from 'react-icons/pi';
import { CanvasAlertsInvocationProgress } from './CanvasAlerts/CanvasAlertsInvocationProgress';
const FOCUS_REGION_STYLES: SystemStyleObject = {
width: 'full',
height: 'full',
};
const MenuContent = () => {
const MenuContent = memo(() => {
return (
<CanvasManagerProviderGate>
<MenuList>
@@ -45,9 +36,22 @@ const MenuContent = () => {
</MenuList>
</CanvasManagerProviderGate>
);
});
MenuContent.displayName = 'MenuContent';
const canvasBgSx = {
position: 'relative',
w: 'full',
h: 'full',
borderRadius: 'base',
overflow: 'hidden',
bg: 'base.900',
'&[data-dynamic-grid="true"]': {
bg: 'base.850',
},
};
export const CanvasMainPanelContent = memo(() => {
export const AdvancedSession = memo(({ id }: { id: string | null }) => {
const dynamicGrid = useAppSelector(selectDynamicGrid);
const showHUD = useAppSelector(selectShowHUD);
@@ -72,17 +76,10 @@ export const CanvasMainPanelContent = memo(() => {
<CanvasManagerProviderGate>
<CanvasToolbar />
</CanvasManagerProviderGate>
<Divider />
<ContextMenu<HTMLDivElement> renderMenu={renderMenu} withLongPress={false}>
{(ref) => (
<Flex
ref={ref}
position="relative"
w="full"
h="full"
bg={dynamicGrid ? 'base.850' : 'base.900'}
borderRadius="base"
overflow="hidden"
>
<Flex ref={ref} sx={canvasBgSx} data-dynamic-grid={dynamicGrid}>
<InvokeCanvasComponent />
<CanvasManagerProviderGate>
<Flex
@@ -97,7 +94,6 @@ export const CanvasMainPanelContent = memo(() => {
{showHUD && <CanvasHUD />}
<CanvasAlertsSelectedEntityStatus />
<CanvasAlertsPreserveMask />
<CanvasAlertsSendingToGallery />
<CanvasAlertsInvocationProgress />
</Flex>
<Flex position="absolute" top={1} insetInlineEnd={1}>
@@ -110,13 +106,29 @@ export const CanvasMainPanelContent = memo(() => {
</Flex>
)}
</ContextMenu>
<Flex position="absolute" bottom={4} gap={2} align="center" justify="center">
{id !== null && (
<CanvasManagerProviderGate>
<StagingAreaIsStagingGate>
<StagingAreaToolbar />
</StagingAreaIsStagingGate>
<CanvasSessionContextProvider type="advanced" id={id}>
<Flex
position="absolute"
flexDir="column"
bottom={4}
gap={2}
align="center"
justify="center"
left={4}
right={4}
>
<Flex position="relative" maxW="full" w="full" h={108}>
<StagingAreaItemsList />
</Flex>
<Flex gap={2}>
<StagingAreaToolbar />
</Flex>
</Flex>
</CanvasSessionContextProvider>
</CanvasManagerProviderGate>
</Flex>
)}
<Flex position="absolute" bottom={4}>
<CanvasManagerProviderGate>
<Filter />
@@ -127,10 +139,8 @@ export const CanvasMainPanelContent = memo(() => {
<CanvasManagerProviderGate>
<CanvasDropArea />
</CanvasManagerProviderGate>
<GatedImageViewer />
</Flex>
</FocusRegionWrapper>
);
});
CanvasMainPanelContent.displayName = 'CanvasMainPanelContent';
AdvancedSession.displayName = 'AdvancedSession';

View File

@@ -2,11 +2,10 @@ import { Button, Flex, Heading } from '@invoke-ai/ui-library';
import { InformationalPopover } from 'common/components/InformationalPopover/InformationalPopover';
import {
useAddControlLayer,
useAddGlobalReferenceImage,
useAddInpaintMask,
useAddNewRegionalGuidanceWithARefImage,
useAddRasterLayer,
useAddRegionalGuidance,
useAddRegionalReferenceImage,
} from 'features/controlLayers/hooks/addLayerHooks';
import { useIsEntityTypeEnabled } from 'features/controlLayers/hooks/useIsEntityTypeEnabled';
import { memo } from 'react';
@@ -19,9 +18,7 @@ export const CanvasAddEntityButtons = memo(() => {
const addRegionalGuidance = useAddRegionalGuidance();
const addRasterLayer = useAddRasterLayer();
const addControlLayer = useAddControlLayer();
const addGlobalReferenceImage = useAddGlobalReferenceImage();
const addRegionalReferenceImage = useAddRegionalReferenceImage();
const isReferenceImageEnabled = useIsEntityTypeEnabled('reference_image');
const addRegionalReferenceImage = useAddNewRegionalGuidanceWithARefImage();
const isRegionalGuidanceEnabled = useIsEntityTypeEnabled('regional_guidance');
const isControlLayerEnabled = useIsEntityTypeEnabled('control_layer');
const isInpaintLayerEnabled = useIsEntityTypeEnabled('inpaint_mask');
@@ -29,21 +26,6 @@ export const CanvasAddEntityButtons = memo(() => {
return (
<Flex w="full" h="full" justifyContent="center" gap={4}>
<Flex position="relative" flexDir="column" gap={4} top="20%">
<Flex flexDir="column" justifyContent="flex-start" gap={2}>
<Heading size="xs">{t('controlLayers.global')}</Heading>
<InformationalPopover feature="globalReferenceImage">
<Button
size="sm"
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addGlobalReferenceImage}
isDisabled={!isReferenceImageEnabled}
>
{t('controlLayers.globalReferenceImage')}
</Button>
</InformationalPopover>
</Flex>
<Flex flexDir="column" gap={2}>
<Heading size="xs">{t('controlLayers.regional')}</Heading>
<InformationalPopover feature="inpainting">

View File

@@ -6,11 +6,11 @@ import { selectIsLocal } from 'features/system/store/configSlice';
import { selectSystemShouldShowInvocationProgressDetail } from 'features/system/store/systemSlice';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { $invocationProgressMessage } from 'services/events/stores';
import { $lastProgressMessage } from 'services/events/stores';
const CanvasAlertsInvocationProgressContentLocal = memo(() => {
const { t } = useTranslation();
const invocationProgressMessage = useStore($invocationProgressMessage);
const invocationProgressMessage = useStore($lastProgressMessage);
if (!invocationProgressMessage) {
return null;

View File

@@ -1,146 +0,0 @@
import { Alert, AlertDescription, AlertIcon, AlertTitle, Button, Flex } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useBoolean } from 'common/hooks/useBoolean';
import { selectIsStaging } from 'features/controlLayers/store/canvasStagingAreaSlice';
import { useImageViewer } from 'features/gallery/components/ImageViewer/useImageViewer';
import { useCurrentDestination } from 'features/queue/hooks/useCurrentDestination';
import { selectActiveTab } from 'features/ui/store/uiSelectors';
import { activeTabCanvasRightPanelChanged, setActiveTab } from 'features/ui/store/uiSlice';
import { AnimatePresence, motion } from 'framer-motion';
import type { PropsWithChildren, ReactNode } from 'react';
import { useCallback, useMemo } from 'react';
import { Trans, useTranslation } from 'react-i18next';
const ActivateImageViewerButton = (props: PropsWithChildren) => {
const imageViewer = useImageViewer();
const dispatch = useAppDispatch();
const onClick = useCallback(() => {
imageViewer.open();
dispatch(activeTabCanvasRightPanelChanged('gallery'));
}, [imageViewer, dispatch]);
return (
<Button onClick={onClick} size="sm" variant="link" color="base.50">
{props.children}
</Button>
);
};
export const CanvasAlertsSendingToGallery = () => {
const { t } = useTranslation();
const destination = useCurrentDestination();
const tab = useAppSelector(selectActiveTab);
const isVisible = useMemo(() => {
// This alert should only be visible when the destination is gallery and the tab is canvas
if (tab !== 'canvas') {
return false;
}
if (!destination) {
return false;
}
return destination === 'gallery';
}, [destination, tab]);
return (
<AlertWrapper
title={t('controlLayers.sendingToGallery')}
description={
<Trans i18nKey="controlLayers.viewProgressInViewer" components={{ Btn: <ActivateImageViewerButton /> }} />
}
isVisible={isVisible}
/>
);
};
const ActivateCanvasButton = (props: PropsWithChildren) => {
const dispatch = useAppDispatch();
const imageViewer = useImageViewer();
const onClick = useCallback(() => {
dispatch(setActiveTab('canvas'));
dispatch(activeTabCanvasRightPanelChanged('layers'));
imageViewer.close();
}, [dispatch, imageViewer]);
return (
<Button onClick={onClick} size="sm" variant="link" color="base.50">
{props.children}
</Button>
);
};
export const CanvasAlertsSendingToCanvas = () => {
const { t } = useTranslation();
const destination = useCurrentDestination();
const isStaging = useAppSelector(selectIsStaging);
const tab = useAppSelector(selectActiveTab);
const isVisible = useMemo(() => {
// When we are on a non-canvas tab, and the current generation's destination is not the canvas, we don't show the alert
// For example, on the workflows tab, when the destination is gallery, we don't show the alert
if (tab !== 'canvas' && destination !== 'canvas') {
return false;
}
if (isStaging) {
return true;
}
if (!destination) {
return false;
}
return destination === 'canvas';
}, [destination, isStaging, tab]);
return (
<AlertWrapper
title={t('controlLayers.sendingToCanvas')}
description={
<Trans i18nKey="controlLayers.viewProgressOnCanvas" components={{ Btn: <ActivateCanvasButton /> }} />
}
isVisible={isVisible}
/>
);
};
const AlertWrapper = ({
title,
description,
isVisible,
}: {
title: ReactNode;
description: ReactNode;
isVisible: boolean;
}) => {
const isHovered = useBoolean(false);
return (
<AnimatePresence>
{(isVisible || isHovered.isTrue) && (
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1, transition: { duration: 0.1, ease: 'easeOut' } }}
exit={{
opacity: 0,
transition: { duration: 0.1, delay: !isHovered.isTrue ? 1 : 0.1, ease: 'easeIn' },
}}
onMouseEnter={isHovered.setTrue}
onMouseLeave={isHovered.setFalse}
>
<Alert
status="warning"
flexDir="column"
pointerEvents="auto"
borderRadius="base"
fontSize="sm"
shadow="md"
w="fit-content"
>
<Flex w="full" alignItems="center">
<AlertIcon />
<AlertTitle>{title}</AlertTitle>
</Flex>
<AlertDescription>{description}</AlertDescription>
</Alert>
</motion.div>
)}
</AnimatePresence>
);
};

View File

@@ -0,0 +1,28 @@
import { Spinner } from '@invoke-ai/ui-library';
import { useStore } from '@nanostores/react';
import { useCanvasManager } from 'features/controlLayers/contexts/CanvasManagerProviderGate';
import { useAllEntityAdapters } from 'features/controlLayers/contexts/EntityAdapterContext';
import { computed } from 'nanostores';
import { memo, useMemo } from 'react';
export const CanvasBusySpinner = memo(() => {
const canvasManager = useCanvasManager();
const allEntityAdapters = useAllEntityAdapters();
const $isPendingRectCalculation = useMemo(
() =>
computed(
allEntityAdapters.map(({ transformer }) => transformer.$isPendingRectCalculation),
(...values) => values.some((v) => v)
),
[allEntityAdapters]
);
const isPendingRectCalculation = useStore($isPendingRectCalculation);
const isRasterizing = useStore(canvasManager.stateApi.$isRasterizing);
const isCompositing = useStore(canvasManager.compositor.$isBusy);
if (isRasterizing || isCompositing || isPendingRectCalculation) {
return <Spinner opacity={0.3} />;
}
return null;
});
CanvasBusySpinner.displayName = 'CanvasBusySpinner';
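CanvasBusySpinner folds several per-adapter flags into one boolean by passing an array of stores to nanostores' computed, whose callback receives the stores' values as arguments. A minimal, self-contained sketch of that aggregation pattern with hypothetical store names:
import { atom, computed } from 'nanostores';
// Hypothetical per-task busy flags.
const $isUploading = atom(false);
const $isExporting = atom(false);
// computed accepts an array of stores; the callback receives their current values as arguments.
const $isBusy = computed([$isUploading, $isExporting], (...values) => values.some((v) => v));
$isUploading.set(true);
console.log($isBusy.get()); // true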

View File

@@ -2,8 +2,8 @@ import { MenuGroup } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { ControlLayerMenuItems } from 'features/controlLayers/components/ControlLayer/ControlLayerMenuItems';
import { InpaintMaskMenuItems } from 'features/controlLayers/components/InpaintMask/InpaintMaskMenuItems';
import { IPAdapterMenuItems } from 'features/controlLayers/components/IPAdapter/IPAdapterMenuItems';
import { RasterLayerMenuItems } from 'features/controlLayers/components/RasterLayer/RasterLayerMenuItems';
import { IPAdapterMenuItems } from 'features/controlLayers/components/RefImage/IPAdapterMenuItems';
import { RegionalGuidanceMenuItems } from 'features/controlLayers/components/RegionalGuidance/RegionalGuidanceMenuItems';
import { CanvasEntityStateGate } from 'features/controlLayers/contexts/CanvasEntityStateGate';
import {

View File

@@ -2,7 +2,6 @@ import { Grid, GridItem } from '@invoke-ai/ui-library';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
import { newCanvasEntityFromImageDndTarget } from 'features/dnd/dnd';
import { DndDropTarget } from 'features/dnd/DndDropTarget';
import { useImageViewer } from 'features/gallery/components/ImageViewer/useImageViewer';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
@@ -13,19 +12,11 @@ const addControlLayerFromImageDndTargetData = newCanvasEntityFromImageDndTarget.
const addRegionalGuidanceReferenceImageFromImageDndTargetData = newCanvasEntityFromImageDndTarget.getData({
type: 'regional_guidance_with_reference_image',
});
const addGlobalReferenceImageFromImageDndTargetData = newCanvasEntityFromImageDndTarget.getData({
type: 'reference_image',
});
export const CanvasDropArea = memo(() => {
const { t } = useTranslation();
const imageViewer = useImageViewer();
const isBusy = useCanvasIsBusy();
if (imageViewer.isOpen) {
return null;
}
return (
<>
<Grid
@@ -63,14 +54,6 @@ export const CanvasDropArea = memo(() => {
isDisabled={isBusy}
/>
</GridItem>
<GridItem position="relative">
<DndDropTarget
dndTarget={newCanvasEntityFromImageDndTarget}
dndTargetData={addGlobalReferenceImageFromImageDndTargetData}
label={t('controlLayers.canvasContextMenu.newGlobalReferenceImage')}
isDisabled={isBusy}
/>
</GridItem>
</Grid>
</>
);

View File

@@ -14,7 +14,6 @@ import { useEntityTypeInformationalPopover } from 'features/controlLayers/hooks/
import { useEntityTypeTitle } from 'features/controlLayers/hooks/useEntityTypeTitle';
import { entitiesReordered } from 'features/controlLayers/store/canvasSlice';
import type { CanvasEntityIdentifier } from 'features/controlLayers/store/types';
import { isRenderableEntityType } from 'features/controlLayers/store/types';
import { singleCanvasEntityDndSource } from 'features/dnd/dnd';
import { triggerPostMoveFlash } from 'features/dnd/util';
import type { PropsWithChildren } from 'react';
@@ -165,8 +164,8 @@ export const CanvasEntityGroupList = memo(({ isSelected, type, children, entityI
<Spacer />
</Flex>
{isRenderableEntityType(type) && <CanvasEntityMergeVisibleButton type={type} />}
{isRenderableEntityType(type) && <CanvasEntityTypeIsHiddenToggle type={type} />}
<CanvasEntityMergeVisibleButton type={type} />
<CanvasEntityTypeIsHiddenToggle type={type} />
<CanvasEntityAddOfTypeButton type={type} />
</Flex>
<Collapse in={collapse.isTrue} style={fixTooltipCloseOnScrollStyles}>

View File

@@ -2,7 +2,6 @@ import { Flex } from '@invoke-ai/ui-library';
import ScrollableContent from 'common/components/OverlayScrollbars/ScrollableContent';
import { ControlLayerEntityList } from 'features/controlLayers/components/ControlLayer/ControlLayerEntityList';
import { InpaintMaskList } from 'features/controlLayers/components/InpaintMask/InpaintMaskList';
import { IPAdapterList } from 'features/controlLayers/components/IPAdapter/IPAdapterList';
import { RasterLayerEntityList } from 'features/controlLayers/components/RasterLayer/RasterLayerEntityList';
import { RegionalGuidanceEntityList } from 'features/controlLayers/components/RegionalGuidance/RegionalGuidanceEntityList';
import { memo } from 'react';
@@ -11,7 +10,6 @@ export const CanvasEntityList = memo(() => {
return (
<ScrollableContent>
<Flex flexDir="column" gap={2} data-testid="control-layers-layer-list" w="full" h="full">
<IPAdapterList />
<InpaintMaskList />
<RegionalGuidanceEntityList />
<ControlLayerEntityList />

View File

@@ -1,11 +1,10 @@
import { IconButton, Menu, MenuButton, MenuGroup, MenuItem, MenuList } from '@invoke-ai/ui-library';
import {
useAddControlLayer,
useAddGlobalReferenceImage,
useAddInpaintMask,
useAddNewRegionalGuidanceWithARefImage,
useAddRasterLayer,
useAddRegionalGuidance,
useAddRegionalReferenceImage,
} from 'features/controlLayers/hooks/addLayerHooks';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
import { useIsEntityTypeEnabled } from 'features/controlLayers/hooks/useIsEntityTypeEnabled';
@@ -16,13 +15,11 @@ import { PiPlusBold } from 'react-icons/pi';
export const EntityListGlobalActionBarAddLayerMenu = memo(() => {
const { t } = useTranslation();
const isBusy = useCanvasIsBusy();
const addGlobalReferenceImage = useAddGlobalReferenceImage();
const addInpaintMask = useAddInpaintMask();
const addRegionalGuidance = useAddRegionalGuidance();
const addRegionalReferenceImage = useAddRegionalReferenceImage();
const addRegionalReferenceImage = useAddNewRegionalGuidanceWithARefImage();
const addRasterLayer = useAddRasterLayer();
const addControlLayer = useAddControlLayer();
const isReferenceImageEnabled = useIsEntityTypeEnabled('reference_image');
const isRegionalGuidanceEnabled = useIsEntityTypeEnabled('regional_guidance');
const isControlLayerEnabled = useIsEntityTypeEnabled('control_layer');
const isInpaintLayerEnabled = useIsEntityTypeEnabled('inpaint_mask');
@@ -41,11 +38,6 @@ export const EntityListGlobalActionBarAddLayerMenu = memo(() => {
isDisabled={isBusy}
/>
<MenuList>
<MenuGroup title={t('controlLayers.global')}>
<MenuItem icon={<PiPlusBold />} onClick={addGlobalReferenceImage} isDisabled={!isReferenceImageEnabled}>
{t('controlLayers.globalReferenceImage')}
</MenuItem>
</MenuGroup>
<MenuGroup title={t('controlLayers.regional')}>
<MenuItem icon={<PiPlusBold />} onClick={addInpaintMask} isDisabled={!isInpaintLayerEnabled}>
{t('controlLayers.inpaintMask')}

View File

@@ -22,7 +22,6 @@ import {
selectEntity,
selectSelectedEntityIdentifier,
} from 'features/controlLayers/store/selectors';
import { isRenderableEntity } from 'features/controlLayers/store/types';
import { clamp, round } from 'lodash-es';
import type { KeyboardEvent } from 'react';
import { memo, useCallback, useEffect, useState } from 'react';
@@ -70,9 +69,6 @@ const selectOpacity = createSelector(selectCanvasSlice, (canvas) => {
if (!selectedEntity) {
return 1; // fallback to 100% opacity
}
if (!isRenderableEntity(selectedEntity)) {
return 1; // fallback to 100% opacity
}
// Opacity is a float from 0-1, but we want to display it as a percentage
return selectedEntity.opacity;
});
@@ -134,11 +130,7 @@ export const EntityListSelectedEntityActionBarOpacity = memo(() => {
return (
<Popover>
<FormControl
w="min-content"
gap={2}
isDisabled={selectedEntityIdentifier === null || selectedEntityIdentifier.type === 'reference_image'}
>
<FormControl w="min-content" gap={2} isDisabled={selectedEntityIdentifier === null}>
<FormLabel m={0}>{t('controlLayers.opacity')}</FormLabel>
<PopoverAnchor>
<NumberInput
@@ -167,7 +159,7 @@ export const EntityListSelectedEntityActionBarOpacity = memo(() => {
position="absolute"
insetInlineEnd={0}
h="full"
isDisabled={selectedEntityIdentifier === null || selectedEntityIdentifier.type === 'reference_image'}
isDisabled={selectedEntityIdentifier === null}
/>
</PopoverTrigger>
</NumberInput>
@@ -185,7 +177,7 @@ export const EntityListSelectedEntityActionBarOpacity = memo(() => {
marks={marks}
formatValue={formatSliderValue}
alwaysShowMarks
isDisabled={selectedEntityIdentifier === null || selectedEntityIdentifier.type === 'reference_image'}
isDisabled={selectedEntityIdentifier === null}
/>
</PopoverBody>
</PopoverContent>

View File

@@ -1,272 +0,0 @@
import { combine } from '@atlaskit/pragmatic-drag-and-drop/combine';
import { dropTargetForElements, monitorForElements } from '@atlaskit/pragmatic-drag-and-drop/element/adapter';
import { dropTargetForExternal, monitorForExternal } from '@atlaskit/pragmatic-drag-and-drop/external/adapter';
import { Box, Button, Spacer, Tab, TabList, TabPanel, TabPanels, Tabs } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector, useAppStore } from 'app/store/storeHooks';
import { CanvasLayersPanelContent } from 'features/controlLayers/components/CanvasLayersPanelContent';
import { CanvasManagerProviderGate } from 'features/controlLayers/contexts/CanvasManagerProviderGate';
import { selectEntityCountActive } from 'features/controlLayers/store/selectors';
import { multipleImageDndSource, singleImageDndSource } from 'features/dnd/dnd';
import { DndDropOverlay } from 'features/dnd/DndDropOverlay';
import type { DndTargetState } from 'features/dnd/types';
import GalleryPanelContent from 'features/gallery/components/GalleryPanelContent';
import { useImageViewer } from 'features/gallery/components/ImageViewer/useImageViewer';
import { useRegisteredHotkeys } from 'features/system/components/HotkeysModal/useHotkeyData';
import { selectActiveTabCanvasRightPanel } from 'features/ui/store/uiSelectors';
import { activeTabCanvasRightPanelChanged } from 'features/ui/store/uiSlice';
import { memo, useCallback, useEffect, useMemo, useRef, useState } from 'react';
import { useTranslation } from 'react-i18next';
export const CanvasRightPanel = memo(() => {
const { t } = useTranslation();
const activeTab = useAppSelector(selectActiveTabCanvasRightPanel);
const imageViewer = useImageViewer();
const dispatch = useAppDispatch();
const tabIndex = useMemo(() => {
if (activeTab === 'gallery') {
return 1;
} else {
return 0;
}
}, [activeTab]);
const onClickViewerToggleButton = useCallback(() => {
if (activeTab !== 'gallery') {
dispatch(activeTabCanvasRightPanelChanged('gallery'));
}
imageViewer.toggle();
}, [imageViewer, activeTab, dispatch]);
const onChangeTab = useCallback(
(index: number) => {
if (index === 0) {
dispatch(activeTabCanvasRightPanelChanged('layers'));
} else {
dispatch(activeTabCanvasRightPanelChanged('gallery'));
}
},
[dispatch]
);
useRegisteredHotkeys({
id: 'toggleViewer',
category: 'viewer',
callback: imageViewer.toggle,
dependencies: [imageViewer],
});
return (
<Tabs index={tabIndex} onChange={onChangeTab} w="full" h="full" display="flex" flexDir="column">
<TabList alignItems="center">
<PanelTabs />
<Spacer />
<Button size="sm" variant="ghost" onClick={onClickViewerToggleButton}>
{imageViewer.isOpen ? t('gallery.closeViewer') : t('gallery.openViewer')}
</Button>
</TabList>
<TabPanels w="full" h="full">
<TabPanel w="full" h="full" p={0} pt={3}>
<CanvasManagerProviderGate>
<CanvasLayersPanelContent />
</CanvasManagerProviderGate>
</TabPanel>
<TabPanel w="full" h="full" p={0} pt={3}>
<GalleryPanelContent />
</TabPanel>
</TabPanels>
</Tabs>
);
});
CanvasRightPanel.displayName = 'CanvasRightPanel';
const PanelTabs = memo(() => {
const { t } = useTranslation();
const store = useAppStore();
const activeEntityCount = useAppSelector(selectEntityCountActive);
const [layersTabDndState, setLayersTabDndState] = useState<DndTargetState>('idle');
const [galleryTabDndState, setGalleryTabDndState] = useState<DndTargetState>('idle');
const layersTabRef = useRef<HTMLDivElement>(null);
const galleryTabRef = useRef<HTMLDivElement>(null);
const timeoutRef = useRef<number | null>(null);
const layersTabLabel = useMemo(() => {
if (activeEntityCount === 0) {
return t('controlLayers.layer_other');
}
return `${t('controlLayers.layer_other')} (${activeEntityCount})`;
}, [activeEntityCount, t]);
useEffect(() => {
if (!layersTabRef.current) {
return;
}
const getIsOnLayersTab = () => selectActiveTabCanvasRightPanel(store.getState()) === 'layers';
const onDragEnter = () => {
// If we are already on the layers tab, do nothing
if (getIsOnLayersTab()) {
return;
}
// Else set the state to active and switch to the layers tab after a timeout
setLayersTabDndState('over');
timeoutRef.current = window.setTimeout(() => {
timeoutRef.current = null;
store.dispatch(activeTabCanvasRightPanelChanged('layers'));
// When we switch tabs, the other tab should be pending
setLayersTabDndState('idle');
setGalleryTabDndState('potential');
}, 300);
};
const onDragLeave = () => {
// Set the state to idle or pending depending on the current tab
if (getIsOnLayersTab()) {
setLayersTabDndState('idle');
} else {
setLayersTabDndState('potential');
}
// Abort the tab switch if it hasn't happened yet
if (timeoutRef.current !== null) {
clearTimeout(timeoutRef.current);
}
};
const onDragStart = () => {
// Set the state to pending when a drag starts
setLayersTabDndState('potential');
};
return combine(
dropTargetForElements({
element: layersTabRef.current,
onDragEnter,
onDragLeave,
}),
monitorForElements({
canMonitor: ({ source }) => {
if (!singleImageDndSource.typeGuard(source.data) && !multipleImageDndSource.typeGuard(source.data)) {
return false;
}
// Only monitor if we are not already on the layers tab
return !getIsOnLayersTab();
},
onDragStart,
}),
dropTargetForExternal({
element: layersTabRef.current,
onDragEnter,
onDragLeave,
}),
monitorForExternal({
canMonitor: () => !getIsOnLayersTab(),
onDragStart,
})
);
}, [store]);
useEffect(() => {
if (!galleryTabRef.current) {
return;
}
const getIsOnGalleryTab = () => selectActiveTabCanvasRightPanel(store.getState()) === 'gallery';
const onDragEnter = () => {
// If we are already on the gallery tab, do nothing
if (getIsOnGalleryTab()) {
return;
}
// Else set the state to active and switch to the gallery tab after a timeout
setGalleryTabDndState('over');
timeoutRef.current = window.setTimeout(() => {
timeoutRef.current = null;
store.dispatch(activeTabCanvasRightPanelChanged('gallery'));
// When we switch tabs, the other tab should be pending
setGalleryTabDndState('idle');
setLayersTabDndState('potential');
}, 300);
};
const onDragLeave = () => {
// Set the state to idle or pending depending on the current tab
if (getIsOnGalleryTab()) {
setGalleryTabDndState('idle');
} else {
setGalleryTabDndState('potential');
}
// Abort the tab switch if it hasn't happened yet
if (timeoutRef.current !== null) {
clearTimeout(timeoutRef.current);
}
};
const onDragStart = () => {
// Set the state to pending when a drag starts
setGalleryTabDndState('potential');
};
return combine(
dropTargetForElements({
element: galleryTabRef.current,
onDragEnter,
onDragLeave,
}),
monitorForElements({
canMonitor: ({ source }) => {
if (!singleImageDndSource.typeGuard(source.data) && !multipleImageDndSource.typeGuard(source.data)) {
return false;
}
// Only monitor if we are not already on the gallery tab
return !getIsOnGalleryTab();
},
onDragStart,
}),
dropTargetForExternal({
element: galleryTabRef.current,
onDragEnter,
onDragLeave,
}),
monitorForExternal({
canMonitor: () => !getIsOnGalleryTab(),
onDragStart,
})
);
}, [store]);
useEffect(() => {
const onDrop = () => {
// Reset the dnd state when a drop happens
setGalleryTabDndState('idle');
setLayersTabDndState('idle');
};
const cleanup = combine(monitorForElements({ onDrop }), monitorForExternal({ onDrop }));
return () => {
cleanup();
if (timeoutRef.current !== null) {
clearTimeout(timeoutRef.current);
}
};
}, []);
return (
<>
<Tab ref={layersTabRef} position="relative" w={32}>
<Box as="span" w="full">
{layersTabLabel}
</Box>
<DndDropOverlay dndState={layersTabDndState} withBackdrop={false} />
</Tab>
<Tab ref={galleryTabRef} position="relative" w={32}>
<Box as="span" w="full">
{t('gallery.gallery')}
</Box>
<DndDropOverlay dndState={galleryTabDndState} withBackdrop={false} />
</Tab>
</>
);
});
PanelTabs.displayName = 'PanelTabs';

View File

@@ -2,10 +2,11 @@ import { Button, Flex, Text } from '@invoke-ai/ui-library';
import { useAppStore } from 'app/store/nanostores/store';
import { useImageUploadButton } from 'common/hooks/useImageUploadButton';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import { usePullBboxIntoLayer } from 'features/controlLayers/hooks/saveCanvasHooks';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
import { replaceCanvasEntityObjectsWithImage } from 'features/imageActions/actions';
import { activeTabCanvasRightPanelChanged } from 'features/ui/store/uiSlice';
import { memo, useCallback } from 'react';
import { memo, useCallback, useMemo } from 'react';
import { Trans } from 'react-i18next';
import type { ImageDTO } from 'services/api/types';
@@ -23,27 +24,27 @@ export const ControlLayerSettingsEmptyState = memo(() => {
const onClickGalleryButton = useCallback(() => {
dispatch(activeTabCanvasRightPanelChanged('gallery'));
}, [dispatch]);
const pullBboxIntoLayer = usePullBboxIntoLayer(entityIdentifier);
const components = useMemo(
() => ({
UploadButton: (
<Button isDisabled={isBusy} size="sm" variant="link" color="base.300" {...uploadApi.getUploadButtonProps()} />
),
GalleryButton: (
<Button onClick={onClickGalleryButton} isDisabled={isBusy} size="sm" variant="link" color="base.300" />
),
PullBboxButton: (
<Button onClick={pullBboxIntoLayer} isDisabled={isBusy} size="sm" variant="link" color="base.300" />
),
}),
[isBusy, onClickGalleryButton, pullBboxIntoLayer, uploadApi]
);
return (
<Flex flexDir="column" gap={3} position="relative" w="full" p={4}>
<Text textAlign="center" color="base.300">
<Trans
i18nKey="controlLayers.controlLayerEmptyState"
components={{
UploadButton: (
<Button
isDisabled={isBusy}
size="sm"
variant="link"
color="base.300"
{...uploadApi.getUploadButtonProps()}
/>
),
GalleryButton: (
<Button onClick={onClickGalleryButton} isDisabled={isBusy} size="sm" variant="link" color="base.300" />
),
}}
/>
<Trans i18nKey="controlLayers.controlLayerEmptyState" components={components} />
</Text>
<input {...uploadApi.getUploadInputProps()} />
</Flex>
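The change above hoists the Trans components map into useMemo so the object passed to <Trans> keeps a stable identity between renders instead of being rebuilt inline in JSX. A condensed sketch of the same pattern, using a hypothetical i18n key and button:
import { Button, Text } from '@invoke-ai/ui-library';
import { memo, useMemo } from 'react';
import { Trans } from 'react-i18next';
// Hypothetical empty-state message; "myFeature.emptyState" is an illustrative i18n key.
export const EmptyStateMessage = memo(({ onOpenGallery, isBusy }: { onOpenGallery: () => void; isBusy: boolean }) => {
// Memoize the components map so <Trans> receives a stable object between renders.
const components = useMemo(
() => ({
GalleryButton: <Button onClick={onOpenGallery} isDisabled={isBusy} size="sm" variant="link" />,
}),
[onOpenGallery, isBusy]
);
return (
<Text textAlign="center" color="base.300">
<Trans i18nKey="myFeature.emptyState" components={components} />
</Text>
);
});
EmptyStateMessage.displayName = 'EmptyStateMessage';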

View File

@@ -1,35 +0,0 @@
import { Spacer } from '@invoke-ai/ui-library';
import { CanvasEntityContainer } from 'features/controlLayers/components/CanvasEntityList/CanvasEntityContainer';
import { CanvasEntityHeader } from 'features/controlLayers/components/common/CanvasEntityHeader';
import { CanvasEntityHeaderCommonActions } from 'features/controlLayers/components/common/CanvasEntityHeaderCommonActions';
import { CanvasEntityEditableTitle } from 'features/controlLayers/components/common/CanvasEntityTitleEdit';
import { IPAdapterSettings } from 'features/controlLayers/components/IPAdapter/IPAdapterSettings';
import { CanvasEntityStateGate } from 'features/controlLayers/contexts/CanvasEntityStateGate';
import { EntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import type { CanvasEntityIdentifier } from 'features/controlLayers/store/types';
import { memo, useMemo } from 'react';
type Props = {
id: string;
};
export const IPAdapter = memo(({ id }: Props) => {
const entityIdentifier = useMemo<CanvasEntityIdentifier>(() => ({ id, type: 'reference_image' }), [id]);
return (
<EntityIdentifierContext.Provider value={entityIdentifier}>
<CanvasEntityStateGate entityIdentifier={entityIdentifier}>
<CanvasEntityContainer>
<CanvasEntityHeader ps={4} py={5}>
<CanvasEntityEditableTitle />
<Spacer />
<CanvasEntityHeaderCommonActions />
</CanvasEntityHeader>
<IPAdapterSettings />
</CanvasEntityContainer>
</CanvasEntityStateGate>
</EntityIdentifierContext.Provider>
);
});
IPAdapter.displayName = 'IPAdapter';

View File

@@ -1,37 +0,0 @@
/* eslint-disable i18next/no-literal-string */
import { createSelector } from '@reduxjs/toolkit';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import { CanvasEntityGroupList } from 'features/controlLayers/components/CanvasEntityList/CanvasEntityGroupList';
import { IPAdapter } from 'features/controlLayers/components/IPAdapter/IPAdapter';
import { selectCanvasSlice, selectSelectedEntityIdentifier } from 'features/controlLayers/store/selectors';
import { getEntityIdentifier } from 'features/controlLayers/store/types';
import { memo } from 'react';
const selectEntityIdentifiers = createMemoizedSelector(selectCanvasSlice, (canvas) => {
return canvas.referenceImages.entities.map(getEntityIdentifier).toReversed();
});
const selectIsSelected = createSelector(selectSelectedEntityIdentifier, (selectedEntityIdentifier) => {
return selectedEntityIdentifier?.type === 'reference_image';
});
export const IPAdapterList = memo(() => {
const isSelected = useAppSelector(selectIsSelected);
const entityIdentifiers = useAppSelector(selectEntityIdentifiers);
if (entityIdentifiers.length === 0) {
return null;
}
if (entityIdentifiers.length > 0) {
return (
<CanvasEntityGroupList type="reference_image" isSelected={isSelected} entityIdentifiers={entityIdentifiers}>
{entityIdentifiers.map((entityIdentifiers) => (
<IPAdapter key={entityIdentifiers.id} id={entityIdentifiers.id} />
))}
</CanvasEntityGroupList>
);
}
});
IPAdapterList.displayName = 'IPAdapterList';

View File

@@ -1,180 +0,0 @@
import { Flex, IconButton } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { BeginEndStepPct } from 'features/controlLayers/components/common/BeginEndStepPct';
import { CanvasEntitySettingsWrapper } from 'features/controlLayers/components/common/CanvasEntitySettingsWrapper';
import { Weight } from 'features/controlLayers/components/common/Weight';
import { CLIPVisionModel } from 'features/controlLayers/components/IPAdapter/CLIPVisionModel';
import { FLUXReduxImageInfluence } from 'features/controlLayers/components/IPAdapter/FLUXReduxImageInfluence';
import { GlobalReferenceImageModel } from 'features/controlLayers/components/IPAdapter/GlobalReferenceImageModel';
import { IPAdapterMethod } from 'features/controlLayers/components/IPAdapter/IPAdapterMethod';
import { IPAdapterSettingsEmptyState } from 'features/controlLayers/components/IPAdapter/IPAdapterSettingsEmptyState';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import { usePullBboxIntoGlobalReferenceImage } from 'features/controlLayers/hooks/saveCanvasHooks';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
import {
referenceImageIPAdapterBeginEndStepPctChanged,
referenceImageIPAdapterCLIPVisionModelChanged,
referenceImageIPAdapterFLUXReduxImageInfluenceChanged,
referenceImageIPAdapterImageChanged,
referenceImageIPAdapterMethodChanged,
referenceImageIPAdapterModelChanged,
referenceImageIPAdapterWeightChanged,
} from 'features/controlLayers/store/canvasSlice';
import { selectIsFLUX } from 'features/controlLayers/store/paramsSlice';
import { selectCanvasSlice, selectEntity, selectEntityOrThrow } from 'features/controlLayers/store/selectors';
import type {
CanvasEntityIdentifier,
CLIPVisionModelV2,
FLUXReduxImageInfluence as FLUXReduxImageInfluenceType,
IPMethodV2,
} from 'features/controlLayers/store/types';
import type { SetGlobalReferenceImageDndTargetData } from 'features/dnd/dnd';
import { setGlobalReferenceImageDndTarget } from 'features/dnd/dnd';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiBoundingBoxBold } from 'react-icons/pi';
import type { ApiModelConfig, FLUXReduxModelConfig, ImageDTO, IPAdapterModelConfig } from 'services/api/types';
import { IPAdapterImagePreview } from './IPAdapterImagePreview';
const buildSelectIPAdapter = (entityIdentifier: CanvasEntityIdentifier<'reference_image'>) =>
createSelector(
selectCanvasSlice,
(canvas) => selectEntityOrThrow(canvas, entityIdentifier, 'IPAdapterSettings').ipAdapter
);
const IPAdapterSettingsContent = memo(() => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const entityIdentifier = useEntityIdentifierContext('reference_image');
const selectIPAdapter = useMemo(() => buildSelectIPAdapter(entityIdentifier), [entityIdentifier]);
const ipAdapter = useAppSelector(selectIPAdapter);
const onChangeBeginEndStepPct = useCallback(
(beginEndStepPct: [number, number]) => {
dispatch(referenceImageIPAdapterBeginEndStepPctChanged({ entityIdentifier, beginEndStepPct }));
},
[dispatch, entityIdentifier]
);
const onChangeWeight = useCallback(
(weight: number) => {
dispatch(referenceImageIPAdapterWeightChanged({ entityIdentifier, weight }));
},
[dispatch, entityIdentifier]
);
const onChangeIPMethod = useCallback(
(method: IPMethodV2) => {
dispatch(referenceImageIPAdapterMethodChanged({ entityIdentifier, method }));
},
[dispatch, entityIdentifier]
);
const onChangeFLUXReduxImageInfluence = useCallback(
(imageInfluence: FLUXReduxImageInfluenceType) => {
dispatch(referenceImageIPAdapterFLUXReduxImageInfluenceChanged({ entityIdentifier, imageInfluence }));
},
[dispatch, entityIdentifier]
);
const onChangeModel = useCallback(
(modelConfig: IPAdapterModelConfig | FLUXReduxModelConfig | ApiModelConfig) => {
dispatch(referenceImageIPAdapterModelChanged({ entityIdentifier, modelConfig }));
},
[dispatch, entityIdentifier]
);
const onChangeCLIPVisionModel = useCallback(
(clipVisionModel: CLIPVisionModelV2) => {
dispatch(referenceImageIPAdapterCLIPVisionModelChanged({ entityIdentifier, clipVisionModel }));
},
[dispatch, entityIdentifier]
);
const onChangeImage = useCallback(
(imageDTO: ImageDTO | null) => {
dispatch(referenceImageIPAdapterImageChanged({ entityIdentifier, imageDTO }));
},
[dispatch, entityIdentifier]
);
const dndTargetData = useMemo<SetGlobalReferenceImageDndTargetData>(
() => setGlobalReferenceImageDndTarget.getData({ entityIdentifier }, ipAdapter.image?.image_name),
[entityIdentifier, ipAdapter.image?.image_name]
);
const pullBboxIntoIPAdapter = usePullBboxIntoGlobalReferenceImage(entityIdentifier);
const isBusy = useCanvasIsBusy();
const isFLUX = useAppSelector(selectIsFLUX);
return (
<CanvasEntitySettingsWrapper>
<Flex flexDir="column" gap={2} position="relative" w="full">
<Flex gap={2} alignItems="center" w="full">
<GlobalReferenceImageModel modelKey={ipAdapter.model?.key ?? null} onChangeModel={onChangeModel} />
{ipAdapter.type === 'ip_adapter' && (
<CLIPVisionModel model={ipAdapter.clipVisionModel} onChange={onChangeCLIPVisionModel} />
)}
<IconButton
onClick={pullBboxIntoIPAdapter}
isDisabled={isBusy}
variant="ghost"
aria-label={t('controlLayers.pullBboxIntoReferenceImage')}
tooltip={t('controlLayers.pullBboxIntoReferenceImage')}
icon={<PiBoundingBoxBold />}
/>
</Flex>
<Flex gap={2} w="full">
{ipAdapter.type === 'ip_adapter' && (
<Flex flexDir="column" gap={2} w="full">
{!isFLUX && <IPAdapterMethod method={ipAdapter.method} onChange={onChangeIPMethod} />}
<Weight weight={ipAdapter.weight} onChange={onChangeWeight} />
<BeginEndStepPct beginEndStepPct={ipAdapter.beginEndStepPct} onChange={onChangeBeginEndStepPct} />
</Flex>
)}
{ipAdapter.type === 'flux_redux' && (
<Flex flexDir="column" gap={2} w="full" alignItems="flex-start">
<FLUXReduxImageInfluence
imageInfluence={ipAdapter.imageInfluence ?? 'lowest'}
onChange={onChangeFLUXReduxImageInfluence}
/>
</Flex>
)}
<Flex alignItems="center" justifyContent="center" h={32} w={32} aspectRatio="1/1" flexGrow={1}>
<IPAdapterImagePreview
image={ipAdapter.image}
onChangeImage={onChangeImage}
dndTarget={setGlobalReferenceImageDndTarget}
dndTargetData={dndTargetData}
/>
</Flex>
</Flex>
</Flex>
</CanvasEntitySettingsWrapper>
);
});
IPAdapterSettingsContent.displayName = 'IPAdapterSettingsContent';
const buildSelectIPAdapterHasImage = (entityIdentifier: CanvasEntityIdentifier<'reference_image'>) =>
createSelector(selectCanvasSlice, (canvas) => {
const referenceImage = selectEntity(canvas, entityIdentifier);
return !!referenceImage && referenceImage.ipAdapter.image !== null;
});
export const IPAdapterSettings = memo(() => {
const entityIdentifier = useEntityIdentifierContext('reference_image');
const selectIPAdapterHasImage = useMemo(() => buildSelectIPAdapterHasImage(entityIdentifier), [entityIdentifier]);
const hasImage = useAppSelector(selectIPAdapterHasImage);
if (!hasImage) {
return <IPAdapterSettingsEmptyState />;
}
return <IPAdapterSettingsContent />;
});
IPAdapterSettings.displayName = 'IPAdapterSettings';

View File

@@ -4,6 +4,7 @@ import { CanvasEntityHeader } from 'features/controlLayers/components/common/Can
import { CanvasEntityHeaderCommonActions } from 'features/controlLayers/components/common/CanvasEntityHeaderCommonActions';
import { CanvasEntityPreviewImage } from 'features/controlLayers/components/common/CanvasEntityPreviewImage';
import { CanvasEntityEditableTitle } from 'features/controlLayers/components/common/CanvasEntityTitleEdit';
import { InpaintMaskSettings } from 'features/controlLayers/components/InpaintMask/InpaintMaskSettings';
import { CanvasEntityStateGate } from 'features/controlLayers/contexts/CanvasEntityStateGate';
import { InpaintMaskAdapterGate } from 'features/controlLayers/contexts/EntityAdapterContext';
import { EntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
@@ -28,6 +29,7 @@ export const InpaintMask = memo(({ id }: Props) => {
<Spacer />
<CanvasEntityHeaderCommonActions />
</CanvasEntityHeader>
<InpaintMaskSettings />
</CanvasEntityContainer>
</CanvasEntityStateGate>
</InpaintMaskAdapterGate>

View File

@@ -0,0 +1,27 @@
// import { Button, Flex } from '@invoke-ai/ui-library';
// import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
// import { useAddInpaintMaskDenoiseLimit, useAddInpaintMaskNoise } from 'features/controlLayers/hooks/addLayerHooks';
// import { useTranslation } from 'react-i18next';
// import { PiPlusBold } from 'react-icons/pi';
// Removed buttons because the denoise limit is not helpful for many architectures
// Users can access with right click menu instead.
// If buttons for noise or new features are deemed important in the future, add them back here.
export const InpaintMaskAddButtons = () => {
// Buttons are temporarily hidden. To restore, uncomment the code below.
return null;
// const entityIdentifier = useEntityIdentifierContext('inpaint_mask');
// const { t } = useTranslation();
// const addInpaintMaskDenoiseLimit = useAddInpaintMaskDenoiseLimit(entityIdentifier);
// const addInpaintMaskNoise = useAddInpaintMaskNoise(entityIdentifier);
// return (
// <Flex w="full" p={2} justifyContent="center">
// <Button size="sm" variant="ghost" leftIcon={<PiPlusBold />} onClick={addInpaintMaskDenoiseLimit}>
// {t('controlLayers.denoiseLimit')}
// </Button>
// <Button size="sm" variant="ghost" leftIcon={<PiPlusBold />} onClick={addInpaintMaskNoise}>
// {t('controlLayers.imageNoise')}
// </Button>
// </Flex>
// );
};

View File

@@ -0,0 +1,29 @@
import type { IconButtonProps } from '@invoke-ai/ui-library';
import { IconButton } from '@invoke-ai/ui-library';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiXBold } from 'react-icons/pi';
type Props = Omit<IconButtonProps, 'aria-label'> & {
onDelete: () => void;
};
export const InpaintMaskDeleteModifierButton = memo(({ onDelete, ...rest }: Props) => {
const { t } = useTranslation();
return (
<IconButton
tooltip={t('common.delete')}
variant="link"
aria-label={t('common.delete')}
icon={<PiXBold />}
onClick={onDelete}
flexGrow={0}
size="sm"
p={0}
colorScheme="error"
{...rest}
/>
);
});
InpaintMaskDeleteModifierButton.displayName = 'InpaintMaskDeleteModifierButton';

View File

@@ -0,0 +1,70 @@
import { Flex, Slider, SliderFilledTrack, SliderThumb, SliderTrack, Text } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { InpaintMaskDeleteModifierButton } from 'features/controlLayers/components/InpaintMask/InpaintMaskDeleteModifierButton';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import {
inpaintMaskDenoiseLimitChanged,
inpaintMaskDenoiseLimitDeleted,
} from 'features/controlLayers/store/canvasSlice';
import { selectCanvasSlice, selectEntityOrThrow } from 'features/controlLayers/store/selectors';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
export const InpaintMaskDenoiseLimitSlider = memo(() => {
const entityIdentifier = useEntityIdentifierContext('inpaint_mask');
const { t } = useTranslation();
const dispatch = useAppDispatch();
const selectDenoiseLimit = useMemo(
() =>
createSelector(
selectCanvasSlice,
(canvas) => selectEntityOrThrow(canvas, entityIdentifier, 'InpaintMaskDenoiseLimitSlider').denoiseLimit
),
[entityIdentifier]
);
const denoiseLimit = useAppSelector(selectDenoiseLimit);
const handleDenoiseLimitChange = useCallback(
(value: number) => {
dispatch(inpaintMaskDenoiseLimitChanged({ entityIdentifier, denoiseLimit: value }));
},
[dispatch, entityIdentifier]
);
const onDeleteDenoiseLimit = useCallback(() => {
dispatch(inpaintMaskDenoiseLimitDeleted({ entityIdentifier }));
}, [dispatch, entityIdentifier]);
if (denoiseLimit === undefined) {
return null;
}
return (
<Flex direction="column" gap={1} w="full" px={2} pb={2}>
<Flex justifyContent="space-between" w="full" alignItems="center">
<Text fontSize="sm">{t('controlLayers.denoiseLimit')}</Text>
<Flex alignItems="center" gap={1}>
<Text fontSize="sm">{denoiseLimit.toFixed(2)}</Text>
<InpaintMaskDeleteModifierButton onDelete={onDeleteDenoiseLimit} />
</Flex>
</Flex>
<Slider
aria-label={t('controlLayers.denoiseLimit')}
value={denoiseLimit}
min={0}
max={1}
step={0.01}
onChange={handleDenoiseLimitChange}
>
<SliderTrack>
<SliderFilledTrack />
</SliderTrack>
<SliderThumb />
</Slider>
</Flex>
);
});
InpaintMaskDenoiseLimitSlider.displayName = 'InpaintMaskDenoiseLimitSlider';

View File

@@ -7,6 +7,7 @@ import { CanvasEntityMenuItemsDuplicate } from 'features/controlLayers/component
import { CanvasEntityMenuItemsMergeDown } from 'features/controlLayers/components/common/CanvasEntityMenuItemsMergeDown';
import { CanvasEntityMenuItemsSave } from 'features/controlLayers/components/common/CanvasEntityMenuItemsSave';
import { CanvasEntityMenuItemsTransform } from 'features/controlLayers/components/common/CanvasEntityMenuItemsTransform';
import { InpaintMaskMenuItemsAddModifiers } from 'features/controlLayers/components/InpaintMask/InpaintMaskMenuItemsAddModifiers';
import { InpaintMaskMenuItemsConvertToSubMenu } from 'features/controlLayers/components/InpaintMask/InpaintMaskMenuItemsConvertToSubMenu';
import { InpaintMaskMenuItemsCopyToSubMenu } from 'features/controlLayers/components/InpaintMask/InpaintMaskMenuItemsCopyToSubMenu';
import { memo } from 'react';
@@ -20,6 +21,8 @@ export const InpaintMaskMenuItems = memo(() => {
<CanvasEntityMenuItemsDelete asIcon />
</IconMenuItemGroup>
<MenuDivider />
<InpaintMaskMenuItemsAddModifiers />
<MenuDivider />
<CanvasEntityMenuItemsTransform />
<MenuDivider />
<CanvasEntityMenuItemsMergeDown />

View File

@@ -0,0 +1,27 @@
import { MenuItem } from '@invoke-ai/ui-library';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import { useAddInpaintMaskDenoiseLimit, useAddInpaintMaskNoise } from 'features/controlLayers/hooks/addLayerHooks';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
export const InpaintMaskMenuItemsAddModifiers = memo(() => {
const entityIdentifier = useEntityIdentifierContext('inpaint_mask');
const { t } = useTranslation();
const isBusy = useCanvasIsBusy();
const addInpaintMaskNoise = useAddInpaintMaskNoise(entityIdentifier);
const addInpaintMaskDenoiseLimit = useAddInpaintMaskDenoiseLimit(entityIdentifier);
return (
<>
<MenuItem onClick={addInpaintMaskNoise} isDisabled={isBusy}>
{t('controlLayers.addImageNoise')}
</MenuItem>
<MenuItem onClick={addInpaintMaskDenoiseLimit} isDisabled={isBusy}>
{t('controlLayers.addDenoiseLimit')}
</MenuItem>
</>
);
});
InpaintMaskMenuItemsAddModifiers.displayName = 'InpaintMaskMenuItemsAddModifiers';

View File

@@ -0,0 +1,67 @@
import { Flex, Slider, SliderFilledTrack, SliderThumb, SliderTrack, Text } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { InpaintMaskDeleteModifierButton } from 'features/controlLayers/components/InpaintMask/InpaintMaskDeleteModifierButton';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import { inpaintMaskNoiseChanged, inpaintMaskNoiseDeleted } from 'features/controlLayers/store/canvasSlice';
import { selectCanvasSlice, selectEntityOrThrow } from 'features/controlLayers/store/selectors';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
export const InpaintMaskNoiseSlider = memo(() => {
const entityIdentifier = useEntityIdentifierContext('inpaint_mask');
const { t } = useTranslation();
const dispatch = useAppDispatch();
const selectNoiseLevel = useMemo(
() =>
createSelector(
selectCanvasSlice,
(canvas) => selectEntityOrThrow(canvas, entityIdentifier, 'InpaintMaskNoiseSlider').noiseLevel
),
[entityIdentifier]
);
const noiseLevel = useAppSelector(selectNoiseLevel);
const handleNoiseChange = useCallback(
(value: number) => {
dispatch(inpaintMaskNoiseChanged({ entityIdentifier, noiseLevel: value }));
},
[dispatch, entityIdentifier]
);
const onDeleteNoise = useCallback(() => {
dispatch(inpaintMaskNoiseDeleted({ entityIdentifier }));
}, [dispatch, entityIdentifier]);
if (noiseLevel === undefined) {
return null;
}
return (
<Flex direction="column" gap={1} w="full" px={2} pb={2}>
<Flex justifyContent="space-between" w="full" alignItems="center">
<Text fontSize="sm">{t('controlLayers.imageNoise')}</Text>
<Flex alignItems="center" gap={1}>
<Text fontSize="sm">{Math.round(noiseLevel * 100)}%</Text>
<InpaintMaskDeleteModifierButton onDelete={onDeleteNoise} />
</Flex>
</Flex>
<Slider
aria-label={t('controlLayers.imageNoise')}
value={noiseLevel}
min={0}
max={1}
step={0.01}
onChange={handleNoiseChange}
>
<SliderTrack>
<SliderFilledTrack />
</SliderTrack>
<SliderThumb />
</Slider>
</Flex>
);
});
InpaintMaskNoiseSlider.displayName = 'InpaintMaskNoiseSlider';

View File

@@ -0,0 +1,47 @@
import { createSelector } from '@reduxjs/toolkit';
import { useAppSelector } from 'app/store/storeHooks';
import { CanvasEntitySettingsWrapper } from 'features/controlLayers/components/common/CanvasEntitySettingsWrapper';
import { InpaintMaskDenoiseLimitSlider } from 'features/controlLayers/components/InpaintMask/InpaintMaskDenoiseLimitSlider';
import { InpaintMaskNoiseSlider } from 'features/controlLayers/components/InpaintMask/InpaintMaskNoiseSlider';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import { selectCanvasSlice, selectEntityOrThrow } from 'features/controlLayers/store/selectors';
import type { CanvasEntityIdentifier } from 'features/controlLayers/store/types';
import { memo, useMemo } from 'react';
const buildSelectHasDenoiseLimit = (entityIdentifier: CanvasEntityIdentifier<'inpaint_mask'>) =>
createSelector(selectCanvasSlice, (canvas) => {
const entity = selectEntityOrThrow(canvas, entityIdentifier, 'InpaintMaskSettings');
return entity.denoiseLimit !== undefined;
});
const buildSelectHasNoiseLevel = (entityIdentifier: CanvasEntityIdentifier<'inpaint_mask'>) =>
createSelector(selectCanvasSlice, (canvas) => {
const entity = selectEntityOrThrow(canvas, entityIdentifier, 'InpaintMaskSettings');
return entity.noiseLevel !== undefined;
});
export const InpaintMaskSettings = memo(() => {
const entityIdentifier = useEntityIdentifierContext('inpaint_mask');
const selectHasDenoiseLimit = useMemo(() => buildSelectHasDenoiseLimit(entityIdentifier), [entityIdentifier]);
const selectHasNoiseLevel = useMemo(() => buildSelectHasNoiseLevel(entityIdentifier), [entityIdentifier]);
const hasDenoiseLimit = useAppSelector(selectHasDenoiseLimit);
const hasNoiseLevel = useAppSelector(selectHasNoiseLevel);
if (!hasNoiseLevel && !hasDenoiseLimit) {
// If we show the <InpaintMaskAddButtons /> below, we can remove this check.
// Until then, if there are no sliders to show for the mask settings, return null. This prevents rendering an
// empty settings wrapper div, which adds unnecessary space in the UI.
return null;
}
return (
<CanvasEntitySettingsWrapper>
{/* {!hasNoiseLevel && !hasDenoiseLimit && <InpaintMaskAddButtons />} */}
{hasNoiseLevel && <InpaintMaskNoiseSlider />}
{hasDenoiseLimit && <InpaintMaskDenoiseLimitSlider />}
</CanvasEntitySettingsWrapper>
);
});
InpaintMaskSettings.displayName = 'InpaintMaskSettings';
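buildSelectHasDenoiseLimit and buildSelectHasNoiseLevel follow the selector-factory pattern used throughout these components: a memoized selector is constructed for a specific entity identifier and cached with useMemo so each instance reads only its own entity. A rough sketch of the same pattern extracted into a hypothetical hook (useInpaintMaskNoiseLevel is illustrative, not part of this change):
import { createSelector } from '@reduxjs/toolkit';
import { useAppSelector } from 'app/store/storeHooks';
import { selectCanvasSlice, selectEntityOrThrow } from 'features/controlLayers/store/selectors';
import type { CanvasEntityIdentifier } from 'features/controlLayers/store/types';
import { useMemo } from 'react';
// Hypothetical hook: build a per-entity memoized selector and cache it for the lifetime of the entity identifier.
export const useInpaintMaskNoiseLevel = (entityIdentifier: CanvasEntityIdentifier<'inpaint_mask'>) => {
const selectNoiseLevel = useMemo(
() =>
createSelector(selectCanvasSlice, (canvas) => {
// The third argument is a caller label used in the error message, matching the surrounding code.
return selectEntityOrThrow(canvas, entityIdentifier, 'useInpaintMaskNoiseLevel').noiseLevel;
}),
[entityIdentifier]
);
return useAppSelector(selectNoiseLevel);
};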

View File

@@ -2,8 +2,7 @@ import { Checkbox, ConfirmationAlertDialog, Flex, FormControl, FormLabel, Text }
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useAssertSingleton } from 'common/hooks/useAssertSingleton';
import { buildUseBoolean } from 'common/hooks/useBoolean';
import { newCanvasSessionRequested, newGallerySessionRequested } from 'features/controlLayers/store/actions';
import { useImageViewer } from 'features/gallery/components/ImageViewer/useImageViewer';
import { canvasSessionReset, generateSessionReset } from 'features/controlLayers/store/canvasStagingAreaSlice';
import {
selectSystemShouldConfirmOnNewSession,
shouldConfirmOnNewSessionToggled,
@@ -17,15 +16,13 @@ const [useNewCanvasSessionDialog] = buildUseBoolean(false);
export const useNewGallerySession = () => {
const dispatch = useAppDispatch();
const imageViewer = useImageViewer();
const shouldConfirmOnNewSession = useAppSelector(selectSystemShouldConfirmOnNewSession);
const newSessionDialog = useNewGallerySessionDialog();
const newGallerySessionImmediate = useCallback(() => {
dispatch(newGallerySessionRequested());
imageViewer.open();
dispatch(generateSessionReset());
dispatch(activeTabCanvasRightPanelChanged('gallery'));
}, [dispatch, imageViewer]);
}, [dispatch]);
const newGallerySessionWithDialog = useCallback(() => {
if (shouldConfirmOnNewSession) {
@@ -40,15 +37,13 @@ export const useNewGallerySession = () => {
export const useNewCanvasSession = () => {
const dispatch = useAppDispatch();
const imageViewer = useImageViewer();
const shouldConfirmOnNewSession = useAppSelector(selectSystemShouldConfirmOnNewSession);
const newSessionDialog = useNewCanvasSessionDialog();
const newCanvasSessionImmediate = useCallback(() => {
dispatch(newCanvasSessionRequested());
imageViewer.close();
dispatch(canvasSessionReset());
dispatch(activeTabCanvasRightPanelChanged('layers'));
}, [dispatch, imageViewer]);
}, [dispatch]);
const newCanvasSessionWithDialog = useCallback(() => {
if (shouldConfirmOnNewSession) {

View File

@@ -1,5 +1,5 @@
import { MenuItem } from '@invoke-ai/ui-library';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import { useRefImageIdContext } from 'features/controlLayers/contexts/RefImageIdContext';
import { usePullBboxIntoGlobalReferenceImage } from 'features/controlLayers/hooks/saveCanvasHooks';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
import { memo } from 'react';
@@ -8,8 +8,8 @@ import { PiBoundingBoxBold } from 'react-icons/pi';
export const IPAdapterMenuItemPullBbox = memo(() => {
const { t } = useTranslation();
const entityIdentifier = useEntityIdentifierContext('reference_image');
const pullBboxIntoIPAdapter = usePullBboxIntoGlobalReferenceImage(entityIdentifier);
const id = useRefImageIdContext();
const pullBboxIntoIPAdapter = usePullBboxIntoGlobalReferenceImage(id);
const isBusy = useCanvasIsBusy();
return (

View File

@@ -3,7 +3,7 @@ import { IconMenuItemGroup } from 'common/components/IconMenuItem';
import { CanvasEntityMenuItemsArrange } from 'features/controlLayers/components/common/CanvasEntityMenuItemsArrange';
import { CanvasEntityMenuItemsDelete } from 'features/controlLayers/components/common/CanvasEntityMenuItemsDelete';
import { CanvasEntityMenuItemsDuplicate } from 'features/controlLayers/components/common/CanvasEntityMenuItemsDuplicate';
import { IPAdapterMenuItemPullBbox } from 'features/controlLayers/components/IPAdapter/IPAdapterMenuItemPullBbox';
import { IPAdapterMenuItemPullBbox } from 'features/controlLayers/components/RefImage/IPAdapterMenuItemPullBbox';
import { memo } from 'react';
export const IPAdapterMenuItems = memo(() => {

View File

@@ -0,0 +1,132 @@
import {
Divider,
Flex,
IconButton,
Image,
Popover,
PopoverAnchor,
PopoverArrow,
PopoverBody,
PopoverContent,
Portal,
Skeleton,
} from '@invoke-ai/ui-library';
import { skipToken } from '@reduxjs/toolkit/query';
import { POPPER_MODIFIERS } from 'common/components/InformationalPopover/constants';
import type { UseDisclosure } from 'common/hooks/useBoolean';
import { useDisclosure } from 'common/hooks/useBoolean';
import { DEFAULT_FILTER, useFilterableOutsideClick } from 'common/hooks/useFilterableOutsideClick';
import { RefImageHeader } from 'features/controlLayers/components/RefImage/RefImageHeader';
import { RefImageSettings } from 'features/controlLayers/components/RefImage/RefImageSettings';
import { useRefImageEntity } from 'features/controlLayers/components/RefImage/useRefImageEntity';
import { useRefImageIdContext } from 'features/controlLayers/contexts/RefImageIdContext';
import { memo, useCallback, useRef } from 'react';
import { PiImageBold } from 'react-icons/pi';
import { useGetImageDTOQuery } from 'services/api/endpoints/images';
// There is some awkwardness here with closing the popover when clicking outside of it, related to Chakra's
// handling of refs, portals, outside clicks, and a race condition with framer-motion animations that can leave
// the popover closed when its internal state is still open.
//
// We have to manually manage the popover open state to work around the race condition, and then have to do special
// handling to close the popover when clicking outside of it.
// We have to reach outside react to identify the popover trigger element instead of using refs, thanks to how Chakra
// handles refs for PopoverAnchor internally. Maybe there is some way to merge them but I couldn't figure it out.
const getRefImagePopoverTriggerId = (id: string) => `ref-image-popover-trigger-${id}`;
export const RefImage = memo(() => {
const id = useRefImageIdContext();
const ref = useRef<HTMLDivElement>(null);
const disclosure = useDisclosure(false);
// This filter prevents the popover from closing when clicking on a sibling portal element, like the dropdown menu
// inside the ref image settings popover. It also prevents the popover from closing when clicking on the popover's
// own trigger element.
const filter = useCallback(
(el: HTMLElement | SVGElement) => {
return DEFAULT_FILTER(el) || el.id === getRefImagePopoverTriggerId(id);
},
[id]
);
useFilterableOutsideClick({ ref, handler: disclosure.close, filter });
return (
<Popover
// The popover contains a react-select component, which uses a portal to render its options. This portal
// is itself not lazy. As a result, if we do not unmount the popover when it is closed, the react-select
// component still exists but is invisible, and intercepts clicks!
isLazy
lazyBehavior="unmount"
isOpen={disclosure.isOpen}
closeOnBlur={false}
modifiers={POPPER_MODIFIERS}
>
<Thumbnail disclosure={disclosure} />
<Portal>
<PopoverContent ref={ref} w={400}>
<PopoverArrow />
<PopoverBody>
<Flex flexDir="column" gap={2} w="full" h="full">
<RefImageHeader />
<Divider />
<RefImageSettings />
</Flex>
</PopoverBody>
</PopoverContent>
</Portal>
</Popover>
);
});
RefImage.displayName = 'RefImage';
const Thumbnail = memo(({ disclosure }: { disclosure: UseDisclosure }) => {
const id = useRefImageIdContext();
const entity = useRefImageEntity(id);
const { data: imageDTO } = useGetImageDTOQuery(entity.config.image?.image_name ?? skipToken);
if (!entity.config.image) {
return (
<PopoverAnchor>
<IconButton
id={getRefImagePopoverTriggerId(id)}
aria-label="Open Reference Image Settings"
h="full"
variant="ghost"
aspectRatio="1/1"
borderWidth="2px !important"
borderStyle="dashed !important"
borderColor="errorAlpha.500"
borderRadius="base"
icon={<PiImageBold />}
colorScheme="error"
onClick={disclosure.toggle}
flexShrink={0}
/>
</PopoverAnchor>
);
}
return (
<PopoverAnchor>
<Image
borderWidth={1}
borderStyle="solid"
id={getRefImagePopoverTriggerId(id)}
role="button"
src={imageDTO?.thumbnail_url}
objectFit="contain"
aspectRatio="1/1"
// width={imageDTO?.width}
height={imageDTO?.height}
fallback={<Skeleton h="full" aspectRatio="1/1" />}
maxW="full"
maxH="full"
borderRadius="base"
onClick={disclosure.toggle}
flexShrink={0}
// sx={imageSx}
// data-is-open={disclosure.isOpen}
/>
</PopoverAnchor>
);
});
Thumbnail.displayName = 'Thumbnail';
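The comment near the top of RefImage.tsx describes closing the popover via a filtered outside-click handler because Chakra's built-in blur handling races with framer-motion animations. The hook's implementation is not part of this diff; the following is a rough sketch of what a filterable outside-click hook could look like, assuming it listens for pointerdown on the document (useOutsideClickSketch is illustrative and is not the repository's useFilterableOutsideClick):
import type { RefObject } from 'react';
import { useEffect } from 'react';
type Options = {
ref: RefObject<HTMLElement>;
handler: () => void;
// Return true to ignore the click (e.g. clicks on sibling portals or the popover trigger).
filter?: (el: HTMLElement | SVGElement) => boolean;
};
// Illustrative sketch only, not the repository's implementation.
export const useOutsideClickSketch = ({ ref, handler, filter }: Options) => {
useEffect(() => {
const onPointerDown = (e: PointerEvent) => {
const target = e.target;
if (!(target instanceof HTMLElement) && !(target instanceof SVGElement)) {
return;
}
// Ignore clicks inside the element itself.
if (ref.current?.contains(target)) {
return;
}
// Ignore clicks the caller explicitly filters out.
if (filter?.(target)) {
return;
}
handler();
};
document.addEventListener('pointerdown', onPointerDown);
return () => document.removeEventListener('pointerdown', onPointerDown);
}, [ref, handler, filter]);
};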

View File

@@ -0,0 +1,41 @@
import { Flex, IconButton, Text } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { useRefImageEntity } from 'features/controlLayers/components/RefImage/useRefImageEntity';
import { useRefImageIdContext } from 'features/controlLayers/contexts/RefImageIdContext';
import { refImageDeleted } from 'features/controlLayers/store/refImagesSlice';
import { memo, useCallback } from 'react';
import { PiTrashBold } from 'react-icons/pi';
export const RefImageHeader = memo(() => {
const id = useRefImageIdContext();
const dispatch = useAppDispatch();
const entity = useRefImageEntity(id);
const deleteRefImage = useCallback(() => {
dispatch(refImageDeleted({ id }));
}, [dispatch, id]);
return (
<Flex justifyContent="space-between" alignItems="center" w="full">
{entity.config.image !== null && (
<Text fontWeight="semibold" color="base.300">
Reference Image
</Text>
)}
{entity.config.image === null && (
<Text fontWeight="semibold" color="base.300">
No Reference Image Selected
</Text>
)}
<IconButton
size="xs"
variant="link"
alignSelf="stretch"
icon={<PiTrashBold />}
onClick={deleteRefImage}
aria-label="Delete reference image"
colorScheme="error"
/>
</Flex>
);
});
RefImageHeader.displayName = 'RefImageHeader';

View File

@@ -1,7 +1,7 @@
import { Flex } from '@invoke-ai/ui-library';
import { useStore } from '@nanostores/react';
import { skipToken } from '@reduxjs/toolkit/query';
import { UploadImageButton } from 'common/hooks/useImageUploadButton';
import { UploadImageIconButton } from 'common/hooks/useImageUploadButton';
import type { ImageWithDims } from 'features/controlLayers/store/types';
import type { setGlobalReferenceImageDndTarget, setRegionalGuidanceReferenceImageDndTarget } from 'features/dnd/dnd';
import { DndDropTarget } from 'features/dnd/DndDropTarget';
@@ -21,7 +21,7 @@ type Props<T extends typeof setGlobalReferenceImageDndTarget | typeof setRegiona
dndTargetData: ReturnType<T['getData']>;
};
export const IPAdapterImagePreview = memo(
export const RefImageImage = memo(
<T extends typeof setGlobalReferenceImageDndTarget | typeof setRegionalGuidanceReferenceImageDndTarget>({
image,
onChangeImage,
@@ -51,7 +51,7 @@ export const IPAdapterImagePreview = memo(
return (
<Flex position="relative" w="full" h="full" alignItems="center" data-error={!imageDTO && !image?.image_name}>
{!imageDTO && (
<UploadImageButton
<UploadImageIconButton
w="full"
h="full"
isError={!imageDTO && !image?.image_name}
@@ -77,4 +77,4 @@ export const IPAdapterImagePreview = memo(
}
);
IPAdapterImagePreview.displayName = 'IPAdapterImagePreview';
RefImageImage.displayName = 'RefImageImage';

View File

@@ -0,0 +1,94 @@
import type { FlexProps } from '@invoke-ai/ui-library';
import { Button, Flex } from '@invoke-ai/ui-library';
import { useAppStore } from 'app/store/nanostores/store';
import { useAppSelector } from 'app/store/storeHooks';
import { useImageUploadButton } from 'common/hooks/useImageUploadButton';
import { RefImage } from 'features/controlLayers/components/RefImage/RefImage';
import { RefImageIdContext } from 'features/controlLayers/contexts/RefImageIdContext';
import { getDefaultRefImageConfig } from 'features/controlLayers/hooks/addLayerHooks';
import { refImageAdded, selectRefImageEntityIds } from 'features/controlLayers/store/refImagesSlice';
import { imageDTOToImageWithDims } from 'features/controlLayers/store/util';
import { addGlobalReferenceImageDndTarget } from 'features/dnd/dnd';
import { DndDropTarget } from 'features/dnd/DndDropTarget';
import { memo, useMemo } from 'react';
import { PiUploadBold } from 'react-icons/pi';
import type { ImageDTO } from 'services/api/types';
export const RefImageList = memo((props: FlexProps) => {
const ids = useAppSelector(selectRefImageEntityIds);
return (
<Flex gap={2} h={16} {...props}>
{ids.map((id) => (
<RefImageIdContext.Provider key={id} value={id}>
<RefImage />
</RefImageIdContext.Provider>
))}
{ids.length < 5 && <AddRefImageDropTargetAndButton />}
{ids.length >= 5 && <MaxRefImages />}
</Flex>
);
});
RefImageList.displayName = 'RefImageList';
const dndTargetData = addGlobalReferenceImageDndTarget.getData();
const MaxRefImages = memo(() => {
return (
<Button
position="relative"
size="sm"
variant="ghost"
h="full"
w="full"
borderWidth="2px !important"
borderStyle="dashed !important"
borderRadius="base"
isDisabled
>
Max Ref Images
</Button>
);
});
MaxRefImages.displayName = 'MaxRefImages';
const AddRefImageDropTargetAndButton = memo(() => {
const { dispatch, getState } = useAppStore();
const uploadOptions = useMemo(
() =>
({
onUpload: (imageDTO: ImageDTO) => {
const config = getDefaultRefImageConfig(getState);
config.image = imageDTOToImageWithDims(imageDTO);
dispatch(refImageAdded({ overrides: { config } }));
},
allowMultiple: false,
}) as const,
[dispatch, getState]
);
const uploadApi = useImageUploadButton(uploadOptions);
return (
<>
<Button
position="relative"
size="sm"
variant="ghost"
h="full"
w="full"
borderWidth="2px !important"
borderStyle="dashed !important"
borderRadius="base"
leftIcon={<PiUploadBold />}
{...uploadApi.getUploadButtonProps()}
>
Reference Image
<input {...uploadApi.getUploadInputProps()} />
<DndDropTarget label="Drop" dndTarget={addGlobalReferenceImageDndTarget} dndTargetData={dndTargetData} />
</Button>
</>
);
});
AddRefImageDropTargetAndButton.displayName = 'AddRefImageDropTargetAndButton';
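
AddRefImageDropTargetAndButton above, like the empty-state components later in this diff, uses useImageUploadButton's prop getters: getUploadButtonProps is spread onto the visible button and getUploadInputProps onto a hidden file input that actually receives the click. Below is a minimal sketch of that prop-getter shape; the real hook uploads the chosen file and hands an ImageDTO to onUpload, while this illustrative version simply passes the File through, and all names are hypothetical.

import type { ChangeEvent } from 'react';
import { useCallback, useMemo, useRef } from 'react';

type UploadButtonOptions = {
  onUpload: (file: File) => void;
  allowMultiple?: boolean;
};

export const useFileUploadButtonSketch = ({ onUpload, allowMultiple = false }: UploadButtonOptions) => {
  const inputRef = useRef<HTMLInputElement>(null);

  // Spread onto the visible button; clicking it forwards to the hidden file input.
  const getUploadButtonProps = useCallback(
    () => ({
      onClick: () => inputRef.current?.click(),
    }),
    []
  );

  // Spread onto a hidden <input type="file"> rendered alongside the button.
  const getUploadInputProps = useCallback(
    () => ({
      ref: inputRef,
      type: 'file' as const,
      multiple: allowMultiple,
      hidden: true,
      onChange: (e: ChangeEvent<HTMLInputElement>) => {
        const file = e.target.files?.[0];
        if (file) {
          onUpload(file);
        }
        // Reset so picking the same file again still fires onChange.
        e.target.value = '';
      },
    }),
    [allowMultiple, onUpload]
  );

  return useMemo(() => ({ getUploadButtonProps, getUploadInputProps }), [getUploadButtonProps, getUploadInputProps]);
};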

View File

@@ -12,7 +12,7 @@ type Props = {
onChangeModel: (modelConfig: IPAdapterModelConfig | FLUXReduxModelConfig | ApiModelConfig) => void;
};
export const GlobalReferenceImageModel = memo(({ modelKey, onChangeModel }: Props) => {
export const RefImageModel = memo(({ modelKey, onChangeModel }: Props) => {
const { t } = useTranslation();
const currentBaseModel = useAppSelector(selectBase);
const [modelConfigs, { isLoading }] = useGlobalReferenceImageModels();
@@ -60,4 +60,4 @@ export const GlobalReferenceImageModel = memo(({ modelKey, onChangeModel }: Prop
);
});
GlobalReferenceImageModel.displayName = 'GlobalReferenceImageModel';
RefImageModel.displayName = 'RefImageModel';

View File

@@ -0,0 +1,57 @@
import { Button, Flex, Text } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { useImageUploadButton } from 'common/hooks/useImageUploadButton';
import { useRefImageIdContext } from 'features/controlLayers/contexts/RefImageIdContext';
import type { SetGlobalReferenceImageDndTargetData } from 'features/dnd/dnd';
import { setGlobalReferenceImageDndTarget } from 'features/dnd/dnd';
import { DndDropTarget } from 'features/dnd/DndDropTarget';
import { setGlobalReferenceImage } from 'features/imageActions/actions';
import { activeTabCanvasRightPanelChanged } from 'features/ui/store/uiSlice';
import { memo, useCallback, useMemo } from 'react';
import { Trans, useTranslation } from 'react-i18next';
import type { ImageDTO } from 'services/api/types';
export const RefImageNoImageState = memo(() => {
const { t } = useTranslation();
const id = useRefImageIdContext();
const dispatch = useAppDispatch();
const onUpload = useCallback(
(imageDTO: ImageDTO) => {
setGlobalReferenceImage({ imageDTO, id, dispatch });
},
[dispatch, id]
);
const uploadApi = useImageUploadButton({ onUpload, allowMultiple: false });
const onClickGalleryButton = useCallback(() => {
dispatch(activeTabCanvasRightPanelChanged('gallery'));
}, [dispatch]);
const dndTargetData = useMemo<SetGlobalReferenceImageDndTargetData>(
() => setGlobalReferenceImageDndTarget.getData({ id }),
[id]
);
const components = useMemo(
() => ({
UploadButton: <Button size="sm" variant="link" color="base.300" {...uploadApi.getUploadButtonProps()} />,
GalleryButton: <Button onClick={onClickGalleryButton} size="sm" variant="link" color="base.300" />,
}),
[onClickGalleryButton, uploadApi]
);
return (
<Flex flexDir="column" gap={3} position="relative" w="full" p={4}>
<Text textAlign="center" color="base.300">
<Trans i18nKey="controlLayers.referenceImageEmptyState" components={components} />
</Text>
<input {...uploadApi.getUploadInputProps()} />
<DndDropTarget
dndTarget={setGlobalReferenceImageDndTarget}
dndTargetData={dndTargetData}
label={t('controlLayers.useImage')}
/>
</Flex>
);
});
RefImageNoImageState.displayName = 'RefImageNoImageState';
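
RefImageNoImageState above leans on react-i18next's Trans component with a components map: the translation string behind controlLayers.referenceImageEmptyState contains named tags, and each tag is replaced by the matching element from the map, so the upload and gallery buttons render inline within the translated sentence. A minimal sketch of the pattern follows; the translation key and string here are made up for illustration.

import { Button } from '@invoke-ai/ui-library';
import { Trans } from 'react-i18next';

// Hypothetical translation entry, e.g. in en.json:
//   "example.refImageEmptyState": "<UploadButton>Upload an image</UploadButton> or <GalleryButton>drag one from the gallery</GalleryButton>."
export const EmptyStateHintSketch = ({ onOpenGallery }: { onOpenGallery: () => void }) => (
  <Trans
    i18nKey="example.refImageEmptyState"
    components={{
      // Each key matches a named tag in the translation string; the tag's
      // translated children become the button's children.
      UploadButton: <Button size="sm" variant="link" />,
      GalleryButton: <Button size="sm" variant="link" onClick={onOpenGallery} />,
    }}
  />
);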

View File

@@ -1,7 +1,7 @@
import { Button, Flex, Text } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { useImageUploadButton } from 'common/hooks/useImageUploadButton';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import { useRefImageIdContext } from 'features/controlLayers/contexts/RefImageIdContext';
import { usePullBboxIntoGlobalReferenceImage } from 'features/controlLayers/hooks/saveCanvasHooks';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
import type { SetGlobalReferenceImageDndTargetData } from 'features/dnd/dnd';
@@ -13,26 +13,26 @@ import { memo, useCallback, useMemo } from 'react';
import { Trans, useTranslation } from 'react-i18next';
import type { ImageDTO } from 'services/api/types';
export const IPAdapterSettingsEmptyState = memo(() => {
export const RefImageNoImageStateWithCanvasOptions = memo(() => {
const { t } = useTranslation();
const entityIdentifier = useEntityIdentifierContext('reference_image');
const id = useRefImageIdContext();
const dispatch = useAppDispatch();
const isBusy = useCanvasIsBusy();
const onUpload = useCallback(
(imageDTO: ImageDTO) => {
setGlobalReferenceImage({ imageDTO, entityIdentifier, dispatch });
setGlobalReferenceImage({ imageDTO, id, dispatch });
},
[dispatch, entityIdentifier]
[dispatch, id]
);
const uploadApi = useImageUploadButton({ onUpload, allowMultiple: false });
const onClickGalleryButton = useCallback(() => {
dispatch(activeTabCanvasRightPanelChanged('gallery'));
}, [dispatch]);
const pullBboxIntoIPAdapter = usePullBboxIntoGlobalReferenceImage(entityIdentifier);
const pullBboxIntoIPAdapter = usePullBboxIntoGlobalReferenceImage(id);
const dndTargetData = useMemo<SetGlobalReferenceImageDndTargetData>(
() => setGlobalReferenceImageDndTarget.getData({ entityIdentifier }),
[entityIdentifier]
() => setGlobalReferenceImageDndTarget.getData({ id }),
[id]
);
const components = useMemo(
@@ -53,7 +53,7 @@ export const IPAdapterSettingsEmptyState = memo(() => {
return (
<Flex flexDir="column" gap={3} position="relative" w="full" p={4}>
<Text textAlign="center" color="base.300">
<Trans i18nKey="controlLayers.referenceImageEmptyState" components={components} />
<Trans i18nKey="controlLayers.referenceImageEmptyStateWithCanvasOptions" components={components} />
</Text>
<input {...uploadApi.getUploadInputProps()} />
<DndDropTarget
@@ -66,4 +66,4 @@ export const IPAdapterSettingsEmptyState = memo(() => {
);
});
IPAdapterSettingsEmptyState.displayName = 'IPAdapterSettingsEmptyState';
RefImageNoImageStateWithCanvasOptions.displayName = 'RefImageNoImageStateWithCanvasOptions';

View File

@@ -0,0 +1,186 @@
import { Flex } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { BeginEndStepPct } from 'features/controlLayers/components/common/BeginEndStepPct';
import { FLUXReduxImageInfluence } from 'features/controlLayers/components/common/FLUXReduxImageInfluence';
import { IPAdapterCLIPVisionModel } from 'features/controlLayers/components/common/IPAdapterCLIPVisionModel';
import { PullBboxIntoRefImageIconButton } from 'features/controlLayers/components/common/PullBboxIntoRefImageIconButton';
import { Weight } from 'features/controlLayers/components/common/Weight';
import { IPAdapterMethod } from 'features/controlLayers/components/RefImage/IPAdapterMethod';
import { RefImageModel } from 'features/controlLayers/components/RefImage/RefImageModel';
import { RefImageNoImageState } from 'features/controlLayers/components/RefImage/RefImageNoImageState';
import { RefImageNoImageStateWithCanvasOptions } from 'features/controlLayers/components/RefImage/RefImageNoImageStateWithCanvasOptions';
import {
CanvasManagerProviderGate,
useCanvasManagerSafe,
} from 'features/controlLayers/contexts/CanvasManagerProviderGate';
import { useRefImageIdContext } from 'features/controlLayers/contexts/RefImageIdContext';
import { selectIsFLUX } from 'features/controlLayers/store/paramsSlice';
import {
refImageFLUXReduxImageInfluenceChanged,
refImageImageChanged,
refImageIPAdapterBeginEndStepPctChanged,
refImageIPAdapterCLIPVisionModelChanged,
refImageIPAdapterMethodChanged,
refImageIPAdapterWeightChanged,
refImageModelChanged,
selectRefImageEntity,
selectRefImageEntityOrThrow,
selectRefImagesSlice,
} from 'features/controlLayers/store/refImagesSlice';
import type {
CLIPVisionModelV2,
FLUXReduxImageInfluence as FLUXReduxImageInfluenceType,
IPMethodV2,
} from 'features/controlLayers/store/types';
import { isFLUXReduxConfig, isIPAdapterConfig } from 'features/controlLayers/store/types';
import type { SetGlobalReferenceImageDndTargetData } from 'features/dnd/dnd';
import { setGlobalReferenceImageDndTarget } from 'features/dnd/dnd';
import { selectActiveTab } from 'features/ui/store/uiSelectors';
import { memo, useCallback, useMemo } from 'react';
import type { ApiModelConfig, FLUXReduxModelConfig, ImageDTO, IPAdapterModelConfig } from 'services/api/types';
import { RefImageImage } from './RefImageImage';
const buildSelectConfig = (id: string) =>
createSelector(
selectRefImagesSlice,
(refImages) => selectRefImageEntityOrThrow(refImages, id, 'IPAdapterSettings').config
);
const RefImageSettingsContent = memo(() => {
const dispatch = useAppDispatch();
const id = useRefImageIdContext();
const selectConfig = useMemo(() => buildSelectConfig(id), [id]);
const config = useAppSelector(selectConfig);
const tab = useAppSelector(selectActiveTab);
const onChangeBeginEndStepPct = useCallback(
(beginEndStepPct: [number, number]) => {
dispatch(refImageIPAdapterBeginEndStepPctChanged({ id, beginEndStepPct }));
},
[dispatch, id]
);
const onChangeWeight = useCallback(
(weight: number) => {
dispatch(refImageIPAdapterWeightChanged({ id, weight }));
},
[dispatch, id]
);
const onChangeIPMethod = useCallback(
(method: IPMethodV2) => {
dispatch(refImageIPAdapterMethodChanged({ id, method }));
},
[dispatch, id]
);
const onChangeFLUXReduxImageInfluence = useCallback(
(imageInfluence: FLUXReduxImageInfluenceType) => {
dispatch(refImageFLUXReduxImageInfluenceChanged({ id, imageInfluence }));
},
[dispatch, id]
);
const onChangeModel = useCallback(
(modelConfig: IPAdapterModelConfig | FLUXReduxModelConfig | ApiModelConfig) => {
dispatch(refImageModelChanged({ id, modelConfig }));
},
[dispatch, id]
);
const onChangeCLIPVisionModel = useCallback(
(clipVisionModel: CLIPVisionModelV2) => {
dispatch(refImageIPAdapterCLIPVisionModelChanged({ id, clipVisionModel }));
},
[dispatch, id]
);
const onChangeImage = useCallback(
(imageDTO: ImageDTO | null) => {
dispatch(refImageImageChanged({ id, imageDTO }));
},
[dispatch, id]
);
const dndTargetData = useMemo<SetGlobalReferenceImageDndTargetData>(
() => setGlobalReferenceImageDndTarget.getData({ id }, config.image?.image_name),
[id, config.image?.image_name]
);
const isFLUX = useAppSelector(selectIsFLUX);
return (
<Flex flexDir="column" gap={2} position="relative" w="full">
<Flex gap={2} alignItems="center" w="full">
<RefImageModel modelKey={config.model?.key ?? null} onChangeModel={onChangeModel} />
{isIPAdapterConfig(config) && (
<IPAdapterCLIPVisionModel model={config.clipVisionModel} onChange={onChangeCLIPVisionModel} />
)}
{tab === 'canvas' && (
<CanvasManagerProviderGate>
<PullBboxIntoRefImageIconButton />
</CanvasManagerProviderGate>
)}
</Flex>
<Flex gap={2} w="full">
{isIPAdapterConfig(config) && (
<Flex flexDir="column" gap={2} w="full">
{!isFLUX && <IPAdapterMethod method={config.method} onChange={onChangeIPMethod} />}
<Weight weight={config.weight} onChange={onChangeWeight} />
<BeginEndStepPct beginEndStepPct={config.beginEndStepPct} onChange={onChangeBeginEndStepPct} />
</Flex>
)}
{isFLUXReduxConfig(config) && (
<Flex flexDir="column" gap={2} w="full" alignItems="flex-start">
<FLUXReduxImageInfluence
imageInfluence={config.imageInfluence ?? 'lowest'}
onChange={onChangeFLUXReduxImageInfluence}
/>
</Flex>
)}
<Flex alignItems="center" justifyContent="center" h={32} w={32} aspectRatio="1/1" flexGrow={1}>
<RefImageImage
image={config.image}
onChangeImage={onChangeImage}
dndTarget={setGlobalReferenceImageDndTarget}
dndTargetData={dndTargetData}
/>
</Flex>
</Flex>
</Flex>
);
});
RefImageSettingsContent.displayName = 'RefImageSettingsContent';
const buildSelectIPAdapterHasImage = (id: string) =>
createSelector(selectRefImagesSlice, (refImages) => {
const referenceImage = selectRefImageEntity(refImages, id);
return !!referenceImage && referenceImage.config.image !== null;
});
export const RefImageSettings = memo(() => {
const id = useRefImageIdContext();
const tab = useAppSelector(selectActiveTab);
const canvasManager = useCanvasManagerSafe();
const selectIPAdapterHasImage = useMemo(() => buildSelectIPAdapterHasImage(id), [id]);
const hasImage = useAppSelector(selectIPAdapterHasImage);
if (!hasImage && canvasManager && tab === 'canvas') {
return (
<CanvasManagerProviderGate>
<RefImageNoImageStateWithCanvasOptions />
</CanvasManagerProviderGate>
);
}
if (!hasImage) {
return <RefImageNoImageState />;
}
return <RefImageSettingsContent />;
});
RefImageSettings.displayName = 'RefImageSettings';
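
RefImageSettings above uses a pattern that recurs throughout this refactor: a selector is built per entity id with createSelector and held stable with useMemo, so each component instance gets its own memoized selector instead of sharing one global selector whose cache would be thrashed by other instances. Below is a minimal sketch of the pattern against an illustrative slice, assuming Redux Toolkit and react-redux; the state shape and names are not the real refImages slice.

import { createSelector } from '@reduxjs/toolkit';
import { useMemo } from 'react';
import { useSelector } from 'react-redux';

// Illustrative state shape; the real refImages slice stores richer entities.
type Entity = { id: string; isEnabled: boolean };
type RootState = { entities: { byId: Record<string, Entity> } };

const selectEntitiesSlice = (state: RootState) => state.entities;

// Build one memoized selector per id; createSelector caches the derived value
// while the entities slice is unchanged.
const buildSelectIsEnabled = (id: string) =>
  createSelector(selectEntitiesSlice, (entities) => entities.byId[id]?.isEnabled ?? false);

export const useIsEnabled = (id: string) => {
  // useMemo keeps the same selector instance across renders for a given id,
  // which is what makes createSelector's caching effective per component.
  const selectIsEnabled = useMemo(() => buildSelectIsEnabled(id), [id]);
  return useSelector(selectIsEnabled);
};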

View File

@@ -0,0 +1,16 @@
import { createSelector } from '@reduxjs/toolkit';
import { useAppSelector } from 'app/store/storeHooks';
import { selectRefImageEntityOrThrow, selectRefImagesSlice } from 'features/controlLayers/store/refImagesSlice';
import { useMemo } from 'react';
export const useRefImageEntity = (id: string) => {
const selectEntity = useMemo(
() =>
createSelector(selectRefImagesSlice, (refImages) =>
selectRefImageEntityOrThrow(refImages, id, `useRefImageState(${id})`)
),
[id]
);
const entity = useAppSelector(selectEntity);
return entity;
};

View File

@@ -3,9 +3,9 @@ import { useAppSelector } from 'app/store/storeHooks';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import {
buildSelectValidRegionalGuidanceActions,
useAddRegionalGuidanceIPAdapter,
useAddRegionalGuidanceNegativePrompt,
useAddRegionalGuidancePositivePrompt,
useAddNegativePromptToExistingRegionalGuidance,
useAddPositivePromptToExistingRegionalGuidance,
useAddRefImageToExistingRegionalGuidance,
} from 'features/controlLayers/hooks/addLayerHooks';
import { useMemo } from 'react';
import { useTranslation } from 'react-i18next';
@@ -14,9 +14,9 @@ import { PiPlusBold } from 'react-icons/pi';
export const RegionalGuidanceAddPromptsIPAdapterButtons = () => {
const entityIdentifier = useEntityIdentifierContext('regional_guidance');
const { t } = useTranslation();
const addRegionalGuidanceIPAdapter = useAddRegionalGuidanceIPAdapter(entityIdentifier);
const addRegionalGuidancePositivePrompt = useAddRegionalGuidancePositivePrompt(entityIdentifier);
const addRegionalGuidanceNegativePrompt = useAddRegionalGuidanceNegativePrompt(entityIdentifier);
const addRegionalGuidanceIPAdapter = useAddRefImageToExistingRegionalGuidance(entityIdentifier);
const addRegionalGuidancePositivePrompt = useAddPositivePromptToExistingRegionalGuidance(entityIdentifier);
const addRegionalGuidanceNegativePrompt = useAddNegativePromptToExistingRegionalGuidance(entityIdentifier);
const selectValidActions = useMemo(
() => buildSelectValidRegionalGuidanceActions(entityIdentifier),

View File

@@ -2,25 +2,25 @@ import { Flex, IconButton, Spacer, Text } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { BeginEndStepPct } from 'features/controlLayers/components/common/BeginEndStepPct';
import { FLUXReduxImageInfluence } from 'features/controlLayers/components/common/FLUXReduxImageInfluence';
import { IPAdapterCLIPVisionModel } from 'features/controlLayers/components/common/IPAdapterCLIPVisionModel';
import { Weight } from 'features/controlLayers/components/common/Weight';
import { CLIPVisionModel } from 'features/controlLayers/components/IPAdapter/CLIPVisionModel';
import { FLUXReduxImageInfluence } from 'features/controlLayers/components/IPAdapter/FLUXReduxImageInfluence';
import { IPAdapterImagePreview } from 'features/controlLayers/components/IPAdapter/IPAdapterImagePreview';
import { IPAdapterMethod } from 'features/controlLayers/components/IPAdapter/IPAdapterMethod';
import { RegionalReferenceImageModel } from 'features/controlLayers/components/IPAdapter/RegionalReferenceImageModel';
import { IPAdapterMethod } from 'features/controlLayers/components/RefImage/IPAdapterMethod';
import { RefImageImage } from 'features/controlLayers/components/RefImage/RefImageImage';
import { RegionalGuidanceIPAdapterSettingsEmptyState } from 'features/controlLayers/components/RegionalGuidance/RegionalGuidanceIPAdapterSettingsEmptyState';
import { RegionalReferenceImageModel } from 'features/controlLayers/components/RegionalGuidance/RegionalReferenceImageModel';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import { usePullBboxIntoRegionalGuidanceReferenceImage } from 'features/controlLayers/hooks/saveCanvasHooks';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
import {
rgIPAdapterBeginEndStepPctChanged,
rgIPAdapterCLIPVisionModelChanged,
rgIPAdapterDeleted,
rgIPAdapterFLUXReduxImageInfluenceChanged,
rgIPAdapterImageChanged,
rgIPAdapterMethodChanged,
rgIPAdapterModelChanged,
rgIPAdapterWeightChanged,
rgRefImageDeleted,
rgRefImageFLUXReduxImageInfluenceChanged,
rgRefImageImageChanged,
rgRefImageIPAdapterBeginEndStepPctChanged,
rgRefImageIPAdapterCLIPVisionModelChanged,
rgRefImageIPAdapterMethodChanged,
rgRefImageIPAdapterWeightChanged,
rgRefImageModelChanged,
} from 'features/controlLayers/store/canvasSlice';
import { selectCanvasSlice, selectRegionalGuidanceReferenceImage } from 'features/controlLayers/store/selectors';
import type {
@@ -46,64 +46,64 @@ const RegionalGuidanceIPAdapterSettingsContent = memo(({ referenceImageId }: Pro
const { t } = useTranslation();
const dispatch = useAppDispatch();
const onDeleteIPAdapter = useCallback(() => {
dispatch(rgIPAdapterDeleted({ entityIdentifier, referenceImageId }));
dispatch(rgRefImageDeleted({ entityIdentifier, referenceImageId }));
}, [dispatch, entityIdentifier, referenceImageId]);
const selectIPAdapter = useMemo(
const selectConfig = useMemo(
() =>
createSelector(selectCanvasSlice, (canvas) => {
const referenceImage = selectRegionalGuidanceReferenceImage(canvas, entityIdentifier, referenceImageId);
assert(referenceImage, `Regional Guidance IP Adapter with id ${referenceImageId} not found`);
return referenceImage.ipAdapter;
return referenceImage.config;
}),
[entityIdentifier, referenceImageId]
);
const ipAdapter = useAppSelector(selectIPAdapter);
const config = useAppSelector(selectConfig);
const onChangeBeginEndStepPct = useCallback(
(beginEndStepPct: [number, number]) => {
dispatch(rgIPAdapterBeginEndStepPctChanged({ entityIdentifier, referenceImageId, beginEndStepPct }));
dispatch(rgRefImageIPAdapterBeginEndStepPctChanged({ entityIdentifier, referenceImageId, beginEndStepPct }));
},
[dispatch, entityIdentifier, referenceImageId]
);
const onChangeWeight = useCallback(
(weight: number) => {
dispatch(rgIPAdapterWeightChanged({ entityIdentifier, referenceImageId, weight }));
dispatch(rgRefImageIPAdapterWeightChanged({ entityIdentifier, referenceImageId, weight }));
},
[dispatch, entityIdentifier, referenceImageId]
);
const onChangeIPMethod = useCallback(
(method: IPMethodV2) => {
dispatch(rgIPAdapterMethodChanged({ entityIdentifier, referenceImageId, method }));
dispatch(rgRefImageIPAdapterMethodChanged({ entityIdentifier, referenceImageId, method }));
},
[dispatch, entityIdentifier, referenceImageId]
);
const onChangeFLUXReduxImageInfluence = useCallback(
(imageInfluence: FLUXReduxImageInfluenceType) => {
dispatch(rgIPAdapterFLUXReduxImageInfluenceChanged({ entityIdentifier, referenceImageId, imageInfluence }));
dispatch(rgRefImageFLUXReduxImageInfluenceChanged({ entityIdentifier, referenceImageId, imageInfluence }));
},
[dispatch, entityIdentifier, referenceImageId]
);
const onChangeModel = useCallback(
(modelConfig: IPAdapterModelConfig | FLUXReduxModelConfig) => {
dispatch(rgIPAdapterModelChanged({ entityIdentifier, referenceImageId, modelConfig }));
dispatch(rgRefImageModelChanged({ entityIdentifier, referenceImageId, modelConfig }));
},
[dispatch, entityIdentifier, referenceImageId]
);
const onChangeCLIPVisionModel = useCallback(
(clipVisionModel: CLIPVisionModelV2) => {
dispatch(rgIPAdapterCLIPVisionModelChanged({ entityIdentifier, referenceImageId, clipVisionModel }));
dispatch(rgRefImageIPAdapterCLIPVisionModelChanged({ entityIdentifier, referenceImageId, clipVisionModel }));
},
[dispatch, entityIdentifier, referenceImageId]
);
const onChangeImage = useCallback(
(imageDTO: ImageDTO | null) => {
dispatch(rgIPAdapterImageChanged({ entityIdentifier, referenceImageId, imageDTO }));
dispatch(rgRefImageImageChanged({ entityIdentifier, referenceImageId, imageDTO }));
},
[dispatch, entityIdentifier, referenceImageId]
);
@@ -112,9 +112,9 @@ const RegionalGuidanceIPAdapterSettingsContent = memo(({ referenceImageId }: Pro
() =>
setRegionalGuidanceReferenceImageDndTarget.getData(
{ entityIdentifier, referenceImageId },
ipAdapter.image?.image_name
config.image?.image_name
),
[entityIdentifier, ipAdapter.image?.image_name, referenceImageId]
[entityIdentifier, config.image?.image_name, referenceImageId]
);
const pullBboxIntoIPAdapter = usePullBboxIntoRegionalGuidanceReferenceImage(entityIdentifier, referenceImageId);
@@ -140,9 +140,9 @@ const RegionalGuidanceIPAdapterSettingsContent = memo(({ referenceImageId }: Pro
</Flex>
<Flex flexDir="column" gap={2} position="relative" w="full">
<Flex gap={2} alignItems="center" w="full">
<RegionalReferenceImageModel modelKey={ipAdapter.model?.key ?? null} onChangeModel={onChangeModel} />
{ipAdapter.type === 'ip_adapter' && (
<CLIPVisionModel model={ipAdapter.clipVisionModel} onChange={onChangeCLIPVisionModel} />
<RegionalReferenceImageModel modelKey={config.model?.key ?? null} onChangeModel={onChangeModel} />
{config.type === 'ip_adapter' && (
<IPAdapterCLIPVisionModel model={config.clipVisionModel} onChange={onChangeCLIPVisionModel} />
)}
<IconButton
onClick={pullBboxIntoIPAdapter}
@@ -154,24 +154,24 @@ const RegionalGuidanceIPAdapterSettingsContent = memo(({ referenceImageId }: Pro
/>
</Flex>
<Flex gap={2} w="full">
{ipAdapter.type === 'ip_adapter' && (
{config.type === 'ip_adapter' && (
<Flex flexDir="column" gap={2} w="full">
<IPAdapterMethod method={ipAdapter.method} onChange={onChangeIPMethod} />
<Weight weight={ipAdapter.weight} onChange={onChangeWeight} />
<BeginEndStepPct beginEndStepPct={ipAdapter.beginEndStepPct} onChange={onChangeBeginEndStepPct} />
<IPAdapterMethod method={config.method} onChange={onChangeIPMethod} />
<Weight weight={config.weight} onChange={onChangeWeight} />
<BeginEndStepPct beginEndStepPct={config.beginEndStepPct} onChange={onChangeBeginEndStepPct} />
</Flex>
)}
{ipAdapter.type === 'flux_redux' && (
{config.type === 'flux_redux' && (
<Flex flexDir="column" gap={2} w="full">
<FLUXReduxImageInfluence
imageInfluence={ipAdapter.imageInfluence ?? 'lowest'}
imageInfluence={config.imageInfluence ?? 'lowest'}
onChange={onChangeFLUXReduxImageInfluence}
/>
</Flex>
)}
<Flex alignItems="center" justifyContent="center" h={32} w={32} aspectRatio="1/1" flexGrow={1}>
<IPAdapterImagePreview
image={ipAdapter.image}
<RefImageImage
image={config.image}
onChangeImage={onChangeImage}
dndTarget={setRegionalGuidanceReferenceImageDndTarget}
dndTargetData={dndTargetData}
@@ -191,17 +191,16 @@ const buildSelectIPAdapterHasImage = (
) =>
createSelector(selectCanvasSlice, (canvas) => {
const referenceImage = selectRegionalGuidanceReferenceImage(canvas, entityIdentifier, referenceImageId);
return !!referenceImage && referenceImage.ipAdapter.image !== null;
return !!referenceImage && referenceImage.config.image !== null;
});
export const RegionalGuidanceIPAdapterSettings = memo(({ referenceImageId }: Props) => {
const entityIdentifier = useEntityIdentifierContext('regional_guidance');
const selectIPAdapterHasImage = useMemo(
const selectHasImage = useMemo(
() => buildSelectIPAdapterHasImage(entityIdentifier, referenceImageId),
[entityIdentifier, referenceImageId]
);
const hasImage = useAppSelector(selectIPAdapterHasImage);
const hasImage = useAppSelector(selectHasImage);
if (!hasImage) {
return <RegionalGuidanceIPAdapterSettingsEmptyState referenceImageId={referenceImageId} />;

View File

@@ -4,7 +4,7 @@ import { useImageUploadButton } from 'common/hooks/useImageUploadButton';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import { usePullBboxIntoRegionalGuidanceReferenceImage } from 'features/controlLayers/hooks/saveCanvasHooks';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
import { rgIPAdapterDeleted } from 'features/controlLayers/store/canvasSlice';
import { rgRefImageDeleted } from 'features/controlLayers/store/canvasSlice';
import type { SetRegionalGuidanceReferenceImageDndTargetData } from 'features/dnd/dnd';
import { setRegionalGuidanceReferenceImageDndTarget } from 'features/dnd/dnd';
import { DndDropTarget } from 'features/dnd/DndDropTarget';
@@ -35,7 +35,7 @@ export const RegionalGuidanceIPAdapterSettingsEmptyState = memo(({ referenceImag
dispatch(activeTabCanvasRightPanelChanged('gallery'));
}, [dispatch]);
const onDeleteIPAdapter = useCallback(() => {
dispatch(rgIPAdapterDeleted({ entityIdentifier, referenceImageId }));
dispatch(rgRefImageDeleted({ entityIdentifier, referenceImageId }));
}, [dispatch, entityIdentifier, referenceImageId]);
const pullBboxIntoIPAdapter = usePullBboxIntoRegionalGuidanceReferenceImage(entityIdentifier, referenceImageId);
@@ -83,7 +83,7 @@ export const RegionalGuidanceIPAdapterSettingsEmptyState = memo(({ referenceImag
</Flex>
<Flex alignItems="center" gap={2} p={4}>
<Text textAlign="center" color="base.300">
<Trans i18nKey="controlLayers.referenceImageEmptyState" components={components} />
<Trans i18nKey="controlLayers.referenceImageEmptyStateWithCanvasTab" components={components} />
</Text>
</Flex>
<input {...uploadApi.getUploadInputProps()} />

View File

@@ -3,9 +3,9 @@ import { useAppSelector } from 'app/store/storeHooks';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
import {
buildSelectValidRegionalGuidanceActions,
useAddRegionalGuidanceIPAdapter,
useAddRegionalGuidanceNegativePrompt,
useAddRegionalGuidancePositivePrompt,
useAddNegativePromptToExistingRegionalGuidance,
useAddPositivePromptToExistingRegionalGuidance,
useAddRefImageToExistingRegionalGuidance,
} from 'features/controlLayers/hooks/addLayerHooks';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
import { memo, useMemo } from 'react';
@@ -15,9 +15,9 @@ export const RegionalGuidanceMenuItemsAddPromptsAndIPAdapter = memo(() => {
const entityIdentifier = useEntityIdentifierContext('regional_guidance');
const { t } = useTranslation();
const isBusy = useCanvasIsBusy();
const addRegionalGuidanceIPAdapter = useAddRegionalGuidanceIPAdapter(entityIdentifier);
const addRegionalGuidancePositivePrompt = useAddRegionalGuidancePositivePrompt(entityIdentifier);
const addRegionalGuidanceNegativePrompt = useAddRegionalGuidanceNegativePrompt(entityIdentifier);
const addRegionalGuidanceIPAdapter = useAddRefImageToExistingRegionalGuidance(entityIdentifier);
const addRegionalGuidancePositivePrompt = useAddPositivePromptToExistingRegionalGuidance(entityIdentifier);
const addRegionalGuidanceNegativePrompt = useAddNegativePromptToExistingRegionalGuidance(entityIdentifier);
const selectValidActions = useMemo(
() => buildSelectValidRegionalGuidanceActions(entityIdentifier),
[entityIdentifier]

Some files were not shown because too many files have changed in this diff.