Compare commits

...

363 Commits

Author SHA1 Message Date
psychedelicious
b12d802f40 chore: bump version to v5.4.1rc1 2024-11-07 10:51:05 +11:00
psychedelicious
a01d44f813 chore(ui): lint 2024-11-06 10:25:46 -05:00
psychedelicious
63fb3a15e9 feat(ui): default to no control model selected for control layers 2024-11-06 10:25:46 -05:00
psychedelicious
4d0837541b feat(ui): add simple mode filtering 2024-11-06 10:25:46 -05:00
psychedelicious
999809b4c7 fix(ui): minor viewer close button styling 2024-11-06 10:25:46 -05:00
psychedelicious
c452edfb9f feat(ui): add control layer empty state 2024-11-06 10:25:46 -05:00
psychedelicious
ad2cdbd8a2 feat(ui): tooltip for canvas preview image 2024-11-06 10:25:46 -05:00
psychedelicious
f15c24bfa7 feat(ui): add " (recommended)" to balanced control mode label 2024-11-06 10:25:46 -05:00
psychedelicious
d1f653f28c feat(ui): make default control end step 0.75 2024-11-06 10:25:46 -05:00
psychedelicious
244465d3a6 feat(ui): make default control weight 0.75 2024-11-06 10:25:46 -05:00
psychedelicious
c6236ab70c feat(ui): add menubar-ish header on comparison 2024-11-06 10:25:46 -05:00
psychedelicious
644d5cb411 feat(ui): add menubar-ish header on viewer 2024-11-06 10:25:46 -05:00
Riku
bb0a630416 fix(ui): adjust knip config to ignore parameter schema exports 2024-11-06 22:51:17 +11:00
Riku
2148ae9287 feat(ui): simplify parameter schema declaration and type inference 2024-11-06 22:51:17 +11:00
psychedelicious
42d242609c chore(gh): update pr template w/ reminder for what's new copy 2024-11-06 19:03:31 +11:00
psychedelicious
fd0a52392b feat(ui): added line about when denoising str is disabled 2024-11-06 19:01:33 +11:00
psychedelicious
e64415d59a feat(ui): revised logic to disable denoising str 2024-11-06 19:01:33 +11:00
psychedelicious
1871e0bdbf feat(ui): tweaked denoise str styling 2024-11-06 19:01:33 +11:00
Mary Hipp
3ae9a965c2 lint 2024-11-06 19:01:33 +11:00
Mary Hipp
85932e35a7 update copy again 2024-11-06 19:01:33 +11:00
Mary Hipp
41b07a56cc update popover copy and add image 2024-11-06 19:01:33 +11:00
Mary Hipp
54064c0cb8 fix(ui): match badge height to slider height so layout does not shift 2024-11-06 19:01:33 +11:00
Mary Hipp
68284b37fa remove opacity logic from WavyLine, add badge explaining disabled state, add translations 2024-11-06 19:01:33 +11:00
Mary Hipp
ae5bc6f5d6 feat(ui): move denoising strength to layers panel w/ visualization of how much change will be applied, only enable if 1+ enabled raster layer 2024-11-06 19:01:33 +11:00
Mary Hipp
6dc16c9f54 wip 2024-11-06 19:01:33 +11:00
Brandon Rising
faa9ac4e15 fix: get_clip_variant_type should never return None 2024-11-06 09:59:50 +11:00
Mary Hipp Rogers
d0460849b0 fix bad merge conflict (#7273)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-11-05 16:02:03 -05:00
Mary Hipp Rogers
bed3c2dd77 update Whats New for 5.3.1 (#7272)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-11-05 15:43:16 -05:00
Mary Hipp
916ddd17d7 fix(ui): fix link for infill method popover 2024-11-05 15:39:03 -05:00
Mary Hipp
accfa7407f fix undefined 2024-11-05 15:30:17 -05:00
Mary Hipp
908db31e48 feat(api,ui): allow Whats New module to get content from back-end 2024-11-05 15:30:17 -05:00
Mary Hipp
b70f632b26 fix(ui): add some feedback while layers are merging 2024-11-05 12:38:50 -05:00
Brandon Rising
d07a6385ab Always default to ClipVariantType.L instead of None 2024-11-05 12:03:40 -05:00
Brandon Rising
68df612fa1 fix: Never throw an exception when finding the clip variant type 2024-11-05 12:03:40 -05:00
psychedelicious
3b96c79461 chore: bump version to v5.4.0 2024-11-05 10:09:21 +11:00
psychedelicious
89bda5b983 Ryan/sd3 diffusers (#7222)
## Summary

Nodes to support SD3.5 txt2img generations
* adds SD3.5 to starter models
* adds default workflow for SD3.5 txt2img

## Checklist

- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-11-05 08:21:28 +11:00
Brandon Rising
22bff1fb22 Fix conditional within filter_by_variant to not read all candidates as default 2024-11-04 12:42:09 -05:00
Mary Hipp
55ba6488d1 fix up types file 2024-11-04 12:42:09 -05:00
brandonrising
2d78859171 Create bespoke latents to image node for sd3 2024-11-04 12:42:09 -05:00
Mary Hipp
3a661bac34 fix(ui): exclude submodels from model manager 2024-11-04 12:42:09 -05:00
Mary Hipp
bb8a02de18 update schema 2024-11-04 12:42:09 -05:00
maryhipp
78155344f6 update node fields for SD3 to match other SD nodes 2024-11-04 12:42:09 -05:00
Brandon Rising
391a24b0f6 Re-add erroneously removed hash code 2024-11-04 12:42:09 -05:00
Brandon Rising
e75903389f Run ruff, fix bug in hf downloading code which failed to download parts of a model 2024-11-04 12:42:09 -05:00
Brandon Rising
27567052f2 Create new latent factors for sd35 2024-11-04 12:42:09 -05:00
Brandon Rising
6f447f7169 Rather than .fp16., some repos start the suffix with .fp16... for weights spread across multiple files 2024-11-04 12:42:09 -05:00
Mary Hipp
8b370cc182 (ui): don't show SD3 in main model dropdown yet 2024-11-04 12:42:09 -05:00
maryhipp
af583d2971 ruff format 2024-11-04 12:42:09 -05:00
Mary Hipp
0ebe8fb1bd (ui): add required/optional logic to other submodel fields 2024-11-04 12:42:09 -05:00
maryhipp
befb629f46 add default workflow 2024-11-04 12:42:09 -05:00
maryhipp
874d67cb37 add SD3.5 to starter models 2024-11-04 12:42:09 -05:00
Mary Hipp
19f7a1295a (ui): add fields for CLIP-L and CLIP-G, remove MainModelConfig type changes 2024-11-04 12:42:09 -05:00
maryhipp
78bd605617 (nodes,api): expose the submodels on SD3 model loader as optional, add types needed for CLIP-L and CLIP-G fields 2024-11-04 12:42:09 -05:00
Brandon Rising
b87f4e59a5 Create clip variant type, create new functions for discerning clipL and clipG in the frontend 2024-11-04 12:42:09 -05:00
Ryan Dick
1eca4f12c8 Make T5 encoder optional in SD3 workflows. 2024-11-04 12:42:09 -05:00
Ryan Dick
f1de11d6bf Set the default CFG for SD3 to 3.5. 2024-11-04 12:42:09 -05:00
Ryan Dick
9361ed9d70 Add progress images to SD3 and make denoising cancellable. 2024-11-04 12:42:09 -05:00
Brandon Rising
ebabf4f7a8 Setup Model and T5 Encoder selection fields for sd3 nodes 2024-11-04 12:42:09 -05:00
Brandon Rising
606f3321f5 Initial wave of frontend updates for sd-3 node inputs 2024-11-04 12:42:09 -05:00
Brandon Rising
3970aa30fb define submodels on sd3 models during probe 2024-11-04 12:42:09 -05:00
Ryan Dick
678436e07c Add tqdm progress bar for SD3. 2024-11-04 12:42:09 -05:00
Ryan Dick
c620581699 Bug fixes to get SD3 text-to-image workflow running. 2024-11-04 12:42:09 -05:00
Ryan Dick
c331d42ce4 Temporary hack for testing SD3 model loader. 2024-11-04 12:42:09 -05:00
Ryan Dick
1ac9b502f1 Fix Sd3TextEncoderInvocation output type. 2024-11-04 12:42:09 -05:00
Ryan Dick
3fa478a12f Initial draft of SD3DenoiseInvocation. 2024-11-04 12:42:09 -05:00
Ryan Dick
2d86298b7f Add first draft of Sd3TextEncoderInvocation. 2024-11-04 12:42:09 -05:00
Ryan Dick
009cdb714c Add Sd3ModelLoaderInvocation. 2024-11-04 12:42:09 -05:00
Ryan Dick
9d3f5427b4 Move FluxModelLoaderInvocation to its own file. model.py was getting bloated. 2024-11-04 12:42:09 -05:00
Ryan Dick
e4b17f019a Get diffusers SD3 model probing working. 2024-11-04 12:42:09 -05:00
Ryan Dick
586c00bc02 (minor) Remove unused dict. 2024-11-04 12:42:09 -05:00
Eugene Brodsky
0f11fda65a fix(deps): pin mediapipe strictly to a known working version 2024-11-04 10:16:19 -05:00
psychedelicious
3e75331ef7 fix(ui): load workflow from file
In a8de6406c5 a change was made to many menus in an effort to improve performance. The menus were made to be lazy, so that they are mounted only while open.

This causes unexpected behaviour when there is some logic in the menu that may need to execute after the user selects a menu item.

In this case, when you click to load a workflow from file, the file picker opens but then the menuitem unmounts, taking the input element and all uploading logic with it. When you select a file, nothing happens because we've nuked the handlers by unmounting everything.

Easy fix - un-lazy-fy the menu.

Closes #7240
2024-11-04 08:02:55 -05:00
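
A minimal sketch of the failure mode and fix described above, with hypothetical component names (not the actual InvokeAI menu code):

```tsx
// Sketch only: a file-picker menu item must stay mounted while the OS file
// dialog is open, otherwise the hidden <input> and its onChange handler are
// destroyed before the user picks a file. All names here are illustrative.
import { useRef } from 'react';

export const LoadWorkflowMenuItem = () => {
  const inputRef = useRef<HTMLInputElement>(null);
  return (
    <>
      <button onClick={() => inputRef.current?.click()}>Load from File</button>
      {/* If the menu is lazy, this input unmounts when the menu closes and
          selecting a file does nothing -- hence "un-lazy-fy the menu". */}
      <input
        ref={inputRef}
        type="file"
        accept=".json"
        style={{ display: 'none' }}
        onChange={(e) => {
          const file = e.target.files?.[0];
          if (file) {
            void file.text().then((text) => {
              // ...parse and load the workflow...
              console.log('loaded workflow bytes:', text.length);
            });
          }
        }}
      />
    </>
  );
};
```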
psychedelicious
be133408ac fix(nodes): relaxed validation for segment anything
The validation on this node causes graph validation to fail. The node must be validated _after_ instantiation.

Also, it was a bit too strict. The only case we explicitly do not handle is when both bboxes and points are provided. It's acceptable if neither are provided.

Closes #7248
2024-11-04 08:00:52 -05:00
psychedelicious
7e1e0d6928 fix(ui): non-default filters can erase layer
When filtering, we use a listener to trigger processing the image whenever a filter setting changes. For example, if the user changes from canny to depth, and auto-process is enabled, we re-process the layer with new filter settings.

The filterer has a method to reset its ephemeral state. This includes the filter settings, so resetting the ephemeral state is expected to trigger processing of the filter.

When we exit filtering, we reset the ephemeral state before resetting everything else, like the listeners.

This can cause a problem when we exit filtering. The sequence:
- Start filtering a layer.
- Auto-process the filter in response to starting the filter process.
- Change the filter settings.
- Auto-process the filter in response to the changed settings.
- Apply the filter.
- Exit filtering, first by resetting the ephemeral state.
- Auto-process the filter in response to the reset settings.*
- Finish exiting, including unsubscribing from listeners.

*Whoops! That last auto-process has now borked the layer's rendering by processing a filter when we shouldn't be processing a filter.

We need to first unsubscribe from listeners, so we don't react to that change to the filter settings and erroneously process the layer.

Also, add a check to the `processImmediate` method to prevent processing if that method is accidentally called without first starting the filterer.

The same issue could affect the segment anything module - the same fixes are implemented there.
2024-11-04 07:11:20 -05:00
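
A minimal TypeScript sketch of the teardown ordering described above (class and method names are hypothetical, not the actual canvas modules):

```ts
// Sketch only: unsubscribe from listeners BEFORE resetting ephemeral state,
// so the reset cannot trigger one final, erroneous auto-process.
class FilterModule {
  private unsubscribers: Array<() => void> = [];
  private isActive = false;

  start(subscribeToSettings: (cb: () => void) => () => void) {
    this.isActive = true;
    // Auto-process whenever filter settings change.
    this.unsubscribers.push(subscribeToSettings(() => this.processImmediate()));
  }

  processImmediate() {
    // Guard: bail if the filterer was never started (or already exited).
    if (!this.isActive) {
      return;
    }
    // ...process the layer with the current filter settings...
  }

  exit() {
    // 1. Unsubscribe first -- settings changes below must not be observed.
    for (const unsubscribe of this.unsubscribers) {
      unsubscribe();
    }
    this.unsubscribers = [];
    this.isActive = false;
    // 2. Only now reset the ephemeral state (filter settings etc.).
    this.resetEphemeralState();
  }

  private resetEphemeralState() {
    // ...restore default filter settings...
  }
}
```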
psychedelicious
cd3d8df5a8 fix(ui): save canvas to gallery does nothing
The root issue is the compositing cache. When we save the canvas to gallery, we need to first composite raster layers together and then upload the image.

The compositor makes extensive use of caching to reduce the number of images created and improve performance. There are two "layers" of caching:
1. Caching the composite canvas element, which is used both for uploading the canvas and for generation mode analysis.
2. Caching the uploaded composite canvas element as an image.

The combination of these caches allows for the various processes that require composite canvases to do minimal work.

But this causes a problem in this situation, because the user expects a new image to be uploaded when they click save to gallery.

For example, suppose we have already composited and uploaded the raster layer state for use in a generation. Then, we ask the compositor to save the canvas to gallery.

The compositor sees that we are requesting an image for the current canvas state, and instead of recompositing and uploading the image again, it just returns the cached image.

In this case, no image is uploaded and the button does nothing.

We need to be able to opt out of the caching at some level, for certain actions. A `forceUpload` arg is added to the compositor's high-level `getCompositeImageDTO` method to do this.

When true, we ignore the uppermost caching layer (the uploaded image layer), but still use the lower caching layer (the canvas element layer). So we don't recompute the canvas element, but we do upload it as a new image to the server.
2024-11-04 07:11:20 -05:00
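
A sketch of the two cache layers and the `forceUpload` escape hatch described above (helper signatures are assumptions, not the real API):

```ts
// Sketch only. Layer 1 caches the composited canvas element; layer 2 caches
// the uploaded image name. forceUpload bypasses layer 2 but still reuses layer 1.
type Entity = { id: string; state: unknown };
declare function hashEntities(entities: Entity[]): string;
declare function compositeToCanvas(entities: Entity[]): HTMLCanvasElement;
declare function uploadImage(canvas: HTMLCanvasElement): Promise<string>;

class Compositor {
  private canvasCache = new Map<string, HTMLCanvasElement>();
  private uploadCache = new Map<string, string>();

  async getCompositeImageDTO(entities: Entity[], forceUpload = false): Promise<string> {
    const key = hashEntities(entities);

    // Lower layer: reuse (or build) the composite canvas element.
    let canvas = this.canvasCache.get(key);
    if (!canvas) {
      canvas = compositeToCanvas(entities);
      this.canvasCache.set(key, canvas);
    }

    // Upper layer: reuse the uploaded image, unless the caller
    // (e.g. "Save to Gallery") needs a fresh image on the server.
    const cached = this.uploadCache.get(key);
    if (cached !== undefined && !forceUpload) {
      return cached;
    }
    const imageName = await uploadImage(canvas);
    this.uploadCache.set(key, imageName);
    return imageName;
  }
}
```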
psychedelicious
24d3c22017 fix(ui): temp fix for stuck tooltips 2024-11-04 07:11:20 -05:00
psychedelicious
b0d37f4e51 fix(ui): progress image does not reset when canceling generation
Previously, we cleared the canvas progress image when the canvas had no active generations. This allowed for a brief flash of canvas state between the last progress image for a given generation, and when the output image for that generation rendered. Here's the sequence:
- Progress images are received and rendered
- Generation completes - no active canvas generations
- Clear the progress image -> canvas layers visible unexpectedly, creating an awkward jarring change
- Generation output image is rendered -> output image overlaid on canvas layers

In 83538c4b2b I attempted to fix this by only clearing the progress image while we were not staging.

This isn't quite right, though. We are often staging with no active generations - for example, you have a few images completed and are waiting to choose one.

In this situation, if you cancel a pending generation, the logic to clear the progress image doesn't fire because it sees staging is in progress.

What we really need is:
- Staging area module clears the progress image once it has rendered an output image.
- Progress image module clears the progress image when a generation is canceled or failed, in which case there will be no output image.

To do this, we can add an event listener to the progress image module to listen for queue item status changes, and when we get a cancelation or failure, clear the progress image.
2024-11-04 07:11:20 -05:00
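
A sketch of the listener described above (event name and payload shape are assumptions):

```ts
// Sketch only: the progress image module clears its image on cancel/failure,
// since no output image will ever arrive to replace it.
type QueueItemStatus = 'pending' | 'in_progress' | 'completed' | 'canceled' | 'failed';
type QueueEvents = { on: (event: string, cb: (payload: { status: QueueItemStatus }) => void) => void };

class ProgressImageModule {
  constructor(events: QueueEvents) {
    events.on('queue_item_status_changed', ({ status }) => {
      if (status === 'canceled' || status === 'failed') {
        this.clearProgressImage();
      }
      // On 'completed', the staging-area module clears the progress image
      // only after the output image has rendered, so the canvas never flashes.
    });
  }

  clearProgressImage() {
    // ...remove the progress image node from the canvas...
  }
}
```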
psychedelicious
3559124674 feat(ui): use nanostores in CanvasProgressImageModule for internal state 2024-11-04 07:11:20 -05:00
Eugene Brodsky
6c33e02141 fix(pkg): pin torch to <2.5.0 to prevent unnecessary downloads
pip's dependency resolution doesn't take into account transitive
dependencies when choosing package versions for download.
Even though `torch~=2.4.1` is required by `diffusers`, pip will
download 2.5.0 or higher, but only install 2.4.1.
Pinning torch to <2.5.0 prevents this behaviour.
2024-11-01 12:27:28 -04:00
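
An illustrative pin of this shape (see the repo's actual pyproject.toml for the real specifier):

```
torch>=2.4.1,<2.5.0  # upper bound stops pip downloading 2.5.x wheels it will never install
```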
psychedelicious
8cf94d602f chore: bump version to v5.3.1 2024-11-01 13:31:51 +11:00
psychedelicious
016a6f182f Make T2I Adapters work with any resolution supported by the models (#7215)
## Summary

This change mimics the unet padding strategy to align T2I featuremaps
with the latents during denoising. It also slightly adjusts the crop and
scale logic so that the control will match the input image without
shifting when it needs to pad.

## QA Instructions

Image generated at 1032x1024

![image](https://github.com/user-attachments/assets/7ea579e4-61dc-4b6b-aa84-33d676d160c6)

Image generated at 1080x1040 to prove feature alignment.

![image](https://github.com/user-attachments/assets/ee6e5b6a-d0d5-474d-9fc4-f65c104964bd)

Edge artifacts on the bottom and right are a result of SDXL's unet
padding, and t2i influence will be cut off in those regions.

## Merge Plan

Contingent on #7205 
Currently the Canvas UI prevents users from generating at resolutions
that are not multiples of 64 while t2i adapter layers are active. Will
leave this as a draft until fixing that.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-11-01 13:22:00 +11:00
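
An illustrative helper for the kind of alignment the PR describes; the exact multiples and padding strategy live in the backend, so treat the numbers as examples only:

```ts
// Sketch only: pad a requested size up to the next multiple the model can
// handle, instead of rejecting the resolution outright.
const padToMultiple = (size: number, multiple: number): number =>
  Math.ceil(size / multiple) * multiple;

padToMultiple(1040, 64); // 1088 -- featuremap is padded; influence is cut off in the padded strip
padToMultiple(1040, 8);  // 1040 -- already x8-aligned, no padding needed
```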
Kent Keirsey
6fbc019142 Merge branch 'main' into t2i_resolution_hack 2024-10-31 22:08:38 -04:00
psychedelicious
26f95d6a97 fix(ui): disable move tool when staging 2024-10-31 22:08:16 -04:00
psychedelicious
40f7b0d171 fix(ui): cursor disappearing on empty layers 2024-10-31 22:08:16 -04:00
psychedelicious
4904700751 feat(ui): more info in state module repr 2024-10-31 22:08:16 -04:00
psychedelicious
83538c4b2b fix(ui): flash of canvas state between last progress image and generation result 2024-10-31 22:08:16 -04:00
psychedelicious
eb7b559529 fix(ui): sync canvas layer visibility when staging state changes 2024-10-31 22:08:16 -04:00
Kent Keirsey
4945465cf0 Merge branch 'main' into t2i_resolution_hack 2024-10-31 21:17:06 -04:00
Will
7eed7282a9 removing periods from update link to prevent page not found error 2024-11-01 07:42:31 +11:00
psychedelicious
47f0781822 fix(ui): add missing translations
Closes #7229
2024-11-01 07:40:52 +11:00
Eugene Brodsky
88b8e3e3d5 chore(deps): adjust pins for torch, numpy, other dependencies, to satisfy stricter dependency resolution 2024-10-31 16:26:53 -04:00
dunkeroni
47c3ab9214 Remove UI restrictions for T2I resolutions 2024-10-31 16:07:46 -04:00
dunkeroni
d6d436b59c Merge branch 'invoke-ai:main' into t2i_resolution_hack 2024-10-31 15:52:24 -04:00
Hippalectryon
6ff7057967 fix broken link in installer 2024-10-31 09:50:08 -04:00
psychedelicious
e032ab1179 fix(ui): ensure compositing rect is rendered correctly
This fixes an issue uncovered by the previous commit in which we do not exit filter/select-object on save-as.
2024-10-31 08:57:10 -04:00
psychedelicious
65bddfcd93 feat(ui): filter/select-object do not exit on save-as 2024-10-31 08:57:10 -04:00
aidawanglion
2d3ce418dd translationBot(ui): update translation (Chinese (Simplified Han script))
Currently translated at 73.7% (1160 of 1573 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
2024-10-31 17:18:35 +11:00
Hosted Weblate
548d72f7b9 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
2024-10-31 17:18:35 +11:00
aidawanglion
19837a0f29 translationBot(ui): update translation (Chinese (Simplified Han script))
Currently translated at 73.3% (1146 of 1563 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
2024-10-31 17:18:35 +11:00
aidawanglion
483b65a1dc translationBot(ui): update translation (Chinese (Simplified Han script))
Currently translated at 69.4% (1086 of 1563 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
2024-10-31 17:18:35 +11:00
Riccardo Giovanetti
b85931c7ab translationBot(ui): update translation (Italian)
Currently translated at 99.4% (1554 of 1563 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
2024-10-31 17:18:35 +11:00
Hosted Weblate
9225f47338 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
2024-10-31 17:18:35 +11:00
Riccardo Giovanetti
bccac5e4a6 translationBot(ui): update translation (Italian)
Currently translated at 99.4% (1553 of 1562 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
2024-10-31 17:18:35 +11:00
Hosted Weblate
7cb07fdc04 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
2024-10-31 17:18:35 +11:00
dakota2472
b137450026 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (1562 of 1562 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
2024-10-31 17:18:35 +11:00
Hosted Weblate
dc5090469a translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
2024-10-31 17:18:35 +11:00
Thomas Bolteau
e0ae2ace89 translationBot(ui): update translation (French)
Currently translated at 100.0% (1561 of 1561 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
2024-10-31 17:18:35 +11:00
Riku
269faae04b translationBot(ui): update translation (German)
Currently translated at 71.4% (1115 of 1561 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
2024-10-31 17:18:35 +11:00
Riccardo Giovanetti
e282acd41c translationBot(ui): update translation (Italian)
Currently translated at 98.8% (1543 of 1561 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
2024-10-31 17:18:35 +11:00
Ettore Atalan
a266668348 translationBot(ui): update translation (German)
Currently translated at 69.3% (1083 of 1561 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
2024-10-31 17:18:35 +11:00
Riccardo Giovanetti
3bb3e142fc translationBot(ui): update translation (Italian)
Currently translated at 98.8% (1543 of 1561 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
2024-10-31 17:18:35 +11:00
Hosted Weblate
6ac6d70a22 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-10-31 17:18:35 +11:00
Riccardo Giovanetti
b0acf33ba5 translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1496 of 1518 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-10-31 17:18:35 +11:00
qyouqme
b3eb64b64c translationBot(ui): update translation (Chinese (Simplified Han script))
Currently translated at 66.0% (1003 of 1518 strings)

Co-authored-by: qyouqme <camtasiacn@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2024-10-31 17:18:35 +11:00
Riku
95f8ab1a29 translationBot(ui): update translation (German)
Currently translated at 71.3% (1083 of 1518 strings)

Co-authored-by: Riku <riku.block@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-10-31 17:18:35 +11:00
Hosted Weblate
4e043384db translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-10-31 17:18:35 +11:00
psychedelicious
0f5df8ba17 chore(ui): lint 2024-10-31 16:54:31 +11:00
psychedelicious
2826ab48a2 refactor(ui): layer interaction locking
Previously we maintained an `isInteractable` flag, which was derived from these layer flags:
- Locked/unlocked
- Enabled/disabled
- Layer's type visible/hidden

When a layer was not interactable, we blocked all layer actions.

After comparing to the behaviour in Affinity and considering user feedback, I've loosened these restrictions while maintaining safety. First, some definitions.

There are two kinds of layer actions - mutating actions and non-mutating actions.
- Mutating actions are drawing on the layer, cropping it, filtering it, converting it, etc. Anything that changes the layer.
- Non-mutating actions are copying the layer, saving the layer to gallery, etc. Anything that _uses_ the layer.

Then, there are two broad canvas states - busy and not busy. "Busy" means the canvas is actively filtering, staging, compositing layers together, etc - something that is "single-threaded" by nature.

And here are the revised restrictions:
- When canvas is busy, you cannot initiate any layer actions.
- When the canvas is not busy and the layer is locked, you cannot initiate any mutating actions.
- When the canvas is not busy and the layer is not locked, you can initiate any layer action.

Besides safely giving users more freedom, it also fixes an issue where the context menu for a layer was disabled if it was not the selected layer.
2024-10-31 16:54:31 +11:00
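
The three rules above condense to a small predicate; a sketch with hypothetical names:

```ts
// Sketch only: mutating actions change the layer (draw, crop, filter, ...);
// non-mutating actions merely use it (copy, save to gallery, ...).
type LayerAction = { mutates: boolean };

const canInitiate = (action: LayerAction, canvasBusy: boolean, layerLocked: boolean): boolean => {
  if (canvasBusy) {
    return false; // a busy canvas blocks all layer actions
  }
  if (layerLocked) {
    return !action.mutates; // locked layers still allow non-mutating actions
  }
  return true; // idle canvas + unlocked layer: anything goes
};
```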
psychedelicious
7ff1b635c8 docs: clarify comments for invoke method return annotation validation 2024-10-31 16:21:07 +11:00
psychedelicious
dfb5e8b5e5 tests: add invoke method & output annotation tests 2024-10-31 16:21:07 +11:00
psychedelicious
7259da799c feat(nodes): attempt to look up invoke return types by name 2024-10-31 16:21:07 +11:00
psychedelicious
965069fce1 tests: fix nodes tests
they now require a valid output
2024-10-31 16:21:07 +11:00
psychedelicious
90232806d9 feat(nodes): add validation for invoke method return types 2024-10-31 16:21:07 +11:00
Hippalectryon
81bc153399 Fix link in dev docs 2024-10-31 16:06:44 +11:00
Jonathan
c63e526f99 Update FAQ.md
Fixed typo
2024-10-31 16:04:23 +11:00
nirmal0001
2b74263007 Update patchmatch.md
add required install dependencies for Arch Linux
2024-10-31 16:01:57 +11:00
psychedelicious
d3a82f7119 feat(ui): do not show hftoken error until user attempts to set it 2024-10-31 15:47:14 +11:00
Mary Hipp
291c5a0341 lint 2024-10-31 15:47:14 +11:00
Mary Hipp
bcb41399ca feat(ui,api): support for HF tokens in UI, handle Unauthorized and Forbidden errors 2024-10-31 15:47:14 +11:00
psychedelicious
6f0f53849b tests: reset config changes in test_deny_nodes when finished testing 2024-10-31 15:22:14 +11:00
psychedelicious
4e7d63761a fix(nodes): nodes denylist handling
- Add method to force a rebuild of the pydantic type adapter for the union of invocations, which is used to validate graphs.
- Update the xfail'd test.
2024-10-31 15:22:14 +11:00
psychedelicious
198c84105d fix(ui): compositor not setting processing flag when cleaning up 2024-10-30 16:27:36 +11:00
psychedelicious
2453b9f443 chore: bump version to v5.3.0rc1 2024-10-30 13:11:41 +11:00
psychedelicious
b091aca986 chore(ui): lint 2024-10-30 11:05:46 +11:00
psychedelicious
8f02ce54a0 perf(ui): cache image data & transparency mode during generation mode calculation
Perf boost and reduces the number of images we create on the backend.
2024-10-30 11:05:46 +11:00
psychedelicious
f4b7c63002 feat(ui): omit non-render-impacting keys when hashing entities
Had missed several of these, which means we were invalidating caches far too often. For example, when you changed a RG prompt, we were invalidating the cached canvas for that entity, even though changing the prompt doesn't affect the canvas at all.
2024-10-30 11:05:46 +11:00
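
A sketch of the hashing change described above; the key list and `stableHash` are stand-ins, not the real implementation:

```ts
// Sketch only: strip keys that don't affect rendering before hashing, so
// e.g. editing a regional guidance prompt no longer invalidates the cache.
declare function stableHash(value: unknown): string;

const NON_RENDER_KEYS: readonly string[] = ['name', 'prompt', 'isSelected']; // illustrative

const getEntityCacheKey = (entity: Record<string, unknown>): string => {
  const renderRelevant = Object.fromEntries(
    Object.entries(entity).filter(([key]) => !NON_RENDER_KEYS.includes(key)),
  );
  return stableHash(renderRelevant);
};
```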
psychedelicious
a4629280b5 feat(ui): use typeguard instead of string comparison 2024-10-30 11:05:46 +11:00
psychedelicious
855fb007da tidy(ui): minor type fix 2024-10-30 11:05:46 +11:00
psychedelicious
d805b52c1f feat(ui): merge down deletes merged entities 2024-10-30 11:05:46 +11:00
psychedelicious
2ea55685bb feat(ui): add save to assets for inpaint & rg 2024-10-30 11:05:46 +11:00
psychedelicious
bd6ff3deaa feat(ui): add merge down for all entity types 2024-10-30 11:05:46 +11:00
psychedelicious
82dd53ec88 tidy(ui): clean up merge visible logic 2024-10-30 11:05:46 +11:00
psychedelicious
71d749541d feat(ui): control layers supports merge visible
The "lighter" GlobalCompositeOperation is used. This seems to be the best one when merging control layers, as it retains edge maps.
2024-10-30 11:05:46 +11:00
psychedelicious
48a57fc4b9 feat(ui): support globalCompositeOperation when compositing canvas 2024-10-30 11:05:46 +11:00
psychedelicious
530e0910fc feat(ui): regional guidance supports merge visible 2024-10-30 11:05:46 +11:00
psychedelicious
2fdf8fc0a2 feat(ui): merge visible creates new layer
Previously, merge visible deleted all other visible layers. This is not how Affinity works; I should have confirmed before making it work like this in the first place.
2024-10-30 11:05:46 +11:00
psychedelicious
91db9c9300 refactor(ui): generalize compositor methods
`CanvasCompositorModule` had a fairly inflexible API, only supporting compositing all raster layers or inpaint masks.

The API has been generalized to work with a list of canvas entities. This enables `Merge Down` and `Merge Selected` functionality (though `Merge Selected` is not part of this set of changes).
2024-10-30 11:05:46 +11:00
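
A sketch of the generalized shape described above (types and helpers are assumptions):

```ts
// Sketch only: one compositing entry point over an ordered entity list makes
// "merge down" and "merge visible" thin wrappers rather than special cases.
type CanvasEntity = { id: string; type: 'raster_layer' | 'inpaint_mask' | 'control_layer' | 'regional_guidance' };
declare function composite(entities: CanvasEntity[]): HTMLCanvasElement;

// Merge Down: composite a layer onto the one beneath it.
const mergeDown = (upper: CanvasEntity, lower: CanvasEntity) => composite([lower, upper]);

// Merge Visible: the same API fed a filtered list.
const mergeVisible = (all: CanvasEntity[], isVisible: (e: CanvasEntity) => boolean) =>
  composite(all.filter(isVisible));
```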
psychedelicious
bc42205593 fix(ui): remember to disable isFiltering when finishing filtering 2024-10-30 09:19:30 +11:00
psychedelicious
2e3cba6416 fix(ui): flash of original layer when applying filter/segment
Let the parent module adopt the filtered/segmented image instead of destroying it and making the parent re-create it, which results in a brief flash of the parent layer's original objects before the new image is rendered.
2024-10-30 09:19:30 +11:00
psychedelicious
7852aacd11 fix(uI): track whether graph succeeded in runGraphAndReturnImageOutput
This prevents extraneous graph cancel requests when cleaning up the abort signal after a successful run of a graph.
2024-10-30 09:19:30 +11:00
psychedelicious
6cccd67ecd feat(ui): update SAM module w/ minor improvements from filter module 2024-10-30 09:19:30 +11:00
psychedelicious
a7a89c9de1 feat(ui): use more resilient logic in canvas filter module, same as in SAM module 2024-10-30 09:19:30 +11:00
psychedelicious
5ca8eed89e tidy(ui): remove all buffer renderer interactions in SAM module
We don't use the buffer renderer in this module; there's no reason to clear it.
2024-10-30 09:19:30 +11:00
psychedelicious
c885c3c9a6 fix(ui): filter layer data pushed to parent rendered when saving as 2024-10-30 09:19:30 +11:00
Mary Hipp
d81c38c350 update announcements 2024-10-29 09:53:13 -04:00
Riku
92d5b73215 fix(ui): seamless zod parameter cleanup 2024-10-29 20:43:44 +11:00
Riku
097e92db6a fix(ui): always write seamless metadata
Ensure images without seamless enabled correctly reset the setting
when all parameters are recalled
2024-10-29 20:43:44 +11:00
Riku
84c6209a45 feat(ui): display seamless values in metadata viewer 2024-10-29 20:43:44 +11:00
Riku
107e48808a fix(ui): recall seamless settings 2024-10-29 20:43:44 +11:00
dunkeroni
47168b5505 chore: make ruff 2024-10-29 14:07:20 +11:00
dunkeroni
58152ec981 fix preview progress bar pre-denoise 2024-10-29 14:07:20 +11:00
dunkeroni
c74afbf332 convert to bgr on sdxl t2i 2024-10-29 14:07:20 +11:00
psychedelicious
7cdda00a54 feat(ui): rearrange canvas paste back nodes to save an image step
We were scaling the unscaled image and mask down before doing the paste-back, but this adds an extraneous step & image output.

We can do the paste-back first, then scale to output size after. So instead of 2 resizes before the paste-back, we have 1 resize after.

The end result is the same.
2024-10-29 11:13:31 +11:00
psychedelicious
a74282bce6 feat(ui): graph builders use objects for arg instead of many args 2024-10-29 11:13:31 +11:00
psychedelicious
107f048c7a feat(ui): extract canvas output node prefix to constant 2024-10-29 11:13:31 +11:00
Ryan Dick
a2486a5f06 Remove unused prediction_type and upcast_attention from from_single_file(...) calls. 2024-10-28 13:05:17 -04:00
Ryan Dick
07ab116efb Remove load_safety_checker=False from calls to from_single_file(...).
This param has been deprecated, and by including it (even when set to
False) the safety checker automatically gets downloaded.
2024-10-28 13:05:17 -04:00
Ryan Dick
1a13af3c7a Fix huggingface_hub.errors imports after version bump. 2024-10-28 13:05:17 -04:00
Ryan Dick
f2966a2594 Fix changed import for FromOriginalControlNetMixin after diffusers bump. 2024-10-28 13:05:17 -04:00
Ryan Dick
58bb97e3c6 Bump diffusers, accelerate, and huggingface-hub. 2024-10-28 13:05:17 -04:00
dunkeroni
34569a2410 Make T2I Adapters compatible with x8 resolutions 2024-10-27 15:38:22 -04:00
psychedelicious
a84aa5c049 fix(ui): canvas alerts blocking metadata panel 2024-10-27 09:46:01 +11:00
dunkeroni
acfa9c87ef Merge branch 'main' into sdxl_t2i_bgr 2024-10-25 23:44:13 -04:00
dunkeroni
f245d8e429 chore: make ruff 2024-10-25 23:43:33 -04:00
dunkeroni
62cf0f54e0 fix preview progress bar pre-denoise 2024-10-25 23:22:06 -04:00
dunkeroni
5f015e76ba convert to bgr on sdxl t2i 2024-10-25 23:04:17 -04:00
psychedelicious
aebcec28e0 chore: bump version to v5.3.0 2024-10-25 22:37:59 -04:00
psychedelicious
db1c5a94f7 feat(ui): image ctx -> New from Image -> Canvas as Raster/Control Layer 2024-10-25 22:27:00 -04:00
psychedelicious
56222a8493 feat(ui): organize layer context menu items 2024-10-25 22:27:00 -04:00
psychedelicious
b7510ce709 feat(ui): filter, select object and transform UI buttons
- Restore dedicated `Apply` buttons
- Remove icons from the buttons, too much noise when the words are short and clear
- Update loading state to show a spinner next to the `Process` button instead of on _every_ button
2024-10-25 22:27:00 -04:00
psychedelicious
5739799e2e fix(ui): close viewer when transforming 2024-10-25 22:27:00 -04:00
psychedelicious
813cf87920 feat(ui): move canvas alerts to top-left corner 2024-10-25 22:27:00 -04:00
psychedelicious
c95b151daf feat(ui): add layer title heading for canvas ctx menu 2024-10-25 22:27:00 -04:00
psychedelicious
a0f823a3cf feat(ui): reset shouldShowStagedImage flag when starting staging 2024-10-25 22:27:00 -04:00
Hippalectryon
64e0f6d688 Improve dev install docs
Fix numbering
2024-10-25 08:27:26 -04:00
psychedelicious
ddd5b1087c fix(nodes): return copies of objects in invocation ctx
Closes #6820
2024-10-25 08:26:09 -04:00
psychedelicious
008be9b846 feat(ui): add all save as options to filter 2024-10-25 08:12:14 -04:00
psychedelicious
8e7cabdc04 feat(ui): add Replace Current option to Select Object -> Save As 2024-10-25 08:12:14 -04:00
psychedelicious
a4c4237f99 feat(ui): use PiPlayFill for process buttons for filter & select object 2024-10-25 08:12:14 -04:00
psychedelicious
bda3740dcd feat(ui): use fill style icons for Filter 2024-10-25 08:12:14 -04:00
psychedelicious
5b4633baa9 feat(ui): use PiShapesFill icon for Select Object 2024-10-25 08:12:14 -04:00
psychedelicious
96351181cb feat(ui): make canvas layer toolbar icons a bit larger 2024-10-25 08:12:14 -04:00
psychedelicious
957d591d99 feat(ui): "Auto-Mask" -> "Select Object" 2024-10-25 08:12:14 -04:00
psychedelicious
75f605ba1a feat(ui): support inverted selection in auto-mask 2024-10-25 08:12:14 -04:00
psychedelicious
ab898a7180 chore(ui): typegen 2024-10-25 08:12:14 -04:00
psychedelicious
c9a4516ab1 feat(nodes): add invert to apply_tensor_mask_to_image 2024-10-25 08:12:14 -04:00
psychedelicious
fe97c0d5eb tweak(ui): default settings verbiage 2024-10-25 16:09:59 +11:00
psychedelicious
6056764840 feat(ui): disable default settings button when synced
A blue button is begging to be clicked, but clicking it will do nothing. Instead, we should communicate that no action is needed by disabling the button when the default settings are already in use.
2024-10-25 16:09:59 +11:00
psychedelicious
8747c0dbb0 fix(ui): handle no model selection in default settings tooltip 2024-10-25 16:09:59 +11:00
psychedelicious
c5cdd5f9c6 fix(ui): use const EMPTY_OBJECT to prevent rerenders 2024-10-25 16:09:59 +11:00
psychedelicious
abc5d53159 fix(ui): use explicit null check when comparing default settings
Using `&&` will result in false negatives for settings where a falsy value might be valid. For example, any setting for which 0 is a valid number. To be on the safe side, just use an explicit null check on all values.
2024-10-25 16:09:59 +11:00
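
The pitfall above in two lines (hypothetical helper names):

```ts
// Sketch only: a truthiness check mishandles any setting whose valid value is
// falsy -- e.g. a default of 0 never "matches" anything.
const matchesTruthy = (a: number | null, b: number | null) => Boolean(a && a === b); // 0 breaks this

// An explicit null check treats 0 as a real value:
const matches = (a: number | null, b: number | null) => a !== null && a === b;
```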
psychedelicious
2f76019a89 tweak(ui): defaults sync tooltip styling 2024-10-25 16:09:59 +11:00
Mary Hipp
3f45beb1ed feat(ui): add out of sync details to model default settings button 2024-10-25 16:09:59 +11:00
Mary Hipp
bc1126a85b (ui): add setting for showing model descriptions in dropdown defaulted to true 2024-10-25 14:52:33 +11:00
psychedelicious
380017041e fix(app): mutating an image also changes the in-memory cached image
We use an in-memory cache for PIL images to reduce I/O. If a node mutates the image in any way, the cached image object is also updated (but the on-disk image file is not).

We've lucked out that this hasn't caused major issues in the past (well, maybe it has but we didn't understand them?) mainly because of a happy accident. When you call `context.images.get_pil` in a node, if you provide an image mode (e.g. `mode="RGB"`), we call `convert`  on the image. This returns a copy. The node can do whatever it wants to that copy and nothing breaks.

However, when mode is not specified, we return the image directly. This is where we get in trouble - nodes that load the image like this, and then mutate the image, update the cache. Other nodes that reference that same image will now get the mutated version of it.

The fix is super simple - we make sure to return only copies from `get_pil`.
2024-10-25 10:22:22 +11:00
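
The copy-on-read pattern described above, sketched in TypeScript for consistency with the other examples (the actual fix is in the Python `get_pil` path):

```ts
// Sketch only: an in-memory cache must hand out copies; otherwise a caller
// that mutates the returned object silently corrupts the cached original,
// and every later reader sees the mutated version.
class ImageCache {
  private cache = new Map<string, ImageData>();

  set(name: string, image: ImageData): void {
    this.cache.set(name, image);
  }

  get(name: string): ImageData | undefined {
    const cached = this.cache.get(name);
    if (!cached) {
      return undefined;
    }
    // Copy the pixel buffer so mutations never reach the cache.
    return new ImageData(new Uint8ClampedArray(cached.data), cached.width, cached.height);
  }
}
```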
psychedelicious
ab7cdbb7e0 fix(ui): do not delete point on right-mouse click 2024-10-25 10:22:22 +11:00
psychedelicious
e5b78d0221 fix(ui): canvas drop area grid layout 2024-10-25 10:22:22 +11:00
psychedelicious
1acaa6c486 chore: bump version to v5.3.0rc2 2024-10-25 07:50:58 +11:00
psychedelicious
b0381076b7 revert(ui): drop targets for inpaint mask and rg 2024-10-25 07:42:46 +11:00
psychedelicious
ffff2d6dbb feat(ui): add New from Image submenu for image ctx menu 2024-10-25 07:42:46 +11:00
psychedelicious
afa9f07649 fix(ui): missing cursor when transforming 2024-10-25 07:42:46 +11:00
psychedelicious
addb5c49ea feat(ui): support dnd images onto inpaint mask/rg entities 2024-10-25 07:42:46 +11:00
psychedelicious
a112d2d55b feat(ui): add logging to useCopyLayerToClipboard 2024-10-25 07:42:46 +11:00
psychedelicious
619a271c8a feat(ui): disable copy to clipboard when layer is empty 2024-10-25 07:42:46 +11:00
psychedelicious
909f2ee36d feat(ui): add help tooltip to automask 2024-10-25 07:42:46 +11:00
psychedelicious
b4cf3d9d03 fix(ui): canvas context menu w/ eraser tool erases 2024-10-25 07:42:46 +11:00
psychedelicious
e6ab6e0293 chore(ui): lint 2024-10-24 08:39:29 -04:00
psychedelicious
66d9c7c631 fix(ui): icon for automask save as 2024-10-24 08:39:29 -04:00
psychedelicious
fec45f3eb6 feat(ui): animate automask preview overlay 2024-10-24 08:39:29 -04:00
psychedelicious
7211d1a6fc feat(ui): add context menu options for layer type convert/copy 2024-10-24 08:39:29 -04:00
psychedelicious
f3069754a9 feat(ui): add logic to convert/copy between all layer types 2024-10-24 08:39:29 -04:00
psychedelicious
4f43152aeb fix(ui): handle pen/touch events on submenu 2024-10-24 08:39:29 -04:00
psychedelicious
7125055d02 fix(ui): icon menu item group spacing 2024-10-24 08:39:29 -04:00
psychedelicious
c91a9ce390 feat(ui): add pull bbox to global ref image ctx menu 2024-10-24 08:39:29 -04:00
psychedelicious
3e7b73da2c feat(ui): add entity context menu as canvas context menu sub-menu 2024-10-24 08:39:29 -04:00
psychedelicious
61ac50c00d feat(ui): use sub-menu for image metadata recall 2024-10-24 08:39:29 -04:00
psychedelicious
c1201f0bce feat(ui): add useSubMenu hook to abstract logic for sub-menus 2024-10-24 08:39:29 -04:00
psychedelicious
acdffac5ad feat(ui): close viewer when filtering/transforming/automasking 2024-10-24 08:39:29 -04:00
psychedelicious
e420300fa4 feat(ui): replace automask apply w/ save as menu 2024-10-24 08:39:29 -04:00
psychedelicious
260a5a4f9a feat(ui): add automask button to toolbar 2024-10-24 08:39:29 -04:00
psychedelicious
ed0c2006fe feat(ui): rename "foreground"/"background" -> "include"/"exclude" 2024-10-24 08:39:29 -04:00
psychedelicious
9ffd888c86 feat(ui): remove neutral points 2024-10-24 08:39:29 -04:00
psychedelicious
175a9dc28d feat(ui): more resilient auto-masking processing
- Use a hash of the last processed points instead of a `hasProcessed` flag to determine whether or not we should re-process a given set of points.
- Store point coords in state instead of pulling them out of the konva node positions. This makes moving a point a more explicit action in code.
- Add a `roundCoord` util to round the x and y values of a coordinate.
- Ensure we always re-process when $points changes.
2024-10-24 08:39:29 -04:00
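
A sketch of the hash-based re-process check described above (`roundCoord` is named in the commit; the rest is illustrative):

```ts
// Sketch only: hashing the rounded point set catches every change -- add,
// move, delete -- where a boolean hasProcessed flag could go stale.
type SamPoint = { x: number; y: number; label: 'include' | 'exclude' };

const roundCoord = (c: { x: number; y: number }) => ({ x: Math.round(c.x), y: Math.round(c.y) });

const hashPoints = (points: SamPoint[]): string =>
  JSON.stringify(points.map((p) => ({ ...roundCoord(p), label: p.label })));

let lastProcessedHash: string | null = null;

const maybeProcess = (points: SamPoint[], process: (pts: SamPoint[]) => void): void => {
  const hash = hashPoints(points);
  if (hash !== lastProcessedHash) {
    lastProcessedHash = hash;
    process(points);
  }
};
```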
psychedelicious
5764e4f7f2 chore(ui): lint 2024-10-24 23:34:06 +11:00
psychedelicious
4275a494b9 tweak(ui): bundle info icon 2024-10-24 23:34:06 +11:00
psychedelicious
a3deb8d30d tweak(ui): bundle tooltip styling 2024-10-24 23:34:06 +11:00
Mary Hipp
aafdb0a37b update popover copy 2024-10-24 23:34:06 +11:00
Mary Hipp
56a815719a update schema 2024-10-24 23:34:06 +11:00
Mary Hipp
4db26bfa3a (ui): add information popovers for other layer types 2024-10-24 23:34:06 +11:00
Mary Hipp
8d84ccb12b bump UI dep for combobox descriptions 2024-10-24 23:34:06 +11:00
Mary Hipp
3321d14997 undo show descriptions for now 2024-10-24 23:34:06 +11:00
maryhipp
43cc4684e1 (api) make sure all controlnet starter models will still have pre-processors correctly assigned when probed based on name 2024-10-24 23:34:06 +11:00
Mary Hipp
afa5a4b17c (ui): add informational popover for controlnet layers 2024-10-24 23:34:06 +11:00
Mary Hipp
33c433fe59 (ui): show models in starter bundles on hover, use previous_names for isInstalled logic, allow grouped model combobox to optionally show descriptions 2024-10-24 23:34:06 +11:00
maryhipp
9cd47fa857 (api): update names of starter models, add ability to track previous_names so it does not mess up logic that prevents dupe starter model installs 2024-10-24 23:34:06 +11:00
psychedelicious
32d9abe802 tweak(ui): prevent show/hide boards button cutoff
The use of hard 25% widths caused issues for some translations. Adjusted styling to not rely on any hard numbers. Tested with a project name and URL.
2024-10-24 08:21:16 -04:00
psychedelicious
3947d4a165 fix(ui): normalize infill alpha to 0-255 when building infill nodes
The browser/UI uses float 0-1 for alpha, while the backend uses 0-255. We need to normalize the value when building the infill nodes for outpaint.
2024-10-24 19:22:36 +11:00
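
The conversion in the commit above amounts to a clamp-and-scale (helper name is hypothetical):

```ts
// Sketch only: UI alpha is a float in [0, 1]; the infill nodes expect an
// integer in [0, 255]. E.g. 0.5 -> 128.
const toBackendAlpha = (uiAlpha: number): number =>
  Math.round(Math.min(Math.max(uiAlpha, 0), 1) * 255);
```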
psychedelicious
3583d03b70 feat(ui): improve subs and cleanup in filterer module
- Subscribe when starting the filterer
- Remember to abort the abortcontroller when destroying
- Unsubscribe when destroying
2024-10-23 08:21:12 -04:00
psychedelicious
bc954b9996 feat(ui): abort controller in SAM module when destroying 2024-10-23 08:21:12 -04:00
psychedelicious
c08075946a feat(ui): only subscribe listeners when segmenting
Realized we are doing a lot of event listening even when segmenting is not occurring. I don't think this will have a meaningful performance impact, but it makes sense to remove these listeners when not in use.
2024-10-23 08:21:12 -04:00
psychedelicious
df8df914e8 docs(ui): add comments to CanvasSegmentAnythingModule 2024-10-23 08:21:12 -04:00
psychedelicious
33924e8491 feat(ui): ensure abort controllers are cleaned up 2024-10-23 08:21:12 -04:00
psychedelicious
7e5ce1d69d fix(ui): when last SAM point is deleted, reset ephemeral state 2024-10-23 08:21:12 -04:00
Riku
6a24594140 feat(ui): move model manager in-place install state to redux
- persists across sessions/refreshes
- shared state for all installers (local path, scan folder)
2024-10-23 21:17:31 +11:00
psychedelicious
61d26cffe6 chore: bump version to v5.3.0rc1 2024-10-23 16:11:20 +11:00
psychedelicious
fdbc244dbe tidy(ui): autoProcessFilter -> autoProcess
It's used for more than filters now.
2024-10-23 16:01:15 +11:00
psychedelicious
0eea84c90d chore(ui): lint 2024-10-23 16:01:15 +11:00
psychedelicious
e079a91800 feat(ui): reorder point type radios 2024-10-23 16:01:15 +11:00
psychedelicious
eb20173487 fix(ui): set hasProcessed on segment module when deleting a point 2024-10-23 16:01:15 +11:00
psychedelicious
20dd0779b5 feat(ui): use radio instead of drop-down for point label 2024-10-23 16:01:15 +11:00
psychedelicious
b384a92f5c fix(ui): let segment module handle cursor if segmenting 2024-10-23 16:01:15 +11:00
psychedelicious
116d32fbbe feat(ui): auto-process for segment anything 2024-10-23 16:01:15 +11:00
psychedelicious
b044f31a61 fix(ui): translation for isolated layer preview 2024-10-23 16:01:15 +11:00
psychedelicious
6c3c24403b feat(ui): rename "Segment" -> "Auto Mask" 2024-10-23 16:01:15 +11:00
psychedelicious
591f48bb95 chore(ui): lint 2024-10-23 16:01:15 +11:00
psychedelicious
dc6e45485c feat(ui): update CanvasSegmentAnythingModule for new nodes 2024-10-23 16:01:15 +11:00
psychedelicious
829820479d chore(ui): typegen 2024-10-23 16:01:15 +11:00
psychedelicious
48a471bfb8 fix(nodes): apply_tensor_mask_to_image transparent image handling
Fix an issue where if the input image is transparent in a region to be masked, that transparent region ends up opaque black. Need to respect the input image transparency by applying the mask to the alpha channel only.
2024-10-23 16:01:15 +11:00
psychedelicious
ff72315db2 feat(nodes): update SAM backend and nodes to work with SAM points 2024-10-23 16:01:15 +11:00
psychedelicious
790846297a feat(ui): add more data to canvas module reprs 2024-10-23 16:01:15 +11:00
psychedelicious
230b455a13 tidy(ui): $pointTypeEnglish -> $pointTypeString 2024-10-23 16:01:15 +11:00
psychedelicious
71f0fff55b fix(ui): right click on stage draws 2024-10-23 16:01:15 +11:00
psychedelicious
7f2c83b9e6 feat(ui): consolidate isolated preview settings
`isolatedFilteringPreview` and `isolatedTransformingPreview` are merged into `isolatedLayerPreview`. This is also used for segment anything.
2024-10-23 16:01:15 +11:00
psychedelicious
bc85bd4bd4 tidy(ui): clean up and document CanvasSegmentAnythingModule 2024-10-23 16:01:15 +11:00
psychedelicious
38b09d73e4 feat(ui): masking UX (wip - interaction state issue) 2024-10-23 16:01:15 +11:00
psychedelicious
606c4ae88c feat(ui): masking UX (wip - issue w/ positioning) 2024-10-23 16:01:15 +11:00
psychedelicious
f666bac77f tidy(ui): CanvasToolView -> CanvasViewToolModule 2024-10-23 16:01:15 +11:00
psychedelicious
c9bf7da23a tidy(ui): CanvasToolRect -> CanvasRectToolModule 2024-10-23 16:01:15 +11:00
psychedelicious
dfc65b93e9 tidy(ui): CanvasToolMove -> CanvasMoveToolModule 2024-10-23 16:01:15 +11:00
psychedelicious
9ca40b4cf5 tidy(ui): CanvasToolErase -> CanvasEraserToolModule 2024-10-23 16:01:15 +11:00
psychedelicious
d571e71d5e tidy(ui): CanvasToolColorPicker -> CanvasColorPickerToolModule 2024-10-23 16:01:15 +11:00
psychedelicious
ad1e6c3fe6 tidy(ui): CanvasToolBrush -> CanvasBrushToolModule 2024-10-23 16:01:15 +11:00
psychedelicious
21d02911dd tidy(ui): CanvasBboxModule -> CanvasBboxToolModule, move file 2024-10-23 16:01:15 +11:00
psychedelicious
43afe0bd9a feat(ui): move cursor handling to tool modules
Also add cursors for move tool and bbox tool - when pointer is over the layer or bbox, use the move cursor.
2024-10-23 16:01:15 +11:00
psychedelicious
e7a68c446d feat(ui): add CanvasToolView
It's nearly a noop but I think it makes sense to have a module for each tool...
2024-10-23 16:01:15 +11:00
psychedelicious
b9c68a2e7e feat(ui): add CanvasToolMove
It's essentially a noop but I think it makes sense to have a module for each tool...
2024-10-23 16:01:15 +11:00
psychedelicious
371a1b1af3 feat(ui): make CanvasBboxModule child of CanvasToolModule 2024-10-23 16:01:15 +11:00
psychedelicious
dae4591de6 feat(ui): let tool modules set own visibility 2024-10-23 16:01:15 +11:00
psychedelicious
8ccb2e30ce feat(ui): bail on stage events when not targeting the stage 2024-10-23 16:01:15 +11:00
psychedelicious
b8106a4613 fix(ui): bail on drawing when mouse not down 2024-10-23 16:01:15 +11:00
psychedelicious
ce51e9582a feat(ui): add CanvasRectTool 2024-10-23 16:01:15 +11:00
psychedelicious
00848eb631 feat(ui): let color picker tool handle its events 2024-10-23 16:01:15 +11:00
psychedelicious
b48430a892 feat(ui): let eraser tool handle its events 2024-10-23 16:01:15 +11:00
psychedelicious
f94a218561 tidy(ui): remove extraneous checks from CanvasToolBrush 2024-10-23 16:01:15 +11:00
psychedelicious
9b6ed40875 fix(ui): edge case where pressure could be added erroneously to points 2024-10-23 16:01:15 +11:00
psychedelicious
26553dbb0e tidy(ui): CanvasToolModule 2024-10-23 16:01:15 +11:00
psychedelicious
9eb695d0b4 docs(ui): update CanvasToolModule 2024-10-23 16:01:15 +11:00
psychedelicious
babab17e1d feat(ui): let brush tool handle its events
Move brush tool event logic to its class.
2024-10-23 16:01:15 +11:00
psychedelicious
d0a80f3347 feat(ui): create zCoordinateWithPressure & export type from canvas types 2024-10-23 16:01:15 +11:00
psychedelicious
9b30363177 tidy(ui): CanvasToolModule structure 2024-10-23 16:01:15 +11:00
psychedelicious
89bde36b0c feat(ui): support draggable SAM points 2024-10-23 16:01:15 +11:00
psychedelicious
86a8476d97 feat(ui): working segment anything flow 2024-10-23 16:01:15 +11:00
psychedelicious
afa0661e55 chore(ui): typegen 2024-10-23 16:01:15 +11:00
psychedelicious
ba09c1277f feat(nodes): hacked together nodes for segment anything w/ points 2024-10-23 16:01:15 +11:00
psychedelicious
80bf9ddb71 feat(ui): rough out points UI for segment anything module 2024-10-23 16:01:15 +11:00
psychedelicious
1dbc98d747 feat(ui): add CanvasSegmentAnythingModule (wip) 2024-10-23 16:01:15 +11:00
psychedelicious
0698188ea2 feat(ui): support readonly arrays in SerializableObject type 2024-10-23 16:01:15 +11:00
psychedelicious
59d0ad4505 chore(ui): migrate from ts-toolbelt to type-fest
`ts-toolbelt` is unmaintained while `type-fest` is very actively maintained. Both provide similar TS utilities.
2024-10-23 16:01:15 +11:00
Thomas Bolteau
074a5692dd translationBot(ui): update translation (French)
Currently translated at 100.0% (1509 of 1509 strings)

translationBot(ui): update translation (French)

Currently translated at 100.0% (1509 of 1509 strings)

Co-authored-by: Thomas Bolteau <thomas.bolteau50@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
2024-10-23 10:23:37 +11:00
Васянатор
bb0741146a translationBot(ui): update translation (Russian)
Currently translated at 99.6% (1504 of 1509 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-10-23 10:23:37 +11:00
Riccardo Giovanetti
1845d9a87a translationBot(ui): update translation (Italian)
Currently translated at 98.8% (1492 of 1509 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-10-23 10:23:37 +11:00
Riku
748c393e71 translationBot(ui): update translation (German)
Currently translated at 71.0% (1072 of 1509 strings)

Co-authored-by: Riku <riku.block@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-10-23 10:23:37 +11:00
David Burnett
9bd17ea02f Get flux working with MPS on 2.4.1, with GGUF support 2024-10-23 10:20:42 +11:00
David Burnett
24f9b46fbc ruff fix 2024-10-23 10:09:24 +11:00
David Burnett
54b3aa1d01 load t5 model in the same format as it is saved, seems to load as float32 on Macs 2024-10-23 10:09:24 +11:00
Maximilian Maag
d85733f22b fix(installer): pytorch and ROCm versions are incompatible
Each version of torch is only available for specific versions of CUDA and ROCm.
The Invoke installer and dockerfile try to install torch 2.4.1 with ROCm 5.6
support, which does not exist. As a result, the installation falls back to the
default CUDA version so AMD GPUs aren't detected. This commit fixes that by
bumping the ROCm version to 6.1, as suggested by the PyTorch documentation. [1]

The specified CUDA version of 12.4 is still correct according to [1], so it does
not need to be changed.

Closes #7006
Closes #7146

[1]: https://pytorch.org/get-started/previous-versions/#v241
2024-10-23 09:59:00 +11:00
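
For reference, the install command shape from the PyTorch docs linked above [1] (illustrative; the installer builds this internally):

```sh
# torch 2.4.1 wheels exist for rocm6.1, not rocm5.6:
pip install torch==2.4.1 torchvision==0.19.1 --index-url https://download.pytorch.org/whl/rocm6.1
```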
psychedelicious
aff6ad0316 FLUX XLabs IP-Adapter Support (#7157)
## Summary

This PR adds support for the XLabs IP-Adapter
(https://huggingface.co/XLabs-AI/flux-ip-adapter) in workflows. Linear
UI integration is coming in a follow-up PR. The XLabs IP-Adapter can be
installed in the Starter Models tab.

Usage tips:

- Use a `cfg_scale` value of 2.0 to 4.0
- Start with an IP-Adapter weight of ~0.6 and adjust from there.
- Set `cfg_scale_start_step = 1`
- Set `cfg_scale_end_step` to roughly the halfway point (it's
unnecessary to apply CFG to all steps, and this will improve processing
time).

Sample workflow:
<img width="976" alt="image"
src="https://github.com/user-attachments/assets/4627b459-7e5a-4703-80e7-f7575c5fce19">

Result:

![image](https://github.com/user-attachments/assets/220b6a4c-69c6-447f-8df6-8aa6a56f3b3f)

## Related Issues / Discussions

Prerequisite: https://github.com/invoke-ai/InvokeAI/pull/7152

## Remaining TODO:

- [ ] Update default workflows.

## QA Instructions

- [x] Test basic happy path
- [x] Test with multiple IP-Adapters (it runs, but results aren't great)
- [ ] ~Test with multiple images to a single IP-Adapter~ (this is not
supported for now)
- [ ] Test automatic runtime installation of CLIP-L, CLIP-H, and CLIP-G
image encoder models if they are not already installed.
- [ ] Test starter model installation of the XLabs FLUX IP-Adapter
- [ ] Test SD and SDXL IP-Adapters for regression.
- [ ] Check peak memory utilization.

## Merge Plan

- [ ] Merge #7152 
- [ ] Change target branch to main

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-10-23 09:57:39 +11:00
psychedelicious
61496fdcbc fix(nodes): load IP Adapter images as RGB
FLUX IP Adapter only works with RGB. Did the same for non-FLUX to be safe & consistent, though I don't think it's strictly necessary.
2024-10-23 08:34:15 +10:00
psychedelicious
ee8975401a fix(ui): remove special handling for flux in IPAdapterModel
This masked an issue w/ the CLIP Vision model. Issue is now handled in reducer/graph builder.
2024-10-23 08:31:10 +10:00
psychedelicious
bf3260446d fix(ui): use flux_ip_adapter for flux 2024-10-23 08:30:11 +10:00
psychedelicious
f53823b45e fix(ui): update CLIP Vision when ipa model changes 2024-10-23 08:29:14 +10:00
Ryan Dick
5cbe89afdd Merge branch 'main' into ryan/flux-ip-adapter-cfg-2 2024-10-22 21:17:36 +00:00
Ryan Dick
c466d50c3d FLUX CFG support (#7152)
## Summary

Add support for Classifier-Free Guidance with FLUX.

- Using CFG doubles the time for the denoising process. Running both the
positive and negative conditioning in a single batch is left for future
work, because most users are already VRAM-constrained (this would
probably be faster at the cost of higher peak VRAM).
- Negative text conditioning is optional and only required if `cfg_scale
!= 1.0`
- CFG is skipped if `cfg_scale == 1.0` (i.e. no compute overhead in this
case)
- `cfg_scale_start_step` and `cfg_scale_end_step` can be used to easily
control the range of steps that CFG is applied for.
- CFG is a prerequisite for IP-Adapter support.

## Example

Positive Caption: `Professional photography of a luxury hotel in the
Nevada desert`
CFG: 1.0

![image](https://github.com/user-attachments/assets/f25ff832-d69b-4c5f-88f4-9429ce96d598)

Positive Caption: `Professional photography of a luxury hotel in the
Nevada desert`
Negative Caption: `Swimming pool`
CFG: 2.0
Same seed

![image](https://github.com/user-attachments/assets/27e3b952-2795-469f-bb24-b7fddb726ba1)


## QA Instructions

- [ ] Test interactions with ControlNet
- [ ] Verify that peak RAM/VRAM utilization has not increased
significantly
- [ ] Test that CFG is skipped when cfg_scale == 1.0
- [ ] Test that negative text conditioning can be omitted when cfg_scale
== 1.0
- [ ] Test that a clear error message is returned when negative text
conditioning is omitted when cfg_scale != 1.0
- [ ] Test that the negative text prompt gets applied when cfg_scale > 1.0
- [ ] Test that a collection of cfg_scale values can be provided for
per-step control.
- [ ] Test that `cfg_scale_start_step` and `cfg_scale_end_step` control
the range of steps that CFG is applied

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-10-22 17:09:40 -04:00
Ryan Dick
d20b894a61 Add cfg_scale_start_step and cfg_scale_end_step to FLUX Denoise node. 2024-10-23 07:59:48 +11:00
Ryan Dick
20362448b9 Make negative_text_conditioning nullable on FLUX Denoise invocation. 2024-10-23 07:59:48 +11:00
Ryan Dick
5df10cc494 Add support for cfg_scale list on FLUX Denoise node. 2024-10-23 07:59:48 +11:00
Ryan Dick
da171114ea Naive implementation of CFG for FLUX. 2024-10-23 07:59:48 +11:00
Eugene Brodsky
62919a443c fix(installer): remove xformers before installation 2024-10-23 07:57:52 +11:00
Mary Hipp
ffcec91d87 Merge branch 'ryan/flux-ip-adapter-cfg-2' of https://github.com/invoke-ai/InvokeAI into ryan/flux-ip-adapter-cfg-2 2024-10-22 15:23:35 -04:00
Mary Hipp
0a96466b60 feat(ui): add IP adapters to FLUX in linear UI 2024-10-22 15:22:56 -04:00
Ryan Dick
e48cab0276 Only allow a single image prompt for FLUX IP-Adapters (haven't really looked into this much, but punting on it for now). 2024-10-22 16:32:01 +00:00
Ryan Dick
740f6eb19f Skip tests that use the meta device - they fail on the MacOS CI runners. 2024-10-22 15:56:49 +00:00
psychedelicious
d1bb4c2c70 fix(nodes): FluxDenoiseInvocation.controlnet_vae missing default=None 2024-10-22 10:54:15 +11:00
Ryan Dick
e545f18a45 (minor) Fix ruff. 2024-10-21 22:38:06 +00:00
Ryan Dick
e8cd1bb3d8 Add FLUX IP-Adapter starter models. 2024-10-21 22:17:42 +00:00
Ryan Dick
90a906e203 Simplify handling of CLIP ViT selection for FLUX IP-Adapter invocation. 2024-10-21 19:54:59 +00:00
Ryan Dick
5546110127 Add FluxIPAdapterInvocation. 2024-10-21 18:27:40 +00:00
Ryan Dick
73bbb12f7a Use a black image as the negative IP prompt for parity with X-Labs implementation. 2024-10-21 15:47:22 +00:00
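Concretely, this amounts to zeroing the positive image array before CLIP encoding (a hedged sketch with illustrative names; the actual code appears in the FLUX denoise diff further down):

```python
import numpy as np
from PIL import Image

pos_image = np.asarray(Image.open("prompt.png").convert("RGB"))  # the user's image prompt
neg_image = np.zeros_like(pos_image)  # all-black negative prompt, per the X-Labs implementation
```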
Ryan Dick
dde54740c5 Test out IP-Adapter with CFG. 2024-10-21 15:47:17 +00:00
Ryan Dick
f70a8e2c1a A bunch of HACKS to get ViT-L CLIP vision encoder working for FLUX IP-Adapter. Need to revisit how to clean this all up long term. 2024-10-21 15:43:00 +00:00
Ryan Dick
fdccdd52d5 Fixes to get XLabsIpAdapterExtension running. 2024-10-21 15:43:00 +00:00
Ryan Dick
31ffd73423 Initial draft of integrating FLUX IP-Adapter inference support. 2024-10-21 15:42:56 +00:00
Ryan Dick
3fa1012879 Add IPAdapterDoubleBlocks wrapper to tidy FLUX ip-adapter handling. 2024-10-21 15:38:50 +00:00
Ryan Dick
c2a8fbd8d6 (minor) Move infer_xlabs_ip_adapter_params_from_state_dict(...) to state_dict_utils.py. 2024-10-21 15:38:50 +00:00
Ryan Dick
d6643d7263 Add model loading code for xlabs FLUX IP-Adapter (not tested). 2024-10-21 15:38:50 +00:00
Ryan Dick
412e79d8e6 Add model probing for XLabs FLUX IP-Adapter. 2024-10-21 15:38:50 +00:00
Ryan Dick
f939dbdc33 Add is_state_dict_xlabs_ip_adapter() utility function. 2024-10-21 15:38:50 +00:00
Ryan Dick
24a0ca86f5 Add logic for loading an Xlabs IP-Adapter from a state dict. 2024-10-21 15:38:50 +00:00
Ryan Dick
95c30f6a8b Add initial logic for inferring FLUX IP-Adapter params from a state_dict. 2024-10-21 15:38:50 +00:00
Ryan Dick
ac7441e606 Fixup typing/imports for IPDoubleStreamBlockProcessor. 2024-10-21 15:38:50 +00:00
Ryan Dick
9c9af312fe Copy IPDoubleStreamBlockProcessor from 47495425db/src/flux/modules/layers.py (L221). 2024-10-21 15:38:50 +00:00
Ryan Dick
7bf5927c43 Add XLabs IP-Adapter state dict for unit tests. 2024-10-21 15:38:50 +00:00
Ryan Dick
32c7cdd856 Add cfg_scale_start_step and cfg_scale_end_step to FLUX Denoise node. 2024-10-21 14:52:02 +00:00
Mary Hipp
bbd89d54b4 add it to list 2024-10-19 14:08:49 +11:00
Mary Hipp
ee61006a49 add starter model 2024-10-19 14:08:49 +11:00
psychedelicious
0b43f5fd64 docs(ui): improve docstrings for LoggingOverrides 2024-10-19 08:04:20 +11:00
psychedelicious
6c61266990 refactor(ui): logging config handling
Introduce two-stage logging configuration and overrides for enabled status, log level and log namespaces.

The first stage runs in `<InvokeAIUI />`, before we set up redux (and therefore before we have access to the user's configured logging setup). In this stage, we use the overrides or default values.

The second stage is in `<App />`, after we set up redux, via `useSyncLoggingConfig`. In this stage, we use the overrides or the user's configured logging setup. This hook also handles pushing changes made by the user into localstorage.

Other changes:
- Extract logging config to util function
- Remove the `useEffect` from `SettingsModal` that was changing the logging settings
- Remove extraneous log effects from `useLogger`
- Export new `LoggingOverrides` type
2024-10-19 08:04:20 +11:00
Maximilian Maag
2d5afe8094 fix(installer): Print maximize suggestion when Python is found, not when it's missing 2024-10-18 16:35:51 -04:00
Maximilian Maag
2430137d19 fix(installer): Avoid misleading error message when searching for python binary
The `which` command prints a message to stderr when it doesn't find anything. In this case,
not finding anything is expected, so the error is misleading.
2024-10-18 16:35:51 -04:00
Ryan Dick
6df4ee5fc8 Make negative_text_conditioning nullable on FLUX Denoise invocation. 2024-10-18 20:31:27 +00:00
Ryan Dick
371742d8f9 Add support for cfg_scale list on FLUX Denoise node. 2024-10-18 20:14:47 +00:00
psychedelicious
5440c03767 fix(app): directory traversal when deleting images 2024-10-18 14:27:41 +11:00
Ryan Dick
73d4c4d56d Naive implementation of CFG for FLUX. 2024-10-16 16:22:35 +00:00
303 changed files with 13284 additions and 3467 deletions

View File

@@ -19,3 +19,4 @@
- [ ] _The PR has a short but descriptive title, suitable for a changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_

View File

@@ -38,7 +38,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cpu"; \
elif [ "$GPU_DRIVER" = "rocm" ]; then \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/rocm5.6"; \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/rocm6.1"; \
else \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu124"; \
fi &&\

View File

@@ -5,7 +5,7 @@ If you're a new contributor to InvokeAI or Open Source Projects, this is the gui
## New Contributor Checklist
- [x] Set up your local development environment & fork of InvokeAI by following [the steps outlined here](../dev-environment.md)
- [x] Set up your local tooling with [this guide](InvokeAI/contributing/LOCAL_DEVELOPMENT/#developing-invokeai-in-vscode). Feel free to skip this step if you already have tooling you're comfortable with.
- [x] Set up your local tooling with [this guide](../LOCAL_DEVELOPMENT.md). Feel free to skip this step if you already have tooling you're comfortable with.
- [x] Familiarize yourself with [Git](https://www.atlassian.com/git) & our project structure by reading through the [development documentation](development.md)
- [x] Join the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord
- [x] Choose an issue to work on! This can be achieved by asking in the #dev-chat channel, tackling a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) or finding an item on the [roadmap](https://github.com/orgs/invoke-ai/projects/7). If nothing in any of those places catches your eye, feel free to work on something of interest to you!

View File

@@ -17,46 +17,49 @@ If you just want to use Invoke, you should use the [installer][installer link].
## Setup
1. Run through the [requirements][requirements link].
1. [Fork and clone][forking link] the [InvokeAI repo][repo link].
1. Create a directory for user data (images, models, db, etc). This is typically at `~/invokeai`, but if you already have a non-dev install, you may want to create a separate directory for the dev install.
1. Create a python virtual environment inside the directory you just created:
2. [Fork and clone][forking link] the [InvokeAI repo][repo link].
3. Create a directory for user data (images, models, db, etc). This is typically at `~/invokeai`, but if you already have a non-dev install, you may want to create a separate directory for the dev install.
4. Create a python virtual environment inside the directory you just created:
```sh
python3 -m venv .venv --prompt InvokeAI-Dev
```
```sh
python3 -m venv .venv --prompt InvokeAI-Dev
```
1. Activate the venv (you'll need to do this every time you want to run the app):
5. Activate the venv (you'll need to do this every time you want to run the app):
```sh
source .venv/bin/activate
```
```sh
source .venv/bin/activate
```
1. Install the repo as an [editable install][editable install link]:
6. Install the repo as an [editable install][editable install link]:
```sh
pip install -e ".[dev,test,xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
```sh
pip install -e ".[dev,test,xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
Refer to the [manual installation][manual install link] instructions for help determining the correct install options. `xformers` is optional, but `dev` and `test` are not.
Refer to the [manual installation][manual install link] instructions for help determining the correct install options. `xformers` is optional, but `dev` and `test` are not.
1. Install the frontend dev toolchain:
7. Install the frontend dev toolchain:
- [`nodejs`](https://nodejs.org/) (recommend v20 LTS)
- [`pnpm`](https://pnpm.io/installation#installing-a-specific-version) (must be v8 - not v9!)
- [`pnpm`](https://pnpm.io/8.x/installation) (must be v8 - not v9!)
1. Do a production build of the frontend:
8. Do a production build of the frontend:
```sh
pnpm build
```
```sh
cd PATH_TO_INVOKEAI_REPO/invokeai/frontend/web
pnpm i
pnpm build
```
1. Start the application:
9. Start the application:
```sh
python scripts/invokeai-web.py
```
```sh
cd PATH_TO_INVOKEAI_REPO
python scripts/invokeai-web.py
```
1. Access the UI at `localhost:9090`.
10. Access the UI at `localhost:9090`.
## Updating the UI

View File

@@ -209,7 +209,7 @@ checkpoint models.
To solve this, go to the Model Manager tab (the cube), select the
checkpoint model that's giving you trouble, and press the "Convert"
button in the upper right of your browser window. This will conver the
button in the upper right of your browser window. This will convert the
checkpoint into a diffusers model, after which loading should be
faster and less memory-intensive.

View File

@@ -97,16 +97,16 @@ Prior to installing PyPatchMatch, you need to take the following steps:
sudo pacman -S --needed base-devel
```
2. Install `opencv` and `blas`:
2. Install `opencv`, `blas`, and required dependencies:
```sh
sudo pacman -S opencv blas
sudo pacman -S opencv blas fmt glew vtk hdf5
```
or for CUDA support
```sh
sudo pacman -S opencv-cuda blas
sudo pacman -S opencv-cuda blas fmt glew vtk hdf5
```
3. Fix the naming of the `opencv` package configuration file:

View File

@@ -12,7 +12,7 @@ MINIMUM_PYTHON_VERSION=3.10.0
MAXIMUM_PYTHON_VERSION=3.11.100
PYTHON=""
for candidate in python3.11 python3.10 python3 python ; do
if ppath=`which $candidate`; then
if ppath=`which $candidate 2>/dev/null`; then
# when using `pyenv`, the executable for an inactive Python version will exist but will not be operational
# we check that this found executable can actually run
if [ $($candidate --version &>/dev/null; echo ${PIPESTATUS}) -gt 0 ]; then continue; fi
@@ -30,10 +30,11 @@ done
if [ -z "$PYTHON" ]; then
echo "A suitable Python interpreter could not be found"
echo "Please install Python $MINIMUM_PYTHON_VERSION or higher (maximum $MAXIMUM_PYTHON_VERSION) before running this script. See instructions at $INSTRUCTIONS for help."
echo "For the best user experience we suggest enlarging or maximizing this window now."
read -p "Press any key to exit"
exit -1
fi
echo "For the best user experience we suggest enlarging or maximizing this window now."
exec $PYTHON ./lib/main.py ${@}
read -p "Press any key to exit"

View File

@@ -245,6 +245,9 @@ class InvokeAiInstance:
pip = local[self.pip]
# Uninstall xformers if it is present; the correct version of it will be reinstalled if needed
_ = pip["uninstall", "-yqq", "xformers"] & FG
pipeline = pip[
"install",
"--require-virtualenv",
@@ -407,7 +410,7 @@ def get_torch_source() -> Tuple[str | None, str | None]:
optional_modules: str | None = None
if OS == "Linux":
if device == GpuType.ROCM:
url = "https://download.pytorch.org/whl/rocm5.6"
url = "https://download.pytorch.org/whl/rocm6.1"
elif device == GpuType.CPU:
url = "https://download.pytorch.org/whl/cpu"
elif device == GpuType.CUDA:

View File

@@ -259,7 +259,7 @@ def select_gpu() -> GpuType:
[
f"Detected the [gold1]{OS}-{ARCH}[/] platform",
"",
"See [deep_sky_blue1]https://invoke-ai.github.io/InvokeAI/#system[/] to ensure your system meets the minimum requirements.",
"See [deep_sky_blue1]https://invoke-ai.github.io/InvokeAI/installation/requirements/[/] to ensure your system meets the minimum requirements.",
"",
"[red3]🠶[/] [b]Your GPU drivers must be correctly installed before using InvokeAI![/] [red3]🠴[/]",
]

View File

@@ -68,7 +68,7 @@ do_line_input() {
printf "2: Open the developer console\n"
printf "3: Command-line help\n"
printf "Q: Quit\n\n"
printf "To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest.\n\n"
printf "To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest\n\n"
read -p "Please enter 1-4, Q: [1] " yn
choice=${yn:='1'}
do_choice $choice

View File

@@ -40,6 +40,8 @@ class AppVersion(BaseModel):
version: str = Field(description="App version")
highlights: Optional[list[str]] = Field(default=None, description="Highlights of release")
class AppDependencyVersions(BaseModel):
"""App depencency Versions Response"""

View File

@@ -1,6 +1,7 @@
# Copyright (c) 2023 Lincoln D. Stein
"""FastAPI route for model configuration records."""
import contextlib
import io
import pathlib
import shutil
@@ -10,6 +11,7 @@ from enum import Enum
from tempfile import TemporaryDirectory
from typing import List, Optional, Type
import huggingface_hub
from fastapi import Body, Path, Query, Response, UploadFile
from fastapi.responses import FileResponse, HTMLResponse
from fastapi.routing import APIRouter
@@ -27,6 +29,7 @@ from invokeai.app.services.model_records import (
ModelRecordChanges,
UnknownModelException,
)
from invokeai.app.util.suppress_output import SuppressOutput
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
@@ -808,7 +811,11 @@ def get_is_installed(
for model in installed_models:
if model.source == starter_model.source:
return True
if model.name == starter_model.name and model.base == starter_model.base and model.type == starter_model.type:
if (
(model.name == starter_model.name or model.name in starter_model.previous_names)
and model.base == starter_model.base
and model.type == starter_model.type
):
return True
return False
@@ -919,3 +926,51 @@ async def get_stats() -> Optional[CacheStats]:
"""Return performance statistics on the model manager's RAM cache. Will return null if no models have been loaded."""
return ApiDependencies.invoker.services.model_manager.load.ram_cache.stats
class HFTokenStatus(str, Enum):
VALID = "valid"
INVALID = "invalid"
UNKNOWN = "unknown"
class HFTokenHelper:
@classmethod
def get_status(cls) -> HFTokenStatus:
try:
if huggingface_hub.get_token_permission(huggingface_hub.get_token()):
# Valid token!
return HFTokenStatus.VALID
# No token set
return HFTokenStatus.INVALID
except Exception:
return HFTokenStatus.UNKNOWN
@classmethod
def set_token(cls, token: str) -> HFTokenStatus:
with SuppressOutput(), contextlib.suppress(Exception):
huggingface_hub.login(token=token, add_to_git_credential=False)
return cls.get_status()
@model_manager_router.get("/hf_login", operation_id="get_hf_login_status", response_model=HFTokenStatus)
async def get_hf_login_status() -> HFTokenStatus:
token_status = HFTokenHelper.get_status()
if token_status is HFTokenStatus.UNKNOWN:
ApiDependencies.invoker.services.logger.warning("Unable to verify HF token")
return token_status
@model_manager_router.post("/hf_login", operation_id="do_hf_login", response_model=HFTokenStatus)
async def do_hf_login(
token: str = Body(description="Hugging Face token to use for login", embed=True),
) -> HFTokenStatus:
HFTokenHelper.set_token(token)
token_status = HFTokenHelper.get_status()
if token_status is HFTokenStatus.UNKNOWN:
ApiDependencies.invoker.services.logger.warning("Unable to verify HF token")
return token_status
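A hedged sketch of exercising the new endpoints from a client. The `/api/v2/models` prefix and port are assumptions; only the `hf_login` paths and the request/response shapes come from the diff above:

```python
import requests

BASE = "http://localhost:9090/api/v2/models"  # assumed route prefix

# GET reports the current token status: "valid" | "invalid" | "unknown"
status = requests.get(f"{BASE}/hf_login").json()
if status != "valid":
    # POST logs in with a token and returns the refreshed status
    status = requests.post(f"{BASE}/hf_login", json={"token": "hf_..."}).json()
print(status)
```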

View File

@@ -4,6 +4,7 @@ from __future__ import annotations
import inspect
import re
import sys
import warnings
from abc import ABC, abstractmethod
from enum import Enum
@@ -192,12 +193,19 @@ class BaseInvocation(ABC, BaseModel):
"""Gets a pydantc TypeAdapter for the union of all invocation types."""
if not cls._typeadapter or cls._typeadapter_needs_update:
AnyInvocation = TypeAliasType(
"AnyInvocation", Annotated[Union[tuple(cls._invocation_classes)], Field(discriminator="type")]
"AnyInvocation", Annotated[Union[tuple(cls.get_invocations())], Field(discriminator="type")]
)
cls._typeadapter = TypeAdapter(AnyInvocation)
cls._typeadapter_needs_update = False
return cls._typeadapter
@classmethod
def invalidate_typeadapter(cls) -> None:
"""Invalidates the typeadapter, forcing it to be rebuilt on next access. If the invocation allowlist or
denylist is changed, this should be called to ensure the typeadapter is updated and validation respects
the updated allowlist and denylist."""
cls._typeadapter_needs_update = True
@classmethod
def get_invocations(cls) -> Iterable[BaseInvocation]:
"""Gets all invocations, respecting the allowlist and denylist."""
@@ -479,6 +487,26 @@ def invocation(
title="type", default=invocation_type, json_schema_extra={"field_kind": FieldKind.NodeAttribute}
)
# Validate the `invoke()` method is implemented
if "invoke" in cls.__abstractmethods__:
raise ValueError(f'Invocation "{invocation_type}" must implement the "invoke" method')
# And validate that `invoke()` returns a subclass of `BaseInvocationOutput`
invoke_return_annotation = signature(cls.invoke).return_annotation
try:
# TODO(psyche): If `invoke()` is not defined, `return_annotation` ends up as the string "BaseInvocationOutput"
# instead of the class `BaseInvocationOutput`. This may be a pydantic bug: https://github.com/pydantic/pydantic/issues/7978
if isinstance(invoke_return_annotation, str):
invoke_return_annotation = getattr(sys.modules[cls.__module__], invoke_return_annotation)
assert invoke_return_annotation is not BaseInvocationOutput
assert issubclass(invoke_return_annotation, BaseInvocationOutput)
except Exception:
raise ValueError(
f'Invocation "{invocation_type}" must have a return annotation of a subclass of BaseInvocationOutput (got "{invoke_return_annotation}")'
)
docstring = cls.__doc__
cls = create_model(
cls.__qualname__,

View File

@@ -13,6 +13,7 @@ from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
from diffusers.schedulers.scheduling_dpmsolver_sde import DPMSolverSDEScheduler
from diffusers.schedulers.scheduling_tcd import TCDScheduler
from diffusers.schedulers.scheduling_utils import SchedulerMixin as Scheduler
from PIL import Image
from pydantic import field_validator
from torchvision.transforms.functional import resize as tv_resize
from transformers import CLIPVisionModelWithProjection
@@ -510,6 +511,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
context: InvocationContext,
t2i_adapters: Optional[Union[T2IAdapterField, list[T2IAdapterField]]],
ext_manager: ExtensionsManager,
bgr_mode: bool = False,
) -> None:
if t2i_adapters is None:
return
@@ -519,6 +521,10 @@ class DenoiseLatentsInvocation(BaseInvocation):
t2i_adapters = [t2i_adapters]
for t2i_adapter_field in t2i_adapters:
image = context.images.get_pil(t2i_adapter_field.image.image_name)
if bgr_mode: # SDXL t2i trained on cv2's BGR outputs, but PIL won't convert straight to BGR
r, g, b = image.split()
image = Image.merge("RGB", (b, g, r))
ext_manager.add_extension(
T2IAdapterExt(
node_context=context,
@@ -547,7 +553,9 @@ class DenoiseLatentsInvocation(BaseInvocation):
if not isinstance(single_ipa_image_fields, list):
single_ipa_image_fields = [single_ipa_image_fields]
single_ipa_images = [context.images.get_pil(image.image_name) for image in single_ipa_image_fields]
single_ipa_images = [
context.images.get_pil(image.image_name, mode="RGB") for image in single_ipa_image_fields
]
with image_encoder_model_info as image_encoder_model:
assert isinstance(image_encoder_model, CLIPVisionModelWithProjection)
# Get image embeddings from CLIP and ImageProjModel.
@@ -614,13 +622,17 @@ class DenoiseLatentsInvocation(BaseInvocation):
for t2i_adapter_field in t2i_adapter:
t2i_adapter_model_config = context.models.get_config(t2i_adapter_field.t2i_adapter_model.key)
t2i_adapter_loaded_model = context.models.load(t2i_adapter_field.t2i_adapter_model)
image = context.images.get_pil(t2i_adapter_field.image.image_name)
image = context.images.get_pil(t2i_adapter_field.image.image_name, mode="RGB")
# The max_unet_downscale is the maximum amount that the UNet model downscales the latent image internally.
if t2i_adapter_model_config.base == BaseModelType.StableDiffusion1:
max_unet_downscale = 8
elif t2i_adapter_model_config.base == BaseModelType.StableDiffusionXL:
max_unet_downscale = 4
# SDXL adapters are trained on cv2's BGR outputs
r, g, b = image.split()
image = Image.merge("RGB", (b, g, r))
else:
raise ValueError(f"Unexpected T2I-Adapter base model type: '{t2i_adapter_model_config.base}'.")
@@ -628,29 +640,39 @@ class DenoiseLatentsInvocation(BaseInvocation):
with t2i_adapter_loaded_model as t2i_adapter_model:
total_downscale_factor = t2i_adapter_model.total_downscale_factor
# Resize the T2I-Adapter input image.
# We select the resize dimensions so that after the T2I-Adapter's total_downscale_factor is applied, the
# result will match the latent image's dimensions after max_unet_downscale is applied.
t2i_input_height = latents_shape[2] // max_unet_downscale * total_downscale_factor
t2i_input_width = latents_shape[3] // max_unet_downscale * total_downscale_factor
# Note: We have hard-coded `do_classifier_free_guidance=False`. This is because we only want to prepare
# a single image. If CFG is enabled, we will duplicate the resultant tensor after applying the
# T2I-Adapter model.
#
# Note: We re-use the `prepare_control_image(...)` from ControlNet for T2I-Adapter, because it has many
# of the same requirements (e.g. preserving binary masks during resize).
# Assuming fixed dimensional scaling of LATENT_SCALE_FACTOR.
_, _, latent_height, latent_width = latents_shape
control_height_resize = latent_height * LATENT_SCALE_FACTOR
control_width_resize = latent_width * LATENT_SCALE_FACTOR
t2i_image = prepare_control_image(
image=image,
do_classifier_free_guidance=False,
width=t2i_input_width,
height=t2i_input_height,
width=control_width_resize,
height=control_height_resize,
num_channels=t2i_adapter_model.config["in_channels"], # mypy treats this as a FrozenDict
device=t2i_adapter_model.device,
dtype=t2i_adapter_model.dtype,
resize_mode=t2i_adapter_field.resize_mode,
)
# Resize the T2I-Adapter input image.
# We select the resize dimensions so that after the T2I-Adapter's total_downscale_factor is applied, the
# result will match the latent image's dimensions after max_unet_downscale is applied.
# We crop the image to this size so that the positions match the input image on non-standard resolutions
t2i_input_height = latents_shape[2] // max_unet_downscale * total_downscale_factor
t2i_input_width = latents_shape[3] // max_unet_downscale * total_downscale_factor
if t2i_image.shape[2] > t2i_input_height or t2i_image.shape[3] > t2i_input_width:
t2i_image = t2i_image[
:, :, : min(t2i_image.shape[2], t2i_input_height), : min(t2i_image.shape[3], t2i_input_width)
]
adapter_state = t2i_adapter_model(t2i_image)
if do_classifier_free_guidance:
@@ -898,7 +920,8 @@ class DenoiseLatentsInvocation(BaseInvocation):
# ext = extension_field.to_extension(exit_stack, context, ext_manager)
# ext_manager.add_extension(ext)
self.parse_controlnet_field(exit_stack, context, self.control, ext_manager)
self.parse_t2i_adapter_field(exit_stack, context, self.t2i_adapter, ext_manager)
bgr_mode = self.unet.unet.base == BaseModelType.StableDiffusionXL
self.parse_t2i_adapter_field(exit_stack, context, self.t2i_adapter, ext_manager, bgr_mode)
# ext: t2i/ip adapter
ext_manager.run_callback(ExtensionCallbackType.SETUP, denoise_ctx)

View File

@@ -41,6 +41,7 @@ class UIType(str, Enum, metaclass=MetaEnum):
# region Model Field Types
MainModel = "MainModelField"
FluxMainModel = "FluxMainModelField"
SD3MainModel = "SD3MainModelField"
SDXLMainModel = "SDXLMainModelField"
SDXLRefinerModel = "SDXLRefinerModelField"
ONNXModel = "ONNXModelField"
@@ -52,6 +53,8 @@ class UIType(str, Enum, metaclass=MetaEnum):
T2IAdapterModel = "T2IAdapterModelField"
T5EncoderModel = "T5EncoderModelField"
CLIPEmbedModel = "CLIPEmbedModelField"
CLIPLEmbedModel = "CLIPLEmbedModelField"
CLIPGEmbedModel = "CLIPGEmbedModelField"
SpandrelImageToImageModel = "SpandrelImageToImageModelField"
# endregion
@@ -131,8 +134,10 @@ class FieldDescriptions:
clip = "CLIP (tokenizer, text encoder, LoRAs) and skipped layer count"
t5_encoder = "T5 tokenizer and text encoder"
clip_embed_model = "CLIP Embed loader"
clip_g_model = "CLIP-G Embed loader"
unet = "UNet (scheduler, LoRAs)"
transformer = "Transformer"
mmditx = "MMDiTX"
vae = "VAE"
cond = "Conditioning tensor"
controlnet_model = "ControlNet model to load"
@@ -140,6 +145,7 @@ class FieldDescriptions:
lora_model = "LoRA model to load"
main_model = "Main model (UNet, VAE, CLIP) to load"
flux_model = "Flux model (Transformer) to load"
sd3_model = "SD3 model (MMDiTX) to load"
sdxl_main_model = "SDXL Main model (UNet, VAE, CLIP1, CLIP2) to load"
sdxl_refiner_model = "SDXL Refiner Main Model (UNet, VAE, CLIP2) to load"
onnx_main_model = "ONNX Main model (UNet, VAE, CLIP) to load"
@@ -246,6 +252,12 @@ class FluxConditioningField(BaseModel):
conditioning_name: str = Field(description="The name of conditioning tensor")
class SD3ConditioningField(BaseModel):
"""A conditioning tensor primitive value"""
conditioning_name: str = Field(description="The name of conditioning tensor")
class ConditioningField(BaseModel):
"""A conditioning tensor primitive value"""

View File

@@ -1,15 +1,19 @@
from contextlib import ExitStack
from typing import Callable, Iterator, Optional, Tuple
import numpy as np
import numpy.typing as npt
import torch
import torchvision.transforms as tv_transforms
from torchvision.transforms.functional import resize as tv_resize
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import (
DenoiseMaskField,
FieldDescriptions,
FluxConditioningField,
ImageField,
Input,
InputField,
LatentsField,
@@ -17,6 +21,7 @@ from invokeai.app.invocations.fields import (
WithMetadata,
)
from invokeai.app.invocations.flux_controlnet import FluxControlNetField
from invokeai.app.invocations.ip_adapter import IPAdapterField
from invokeai.app.invocations.model import TransformerField, VAEField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
@@ -26,6 +31,8 @@ from invokeai.backend.flux.denoise import denoise
from invokeai.backend.flux.extensions.inpaint_extension import InpaintExtension
from invokeai.backend.flux.extensions.instantx_controlnet_extension import InstantXControlNetExtension
from invokeai.backend.flux.extensions.xlabs_controlnet_extension import XLabsControlNetExtension
from invokeai.backend.flux.extensions.xlabs_ip_adapter_extension import XLabsIPAdapterExtension
from invokeai.backend.flux.ip_adapter.xlabs_ip_adapter_flux import XlabsIpAdapterFlux
from invokeai.backend.flux.model import Flux
from invokeai.backend.flux.sampling_utils import (
clip_timestep_schedule_fractional,
@@ -49,7 +56,7 @@ from invokeai.backend.util.devices import TorchDevice
title="FLUX Denoise",
tags=["image", "flux"],
category="image",
version="3.1.0",
version="3.2.0",
classification=Classification.Prototype,
)
class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
@@ -82,6 +89,24 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
positive_text_conditioning: FluxConditioningField = InputField(
description=FieldDescriptions.positive_cond, input=Input.Connection
)
negative_text_conditioning: FluxConditioningField | None = InputField(
default=None,
description="Negative conditioning tensor. Can be None if cfg_scale is 1.0.",
input=Input.Connection,
)
cfg_scale: float | list[float] = InputField(default=1.0, description=FieldDescriptions.cfg_scale, title="CFG Scale")
cfg_scale_start_step: int = InputField(
default=0,
title="CFG Scale Start Step",
description="Index of the first step to apply cfg_scale. Negative indices count backwards from the "
+ "the last step (e.g. a value of -1 refers to the final step).",
)
cfg_scale_end_step: int = InputField(
default=-1,
title="CFG Scale End Step",
description="Index of the last step to apply cfg_scale. Negative indices count backwards from the "
+ "last step (e.g. a value of -1 refers to the final step).",
)
width: int = InputField(default=1024, multiple_of=16, description="Width of the generated image.")
height: int = InputField(default=1024, multiple_of=16, description="Height of the generated image.")
num_steps: int = InputField(
@@ -96,10 +121,15 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
default=None, input=Input.Connection, description="ControlNet models."
)
controlnet_vae: VAEField | None = InputField(
default=None,
description=FieldDescriptions.vae,
input=Input.Connection,
)
ip_adapter: IPAdapterField | list[IPAdapterField] | None = InputField(
description=FieldDescriptions.ip_adapter, title="IP-Adapter", default=None, input=Input.Connection
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = self._run_diffusion(context)
@@ -108,6 +138,19 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)
def _load_text_conditioning(
self, context: InvocationContext, conditioning_name: str, dtype: torch.dtype
) -> Tuple[torch.Tensor, torch.Tensor]:
# Load the conditioning data.
cond_data = context.conditioning.load(conditioning_name)
assert len(cond_data.conditionings) == 1
flux_conditioning = cond_data.conditionings[0]
assert isinstance(flux_conditioning, FLUXConditioningInfo)
flux_conditioning = flux_conditioning.to(dtype=dtype)
t5_embeddings = flux_conditioning.t5_embeds
clip_embeddings = flux_conditioning.clip_embeds
return t5_embeddings, clip_embeddings
def _run_diffusion(
self,
context: InvocationContext,
@@ -115,13 +158,15 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
inference_dtype = torch.bfloat16
# Load the conditioning data.
cond_data = context.conditioning.load(self.positive_text_conditioning.conditioning_name)
assert len(cond_data.conditionings) == 1
flux_conditioning = cond_data.conditionings[0]
assert isinstance(flux_conditioning, FLUXConditioningInfo)
flux_conditioning = flux_conditioning.to(dtype=inference_dtype)
t5_embeddings = flux_conditioning.t5_embeds
clip_embeddings = flux_conditioning.clip_embeds
pos_t5_embeddings, pos_clip_embeddings = self._load_text_conditioning(
context, self.positive_text_conditioning.conditioning_name, inference_dtype
)
neg_t5_embeddings: torch.Tensor | None = None
neg_clip_embeddings: torch.Tensor | None = None
if self.negative_text_conditioning is not None:
neg_t5_embeddings, neg_clip_embeddings = self._load_text_conditioning(
context, self.negative_text_conditioning.conditioning_name, inference_dtype
)
# Load the input latents, if provided.
init_latents = context.tensors.load(self.latents.latents_name) if self.latents else None
@@ -182,8 +227,16 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
b, _c, latent_h, latent_w = x.shape
img_ids = generate_img_ids(h=latent_h, w=latent_w, batch_size=b, device=x.device, dtype=x.dtype)
bs, t5_seq_len, _ = t5_embeddings.shape
txt_ids = torch.zeros(bs, t5_seq_len, 3, dtype=inference_dtype, device=TorchDevice.choose_torch_device())
pos_bs, pos_t5_seq_len, _ = pos_t5_embeddings.shape
pos_txt_ids = torch.zeros(
pos_bs, pos_t5_seq_len, 3, dtype=inference_dtype, device=TorchDevice.choose_torch_device()
)
neg_txt_ids: torch.Tensor | None = None
if neg_t5_embeddings is not None:
neg_bs, neg_t5_seq_len, _ = neg_t5_embeddings.shape
neg_txt_ids = torch.zeros(
neg_bs, neg_t5_seq_len, 3, dtype=inference_dtype, device=TorchDevice.choose_torch_device()
)
# Pack all latent tensors.
init_latents = pack(init_latents) if init_latents is not None else None
@@ -204,6 +257,21 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
noise=noise,
)
# Compute the IP-Adapter image prompt clip embeddings.
# We do this before loading other models to minimize peak memory.
# TODO(ryand): We should really do this in a separate invocation to benefit from caching.
ip_adapter_fields = self._normalize_ip_adapter_fields()
pos_image_prompt_clip_embeds, neg_image_prompt_clip_embeds = self._prep_ip_adapter_image_prompt_clip_embeds(
ip_adapter_fields, context
)
cfg_scale = self.prep_cfg_scale(
cfg_scale=self.cfg_scale,
timesteps=timesteps,
cfg_scale_start_step=self.cfg_scale_start_step,
cfg_scale_end_step=self.cfg_scale_end_step,
)
with ExitStack() as exit_stack:
# Prepare ControlNet extensions.
# Note: We do this before loading the transformer model to minimize peak memory (see implementation).
@@ -252,23 +320,88 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
else:
raise ValueError(f"Unsupported model format: {config.format}")
# Prepare IP-Adapter extensions.
pos_ip_adapter_extensions, neg_ip_adapter_extensions = self._prep_ip_adapter_extensions(
pos_image_prompt_clip_embeds=pos_image_prompt_clip_embeds,
neg_image_prompt_clip_embeds=neg_image_prompt_clip_embeds,
ip_adapter_fields=ip_adapter_fields,
context=context,
exit_stack=exit_stack,
dtype=inference_dtype,
)
x = denoise(
model=transformer,
img=x,
img_ids=img_ids,
txt=t5_embeddings,
txt_ids=txt_ids,
vec=clip_embeddings,
txt=pos_t5_embeddings,
txt_ids=pos_txt_ids,
vec=pos_clip_embeddings,
neg_txt=neg_t5_embeddings,
neg_txt_ids=neg_txt_ids,
neg_vec=neg_clip_embeddings,
timesteps=timesteps,
step_callback=self._build_step_callback(context),
guidance=self.guidance,
cfg_scale=cfg_scale,
inpaint_extension=inpaint_extension,
controlnet_extensions=controlnet_extensions,
pos_ip_adapter_extensions=pos_ip_adapter_extensions,
neg_ip_adapter_extensions=neg_ip_adapter_extensions,
)
x = unpack(x.float(), self.height, self.width)
return x
@classmethod
def prep_cfg_scale(
cls, cfg_scale: float | list[float], timesteps: list[float], cfg_scale_start_step: int, cfg_scale_end_step: int
) -> list[float]:
"""Prepare the cfg_scale schedule.
- Clips the cfg_scale schedule based on cfg_scale_start_step and cfg_scale_end_step.
- If cfg_scale is a list, then it is assumed to be a schedule and is returned as-is.
- If cfg_scale is a scalar, then a constant schedule is created and applied from cfg_scale_start_step to cfg_scale_end_step.
"""
# num_steps is the number of denoising steps, which is one less than the number of timesteps.
num_steps = len(timesteps) - 1
# Normalize cfg_scale to a list if it is a scalar.
cfg_scale_list: list[float]
if isinstance(cfg_scale, float):
cfg_scale_list = [cfg_scale] * num_steps
elif isinstance(cfg_scale, list):
cfg_scale_list = cfg_scale
else:
raise ValueError(f"Unsupported cfg_scale type: {type(cfg_scale)}")
assert len(cfg_scale_list) == num_steps
# Handle negative indices for cfg_scale_start_step and cfg_scale_end_step.
start_step_index = cfg_scale_start_step
if start_step_index < 0:
start_step_index = num_steps + start_step_index
end_step_index = cfg_scale_end_step
if end_step_index < 0:
end_step_index = num_steps + end_step_index
# Validate the start and end step indices.
if not (0 <= start_step_index < num_steps):
raise ValueError(f"Invalid cfg_scale_start_step. Out of range: {cfg_scale_start_step}.")
if not (0 <= end_step_index < num_steps):
raise ValueError(f"Invalid cfg_scale_end_step. Out of range: {cfg_scale_end_step}.")
if start_step_index > end_step_index:
raise ValueError(
f"cfg_scale_start_step ({cfg_scale_start_step}) must be before cfg_scale_end_step "
+ f"({cfg_scale_end_step})."
)
# Set values outside the start and end step indices to 1.0. This is equivalent to disabling cfg_scale for those
# steps.
clipped_cfg_scale = [1.0] * num_steps
clipped_cfg_scale[start_step_index : end_step_index + 1] = cfg_scale_list[start_step_index : end_step_index + 1]
return clipped_cfg_scale
def _prep_inpaint_mask(self, context: InvocationContext, latents: torch.Tensor) -> torch.Tensor | None:
"""Prepare the inpaint mask.
@@ -408,6 +541,112 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
return controlnet_extensions
def _normalize_ip_adapter_fields(self) -> list[IPAdapterField]:
if self.ip_adapter is None:
return []
elif isinstance(self.ip_adapter, IPAdapterField):
return [self.ip_adapter]
elif isinstance(self.ip_adapter, list):
return self.ip_adapter
else:
raise ValueError(f"Unsupported IP-Adapter type: {type(self.ip_adapter)}")
def _prep_ip_adapter_image_prompt_clip_embeds(
self,
ip_adapter_fields: list[IPAdapterField],
context: InvocationContext,
) -> tuple[list[torch.Tensor], list[torch.Tensor]]:
"""Run the IPAdapter CLIPVisionModel, returning image prompt embeddings."""
clip_image_processor = CLIPImageProcessor()
pos_image_prompt_clip_embeds: list[torch.Tensor] = []
neg_image_prompt_clip_embeds: list[torch.Tensor] = []
for ip_adapter_field in ip_adapter_fields:
# `ip_adapter_field.image` could be a list or a single ImageField. Normalize to a list here.
ipa_image_fields: list[ImageField]
if isinstance(ip_adapter_field.image, ImageField):
ipa_image_fields = [ip_adapter_field.image]
elif isinstance(ip_adapter_field.image, list):
ipa_image_fields = ip_adapter_field.image
else:
raise ValueError(f"Unsupported IP-Adapter image type: {type(ip_adapter_field.image)}")
if len(ipa_image_fields) != 1:
raise ValueError(
f"FLUX IP-Adapter only supports a single image prompt (received {len(ipa_image_fields)})."
)
ipa_images = [context.images.get_pil(image.image_name, mode="RGB") for image in ipa_image_fields]
pos_images: list[npt.NDArray[np.uint8]] = []
neg_images: list[npt.NDArray[np.uint8]] = []
for ipa_image in ipa_images:
assert ipa_image.mode == "RGB"
pos_image = np.array(ipa_image)
# We use a black image as the negative image prompt for parity with
# https://github.com/XLabs-AI/x-flux-comfyui/blob/45c834727dd2141aebc505ae4b01f193a8414e38/nodes.py#L592-L593
# An alternative scheme would be to apply zeros_like() after calling the clip_image_processor.
neg_image = np.zeros_like(pos_image)
pos_images.append(pos_image)
neg_images.append(neg_image)
with context.models.load(ip_adapter_field.image_encoder_model) as image_encoder_model:
assert isinstance(image_encoder_model, CLIPVisionModelWithProjection)
clip_image: torch.Tensor = clip_image_processor(images=pos_images, return_tensors="pt").pixel_values
clip_image = clip_image.to(device=image_encoder_model.device, dtype=image_encoder_model.dtype)
pos_clip_image_embeds = image_encoder_model(clip_image).image_embeds
clip_image = clip_image_processor(images=neg_images, return_tensors="pt").pixel_values
clip_image = clip_image.to(device=image_encoder_model.device, dtype=image_encoder_model.dtype)
neg_clip_image_embeds = image_encoder_model(clip_image).image_embeds
pos_image_prompt_clip_embeds.append(pos_clip_image_embeds)
neg_image_prompt_clip_embeds.append(neg_clip_image_embeds)
return pos_image_prompt_clip_embeds, neg_image_prompt_clip_embeds
def _prep_ip_adapter_extensions(
self,
ip_adapter_fields: list[IPAdapterField],
pos_image_prompt_clip_embeds: list[torch.Tensor],
neg_image_prompt_clip_embeds: list[torch.Tensor],
context: InvocationContext,
exit_stack: ExitStack,
dtype: torch.dtype,
) -> tuple[list[XLabsIPAdapterExtension], list[XLabsIPAdapterExtension]]:
pos_ip_adapter_extensions: list[XLabsIPAdapterExtension] = []
neg_ip_adapter_extensions: list[XLabsIPAdapterExtension] = []
for ip_adapter_field, pos_image_prompt_clip_embed, neg_image_prompt_clip_embed in zip(
ip_adapter_fields, pos_image_prompt_clip_embeds, neg_image_prompt_clip_embeds, strict=True
):
ip_adapter_model = exit_stack.enter_context(context.models.load(ip_adapter_field.ip_adapter_model))
assert isinstance(ip_adapter_model, XlabsIpAdapterFlux)
ip_adapter_model = ip_adapter_model.to(dtype=dtype)
if ip_adapter_field.mask is not None:
raise ValueError("IP-Adapter masks are not yet supported in Flux.")
ip_adapter_extension = XLabsIPAdapterExtension(
model=ip_adapter_model,
image_prompt_clip_embed=pos_image_prompt_clip_embed,
weight=ip_adapter_field.weight,
begin_step_percent=ip_adapter_field.begin_step_percent,
end_step_percent=ip_adapter_field.end_step_percent,
)
ip_adapter_extension.run_image_proj(dtype=dtype)
pos_ip_adapter_extensions.append(ip_adapter_extension)
ip_adapter_extension = XLabsIPAdapterExtension(
model=ip_adapter_model,
image_prompt_clip_embed=neg_image_prompt_clip_embed,
weight=ip_adapter_field.weight,
begin_step_percent=ip_adapter_field.begin_step_percent,
end_step_percent=ip_adapter_field.end_step_percent,
)
ip_adapter_extension.run_image_proj(dtype=dtype)
neg_ip_adapter_extensions.append(ip_adapter_extension)
return pos_ip_adapter_extensions, neg_ip_adapter_extensions
def _lora_iterator(self, context: InvocationContext) -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in self.transformer.loras:
lora_info = context.models.load(lora.lora)

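Tracing the `prep_cfg_scale` implementation above, a small worked example of the schedule clipping (the import is a hypothetical path; the values follow directly from the code):

```python
# from invokeai.app.invocations.flux_denoise import FluxDenoiseInvocation  # assumed path

# 6 timesteps -> 5 denoising steps
timesteps = [1.0, 0.8, 0.6, 0.4, 0.2, 0.0]

# A scalar cfg_scale is broadcast to all steps, then clipped to the
# [start, end] window; steps outside the window fall back to 1.0 (CFG off).
schedule = FluxDenoiseInvocation.prep_cfg_scale(
    cfg_scale=2.0,
    timesteps=timesteps,
    cfg_scale_start_step=1,
    cfg_scale_end_step=-2,  # negative indices count back from the last step
)
assert schedule == [1.0, 2.0, 2.0, 2.0, 1.0]
```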
View File

@@ -0,0 +1,89 @@
from builtins import float
from typing import List, Literal, Union
from pydantic import field_validator, model_validator
from typing_extensions import Self
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import InputField, UIType
from invokeai.app.invocations.ip_adapter import (
CLIP_VISION_MODEL_MAP,
IPAdapterField,
IPAdapterInvocation,
IPAdapterOutput,
)
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import (
IPAdapterCheckpointConfig,
IPAdapterInvokeAIConfig,
)
@invocation(
"flux_ip_adapter",
title="FLUX IP-Adapter",
tags=["ip_adapter", "control"],
category="ip_adapter",
version="1.0.0",
classification=Classification.Prototype,
)
class FluxIPAdapterInvocation(BaseInvocation):
"""Collects FLUX IP-Adapter info to pass to other nodes."""
# FluxIPAdapterInvocation is based closely on IPAdapterInvocation, but with some unsupported features removed.
image: ImageField = InputField(description="The IP-Adapter image prompt(s).")
ip_adapter_model: ModelIdentifierField = InputField(
description="The IP-Adapter model.", title="IP-Adapter Model", ui_type=UIType.IPAdapterModel
)
# Currently, the only known ViT model used by FLUX IP-Adapters is ViT-L.
clip_vision_model: Literal["ViT-L"] = InputField(description="CLIP Vision model to use.", default="ViT-L")
weight: Union[float, List[float]] = InputField(
default=1, description="The weight given to the IP-Adapter", title="Weight"
)
begin_step_percent: float = InputField(
default=0, ge=0, le=1, description="When the IP-Adapter is first applied (% of total steps)"
)
end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)"
)
@field_validator("weight")
@classmethod
def validate_ip_adapter_weight(cls, v: float) -> float:
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self) -> Self:
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
def invoke(self, context: InvocationContext) -> IPAdapterOutput:
# Lookup the CLIP Vision encoder that is intended to be used with the IP-Adapter model.
ip_adapter_info = context.models.get_config(self.ip_adapter_model.key)
assert isinstance(ip_adapter_info, (IPAdapterInvokeAIConfig, IPAdapterCheckpointConfig))
# Note: There is an IPAdapterInvokeAIConfig.image_encoder_model_id field, but it isn't trustworthy.
image_encoder_starter_model = CLIP_VISION_MODEL_MAP[self.clip_vision_model]
image_encoder_model_id = image_encoder_starter_model.source
image_encoder_model_name = image_encoder_starter_model.name
image_encoder_model = IPAdapterInvocation.get_clip_image_encoder(
context, image_encoder_model_id, image_encoder_model_name
)
return IPAdapterOutput(
ip_adapter=IPAdapterField(
image=self.image,
ip_adapter_model=self.ip_adapter_model,
image_encoder_model=ModelIdentifierField.from_config(image_encoder_model),
weight=self.weight,
target_blocks=[], # target_blocks is currently unused for FLUX IP-Adapters.
begin_step_percent=self.begin_step_percent,
end_step_percent=self.end_step_percent,
mask=None, # mask is currently unused for FLUX IP-Adapters.
),
)

View File

@@ -0,0 +1,89 @@
from typing import Literal
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.invocations.model import CLIPField, ModelIdentifierField, T5EncoderField, TransformerField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.util import max_seq_lengths
from invokeai.backend.model_manager.config import (
CheckpointConfigBase,
SubModelType,
)
@invocation_output("flux_model_loader_output")
class FluxModelLoaderOutput(BaseInvocationOutput):
"""Flux base model loader output"""
transformer: TransformerField = OutputField(description=FieldDescriptions.transformer, title="Transformer")
clip: CLIPField = OutputField(description=FieldDescriptions.clip, title="CLIP")
t5_encoder: T5EncoderField = OutputField(description=FieldDescriptions.t5_encoder, title="T5 Encoder")
vae: VAEField = OutputField(description=FieldDescriptions.vae, title="VAE")
max_seq_len: Literal[256, 512] = OutputField(
description="The max sequence length to used for the T5 encoder. (256 for schnell transformer, 512 for dev transformer)",
title="Max Seq Length",
)
@invocation(
"flux_model_loader",
title="Flux Main Model",
tags=["model", "flux"],
category="model",
version="1.0.4",
classification=Classification.Prototype,
)
class FluxModelLoaderInvocation(BaseInvocation):
"""Loads a flux base model, outputting its submodels."""
model: ModelIdentifierField = InputField(
description=FieldDescriptions.flux_model,
ui_type=UIType.FluxMainModel,
input=Input.Direct,
)
t5_encoder_model: ModelIdentifierField = InputField(
description=FieldDescriptions.t5_encoder, ui_type=UIType.T5EncoderModel, input=Input.Direct, title="T5 Encoder"
)
clip_embed_model: ModelIdentifierField = InputField(
description=FieldDescriptions.clip_embed_model,
ui_type=UIType.CLIPEmbedModel,
input=Input.Direct,
title="CLIP Embed",
)
vae_model: ModelIdentifierField = InputField(
description=FieldDescriptions.vae_model, ui_type=UIType.FluxVAEModel, title="VAE"
)
def invoke(self, context: InvocationContext) -> FluxModelLoaderOutput:
for key in [self.model.key, self.t5_encoder_model.key, self.clip_embed_model.key, self.vae_model.key]:
if not context.models.exists(key):
raise ValueError(f"Unknown model: {key}")
transformer = self.model.model_copy(update={"submodel_type": SubModelType.Transformer})
vae = self.vae_model.model_copy(update={"submodel_type": SubModelType.VAE})
tokenizer = self.clip_embed_model.model_copy(update={"submodel_type": SubModelType.Tokenizer})
clip_encoder = self.clip_embed_model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
tokenizer2 = self.t5_encoder_model.model_copy(update={"submodel_type": SubModelType.Tokenizer2})
t5_encoder = self.t5_encoder_model.model_copy(update={"submodel_type": SubModelType.TextEncoder2})
transformer_config = context.models.get_config(transformer)
assert isinstance(transformer_config, CheckpointConfigBase)
return FluxModelLoaderOutput(
transformer=TransformerField(transformer=transformer, loras=[]),
clip=CLIPField(tokenizer=tokenizer, text_encoder=clip_encoder, loras=[], skipped_layers=0),
t5_encoder=T5EncoderField(tokenizer=tokenizer2, text_encoder=t5_encoder),
vae=VAEField(vae=vae),
max_seq_len=max_seq_lengths[transformer_config.config_path],
)

View File

@@ -9,6 +9,7 @@ from invokeai.app.invocations.fields import FieldDescriptions, InputField, Outpu
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import (
AnyModelConfig,
@@ -17,6 +18,12 @@ from invokeai.backend.model_manager.config import (
IPAdapterInvokeAIConfig,
ModelType,
)
from invokeai.backend.model_manager.starter_models import (
StarterModel,
clip_vit_l_image_encoder,
ip_adapter_sd_image_encoder,
ip_adapter_sdxl_image_encoder,
)
class IPAdapterField(BaseModel):
@@ -55,10 +62,14 @@ class IPAdapterOutput(BaseInvocationOutput):
ip_adapter: IPAdapterField = OutputField(description=FieldDescriptions.ip_adapter, title="IP-Adapter")
CLIP_VISION_MODEL_MAP = {"ViT-H": "ip_adapter_sd_image_encoder", "ViT-G": "ip_adapter_sdxl_image_encoder"}
CLIP_VISION_MODEL_MAP: dict[Literal["ViT-L", "ViT-H", "ViT-G"], StarterModel] = {
"ViT-L": clip_vit_l_image_encoder,
"ViT-H": ip_adapter_sd_image_encoder,
"ViT-G": ip_adapter_sdxl_image_encoder,
}
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.4.1")
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.5.0")
class IPAdapterInvocation(BaseInvocation):
"""Collects IP-Adapter info to pass to other nodes."""
@@ -70,7 +81,7 @@ class IPAdapterInvocation(BaseInvocation):
ui_order=-1,
ui_type=UIType.IPAdapterModel,
)
clip_vision_model: Literal["ViT-H", "ViT-G"] = InputField(
clip_vision_model: Literal["ViT-H", "ViT-G", "ViT-L"] = InputField(
description="CLIP Vision model to use. Overrides model settings. Mandatory for checkpoint models.",
default="ViT-H",
ui_order=2,
@@ -111,9 +122,11 @@ class IPAdapterInvocation(BaseInvocation):
image_encoder_model_id = ip_adapter_info.image_encoder_model_id
image_encoder_model_name = image_encoder_model_id.split("/")[-1].strip()
else:
image_encoder_model_name = CLIP_VISION_MODEL_MAP[self.clip_vision_model]
image_encoder_starter_model = CLIP_VISION_MODEL_MAP[self.clip_vision_model]
image_encoder_model_id = image_encoder_starter_model.source
image_encoder_model_name = image_encoder_starter_model.name
image_encoder_model = self._get_image_encoder(context, image_encoder_model_name)
image_encoder_model = self.get_clip_image_encoder(context, image_encoder_model_id, image_encoder_model_name)
if self.method == "style":
if ip_adapter_info.base == "sd-1":
@@ -147,7 +160,10 @@ class IPAdapterInvocation(BaseInvocation):
),
)
def _get_image_encoder(self, context: InvocationContext, image_encoder_model_name: str) -> AnyModelConfig:
@classmethod
def get_clip_image_encoder(
cls, context: InvocationContext, image_encoder_model_id: str, image_encoder_model_name: str
) -> AnyModelConfig:
image_encoder_models = context.models.search_by_attrs(
name=image_encoder_model_name, base=BaseModelType.Any, type=ModelType.CLIPVision
)
@@ -159,7 +175,11 @@ class IPAdapterInvocation(BaseInvocation):
)
installer = context._services.model_manager.install
job = installer.heuristic_import(f"InvokeAI/{image_encoder_model_name}")
# Note: We hard-code the type to CLIPVision here because if the model contains both a CLIPVision and a
# CLIPText model, the probe may treat it as a CLIPText model.
job = installer.heuristic_import(
image_encoder_model_id, ModelRecordChanges(name=image_encoder_model_name, type=ModelType.CLIPVision)
)
installer.wait_for_job(job, timeout=600) # Wait for up to 10 minutes
image_encoder_models = context.models.search_by_attrs(
name=image_encoder_model_name, base=BaseModelType.Any, type=ModelType.CLIPVision

View File

@@ -5,6 +5,7 @@ from PIL import Image
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, InvocationContext, invocation
from invokeai.app.invocations.fields import ImageField, InputField, TensorField, WithBoard, WithMetadata
from invokeai.app.invocations.primitives import ImageOutput, MaskOutput
from invokeai.backend.image_util.util import pil_to_np
@invocation(
@@ -148,3 +149,55 @@ class MaskTensorToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
mask_pil = Image.fromarray(mask_np, mode="L")
image_dto = context.images.save(image=mask_pil)
return ImageOutput.build(image_dto)
@invocation(
"apply_tensor_mask_to_image",
title="Apply Tensor Mask to Image",
tags=["mask"],
category="mask",
version="1.0.0",
)
class ApplyMaskTensorToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Applies a tensor mask to an image.
The image is converted to RGBA and the mask is applied to the alpha channel."""
mask: TensorField = InputField(description="The mask tensor to apply.")
image: ImageField = InputField(description="The image to apply the mask to.")
invert: bool = InputField(default=False, description="Whether to invert the mask.")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name, mode="RGBA")
mask = context.tensors.load(self.mask.tensor_name)
# Squeeze the channel dimension if it exists.
if mask.dim() == 3:
mask = mask.squeeze(0)
# Ensure that the mask is binary.
if mask.dtype != torch.bool:
mask = mask > 0.5
mask_np = (mask.float() * 255).byte().cpu().numpy().astype(np.uint8)
if self.invert:
mask_np = 255 - mask_np
# Apply the mask only to the alpha channel where the original alpha is non-zero. This preserves the original
# image's transparency - else the transparent regions would end up as opaque black.
# Separate the image into R, G, B, and A channels
image_np = pil_to_np(image)
r, g, b, a = np.split(image_np, 4, axis=-1)
# Apply the mask to the alpha channel
new_alpha = np.where(a.squeeze() > 0, mask_np, a.squeeze())
# Stack the RGB channels with the modified alpha
masked_image_np = np.dstack([r.squeeze(), g.squeeze(), b.squeeze(), new_alpha])
# Convert back to an image (RGBA)
masked_image = Image.fromarray(masked_image_np.astype(np.uint8), "RGBA")
image_dto = context.images.save(image=masked_image)
return ImageOutput.build(image_dto)
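The alpha-preservation logic above can be illustrated standalone (a minimal numpy sketch with made-up values):

```python
import numpy as np

a = np.array([[0, 255], [255, 255]], dtype=np.uint8)      # original alpha channel
mask_np = np.array([[255, 0], [255, 0]], dtype=np.uint8)  # binary mask scaled to 0/255

# Where the original alpha is 0 (transparent), keep it 0; elsewhere use the mask.
new_alpha = np.where(a > 0, mask_np, a)
print(new_alpha)  # [[  0   0]
                  #  [255   0]]
```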

View File

@@ -40,7 +40,7 @@ class IPAdapterMetadataField(BaseModel):
image: ImageField = Field(description="The IP-Adapter image prompt.")
ip_adapter_model: ModelIdentifierField = Field(description="The IP-Adapter model.")
clip_vision_model: Literal["ViT-H", "ViT-G"] = Field(description="The CLIP Vision model")
clip_vision_model: Literal["ViT-L", "ViT-H", "ViT-G"] = Field(description="The CLIP Vision model")
method: Literal["full", "style", "composition"] = Field(description="Method to apply IP Weights with")
weight: Union[float, list[float]] = Field(description="The weight given to the IP-Adapter")
begin_step_percent: float = Field(description="When the IP-Adapter is first applied (% of total steps)")

View File

@@ -1,5 +1,5 @@
import copy
from typing import List, Literal, Optional
from typing import List, Optional
from pydantic import BaseModel, Field
@@ -13,11 +13,9 @@ from invokeai.app.invocations.baseinvocation import (
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.shared.models import FreeUConfig
from invokeai.backend.flux.util import max_seq_lengths
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
CheckpointConfigBase,
ModelType,
SubModelType,
)
@@ -139,78 +137,6 @@ class ModelIdentifierInvocation(BaseInvocation):
return ModelIdentifierOutput(model=self.model)
@invocation_output("flux_model_loader_output")
class FluxModelLoaderOutput(BaseInvocationOutput):
"""Flux base model loader output"""
transformer: TransformerField = OutputField(description=FieldDescriptions.transformer, title="Transformer")
clip: CLIPField = OutputField(description=FieldDescriptions.clip, title="CLIP")
t5_encoder: T5EncoderField = OutputField(description=FieldDescriptions.t5_encoder, title="T5 Encoder")
vae: VAEField = OutputField(description=FieldDescriptions.vae, title="VAE")
max_seq_len: Literal[256, 512] = OutputField(
description="The max sequence length to used for the T5 encoder. (256 for schnell transformer, 512 for dev transformer)",
title="Max Seq Length",
)
@invocation(
"flux_model_loader",
title="Flux Main Model",
tags=["model", "flux"],
category="model",
version="1.0.4",
classification=Classification.Prototype,
)
class FluxModelLoaderInvocation(BaseInvocation):
"""Loads a flux base model, outputting its submodels."""
model: ModelIdentifierField = InputField(
description=FieldDescriptions.flux_model,
ui_type=UIType.FluxMainModel,
input=Input.Direct,
)
t5_encoder_model: ModelIdentifierField = InputField(
description=FieldDescriptions.t5_encoder, ui_type=UIType.T5EncoderModel, input=Input.Direct, title="T5 Encoder"
)
clip_embed_model: ModelIdentifierField = InputField(
description=FieldDescriptions.clip_embed_model,
ui_type=UIType.CLIPEmbedModel,
input=Input.Direct,
title="CLIP Embed",
)
vae_model: ModelIdentifierField = InputField(
description=FieldDescriptions.vae_model, ui_type=UIType.FluxVAEModel, title="VAE"
)
def invoke(self, context: InvocationContext) -> FluxModelLoaderOutput:
for key in [self.model.key, self.t5_encoder_model.key, self.clip_embed_model.key, self.vae_model.key]:
if not context.models.exists(key):
raise ValueError(f"Unknown model: {key}")
transformer = self.model.model_copy(update={"submodel_type": SubModelType.Transformer})
vae = self.vae_model.model_copy(update={"submodel_type": SubModelType.VAE})
tokenizer = self.clip_embed_model.model_copy(update={"submodel_type": SubModelType.Tokenizer})
clip_encoder = self.clip_embed_model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
tokenizer2 = self.t5_encoder_model.model_copy(update={"submodel_type": SubModelType.Tokenizer2})
t5_encoder = self.t5_encoder_model.model_copy(update={"submodel_type": SubModelType.TextEncoder2})
transformer_config = context.models.get_config(transformer)
assert isinstance(transformer_config, CheckpointConfigBase)
return FluxModelLoaderOutput(
transformer=TransformerField(transformer=transformer, loras=[]),
clip=CLIPField(tokenizer=tokenizer, text_encoder=clip_encoder, loras=[], skipped_layers=0),
t5_encoder=T5EncoderField(tokenizer=tokenizer2, text_encoder=t5_encoder),
vae=VAEField(vae=vae),
max_seq_len=max_seq_lengths[transformer_config.config_path],
)
@invocation(
"main_model_loader",
title="Main Model",

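Editor's note: the loader removed above fans a single model identifier out into submodel references via pydantic's model_copy(update=...), so each field points at the same installed model but a different submodel. A reduced sketch of that pattern with plain pydantic v2 (the Identifier class here is a hypothetical stand-in for ModelIdentifierField):

from enum import Enum
from typing import Optional
from pydantic import BaseModel

class SubModel(str, Enum):
    TRANSFORMER = "transformer"
    VAE = "vae"

class Identifier(BaseModel):
    key: str
    submodel_type: Optional[SubModel] = None

base = Identifier(key="flux-dev")
transformer = base.model_copy(update={"submodel_type": SubModel.TRANSFORMER})
vae = base.model_copy(update={"submodel_type": SubModel.VAE})
assert base.submodel_type is None  # the original is untouched; the copies diverge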
View File

@@ -18,6 +18,7 @@ from invokeai.app.invocations.fields import (
InputField,
LatentsField,
OutputField,
SD3ConditioningField,
TensorField,
UIComponent,
)
@@ -426,6 +427,17 @@ class FluxConditioningOutput(BaseInvocationOutput):
return cls(conditioning=FluxConditioningField(conditioning_name=conditioning_name))
@invocation_output("sd3_conditioning_output")
class SD3ConditioningOutput(BaseInvocationOutput):
"""Base class for nodes that output a single SD3 conditioning tensor"""
conditioning: SD3ConditioningField = OutputField(description=FieldDescriptions.cond)
@classmethod
def build(cls, conditioning_name: str) -> "SD3ConditioningOutput":
return cls(conditioning=SD3ConditioningField(conditioning_name=conditioning_name))
@invocation_output("conditioning_output")
class ConditioningOutput(BaseInvocationOutput):
"""Base class for nodes that output a single conditioning tensor"""

View File

@@ -0,0 +1,260 @@
from typing import Callable, Tuple
import torch
from diffusers.models.transformers.transformer_sd3 import SD3Transformer2DModel
from diffusers.schedulers.scheduling_flow_match_euler_discrete import FlowMatchEulerDiscreteScheduler
from tqdm import tqdm
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
InputField,
SD3ConditioningField,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import TransformerField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.invocations.sd3_text_encoder import SD3_T5_MAX_SEQ_LEN
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import BaseModelType
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import SD3ConditioningInfo
from invokeai.backend.util.devices import TorchDevice
@invocation(
"sd3_denoise",
title="SD3 Denoise",
tags=["image", "sd3"],
category="image",
version="1.0.0",
classification=Classification.Prototype,
)
class SD3DenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Run denoising process with a SD3 model."""
transformer: TransformerField = InputField(
description=FieldDescriptions.sd3_model,
input=Input.Connection,
title="Transformer",
)
positive_conditioning: SD3ConditioningField = InputField(
description=FieldDescriptions.positive_cond, input=Input.Connection
)
negative_conditioning: SD3ConditioningField = InputField(
description=FieldDescriptions.negative_cond, input=Input.Connection
)
cfg_scale: float | list[float] = InputField(default=3.5, description=FieldDescriptions.cfg_scale, title="CFG Scale")
width: int = InputField(default=1024, multiple_of=16, description="Width of the generated image.")
height: int = InputField(default=1024, multiple_of=16, description="Height of the generated image.")
steps: int = InputField(default=10, gt=0, description=FieldDescriptions.steps)
seed: int = InputField(default=0, description="Randomness seed for reproducibility.")
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = self._run_diffusion(context)
latents = latents.detach().to("cpu")
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)
def _load_text_conditioning(
self,
context: InvocationContext,
conditioning_name: str,
joint_attention_dim: int,
dtype: torch.dtype,
device: torch.device,
) -> Tuple[torch.Tensor, torch.Tensor]:
# Load the conditioning data.
cond_data = context.conditioning.load(conditioning_name)
assert len(cond_data.conditionings) == 1
sd3_conditioning = cond_data.conditionings[0]
assert isinstance(sd3_conditioning, SD3ConditioningInfo)
sd3_conditioning = sd3_conditioning.to(dtype=dtype, device=device)
t5_embeds = sd3_conditioning.t5_embeds
if t5_embeds is None:
t5_embeds = torch.zeros(
(1, SD3_T5_MAX_SEQ_LEN, joint_attention_dim),
device=device,
dtype=dtype,
)
clip_prompt_embeds = torch.cat([sd3_conditioning.clip_l_embeds, sd3_conditioning.clip_g_embeds], dim=-1)
clip_prompt_embeds = torch.nn.functional.pad(
clip_prompt_embeds, (0, t5_embeds.shape[-1] - clip_prompt_embeds.shape[-1])
)
prompt_embeds = torch.cat([clip_prompt_embeds, t5_embeds], dim=-2)
pooled_prompt_embeds = torch.cat(
[sd3_conditioning.clip_l_pooled_embeds, sd3_conditioning.clip_g_pooled_embeds], dim=-1
)
return prompt_embeds, pooled_prompt_embeds
def _get_noise(
self,
num_samples: int,
num_channels_latents: int,
height: int,
width: int,
dtype: torch.dtype,
device: torch.device,
seed: int,
) -> torch.Tensor:
# We always generate noise on the same device and dtype then cast to ensure consistency across devices/dtypes.
rand_device = "cpu"
rand_dtype = torch.float16
return torch.randn(
num_samples,
num_channels_latents,
int(height) // LATENT_SCALE_FACTOR,
int(width) // LATENT_SCALE_FACTOR,
device=rand_device,
dtype=rand_dtype,
generator=torch.Generator(device=rand_device).manual_seed(seed),
).to(device=device, dtype=dtype)
def _prepare_cfg_scale(self, num_timesteps: int) -> list[float]:
"""Prepare the CFG scale list.
Args:
num_timesteps (int): The number of timesteps in the scheduler. Could be different from num_steps depending
on the scheduler used (e.g. higher-order schedulers).
Returns:
list[float]: The CFG scale to apply at each timestep.
"""
if isinstance(self.cfg_scale, float):
cfg_scale = [self.cfg_scale] * num_timesteps
elif isinstance(self.cfg_scale, list):
assert len(self.cfg_scale) == num_timesteps
cfg_scale = self.cfg_scale
else:
raise ValueError(f"Invalid CFG scale type: {type(self.cfg_scale)}")
return cfg_scale
def _run_diffusion(
self,
context: InvocationContext,
):
inference_dtype = TorchDevice.choose_torch_dtype()
device = TorchDevice.choose_torch_device()
transformer_info = context.models.load(self.transformer.transformer)
# Load/process the conditioning data.
# TODO(ryand): Make CFG optional.
do_classifier_free_guidance = True
pos_prompt_embeds, pos_pooled_prompt_embeds = self._load_text_conditioning(
context=context,
conditioning_name=self.positive_conditioning.conditioning_name,
joint_attention_dim=transformer_info.model.config.joint_attention_dim,
dtype=inference_dtype,
device=device,
)
neg_prompt_embeds, neg_pooled_prompt_embeds = self._load_text_conditioning(
context=context,
conditioning_name=self.negative_conditioning.conditioning_name,
joint_attention_dim=transformer_info.model.config.joint_attention_dim,
dtype=inference_dtype,
device=device,
)
# TODO(ryand): Support both sequential and batched CFG inference.
prompt_embeds = torch.cat([neg_prompt_embeds, pos_prompt_embeds], dim=0)
pooled_prompt_embeds = torch.cat([neg_pooled_prompt_embeds, pos_pooled_prompt_embeds], dim=0)
# Prepare the scheduler.
scheduler = FlowMatchEulerDiscreteScheduler()
scheduler.set_timesteps(num_inference_steps=self.steps, device=device)
timesteps = scheduler.timesteps
assert isinstance(timesteps, torch.Tensor)
# Prepare the CFG scale list.
cfg_scale = self._prepare_cfg_scale(len(timesteps))
# Generate initial latent noise.
num_channels_latents = transformer_info.model.config.in_channels
assert isinstance(num_channels_latents, int)
noise = self._get_noise(
num_samples=1,
num_channels_latents=num_channels_latents,
height=self.height,
width=self.width,
dtype=inference_dtype,
device=device,
seed=self.seed,
)
latents: torch.Tensor = noise
total_steps = len(timesteps)
step_callback = self._build_step_callback(context)
step_callback(
PipelineIntermediateState(
step=0,
order=1,
total_steps=total_steps,
timestep=int(timesteps[0]),
latents=latents,
),
)
with transformer_info.model_on_device() as (cached_weights, transformer):
assert isinstance(transformer, SD3Transformer2DModel)
# 6. Denoising loop
for step_idx, t in tqdm(list(enumerate(timesteps))):
# Expand the latents if we are doing CFG.
latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
# Expand the timestep to match the latent model input.
timestep = t.expand(latent_model_input.shape[0])
noise_pred = transformer(
hidden_states=latent_model_input,
timestep=timestep,
encoder_hidden_states=prompt_embeds,
pooled_projections=pooled_prompt_embeds,
joint_attention_kwargs=None,
return_dict=False,
)[0]
# Apply CFG.
if do_classifier_free_guidance:
noise_pred_uncond, noise_pred_cond = noise_pred.chunk(2)
noise_pred = noise_pred_uncond + cfg_scale[step_idx] * (noise_pred_cond - noise_pred_uncond)
# Compute the previous noisy sample x_t -> x_t-1.
latents_dtype = latents.dtype
latents = scheduler.step(model_output=noise_pred, timestep=t, sample=latents, return_dict=False)[0]
# TODO(ryand): This MPS dtype handling was copied from diffusers, I haven't tested to see if it's
# needed.
if latents.dtype != latents_dtype:
if torch.backends.mps.is_available():
# some platforms (eg. apple mps) misbehave due to a pytorch bug: https://github.com/pytorch/pytorch/pull/99272
latents = latents.to(latents_dtype)
step_callback(
PipelineIntermediateState(
step=step_idx + 1,
order=1,
total_steps=total_steps,
timestep=int(t),
latents=latents,
),
)
return latents
def _build_step_callback(self, context: InvocationContext) -> Callable[[PipelineIntermediateState], None]:
def step_callback(state: PipelineIntermediateState) -> None:
context.util.sd_step_callback(state, BaseModelType.StableDiffusion3)
return step_callback
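Editor's note: the denoising loop batches negative and positive conditioning, runs one transformer forward per step, then mixes the two predictions with the standard classifier-free guidance formula. A standalone sketch of that mixing step, assuming (as in the loop above) that the prediction stacks unconditional then conditional along the batch dimension:

import torch

def apply_cfg(noise_pred: torch.Tensor, cfg_scale: float) -> torch.Tensor:
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one by cfg_scale."""
    noise_pred_uncond, noise_pred_cond = noise_pred.chunk(2)
    return noise_pred_uncond + cfg_scale * (noise_pred_cond - noise_pred_uncond)

A cfg_scale of 1.0 reduces this to the conditional prediction alone, which is why the FLUX variant later in this diff skips the negative pass entirely in that case.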

View File

@@ -0,0 +1,73 @@
from contextlib import nullcontext
import torch
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from einops import rearrange
from PIL import Image
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
InputField,
LatentsField,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion.extensions.seamless import SeamlessExt
from invokeai.backend.util.devices import TorchDevice
@invocation(
"sd3_l2i",
title="SD3 Latents to Image",
tags=["latents", "image", "vae", "l2i", "sd3"],
category="latents",
version="1.3.0",
)
class SD3LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents."""
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
vae: VAEField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.tensors.load(self.latents.latents_name)
vae_info = context.models.load(self.vae.vae)
assert isinstance(vae_info.model, (AutoencoderKL))
with SeamlessExt.static_patch_model(vae_info.model, self.vae.seamless_axes), vae_info as vae:
assert isinstance(vae, (AutoencoderKL))
latents = latents.to(vae.device)
vae.disable_tiling()
tiling_context = nullcontext()
# clear memory as vae decode can request a lot
TorchDevice.empty_cache()
with torch.inference_mode(), tiling_context:
# copied from diffusers pipeline
latents = latents / vae.config.scaling_factor
img = vae.decode(latents, return_dict=False)[0]
img = img.clamp(-1, 1)
img = rearrange(img[0], "c h w -> h w c") # noqa: F821
img_pil = Image.fromarray((127.5 * (img + 1.0)).byte().cpu().numpy())
TorchDevice.empty_cache()
image_dto = context.images.save(image=img_pil)
return ImageOutput.build(image_dto)
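Editor's note: two scalings happen back to back in the decode path above — latents are divided by the VAE's scaling_factor before decoding, and the decoder's roughly [-1, 1] output is mapped to 8-bit pixels. A sketch of just the tensor-to-PIL conversion, assuming a decoded tensor of shape (C, H, W) clamped to [-1, 1]:

import torch
from einops import rearrange
from PIL import Image

def decoded_tensor_to_pil(img: torch.Tensor) -> Image.Image:
    """Map a (C, H, W) tensor in [-1, 1] to an 8-bit RGB PIL image."""
    img = rearrange(img, "c h w -> h w c")        # channels-last for PIL
    img = (127.5 * (img + 1.0)).clamp(0, 255)     # [-1, 1] -> [0, 255]
    return Image.fromarray(img.byte().cpu().numpy())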

View File

@@ -0,0 +1,108 @@
from typing import Optional
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.invocations.model import CLIPField, ModelIdentifierField, T5EncoderField, TransformerField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import SubModelType
@invocation_output("sd3_model_loader_output")
class Sd3ModelLoaderOutput(BaseInvocationOutput):
"""SD3 base model loader output."""
transformer: TransformerField = OutputField(description=FieldDescriptions.transformer, title="Transformer")
clip_l: CLIPField = OutputField(description=FieldDescriptions.clip, title="CLIP L")
clip_g: CLIPField = OutputField(description=FieldDescriptions.clip, title="CLIP G")
t5_encoder: T5EncoderField = OutputField(description=FieldDescriptions.t5_encoder, title="T5 Encoder")
vae: VAEField = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation(
"sd3_model_loader",
title="SD3 Main Model",
tags=["model", "sd3"],
category="model",
version="1.0.0",
classification=Classification.Prototype,
)
class Sd3ModelLoaderInvocation(BaseInvocation):
"""Loads a SD3 base model, outputting its submodels."""
model: ModelIdentifierField = InputField(
description=FieldDescriptions.sd3_model,
ui_type=UIType.SD3MainModel,
input=Input.Direct,
)
t5_encoder_model: Optional[ModelIdentifierField] = InputField(
description=FieldDescriptions.t5_encoder,
ui_type=UIType.T5EncoderModel,
input=Input.Direct,
title="T5 Encoder",
default=None,
)
clip_l_model: Optional[ModelIdentifierField] = InputField(
description=FieldDescriptions.clip_embed_model,
ui_type=UIType.CLIPLEmbedModel,
input=Input.Direct,
title="CLIP L Encoder",
default=None,
)
clip_g_model: Optional[ModelIdentifierField] = InputField(
description=FieldDescriptions.clip_g_model,
ui_type=UIType.CLIPGEmbedModel,
input=Input.Direct,
title="CLIP G Encoder",
default=None,
)
vae_model: Optional[ModelIdentifierField] = InputField(
description=FieldDescriptions.vae_model, ui_type=UIType.VAEModel, title="VAE", default=None
)
def invoke(self, context: InvocationContext) -> Sd3ModelLoaderOutput:
transformer = self.model.model_copy(update={"submodel_type": SubModelType.Transformer})
vae = (
self.vae_model.model_copy(update={"submodel_type": SubModelType.VAE})
if self.vae_model
else self.model.model_copy(update={"submodel_type": SubModelType.VAE})
)
tokenizer_l = self.model.model_copy(update={"submodel_type": SubModelType.Tokenizer})
clip_encoder_l = (
self.clip_l_model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
if self.clip_l_model
else self.model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
)
tokenizer_g = self.model.model_copy(update={"submodel_type": SubModelType.Tokenizer2})
clip_encoder_g = (
self.clip_g_model.model_copy(update={"submodel_type": SubModelType.TextEncoder2})
if self.clip_g_model
else self.model.model_copy(update={"submodel_type": SubModelType.TextEncoder2})
)
tokenizer_t5 = (
self.t5_encoder_model.model_copy(update={"submodel_type": SubModelType.Tokenizer3})
if self.t5_encoder_model
else self.model.model_copy(update={"submodel_type": SubModelType.Tokenizer3})
)
t5_encoder = (
self.t5_encoder_model.model_copy(update={"submodel_type": SubModelType.TextEncoder3})
if self.t5_encoder_model
else self.model.model_copy(update={"submodel_type": SubModelType.TextEncoder3})
)
return Sd3ModelLoaderOutput(
transformer=TransformerField(transformer=transformer, loras=[]),
clip_l=CLIPField(tokenizer=tokenizer_l, text_encoder=clip_encoder_l, loras=[], skipped_layers=0),
clip_g=CLIPField(tokenizer=tokenizer_g, text_encoder=clip_encoder_g, loras=[], skipped_layers=0),
t5_encoder=T5EncoderField(tokenizer=tokenizer_t5, text_encoder=t5_encoder),
vae=VAEField(vae=vae),
)
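Editor's note: each optional input in this loader falls back to the corresponding submodel of the main checkpoint when left unconnected. The repeated ternaries reduce to a single helper; a hedged sketch (the pick helper is illustrative, not in the codebase):

def pick(override, base, submodel_type):
    """Use the explicitly supplied model if present, else the main model,
    retargeted at the requested submodel."""
    source = override if override is not None else base
    return source.model_copy(update={"submodel_type": submodel_type})

# e.g. vae = pick(self.vae_model, self.model, SubModelType.VAE)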

View File

@@ -0,0 +1,199 @@
from contextlib import ExitStack
from typing import Iterator, Tuple
import torch
from transformers import (
CLIPTextModel,
CLIPTextModelWithProjection,
CLIPTokenizer,
T5EncoderModel,
T5Tokenizer,
T5TokenizerFast,
)
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField
from invokeai.app.invocations.model import CLIPField, T5EncoderField
from invokeai.app.invocations.primitives import SD3ConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.lora.conversions.flux_lora_constants import FLUX_LORA_CLIP_PREFIX
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.lora.lora_patcher import LoRAPatcher
from invokeai.backend.model_manager.config import ModelFormat
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData, SD3ConditioningInfo
# The SD3 T5 max sequence length, set based on the default in diffusers.
SD3_T5_MAX_SEQ_LEN = 256
@invocation(
"sd3_text_encoder",
title="SD3 Text Encoding",
tags=["prompt", "conditioning", "sd3"],
category="conditioning",
version="1.0.0",
classification=Classification.Prototype,
)
class Sd3TextEncoderInvocation(BaseInvocation):
"""Encodes and preps a prompt for a SD3 image."""
clip_l: CLIPField = InputField(
title="CLIP L",
description=FieldDescriptions.clip,
input=Input.Connection,
)
clip_g: CLIPField = InputField(
title="CLIP G",
description=FieldDescriptions.clip,
input=Input.Connection,
)
# The SD3 models were trained with text encoder dropout, so the T5 encoder can be omitted to save time/memory.
t5_encoder: T5EncoderField | None = InputField(
title="T5Encoder",
default=None,
description=FieldDescriptions.t5_encoder,
input=Input.Connection,
)
prompt: str = InputField(description="Text prompt to encode.")
@torch.no_grad()
def invoke(self, context: InvocationContext) -> SD3ConditioningOutput:
# Note: The text encoding models are run in separate functions to ensure that all model references are locally
# scoped. This ensures that earlier models can be freed and gc'd before loading later models (if necessary).
clip_l_embeddings, clip_l_pooled_embeddings = self._clip_encode(context, self.clip_l)
clip_g_embeddings, clip_g_pooled_embeddings = self._clip_encode(context, self.clip_g)
t5_embeddings: torch.Tensor | None = None
if self.t5_encoder is not None:
t5_embeddings = self._t5_encode(context, SD3_T5_MAX_SEQ_LEN)
conditioning_data = ConditioningFieldData(
conditionings=[
SD3ConditioningInfo(
clip_l_embeds=clip_l_embeddings,
clip_l_pooled_embeds=clip_l_pooled_embeddings,
clip_g_embeds=clip_g_embeddings,
clip_g_pooled_embeds=clip_g_pooled_embeddings,
t5_embeds=t5_embeddings,
)
]
)
conditioning_name = context.conditioning.save(conditioning_data)
return SD3ConditioningOutput.build(conditioning_name)
def _t5_encode(self, context: InvocationContext, max_seq_len: int) -> torch.Tensor:
assert self.t5_encoder is not None
t5_tokenizer_info = context.models.load(self.t5_encoder.tokenizer)
t5_text_encoder_info = context.models.load(self.t5_encoder.text_encoder)
prompt = [self.prompt]
with (
t5_text_encoder_info as t5_text_encoder,
t5_tokenizer_info as t5_tokenizer,
):
assert isinstance(t5_text_encoder, T5EncoderModel)
assert isinstance(t5_tokenizer, (T5Tokenizer, T5TokenizerFast))
text_inputs = t5_tokenizer(
prompt,
padding="max_length",
max_length=max_seq_len,
truncation=True,
add_special_tokens=True,
return_tensors="pt",
)
text_input_ids = text_inputs.input_ids
untruncated_ids = t5_tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
assert isinstance(text_input_ids, torch.Tensor)
assert isinstance(untruncated_ids, torch.Tensor)
if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
text_input_ids, untruncated_ids
):
removed_text = t5_tokenizer.batch_decode(untruncated_ids[:, max_seq_len - 1 : -1])
context.logger.warning(
"The following part of your input was truncated because `max_sequence_length` is set to "
f" {max_seq_len} tokens: {removed_text}"
)
prompt_embeds = t5_text_encoder(text_input_ids.to(t5_text_encoder.device))[0]
assert isinstance(prompt_embeds, torch.Tensor)
return prompt_embeds
def _clip_encode(
self, context: InvocationContext, clip_model: CLIPField, tokenizer_max_length: int = 77
) -> Tuple[torch.Tensor, torch.Tensor]:
clip_tokenizer_info = context.models.load(clip_model.tokenizer)
clip_text_encoder_info = context.models.load(clip_model.text_encoder)
prompt = [self.prompt]
with (
clip_text_encoder_info.model_on_device() as (cached_weights, clip_text_encoder),
clip_tokenizer_info as clip_tokenizer,
ExitStack() as exit_stack,
):
assert isinstance(clip_text_encoder, (CLIPTextModel, CLIPTextModelWithProjection))
assert isinstance(clip_tokenizer, CLIPTokenizer)
clip_text_encoder_config = clip_text_encoder_info.config
assert clip_text_encoder_config is not None
# Apply LoRA models to the CLIP encoder.
# Note: We apply the LoRA after the text encoder has been moved to its target device for faster patching.
if clip_text_encoder_config.format in [ModelFormat.Diffusers]:
# The model is non-quantized, so we can apply the LoRA weights directly into the model.
exit_stack.enter_context(
LoRAPatcher.apply_lora_patches(
model=clip_text_encoder,
patches=self._clip_lora_iterator(context, clip_model),
prefix=FLUX_LORA_CLIP_PREFIX,
cached_weights=cached_weights,
)
)
else:
# There are currently no supported CLIP quantized models. Add support here if needed.
raise ValueError(f"Unsupported model format: {clip_text_encoder_config.format}")
clip_text_encoder = clip_text_encoder.eval().requires_grad_(False)
text_inputs = clip_tokenizer(
prompt,
padding="max_length",
max_length=tokenizer_max_length,
truncation=True,
return_tensors="pt",
)
text_input_ids = text_inputs.input_ids
untruncated_ids = clip_tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
assert isinstance(text_input_ids, torch.Tensor)
assert isinstance(untruncated_ids, torch.Tensor)
if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
text_input_ids, untruncated_ids
):
removed_text = clip_tokenizer.batch_decode(untruncated_ids[:, tokenizer_max_length - 1 : -1])
context.logger.warning(
"The following part of your input was truncated because CLIP can only handle sequences up to"
f" {tokenizer_max_length} tokens: {removed_text}"
)
prompt_embeds = clip_text_encoder(
input_ids=text_input_ids.to(clip_text_encoder.device), output_hidden_states=True
)
pooled_prompt_embeds = prompt_embeds[0]
prompt_embeds = prompt_embeds.hidden_states[-2]
return prompt_embeds, pooled_prompt_embeds
def _clip_lora_iterator(
self, context: InvocationContext, clip_model: CLIPField
) -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in clip_model.loras:
lora_info = context.models.load(lora.lora)
assert isinstance(lora_info.model, LoRAModelRaw)
yield (lora_info.model, lora.weight)
del lora_info
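Editor's note: it is worth flagging which CLIP outputs the encoder keeps — the pooled embedding comes from the first element of the encoder output, while the sequence embedding is taken from the penultimate hidden state rather than the final layer, matching the diffusers SD3 pipeline. A minimal sketch with transformers (the checkpoint name is illustrative):

import torch
from transformers import CLIPTextModelWithProjection, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
encoder = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-large-patch14")

with torch.no_grad():
    ids = tokenizer(["a photo of a cat"], padding="max_length", max_length=77,
                    truncation=True, return_tensors="pt").input_ids
    out = encoder(input_ids=ids, output_hidden_states=True)

pooled = out[0]                    # pooled/projected embedding, shape (1, dim)
sequence = out.hidden_states[-2]   # penultimate layer, shape (1, 77, dim)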

View File

@@ -1,9 +1,11 @@
from enum import Enum
from pathlib import Path
from typing import Literal
import numpy as np
import torch
from PIL import Image
from pydantic import BaseModel, Field
from transformers import AutoModelForMaskGeneration, AutoProcessor
from transformers.models.sam import SamModel
from transformers.models.sam.processing_sam import SamProcessor
@@ -23,12 +25,31 @@ SEGMENT_ANYTHING_MODEL_IDS: dict[SegmentAnythingModelKey, str] = {
}
class SAMPointLabel(Enum):
negative = -1
neutral = 0
positive = 1
class SAMPoint(BaseModel):
x: int = Field(..., description="The x-coordinate of the point")
y: int = Field(..., description="The y-coordinate of the point")
label: SAMPointLabel = Field(..., description="The label of the point")
class SAMPointsField(BaseModel):
points: list[SAMPoint] = Field(..., description="The points of the object")
def to_list(self) -> list[list[int]]:
return [[point.x, point.y, point.label.value] for point in self.points]
@invocation(
"segment_anything",
title="Segment Anything",
tags=["prompt", "segmentation"],
category="segmentation",
version="1.0.0",
version="1.1.0",
)
class SegmentAnythingInvocation(BaseInvocation):
"""Runs a Segment Anything Model."""
@@ -40,7 +61,13 @@ class SegmentAnythingInvocation(BaseInvocation):
model: SegmentAnythingModelKey = InputField(description="The Segment Anything model to use.")
image: ImageField = InputField(description="The image to segment.")
bounding_boxes: list[BoundingBoxField] = InputField(description="The bounding boxes to prompt the SAM model with.")
bounding_boxes: list[BoundingBoxField] | None = InputField(
default=None, description="The bounding boxes to prompt the SAM model with."
)
point_lists: list[SAMPointsField] | None = InputField(
default=None,
description="The list of point lists to prompt the SAM model with. Each list of points represents a single object.",
)
apply_polygon_refinement: bool = InputField(
description="Whether to apply polygon refinement to the masks. This will smooth the edges of the masks slightly and ensure that each mask consists of a single closed polygon (before merging).",
default=True,
@@ -55,7 +82,12 @@ class SegmentAnythingInvocation(BaseInvocation):
# The models expect a 3-channel RGB image.
image_pil = context.images.get_pil(self.image.image_name, mode="RGB")
if len(self.bounding_boxes) == 0:
if self.point_lists is not None and self.bounding_boxes is not None:
raise ValueError("Only one of point_lists or bounding_box can be provided.")
if (not self.bounding_boxes or len(self.bounding_boxes) == 0) and (
not self.point_lists or len(self.point_lists) == 0
):
combined_mask = torch.zeros(image_pil.size[::-1], dtype=torch.bool)
else:
masks = self._segment(context=context, image=image_pil)
@@ -83,14 +115,13 @@ class SegmentAnythingInvocation(BaseInvocation):
assert isinstance(sam_processor, SamProcessor)
return SegmentAnythingPipeline(sam_model=sam_model, sam_processor=sam_processor)
def _segment(
self,
context: InvocationContext,
image: Image.Image,
) -> list[torch.Tensor]:
def _segment(self, context: InvocationContext, image: Image.Image) -> list[torch.Tensor]:
"""Use Segment Anything (SAM) to generate masks given an image + a set of bounding boxes."""
# Convert the bounding boxes to the SAM input format.
sam_bounding_boxes = [[bb.x_min, bb.y_min, bb.x_max, bb.y_max] for bb in self.bounding_boxes]
sam_bounding_boxes = (
[[bb.x_min, bb.y_min, bb.x_max, bb.y_max] for bb in self.bounding_boxes] if self.bounding_boxes else None
)
sam_points = [p.to_list() for p in self.point_lists] if self.point_lists else None
with (
context.models.load_remote_model(
@@ -98,7 +129,7 @@ class SegmentAnythingInvocation(BaseInvocation):
) as sam_pipeline,
):
assert isinstance(sam_pipeline, SegmentAnythingPipeline)
masks = sam_pipeline.segment(image=image, bounding_boxes=sam_bounding_boxes)
masks = sam_pipeline.segment(image=image, bounding_boxes=sam_bounding_boxes, point_lists=sam_points)
masks = self._process_masks(masks)
if self.apply_polygon_refinement:
@@ -141,9 +172,10 @@ class SegmentAnythingInvocation(BaseInvocation):
return masks
def _filter_masks(self, masks: list[torch.Tensor], bounding_boxes: list[BoundingBoxField]) -> list[torch.Tensor]:
def _filter_masks(
self, masks: list[torch.Tensor], bounding_boxes: list[BoundingBoxField] | None
) -> list[torch.Tensor]:
"""Filter the detected masks based on the specified mask filter."""
assert len(masks) == len(bounding_boxes)
if self.mask_filter == "all":
return masks
@@ -151,6 +183,10 @@ class SegmentAnythingInvocation(BaseInvocation):
# Find the largest mask.
return [max(masks, key=lambda x: float(x.sum()))]
elif self.mask_filter == "highest_box_score":
assert (
bounding_boxes is not None
), "Bounding boxes must be provided to use the 'highest_box_score' mask filter."
assert len(masks) == len(bounding_boxes)
# Find the index of the bounding box with the highest score.
# Note that we fall back to -1.0 if the score is None. This is mainly to satisfy the type checker. In most
# cases the scores should all be non-None when using this filtering mode. That being said, -1.0 is a

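Editor's note: the new point prompts are encoded as [x, y, label] triples per object, with labels -1/0/1 for negative/neutral/positive, and the invocation rejects calls that supply both boxes and points. A compact sketch of that validation and conversion, assuming the SAMPoint/SAMPointsField shapes defined above:

def validate_sam_prompts(bounding_boxes, point_lists):
    """Exactly one prompt type (or neither) may be supplied."""
    if bounding_boxes and point_lists:
        raise ValueError("Only one of point_lists or bounding_boxes can be provided.")
    # SAM expects per-object point triples: [[x, y, label], ...]
    points = (
        [[[p.x, p.y, p.label.value] for p in obj.points] for obj in point_lists]
        if point_lists else None
    )
    boxes = (
        [[bb.x_min, bb.y_min, bb.x_max, bb.y_max] for bb in bounding_boxes]
        if bounding_boxes else None
    )
    return boxes, points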
View File

@@ -110,15 +110,26 @@ class DiskImageFileStorage(ImageFileStorageBase):
except Exception as e:
raise ImageFileDeleteException from e
# TODO: make this a bit more flexible for e.g. cloud storage
def get_path(self, image_name: str, thumbnail: bool = False) -> Path:
path = self.__output_folder / image_name
base_folder = self.__thumbnails_folder if thumbnail else self.__output_folder
filename = get_thumbnail_name(image_name) if thumbnail else image_name
if thumbnail:
thumbnail_name = get_thumbnail_name(image_name)
path = self.__thumbnails_folder / thumbnail_name
# Strip any path information from the filename
basename = Path(filename).name
return path
if basename != filename:
raise ValueError("Invalid image name, potential directory traversal detected")
image_path = base_folder / basename
# Ensure the image path is within the base folder to prevent directory traversal
resolved_base = base_folder.resolve()
resolved_image_path = image_path.resolve()
if not resolved_image_path.is_relative_to(resolved_base):
raise ValueError("Image path outside outputs folder, potential directory traversal detected")
return resolved_image_path
def validate_path(self, path: Union[str, Path]) -> bool:
"""Validates the path given for an image or thumbnail."""

View File

@@ -15,6 +15,7 @@ from invokeai.app.util.model_exclude_null import BaseModelExcludeNull
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ClipVariantType,
ControlAdapterDefaultSettings,
MainModelDefaultSettings,
ModelFormat,
@@ -85,7 +86,7 @@ class ModelRecordChanges(BaseModelExcludeNull):
# Checkpoint-specific changes
# TODO(MM2): Should we expose these? Feels footgun-y...
variant: Optional[ModelVariantType] = Field(description="The variant of the model.", default=None)
variant: Optional[ModelVariantType | ClipVariantType] = Field(description="The variant of the model.", default=None)
prediction_type: Optional[SchedulerPredictionType] = Field(
description="The prediction type of the model.", default=None
)

View File

@@ -1,3 +1,4 @@
from copy import deepcopy
from dataclasses import dataclass
from pathlib import Path
from typing import TYPE_CHECKING, Callable, Optional, Union
@@ -221,7 +222,7 @@ class ImagesInterface(InvocationContextInterface):
)
def get_pil(self, image_name: str, mode: IMAGE_MODES | None = None) -> Image:
"""Gets an image as a PIL Image object.
"""Gets an image as a PIL Image object. This method returns a copy of the image.
Args:
image_name: The name of the image to get.
@@ -233,11 +234,15 @@ class ImagesInterface(InvocationContextInterface):
image = self._services.images.get_pil_image(image_name)
if mode and mode != image.mode:
try:
# convert makes a copy!
image = image.convert(mode)
except ValueError:
self._services.logger.warning(
f"Could not convert image from {image.mode} to {mode}. Using original mode instead."
)
else:
# copy the image to prevent the user from modifying the original
image = image.copy()
return image
def get_metadata(self, image_name: str) -> Optional[MetadataField]:
@@ -290,15 +295,15 @@ class TensorsInterface(InvocationContextInterface):
return name
def load(self, name: str) -> Tensor:
"""Loads a tensor by name.
"""Loads a tensor by name. This method returns a copy of the tensor.
Args:
name: The name of the tensor to load.
Returns:
The loaded tensor.
The tensor.
"""
return self._services.tensors.load(name)
return self._services.tensors.load(name).clone()
class ConditioningInterface(InvocationContextInterface):
@@ -316,16 +321,16 @@ class ConditioningInterface(InvocationContextInterface):
return name
def load(self, name: str) -> ConditioningFieldData:
"""Loads conditioning data by name.
"""Loads conditioning data by name. This method returns a copy of the conditioning data.
Args:
name: The name of the conditioning data to load.
Returns:
The loaded conditioning data.
The conditioning data.
"""
return self._services.conditioning.load(name)
return deepcopy(self._services.conditioning.load(name))
class ModelsInterface(InvocationContextInterface):
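Editor's note: all three loaders now hand back copies so a node cannot mutate cached state — PIL's convert already copies, a plain image.copy() covers the no-conversion path, tensors get .clone(), and conditioning data goes through deepcopy. A tiny demonstration of why the tensor clone matters:

import torch

cache = {"t": torch.zeros(3)}

def load_unsafe(name):
    return cache[name]           # caller mutations leak into the cache

def load_safe(name):
    return cache[name].clone()   # caller gets an independent copy

load_unsafe("t")[0] = 1.0
assert cache["t"][0] == 1.0      # cache was corrupted
cache["t"].zero_()
load_safe("t")[0] = 1.0
assert cache["t"][0] == 0.0      # cache untouched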

View File

@@ -0,0 +1,382 @@
{
"name": "SD3.5 Text to Image",
"author": "InvokeAI",
"description": "Sample text to image workflow for Stable Diffusion 3.5",
"version": "1.0.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SD3.5, default",
"notes": "",
"exposedFields": [
{
"nodeId": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"fieldName": "model"
},
{
"nodeId": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"fieldName": "prompt"
}
],
"meta": {
"version": "3.0.0",
"category": "default"
},
"id": "e3a51d6b-8208-4d6d-b187-fcfe8b32934c",
"nodes": [
{
"id": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"type": "invocation",
"data": {
"id": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"type": "sd3_model_loader",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"model": {
"name": "model",
"label": "",
"value": {
"key": "f7b20be9-92a8-4cfb-bca4-6c3b5535c10b",
"hash": "placeholder",
"name": "stable-diffusion-3.5-medium",
"base": "sd-3",
"type": "main"
}
},
"t5_encoder_model": {
"name": "t5_encoder_model",
"label": ""
},
"clip_l_model": {
"name": "clip_l_model",
"label": ""
},
"clip_g_model": {
"name": "clip_g_model",
"label": ""
},
"vae_model": {
"name": "vae_model",
"label": ""
}
}
},
"position": {
"x": -55.58689609637031,
"y": -111.53602444662268
}
},
{
"id": "f7e394ac-6394-4096-abcb-de0d346506b3",
"type": "invocation",
"data": {
"id": "f7e394ac-6394-4096-abcb-de0d346506b3",
"type": "rand_int",
"version": "1.0.1",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": false,
"nodePack": "invokeai",
"inputs": {
"low": {
"name": "low",
"label": "",
"value": 0
},
"high": {
"name": "high",
"label": "",
"value": 2147483647
}
}
},
"position": {
"x": 470.45870147220353,
"y": 350.3141781644303
}
},
{
"id": "9eb72af0-dd9e-4ec5-ad87-d65e3c01f48b",
"type": "invocation",
"data": {
"id": "9eb72af0-dd9e-4ec5-ad87-d65e3c01f48b",
"type": "sd3_l2i",
"version": "1.3.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": false,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
}
}
},
"position": {
"x": 1192.3097009334897,
"y": -366.0994675072209
}
},
{
"id": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"type": "invocation",
"data": {
"id": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"type": "sd3_text_encoder",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"clip_l": {
"name": "clip_l",
"label": ""
},
"clip_g": {
"name": "clip_g",
"label": ""
},
"t5_encoder": {
"name": "t5_encoder",
"label": ""
},
"prompt": {
"name": "prompt",
"label": "",
"value": ""
}
}
},
"position": {
"x": 408.16054647924784,
"y": 65.06415352118786
}
},
{
"id": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"type": "invocation",
"data": {
"id": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"type": "sd3_text_encoder",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"clip_l": {
"name": "clip_l",
"label": ""
},
"clip_g": {
"name": "clip_g",
"label": ""
},
"t5_encoder": {
"name": "t5_encoder",
"label": ""
},
"prompt": {
"name": "prompt",
"label": "",
"value": ""
}
}
},
"position": {
"x": 378.9283412440941,
"y": -302.65777497352553
}
},
{
"id": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"type": "invocation",
"data": {
"id": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"type": "sd3_denoise",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"transformer": {
"name": "transformer",
"label": ""
},
"positive_conditioning": {
"name": "positive_conditioning",
"label": ""
},
"negative_conditioning": {
"name": "negative_conditioning",
"label": ""
},
"cfg_scale": {
"name": "cfg_scale",
"label": "",
"value": 3.5
},
"width": {
"name": "width",
"label": "",
"value": 1024
},
"height": {
"name": "height",
"label": "",
"value": 1024
},
"steps": {
"name": "steps",
"label": "",
"value": 30
},
"seed": {
"name": "seed",
"label": "",
"value": 0
}
}
},
"position": {
"x": 813.7814762740603,
"y": -142.20529727605867
}
}
],
"edges": [
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cvae-9eb72af0-dd9e-4ec5-ad87-d65e3c01f48bvae",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "9eb72af0-dd9e-4ec5-ad87-d65e3c01f48b",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4ct5_encoder-3b4f7f27-cfc0-4373-a009-99c5290d0cd6t5_encoder",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"sourceHandle": "t5_encoder",
"targetHandle": "t5_encoder"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4ct5_encoder-e17d34e7-6ed1-493c-9a85-4fcd291cb084t5_encoder",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"sourceHandle": "t5_encoder",
"targetHandle": "t5_encoder"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cclip_g-3b4f7f27-cfc0-4373-a009-99c5290d0cd6clip_g",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"sourceHandle": "clip_g",
"targetHandle": "clip_g"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cclip_g-e17d34e7-6ed1-493c-9a85-4fcd291cb084clip_g",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"sourceHandle": "clip_g",
"targetHandle": "clip_g"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cclip_l-3b4f7f27-cfc0-4373-a009-99c5290d0cd6clip_l",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"sourceHandle": "clip_l",
"targetHandle": "clip_l"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cclip_l-e17d34e7-6ed1-493c-9a85-4fcd291cb084clip_l",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"sourceHandle": "clip_l",
"targetHandle": "clip_l"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4ctransformer-c7539f7b-7ac5-49b9-93eb-87ede611409ftransformer",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"sourceHandle": "transformer",
"targetHandle": "transformer"
},
{
"id": "reactflow__edge-f7e394ac-6394-4096-abcb-de0d346506b3value-c7539f7b-7ac5-49b9-93eb-87ede611409fseed",
"type": "default",
"source": "f7e394ac-6394-4096-abcb-de0d346506b3",
"target": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"sourceHandle": "value",
"targetHandle": "seed"
},
{
"id": "reactflow__edge-c7539f7b-7ac5-49b9-93eb-87ede611409flatents-9eb72af0-dd9e-4ec5-ad87-d65e3c01f48blatents",
"type": "default",
"source": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"target": "9eb72af0-dd9e-4ec5-ad87-d65e3c01f48b",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-e17d34e7-6ed1-493c-9a85-4fcd291cb084conditioning-c7539f7b-7ac5-49b9-93eb-87ede611409fpositive_conditioning",
"type": "default",
"source": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"target": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"sourceHandle": "conditioning",
"targetHandle": "positive_conditioning"
},
{
"id": "reactflow__edge-3b4f7f27-cfc0-4373-a009-99c5290d0cd6conditioning-c7539f7b-7ac5-49b9-93eb-87ede611409fnegative_conditioning",
"type": "default",
"source": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"target": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"sourceHandle": "conditioning",
"targetHandle": "negative_conditioning"
}
]
}

View File

@@ -34,6 +34,25 @@ SD1_5_LATENT_RGB_FACTORS = [
[-0.1307, -0.1874, -0.7445], # L4
]
SD3_5_LATENT_RGB_FACTORS = [
[-0.05240681, 0.03251581, 0.0749016],
[-0.0580572, 0.00759826, 0.05729818],
[0.16144888, 0.01270368, -0.03768577],
[0.14418615, 0.08460266, 0.15941818],
[0.04894035, 0.0056485, -0.06686988],
[0.05187166, 0.19222395, 0.06261094],
[0.1539433, 0.04818359, 0.07103094],
[-0.08601796, 0.09013458, 0.10893912],
[-0.12398469, -0.06766567, 0.0033688],
[-0.0439737, 0.07825329, 0.02258823],
[0.03101129, 0.06382551, 0.07753657],
[-0.01315361, 0.08554491, -0.08772475],
[0.06464487, 0.05914605, 0.13262741],
[-0.07863674, -0.02261737, -0.12761454],
[-0.09923835, -0.08010759, -0.06264447],
[-0.03392309, -0.0804029, -0.06078822],
]
FLUX_LATENT_RGB_FACTORS = [
[-0.0412, 0.0149, 0.0521],
[0.0056, 0.0291, 0.0768],
@@ -110,6 +129,9 @@ def stable_diffusion_step_callback(
sdxl_latent_rgb_factors = torch.tensor(SDXL_LATENT_RGB_FACTORS, dtype=sample.dtype, device=sample.device)
sdxl_smooth_matrix = torch.tensor(SDXL_SMOOTH_MATRIX, dtype=sample.dtype, device=sample.device)
image = sample_to_lowres_estimated_image(sample, sdxl_latent_rgb_factors, sdxl_smooth_matrix)
elif base_model == BaseModelType.StableDiffusion3:
sd3_latent_rgb_factors = torch.tensor(SD3_5_LATENT_RGB_FACTORS, dtype=sample.dtype, device=sample.device)
image = sample_to_lowres_estimated_image(sample, sd3_latent_rgb_factors)
else:
v1_5_latent_rgb_factors = torch.tensor(SD1_5_LATENT_RGB_FACTORS, dtype=sample.dtype, device=sample.device)
image = sample_to_lowres_estimated_image(sample, v1_5_latent_rgb_factors)
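Editor's note: the factor tables are a cheap stand-in for a full VAE decode — each latent channel contributes a fixed RGB vector, so a low-res preview is a single contraction over the channel dimension. A sketch of the projection, assuming latents of shape (C, H, W) and a (C, 3) factor matrix as in the tables above (the actual sample_to_lowres_estimated_image may normalize differently):

import torch

def latents_to_rgb_preview(latents: torch.Tensor, factors: list[list[float]]) -> torch.Tensor:
    """Project (C, H, W) latents to an (H, W, 3) RGB estimate via fixed factors."""
    f = torch.tensor(factors, dtype=latents.dtype, device=latents.device)  # (C, 3)
    rgb = torch.einsum("chw,cr->hwr", latents, f)
    # Rescale to [0, 255] for display.
    rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min() + 1e-8)
    return (rgb * 255).byte()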

View File

@@ -0,0 +1,83 @@
import einops
import torch
from invokeai.backend.flux.extensions.xlabs_ip_adapter_extension import XLabsIPAdapterExtension
from invokeai.backend.flux.math import attention
from invokeai.backend.flux.modules.layers import DoubleStreamBlock
class CustomDoubleStreamBlockProcessor:
"""A class containing a custom implementation of DoubleStreamBlock.forward() with additional features
(IP-Adapter, etc.).
"""
@staticmethod
def _double_stream_block_forward(
block: DoubleStreamBlock, img: torch.Tensor, txt: torch.Tensor, vec: torch.Tensor, pe: torch.Tensor
) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""This function is a direct copy of DoubleStreamBlock.forward(), but it returns some of the intermediate
values.
"""
img_mod1, img_mod2 = block.img_mod(vec)
txt_mod1, txt_mod2 = block.txt_mod(vec)
# prepare image for attention
img_modulated = block.img_norm1(img)
img_modulated = (1 + img_mod1.scale) * img_modulated + img_mod1.shift
img_qkv = block.img_attn.qkv(img_modulated)
img_q, img_k, img_v = einops.rearrange(img_qkv, "B L (K H D) -> K B H L D", K=3, H=block.num_heads)
img_q, img_k = block.img_attn.norm(img_q, img_k, img_v)
# prepare txt for attention
txt_modulated = block.txt_norm1(txt)
txt_modulated = (1 + txt_mod1.scale) * txt_modulated + txt_mod1.shift
txt_qkv = block.txt_attn.qkv(txt_modulated)
txt_q, txt_k, txt_v = einops.rearrange(txt_qkv, "B L (K H D) -> K B H L D", K=3, H=block.num_heads)
txt_q, txt_k = block.txt_attn.norm(txt_q, txt_k, txt_v)
# run actual attention
q = torch.cat((txt_q, img_q), dim=2)
k = torch.cat((txt_k, img_k), dim=2)
v = torch.cat((txt_v, img_v), dim=2)
attn = attention(q, k, v, pe=pe)
txt_attn, img_attn = attn[:, : txt.shape[1]], attn[:, txt.shape[1] :]
# calculate the img blocks
img = img + img_mod1.gate * block.img_attn.proj(img_attn)
img = img + img_mod2.gate * block.img_mlp((1 + img_mod2.scale) * block.img_norm2(img) + img_mod2.shift)
# calculate the txt blocks
txt = txt + txt_mod1.gate * block.txt_attn.proj(txt_attn)
txt = txt + txt_mod2.gate * block.txt_mlp((1 + txt_mod2.scale) * block.txt_norm2(txt) + txt_mod2.shift)
return img, txt, img_q
@staticmethod
def custom_double_block_forward(
timestep_index: int,
total_num_timesteps: int,
block_index: int,
block: DoubleStreamBlock,
img: torch.Tensor,
txt: torch.Tensor,
vec: torch.Tensor,
pe: torch.Tensor,
ip_adapter_extensions: list[XLabsIPAdapterExtension],
) -> tuple[torch.Tensor, torch.Tensor]:
"""A custom implementation of DoubleStreamBlock.forward() with additional features:
- IP-Adapter support
"""
img, txt, img_q = CustomDoubleStreamBlockProcessor._double_stream_block_forward(block, img, txt, vec, pe)
# Apply IP-Adapter conditioning.
for ip_adapter_extension in ip_adapter_extensions:
img = ip_adapter_extension.run_ip_adapter(
timestep_index=timestep_index,
total_num_timesteps=total_num_timesteps,
block_index=block_index,
block=block,
img_q=img_q,
img=img,
)
return img, txt

View File

@@ -1,3 +1,4 @@
import math
from typing import Callable
import torch
@@ -7,6 +8,7 @@ from invokeai.backend.flux.controlnet.controlnet_flux_output import ControlNetFl
from invokeai.backend.flux.extensions.inpaint_extension import InpaintExtension
from invokeai.backend.flux.extensions.instantx_controlnet_extension import InstantXControlNetExtension
from invokeai.backend.flux.extensions.xlabs_controlnet_extension import XLabsControlNetExtension
from invokeai.backend.flux.extensions.xlabs_ip_adapter_extension import XLabsIPAdapterExtension
from invokeai.backend.flux.model import Flux
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
@@ -16,15 +18,23 @@ def denoise(
# model input
img: torch.Tensor,
img_ids: torch.Tensor,
# positive text conditioning
txt: torch.Tensor,
txt_ids: torch.Tensor,
vec: torch.Tensor,
# negative text conditioning
neg_txt: torch.Tensor | None,
neg_txt_ids: torch.Tensor | None,
neg_vec: torch.Tensor | None,
# sampling parameters
timesteps: list[float],
step_callback: Callable[[PipelineIntermediateState], None],
guidance: float,
cfg_scale: list[float],
inpaint_extension: InpaintExtension | None,
controlnet_extensions: list[XLabsControlNetExtension | InstantXControlNetExtension],
pos_ip_adapter_extensions: list[XLabsIPAdapterExtension],
neg_ip_adapter_extensions: list[XLabsIPAdapterExtension],
):
# step 0 is the initial state
total_steps = len(timesteps) - 1
@@ -37,10 +47,9 @@ def denoise(
latents=img,
),
)
step = 1
# guidance_vec is ignored for schnell.
guidance_vec = torch.full((img.shape[0],), guidance, device=img.device, dtype=img.dtype)
for t_curr, t_prev in tqdm(list(zip(timesteps[:-1], timesteps[1:], strict=True))):
for step_index, (t_curr, t_prev) in tqdm(list(enumerate(zip(timesteps[:-1], timesteps[1:], strict=True)))):
t_vec = torch.full((img.shape[0],), t_curr, dtype=img.dtype, device=img.device)
# Run ControlNet models.
@@ -48,7 +57,7 @@ def denoise(
for controlnet_extension in controlnet_extensions:
controlnet_residuals.append(
controlnet_extension.run_controlnet(
timestep_index=step - 1,
timestep_index=step_index,
total_num_timesteps=total_steps,
img=img,
img_ids=img_ids,
@@ -61,7 +70,7 @@ def denoise(
)
# Merge the ControlNet residuals from multiple ControlNets.
# TODO(ryand): We may want to alculate the sum just-in-time to keep peak memory low. Keep in mind, that the
# TODO(ryand): We may want to calculate the sum just-in-time to keep peak memory low. Keep in mind, that the
# controlnet_residuals datastructure is efficient in that it likely contains multiple references to the same
# tensors. Calculating the sum materializes each tensor into its own instance.
merged_controlnet_residuals = sum_controlnet_flux_outputs(controlnet_residuals)
@@ -74,10 +83,39 @@ def denoise(
y=vec,
timesteps=t_vec,
guidance=guidance_vec,
timestep_index=step_index,
total_num_timesteps=total_steps,
controlnet_double_block_residuals=merged_controlnet_residuals.double_block_residuals,
controlnet_single_block_residuals=merged_controlnet_residuals.single_block_residuals,
ip_adapter_extensions=pos_ip_adapter_extensions,
)
step_cfg_scale = cfg_scale[step_index]
# If step_cfg_scale, is 1.0, then we don't need to run the negative prediction.
if not math.isclose(step_cfg_scale, 1.0):
# TODO(ryand): Add option to run positive and negative predictions in a single batch for better performance
# on systems with sufficient VRAM.
if neg_txt is None or neg_txt_ids is None or neg_vec is None:
raise ValueError("Negative text conditioning is required when cfg_scale is not 1.0.")
neg_pred = model(
img=img,
img_ids=img_ids,
txt=neg_txt,
txt_ids=neg_txt_ids,
y=neg_vec,
timesteps=t_vec,
guidance=guidance_vec,
timestep_index=step_index,
total_num_timesteps=total_steps,
controlnet_double_block_residuals=None,
controlnet_single_block_residuals=None,
ip_adapter_extensions=neg_ip_adapter_extensions,
)
pred = neg_pred + step_cfg_scale * (pred - neg_pred)
preview_img = img - t_curr * pred
img = img + (t_prev - t_curr) * pred
@@ -87,13 +125,12 @@ def denoise(
step_callback(
PipelineIntermediateState(
step=step,
step=step_index + 1,
order=1,
total_steps=total_steps,
timestep=int(t_curr),
latents=preview_img,
),
)
step += 1
return img

View File

@@ -0,0 +1,89 @@
import math
from typing import List, Union
import einops
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection
from invokeai.backend.flux.ip_adapter.xlabs_ip_adapter_flux import XlabsIpAdapterFlux
from invokeai.backend.flux.modules.layers import DoubleStreamBlock
class XLabsIPAdapterExtension:
def __init__(
self,
model: XlabsIpAdapterFlux,
image_prompt_clip_embed: torch.Tensor,
weight: Union[float, List[float]],
begin_step_percent: float,
end_step_percent: float,
):
self._model = model
self._image_prompt_clip_embed = image_prompt_clip_embed
self._weight = weight
self._begin_step_percent = begin_step_percent
self._end_step_percent = end_step_percent
self._image_proj: torch.Tensor | None = None
def _get_weight(self, timestep_index: int, total_num_timesteps: int) -> float:
first_step = math.floor(self._begin_step_percent * total_num_timesteps)
last_step = math.ceil(self._end_step_percent * total_num_timesteps)
if timestep_index < first_step or timestep_index > last_step:
return 0.0
if isinstance(self._weight, list):
return self._weight[timestep_index]
return self._weight
@staticmethod
def run_clip_image_encoder(
pil_image: List[Image.Image], image_encoder: CLIPVisionModelWithProjection
) -> torch.Tensor:
clip_image_processor = CLIPImageProcessor()
clip_image: torch.Tensor = clip_image_processor(images=pil_image, return_tensors="pt").pixel_values
clip_image = clip_image.to(device=image_encoder.device, dtype=image_encoder.dtype)
clip_image_embeds = image_encoder(clip_image).image_embeds
return clip_image_embeds
def run_image_proj(self, dtype: torch.dtype):
image_prompt_clip_embed = self._image_prompt_clip_embed.to(dtype=dtype)
self._image_proj = self._model.image_proj(image_prompt_clip_embed)
def run_ip_adapter(
self,
timestep_index: int,
total_num_timesteps: int,
block_index: int,
block: DoubleStreamBlock,
img_q: torch.Tensor,
img: torch.Tensor,
) -> torch.Tensor:
"""The logic in this function is based on:
https://github.com/XLabs-AI/x-flux/blob/47495425dbed499be1e8e5a6e52628b07349cba2/src/flux/modules/layers.py#L245-L301
"""
weight = self._get_weight(timestep_index=timestep_index, total_num_timesteps=total_num_timesteps)
if weight < 1e-6:
return img
ip_adapter_block = self._model.ip_adapter_double_blocks.double_blocks[block_index]
ip_key = ip_adapter_block.ip_adapter_double_stream_k_proj(self._image_proj)
ip_value = ip_adapter_block.ip_adapter_double_stream_v_proj(self._image_proj)
# Reshape projections for multi-head attention.
ip_key = einops.rearrange(ip_key, "B L (H D) -> B H L D", H=block.num_heads)
ip_value = einops.rearrange(ip_value, "B L (H D) -> B H L D", H=block.num_heads)
# Compute attention between IP projections and the latent query.
ip_attn = torch.nn.functional.scaled_dot_product_attention(
img_q, ip_key, ip_value, dropout_p=0.0, is_causal=False
)
ip_attn = einops.rearrange(ip_attn, "B H L D -> B L (H D)", H=block.num_heads)
img = img + weight * ip_attn
return img
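Editor's note: _get_weight implements a step-window schedule — the adapter contributes nothing outside [begin, end] (expressed as fractions of total steps), and inside the window the weight is either a scalar or a per-step list. The same logic in isolation:

import math
from typing import List, Union

def ip_adapter_weight(weight: Union[float, List[float]], step: int, total_steps: int,
                      begin_pct: float, end_pct: float) -> float:
    """Per-step IP-Adapter weight; zero outside the active window."""
    first = math.floor(begin_pct * total_steps)
    last = math.ceil(end_pct * total_steps)
    if step < first or step > last:
        return 0.0
    return weight[step] if isinstance(weight, list) else weight

# ip_adapter_weight(0.8, step=0, total_steps=20, begin_pct=0.2, end_pct=1.0) -> 0.0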

View File

@@ -0,0 +1,93 @@
# This file is based on:
# https://github.com/XLabs-AI/x-flux/blob/47495425dbed499be1e8e5a6e52628b07349cba2/src/flux/modules/layers.py#L221
import einops
import torch
from invokeai.backend.flux.math import attention
from invokeai.backend.flux.modules.layers import DoubleStreamBlock
class IPDoubleStreamBlockProcessor(torch.nn.Module):
"""Attention processor for handling IP-adapter with double stream block."""
def __init__(self, context_dim: int, hidden_dim: int):
super().__init__()
# Ensure context_dim matches the dimension of image_proj
self.context_dim = context_dim
self.hidden_dim = hidden_dim
# Initialize projections for IP-adapter
self.ip_adapter_double_stream_k_proj = torch.nn.Linear(context_dim, hidden_dim, bias=True)
self.ip_adapter_double_stream_v_proj = torch.nn.Linear(context_dim, hidden_dim, bias=True)
torch.nn.init.zeros_(self.ip_adapter_double_stream_k_proj.weight)
torch.nn.init.zeros_(self.ip_adapter_double_stream_k_proj.bias)
torch.nn.init.zeros_(self.ip_adapter_double_stream_v_proj.weight)
torch.nn.init.zeros_(self.ip_adapter_double_stream_v_proj.bias)
def __call__(
self,
attn: DoubleStreamBlock,
img: torch.Tensor,
txt: torch.Tensor,
vec: torch.Tensor,
pe: torch.Tensor,
image_proj: torch.Tensor,
ip_scale: float = 1.0,
):
# Prepare image for attention
img_mod1, img_mod2 = attn.img_mod(vec)
txt_mod1, txt_mod2 = attn.txt_mod(vec)
img_modulated = attn.img_norm1(img)
img_modulated = (1 + img_mod1.scale) * img_modulated + img_mod1.shift
img_qkv = attn.img_attn.qkv(img_modulated)
img_q, img_k, img_v = einops.rearrange(
img_qkv, "B L (K H D) -> K B H L D", K=3, H=attn.num_heads, D=attn.head_dim
)
img_q, img_k = attn.img_attn.norm(img_q, img_k, img_v)
txt_modulated = attn.txt_norm1(txt)
txt_modulated = (1 + txt_mod1.scale) * txt_modulated + txt_mod1.shift
txt_qkv = attn.txt_attn.qkv(txt_modulated)
txt_q, txt_k, txt_v = einops.rearrange(
txt_qkv, "B L (K H D) -> K B H L D", K=3, H=attn.num_heads, D=attn.head_dim
)
txt_q, txt_k = attn.txt_attn.norm(txt_q, txt_k, txt_v)
q = torch.cat((txt_q, img_q), dim=2)
k = torch.cat((txt_k, img_k), dim=2)
v = torch.cat((txt_v, img_v), dim=2)
attn1 = attention(q, k, v, pe=pe)
txt_attn, img_attn = attn1[:, : txt.shape[1]], attn1[:, txt.shape[1] :]
# print(f"txt_attn shape: {txt_attn.size()}")
# print(f"img_attn shape: {img_attn.size()}")
img = img + img_mod1.gate * attn.img_attn.proj(img_attn)
img = img + img_mod2.gate * attn.img_mlp((1 + img_mod2.scale) * attn.img_norm2(img) + img_mod2.shift)
txt = txt + txt_mod1.gate * attn.txt_attn.proj(txt_attn)
txt = txt + txt_mod2.gate * attn.txt_mlp((1 + txt_mod2.scale) * attn.txt_norm2(txt) + txt_mod2.shift)
# IP-adapter processing
ip_query = img_q # latent sample query
ip_key = self.ip_adapter_double_stream_k_proj(image_proj)
ip_value = self.ip_adapter_double_stream_v_proj(image_proj)
# Reshape projections for multi-head attention
ip_key = einops.rearrange(ip_key, "B L (H D) -> B H L D", H=attn.num_heads, D=attn.head_dim)
ip_value = einops.rearrange(ip_value, "B L (H D) -> B H L D", H=attn.num_heads, D=attn.head_dim)
# Compute attention between IP projections and the latent query
ip_attention = torch.nn.functional.scaled_dot_product_attention(
ip_query, ip_key, ip_value, dropout_p=0.0, is_causal=False
)
ip_attention = einops.rearrange(ip_attention, "B H L D -> B L (H D)", H=attn.num_heads, D=attn.head_dim)
img = img + ip_scale * ip_attention
return img, txt

View File

@@ -0,0 +1,50 @@
from typing import Any, Dict
import torch
from invokeai.backend.flux.ip_adapter.xlabs_ip_adapter_flux import XlabsIpAdapterParams
def is_state_dict_xlabs_ip_adapter(sd: Dict[str, Any]) -> bool:
"""Is the state dict for an XLabs FLUX IP-Adapter model?
This is intended to be a reasonably high-precision detector, but it is not guaranteed to have perfect precision.
"""
# If all of the expected keys are present, then this is very likely an XLabs IP-Adapter model.
expected_keys = {
"double_blocks.0.processor.ip_adapter_double_stream_k_proj.bias",
"double_blocks.0.processor.ip_adapter_double_stream_k_proj.weight",
"double_blocks.0.processor.ip_adapter_double_stream_v_proj.bias",
"double_blocks.0.processor.ip_adapter_double_stream_v_proj.weight",
"ip_adapter_proj_model.norm.bias",
"ip_adapter_proj_model.norm.weight",
"ip_adapter_proj_model.proj.bias",
"ip_adapter_proj_model.proj.weight",
}
if expected_keys.issubset(sd.keys()):
return True
return False
def infer_xlabs_ip_adapter_params_from_state_dict(state_dict: dict[str, torch.Tensor]) -> XlabsIpAdapterParams:
num_double_blocks = 0
context_dim = 0
hidden_dim = 0
# Count the number of double blocks.
double_block_index = 0
while f"double_blocks.{double_block_index}.processor.ip_adapter_double_stream_k_proj.weight" in state_dict:
double_block_index += 1
num_double_blocks = double_block_index
hidden_dim = state_dict["double_blocks.0.processor.ip_adapter_double_stream_k_proj.weight"].shape[0]
context_dim = state_dict["double_blocks.0.processor.ip_adapter_double_stream_k_proj.weight"].shape[1]
clip_embeddings_dim = state_dict["ip_adapter_proj_model.proj.weight"].shape[1]
return XlabsIpAdapterParams(
num_double_blocks=num_double_blocks,
context_dim=context_dim,
hidden_dim=hidden_dim,
clip_embeddings_dim=clip_embeddings_dim,
)

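As a sanity check of the detector and the shape-based parameter inference above, a synthetic two-block XLabs-style state dict can be round-tripped through both helpers. The dimensions below are made up for illustration.

import torch
from invokeai.backend.flux.ip_adapter.state_dict_utils import (
    infer_xlabs_ip_adapter_params_from_state_dict,
    is_state_dict_xlabs_ip_adapter,
)

sd = {}
for i in range(2):  # two double blocks
    prefix = f"double_blocks.{i}.processor"
    sd[f"{prefix}.ip_adapter_double_stream_k_proj.weight"] = torch.zeros(3072, 4096)
    sd[f"{prefix}.ip_adapter_double_stream_k_proj.bias"] = torch.zeros(3072)
    sd[f"{prefix}.ip_adapter_double_stream_v_proj.weight"] = torch.zeros(3072, 4096)
    sd[f"{prefix}.ip_adapter_double_stream_v_proj.bias"] = torch.zeros(3072)
sd["ip_adapter_proj_model.proj.weight"] = torch.zeros(4096 * 4, 768)
sd["ip_adapter_proj_model.proj.bias"] = torch.zeros(4096 * 4)
sd["ip_adapter_proj_model.norm.weight"] = torch.zeros(4096)
sd["ip_adapter_proj_model.norm.bias"] = torch.zeros(4096)

assert is_state_dict_xlabs_ip_adapter(sd)
params = infer_xlabs_ip_adapter_params_from_state_dict(sd)
print(params.num_double_blocks, params.hidden_dim, params.context_dim, params.clip_embeddings_dim)
# -> 2 3072 4096 768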
View File

@@ -0,0 +1,67 @@
from dataclasses import dataclass
import torch
from invokeai.backend.ip_adapter.ip_adapter import ImageProjModel
class IPDoubleStreamBlock(torch.nn.Module):
def __init__(self, context_dim: int, hidden_dim: int):
super().__init__()
self.context_dim = context_dim
self.hidden_dim = hidden_dim
self.ip_adapter_double_stream_k_proj = torch.nn.Linear(context_dim, hidden_dim, bias=True)
self.ip_adapter_double_stream_v_proj = torch.nn.Linear(context_dim, hidden_dim, bias=True)
class IPAdapterDoubleBlocks(torch.nn.Module):
def __init__(self, num_double_blocks: int, context_dim: int, hidden_dim: int):
super().__init__()
self.double_blocks = torch.nn.ModuleList(
[IPDoubleStreamBlock(context_dim, hidden_dim) for _ in range(num_double_blocks)]
)
@dataclass
class XlabsIpAdapterParams:
num_double_blocks: int
context_dim: int
hidden_dim: int
clip_embeddings_dim: int
class XlabsIpAdapterFlux(torch.nn.Module):
def __init__(self, params: XlabsIpAdapterParams):
super().__init__()
self.image_proj = ImageProjModel(
cross_attention_dim=params.context_dim, clip_embeddings_dim=params.clip_embeddings_dim
)
self.ip_adapter_double_blocks = IPAdapterDoubleBlocks(
num_double_blocks=params.num_double_blocks, context_dim=params.context_dim, hidden_dim=params.hidden_dim
)
def load_xlabs_state_dict(self, state_dict: dict[str, torch.Tensor], assign: bool = False):
"""We need this custom function to load state dicts rather than using .load_state_dict(...) because the model
structure does not match the state_dict structure.
"""
# Split the state_dict into the image projection model and the double blocks.
image_proj_sd: dict[str, torch.Tensor] = {}
double_blocks_sd: dict[str, torch.Tensor] = {}
for k, v in state_dict.items():
if k.startswith("ip_adapter_proj_model."):
image_proj_sd[k] = v
elif k.startswith("double_blocks."):
double_blocks_sd[k] = v
else:
raise ValueError(f"Unexpected key: {k}")
# Initialize the image projection model.
image_proj_sd = {k.replace("ip_adapter_proj_model.", ""): v for k, v in image_proj_sd.items()}
self.image_proj.load_state_dict(image_proj_sd, assign=assign)
# Initialize the double blocks.
double_blocks_sd = {k.replace("processor.", ""): v for k, v in double_blocks_sd.items()}
self.ip_adapter_double_blocks.load_state_dict(double_blocks_sd, assign=assign)

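The custom loader exists because the checkpoint's key prefixes do not match the module tree, so a plain .load_state_dict() would report missing/unexpected keys. A small standalone illustration of the prefix-remapping idea (not the InvokeAI code path):

import torch

class Proj(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(4, 4)

model = Proj()
checkpoint = {
    "ip_adapter_proj_model.proj.weight": torch.eye(4),
    "ip_adapter_proj_model.proj.bias": torch.zeros(4),
}

# Remap checkpoint keys onto the module's own naming before loading.
remapped = {k.replace("ip_adapter_proj_model.", ""): v for k, v in checkpoint.items()}
model.load_state_dict(remapped)  # succeeds; loading the raw checkpoint would not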
View File

@@ -5,6 +5,8 @@ from dataclasses import dataclass
import torch
from torch import Tensor, nn
from invokeai.backend.flux.custom_block_processor import CustomDoubleStreamBlockProcessor
from invokeai.backend.flux.extensions.xlabs_ip_adapter_extension import XLabsIPAdapterExtension
from invokeai.backend.flux.modules.layers import (
DoubleStreamBlock,
EmbedND,
@@ -88,8 +90,11 @@ class Flux(nn.Module):
timesteps: Tensor,
y: Tensor,
guidance: Tensor | None,
timestep_index: int,
total_num_timesteps: int,
controlnet_double_block_residuals: list[Tensor] | None,
controlnet_single_block_residuals: list[Tensor] | None,
ip_adapter_extensions: list[XLabsIPAdapterExtension],
) -> Tensor:
if img.ndim != 3 or txt.ndim != 3:
raise ValueError("Input img and txt tensors must have 3 dimensions.")
@@ -111,7 +116,19 @@ class Flux(nn.Module):
if controlnet_double_block_residuals is not None:
assert len(controlnet_double_block_residuals) == len(self.double_blocks)
for block_index, block in enumerate(self.double_blocks):
img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
assert isinstance(block, DoubleStreamBlock)
img, txt = CustomDoubleStreamBlockProcessor.custom_double_block_forward(
timestep_index=timestep_index,
total_num_timesteps=total_num_timesteps,
block_index=block_index,
block=block,
img=img,
txt=txt,
vec=vec,
pe=pe,
ip_adapter_extensions=ip_adapter_extensions,
)
if controlnet_double_block_residuals is not None:
img += controlnet_double_block_residuals[block_index]

View File

@@ -168,8 +168,17 @@ def generate_img_ids(h: int, w: int, batch_size: int, device: torch.device, dtyp
Returns:
torch.Tensor: Image position ids.
"""
if device.type == "mps":
orig_dtype = dtype
dtype = torch.float16
img_ids = torch.zeros(h // 2, w // 2, 3, device=device, dtype=dtype)
img_ids[..., 1] = img_ids[..., 1] + torch.arange(h // 2, device=device, dtype=dtype)[:, None]
img_ids[..., 2] = img_ids[..., 2] + torch.arange(w // 2, device=device, dtype=dtype)[None, :]
img_ids = repeat(img_ids, "h w c -> b (h w) c", b=batch_size)
if device.type == "mps":
# Tensor.to() is not in-place; the converted tensor must be reassigned.
img_ids = img_ids.to(orig_dtype)
return img_ids

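Independent of the MPS workaround, the position-id layout can be sanity-checked in isolation: channel 0 is constant zero, while channels 1 and 2 carry the row/column index of each 2x2-packed patch. (Note that Tensor.to() returns a new tensor, hence the reassignment in the MPS branch above.)

import torch
from einops import repeat

h, w, batch_size = 4, 6, 2
img_ids = torch.zeros(h // 2, w // 2, 3)
img_ids[..., 1] += torch.arange(h // 2)[:, None]
img_ids[..., 2] += torch.arange(w // 2)[None, :]
img_ids = repeat(img_ids, "h w c -> b (h w) c", b=batch_size)
print(img_ids.shape)   # torch.Size([2, 6, 3])
print(img_ids[0, :3])  # rows: [0, 0, 0], [0, 0, 1], [0, 0, 2]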
View File

@@ -1,4 +1,4 @@
from typing import Optional
from typing import Optional, TypeAlias
import torch
from PIL import Image
@@ -7,6 +7,14 @@ from transformers.models.sam.processing_sam import SamProcessor
from invokeai.backend.raw_model import RawModel
# Type aliases for the inputs to the SAM model.
ListOfBoundingBoxes: TypeAlias = list[list[int]]
"""A list of bounding boxes. Each bounding box is in the format [xmin, ymin, xmax, ymax]."""
ListOfPoints: TypeAlias = list[list[int]]
"""A list of points. Each point is in the format [x, y]."""
ListOfPointLabels: TypeAlias = list[int]
"""A list of SAM point labels. Each label is an integer where -1 is background, 0 is neutral, and 1 is foreground."""
class SegmentAnythingPipeline(RawModel):
"""A wrapper class for the transformers SAM model and processor that makes it compatible with the model manager."""
@@ -27,20 +35,53 @@ class SegmentAnythingPipeline(RawModel):
return calc_module_size(self._sam_model)
def segment(self, image: Image.Image, bounding_boxes: list[list[int]]) -> torch.Tensor:
def segment(
self,
image: Image.Image,
bounding_boxes: list[list[int]] | None = None,
point_lists: list[list[list[int]]] | None = None,
) -> torch.Tensor:
"""Run the SAM model.
Either bounding_boxes or point_lists must be provided. If both are provided, bounding_boxes will be used and
point_lists will be ignored.
Args:
image (Image.Image): The image to segment.
bounding_boxes (list[list[int]]): The bounding box prompts. Each bounding box is in the format
[xmin, ymin, xmax, ymax].
point_lists (list[list[list[int]]]): The point prompts. Each point is in the format [x, y, label].
`label` is an integer where -1 is background, 0 is neutral, and 1 is foreground.
Returns:
torch.Tensor: The segmentation masks. dtype: torch.bool. shape: [num_masks, channels, height, width].
"""
# Add batch dimension of 1 to the bounding boxes.
boxes = [bounding_boxes]
inputs = self._sam_processor(images=image, input_boxes=boxes, return_tensors="pt").to(self._sam_model.device)
# Prep the inputs:
# - Create a list of bounding boxes or points and labels.
# - Add a batch dimension of 1 to the inputs.
if bounding_boxes:
input_boxes: list[ListOfBoundingBoxes] | None = [bounding_boxes]
input_points: list[ListOfPoints] | None = None
input_labels: list[ListOfPointLabels] | None = None
elif point_lists:
input_boxes: list[ListOfBoundingBoxes] | None = None
input_points: list[ListOfPoints] | None = []
input_labels: list[ListOfPointLabels] | None = []
for point_list in point_lists:
input_points.append([[p[0], p[1]] for p in point_list])
input_labels.append([p[2] for p in point_list])
else:
raise ValueError("Either bounding_boxes or point_lists must be provided.")
inputs = self._sam_processor(
images=image,
input_boxes=input_boxes,
input_points=input_points,
input_labels=input_labels,
return_tensors="pt",
).to(self._sam_model.device)
outputs = self._sam_model(**inputs)
masks = self._sam_processor.post_process_masks(
masks=outputs.pred_masks,

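Hypothetical usage of the extended segment() API, assuming `pipeline` is an already-constructed SegmentAnythingPipeline wrapping a loaded SAM model and processor; the image and prompts below are made up.

from PIL import Image

image = Image.new("RGB", (512, 512))

# Box prompt: one box in [xmin, ymin, xmax, ymax] form.
masks = pipeline.segment(image=image, bounding_boxes=[[100, 100, 400, 400]])

# Point prompt: one object described by two points, each [x, y, label]
# (label 1 = foreground, -1 = background, 0 = neutral).
masks = pipeline.segment(image=image, point_lists=[[[256, 256, 1], [50, 50, -1]]])

print(masks.shape)  # [num_masks, channels, height, width], dtype=torch.bool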
View File

@@ -53,6 +53,7 @@ class BaseModelType(str, Enum):
Any = "any"
StableDiffusion1 = "sd-1"
StableDiffusion2 = "sd-2"
StableDiffusion3 = "sd-3"
StableDiffusionXL = "sdxl"
StableDiffusionXLRefiner = "sdxl-refiner"
Flux = "flux"
@@ -83,8 +84,10 @@ class SubModelType(str, Enum):
Transformer = "transformer"
TextEncoder = "text_encoder"
TextEncoder2 = "text_encoder_2"
TextEncoder3 = "text_encoder_3"
Tokenizer = "tokenizer"
Tokenizer2 = "tokenizer_2"
Tokenizer3 = "tokenizer_3"
VAE = "vae"
VAEDecoder = "vae_decoder"
VAEEncoder = "vae_encoder"
@@ -92,6 +95,13 @@ class SubModelType(str, Enum):
SafetyChecker = "safety_checker"
class ClipVariantType(str, Enum):
"""Variant type."""
L = "large"
G = "gigantic"
class ModelVariantType(str, Enum):
"""Variant type."""
@@ -147,6 +157,15 @@ class ModelSourceType(str, Enum):
DEFAULTS_PRECISION = Literal["fp16", "fp32"]
AnyVariant: TypeAlias = Union[ModelVariantType, ClipVariantType, None]
class SubmodelDefinition(BaseModel):
path_or_prefix: str
model_type: ModelType
variant: AnyVariant = None
class MainModelDefaultSettings(BaseModel):
vae: str | None = Field(default=None, description="Default VAE for this model (model key)")
vae_precision: DEFAULTS_PRECISION | None = Field(default=None, description="Default VAE precision for this model")
@@ -193,6 +212,9 @@ class ModelConfigBase(BaseModel):
schema["required"].extend(["key", "type", "format"])
model_config = ConfigDict(validate_assignment=True, json_schema_extra=json_schema_extra)
submodels: Optional[Dict[SubModelType, SubmodelDefinition]] = Field(
description="Loadable submodels in this model", default=None
)
class CheckpointConfigBase(ModelConfigBase):
@@ -335,7 +357,7 @@ class MainConfigBase(ModelConfigBase):
default_settings: Optional[MainModelDefaultSettings] = Field(
description="Default settings for this model", default=None
)
variant: ModelVariantType = ModelVariantType.Normal
variant: AnyVariant = ModelVariantType.Normal
class MainCheckpointConfig(CheckpointConfigBase, MainConfigBase):
@@ -394,6 +416,8 @@ class IPAdapterBaseConfig(ModelConfigBase):
class IPAdapterInvokeAIConfig(IPAdapterBaseConfig):
"""Model config for IP Adapter diffusers format models."""
# TODO(ryand): Should we deprecate this field? From what I can tell, it hasn't been probed correctly for a long
# time. Need to go through the history to make sure I'm understanding this fully.
image_encoder_model_id: str
format: Literal[ModelFormat.InvokeAI]
@@ -417,12 +441,33 @@ class CLIPEmbedDiffusersConfig(DiffusersConfigBase):
type: Literal[ModelType.CLIPEmbed] = ModelType.CLIPEmbed
format: Literal[ModelFormat.Diffusers] = ModelFormat.Diffusers
variant: ClipVariantType = ClipVariantType.L
@staticmethod
def get_tag() -> Tag:
return Tag(f"{ModelType.CLIPEmbed.value}.{ModelFormat.Diffusers.value}")
class CLIPGEmbedDiffusersConfig(CLIPEmbedDiffusersConfig):
"""Model config for CLIP-G Embeddings."""
variant: ClipVariantType = ClipVariantType.G
@staticmethod
def get_tag() -> Tag:
return Tag(f"{ModelType.CLIPEmbed.value}.{ModelFormat.Diffusers.value}.{ClipVariantType.G}")
class CLIPLEmbedDiffusersConfig(CLIPEmbedDiffusersConfig):
"""Model config for CLIP-L Embeddings."""
variant: ClipVariantType = ClipVariantType.L
@staticmethod
def get_tag() -> Tag:
return Tag(f"{ModelType.CLIPEmbed.value}.{ModelFormat.Diffusers.value}.{ClipVariantType.L}")
class CLIPVisionDiffusersConfig(DiffusersConfigBase):
"""Model config for CLIPVision."""
@@ -499,6 +544,8 @@ AnyModelConfig = Annotated[
Annotated[SpandrelImageToImageConfig, SpandrelImageToImageConfig.get_tag()],
Annotated[CLIPVisionDiffusersConfig, CLIPVisionDiffusersConfig.get_tag()],
Annotated[CLIPEmbedDiffusersConfig, CLIPEmbedDiffusersConfig.get_tag()],
Annotated[CLIPLEmbedDiffusersConfig, CLIPLEmbedDiffusersConfig.get_tag()],
Annotated[CLIPGEmbedDiffusersConfig, CLIPGEmbedDiffusersConfig.get_tag()],
],
Discriminator(get_model_discriminator_value),
]

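A hedged sketch of how the new submodels metadata is populated: constructing a SubmodelDefinition the way the probe does for a pipeline component (path and values are illustrative).

from invokeai.backend.model_manager.config import (
    ClipVariantType,
    ModelType,
    SubmodelDefinition,
)

text_encoder = SubmodelDefinition(
    path_or_prefix="/models/sd35-medium/text_encoder",  # hypothetical path
    model_type=ModelType.CLIPEmbed,
    variant=ClipVariantType.L,
)
print(text_encoder.model_dump())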
View File

@@ -0,0 +1,41 @@
from pathlib import Path
from typing import Optional
from transformers import CLIPVisionModelWithProjection
from invokeai.backend.model_manager.config import (
AnyModel,
AnyModelConfig,
BaseModelType,
DiffusersConfigBase,
ModelFormat,
ModelType,
SubModelType,
)
from invokeai.backend.model_manager.load.load_default import ModelLoader
from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.CLIPVision, format=ModelFormat.Diffusers)
class ClipVisionLoader(ModelLoader):
"""Class to load CLIPVision models."""
def _load_model(
self,
config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> AnyModel:
if not isinstance(config, DiffusersConfigBase):
raise ValueError("Only DiffusersConfigBase models are currently supported here.")
if submodel_type is not None:
raise Exception("There are no submodels in CLIP Vision models.")
model_path = Path(config.path)
model = CLIPVisionModelWithProjection.from_pretrained(
model_path, torch_dtype=self._torch_dtype, local_files_only=True
)
assert isinstance(model, CLIPVisionModelWithProjection)
return model

View File

@@ -19,6 +19,10 @@ from invokeai.backend.flux.controlnet.state_dict_utils import (
is_state_dict_xlabs_controlnet,
)
from invokeai.backend.flux.controlnet.xlabs_controlnet_flux import XLabsControlNetFlux
from invokeai.backend.flux.ip_adapter.state_dict_utils import infer_xlabs_ip_adapter_params_from_state_dict
from invokeai.backend.flux.ip_adapter.xlabs_ip_adapter_flux import (
XlabsIpAdapterFlux,
)
from invokeai.backend.flux.model import Flux
from invokeai.backend.flux.modules.autoencoder import AutoEncoder
from invokeai.backend.flux.util import ae_params, params
@@ -35,6 +39,7 @@ from invokeai.backend.model_manager.config import (
CLIPEmbedDiffusersConfig,
ControlNetCheckpointConfig,
ControlNetDiffusersConfig,
IPAdapterCheckpointConfig,
MainBnbQuantized4bCheckpointConfig,
MainCheckpointConfig,
MainGGUFCheckpointConfig,
@@ -123,9 +128,9 @@ class BnbQuantizedLlmInt8bCheckpointModel(ModelLoader):
"The bnb modules are not available. Please install bitsandbytes if available on your platform."
)
match submodel_type:
case SubModelType.Tokenizer2:
case SubModelType.Tokenizer2 | SubModelType.Tokenizer3:
return T5Tokenizer.from_pretrained(Path(config.path) / "tokenizer_2", max_length=512)
case SubModelType.TextEncoder2:
case SubModelType.TextEncoder2 | SubModelType.TextEncoder3:
te2_model_path = Path(config.path) / "text_encoder_2"
model_config = AutoConfig.from_pretrained(te2_model_path)
with accelerate.init_empty_weights():
@@ -167,10 +172,10 @@ class T5EncoderCheckpointModel(ModelLoader):
raise ValueError("Only T5EncoderConfig models are currently supported here.")
match submodel_type:
case SubModelType.Tokenizer2:
case SubModelType.Tokenizer2 | SubModelType.Tokenizer3:
return T5Tokenizer.from_pretrained(Path(config.path) / "tokenizer_2", max_length=512)
case SubModelType.TextEncoder2:
return T5EncoderModel.from_pretrained(Path(config.path) / "text_encoder_2")
case SubModelType.TextEncoder2 | SubModelType.TextEncoder3:
return T5EncoderModel.from_pretrained(Path(config.path) / "text_encoder_2", torch_dtype="auto")
raise ValueError(
f"Only Tokenizer and TextEncoder submodels are currently supported. Received: {submodel_type.value if submodel_type else 'None'}"
@@ -352,3 +357,26 @@ class FluxControlnetModel(ModelLoader):
model.load_state_dict(sd, assign=True)
return model
@ModelLoaderRegistry.register(base=BaseModelType.Flux, type=ModelType.IPAdapter, format=ModelFormat.Checkpoint)
class FluxIpAdapterModel(ModelLoader):
"""Class to load FLUX IP-Adapter models."""
def _load_model(
self,
config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> AnyModel:
if not isinstance(config, IPAdapterCheckpointConfig):
raise ValueError(f"Unexpected model config type: {type(config)}.")
sd = load_file(Path(config.path))
params = infer_xlabs_ip_adapter_params_from_state_dict(sd)
with accelerate.init_empty_weights():
model = XlabsIpAdapterFlux(params=params)
model.load_xlabs_state_dict(sd, assign=True)
return model

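The loader above relies on the init_empty_weights + load_state_dict(assign=True) pattern: parameters are created on the "meta" device with no storage, then the checkpoint tensors are adopted directly, so the weights are never materialized twice in RAM. A standalone, non-InvokeAI illustration (assign=True requires torch >= 2.1):

import accelerate
import torch

with accelerate.init_empty_weights():
    model = torch.nn.Linear(4, 4)      # weights live on the meta device

print(model.weight.device)             # meta

sd = {"weight": torch.eye(4), "bias": torch.zeros(4)}
model.load_state_dict(sd, assign=True) # adopt checkpoint tensors directly
print(model.weight.device)             # cpu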
View File

@@ -22,7 +22,6 @@ from invokeai.backend.model_manager.load.load_default import ModelLoader
from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.CLIPVision, format=ModelFormat.Diffusers)
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.T2IAdapter, format=ModelFormat.Diffusers)
class GenericDiffusersLoader(ModelLoader):
"""Class to load simple diffusers models."""

View File

@@ -42,6 +42,7 @@ VARIANT_TO_IN_CHANNEL_MAP = {
@ModelLoaderRegistry.register(
base=BaseModelType.StableDiffusionXLRefiner, type=ModelType.Main, format=ModelFormat.Diffusers
)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion3, type=ModelType.Main, format=ModelFormat.Diffusers)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion1, type=ModelType.Main, format=ModelFormat.Checkpoint)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion2, type=ModelType.Main, format=ModelFormat.Checkpoint)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusionXL, type=ModelType.Main, format=ModelFormat.Checkpoint)
@@ -51,13 +52,6 @@ VARIANT_TO_IN_CHANNEL_MAP = {
class StableDiffusionDiffusersModel(GenericDiffusersLoader):
"""Class to load main models."""
model_base_to_model_type = {
BaseModelType.StableDiffusion1: "FrozenCLIPEmbedder",
BaseModelType.StableDiffusion2: "FrozenOpenCLIPEmbedder",
BaseModelType.StableDiffusionXL: "SDXL",
BaseModelType.StableDiffusionXLRefiner: "SDXL-Refiner",
}
def _load_model(
self,
config: AnyModelConfig,
@@ -117,8 +111,6 @@ class StableDiffusionDiffusersModel(GenericDiffusersLoader):
load_class = load_classes[config.base][config.variant]
except KeyError as e:
raise Exception(f"No diffusers pipeline known for base={config.base}, variant={config.variant}") from e
prediction_type = config.prediction_type.value
upcast_attention = config.upcast_attention
# Without SilenceWarnings we get log messages like this:
# site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
@@ -129,13 +121,7 @@ class StableDiffusionDiffusersModel(GenericDiffusersLoader):
# ['text_model.embeddings.position_ids']
with SilenceWarnings():
pipeline = load_class.from_single_file(
config.path,
torch_dtype=self._torch_dtype,
prediction_type=prediction_type,
upcast_attention=upcast_attention,
load_safety_checker=False,
)
pipeline = load_class.from_single_file(config.path, torch_dtype=self._torch_dtype)
if not submodel_type:
return pipeline

View File

@@ -20,7 +20,7 @@ from typing import Optional
import requests
from huggingface_hub import HfApi, configure_http_backend, hf_hub_url
from huggingface_hub.utils._errors import RepositoryNotFoundError, RevisionNotFoundError
from huggingface_hub.errors import RepositoryNotFoundError, RevisionNotFoundError
from pydantic.networks import AnyHttpUrl
from requests.sessions import Session

View File

@@ -1,7 +1,7 @@
import json
import re
from pathlib import Path
from typing import Any, Dict, Literal, Optional, Union
from typing import Any, Callable, Dict, Literal, Optional, Union
import safetensors.torch
import spandrel
@@ -14,6 +14,7 @@ from invokeai.backend.flux.controlnet.state_dict_utils import (
is_state_dict_instantx_controlnet,
is_state_dict_xlabs_controlnet,
)
from invokeai.backend.flux.ip_adapter.state_dict_utils import is_state_dict_xlabs_ip_adapter
from invokeai.backend.lora.conversions.flux_diffusers_lora_conversion_utils import (
is_state_dict_likely_in_flux_diffusers_format,
)
@@ -21,6 +22,7 @@ from invokeai.backend.lora.conversions.flux_kohya_lora_conversion_utils import i
from invokeai.backend.model_hash.model_hash import HASHING_ALGORITHMS, ModelHash
from invokeai.backend.model_manager.config import (
AnyModelConfig,
AnyVariant,
BaseModelType,
ControlAdapterDefaultSettings,
InvalidModelConfigException,
@@ -32,8 +34,15 @@ from invokeai.backend.model_manager.config import (
ModelType,
ModelVariantType,
SchedulerPredictionType,
SubmodelDefinition,
SubModelType,
)
from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import ConfigLoader
from invokeai.backend.model_manager.util.model_util import (
get_clip_variant_type,
lora_token_vector_length,
read_checkpoint_meta,
)
from invokeai.backend.model_manager.util.model_util import lora_token_vector_length, read_checkpoint_meta
from invokeai.backend.quantization.gguf.ggml_tensor import GGMLTensor
from invokeai.backend.quantization.gguf.loaders import gguf_sd_loader
from invokeai.backend.spandrel_image_to_image_model import SpandrelImageToImageModel
@@ -111,6 +120,7 @@ class ModelProbe(object):
"StableDiffusionXLPipeline": ModelType.Main,
"StableDiffusionXLImg2ImgPipeline": ModelType.Main,
"StableDiffusionXLInpaintPipeline": ModelType.Main,
"StableDiffusion3Pipeline": ModelType.Main,
"LatentConsistencyModelPipeline": ModelType.Main,
"AutoencoderKL": ModelType.VAE,
"AutoencoderTiny": ModelType.VAE,
@@ -121,8 +131,12 @@ class ModelProbe(object):
"CLIPTextModel": ModelType.CLIPEmbed,
"T5EncoderModel": ModelType.T5Encoder,
"FluxControlNetModel": ModelType.ControlNet,
"SD3Transformer2DModel": ModelType.Main,
"CLIPTextModelWithProjection": ModelType.CLIPEmbed,
}
TYPE2VARIANT: Dict[ModelType, Callable[[str], Optional[AnyVariant]]] = {ModelType.CLIPEmbed: get_clip_variant_type}
@classmethod
def register_probe(
cls, format: Literal["diffusers", "checkpoint", "onnx"], model_type: ModelType, probe_class: type[ProbeBase]
@@ -169,7 +183,10 @@ class ModelProbe(object):
fields["path"] = model_path.as_posix()
fields["type"] = fields.get("type") or model_type
fields["base"] = fields.get("base") or probe.get_base_type()
fields["variant"] = fields.get("variant") or probe.get_variant_type()
variant_func = cls.TYPE2VARIANT.get(fields["type"], None)
fields["variant"] = (
fields.get("variant") or (variant_func and variant_func(model_path.as_posix())) or probe.get_variant_type()
)
fields["prediction_type"] = fields.get("prediction_type") or probe.get_scheduler_prediction_type()
fields["image_encoder_model_id"] = fields.get("image_encoder_model_id") or probe.get_image_encoder_model_id()
fields["name"] = fields.get("name") or cls.get_model_name(model_path)
@@ -216,6 +233,10 @@ class ModelProbe(object):
and fields["prediction_type"] == SchedulerPredictionType.VPrediction
)
get_submodels = getattr(probe, "get_submodels", None)
if fields["base"] == BaseModelType.StableDiffusion3 and callable(get_submodels):
fields["submodels"] = get_submodels()
model_info = ModelConfigFactory.make_config(fields) # , key=fields.get("key", None))
return model_info
@@ -243,8 +264,6 @@ class ModelProbe(object):
"cond_stage_model.",
"first_stage_model.",
"model.diffusion_model.",
# FLUX models in the official BFL format contain keys with the "double_blocks." prefix.
"double_blocks.",
# Some FLUX checkpoint files contain transformer keys prefixed with "model.diffusion_model".
# This prefix is typically used to distinguish between multiple models bundled in a single file.
"model.diffusion_model.double_blocks.",
@@ -252,6 +271,10 @@ class ModelProbe(object):
):
# Keys starting with double_blocks are associated with Flux models
return ModelType.Main
# FLUX models in the official BFL format contain keys with the "double_blocks." prefix, but we must be
# careful to avoid false positives on XLabs FLUX IP-Adapter models.
elif key.startswith("double_blocks.") and "ip_adapter" not in key:
return ModelType.Main
elif key.startswith(("encoder.conv_in", "decoder.conv_in")):
return ModelType.VAE
elif key.startswith(("lora_te_", "lora_unet_")):
@@ -274,7 +297,14 @@ class ModelProbe(object):
)
):
return ModelType.ControlNet
elif key.startswith(("image_proj.", "ip_adapter.")):
elif key.startswith(
(
"image_proj.",
"ip_adapter.",
# XLabs FLUX IP-Adapter models have keys starting with "ip_adapter_proj_model.".
"ip_adapter_proj_model.",
)
):
return ModelType.IPAdapter
elif key in {"emb_params", "string_to_param"}:
return ModelType.TextualInversion
@@ -452,8 +482,9 @@ MODEL_NAME_TO_PREPROCESSOR = {
"normal": "normalbae_image_processor",
"sketch": "pidi_image_processor",
"scribble": "lineart_image_processor",
"lineart": "lineart_image_processor",
"lineart anime": "lineart_anime_image_processor",
"lineart_anime": "lineart_anime_image_processor",
"lineart": "lineart_image_processor",
"softedge": "hed_image_processor",
"hed": "hed_image_processor",
"shuffle": "content_shuffle_image_processor",
@@ -672,6 +703,10 @@ class IPAdapterCheckpointProbe(CheckpointProbeBase):
def get_base_type(self) -> BaseModelType:
checkpoint = self.checkpoint
if is_state_dict_xlabs_ip_adapter(checkpoint):
return BaseModelType.Flux
for key in checkpoint.keys():
if not key.startswith(("image_proj.", "ip_adapter.")):
continue
@@ -732,18 +767,33 @@ class FolderProbeBase(ProbeBase):
class PipelineFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
with open(self.model_path / "unet" / "config.json", "r") as file:
unet_conf = json.load(file)
if unet_conf["cross_attention_dim"] == 768:
return BaseModelType.StableDiffusion1
elif unet_conf["cross_attention_dim"] == 1024:
return BaseModelType.StableDiffusion2
elif unet_conf["cross_attention_dim"] == 1280:
return BaseModelType.StableDiffusionXLRefiner
elif unet_conf["cross_attention_dim"] == 2048:
return BaseModelType.StableDiffusionXL
else:
raise InvalidModelConfigException(f"Unknown base model for {self.model_path}")
# Handle pipelines with a UNet (i.e. SD 1.x, SD 2.x, SDXL).
config_path = self.model_path / "unet" / "config.json"
if config_path.exists():
with open(config_path) as file:
unet_conf = json.load(file)
if unet_conf["cross_attention_dim"] == 768:
return BaseModelType.StableDiffusion1
elif unet_conf["cross_attention_dim"] == 1024:
return BaseModelType.StableDiffusion2
elif unet_conf["cross_attention_dim"] == 1280:
return BaseModelType.StableDiffusionXLRefiner
elif unet_conf["cross_attention_dim"] == 2048:
return BaseModelType.StableDiffusionXL
else:
raise InvalidModelConfigException(f"Unknown base model for {self.model_path}")
# Handle pipelines with a transformer (i.e. SD3).
config_path = self.model_path / "transformer" / "config.json"
if config_path.exists():
with open(config_path) as file:
transformer_conf = json.load(file)
if transformer_conf["_class_name"] == "SD3Transformer2DModel":
return BaseModelType.StableDiffusion3
else:
raise InvalidModelConfigException(f"Unknown base model for {self.model_path}")
raise InvalidModelConfigException(f"Unknown base model for {self.model_path}")
def get_scheduler_prediction_type(self) -> SchedulerPredictionType:
with open(self.model_path / "scheduler" / "scheduler_config.json", "r") as file:
@@ -755,6 +805,23 @@ class PipelineFolderProbe(FolderProbeBase):
else:
raise InvalidModelConfigException(f"Unknown scheduler prediction type: {scheduler_conf['prediction_type']}")
def get_submodels(self) -> Dict[SubModelType, SubmodelDefinition]:
config = ConfigLoader.load_config(self.model_path, config_name="model_index.json")
submodels: Dict[SubModelType, SubmodelDefinition] = {}
for key, value in config.items():
if key.startswith("_") or not (isinstance(value, list) and len(value) == 2):
continue
model_loader = str(value[1])
if model_type := ModelProbe.CLASS2TYPE.get(model_loader):
variant_func = ModelProbe.TYPE2VARIANT.get(model_type, None)
submodels[SubModelType(key)] = SubmodelDefinition(
path_or_prefix=(self.model_path / key).resolve().as_posix(),
model_type=model_type,
variant=variant_func and variant_func((self.model_path / key).as_posix()),
)
return submodels
def get_variant_type(self) -> ModelVariantType:
# This only works for pipelines! Any kind of
# exception results in our returning the

View File

@@ -13,6 +13,9 @@ class StarterModelWithoutDependencies(BaseModel):
type: ModelType
format: Optional[ModelFormat] = None
is_installed: bool = False
# allows us to track what models a user has installed across name changes within starter models
# if you update a starter model name, please add the old one to this list for that starter model
previous_names: list[str] = []
class StarterModel(StarterModelWithoutDependencies):
@@ -25,22 +28,6 @@ class StarterModelBundles(BaseModel):
models: list[StarterModel]
ip_adapter_sd_image_encoder = StarterModel(
name="IP Adapter SD1.5 Image Encoder",
base=BaseModelType.StableDiffusion1,
source="InvokeAI/ip_adapter_sd_image_encoder",
description="IP Adapter SD Image Encoder",
type=ModelType.CLIPVision,
)
ip_adapter_sdxl_image_encoder = StarterModel(
name="IP Adapter SDXL Image Encoder",
base=BaseModelType.StableDiffusionXL,
source="InvokeAI/ip_adapter_sdxl_image_encoder",
description="IP Adapter SDXL Image Encoder",
type=ModelType.CLIPVision,
)
cyberrealistic_negative = StarterModel(
name="CyberRealistic Negative v3",
base=BaseModelType.StableDiffusion1,
@@ -49,6 +36,32 @@ cyberrealistic_negative = StarterModel(
type=ModelType.TextualInversion,
)
# region CLIP Image Encoders
ip_adapter_sd_image_encoder = StarterModel(
name="IP Adapter SD1.5 Image Encoder",
base=BaseModelType.StableDiffusion1,
source="InvokeAI/ip_adapter_sd_image_encoder",
description="IP Adapter SD Image Encoder",
type=ModelType.CLIPVision,
)
ip_adapter_sdxl_image_encoder = StarterModel(
name="IP Adapter SDXL Image Encoder",
base=BaseModelType.StableDiffusionXL,
source="InvokeAI/ip_adapter_sdxl_image_encoder",
description="IP Adapter SDXL Image Encoder",
type=ModelType.CLIPVision,
)
# Note: This model is installed from the same source as the CLIPEmbed model below. The model contains both the image
# encoder and the text encoder, but we need separate model entries so that they get loaded correctly.
clip_vit_l_image_encoder = StarterModel(
name="clip-vit-large-patch14",
base=BaseModelType.Any,
source="InvokeAI/clip-vit-large-patch14",
description="CLIP ViT-L Image Encoder",
type=ModelType.CLIPVision,
)
# endregion
# region TextEncoders
t5_base_encoder = StarterModel(
name="t5_base_encoder",
@@ -127,6 +140,22 @@ flux_dev = StarterModel(
type=ModelType.Main,
dependencies=[t5_base_encoder, flux_vae, clip_l_encoder],
)
sd35_medium = StarterModel(
name="SD3.5 Medium",
base=BaseModelType.StableDiffusion3,
source="stabilityai/stable-diffusion-3.5-medium",
description="Medium SD3.5 Model: ~15GB",
type=ModelType.Main,
dependencies=[],
)
sd35_large = StarterModel(
name="SD3.5 Large",
base=BaseModelType.StableDiffusion3,
source="stabilityai/stable-diffusion-3.5-large",
description="Large SD3.5 Model: ~19G",
type=ModelType.Main,
dependencies=[],
)
cyberrealistic_sd1 = StarterModel(
name="CyberRealistic v4.1",
base=BaseModelType.StableDiffusion1,
@@ -186,6 +215,16 @@ dreamshaper_sdxl = StarterModel(
type=ModelType.Main,
dependencies=[sdxl_fp16_vae_fix],
)
archvis_sdxl = StarterModel(
name="Architecture (RealVisXL5)",
base=BaseModelType.StableDiffusionXL,
source="SG161222/RealVisXL_V5.0",
description="A photorealistic model, with architecture among its many use cases",
type=ModelType.Main,
dependencies=[sdxl_fp16_vae_fix],
)
sdxl_refiner = StarterModel(
name="SDXL Refiner",
base=BaseModelType.StableDiffusionXLRefiner,
@@ -223,36 +262,49 @@ easy_neg_sd1 = StarterModel(
# endregion
# region IP Adapter
ip_adapter_sd1 = StarterModel(
name="IP Adapter",
name="Standard Reference (IP Adapter)",
base=BaseModelType.StableDiffusion1,
source="https://huggingface.co/InvokeAI/ip_adapter_sd15/resolve/main/ip-adapter_sd15.safetensors",
description="IP-Adapter for SD 1.5 models",
description="References images with a more generalized/looser degree of precision.",
type=ModelType.IPAdapter,
dependencies=[ip_adapter_sd_image_encoder],
previous_names=["IP Adapter"],
)
ip_adapter_plus_sd1 = StarterModel(
name="IP Adapter Plus",
name="Precise Reference (IP Adapter Plus)",
base=BaseModelType.StableDiffusion1,
source="https://huggingface.co/InvokeAI/ip_adapter_plus_sd15/resolve/main/ip-adapter-plus_sd15.safetensors",
description="Refined IP-Adapter for SD 1.5 models",
description="References images with a higher degree of precision.",
type=ModelType.IPAdapter,
dependencies=[ip_adapter_sd_image_encoder],
previous_names=["IP Adapter Plus"],
)
ip_adapter_plus_face_sd1 = StarterModel(
name="IP Adapter Plus Face",
name="Face Reference (IP Adapter Plus Face)",
base=BaseModelType.StableDiffusion1,
source="https://huggingface.co/InvokeAI/ip_adapter_plus_face_sd15/resolve/main/ip-adapter-plus-face_sd15.safetensors",
description="Refined IP-Adapter for SD 1.5 models, adapted for faces",
description="References images with a higher degree of precision, adapted for faces",
type=ModelType.IPAdapter,
dependencies=[ip_adapter_sd_image_encoder],
previous_names=["IP Adapter Plus Face"],
)
ip_adapter_sdxl = StarterModel(
name="IP Adapter SDXL",
name="Standard Reference (IP Adapter ViT-H)",
base=BaseModelType.StableDiffusionXL,
source="https://huggingface.co/InvokeAI/ip_adapter_sdxl_vit_h/resolve/main/ip-adapter_sdxl_vit-h.safetensors",
description="IP-Adapter for SDXL models",
description="References images with a higher degree of precision.",
type=ModelType.IPAdapter,
dependencies=[ip_adapter_sdxl_image_encoder],
previous_names=["IP Adapter SDXL"],
)
ip_adapter_flux = StarterModel(
name="Standard Reference (XLabs FLUX IP-Adapter)",
base=BaseModelType.Flux,
source="https://huggingface.co/XLabs-AI/flux-ip-adapter/resolve/main/flux-ip-adapter.safetensors",
description="References images with a more generalized/looser degree of precision.",
type=ModelType.IPAdapter,
dependencies=[clip_vit_l_image_encoder],
previous_names=["XLabs FLUX IP-Adapter"],
)
# endregion
# region ControlNet
@@ -271,157 +323,162 @@ qr_code_cnet_sdxl = StarterModel(
type=ModelType.ControlNet,
)
canny_sd1 = StarterModel(
name="canny",
name="Hard Edge Detection (canny)",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_canny",
description="ControlNet weights trained on sd-1.5 with canny conditioning.",
description="Uses detected edges in the image to control composition.",
type=ModelType.ControlNet,
previous_names=["canny"],
)
inpaint_cnet_sd1 = StarterModel(
name="inpaint",
name="Inpainting",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_inpaint",
description="ControlNet weights trained on sd-1.5 with canny conditioning, inpaint version",
type=ModelType.ControlNet,
previous_names=["inpaint"],
)
mlsd_sd1 = StarterModel(
name="mlsd",
name="Line Drawing (mlsd)",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_mlsd",
description="ControlNet weights trained on sd-1.5 with canny conditioning, MLSD version",
description="Uses straight line detection for controlling the generation.",
type=ModelType.ControlNet,
previous_names=["mlsd"],
)
depth_sd1 = StarterModel(
name="depth",
name="Depth Map",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11f1p_sd15_depth",
description="ControlNet weights trained on sd-1.5 with depth conditioning",
description="Uses depth information in the image to control the depth in the generation.",
type=ModelType.ControlNet,
previous_names=["depth"],
)
normal_bae_sd1 = StarterModel(
name="normal_bae",
name="Lighting Detection (Normals)",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_normalbae",
description="ControlNet weights trained on sd-1.5 with normalbae image conditioning",
description="Uses detected lighting information to guide the lighting of the composition.",
type=ModelType.ControlNet,
previous_names=["normal_bae"],
)
seg_sd1 = StarterModel(
name="seg",
name="Segmentation Map",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_seg",
description="ControlNet weights trained on sd-1.5 with seg image conditioning",
description="Uses segmentation maps to guide the structure of the composition.",
type=ModelType.ControlNet,
previous_names=["seg"],
)
lineart_sd1 = StarterModel(
name="lineart",
name="Lineart",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_lineart",
description="ControlNet weights trained on sd-1.5 with lineart image conditioning",
description="Uses lineart detection to guide the lighting of the composition.",
type=ModelType.ControlNet,
previous_names=["lineart"],
)
lineart_anime_sd1 = StarterModel(
name="lineart_anime",
name="Lineart Anime",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15s2_lineart_anime",
description="ControlNet weights trained on sd-1.5 with anime image conditioning",
description="Uses anime lineart detection to guide the lighting of the composition.",
type=ModelType.ControlNet,
previous_names=["lineart_anime"],
)
openpose_sd1 = StarterModel(
name="openpose",
name="Pose Detection (openpose)",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_openpose",
description="ControlNet weights trained on sd-1.5 with openpose image conditioning",
description="Uses pose information to control the pose of human characters in the generation.",
type=ModelType.ControlNet,
previous_names=["openpose"],
)
scribble_sd1 = StarterModel(
name="scribble",
name="Contour Detection (scribble)",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_scribble",
description="ControlNet weights trained on sd-1.5 with scribble image conditioning",
description="Uses edges, contours, or line art in the image to control composition.",
type=ModelType.ControlNet,
previous_names=["scribble"],
)
softedge_sd1 = StarterModel(
name="softedge",
name="Soft Edge Detection (softedge)",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_softedge",
description="ControlNet weights trained on sd-1.5 with soft edge conditioning",
description="Uses a soft edge detection map to control composition.",
type=ModelType.ControlNet,
previous_names=["softedge"],
)
shuffle_sd1 = StarterModel(
name="shuffle",
name="Remix (shuffle)",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11e_sd15_shuffle",
description="ControlNet weights trained on sd-1.5 with shuffle image conditioning",
type=ModelType.ControlNet,
previous_names=["shuffle"],
)
tile_sd1 = StarterModel(
name="tile",
name="Tile",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11f1e_sd15_tile",
description="ControlNet weights trained on sd-1.5 with tiled image conditioning",
description="Uses image data to replicate exact colors/structure in the resulting generation.",
type=ModelType.ControlNet,
previous_names=["tile"],
)
ip2p_sd1 = StarterModel(
name="ip2p",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11e_sd15_ip2p",
description="ControlNet weights trained on sd-1.5 with ip2p conditioning.",
type=ModelType.ControlNet,
)
canny_sdxl = StarterModel(
name="canny-sdxl",
name="Hard Edge Detection (canny)",
base=BaseModelType.StableDiffusionXL,
source="xinsir/controlNet-canny-sdxl-1.0",
description="ControlNet weights trained on sdxl-1.0 with canny conditioning, by Xinsir.",
description="Uses detected edges in the image to control composition.",
type=ModelType.ControlNet,
previous_names=["canny-sdxl"],
)
depth_sdxl = StarterModel(
name="depth-sdxl",
name="Depth Map",
base=BaseModelType.StableDiffusionXL,
source="diffusers/controlNet-depth-sdxl-1.0",
description="ControlNet weights trained on sdxl-1.0 with depth conditioning.",
description="Uses depth information in the image to control the depth in the generation.",
type=ModelType.ControlNet,
previous_names=["depth-sdxl"],
)
softedge_sdxl = StarterModel(
name="softedge-dexined-sdxl",
name="Soft Edge Detection (softedge)",
base=BaseModelType.StableDiffusionXL,
source="SargeZT/controlNet-sd-xl-1.0-softedge-dexined",
description="ControlNet weights trained on sdxl-1.0 with dexined soft edge preprocessing.",
description="Uses a soft edge detection map to control composition.",
type=ModelType.ControlNet,
previous_names=["softedge-dexined-sdxl"],
)
depth_zoe_16_sdxl = StarterModel(
name="depth-16bit-zoe-sdxl",
base=BaseModelType.StableDiffusionXL,
source="SargeZT/controlNet-sd-xl-1.0-depth-16bit-zoe",
description="ControlNet weights trained on sdxl-1.0 with Zoe's preprocessor (16 bits).",
type=ModelType.ControlNet,
)
depth_zoe_32_sdxl = StarterModel(
name="depth-zoe-sdxl",
base=BaseModelType.StableDiffusionXL,
source="diffusers/controlNet-zoe-depth-sdxl-1.0",
description="ControlNet weights trained on sdxl-1.0 with Zoe's preprocessor (32 bits).",
type=ModelType.ControlNet,
)
openpose_sdxl = StarterModel(
name="openpose-sdxl",
name="Pose Detection (openpose)",
base=BaseModelType.StableDiffusionXL,
source="xinsir/controlNet-openpose-sdxl-1.0",
description="ControlNet weights trained on sdxl-1.0 compatible with the DWPose processor by Xinsir.",
description="Uses pose information to control the pose of human characters in the generation.",
type=ModelType.ControlNet,
previous_names=["openpose-sdxl", "controlnet-openpose-sdxl"],
)
scribble_sdxl = StarterModel(
name="scribble-sdxl",
name="Contour Detection (scribble)",
base=BaseModelType.StableDiffusionXL,
source="xinsir/controlNet-scribble-sdxl-1.0",
description="ControlNet weights trained on sdxl-1.0 compatible with various lineart processors and black/white sketches by Xinsir.",
description="Uses edges, contours, or line art in the image to control composition.",
type=ModelType.ControlNet,
previous_names=["scribble-sdxl", "controlnet-scribble-sdxl"],
)
tile_sdxl = StarterModel(
name="tile-sdxl",
name="Tile",
base=BaseModelType.StableDiffusionXL,
source="xinsir/controlNet-tile-sdxl-1.0",
description="ControlNet weights trained on sdxl-1.0 with tiled image conditioning",
description="Uses image data to replicate exact colors/structure in the resulting generation.",
type=ModelType.ControlNet,
previous_names=["tile-sdxl"],
)
union_cnet_sdxl = StarterModel(
name="Multi-Guidance Detection (Union Pro)",
base=BaseModelType.StableDiffusionXL,
source="InvokeAI/Xinsir-SDXL_Controlnet_Union",
description="A unified ControlNet for SDXL model that supports 10+ control types",
type=ModelType.ControlNet,
)
union_cnet_flux = StarterModel(
@@ -434,60 +491,52 @@ union_cnet_flux = StarterModel(
# endregion
# region T2I Adapter
t2i_canny_sd1 = StarterModel(
name="canny-sd15",
name="Hard Edge Detection (canny)",
base=BaseModelType.StableDiffusion1,
source="TencentARC/t2iadapter_canny_sd15v2",
description="T2I Adapter weights trained on sd-1.5 with canny conditioning.",
description="Uses detected edges in the image to control composition",
type=ModelType.T2IAdapter,
previous_names=["canny-sd15"],
)
t2i_sketch_sd1 = StarterModel(
name="sketch-sd15",
name="Sketch",
base=BaseModelType.StableDiffusion1,
source="TencentARC/t2iadapter_sketch_sd15v2",
description="T2I Adapter weights trained on sd-1.5 with sketch conditioning.",
description="Uses a sketch to control composition",
type=ModelType.T2IAdapter,
previous_names=["sketch-sd15"],
)
t2i_depth_sd1 = StarterModel(
name="depth-sd15",
name="Depth Map",
base=BaseModelType.StableDiffusion1,
source="TencentARC/t2iadapter_depth_sd15v2",
description="T2I Adapter weights trained on sd-1.5 with depth conditioning.",
description="Uses depth information in the image to control the depth in the generation.",
type=ModelType.T2IAdapter,
previous_names=["depth-sd15"],
)
t2i_zoe_depth_sd1 = StarterModel(
name="zoedepth-sd15",
base=BaseModelType.StableDiffusion1,
source="TencentARC/t2iadapter_zoedepth_sd15v1",
description="T2I Adapter weights trained on sd-1.5 with zoe depth conditioning.",
type=ModelType.T2IAdapter,
)
t2i_canny_sdxl = StarterModel(
name="canny-sdxl",
name="Hard Edge Detection (canny)",
base=BaseModelType.StableDiffusionXL,
source="TencentARC/t2i-adapter-canny-sdxl-1.0",
description="T2I Adapter weights trained on sdxl-1.0 with canny conditioning.",
description="Uses detected edges in the image to control composition",
type=ModelType.T2IAdapter,
previous_names=["canny-sdxl"],
)
t2i_zoe_depth_sdxl = StarterModel(
name="zoedepth-sdxl",
base=BaseModelType.StableDiffusionXL,
source="TencentARC/t2i-adapter-depth-zoe-sdxl-1.0",
description="T2I Adapter weights trained on sdxl-1.0 with zoe depth conditioning.",
type=ModelType.T2IAdapter,
)
t2i_lineart_sdxl = StarterModel(
name="lineart-sdxl",
name="Lineart",
base=BaseModelType.StableDiffusionXL,
source="TencentARC/t2i-adapter-lineart-sdxl-1.0",
description="T2I Adapter weights trained on sdxl-1.0 with lineart conditioning.",
description="Uses lineart detection to guide the lighting of the composition.",
type=ModelType.T2IAdapter,
previous_names=["lineart-sdxl"],
)
t2i_sketch_sdxl = StarterModel(
name="sketch-sdxl",
name="Sketch",
base=BaseModelType.StableDiffusionXL,
source="TencentARC/t2i-adapter-sketch-sdxl-1.0",
description="T2I Adapter weights trained on sdxl-1.0 with sketch conditioning.",
description="Uses a sketch to control composition",
type=ModelType.T2IAdapter,
previous_names=["sketch-sdxl"],
)
# endregion
# region SpandrelImageToImage
@@ -537,6 +586,8 @@ STARTER_MODELS: list[StarterModel] = [
flux_dev_quantized,
flux_schnell,
flux_dev,
sd35_medium,
sd35_large,
cyberrealistic_sd1,
rev_animated_sd1,
dreamshaper_8_sd1,
@@ -545,6 +596,7 @@ STARTER_MODELS: list[StarterModel] = [
deliberate_inpainting_sd1,
juggernaut_sdxl,
dreamshaper_sdxl,
archvis_sdxl,
sdxl_refiner,
sdxl_fp16_vae_fix,
flux_vae,
@@ -555,6 +607,7 @@ STARTER_MODELS: list[StarterModel] = [
ip_adapter_plus_sd1,
ip_adapter_plus_face_sd1,
ip_adapter_sdxl,
ip_adapter_flux,
qr_code_cnet_sd1,
qr_code_cnet_sdxl,
canny_sd1,
@@ -570,22 +623,18 @@ STARTER_MODELS: list[StarterModel] = [
softedge_sd1,
shuffle_sd1,
tile_sd1,
ip2p_sd1,
canny_sdxl,
depth_sdxl,
softedge_sdxl,
depth_zoe_16_sdxl,
depth_zoe_32_sdxl,
openpose_sdxl,
scribble_sdxl,
tile_sdxl,
union_cnet_sdxl,
union_cnet_flux,
t2i_canny_sd1,
t2i_sketch_sd1,
t2i_depth_sd1,
t2i_zoe_depth_sd1,
t2i_canny_sdxl,
t2i_zoe_depth_sdxl,
t2i_lineart_sdxl,
t2i_sketch_sdxl,
realesrgan_x4,
@@ -616,7 +665,6 @@ sd1_bundle: list[StarterModel] = [
softedge_sd1,
shuffle_sd1,
tile_sd1,
ip2p_sd1,
swinir,
]
@@ -627,8 +675,6 @@ sdxl_bundle: list[StarterModel] = [
canny_sdxl,
depth_sdxl,
softedge_sdxl,
depth_zoe_16_sdxl,
depth_zoe_32_sdxl,
openpose_sdxl,
scribble_sdxl,
tile_sdxl,
@@ -642,6 +688,7 @@ flux_bundle: list[StarterModel] = [
t5_8b_quantized_encoder,
clip_l_encoder,
union_cnet_flux,
ip_adapter_flux,
]
STARTER_BUNDLES: dict[str, list[StarterModel]] = {

View File

@@ -8,6 +8,7 @@ import safetensors
import torch
from picklescan.scanner import scan_file_path
from invokeai.backend.model_manager.config import ClipVariantType
from invokeai.backend.quantization.gguf.loaders import gguf_sd_loader
@@ -165,3 +166,23 @@ def convert_bundle_to_flux_transformer_checkpoint(
del transformer_state_dict[k]
return original_state_dict
def get_clip_variant_type(location: str) -> Optional[ClipVariantType]:
try:
path = Path(location)
config_path = path / "config.json"
if not config_path.exists():
return ClipVariantType.L
with open(config_path) as file:
clip_conf = json.load(file)
hidden_size = clip_conf.get("hidden_size", -1)
match hidden_size:
case 1280:
return ClipVariantType.G
case 768:
return ClipVariantType.L
case _:
return ClipVariantType.L
except Exception:
return ClipVariantType.L

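A quick check of the hidden_size-to-variant mapping above, using a temporary diffusers-style config.json. The constants match the public CLIP configs (the ViT-L text encoder uses hidden_size 768; the OpenCLIP bigG text encoder uses 1280):

import json
import tempfile
from pathlib import Path

from invokeai.backend.model_manager.util.model_util import get_clip_variant_type

with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "config.json").write_text(json.dumps({"hidden_size": 1280}))
    print(get_clip_variant_type(tmp))          # ClipVariantType.G

print(get_clip_variant_type("/nonexistent"))   # falls back to ClipVariantType.L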
View File

@@ -129,9 +129,11 @@ def _filter_by_variant(files: List[Path], variant: ModelRepoVariant) -> Set[Path
# Some special handling is needed here if there is not an exact match and if we cannot infer the variant
# from the file name. In this case, we only give this file a point if the requested variant is FP32 or DEFAULT.
if candidate_variant_label == f".{variant}" or (
not candidate_variant_label and variant in [ModelRepoVariant.FP32, ModelRepoVariant.Default]
):
if (
variant is not ModelRepoVariant.Default
and candidate_variant_label
and candidate_variant_label.startswith(f".{variant.value}")
) or (not candidate_variant_label and variant in [ModelRepoVariant.FP32, ModelRepoVariant.Default]):
score += 1
if parent not in subfolder_weights:
@@ -146,7 +148,7 @@ def _filter_by_variant(files: List[Path], variant: ModelRepoVariant) -> Set[Path
# Check if at least one of the files has the explicit fp16 variant.
at_least_one_fp16 = False
for candidate in candidate_list:
if len(candidate.path.suffixes) == 2 and candidate.path.suffixes[0] == ".fp16":
if len(candidate.path.suffixes) == 2 and candidate.path.suffixes[0].startswith(".fp16"):
at_least_one_fp16 = True
break
@@ -162,7 +164,16 @@ def _filter_by_variant(files: List[Path], variant: ModelRepoVariant) -> Set[Path
# candidate.
highest_score_candidate = max(candidate_list, key=lambda candidate: candidate.score)
if highest_score_candidate:
result.add(highest_score_candidate.path)
pattern = r"^(.*?)-\d+-of-\d+(\.\w+)$"
match = re.match(pattern, highest_score_candidate.path.as_posix())
if match:
for candidate in candidate_list:
if candidate.path.as_posix().startswith(match.group(1)) and candidate.path.as_posix().endswith(
match.group(2)
):
result.add(candidate.path)
else:
result.add(highest_score_candidate.path)
# If one of the architecture-related variants was specified and no files matched other than
# config and text files then we return an empty list

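The new shard-grouping logic hinges on the regex capturing the shared stem and extension of a sharded checkpoint file, so that sibling shards can be pulled into the result set alongside the highest-scoring file:

import re

pattern = r"^(.*?)-\d+-of-\d+(\.\w+)$"
m = re.match(pattern, "transformer/model-00001-of-00003.safetensors")
assert m is not None
print(m.group(1))  # transformer/model
print(m.group(2))  # .safetensors

# Sibling shards share the same prefix and suffix:
sibling = "transformer/model-00002-of-00003.safetensors"
assert sibling.startswith(m.group(1)) and sibling.endswith(m.group(2))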
View File

@@ -54,6 +54,11 @@ GGML_TENSOR_OP_TABLE = {
torch.ops.aten.mul.Tensor: dequantize_and_run, # pyright: ignore
}
if torch.backends.mps.is_available():
GGML_TENSOR_OP_TABLE.update(
{torch.ops.aten.linear.default: dequantize_and_run} # pyright: ignore
)
class GGMLTensor(torch.Tensor):
"""A torch.Tensor sub-class holding a quantized GGML tensor.

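For context, the op table is consulted from the tensor subclass's __torch_function__ hook, and ops in the table run against dequantized data; the MPS entry presumably exists because torch.nn.functional.linear hits aten.linear directly on that backend. A rough standalone sketch of the dispatch idea, not the GGMLTensor internals:

import torch

class DequantOnUse(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}

        # Replace subclass instances with plain ("dequantized") tensors before
        # running the real op; a GGML tensor would decode its quantized payload here.
        def unwrap(x):
            return x.as_subclass(torch.Tensor) if isinstance(x, DequantOnUse) else x

        args = tuple(unwrap(a) for a in args)
        kwargs = {k: unwrap(v) for k, v in kwargs.items()}
        return func(*args, **kwargs)

w = torch.randn(4, 4).as_subclass(DequantOnUse)
x = torch.randn(2, 4)
print(torch.nn.functional.linear(x, w).shape)  # torch.Size([2, 4])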
View File

@@ -499,6 +499,22 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
for idx, value in enumerate(single_t2i_adapter_data.adapter_state):
accum_adapter_state[idx] += value * t2i_adapter_weight
# Hack: force compatibility with irregular resolutions by padding the feature map with zeros
for idx, tensor in enumerate(accum_adapter_state):
# The tensor size is supposed to be some integer downscale factor of the latents size.
# Internally, the unet will pad the latents before downscaling between levels when they are no longer divisible by its downscale factor.
# If the latent size does not scale down evenly, we need to pad the tensor so that it matches the downscaled padded latents later on.
scale_factor = latents.size()[-1] // tensor.size()[-1]
required_padding_width = math.ceil(latents.size()[-1] / scale_factor) - tensor.size()[-1]
required_padding_height = math.ceil(latents.size()[-2] / scale_factor) - tensor.size()[-2]
tensor = torch.nn.functional.pad(
tensor,
(0, required_padding_width, 0, required_padding_height, 0, 0, 0, 0),
mode="constant",
value=0,
)
accum_adapter_state[idx] = tensor
down_intrablock_additional_residuals = accum_adapter_state
# Handle inpainting models.

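A worked example of the padding arithmetic with toy numbers: 77x77 latents against a 9x9 adapter feature map give scale_factor 77 // 9 = 8, and ceil(77 / 8) = 10, so one row and one column of zeros are appended.

import math
import torch

latents = torch.zeros(1, 4, 77, 77)
tensor = torch.zeros(1, 320, 9, 9)

scale_factor = latents.size()[-1] // tensor.size()[-1]                    # 8
pad_w = math.ceil(latents.size()[-1] / scale_factor) - tensor.size()[-1]  # 10 - 9 = 1
pad_h = math.ceil(latents.size()[-2] / scale_factor) - tensor.size()[-2]  # 1

tensor = torch.nn.functional.pad(tensor, (0, pad_w, 0, pad_h), value=0)
print(tensor.shape)  # torch.Size([1, 320, 10, 10])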
View File

@@ -49,9 +49,32 @@ class FLUXConditioningInfo:
return self
@dataclass
class SD3ConditioningInfo:
clip_l_pooled_embeds: torch.Tensor
clip_l_embeds: torch.Tensor
clip_g_pooled_embeds: torch.Tensor
clip_g_embeds: torch.Tensor
t5_embeds: torch.Tensor | None
def to(self, device: torch.device | None = None, dtype: torch.dtype | None = None):
self.clip_l_pooled_embeds = self.clip_l_pooled_embeds.to(device=device, dtype=dtype)
self.clip_l_embeds = self.clip_l_embeds.to(device=device, dtype=dtype)
self.clip_g_pooled_embeds = self.clip_g_pooled_embeds.to(device=device, dtype=dtype)
self.clip_g_embeds = self.clip_g_embeds.to(device=device, dtype=dtype)
if self.t5_embeds is not None:
self.t5_embeds = self.t5_embeds.to(device=device, dtype=dtype)
return self
@dataclass
class ConditioningFieldData:
conditionings: List[BasicConditioningInfo] | List[SDXLConditioningInfo] | List[FLUXConditioningInfo]
conditionings: (
List[BasicConditioningInfo]
| List[SDXLConditioningInfo]
| List[FLUXConditioningInfo]
| List[SD3ConditioningInfo]
)
@dataclass

View File

@@ -33,7 +33,7 @@ class PreviewExt(ExtensionBase):
def initial_preview(self, ctx: DenoiseContext):
self.callback(
PipelineIntermediateState(
step=-1,
step=0,
order=ctx.scheduler.order,
total_steps=len(ctx.inputs.timesteps),
timestep=int(ctx.scheduler.config.num_train_timesteps), # TODO: is there any code which uses it?

View File

@@ -3,7 +3,7 @@ from typing import Any, Dict, List, Optional, Tuple, Union
import diffusers
import torch
from diffusers.configuration_utils import ConfigMixin, register_to_config
from diffusers.loaders import FromOriginalControlNetMixin
from diffusers.loaders.single_file_model import FromOriginalModelMixin
from diffusers.models.attention_processor import AttentionProcessor, AttnProcessor
from diffusers.models.controlnet import ControlNetConditioningEmbedding, ControlNetOutput, zero_module
from diffusers.models.embeddings import (
@@ -32,7 +32,9 @@ from invokeai.backend.util.logging import InvokeAILogger
logger = InvokeAILogger.get_logger(__name__)
class ControlNetModel(ModelMixin, ConfigMixin, FromOriginalControlNetMixin):
# NOTE(ryand): I'm not the original author of this code, but for future reference, it appears that this class was copied
# from diffusers in order to add support for the encoder_attention_mask argument.
class ControlNetModel(ModelMixin, ConfigMixin, FromOriginalModelMixin):
"""
A ControlNet model.

View File

@@ -9,6 +9,7 @@ const config: KnipConfig = {
'src/services/api/schema.ts',
'src/features/nodes/types/v1/**',
'src/features/nodes/types/v2/**',
'src/features/parameters/types/parameterSchemas.ts',
// TODO(psyche): maybe we can clean up these utils after canvas v2 release
'src/features/controlLayers/konva/util.ts',
// TODO(psyche): restore HRF functionality?

View File

@@ -58,7 +58,7 @@
"@dnd-kit/sortable": "^8.0.0",
"@dnd-kit/utilities": "^3.2.2",
"@fontsource-variable/inter": "^5.1.0",
"@invoke-ai/ui-library": "^0.0.42",
"@invoke-ai/ui-library": "^0.0.43",
"@nanostores/react": "^0.7.3",
"@reduxjs/toolkit": "2.2.3",
"@roarr/browser-log-writer": "^1.3.0",
@@ -114,8 +114,7 @@
},
"peerDependencies": {
"react": "^18.2.0",
"react-dom": "^18.2.0",
"ts-toolbelt": "^9.6.0"
"react-dom": "^18.2.0"
},
"devDependencies": {
"@invoke-ai/eslint-config-react": "^0.0.14",
@@ -149,8 +148,8 @@
"prettier": "^3.3.3",
"rollup-plugin-visualizer": "^5.12.0",
"storybook": "^8.3.4",
"ts-toolbelt": "^9.6.0",
"tsafe": "^1.7.5",
"type-fest": "^4.26.1",
"typescript": "^5.6.2",
"vite": "^5.4.8",
"vite-plugin-css-injected-by-js": "^3.5.2",

View File

@@ -24,8 +24,8 @@ dependencies:
specifier: ^5.1.0
version: 5.1.0
'@invoke-ai/ui-library':
specifier: ^0.0.42
version: 0.0.42(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1)
specifier: ^0.0.43
version: 0.0.43(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1)
'@nanostores/react':
specifier: ^0.7.3
version: 0.7.3(nanostores@0.11.3)(react@18.3.1)
@@ -277,12 +277,12 @@ devDependencies:
storybook:
specifier: ^8.3.4
version: 8.3.4
ts-toolbelt:
specifier: ^9.6.0
version: 9.6.0
tsafe:
specifier: ^1.7.5
version: 1.7.5
type-fest:
specifier: ^4.26.1
version: 4.26.1
typescript:
specifier: ^5.6.2
version: 5.6.2
@@ -1696,20 +1696,20 @@ packages:
prettier: 3.3.3
dev: true
/@invoke-ai/ui-library@0.0.42(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-OuDXRipBO5mu+Nv4qN8cd8MiwiGBdq6h4PirVgPI9/ltbdcIzePgUJ0dJns26lflHSTRWW38I16wl4YTw3mNWA==}
/@invoke-ai/ui-library@0.0.43(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-t3fPYyks07ue3dEBPJuTHbeDLnDckDCOrtvc07mMDbLOnlPEZ0StaeiNGH+oO8qLzAuMAlSTdswgHfzTc2MmPw==}
peerDependencies:
'@fontsource-variable/inter': ^5.0.16
react: ^18.2.0
react-dom: ^18.2.0
dependencies:
'@chakra-ui/anatomy': 2.2.2
'@chakra-ui/anatomy': 2.3.4
'@chakra-ui/icons': 2.2.4(@chakra-ui/react@2.10.2)(react@18.3.1)
'@chakra-ui/layout': 2.3.1(@chakra-ui/system@2.6.2)(react@18.3.1)
'@chakra-ui/portal': 2.1.0(react-dom@18.3.1)(react@18.3.1)
'@chakra-ui/react': 2.10.2(@emotion/react@11.13.3)(@emotion/styled@11.13.0)(@types/react@18.3.11)(framer-motion@11.10.0)(react-dom@18.3.1)(react@18.3.1)
'@chakra-ui/styled-system': 2.9.2
'@chakra-ui/theme-tools': 2.1.2(@chakra-ui/styled-system@2.9.2)
'@chakra-ui/styled-system': 2.11.2(react@18.3.1)
'@chakra-ui/theme-tools': 2.2.6(@chakra-ui/styled-system@2.11.2)(react@18.3.1)
'@emotion/react': 11.13.3(@types/react@18.3.11)(react@18.3.1)
'@emotion/styled': 11.13.0(@emotion/react@11.13.3)(@types/react@18.3.11)(react@18.3.1)
'@fontsource-variable/inter': 5.1.0
@@ -8830,10 +8830,6 @@ packages:
resolution: {integrity: sha512-tLJxacIQUM82IR7JO1UUkKlYuUTmoY9HBJAmNWFzheSlDS5SPMcNIepejHJa4BpPQLAcbRhRf3GDJzyj6rbKvA==}
dev: false
/ts-toolbelt@9.6.0:
resolution: {integrity: sha512-nsZd8ZeNUzukXPlJmTBwUAuABDe/9qtVDelJeT/qW0ow3ZS3BsQJtNkan1802aM9Uf68/Y8ljw86Hu0h5IUW3w==}
dev: true
/tsafe@1.7.5:
resolution: {integrity: sha512-tbNyyBSbwfbilFfiuXkSOj82a6++ovgANwcoqBAcO9/REPoZMEQoE8kWPeO0dy5A2D/2Lajr8Ohue5T0ifIvLQ==}
dev: true

Binary file not shown (image added; 895 KiB).

View File

@@ -93,7 +93,10 @@
"placeholderSelectAModel": "Modell auswählen",
"reset": "Zurücksetzen",
"none": "Keine",
"new": "Neu"
"new": "Neu",
"ok": "OK",
"close": "Schließen",
"clipboard": "Zwischenablage"
},
"gallery": {
"galleryImageSize": "Bildgröße",
@@ -156,7 +159,11 @@
"displayBoardSearch": "Board durchsuchen",
"displaySearch": "Bild suchen",
"go": "Los",
"jump": "Springen"
"jump": "Springen",
"assetsTab": "Dateien, die Sie zur Verwendung in Ihren Projekten hochgeladen haben.",
"imagesTab": "Bilder, die Sie in Invoke erstellt und gespeichert haben.",
"boardsSettings": "Ordnereinstellungen",
"imagesSettings": "Galeriebildereinstellungen"
},
"hotkeys": {
"noHotkeysFound": "Kein Hotkey gefunden",
@@ -267,6 +274,18 @@
"applyFilter": {
"title": "Filter anwenden",
"desc": "Wende den ausstehenden Filter auf die ausgewählte Ebene an."
},
"cancelFilter": {
"title": "Filter abbrechen",
"desc": "Den ausstehenden Filter abbrechen."
},
"applyTransform": {
"desc": "Die ausstehende Transformation auf die ausgewählte Ebene anwenden.",
"title": "Transformation anwenden"
},
"cancelTransform": {
"title": "Transformation abbrechen",
"desc": "Die ausstehende Transformation abbrechen."
}
},
"viewer": {
@@ -517,14 +536,12 @@
"addModels": "Model hinzufügen",
"deleteModelImage": "Lösche Model Bild",
"huggingFaceRepoID": "HuggingFace Repo ID",
"hfToken": "HuggingFace Schlüssel",
"huggingFacePlaceholder": "besitzer/model-name",
"modelSettings": "Modelleinstellungen",
"typePhraseHere": "Phrase hier eingeben",
"spandrelImageToImage": "Bild zu Bild (Spandrel)",
"starterModels": "Einstiegsmodelle",
"t5Encoder": "T5-Kodierer",
"useDefaultSettings": "Standardeinstellungen verwenden",
"uploadImage": "Bild hochladen",
"urlOrLocalPath": "URL oder lokaler Pfad",
"install": "Installieren",
@@ -563,7 +580,18 @@
"scanResults": "Ergebnisse des Scans",
"urlOrLocalPathHelper": "URLs sollten auf eine einzelne Datei deuten. Lokale Pfade können zusätzlich auch auf einen Ordner für ein einzelnes Diffusers-Modell hinweisen.",
"inplaceInstallDesc": "Installieren Sie Modelle, ohne die Dateien zu kopieren. Wenn Sie das Modell verwenden, wird es direkt von seinem Speicherort geladen. Wenn deaktiviert, werden die Dateien während der Installation in das von Invoke verwaltete Modellverzeichnis kopiert.",
"scanFolderHelper": "Der Ordner wird rekursiv nach Modellen durchsucht. Dies kann bei sehr großen Ordnern etwas dauern."
"scanFolderHelper": "Der Ordner wird rekursiv nach Modellen durchsucht. Dies kann bei sehr großen Ordnern etwas dauern.",
"includesNModels": "Enthält {{n}} Modelle und deren Abhängigkeiten",
"starterBundles": "Starterpakete",
"installingXModels_one": "{{count}} Modell wird installiert",
"installingXModels_other": "{{count}} Modelle werden installiert",
"skippingXDuplicates_one": ", überspringe {{count}} Duplikat",
"skippingXDuplicates_other": ", überspringe {{count}} Duplikate",
"installingModel": "Modell wird installiert",
"loraTriggerPhrases": "LoRA-Auslösephrasen",
"installingBundle": "Bündel wird installiert",
"triggerPhrases": "Auslösephrasen",
"mainModelTriggerPhrases": "Hauptmodell-Auslösephrasen"
},
"parameters": {
"images": "Bilder",
@@ -649,10 +677,41 @@
"toast": {
"uploadFailed": "Hochladen fehlgeschlagen",
"imageCopied": "Bild kopiert",
"parametersNotSet": "Parameter nicht festgelegt",
"parametersNotSet": "Parameter nicht zurückgerufen",
"addedToBoard": "Dem Board hinzugefügt",
"loadedWithWarnings": "Workflow mit Warnungen geladen",
"imageSaved": "Bild gespeichert"
"imageSaved": "Bild gespeichert",
"linkCopied": "Link kopiert",
"problemCopyingLayer": "Ebene kann nicht kopiert werden",
"problemSavingLayer": "Ebene kann nicht gespeichert werden",
"parameterSetDesc": "{{parameter}} zurückgerufen",
"imageUploaded": "Bild hochgeladen",
"problemCopyingImage": "Bild kann nicht kopiert werden",
"parameterNotSetDesc": "{{parameter}} kann nicht zurückgerufen werden",
"prunedQueue": "Warteschlange bereinigt",
"modelAddedSimple": "Modell zur Warteschlange hinzugefügt",
"parametersSet": "Parameter zurückgerufen",
"imageNotLoadedDesc": "Bild konnte nicht gefunden werden",
"setControlImage": "Als Kontrollbild festlegen",
"sentToUpscale": "An Vergrößerung gesendet",
"parameterNotSetDescWithMessage": "{{parameter}} kann nicht zurückgerufen werden: {{message}}",
"unableToLoadImageMetadata": "Bildmetadaten können nicht geladen werden",
"unableToLoadImage": "Bild kann nicht geladen werden",
"serverError": "Serverfehler",
"parameterNotSet": "Parameter nicht zurückgerufen",
"sessionRef": "Sitzung: {{sessionId}}",
"problemDownloadingImage": "Bild kann nicht heruntergeladen werden",
"parameters": "Parameter",
"parameterSet": "Parameter zurückgerufen",
"importFailed": "Import fehlgeschlagen",
"importSuccessful": "Import erfolgreich",
"setNodeField": "Als Knotenfeld festlegen",
"somethingWentWrong": "Etwas ist schief gelaufen",
"workflowLoaded": "Arbeitsablauf geladen",
"workflowDeleted": "Arbeitsablauf gelöscht",
"errorCopied": "Fehler kopiert",
"layerCopiedToClipboard": "Ebene in die Zwischenablage kopiert",
"sentToCanvas": "An Leinwand gesendet"
},
"accessibility": {
"uploadImage": "Bild hochladen",
@@ -667,7 +726,8 @@
"about": "Über",
"submitSupportTicket": "Support-Ticket senden",
"toggleRightPanel": "Rechtes Bedienfeld umschalten (G)",
"toggleLeftPanel": "Linkes Bedienfeld umschalten (T)"
"toggleLeftPanel": "Linkes Bedienfeld umschalten (T)",
"uploadImages": "Bild(er) hochladen"
},
"boards": {
"autoAddBoard": "Board automatisch erstellen",
@@ -702,7 +762,7 @@
"shared": "Geteilte Ordner",
"archiveBoard": "Ordner archivieren",
"archived": "Archiviert",
"noBoards": "Kein {boardType}} Ordner",
"noBoards": "Kein {{boardType}} Ordner",
"hideBoards": "Ordner verstecken",
"viewBoards": "Ordner ansehen",
"deletedPrivateBoardsCannotbeRestored": "Gelöschte Boards können nicht wiederhergestellt werden. Wenn Sie „Nur Board löschen“ wählen, werden die Bilder in einen privaten, nicht kategorisierten Status für den Ersteller des Bildes versetzt.",
@@ -795,7 +855,6 @@
"width": "Breite",
"createdBy": "Erstellt von",
"steps": "Schritte",
"seamless": "Nahtlos",
"positivePrompt": "Positiver Prompt",
"generationMode": "Generierungsmodus",
"Threshold": "Rauschen-Schwelle",
@@ -811,7 +870,8 @@
"parameterSet": "Parameter {{parameter}} setzen",
"recallParameter": "{{label}} Abrufen",
"parsingFailed": "Parsing Fehlgeschlagen",
"canvasV2Metadata": "Leinwand"
"canvasV2Metadata": "Leinwand",
"guidance": "Führung"
},
"popovers": {
"noiseUseCPU": {
@@ -1137,7 +1197,21 @@
"workflowNotes": "Notizen",
"workflowTags": "Tags",
"workflowVersion": "Version",
"saveToGallery": "In Galerie speichern"
"saveToGallery": "In Galerie speichern",
"noWorkflows": "Keine Arbeitsabläufe",
"noMatchingWorkflows": "Keine passenden Arbeitsabläufe",
"unknownErrorValidatingWorkflow": "Unbekannter Fehler beim Validieren des Arbeitsablaufes",
"inputFieldTypeParseError": "Typ des Eingabefelds {{node}}.{{field}} kann nicht analysiert werden ({{message}})",
"workflowSettings": "Arbeitsablauf Editor Einstellungen",
"unableToLoadWorkflow": "Arbeitsablauf kann nicht geladen werden",
"viewMode": "In linearen Ansicht verwenden",
"unableToValidateWorkflow": "Arbeitsablauf kann nicht validiert werden",
"outputFieldTypeParseError": "Typ des Ausgabefelds {{node}}.{{field}} kann nicht analysiert werden ({{message}})",
"unableToGetWorkflowVersion": "Version des Arbeitsablaufschemas kann nicht bestimmt werden",
"unknownFieldType": "$t(nodes.unknownField) Typ: {{type}}",
"unknownField": "Unbekanntes Feld",
"unableToUpdateNodes_one": "{{count}} Knoten kann nicht aktualisiert werden",
"unableToUpdateNodes_other": "{{count}} Knoten können nicht aktualisiert werden"
},
"hrf": {
"enableHrf": "Korrektur für hohe Auflösungen",
@@ -1267,15 +1341,7 @@
"enableLogging": "Protokollierung aktivieren"
},
"whatsNew": {
"whatsNewInInvoke": "Was gibt's Neues",
"canvasV2Announcement": {
"fluxSupport": "Unterstützung für Flux-Modelle",
"newCanvas": "Eine leistungsstarke neue Kontrollfläche",
"newLayerTypes": "Neue Ebenentypen für noch mehr Kontrolle",
"readReleaseNotes": "Anmerkungen zu dieser Version lesen",
"watchReleaseVideo": "Video über diese Version anzeigen",
"watchUiUpdatesOverview": "Interface-Updates Übersicht"
}
"whatsNewInInvoke": "Was gibt's Neues"
},
"stylePresets": {
"name": "Name",

View File

@@ -94,6 +94,7 @@
"close": "Close",
"copy": "Copy",
"copyError": "$t(gallery.copy) Error",
"clipboard": "Clipboard",
"on": "On",
"off": "Off",
"or": "or",
@@ -681,7 +682,8 @@
"recallParameters": "Recall Parameters",
"recallParameter": "Recall {{label}}",
"scheduler": "Scheduler",
"seamless": "Seamless",
"seamlessXAxis": "Seamless X Axis",
"seamlessYAxis": "Seamless Y Axis",
"seed": "Seed",
"steps": "Steps",
"strength": "Image to image strength",
@@ -712,8 +714,12 @@
"convertToDiffusersHelpText4": "This is a one time process only. It might take around 30s-60s depending on the specifications of your computer.",
"convertToDiffusersHelpText5": "Please make sure you have enough disk space. Models generally vary between 2GB-7GB in size.",
"convertToDiffusersHelpText6": "Do you wish to convert this model?",
"noDefaultSettings": "No default settings configured for this model. Visit the Model Manager to add default settings.",
"defaultSettings": "Default Settings",
"defaultSettingsSaved": "Default Settings Saved",
"defaultSettingsOutOfSync": "Some settings do not match the model's defaults:",
"restoreDefaultSettings": "Click to use the model's default settings.",
"usingDefaultSettings": "Using model's default settings",
"delete": "Delete",
"deleteConfig": "Delete Config",
"deleteModel": "Delete Model",
@@ -727,7 +733,17 @@
"huggingFacePlaceholder": "owner/model-name",
"huggingFaceRepoID": "HuggingFace Repo ID",
"huggingFaceHelper": "If multiple models are found in this repo, you will be prompted to select one to install.",
"hfToken": "HuggingFace Token",
"hfTokenLabel": "HuggingFace Token (Required for some models)",
"hfTokenHelperText": "A HF token is required to use some models. Click here to create or get your token.",
"hfTokenInvalid": "Invalid or Missing HF Token",
"hfForbidden": "You do not have access to this HF model",
"hfForbiddenErrorMessage": "We recommend visiting the repo page on HuggingFace.com. The owner may require acceptance of terms in order to download.",
"hfTokenInvalidErrorMessage": "Invalid or missing HuggingFace token.",
"hfTokenRequired": "You are trying to download a model that requires a valid HuggingFace Token.",
"hfTokenInvalidErrorMessage2": "Update it in the ",
"hfTokenUnableToVerify": "Unable to Verify HF Token",
"hfTokenUnableToVerifyErrorMessage": "Unable to verify HuggingFace token. This is likely due to a network error. Please try again later.",
"hfTokenSaved": "HF Token Saved",
"imageEncoderModelId": "Image Encoder Model ID",
"includesNModels": "Includes {{n}} models and their dependencies",
"installQueue": "Install Queue",
@@ -798,7 +814,6 @@
"uploadImage": "Upload Image",
"urlOrLocalPath": "URL or Local Path",
"urlOrLocalPathHelper": "URLs should point to a single file. Local paths can point to a single file or folder for a single diffusers model.",
"useDefaultSettings": "Use Default Settings",
"vae": "VAE",
"vaePrecision": "VAE Precision",
"variant": "Variant",
@@ -982,6 +997,7 @@
"controlNetControlMode": "Control Mode",
"copyImage": "Copy Image",
"denoisingStrength": "Denoising Strength",
"noRasterLayers": "No Raster Layers",
"downloadImage": "Download Image",
"general": "General",
"guidance": "Guidance",
@@ -1032,6 +1048,7 @@
"patchmatchDownScaleSize": "Downscale",
"perlinNoise": "Perlin Noise",
"positivePromptPlaceholder": "Positive Prompt",
"recallMetadata": "Recall Metadata",
"iterations": "Iterations",
"scale": "Scale",
"scaleBeforeProcessing": "Scale Before Processing",
@@ -1108,6 +1125,9 @@
"enableInformationalPopovers": "Enable Informational Popovers",
"informationalPopoversDisabled": "Informational Popovers Disabled",
"informationalPopoversDisabledDesc": "Informational popovers have been disabled. Enable them in Settings.",
"enableModelDescriptions": "Enable Model Descriptions in Dropdowns",
"modelDescriptionsDisabled": "Model Descriptions in Dropdowns Disabled",
"modelDescriptionsDisabledDesc": "Model descriptions in dropdowns have been disabled. Enable them in Settings.",
"enableInvisibleWatermark": "Enable Invisible Watermark",
"enableNSFWChecker": "Enable NSFW Checker",
"general": "General",
@@ -1251,6 +1271,33 @@
"heading": "Mask Adjustments",
"paragraphs": ["Adjust the mask."]
},
"inpainting": {
"heading": "Inpainting",
"paragraphs": ["Controls which area is modified, guided by Denoising Strength."]
},
"rasterLayer": {
"heading": "Raster Layer",
"paragraphs": ["Pixel-based content of your canvas, used during image generation."]
},
"regionalGuidance": {
"heading": "Regional Guidance",
"paragraphs": ["Brush to guide where elements from global prompts should appear."]
},
"regionalGuidanceAndReferenceImage": {
"heading": "Regional Guidance and Regional Reference Image",
"paragraphs": [
"For Regional Guidance, brush to guide where elements from global prompts should appear.",
"For Regional Reference Image, brush to apply a reference image to specific areas."
]
},
"globalReferenceImage": {
"heading": "Global Reference Image",
"paragraphs": ["Applies a reference image to influence the entire generation."]
},
"regionalReferenceImage": {
"heading": "Regional Reference Image",
"paragraphs": ["Brush to apply a reference image to specific areas."]
},
"controlNet": {
"heading": "ControlNet",
"paragraphs": [
@@ -1366,8 +1413,9 @@
"paramDenoisingStrength": {
"heading": "Denoising Strength",
"paragraphs": [
"How much noise is added to the input image.",
"0 will result in an identical image, while 1 will result in a completely new image."
"Controls how much the generated image varies from the raster layer(s).",
"Lower strength stays closer to the combined visible raster layers. Higher strength relies more on the global prompt.",
"When there are no raster layers with visible content, this setting is ignored."
]
},
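Mechanically, in the common image-to-image convention (a sketch of the usual diffusers-style behavior, not necessarily Invoke's exact scheduler code), the strength described above decides how many scheduler steps actually run, starting from a noised copy of the raster content:

num_steps = 30
strength = 0.75

steps_to_run = int(num_steps * strength)  # 22 of 30 steps are denoised
start_step = num_steps - steps_to_run     # the first 8 steps are skipped
print(start_step, steps_to_run)           # -> 8 22

# strength -> 0.0: no steps run; the output is the combined raster layers.
# strength -> 1.0: all steps run from (near-)pure noise; the prompt dominates.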
"paramHeight": {
@@ -1606,14 +1654,17 @@
"newControlLayerError": "Problem Creating Control Layer",
"newRasterLayerOk": "Created Raster Layer",
"newRasterLayerError": "Problem Creating Raster Layer",
"newFromImage": "New from Image",
"pullBboxIntoLayerOk": "Bbox Pulled Into Layer",
"pullBboxIntoLayerError": "Problem Pulling BBox Into Layer",
"pullBboxIntoReferenceImageOk": "Bbox Pulled Into ReferenceImage",
"pullBboxIntoReferenceImageError": "Problem Pulling BBox Into ReferenceImage",
"regionIsEmpty": "Selected region is empty",
"mergeVisible": "Merge Visible",
"mergeVisibleOk": "Merged visible layers",
"mergeVisibleError": "Error merging visible layers",
"mergeDown": "Merge Down",
"mergeVisibleOk": "Merged layers",
"mergeVisibleError": "Error merging layers",
"mergingLayers": "Merging layers",
"clearHistory": "Clear History",
"bboxOverlay": "Show Bbox Overlay",
"resetCanvas": "Reset Canvas",
@@ -1648,6 +1699,8 @@
"controlLayer": "Control Layer",
"inpaintMask": "Inpaint Mask",
"regionalGuidance": "Regional Guidance",
"canvasAsRasterLayer": "$t(controlLayers.canvas) as $t(controlLayers.rasterLayer)",
"canvasAsControlLayer": "$t(controlLayers.canvas) as $t(controlLayers.controlLayer)",
"referenceImage": "Reference Image",
"regionalReferenceImage": "Regional Reference Image",
"globalReferenceImage": "Global Reference Image",
@@ -1688,8 +1741,18 @@
"layer_other": "Layers",
"layer_withCount_one": "Layer ({{count}})",
"layer_withCount_other": "Layers ({{count}})",
"convertToControlLayer": "Convert to Control Layer",
"convertToRasterLayer": "Convert to Raster Layer",
"convertRasterLayerTo": "Convert $t(controlLayers.rasterLayer) To",
"convertControlLayerTo": "Convert $t(controlLayers.controlLayer) To",
"convertInpaintMaskTo": "Convert $t(controlLayers.inpaintMask) To",
"convertRegionalGuidanceTo": "Convert $t(controlLayers.regionalGuidance) To",
"copyRasterLayerTo": "Copy $t(controlLayers.rasterLayer) To",
"copyControlLayerTo": "Copy $t(controlLayers.controlLayer) To",
"copyInpaintMaskTo": "Copy $t(controlLayers.inpaintMask) To",
"copyRegionalGuidanceTo": "Copy $t(controlLayers.regionalGuidance) To",
"newRasterLayer": "New $t(controlLayers.rasterLayer)",
"newControlLayer": "New $t(controlLayers.controlLayer)",
"newInpaintMask": "New $t(controlLayers.inpaintMask)",
"newRegionalGuidance": "New $t(controlLayers.regionalGuidance)",
"transparency": "Transparency",
"enableTransparencyEffect": "Enable Transparency Effect",
"disableTransparencyEffect": "Disable Transparency Effect",
@@ -1713,9 +1776,11 @@
"newGallerySessionDesc": "This will clear the canvas and all settings except for your model selection. Generations will be sent to the gallery.",
"newCanvasSession": "New Canvas Session",
"newCanvasSessionDesc": "This will clear the canvas and all settings except for your model selection. Generations will be staged on the canvas.",
"replaceCurrent": "Replace Current",
"controlLayerEmptyState": "<UploadButton>Upload an image</UploadButton>, drag an image from the <GalleryButton>gallery</GalleryButton> onto this layer, or draw on the canvas to get started.",
"controlMode": {
"controlMode": "Control Mode",
"balanced": "Balanced",
"balanced": "Balanced (recommended)",
"prompt": "Prompt",
"control": "Control",
"megaControl": "Mega Control"
@@ -1754,6 +1819,9 @@
"process": "Process",
"apply": "Apply",
"cancel": "Cancel",
"advanced": "Advanced",
"processingLayerWith": "Processing layer with the {{type}} filter.",
"forMoreControl": "For more control, click Advanced below.",
"spandrel_filter": {
"label": "Image-to-Image Model",
"description": "Run an image-to-image model on the selected layer.",
@@ -1842,6 +1910,25 @@
"apply": "Apply",
"cancel": "Cancel"
},
"selectObject": {
"selectObject": "Select Object",
"pointType": "Point Type",
"invertSelection": "Invert Selection",
"include": "Include",
"exclude": "Exclude",
"neutral": "Neutral",
"apply": "Apply",
"reset": "Reset",
"saveAs": "Save As",
"cancel": "Cancel",
"process": "Process",
"help1": "Select a single target object. Add <Bold>Include</Bold> and <Bold>Exclude</Bold> points to indicate which parts of the layer are part of the target object.",
"help2": "Start with one <Bold>Include</Bold> point within the target object. Add more points to refine the selection. Fewer points typically produce better results.",
"help3": "Invert the selection to select everything except the target object.",
"clickToAdd": "Click on the layer to add a point",
"dragToMove": "Drag a point to move it",
"clickToRemove": "Click on a point to remove it"
},
"settings": {
"snapToGrid": {
"label": "Snap to Grid",
@@ -1852,10 +1939,10 @@
"label": "Preserve Masked Region",
"alert": "Preserving Masked Region"
},
"isolatedPreview": "Isolated Preview",
"isolatedStagingPreview": "Isolated Staging Preview",
"isolatedFilteringPreview": "Isolated Filtering Preview",
"isolatedTransformingPreview": "Isolated Transforming Preview",
"isolatedPreview": "Isolated Preview",
"isolatedLayerPreview": "Isolated Layer Preview",
"isolatedLayerPreviewDesc": "Whether to show only this layer when performing operations like filtering or transforming.",
"invertBrushSizeScrollDirection": "Invert Scroll for Brush Size",
"pressureSensitivity": "Pressure Sensitivity"
},
@@ -1881,6 +1968,8 @@
"newRegionalReferenceImage": "New Regional Reference Image",
"newControlLayer": "New Control Layer",
"newRasterLayer": "New Raster Layer",
"newInpaintMask": "New Inpaint Mask",
"newRegionalGuidance": "New Regional Guidance",
"cropCanvasToBbox": "Crop Canvas to Bbox"
},
"stagingArea": {
@@ -2013,13 +2102,10 @@
},
"whatsNew": {
"whatsNewInInvoke": "What's New in Invoke",
"canvasV2Announcement": {
"newCanvas": "A powerful new control canvas",
"newLayerTypes": "New layer types for even more control",
"fluxSupport": "Support for the Flux family of models",
"readReleaseNotes": "Read Release Notes",
"watchReleaseVideo": "Watch Release Video",
"watchUiUpdatesOverview": "Watch UI Updates Overview"
}
"line1": "<StrongComponent>Layer Merging</StrongComponent>: New <StrongComponent>Merge Down</StrongComponent> and improved <StrongComponent>Merge Visible</StrongComponent> for all layers, with special handling for Regional Guidance and Control Layers.",
"line2": "<StrongComponent>HF Token Support</StrongComponent>: Upload models that require Hugging Face authentication.",
"readReleaseNotes": "Read Release Notes",
"watchRecentReleaseVideos": "Watch Recent Release Videos",
"watchUiUpdatesOverview": "Watch UI Updates Overview"
}
}

View File

@@ -5,8 +5,8 @@
"reportBugLabel": "Signaler un bug",
"settingsLabel": "Paramètres",
"img2img": "Image vers Image",
"nodes": "Processus",
"upload": "Télécharger",
"nodes": "Workflows",
"upload": "Importer",
"load": "Charger",
"back": "Retour",
"statusDisconnected": "Hors ligne",
@@ -51,7 +51,7 @@
"green": "Vert",
"delete": "Supprimer",
"simple": "Simple",
"template": "Modèle",
"template": "Template",
"advanced": "Avancé",
"copy": "Copier",
"saveAs": "Enregistrer sous",
@@ -95,7 +95,8 @@
"positivePrompt": "Prompt Positif",
"negativePrompt": "Prompt Négatif",
"ok": "Ok",
"close": "Fermer"
"close": "Fermer",
"clipboard": "Presse-papier"
},
"gallery": {
"galleryImageSize": "Taille de l'image",
@@ -117,8 +118,8 @@
"bulkDownloadRequestFailed": "Problème lors de la préparation du téléchargement",
"copy": "Copier",
"autoAssignBoardOnClick": "Assigner automatiquement une Planche lors du clic",
"dropToUpload": "$t(gallery.drop) pour Charger",
"dropOrUpload": "$t(gallery.drop) ou Séléctioner",
"dropToUpload": "$t(gallery.drop) pour Importer",
"dropOrUpload": "$t(gallery.drop) ou Importer",
"oldestFirst": "Plus Ancien en premier",
"deleteImagePermanent": "Les Images supprimées ne peuvent pas être restorées.",
"displaySearch": "Recherche d'Image",
@@ -161,7 +162,7 @@
"unstarImage": "Retirer le marquage de l'Image",
"viewerImage": "Visualisation de l'Image",
"imagesSettings": "Paramètres des images de la galerie",
"assetsTab": "Fichiers que vous avez chargé pour vos projets.",
"assetsTab": "Fichiers que vous avez importés pour vos projets.",
"imagesTab": "Images que vous avez créées et enregistrées dans Invoke.",
"boardsSettings": "Paramètres des planches"
},
@@ -219,7 +220,6 @@
"typePhraseHere": "Écrire une phrase ici",
"cancel": "Annuler",
"defaultSettingsSaved": "Paramètres par défaut enregistrés",
"hfToken": "Token HuggingFace",
"imageEncoderModelId": "ID du modèle d'encodeur d'image",
"path": "Chemin sur le disque",
"repoVariant": "Variante de dépôt",
@@ -243,7 +243,7 @@
"noModelsInstalled": "Aucun modèle installé",
"urlOrLocalPath": "URL ou chemin local",
"prune": "Vider",
"uploadImage": "Charger une image",
"uploadImage": "Importer une image",
"addModels": "Ajouter des modèles",
"install": "Installer",
"localOnly": "local uniquement",
@@ -254,7 +254,6 @@
"loraModels": "LoRAs",
"main": "Principal",
"urlOrLocalPathHelper": "Les URL doivent pointer vers un seul fichier. Les chemins locaux peuvent pointer vers un seul fichier ou un dossier pour un seul modèle de diffuseurs.",
"useDefaultSettings": "Utiliser les paramètres par défaut",
"modelImageUpdateFailed": "Mise à jour de l'image du modèle échouée",
"loraTriggerPhrases": "Phrases de déclenchement LoRA",
"mainModelTriggerPhrases": "Phrases de déclenchement du modèle principal",
@@ -273,24 +272,39 @@
"spandrelImageToImage": "Image vers Image (Spandrel)",
"starterModelsInModelManager": "Les modèles de démarrage peuvent être trouvés dans le gestionnaire de modèles",
"t5Encoder": "Encodeur T5",
"learnMoreAboutSupportedModels": "En savoir plus sur les modèles que nous prenons en charge"
"learnMoreAboutSupportedModels": "En savoir plus sur les modèles que nous prenons en charge",
"includesNModels": "Contient {{n}} modèles et leurs dépendances",
"starterBundles": "Packs de démarrages",
"starterBundleHelpText": "Installe facilement tous les modèles nécessaire pour démarrer avec un modèle de base, incluant un modèle principal, ControlNets, IP Adapters et plus encore. Choisir un pack igniorera tous les modèles déjà installés.",
"installingXModels_one": "En cours d'installation de {{count}} modèle",
"installingXModels_many": "En cours d'installation de {{count}} modèles",
"installingXModels_other": "En cours d'installation de {{count}} modèles",
"skippingXDuplicates_one": ", en ignorant {{count}} doublon",
"skippingXDuplicates_many": ", en ignorant {{count}} doublons",
"skippingXDuplicates_other": ", en ignorant {{count}} doublons",
"installingModel": "Modèle en cours d'installation",
"installingBundle": "Pack en cours d'installation",
"noDefaultSettings": "Aucun paramètre par défaut configuré pour ce modèle. Visitez le Gestionnaire de Modèles pour ajouter des paramètres par défaut.",
"usingDefaultSettings": "Utilisation des paramètres par défaut du modèle",
"defaultSettingsOutOfSync": "Certain paramètres ne correspondent pas aux valeurs par défaut du modèle :",
"restoreDefaultSettings": "Cliquez pour utiliser les paramètres par défaut du modèle."
},
"parameters": {
"images": "Images",
"steps": "Etapes",
"cfgScale": "CFG Echelle",
"steps": "Étapes",
"cfgScale": "Échelle CFG",
"width": "Largeur",
"height": "Hauteur",
"seed": "Graine",
"shuffle": "Mélanger la graine",
"shuffle": "Nouvelle graine",
"noiseThreshold": "Seuil de Bruit",
"perlinNoise": "Bruit de Perlin",
"type": "Type",
"strength": "Force",
"upscaling": "Agrandissement",
"scale": "Echelle",
"scale": "Échelle",
"imageFit": "Ajuster Image Initiale à la Taille de Sortie",
"scaleBeforeProcessing": "Echelle Avant Traitement",
"scaleBeforeProcessing": "Échelle Avant Traitement",
"scaledWidth": "Larg. Échelle",
"scaledHeight": "Haut. Échelle",
"infillMethod": "Méthode de Remplissage",
@@ -411,35 +425,38 @@
"clearIntermediatesWithCount_other": "Effacé {{count}} Intermédiaires",
"informationalPopoversDisabled": "Pop-ups d'information désactivés",
"informationalPopoversDisabledDesc": "Les pop-ups d'information ont été désactivés. Activez-les dans les paramètres.",
"confirmOnNewSession": "Confirmer lors d'une nouvelle session"
"confirmOnNewSession": "Confirmer lors d'une nouvelle session",
"modelDescriptionsDisabledDesc": "Les descriptions des modèles dans les menus déroulants ont été désactivées. Activez-les dans les paramètres.",
"enableModelDescriptions": "Activer les descriptions de modèle dans les menus déroulants",
"modelDescriptionsDisabled": "Descriptions de modèle dans les menus déroulants désactivés"
},
"toast": {
"uploadFailed": "Téléchargement échoué",
"uploadFailed": "Importation échouée",
"imageCopied": "Image copiée",
"parametersNotSet": "Paramètres non rappelés",
"serverError": "Erreur du serveur",
"uploadFailedInvalidUploadDesc": "Doit être une unique image PNG ou JPEG",
"uploadFailedInvalidUploadDesc": "Doit être des images au format PNG ou JPEG.",
"problemCopyingImage": "Impossible de copier l'image",
"parameterSet": "Paramètre Rappelé",
"parameterNotSet": "Paramètre non Rappelé",
"canceled": "Traitement annulé",
"addedToBoard": "Ajouté à la planche",
"workflowLoaded": "Processus chargé",
"addedToBoard": "Ajouté aux ressources de la planche {{name}}",
"workflowLoaded": "Workflow chargé",
"connected": "Connecté au serveur",
"setNodeField": "Définir comme champ de nœud",
"imageUploadFailed": "Échec de l'importation de l'image",
"loadedWithWarnings": "Processus chargé avec des avertissements",
"loadedWithWarnings": "Workflow chargé avec des avertissements",
"imageUploaded": "Image importée",
"modelAddedSimple": "Modèle ajouté à la file d'attente",
"setControlImage": "Définir comme image de contrôle",
"workflowDeleted": "Processus supprimé",
"workflowDeleted": "Workflow supprimé",
"baseModelChangedCleared_one": "Effacé ou désactivé {{count}} sous-modèle incompatible",
"baseModelChangedCleared_many": "Effacé ou désactivé {{count}} sous-modèles incompatibles",
"baseModelChangedCleared_other": "Effacé ou désactivé {{count}} sous-modèles incompatibles",
"invalidUpload": "Téléchargement invalide",
"invalidUpload": "Importation invalide",
"problemDownloadingImage": "Impossible de télécharger l'image",
"problemRetrievingWorkflow": "Problème de récupération du processus",
"problemDeletingWorkflow": "Problème de suppression du processus",
"problemRetrievingWorkflow": "Problème de récupération du Workflow",
"problemDeletingWorkflow": "Problème de suppression du Workflow",
"prunedQueue": "File d'attente vidée",
"parameters": "Paramètres",
"modelImportCanceled": "Importation du modèle annulée",
@@ -468,10 +485,15 @@
"baseModelChanged": "Modèle de base changé",
"problemSavingLayer": "Impossible d'enregistrer la couche",
"imageNotLoadedDesc": "Image introuvable",
"linkCopied": "Lien copié"
"linkCopied": "Lien copié",
"imagesWillBeAddedTo": "Les images Importées seront ajoutées au ressources de la Planche {{boardName}}.",
"uploadFailedInvalidUploadDesc_withCount_one": "Doit être au maximum une image PNG ou JPEG.",
"uploadFailedInvalidUploadDesc_withCount_many": "Doit être au maximum {{count}} images PNG ou JPEG.",
"uploadFailedInvalidUploadDesc_withCount_other": "Doit être au maximum {{count}} images PNG ou JPEG.",
"addedToUncategorized": "Ajouté aux ressources de la planche $t(boards.uncategorized)"
},
"accessibility": {
"uploadImage": "Charger une image",
"uploadImage": "Importer une image",
"reset": "Réinitialiser",
"nextImage": "Image suivante",
"previousImage": "Image précédente",
@@ -483,7 +505,8 @@
"submitSupportTicket": "Envoyer un ticket de support",
"resetUI": "$t(accessibility.reset) l'Interface Utilisateur",
"toggleRightPanel": "Afficher/Masquer le panneau de droite (G)",
"toggleLeftPanel": "Afficher/Masquer le panneau de gauche (T)"
"toggleLeftPanel": "Afficher/Masquer le panneau de gauche (T)",
"uploadImages": "Importer Image(s)"
},
"boards": {
"move": "Déplacer",
@@ -533,7 +556,7 @@
"accordions": {
"advanced": {
"title": "Avancé",
"options": "$t(accordions.advanced.title) Options"
"options": "Options $t(accordions.advanced.title)"
},
"image": {
"title": "Image"
@@ -614,7 +637,7 @@
"graphQueued": "Graph ajouté à la file d'attente",
"other": "Autre",
"generation": "Génération",
"workflows": "Processus",
"workflows": "Workflows",
"batchFailedToQueue": "Impossible d'ajouter le Lot dans à la file d'attente",
"graphFailedToQueue": "Impossible d'ajouter le graph à la file d'attente",
"item": "Élément",
@@ -687,8 +710,8 @@
"desc": "Rappelle toutes les métadonnées pour l'image actuelle."
},
"loadWorkflow": {
"title": "Charger le processus",
"desc": "Charge le processus enregistré de l'image actuelle (s'il en a un)."
"title": "Ouvrir un Workflow",
"desc": "Charge le workflow enregistré lié à l'image actuelle (s'il en a un)."
},
"recallSeed": {
"desc": "Rappelle la graine pour l'image actuelle.",
@@ -739,8 +762,8 @@
"desc": "Séléctionne l'onglet Agrandissement."
},
"selectWorkflowsTab": {
"desc": "Sélectionne l'onglet Processus.",
"title": "Sélectionner l'onglet Processus"
"desc": "Sélectionne l'onglet Workflows.",
"title": "Sélectionner l'onglet Workflows"
},
"togglePanels": {
"desc": "Affiche ou masque les panneaux gauche et droit en même temps.",
@@ -946,11 +969,11 @@
},
"undo": {
"title": "Annuler",
"desc": "Annule la dernière action de processus."
"desc": "Annule la dernière action de workflow."
},
"redo": {
"title": "Rétablir",
"desc": "Rétablit la dernière action de processus."
"desc": "Rétablit la dernière action de workflow."
},
"addNode": {
"desc": "Ouvre le menu d'ajout de nœud.",
@@ -968,7 +991,7 @@
"desc": "Colle les nœuds et les connections copiés.",
"title": "Coller"
},
"title": "Processus"
"title": "Workflows"
}
},
"popovers": {
@@ -1355,6 +1378,43 @@
"Des valeurs de guidage élevées peuvent entraîner une saturation excessive, et un guidage élevé ou faible peut entraîner des résultats de génération déformés. Le guidage ne s'applique qu'aux modèles FLUX DEV."
],
"heading": "Guidage"
},
"globalReferenceImage": {
"heading": "Image de Référence Globale",
"paragraphs": [
"Applique une image de référence pour influencer l'ensemble de la génération."
]
},
"regionalReferenceImage": {
"heading": "Image de Référence Régionale",
"paragraphs": [
"Pinceau pour appliquer une image de référence à des zones spécifiques."
]
},
"inpainting": {
"heading": "Inpainting",
"paragraphs": [
"Contrôle la zone qui est modifiée, guidé par la force de débruitage."
]
},
"regionalGuidance": {
"heading": "Guide Régional",
"paragraphs": [
"Pinceau pour guider l'emplacement des éléments provenant des prompts globaux."
]
},
"regionalGuidanceAndReferenceImage": {
"heading": "Guide régional et image de référence régionale",
"paragraphs": [
"Pour le Guide Régional, utilisez le pinceau pour indiquer où les éléments des prompts globaux doivent apparaître.",
"Pour l'image de référence régionale, pinceau pour appliquer une image de référence à des zones spécifiques."
]
},
"rasterLayer": {
"heading": "Couche Rastérisation",
"paragraphs": [
"Contenu basé sur les pixels de votre toile, utilisé lors de la génération d'images."
]
}
},
"dynamicPrompts": {
@@ -1375,12 +1435,11 @@
"positivePrompt": "Prompt Positif",
"allPrompts": "Tous les Prompts",
"negativePrompt": "Prompt Négatif",
"seamless": "Sans jointure",
"metadata": "Métadonné",
"scheduler": "Planificateur",
"imageDetails": "Détails de l'Image",
"seed": "Graine",
"workflow": "Processus",
"workflow": "Workflow",
"width": "Largeur",
"Threshold": "Seuil de bruit",
"noMetaData": "Aucune métadonnée trouvée",
@@ -1400,13 +1459,14 @@
"parameterSet": "Paramètre {{parameter}} défini",
"parsingFailed": "L'analyse a échoué",
"recallParameter": "Rappeler {{label}}",
"canvasV2Metadata": "Toile"
"canvasV2Metadata": "Toile",
"guidance": "Guide"
},
"sdxl": {
"freePromptStyle": "Écriture de Prompt manuelle",
"concatPromptStyle": "Lier Prompt & Style",
"negStylePrompt": "Prompt Négatif",
"posStylePrompt": "Prompt Positif",
"negStylePrompt": "Style Prompt Négatif",
"posStylePrompt": "Style Prompt Positif",
"refinerStart": "Démarrer le Refiner",
"denoisingStrength": "Force de débruitage",
"steps": "Étapes",
@@ -1428,8 +1488,8 @@
"hideMinimapnodes": "Masquer MiniCarte",
"zoomOutNodes": "Dézoomer",
"zoomInNodes": "Zoomer",
"downloadWorkflow": "Télécharger processus en JSON",
"loadWorkflow": "Charger le processus",
"downloadWorkflow": "Exporter le Workflow au format JSON",
"loadWorkflow": "Charger un Workflow",
"reloadNodeTemplates": "Recharger les modèles de nœuds",
"animatedEdges": "Connexions animées",
"cannotConnectToSelf": "Impossible de se connecter à soi-même",
@@ -1452,16 +1512,16 @@
"float": "Flottant",
"mismatchedVersion": "Nœud invalide : le nœud {{node}} de type {{type}} a une version incompatible (essayez de mettre à jour?)",
"missingTemplate": "Nœud invalide : le nœud {{node}} de type {{type}} modèle manquant (non installé?)",
"noWorkflow": "Pas de processus",
"noWorkflow": "Pas de Workflow",
"validateConnectionsHelp": "Prévenir la création de connexions invalides et l'invocation de graphes invalides",
"workflowSettings": "Paramètres de l'Éditeur de Processus",
"workflowValidation": "Erreur de validation du processus",
"workflowSettings": "Paramètres de l'Éditeur de Workflow",
"workflowValidation": "Erreur de validation du Workflow",
"executionStateInProgress": "En cours",
"node": "Noeud",
"scheduler": "Planificateur",
"notes": "Notes",
"notesDescription": "Ajouter des notes sur votre processus",
"unableToLoadWorkflow": "Impossible de charger le processus",
"notesDescription": "Ajouter des notes sur votre workflow",
"unableToLoadWorkflow": "Impossible de charger le Workflow",
"addNode": "Ajouter un nœud",
"problemSettingTitle": "Problème lors de définition du Titre",
"connectionWouldCreateCycle": "La connexion créerait un cycle",
@@ -1484,7 +1544,7 @@
"noOutputRecorded": "Aucun résultat enregistré",
"removeLinearView": "Retirer de la vue linéaire",
"snapToGrid": "Aligner sur la grille",
"workflow": "Processus",
"workflow": "Workflow",
"updateApp": "Mettre à jour l'application",
"updateNode": "Mettre à jour le nœud",
"nodeOutputs": "Sorties de nœud",
@@ -1497,7 +1557,7 @@
"string": "Chaîne de caractères",
"workflowName": "Nom",
"snapToGridHelp": "Aligner les nœuds sur la grille lors du déplacement",
"unableToValidateWorkflow": "Impossible de valider le processus",
"unableToValidateWorkflow": "Impossible de valider le Workflow",
"validateConnections": "Valider les connexions et le graphique",
"unableToUpdateNodes_one": "Impossible de mettre à jour {{count}} nœud",
"unableToUpdateNodes_many": "Impossible de mettre à jour {{count}} nœuds",
@@ -1510,15 +1570,15 @@
"nodePack": "Paquet de nœuds",
"sourceNodeDoesNotExist": "Connexion invalide : le nœud source/de sortie {{node}} n'existe pas",
"sourceNodeFieldDoesNotExist": "Connexion invalide : {{node}}.{{field}} n'existe pas",
"unableToGetWorkflowVersion": "Impossible d'obtenir la version du schéma de processus",
"newWorkflowDesc2": "Votre processus actuel comporte des modifications non enregistrées.",
"unableToGetWorkflowVersion": "Impossible d'obtenir la version du schéma du Workflow",
"newWorkflowDesc2": "Votre workflow actuel comporte des modifications non enregistrées.",
"deletedInvalidEdge": "Connexion invalide supprimé {{source}} -> {{target}}",
"targetNodeDoesNotExist": "Connexion invalide : le nœud cible/entrée {{node}} n'existe pas",
"targetNodeFieldDoesNotExist": "Connexion invalide : le champ {{node}}.{{field}} n'existe pas",
"nodeVersion": "Version du noeud",
"clearWorkflowDesc2": "Votre processus actuel comporte des modifications non enregistrées.",
"clearWorkflow": "Effacer le Processus",
"clearWorkflowDesc": "Effacer ce processus et en commencer un nouveau?",
"clearWorkflowDesc2": "Votre workflow actuel comporte des modifications non enregistrées.",
"clearWorkflow": "Effacer le Workflow",
"clearWorkflowDesc": "Effacer ce workflow et en commencer un nouveau?",
"unsupportedArrayItemType": "type d'élément de tableau non pris en charge \"{{type}}\"",
"addLinearView": "Ajouter à la vue linéaire",
"collectionOrScalarFieldType": "{{name}} (Unique ou Collection)",
@@ -1527,7 +1587,7 @@
"ipAdapter": "IP-Adapter",
"viewMode": "Utiliser en vue linéaire",
"collectionFieldType": "{{name}} (Collection)",
"newWorkflow": "Nouveau processus",
"newWorkflow": "Nouveau Workflow",
"reorderLinearView": "Réorganiser la vue linéaire",
"unknownOutput": "Sortie inconnue : {{name}}",
"outputFieldTypeParseError": "Impossible d'analyser le type du champ de sortie {{node}}.{{field}} ({{message}})",
@@ -1537,13 +1597,13 @@
"unknownFieldType": "$t(nodes.unknownField) type : {{type}}",
"inputFieldTypeParseError": "Impossible d'analyser le type du champ d'entrée {{node}}.{{field}} ({{message}})",
"unableToExtractSchemaNameFromRef": "impossible d'extraire le nom du schéma à partir de la référence",
"editMode": "Modifier dans l'éditeur de processus",
"unknownErrorValidatingWorkflow": "Erreur inconnue lors de la validation du processus",
"editMode": "Modifier dans l'éditeur de Workflow",
"unknownErrorValidatingWorkflow": "Erreur inconnue lors de la validation du Workflow",
"updateAllNodes": "Mettre à jour les nœuds",
"allNodesUpdated": "Tous les nœuds mis à jour",
"newWorkflowDesc": "Créer un nouveau processus?",
"newWorkflowDesc": "Créer un nouveau workflow?",
"edit": "Modifier",
"noFieldsViewMode": "Ce processus n'a aucun champ sélectionné à afficher. Consultez le processus complet pour configurer les valeurs.",
"noFieldsViewMode": "Ce workflow n'a aucun champ sélectionné à afficher. Consultez le workflow complet pour configurer les valeurs.",
"graph": "Graph",
"modelAccessError": "Impossible de trouver le modèle {{key}}, réinitialisation aux paramètres par défaut",
"showEdgeLabelsHelp": "Afficher le nom sur les connections, indiquant les nœuds connectés",
@@ -1557,9 +1617,9 @@
"missingInvocationTemplate": "Modèle d'invocation manquant",
"imageAccessError": "Impossible de trouver l'image {{image_name}}, réinitialisation à la valeur par défaut",
"boardAccessError": "Impossible de trouver la planche {{board_id}}, réinitialisation à la valeur par défaut",
"workflowHelpText": "Besoin d'aide? Consultez notre guide sur <LinkComponent>Comment commencer avec les Processus</LinkComponent>.",
"noWorkflows": "Aucun Processus",
"noMatchingWorkflows": "Aucun processus correspondant"
"workflowHelpText": "Besoin d'aide? Consultez notre guide sur <LinkComponent>Comment commencer avec les Workflows</LinkComponent>.",
"noWorkflows": "Aucun Workflows",
"noMatchingWorkflows": "Aucun Workflows correspondant"
},
"models": {
"noMatchingModels": "Aucun modèle correspondant",
@@ -1576,59 +1636,51 @@
},
"workflows": {
"workflowLibrary": "Bibliothèque",
"loading": "Chargement des processus",
"searchWorkflows": "Rechercher des processus",
"workflowCleared": "Processus effacé",
"loading": "Chargement des Workflows",
"searchWorkflows": "Chercher des Workflows",
"workflowCleared": "Workflow effacé",
"noDescription": "Aucune description",
"deleteWorkflow": "Supprimer le processus",
"openWorkflow": "Ouvrir le processus",
"uploadWorkflow": "Charger à partir du fichier",
"workflowName": "Nom du processus",
"unnamedWorkflow": "Processus sans nom",
"saveWorkflowAs": "Enregistrer le processus sous",
"workflows": "Processus",
"savingWorkflow": "Enregistrement du processus...",
"saveWorkflowToProject": "Enregistrer le processus dans le projet",
"deleteWorkflow": "Supprimer le Workflow",
"openWorkflow": "Ouvrir le Workflow",
"uploadWorkflow": "Charger à partir d'un fichier",
"workflowName": "Nom du Workflow",
"unnamedWorkflow": "Workflow sans nom",
"saveWorkflowAs": "Enregistrer le Workflow sous",
"workflows": "Workflows",
"savingWorkflow": "Enregistrement du Workflow...",
"saveWorkflowToProject": "Enregistrer le Workflow dans le projet",
"downloadWorkflow": "Enregistrer dans le fichier",
"saveWorkflow": "Enregistrer le processus",
"problemSavingWorkflow": "Problème de sauvegarde du processus",
"workflowEditorMenu": "Menu de l'Éditeur de Processus",
"newWorkflowCreated": "Nouveau processus créé",
"clearWorkflowSearchFilter": "Réinitialiser le filtre de recherche de processus",
"problemLoading": "Problème de chargement des processus",
"workflowSaved": "Processus enregistré",
"noWorkflows": "Pas de processus",
"saveWorkflow": "Enregistrer le Workflow",
"problemSavingWorkflow": "Problème de sauvegarde du Workflow",
"workflowEditorMenu": "Menu de l'Éditeur de Workflow",
"newWorkflowCreated": "Nouveau Workflow créé",
"clearWorkflowSearchFilter": "Réinitialiser le filtre de recherche de Workflow",
"problemLoading": "Problème de chargement des Workflows",
"workflowSaved": "Workflow enregistré",
"noWorkflows": "Pas de Workflows",
"ascending": "Ascendant",
"loadFromGraph": "Charger le processus à partir du graphique",
"loadFromGraph": "Charger le Workflow à partir du graphique",
"descending": "Descendant",
"created": "Créé",
"updated": "Mis à jour",
"loadWorkflow": "$t(common.load) Processus",
"loadWorkflow": "$t(common.load) Workflow",
"convertGraph": "Convertir le graphique",
"opened": "Ouvert",
"name": "Nom",
"autoLayout": "Mise en page automatique",
"defaultWorkflows": "Processus par défaut",
"userWorkflows": "Processus utilisateur",
"projectWorkflows": "Processus du projet",
"defaultWorkflows": "Workflows par défaut",
"userWorkflows": "Workflows de l'utilisateur",
"projectWorkflows": "Workflows du projet",
"copyShareLink": "Copier le lien de partage",
"chooseWorkflowFromLibrary": "Choisir le Processus dans la Bibliothèque",
"uploadAndSaveWorkflow": "Charger dans la bibliothèque",
"chooseWorkflowFromLibrary": "Choisir le Workflow dans la Bibliothèque",
"uploadAndSaveWorkflow": "Importer dans la bibliothèque",
"edit": "Modifer",
"deleteWorkflow2": "Êtes-vous sûr de vouloir supprimer ce processus? Ceci ne peut pas être annulé.",
"deleteWorkflow2": "Êtes-vous sûr de vouloir supprimer ce Workflow? Cette action ne peut pas être annulé.",
"download": "Télécharger",
"copyShareLinkForWorkflow": "Copier le lien de partage pour le processus",
"copyShareLinkForWorkflow": "Copier le lien de partage pour le Workflow",
"delete": "Supprimer"
},
"whatsNew": {
"canvasV2Announcement": {
"watchReleaseVideo": "Regarder la vidéo de lancement",
"newLayerTypes": "Nouveaux types de couches pour un contrôle encore plus précis",
"fluxSupport": "Support pour la famille de modèles Flux",
"readReleaseNotes": "Lire les notes de version",
"newCanvas": "Une nouvelle Toile de contrôle puissant",
"watchUiUpdatesOverview": "Regarder l'aperçu des mises à jour de l'UI"
},
"whatsNewInInvoke": "Quoi de neuf dans Invoke"
},
"ui": {
@@ -1639,7 +1691,7 @@
"gallery": "Galerie",
"upscalingTab": "$t(ui.tabs.upscaling) $t(common.tab)",
"generation": "Génération",
"workflows": "Processus",
"workflows": "Workflows",
"workflowsTab": "$t(ui.tabs.workflows) $t(common.tab)",
"models": "Modèles",
"modelsTab": "$t(ui.tabs.models) $t(common.tab)"
@@ -1749,7 +1801,9 @@
"bboxGroup": "Créer à partir de la bounding box",
"newRegionalReferenceImage": "Nouvelle image de référence régionale",
"newGlobalReferenceImage": "Nouvelle image de référence globale",
"newControlLayer": "Nouveau couche de contrôle"
"newControlLayer": "Nouveau couche de contrôle",
"newInpaintMask": "Nouveau Masque Inpaint",
"newRegionalGuidance": "Nouveau Guide Régional"
},
"bookmark": "Marque-page pour Changement Rapide",
"saveLayerToAssets": "Enregistrer la couche dans les ressources",
@@ -1762,8 +1816,6 @@
"on": "Activé",
"label": "Aligner sur la grille"
},
"isolatedFilteringPreview": "Aperçu de filtrage isolé",
"isolatedTransformingPreview": "Aperçu de transformation isolée",
"invertBrushSizeScrollDirection": "Inverser le défilement pour la taille du pinceau",
"pressureSensitivity": "Sensibilité à la pression",
"preserveMask": {
@@ -1771,9 +1823,10 @@
"alert": "Préserver la zone masquée"
},
"isolatedPreview": "Aperçu Isolé",
"isolatedStagingPreview": "Aperçu de l'attente isolé"
"isolatedStagingPreview": "Aperçu de l'attente isolé",
"isolatedLayerPreview": "Aperçu de la couche isolée",
"isolatedLayerPreviewDesc": "Pour afficher uniquement cette couche lors de l'exécution d'opérations telles que le filtrage ou la transformation."
},
"convertToRasterLayer": "Convertir en Couche de Rastérisation",
"transparency": "Transparence",
"moveBackward": "Reculer",
"rectangle": "Rectangle",
@@ -1896,7 +1949,6 @@
"globalReferenceImage_withCount_one": "$t(controlLayers.globalReferenceImage)",
"globalReferenceImage_withCount_many": "Images de référence globales",
"globalReferenceImage_withCount_other": "Images de référence globales",
"convertToControlLayer": "Convertir en Couche de Contrôle",
"layer_withCount_one": "Couche {{count}}",
"layer_withCount_many": "Couches {{count}}",
"layer_withCount_other": "Couches {{count}}",
@@ -1959,7 +2011,41 @@
"pullBboxIntoReferenceImageOk": "Bounding Box insérée dans l'Image de référence",
"controlLayer_withCount_one": "$t(controlLayers.controlLayer)",
"controlLayer_withCount_many": "Controler les couches",
"controlLayer_withCount_other": "Controler les couches"
"controlLayer_withCount_other": "Controler les couches",
"copyInpaintMaskTo": "Copier $t(controlLayers.inpaintMask) vers",
"copyRegionalGuidanceTo": "Copier $t(controlLayers.regionalGuidance) vers",
"convertRasterLayerTo": "Convertir $t(controlLayers.rasterLayer) vers",
"selectObject": {
"selectObject": "Sélectionner l'objet",
"clickToAdd": "Cliquez sur la couche pour ajouter un point",
"apply": "Appliquer",
"cancel": "Annuler",
"dragToMove": "Faites glisser un point pour le déplacer",
"clickToRemove": "Cliquez sur un point pour le supprimer",
"include": "Inclure",
"invertSelection": "Sélection Inversée",
"saveAs": "Enregistrer sous",
"neutral": "Neutre",
"pointType": "Type de point",
"exclude": "Exclure",
"process": "Traiter",
"reset": "Réinitialiser",
"help1": "Sélectionnez un seul objet cible. Ajoutez des points <Bold>Inclure</Bold> et <Bold>Exclure</Bold> pour indiquer quelles parties de la couche font partie de l'objet cible.",
"help2": "Commencez par un point <Bold>Inclure</Bold> au sein de l'objet cible. Ajoutez d'autres points pour affiner la sélection. Moins de points produisent généralement de meilleurs résultats.",
"help3": "Inversez la sélection pour sélectionner tout sauf l'objet cible."
},
"canvasAsControlLayer": "$t(controlLayers.canvas) en tant que $t(controlLayers.controlLayer)",
"convertRegionalGuidanceTo": "Convertir $t(controlLayers.regionalGuidance) vers",
"copyRasterLayerTo": "Copier $t(controlLayers.rasterLayer) vers",
"newControlLayer": "Nouveau $t(controlLayers.controlLayer)",
"newRegionalGuidance": "Nouveau $t(controlLayers.regionalGuidance)",
"replaceCurrent": "Remplacer Actuel",
"convertControlLayerTo": "Convertir $t(controlLayers.controlLayer) vers",
"convertInpaintMaskTo": "Convertir $t(controlLayers.inpaintMask) vers",
"copyControlLayerTo": "Copier $t(controlLayers.controlLayer) vers",
"newInpaintMask": "Nouveau $t(controlLayers.inpaintMask)",
"newRasterLayer": "Nouveau $t(controlLayers.rasterLayer)",
"canvasAsRasterLayer": "$t(controlLayers.canvas) en tant que $t(controlLayers.rasterLayer)"
},
"upscaling": {
"exceedsMaxSizeDetails": "La limite maximale d'agrandissement est de {{maxUpscaleDimension}}x{{maxUpscaleDimension}} pixels. Veuillez essayer une image plus petite ou réduire votre sélection d'échelle.",
@@ -1980,57 +2066,57 @@
"missingTileControlNetModel": "Aucun modèle ControlNet valide installé"
},
"stylePresets": {
"deleteTemplate": "Supprimer le modèle",
"editTemplate": "Modifier le modèle",
"deleteTemplate": "Supprimer le template",
"editTemplate": "Modifier le template",
"exportFailed": "Impossible de générer et de télécharger le CSV",
"name": "Nom",
"acceptedColumnsKeys": "Colonnes/clés acceptées :",
"promptTemplatesDesc1": "Les modèles de prompt ajoutent du texte aux prompts que vous écrivez dans la zone de saisie des prompts.",
"promptTemplatesDesc1": "Les templates de prompt ajoutent du texte aux prompts que vous écrivez dans la zone de saisie.",
"private": "Privé",
"searchByName": "Rechercher par nom",
"viewList": "Afficher la liste des modèles",
"noTemplates": "Aucun modèle",
"viewList": "Afficher la liste des templates",
"noTemplates": "Aucun templates",
"insertPlaceholder": "Insérer un placeholder",
"defaultTemplates": "Modèles par défaut",
"defaultTemplates": "Template pré-défini",
"deleteImage": "Supprimer l'image",
"createPromptTemplate": "Créer un modèle de prompt",
"createPromptTemplate": "Créer un template de prompt",
"negativePrompt": "Prompt négatif",
"promptTemplatesDesc3": "Si vous omettez le placeholder, le modèle sera ajouté à la fin de votre prompt.",
"promptTemplatesDesc3": "Si vous omettez le placeholder, le template sera ajouté à la fin de votre prompt.",
"positivePrompt": "Prompt positif",
"choosePromptTemplate": "Choisir un modèle de prompt",
"choosePromptTemplate": "Choisir un template de prompt",
"toggleViewMode": "Basculer le mode d'affichage",
"updatePromptTemplate": "Mettre à jour le modèle de prompt",
"flatten": "Intégrer le modèle sélectionné dans le prompt actuel",
"myTemplates": "Mes modèles",
"updatePromptTemplate": "Mettre à jour le template de prompt",
"flatten": "Intégrer le template sélectionné dans le prompt actuel",
"myTemplates": "Mes Templates",
"type": "Type",
"exportDownloaded": "Exportation téléchargée",
"clearTemplateSelection": "Supprimer la sélection de modèle",
"promptTemplateCleared": "Modèle de prompt effacé",
"templateDeleted": "Modèle de prompt supprimé",
"exportPromptTemplates": "Exporter mes modèles de prompt (CSV)",
"clearTemplateSelection": "Supprimer la sélection de template",
"promptTemplateCleared": "Template de prompt effacé",
"templateDeleted": "Template de prompt supprimé",
"exportPromptTemplates": "Exporter mes templates de prompt (CSV)",
"nameColumn": "'nom'",
"positivePromptColumn": "\"prompt\" ou \"prompt_positif\"",
"useForTemplate": "Utiliser pour le modèle de prompt",
"uploadImage": "Charger une image",
"importTemplates": "Importer des modèles de prompt (CSV/JSON)",
"useForTemplate": "Utiliser pour le template de prompt",
"uploadImage": "Importer une image",
"importTemplates": "Importer des templates de prompt (CSV/JSON)",
"negativePromptColumn": "'prompt_négatif'",
"deleteTemplate2": "Êtes-vous sûr de vouloir supprimer ce modèle? Cette action ne peut pas être annulée.",
"deleteTemplate2": "Êtes-vous sûr de vouloir supprimer ce template? Cette action ne peut pas être annulée.",
"preview": "Aperçu",
"shared": "Partagé",
"noMatchingTemplates": "Aucun modèle correspondant",
"sharedTemplates": "Modèles partagés",
"unableToDeleteTemplate": "Impossible de supprimer le modèle de prompt",
"noMatchingTemplates": "Aucun templates correspondant",
"sharedTemplates": "Template partagés",
"unableToDeleteTemplate": "Impossible de supprimer le template de prompt",
"active": "Actif",
"copyTemplate": "Copier le modèle",
"viewModeTooltip": "Voici à quoi ressemblera votre prompt avec le modèle actuellement sélectionné. Pour modifier votre prompt, cliquez n'importe où dans la zone de texte.",
"promptTemplatesDesc2": "Utilisez la chaîne de remplacement <Pre>{{placeholder}}</Pre> pour spécifier où votre prompt doit être inclus dans le modèle."
"copyTemplate": "Copier le template",
"viewModeTooltip": "Voici à quoi ressemblera votre prompt avec le template actuellement sélectionné. Pour modifier votre prompt, cliquez n'importe où dans la zone de texte.",
"promptTemplatesDesc2": "Utilisez la chaîne de remplacement <Pre>{{placeholder}}</Pre> pour spécifier où votre prompt doit être inclus dans le template."
},
"system": {
"logNamespaces": {
"config": "Configuration",
"canvas": "Toile",
"generation": "Génération",
"workflows": "Processus",
"workflows": "Workflows",
"system": "Système",
"models": "Modèles",
"logNamespaces": "Journalisation des espaces de noms",
@@ -2051,8 +2137,12 @@
"enableLogging": "Activer la journalisation"
},
"newUserExperience": {
"toGetStarted": "Pour commencer, saisissez un prompt dans la boîte et cliquez sur <StrongComponent>Invoke</StrongComponent> pour générer votre première image. Sélectionnez un modèle de prompt pour améliorer les résultats. Vous pouvez choisir de sauvegarder vos images directement dans la <StrongComponent>Galerie</StrongComponent> ou de les modifier sur la <StrongComponent>Toile</StrongComponent>.",
"gettingStartedSeries": "Vous souhaitez plus de conseils? Consultez notre <LinkComponent>Série de démarrage</LinkComponent> pour des astuces sur l'exploitation du plein potentiel de l'Invoke Studio."
"toGetStarted": "Pour commencer, saisissez un prompt dans la boîte et cliquez sur <StrongComponent>Invoke</StrongComponent> pour générer votre première image. Sélectionnez un template de prompt pour améliorer les résultats. Vous pouvez choisir de sauvegarder vos images directement dans la <StrongComponent>Galerie</StrongComponent> ou de les modifier sur la <StrongComponent>Toile</StrongComponent>.",
"gettingStartedSeries": "Vous souhaitez plus de conseils? Consultez notre <LinkComponent>Série de démarrage</LinkComponent> pour des astuces sur l'exploitation du plein potentiel de l'Invoke Studio.",
"noModelsInstalled": "Il semble qu'aucun modèle ne soit installé",
"downloadStarterModels": "Télécharger les modèles de démarrage",
"importModels": "Importer des Modèles",
"toGetStartedLocal": "Pour commencer, assurez-vous de télécharger ou d'importer des modèles nécessaires pour exécuter Invoke. Ensuite, saisissez le prompt dans la boîte et cliquez sur <StrongComponent>Invoke</StrongComponent> pour générer votre première image. Sélectionnez un template de prompt pour améliorer les résultats. Vous pouvez choisir de sauvegarder vos images directement sur <StrongComponent>Galerie</StrongComponent> ou les modifier sur la <StrongComponent>Toile</StrongComponent>."
},
"upsell": {
"shareAccess": "Partager l'accès",


@@ -92,7 +92,9 @@
"none": "Niente",
"new": "Nuovo",
"view": "Vista",
"close": "Chiudi"
"close": "Chiudi",
"clipboard": "Appunti",
"ok": "Ok"
},
"gallery": {
"galleryImageSize": "Dimensione dell'immagine",
@@ -542,7 +544,6 @@
"defaultSettingsSaved": "Impostazioni predefinite salvate",
"defaultSettings": "Impostazioni predefinite",
"metadata": "Metadati",
"useDefaultSettings": "Usa le impostazioni predefinite",
"triggerPhrases": "Frasi Trigger",
"deleteModelImage": "Elimina l'immagine del modello",
"localOnly": "solo locale",
@@ -577,7 +578,26 @@
"noMatchingModels": "Nessun modello corrispondente",
"starterModelsInModelManager": "I modelli iniziali possono essere trovati in Gestione Modelli",
"spandrelImageToImage": "Immagine a immagine (Spandrel)",
"learnMoreAboutSupportedModels": "Scopri di più sui modelli che supportiamo"
"learnMoreAboutSupportedModels": "Scopri di più sui modelli che supportiamo",
"starterBundles": "Pacchetti per iniziare",
"installingBundle": "Installazione del pacchetto",
"skippingXDuplicates_one": ", saltando {{count}} duplicato",
"skippingXDuplicates_many": ", saltando {{count}} duplicati",
"skippingXDuplicates_other": ", saltando {{count}} duplicati",
"installingModel": "Installazione del modello",
"installingXModels_one": "Installazione di {{count}} modello",
"installingXModels_many": "Installazione di {{count}} modelli",
"installingXModels_other": "Installazione di {{count}} modelli",
"includesNModels": "Include {{n}} modelli e le loro dipendenze",
"starterBundleHelpText": "Installa facilmente tutti i modelli necessari per iniziare con un modello base, tra cui un modello principale, controlnet, adattatori IP e altro. Selezionando un pacchetto salterai tutti i modelli che hai già installato.",
"noDefaultSettings": "Nessuna impostazione predefinita configurata per questo modello. Visita Gestione Modelli per aggiungere impostazioni predefinite.",
"defaultSettingsOutOfSync": "Alcune impostazioni non corrispondono a quelle predefinite del modello:",
"restoreDefaultSettings": "Fare clic per utilizzare le impostazioni predefinite del modello.",
"usingDefaultSettings": "Utilizzo delle impostazioni predefinite del modello",
"huggingFace": "HuggingFace",
"huggingFaceRepoID": "HuggingFace Repository ID",
"clipEmbed": "CLIP Embed",
"t5Encoder": "T5 Encoder"
},
"parameters": {
"images": "Immagini",
@@ -678,7 +698,8 @@
"boxBlur": "Sfocatura Box",
"staged": "Maschera espansa",
"optimizedImageToImage": "Immagine-a-immagine ottimizzata",
"sendToCanvas": "Invia alla Tela"
"sendToCanvas": "Invia alla Tela",
"coherenceMinDenoise": "Riduzione minima del rumore"
},
"settings": {
"models": "Modelli",
@@ -713,7 +734,10 @@
"reloadingIn": "Ricaricando in",
"informationalPopoversDisabled": "Testo informativo a comparsa disabilitato",
"informationalPopoversDisabledDesc": "I testi informativi a comparsa sono disabilitati. Attivali nelle impostazioni.",
"confirmOnNewSession": "Conferma su nuova sessione"
"confirmOnNewSession": "Conferma su nuova sessione",
"enableModelDescriptions": "Abilita le descrizioni dei modelli nei menu a discesa",
"modelDescriptionsDisabled": "Descrizioni dei modelli nei menu a discesa disabilitate",
"modelDescriptionsDisabledDesc": "Le descrizioni dei modelli nei menu a discesa sono state disabilitate. Abilitale nelle Impostazioni."
},
"toast": {
"uploadFailed": "Caricamento fallito",
@@ -722,7 +746,7 @@
"serverError": "Errore del Server",
"connected": "Connesso al server",
"canceled": "Elaborazione annullata",
"uploadFailedInvalidUploadDesc": "Deve essere una singola immagine PNG o JPEG",
"uploadFailedInvalidUploadDesc": "Devono essere immagini PNG o JPEG.",
"parameterSet": "Parametro richiamato",
"parameterNotSet": "Parametro non richiamato",
"problemCopyingImage": "Impossibile copiare l'immagine",
@@ -731,7 +755,7 @@
"baseModelChangedCleared_other": "Cancellati o disabilitati {{count}} sottomodelli incompatibili",
"loadedWithWarnings": "Flusso di lavoro caricato con avvisi",
"imageUploaded": "Immagine caricata",
"addedToBoard": "Aggiunto alla bacheca",
"addedToBoard": "Aggiunto alle risorse della bacheca {{name}}",
"modelAddedSimple": "Modello aggiunto alla Coda",
"imageUploadFailed": "Caricamento immagine non riuscito",
"setControlImage": "Imposta come immagine di controllo",
@@ -770,7 +794,12 @@
"imageSavingFailed": "Salvataggio dell'immagine non riuscito",
"layerCopiedToClipboard": "Livello copiato negli appunti",
"imageNotLoadedDesc": "Impossibile trovare l'immagine",
"linkCopied": "Collegamento copiato"
"linkCopied": "Collegamento copiato",
"addedToUncategorized": "Aggiunto alle risorse della bacheca $t(boards.uncategorized)",
"imagesWillBeAddedTo": "Le immagini caricate verranno aggiunte alle risorse della bacheca {{boardName}}.",
"uploadFailedInvalidUploadDesc_withCount_one": "Devi caricare al massimo 1 immagine PNG o JPEG.",
"uploadFailedInvalidUploadDesc_withCount_many": "Devi caricare al massimo {{count}} immagini PNG o JPEG.",
"uploadFailedInvalidUploadDesc_withCount_other": "Devi caricare al massimo {{count}} immagini PNG o JPEG."
},
"accessibility": {
"invokeProgressBar": "Barra di avanzamento generazione",
@@ -785,7 +814,8 @@
"about": "Informazioni",
"submitSupportTicket": "Invia ticket di supporto",
"toggleLeftPanel": "Attiva/disattiva il pannello sinistro (T)",
"toggleRightPanel": "Attiva/disattiva il pannello destro (G)"
"toggleRightPanel": "Attiva/disattiva il pannello destro (G)",
"uploadImages": "Carica immagine(i)"
},
"nodes": {
"zoomOutNodes": "Rimpicciolire",
@@ -1059,7 +1089,8 @@
"noLoRAsInstalled": "Nessun LoRA installato",
"addLora": "Aggiungi LoRA",
"defaultVAE": "VAE predefinito",
"concepts": "Concetti"
"concepts": "Concetti",
"lora": "LoRA"
},
"invocationCache": {
"disable": "Disabilita",
@@ -1116,7 +1147,8 @@
"paragraphs": [
"Scegli quanti livelli del modello CLIP saltare.",
"Alcuni modelli funzionano meglio con determinate impostazioni di CLIP Skip."
]
],
"heading": "CLIP Skip"
},
"compositingCoherencePass": {
"heading": "Passaggio di Coerenza",
@@ -1475,6 +1507,42 @@
"Controlla quanto il prompt influenza il processo di generazione.",
"Valori di guida elevati possono causare sovrasaturazione e una guida elevata o bassa può causare risultati di generazione distorti. La guida si applica solo ai modelli FLUX DEV."
]
},
"regionalReferenceImage": {
"paragraphs": [
"Pennello per applicare un'immagine di riferimento ad aree specifiche."
],
"heading": "Immagine di riferimento Regionale"
},
"rasterLayer": {
"paragraphs": [
"Contenuto basato sui pixel della tua tela, utilizzato durante la generazione dell'immagine."
],
"heading": "Livello Raster"
},
"regionalGuidance": {
"heading": "Guida Regionale",
"paragraphs": [
"Pennello per guidare la posizione in cui devono apparire gli elementi dei prompt globali."
]
},
"regionalGuidanceAndReferenceImage": {
"heading": "Guida regionale e immagine di riferimento regionale",
"paragraphs": [
"Per la Guida Regionale, utilizzare il pennello per indicare dove devono apparire gli elementi dei prompt globali.",
"Per l'immagine di riferimento regionale, utilizzare il pennello per applicare un'immagine di riferimento ad aree specifiche."
]
},
"globalReferenceImage": {
"heading": "Immagine di riferimento Globale",
"paragraphs": [
"Applica un'immagine di riferimento per influenzare l'intera generazione."
]
},
"inpainting": {
"paragraphs": [
"Controlla quale area viene modificata, in base all'intensità di riduzione del rumore."
]
}
},
"sdxl": {
@@ -1496,7 +1564,6 @@
"refinerSteps": "Passi Affinamento"
},
"metadata": {
"seamless": "Senza giunture",
"positivePrompt": "Prompt positivo",
"negativePrompt": "Prompt negativo",
"generationMode": "Modalità generazione",
@@ -1524,7 +1591,10 @@
"parsingFailed": "Analisi non riuscita",
"recallParameter": "Richiama {{label}}",
"canvasV2Metadata": "Tela",
"guidance": "Guida"
"guidance": "Guida",
"seamlessXAxis": "Asse X senza giunte",
"seamlessYAxis": "Asse Y senza giunte",
"vae": "VAE"
},
"hrf": {
"enableHrf": "Abilita Correzione Alta Risoluzione",
@@ -1621,11 +1691,11 @@
"regionalGuidance": "Guida regionale",
"opacity": "Opacità",
"mergeVisible": "Fondi il visibile",
"mergeVisibleOk": "Livelli visibili uniti",
"mergeVisibleOk": "Livelli uniti",
"deleteReferenceImage": "Elimina l'immagine di riferimento",
"referenceImage": "Immagine di riferimento",
"fitBboxToLayers": "Adatta il riquadro di delimitazione ai livelli",
"mergeVisibleError": "Errore durante l'unione dei livelli visibili",
"mergeVisibleError": "Errore durante l'unione dei livelli",
"regionalReferenceImage": "Immagine di riferimento Regionale",
"newLayerFromImage": "Nuovo livello da immagine",
"newCanvasFromImage": "Nuova tela da immagine",
@@ -1717,7 +1787,7 @@
"composition": "Solo Composizione",
"ipAdapterMethod": "Metodo Adattatore IP"
},
"showingType": "Mostrare {{type}}",
"showingType": "Mostra {{type}}",
"dynamicGrid": "Griglia dinamica",
"tool": {
"view": "Muovi",
@@ -1845,8 +1915,6 @@
"layer_withCount_one": "Livello ({{count}})",
"layer_withCount_many": "Livelli ({{count}})",
"layer_withCount_other": "Livelli ({{count}})",
"convertToControlLayer": "Converti in livello di controllo",
"convertToRasterLayer": "Converti in livello raster",
"unlocked": "Sbloccato",
"enableTransparencyEffect": "Abilita l'effetto trasparenza",
"replaceLayer": "Sostituisci livello",
@@ -1859,9 +1927,7 @@
"newCanvasSession": "Nuova sessione Tela",
"deleteSelected": "Elimina selezione",
"settings": {
"isolatedFilteringPreview": "Anteprima del filtraggio isolata",
"isolatedStagingPreview": "Anteprima di generazione isolata",
"isolatedTransformingPreview": "Anteprima di trasformazione isolata",
"isolatedPreview": "Anteprima isolata",
"invertBrushSizeScrollDirection": "Inverti scorrimento per dimensione pennello",
"snapToGrid": {
@@ -1873,7 +1939,9 @@
"preserveMask": {
"alert": "Preservare la regione mascherata",
"label": "Preserva la regione mascherata"
}
},
"isolatedLayerPreview": "Anteprima livello isolato",
"isolatedLayerPreviewDesc": "Se visualizzare solo questo livello quando si eseguono operazioni come il filtraggio o la trasformazione."
},
"transform": {
"reset": "Reimposta",
@@ -1918,9 +1986,46 @@
"canvasGroup": "Tela",
"newRasterLayer": "Nuovo Livello Raster",
"saveCanvasToGallery": "Salva la Tela nella Galleria",
"saveToGalleryGroup": "Salva nella Galleria"
"saveToGalleryGroup": "Salva nella Galleria",
"newInpaintMask": "Nuova maschera Inpaint",
"newRegionalGuidance": "Nuova Guida Regionale"
},
"newImg2ImgCanvasFromImage": "Nuova Immagine da immagine"
"newImg2ImgCanvasFromImage": "Nuova Immagine da immagine",
"copyRasterLayerTo": "Copia $t(controlLayers.rasterLayer) in",
"copyControlLayerTo": "Copia $t(controlLayers.controlLayer) in",
"copyInpaintMaskTo": "Copia $t(controlLayers.inpaintMask) in",
"selectObject": {
"dragToMove": "Trascina un punto per spostarlo",
"clickToAdd": "Fare clic sul livello per aggiungere un punto",
"clickToRemove": "Clicca su un punto per rimuoverlo",
"help3": "Inverte la selezione per selezionare tutto tranne l'oggetto di destinazione.",
"pointType": "Tipo punto",
"apply": "Applica",
"reset": "Reimposta",
"cancel": "Annulla",
"selectObject": "Seleziona oggetto",
"invertSelection": "Inverti selezione",
"exclude": "Escludi",
"include": "Includi",
"neutral": "Neutro",
"saveAs": "Salva come",
"process": "Elabora",
"help1": "Seleziona un singolo oggetto di destinazione. Aggiungi i punti <Bold>Includi</Bold> e <Bold>Escludi</Bold> per indicare quali parti del livello fanno parte dell'oggetto di destinazione.",
"help2": "Inizia con un punto <Bold>Include</Bold> all'interno dell'oggetto di destinazione. Aggiungi altri punti per perfezionare la selezione. Meno punti in genere producono risultati migliori."
},
"convertControlLayerTo": "Converti $t(controlLayers.controlLayer) in",
"newRasterLayer": "Nuovo $t(controlLayers.rasterLayer)",
"newRegionalGuidance": "Nuova $t(controlLayers.regionalGuidance)",
"canvasAsRasterLayer": "$t(controlLayers.canvas) come $t(controlLayers.rasterLayer)",
"canvasAsControlLayer": "$t(controlLayers.canvas) come $t(controlLayers.controlLayer)",
"convertInpaintMaskTo": "Converti $t(controlLayers.inpaintMask) in",
"copyRegionalGuidanceTo": "Copia $t(controlLayers.regionalGuidance) in",
"convertRasterLayerTo": "Converti $t(controlLayers.rasterLayer) in",
"convertRegionalGuidanceTo": "Converti $t(controlLayers.regionalGuidance) in",
"newControlLayer": "Nuovo $t(controlLayers.controlLayer)",
"newInpaintMask": "Nuova $t(controlLayers.inpaintMask)",
"replaceCurrent": "Sostituisci corrente",
"mergeDown": "Unire in basso"
},
"ui": {
"tabs": {
@@ -2006,18 +2111,20 @@
},
"newUserExperience": {
"gettingStartedSeries": "Desideri maggiori informazioni? Consulta la nostra <LinkComponent>Getting Started Series</LinkComponent> per suggerimenti su come sfruttare appieno il potenziale di Invoke Studio.",
"toGetStarted": "Per iniziare, inserisci un prompt nella casella e fai clic su <StrongComponent>Invoke</StrongComponent> per generare la tua prima immagine. Seleziona un modello di prompt per migliorare i risultati. Puoi scegliere di salvare le tue immagini direttamente nella <StrongComponent>Galleria</StrongComponent> o modificarle nella <StrongComponent>Tela</StrongComponent>."
"toGetStarted": "Per iniziare, inserisci un prompt nella casella e fai clic su <StrongComponent>Invoke</StrongComponent> per generare la tua prima immagine. Seleziona un modello di prompt per migliorare i risultati. Puoi scegliere di salvare le tue immagini direttamente nella <StrongComponent>Galleria</StrongComponent> o modificarle nella <StrongComponent>Tela</StrongComponent>.",
"importModels": "Importa modelli",
"downloadStarterModels": "Scarica i modelli per iniziare",
"noModelsInstalled": "Sembra che tu non abbia installato alcun modello",
"toGetStartedLocal": "Per iniziare, assicurati di scaricare o importare i modelli necessari per eseguire Invoke. Quindi, inserisci un prompt nella casella e fai clic su <StrongComponent>Invoke</StrongComponent> per generare la tua prima immagine. Seleziona un modello di prompt per migliorare i risultati. Puoi scegliere di salvare le tue immagini direttamente nella <StrongComponent>Galleria</StrongComponent> o modificarle nella <StrongComponent>Tela</StrongComponent>."
},
"whatsNew": {
"canvasV2Announcement": {
"readReleaseNotes": "Leggi le Note di Rilascio",
"fluxSupport": "Supporto per la famiglia di modelli Flux",
"newCanvas": "Una nuova potente tela di controllo",
"watchReleaseVideo": "Guarda il video di rilascio",
"watchUiUpdatesOverview": "Guarda le novità dell'interfaccia",
"newLayerTypes": "Nuovi tipi di livello per un miglior controllo"
},
"whatsNewInInvoke": "Novità in Invoke"
"whatsNewInInvoke": "Novità in Invoke",
"line2": "Supporto Flux esteso, ora con immagini di riferimento globali",
"line3": "Tooltip e menu contestuali migliorati",
"readReleaseNotes": "Leggi le note di rilascio",
"watchRecentReleaseVideos": "Guarda i video su questa versione",
"line1": "Strumento <ItalicComponent>Seleziona oggetto</ItalicComponent> per la selezione e la modifica precise degli oggetti",
"watchUiUpdatesOverview": "Guarda le novità dell'interfaccia"
},
"system": {
"logLevel": {


@@ -229,7 +229,6 @@
"submitSupportTicket": "サポート依頼を送信する"
},
"metadata": {
"seamless": "シームレス",
"Threshold": "ノイズ閾値",
"seed": "シード",
"width": "幅",


@@ -155,7 +155,6 @@
"path": "Pad",
"triggerPhrases": "Triggerzinnen",
"typePhraseHere": "Typ zin hier in",
"useDefaultSettings": "Gebruik standaardinstellingen",
"modelImageDeleteFailed": "Fout bij verwijderen modelafbeelding",
"modelImageUpdated": "Modelafbeelding bijgewerkt",
"modelImageUpdateFailed": "Fout bij bijwerken modelafbeelding",
@@ -666,7 +665,6 @@
}
},
"metadata": {
"seamless": "Naadloos",
"positivePrompt": "Positieve prompt",
"negativePrompt": "Negatieve prompt",
"generationMode": "Genereermodus",


@@ -94,7 +94,8 @@
"reset": "Сброс",
"none": "Ничего",
"new": "Новый",
"ok": "Ok"
"ok": "Ok",
"close": "Закрыть"
},
"gallery": {
"galleryImageSize": "Размер изображений",
@@ -160,7 +161,9 @@
"openViewer": "Открыть просмотрщик",
"closeViewer": "Закрыть просмотрщик",
"imagesTab": "Изображения, созданные и сохраненные в Invoke.",
"assetsTab": "Файлы, которые вы загрузили для использования в своих проектах."
"assetsTab": "Файлы, которые вы загрузили для использования в своих проектах.",
"boardsSettings": "Настройки доски",
"imagesSettings": "Настройки галереи изображений"
},
"hotkeys": {
"searchHotkeys": "Поиск горячих клавиш",
@@ -541,7 +544,6 @@
"scanResults": "Результаты сканирования",
"source": "Источник",
"triggerPhrases": "Триггерные фразы",
"useDefaultSettings": "Использовать стандартные настройки",
"modelName": "Название модели",
"modelSettings": "Настройки модели",
"upcastAttention": "Внимание",
@@ -570,7 +572,6 @@
"simpleModelPlaceholder": "URL или путь к локальному файлу или папке diffusers",
"urlOrLocalPath": "URL или локальный путь",
"urlOrLocalPathHelper": "URL-адреса должны указывать на один файл. Локальные пути могут указывать на один файл или папку для одной модели диффузоров.",
"hfToken": "Токен HuggingFace",
"starterModels": "Стартовые модели",
"textualInversions": "Текстовые инверсии",
"loraModels": "LoRAs",
@@ -583,7 +584,18 @@
"learnMoreAboutSupportedModels": "Подробнее о поддерживаемых моделях",
"t5Encoder": "T5 энкодер",
"spandrelImageToImage": "Image to Image (Spandrel)",
"clipEmbed": "CLIP Embed"
"clipEmbed": "CLIP Embed",
"installingXModels_one": "Установка {{count}} модели",
"installingXModels_few": "Установка {{count}} моделей",
"installingXModels_many": "Установка {{count}} моделей",
"installingBundle": "Установка пакета",
"installingModel": "Установка модели",
"starterBundles": "Стартовые пакеты",
"skippingXDuplicates_one": ", пропуская {{count}} дубликат",
"skippingXDuplicates_few": ", пропуская {{count}} дубликата",
"skippingXDuplicates_many": ", пропуская {{count}} дубликатов",
"includesNModels": "Включает в себя {{n}} моделей и их зависимостей",
"starterBundleHelpText": "Легко установите все модели, необходимые для начала работы с базовой моделью, включая основную модель, сети управления, IP-адаптеры и многое другое. При выборе комплекта все уже установленные модели будут пропущены."
},
"parameters": {
"images": "Изображения",
@@ -730,7 +742,7 @@
"serverError": "Ошибка сервера",
"connected": "Подключено к серверу",
"canceled": "Обработка отменена",
"uploadFailedInvalidUploadDesc": "Должно быть одно изображение в формате PNG или JPEG",
"uploadFailedInvalidUploadDesc": "Это должны быть изображения PNG или JPEG.",
"parameterNotSet": "Параметр не задан",
"parameterSet": "Параметр задан",
"problemCopyingImage": "Не удается скопировать изображение",
@@ -742,7 +754,7 @@
"setNodeField": "Установить как поле узла",
"invalidUpload": "Неверная загрузка",
"imageUploaded": "Изображение загружено",
"addedToBoard": "Добавлено на доску",
"addedToBoard": "Добавлено в активы доски {{name}}",
"workflowLoaded": "Рабочий процесс загружен",
"problemDeletingWorkflow": "Проблема с удалением рабочего процесса",
"modelAddedSimple": "Модель добавлена в очередь",
@@ -777,7 +789,13 @@
"unableToLoadStylePreset": "Невозможно загрузить предустановку стиля",
"layerCopiedToClipboard": "Слой скопирован в буфер обмена",
"sentToUpscale": "Отправить на увеличение",
"layerSavedToAssets": "Слой сохранен в активах"
"layerSavedToAssets": "Слой сохранен в активах",
"linkCopied": "Ссылка скопирована",
"addedToUncategorized": "Добавлено в активы доски $t(boards.uncategorized)",
"imagesWillBeAddedTo": "Загруженные изображения будут добавлены в активы доски {{boardName}}.",
"uploadFailedInvalidUploadDesc_withCount_one": "Должно быть не более {{count}} изображения в формате PNG или JPEG.",
"uploadFailedInvalidUploadDesc_withCount_few": "Должно быть не более {{count}} изображений в формате PNG или JPEG.",
"uploadFailedInvalidUploadDesc_withCount_many": "Должно быть не более {{count}} изображений в формате PNG или JPEG."
},
"accessibility": {
"uploadImage": "Загрузить изображение",
@@ -792,7 +810,8 @@
"about": "Об этом",
"submitSupportTicket": "Отправить тикет в службу поддержки",
"toggleRightPanel": "Переключить правую панель (G)",
"toggleLeftPanel": "Переключить левую панель (T)"
"toggleLeftPanel": "Переключить левую панель (T)",
"uploadImages": "Загрузить изображения"
},
"nodes": {
"zoomInNodes": "Увеличьте масштаб",
@@ -933,7 +952,7 @@
"saveToGallery": "Сохранить в галерею",
"noWorkflows": "Нет рабочих процессов",
"noMatchingWorkflows": "Нет совпадающих рабочих процессов",
"workflowHelpText": "Нужна помощь? Ознакомьтесь с нашим руководством <LinkComponent>Getting Started with Workflows</LinkComponent>"
"workflowHelpText": "Нужна помощь? Ознакомьтесь с нашим руководством <LinkComponent>Getting Started with Workflows</LinkComponent>."
},
"boards": {
"autoAddBoard": "Авто добавление Доски",
@@ -1381,7 +1400,6 @@
}
},
"metadata": {
"seamless": "Бесшовность",
"positivePrompt": "Запрос",
"negativePrompt": "Негативный запрос",
"generationMode": "Режим генерации",
@@ -1409,7 +1427,8 @@
"recallParameter": "Отозвать {{label}}",
"allPrompts": "Все запросы",
"imageDimensions": "Размеры изображения",
"canvasV2Metadata": "Холст"
"canvasV2Metadata": "Холст",
"guidance": "Точность"
},
"queue": {
"status": "Статус",
@@ -1561,7 +1580,12 @@
"defaultWorkflows": "Стандартные рабочие процессы",
"deleteWorkflow2": "Вы уверены, что хотите удалить этот рабочий процесс? Это нельзя отменить.",
"chooseWorkflowFromLibrary": "Выбрать рабочий процесс из библиотеки",
"uploadAndSaveWorkflow": "Загрузить в библиотеку"
"uploadAndSaveWorkflow": "Загрузить в библиотеку",
"edit": "Редактировать",
"download": "Скачать",
"copyShareLink": "Скопировать ссылку на общий доступ",
"copyShareLinkForWorkflow": "Скопировать ссылку на общий доступ для рабочего процесса",
"delete": "Удалить"
},
"hrf": {
"enableHrf": "Включить исправление высокого разрешения",
@@ -1809,14 +1833,12 @@
},
"settings": {
"isolatedPreview": "Изолированный предпросмотр",
"isolatedTransformingPreview": "Изолированный предпросмотр преобразования",
"invertBrushSizeScrollDirection": "Инвертировать прокрутку для размера кисти",
"snapToGrid": {
"label": "Привязка к сетке",
"on": "Вкл",
"off": "Выкл"
},
"isolatedFilteringPreview": "Изолированный предпросмотр фильтрации",
"pressureSensitivity": "Чувствительность к давлению",
"isolatedStagingPreview": "Изолированный предпросмотр на промежуточной стадии",
"preserveMask": {
@@ -1838,7 +1860,6 @@
"enableAutoNegative": "Включить авто негатив",
"maskFill": "Заполнение маски",
"viewProgressInViewer": "Просматривайте прогресс и результаты в <Btn>Просмотрщике изображений</Btn>.",
"convertToRasterLayer": "Конвертировать в растровый слой",
"tool": {
"move": "Двигать",
"bbox": "Ограничительная рамка",
@@ -1890,7 +1911,10 @@
"fitToBbox": "Вместить в рамку",
"reset": "Сбросить",
"apply": "Применить",
"cancel": "Отменить"
"cancel": "Отменить",
"fitModeContain": "Уместить",
"fitMode": "Режим подгонки",
"fitModeFill": "Заполнить"
},
"disableAutoNegative": "Отключить авто негатив",
"deleteReferenceImage": "Удалить эталонное изображение",
@@ -1903,7 +1927,6 @@
"newGallerySession": "Новая сессия галереи",
"sendToCanvasDesc": "Нажатие кнопки Invoke отображает вашу текущую работу на холсте.",
"globalReferenceImages_withCount_hidden": "Глобальные эталонные изображения ({{count}} скрыто)",
"convertToControlLayer": "Конвертировать в контрольный слой",
"layer_withCount_one": "Слой ({{count}})",
"layer_withCount_few": "Слои ({{count}})",
"layer_withCount_many": "Слои ({{count}})",
@@ -1920,7 +1943,8 @@
"globalReferenceImage": "Глобальное эталонное изображение",
"sendToGallery": "Отправить в галерею",
"referenceImage": "Эталонное изображение",
"addGlobalReferenceImage": "Добавить $t(controlLayers.globalReferenceImage)"
"addGlobalReferenceImage": "Добавить $t(controlLayers.globalReferenceImage)",
"newImg2ImgCanvasFromImage": "Новое img2img из изображения"
},
"ui": {
"tabs": {
@@ -2032,14 +2056,6 @@
}
},
"whatsNew": {
"canvasV2Announcement": {
"newLayerTypes": "Новые типы слоев для еще большего контроля",
"readReleaseNotes": "Прочитать информацию о выпуске",
"watchReleaseVideo": "Смотреть видео о выпуске",
"fluxSupport": "Поддержка семейства моделей Flux",
"newCanvas": "Новый мощный холст управления",
"watchUiUpdatesOverview": "Обзор обновлений пользовательского интерфейса"
},
"whatsNewInInvoke": "Что нового в Invoke"
},
"newUserExperience": {


@@ -82,7 +82,21 @@
"dontShowMeThese": "请勿显示这些内容",
"beta": "测试版",
"toResolve": "解决",
"tab": "标签页"
"tab": "标签页",
"apply": "应用",
"edit": "编辑",
"off": "关",
"loadingImage": "正在加载图片",
"ok": "确定",
"placeholderSelectAModel": "选择一个模型",
"close": "关闭",
"reset": "重设",
"none": "无",
"new": "新建",
"view": "视图",
"alpha": "透明度通道",
"openInViewer": "在查看器中打开",
"clipboard": "剪贴板"
},
"gallery": {
"galleryImageSize": "预览大小",
@@ -124,7 +138,7 @@
"selectAllOnPage": "选择本页全部",
"swapImages": "交换图像",
"exitBoardSearch": "退出面板搜索",
"exitSearch": "退出搜索",
"exitSearch": "退出图像搜索",
"oldestFirst": "最旧在前",
"sortDirection": "排序方向",
"showStarredImagesFirst": "优先显示收藏的图片",
@@ -135,17 +149,333 @@
"searchImages": "按元数据搜索",
"jump": "跳过",
"compareHelp2": "按 <Kbd>M</Kbd> 键切换不同的比较模式。",
"displayBoardSearch": "显示面板搜索",
"displaySearch": "显示搜索",
"displayBoardSearch": "板搜索",
"displaySearch": "图像搜索",
"stretchToFit": "拉伸以适应",
"exitCompare": "退出对比",
"compareHelp1": "在点击图库中的图片或使用箭头键切换比较图片时,请按住<Kbd>Alt</Kbd> 键。",
"go": "运行"
"go": "运行",
"boardsSettings": "画板设置",
"imagesSettings": "画廊图片设置",
"gallery": "画廊",
"move": "移动",
"imagesTab": "您在Invoke中创建和保存的图片。",
"openViewer": "打开查看器",
"closeViewer": "关闭查看器",
"assetsTab": "您已上传用于项目的文件。"
},
"hotkeys": {
"searchHotkeys": "检索快捷键",
"noHotkeysFound": "未找到快捷键",
"clearSearch": "清除检索项"
"clearSearch": "清除检索项",
"app": {
"cancelQueueItem": {
"title": "取消",
"desc": "取消当前正在处理的队列项目。"
},
"selectQueueTab": {
"title": "选择队列标签",
"desc": "选择队列标签。"
},
"toggleLeftPanel": {
"desc": "显示或隐藏左侧面板。",
"title": "开关左侧面板"
},
"resetPanelLayout": {
"title": "重设面板布局",
"desc": "将左侧和右侧面板重置为默认大小和布局。"
},
"togglePanels": {
"title": "开关面板",
"desc": "同时显示或隐藏左右两侧的面板。"
},
"selectWorkflowsTab": {
"title": "选择工作流标签",
"desc": "选择工作流标签。"
},
"selectModelsTab": {
"title": "选择模型标签",
"desc": "选择模型标签。"
},
"toggleRightPanel": {
"title": "开关右侧面板",
"desc": "显示或隐藏右侧面板。"
},
"clearQueue": {
"title": "清除队列",
"desc": "取消并清除所有队列条目。"
},
"selectCanvasTab": {
"title": "选择画布标签",
"desc": "选择画布标签。"
},
"invokeFront": {
"desc": "将生成请求排队,添加到队列的前面。",
"title": "调用(前台)"
},
"selectUpscalingTab": {
"title": "选择放大选项卡",
"desc": "选择高清放大选项卡。"
},
"focusPrompt": {
"title": "聚焦提示",
"desc": "将光标焦点移动到正向提示。"
},
"title": "应用程序",
"invoke": {
"title": "调用",
"desc": "将生成请求排队,添加到队列的末尾。"
}
},
"canvas": {
"selectBrushTool": {
"title": "画笔工具",
"desc": "选择画笔工具。"
},
"selectEraserTool": {
"title": "橡皮擦工具",
"desc": "选择橡皮擦工具。"
},
"title": "画布",
"selectColorPickerTool": {
"title": "拾色器工具",
"desc": "选择拾色器工具。"
},
"fitBboxToCanvas": {
"title": "使边界框适应画布",
"desc": "缩放并调整视图以适应边界框。"
},
"setZoomTo400Percent": {
"title": "缩放到400%",
"desc": "将画布的缩放设置为400%。"
},
"setZoomTo800Percent": {
"desc": "将画布的缩放设置为800%。",
"title": "缩放到800%"
},
"redo": {
"desc": "重做上一次画布操作。",
"title": "重做"
},
"nextEntity": {
"title": "下一层",
"desc": "在列表中选择下一层。"
},
"selectRectTool": {
"title": "矩形工具",
"desc": "选择矩形工具。"
},
"selectViewTool": {
"title": "视图工具",
"desc": "选择视图工具。"
},
"prevEntity": {
"desc": "在列表中选择上一层。",
"title": "上一层"
},
"transformSelected": {
"desc": "变换所选图层。",
"title": "变换"
},
"selectBboxTool": {
"title": "边界框工具",
"desc": "选择边界框工具。"
},
"setZoomTo200Percent": {
"title": "缩放到200%",
"desc": "将画布的缩放设置为200%。"
},
"applyFilter": {
"title": "应用过滤器",
"desc": "将待处理的过滤器应用于所选图层。"
},
"filterSelected": {
"title": "过滤器",
"desc": "对所选图层进行过滤。仅适用于栅格层和控制层。"
},
"cancelFilter": {
"title": "取消过滤器",
"desc": "取消待处理的过滤器。"
},
"incrementToolWidth": {
"title": "增加工具宽度",
"desc": "增加所选的画笔或橡皮擦工具的宽度。"
},
"decrementToolWidth": {
"desc": "减少所选的画笔或橡皮擦工具的宽度。",
"title": "减少工具宽度"
},
"selectMoveTool": {
"title": "移动工具",
"desc": "选择移动工具。"
},
"setFillToWhite": {
"title": "将颜色设置为白色",
"desc": "将当前工具的颜色设置为白色。"
},
"cancelTransform": {
"desc": "取消待处理的变换。",
"title": "取消变换"
},
"applyTransform": {
"title": "应用变换",
"desc": "将待处理的变换应用于所选图层。"
},
"setZoomTo100Percent": {
"title": "缩放到100%",
"desc": "将画布的缩放设置为100%。"
},
"resetSelected": {
"title": "重置图层",
"desc": "重置选定的图层。仅适用于修复蒙版和区域指导。"
},
"undo": {
"title": "撤消",
"desc": "撤消上一次画布操作。"
},
"quickSwitch": {
"title": "图层快速切换",
"desc": "在最后两个选定的图层之间切换。如果某个图层被书签标记,则始终在该图层和最后一个未标记的图层之间切换。"
},
"fitLayersToCanvas": {
"title": "使图层适应画布",
"desc": "缩放并调整视图以适应所有可见图层。"
},
"deleteSelected": {
"title": "删除图层",
"desc": "删除选定的图层。"
}
},
"hotkeys": "快捷键",
"workflows": {
"pasteSelection": {
"title": "粘贴",
"desc": "粘贴复制的节点和边。"
},
"title": "工作流",
"addNode": {
"title": "添加节点",
"desc": "打开添加节点菜单。"
},
"copySelection": {
"desc": "复制选定的节点和边。",
"title": "复制"
},
"pasteSelectionWithEdges": {
"title": "带边缘的粘贴",
"desc": "粘贴复制的节点、边,以及与复制的节点连接的所有边。"
},
"selectAll": {
"title": "全选",
"desc": "选择所有节点和边。"
},
"deleteSelection": {
"title": "删除",
"desc": "删除选定的节点和边。"
},
"undo": {
"title": "撤销",
"desc": "撤销上一个工作流操作。"
},
"redo": {
"desc": "重做上一个工作流操作。",
"title": "重做"
}
},
"gallery": {
"title": "画廊",
"galleryNavUp": {
"title": "向上导航",
"desc": "在图库网格中向上导航,选择该图像。如果在页面顶部,则转到上一页。"
},
"galleryNavUpAlt": {
"title": "向上导航(比较图像)",
"desc": "与向上导航相同,但选择比较图像,如果比较模式尚未打开,则将其打开。"
},
"selectAllOnPage": {
"desc": "选择当前页面上的所有图像。",
"title": "选页面上的所有内容"
},
"galleryNavDownAlt": {
"title": "向下导航(比较图像)",
"desc": "与向下导航相同,但选择比较图像,如果比较模式尚未打开,则将其打开。"
},
"galleryNavLeftAlt": {
"title": "向左导航(比较图像)",
"desc": "与向左导航相同,但选择比较图像,如果比较模式尚未打开,则将其打开。"
},
"clearSelection": {
"title": "清除选择",
"desc": "清除当前的选择(如果有的话)。"
},
"deleteSelection": {
"title": "删除",
"desc": "删除所有选定的图像。默认情况下,系统会提示您确认删除。如果这些图像当前在应用中使用,系统将发出警告。"
},
"galleryNavLeft": {
"title": "向左导航",
"desc": "在图库网格中向左导航,选择该图像。如果处于行的第一张图像,转到上一行。如果处于页面的第一张图像,转到上一页。"
},
"galleryNavRight": {
"title": "向右导航",
"desc": "在图库网格中向右导航,选择该图像。如果在行的最后一张图像,转到下一行。如果在页面的最后一张图像,转到下一页。"
},
"galleryNavDown": {
"desc": "在图库网格中向下导航,选择该图像。如果在页面底部,则转到下一页。",
"title": "向下导航"
},
"galleryNavRightAlt": {
"title": "向右导航(比较图像)",
"desc": "与向右导航相同,但选择比较图像,如果比较模式尚未打开,则将其打开。"
}
},
"viewer": {
"toggleMetadata": {
"desc": "显示或隐藏当前图像的元数据覆盖。",
"title": "显示/隐藏元数据"
},
"recallPrompts": {
"desc": "召回当前图像的正面和负面提示。",
"title": "召回提示"
},
"toggleViewer": {
"title": "显示/隐藏图像查看器",
"desc": "显示或隐藏图像查看器。仅在画布选项卡上可用。"
},
"recallAll": {
"desc": "召回当前图像的所有元数据。",
"title": "召回所有元数据"
},
"recallSeed": {
"title": "召回种子",
"desc": "召回当前图像的种子。"
},
"swapImages": {
"title": "交换比较图像",
"desc": "交换正在比较的图像。"
},
"nextComparisonMode": {
"title": "下一个比较模式",
"desc": "环浏览比较模式。"
},
"loadWorkflow": {
"title": "加载工作流",
"desc": "加载当前图像的保存工作流程(如果有的话)。"
},
"title": "图像查看器",
"remix": {
"title": "混合",
"desc": "召回当前图像的所有元数据,除了种子。"
},
"useSize": {
"title": "使用尺寸",
"desc": "使用当前图像的尺寸作为边界框尺寸。"
},
"runPostprocessing": {
"title": "行后处理",
"desc": "对当前图像运行所选的后处理。"
}
}
},
"modelManager": {
"modelManager": "模型管理器",
@@ -210,7 +540,6 @@
"noModelsInstalled": "无已安装的模型",
"urlOrLocalPathHelper": "链接应该指向单个文件.本地路径可以指向单个文件,或者对于单个扩散模型(diffusers model),可以指向一个文件夹.",
"modelSettings": "模型设置",
"useDefaultSettings": "使用默认设置",
"scanPlaceholder": "本地文件夹路径",
"installRepo": "安装仓库",
"modelImageDeleted": "模型图像已删除",
@@ -249,7 +578,16 @@
"loraTriggerPhrases": "LoRA 触发词",
"ipAdapters": "IP适配器",
"spandrelImageToImage": "图生图(Spandrel)",
"starterModelsInModelManager": "您可以在模型管理器中找到初始模型"
"starterModelsInModelManager": "您可以在模型管理器中找到初始模型",
"noDefaultSettings": "此模型没有配置默认设置。请访问模型管理器添加默认设置。",
"clipEmbed": "CLIP 嵌入",
"defaultSettingsOutOfSync": "某些设置与模型的默认值不匹配:",
"restoreDefaultSettings": "点击以使用模型的默认设置。",
"usingDefaultSettings": "使用模型的默认设置",
"huggingFace": "HuggingFace",
"hfTokenInvalid": "HF 令牌无效或缺失",
"hfTokenLabel": "HuggingFace 令牌(某些模型所需)",
"hfTokenHelperText": "使用某些模型需要 HF 令牌。点击这里创建或获取你的令牌。"
},
"parameters": {
"images": "图像",
@@ -367,7 +705,7 @@
"uploadFailed": "上传失败",
"imageCopied": "图像已复制",
"parametersNotSet": "参数未恢复",
"uploadFailedInvalidUploadDesc": "必须是单张的 PNG 或 JPEG 图",
"uploadFailedInvalidUploadDesc": "必须是单 PNG 或 JPEG 图像。",
"connected": "服务器连接",
"parameterSet": "参数已恢复",
"parameterNotSet": "参数未恢复",
@@ -379,7 +717,7 @@
"setControlImage": "设为控制图像",
"setNodeField": "设为节点字段",
"imageUploaded": "图像已上传",
"addedToBoard": "添加到面板",
"addedToBoard": "添加到{{name}}的资产中",
"workflowLoaded": "工作流已加载",
"imageUploadFailed": "图像上传失败",
"baseModelChangedCleared_other": "已清除或禁用{{count}}个不兼容的子模型",
@@ -416,7 +754,9 @@
"createIssue": "创建问题",
"about": "关于",
"submitSupportTicket": "提交支持工单",
"toggleRightPanel": "切换右侧面板(G)"
"toggleRightPanel": "切换右侧面板(G)",
"uploadImages": "上传图片",
"toggleLeftPanel": "开关左侧面板(T)"
},
"nodes": {
"zoomInNodes": "放大",
@@ -569,7 +909,7 @@
"cancelSucceeded": "项目已取消",
"queue": "队列",
"batch": "批处理",
"clearQueueAlertDialog": "清队列时会立即取消所有处理的项目并且会完全清队列。",
"clearQueueAlertDialog": "清队列立即取消所有正在处理的项目并完全清队列。待处理的过滤器将被取消。",
"pending": "待定",
"completedIn": "完成于",
"resumeFailed": "恢复处理器时出现问题",
@@ -610,7 +950,15 @@
"openQueue": "打开队列",
"prompts_other": "提示词",
"iterations_other": "迭代",
"generations_other": "生成"
"generations_other": "生成",
"canvas": "画布",
"workflows": "工作流",
"generation": "生成",
"other": "其他",
"gallery": "画廊",
"destination": "目标存储",
"upscaling": "高清放大",
"origin": "来源"
},
"sdxl": {
"refinerStart": "Refiner 开始作用时机",
@@ -649,7 +997,6 @@
"workflow": "工作流",
"steps": "步数",
"scheduler": "调度器",
"seamless": "无缝",
"recallParameters": "召回参数",
"noRecallParameters": "未找到要召回的参数",
"vae": "VAE",
@@ -658,7 +1005,11 @@
"parsingFailed": "解析失败",
"recallParameter": "调用{{label}}",
"imageDimensions": "图像尺寸",
"parameterSet": "已设置参数{{parameter}}"
"parameterSet": "已设置参数{{parameter}}",
"guidance": "指导",
"seamlessXAxis": "无缝 X 轴",
"seamlessYAxis": "无缝 Y 轴",
"canvasV2Metadata": "画布"
},
"models": {
"noMatchingModels": "无相匹配的模型",
@@ -709,7 +1060,8 @@
"shared": "共享面板",
"archiveBoard": "归档面板",
"archived": "已归档",
"assetsWithCount_other": "{{count}}项资源"
"assetsWithCount_other": "{{count}}项资源",
"updateBoardError": "更新画板出错"
},
"dynamicPrompts": {
"seedBehaviour": {
@@ -1175,7 +1527,8 @@
},
"prompt": {
"addPromptTrigger": "添加提示词触发器",
"noMatchingTriggers": "没有匹配的触发器"
"noMatchingTriggers": "没有匹配的触发器",
"compatibleEmbeddings": "兼容的嵌入"
},
"controlLayers": {
"autoNegative": "自动反向",
@@ -1186,8 +1539,8 @@
"moveToFront": "移动到前面",
"addLayer": "添加层",
"deletePrompt": "删除提示词",
"addPositivePrompt": "添加 $t(common.positivePrompt)",
"addNegativePrompt": "添加 $t(common.negativePrompt)",
"addPositivePrompt": "添加 $t(controlLayers.prompt)",
"addNegativePrompt": "添加 $t(controlLayers.negativePrompt)",
"rectangle": "矩形",
"opacity": "透明度"
},
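The locale hunks above lean on two i18next features: plural-suffix keys (_one/_few/_many/_other, picked from the interpolated count and the locale's plural rules) and $t(...) nesting, which resolves one key inside another so a rename only has to happen in one place. A minimal sketch, assuming an i18next instance already initialized with these resources:

import i18n from 'i18next';

// Plural suffixes: i18next selects the _one/_many/_other variant from `count`
// and the active locale's plural rules, then interpolates {{count}}.
i18n.t('modelManager.installingXModels', { count: 3 });
// it-IT -> "Installazione di 3 modelli"

// Nesting: "$t(controlLayers.prompt)" inside a string is resolved against the
// same locale, so addPositivePrompt tracks any future change to "prompt".
i18n.t('controlLayers.addPositivePrompt');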


@@ -58,7 +58,6 @@
"model": "模型",
"seed": "種子",
"vae": "VAE",
"seamless": "無縫",
"metadata": "元數據",
"width": "寬度",
"height": "高度"


@@ -4,6 +4,7 @@ import type { StudioInitAction } from 'app/hooks/useStudioInitAction';
import { useStudioInitAction } from 'app/hooks/useStudioInitAction';
import { useSyncQueueStatus } from 'app/hooks/useSyncQueueStatus';
import { useLogger } from 'app/logging/useLogger';
import { useSyncLoggingConfig } from 'app/logging/useSyncLoggingConfig';
import { appStarted } from 'app/store/middleware/listenerMiddleware/listeners/appStarted';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import type { PartialAppConfig } from 'app/types/invokeai';
@@ -59,6 +60,7 @@ const App = ({ config = DEFAULT_CONFIG, studioInitAction }: Props) => {
useGlobalModifiersInit();
useGlobalHotkeys();
useGetOpenAPISchemaQuery();
useSyncLoggingConfig();
const { dropzone, isHandlingUpload, setIsHandlingUpload } = useFullscreenDropzone();


@@ -2,6 +2,8 @@ import 'i18n';
import type { Middleware } from '@reduxjs/toolkit';
import type { StudioInitAction } from 'app/hooks/useStudioInitAction';
import type { LoggingOverrides } from 'app/logging/logger';
import { $loggingOverrides, configureLogging } from 'app/logging/logger';
import { $authToken } from 'app/store/nanostores/authToken';
import { $baseUrl } from 'app/store/nanostores/baseUrl';
import { $customNavComponent } from 'app/store/nanostores/customNavComponent';
@@ -20,7 +22,7 @@ import Loading from 'common/components/Loading/Loading';
import AppDndContext from 'features/dnd/components/AppDndContext';
import type { WorkflowCategory } from 'features/nodes/types/workflow';
import type { PropsWithChildren, ReactNode } from 'react';
import React, { lazy, memo, useEffect, useMemo } from 'react';
import React, { lazy, memo, useEffect, useLayoutEffect, useMemo } from 'react';
import { Provider } from 'react-redux';
import { addMiddleware, resetMiddlewares } from 'redux-dynamic-middlewares';
import { $socketOptions } from 'services/events/stores';
@@ -46,6 +48,7 @@ interface Props extends PropsWithChildren {
isDebugging?: boolean;
logo?: ReactNode;
workflowCategories?: WorkflowCategory[];
loggingOverrides?: LoggingOverrides;
}
const InvokeAIUI = ({
@@ -65,7 +68,26 @@ const InvokeAIUI = ({
isDebugging = false,
logo,
workflowCategories,
loggingOverrides,
}: Props) => {
useLayoutEffect(() => {
/*
* We need to configure logging before anything else happens - useLayoutEffect ensures we set this at the first
* possible opportunity.
*
* Once redux initializes, we will check the user's settings and update the logging config accordingly. See
* `useSyncLoggingConfig`.
*/
$loggingOverrides.set(loggingOverrides);
// Until we get the user's settings, we will use the overrides OR default values.
configureLogging(
loggingOverrides?.logIsEnabled ?? true,
loggingOverrides?.logLevel ?? 'debug',
loggingOverrides?.logNamespaces ?? '*'
);
}, [loggingOverrides]);
useEffect(() => {
// configure API client token
if (token) {
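Together with the logger changes below, the new loggingOverrides prop lets a host application pin logging behavior before the Redux store hydrates. A hedged sketch of host-side usage (import path and values are illustrative, not from this diff):

import InvokeAIUI from 'app/components/InvokeAIUI'; // import path/style assumed

// Hypothetical host app: show only errors across all namespaces, regardless
// of what the user later picks in Settings (overrides win in the sync hook).
export const Host = () => (
  <InvokeAIUI loggingOverrides={{ logIsEnabled: true, logLevel: 'error', logNamespaces: '*' }} />
);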


@@ -9,11 +9,10 @@ const serializeMessage: MessageSerializer = (message) => {
};
ROARR.serializeMessage = serializeMessage;
ROARR.write = createLogWriter();
export const BASE_CONTEXT = {};
const BASE_CONTEXT = {};
export const $logger = atom<Logger>(Roarr.child(BASE_CONTEXT));
const $logger = atom<Logger>(Roarr.child(BASE_CONTEXT));
export const zLogNamespace = z.enum([
'canvas',
@@ -35,8 +34,22 @@ export const zLogLevel = z.enum(['trace', 'debug', 'info', 'warn', 'error', 'fat
export type LogLevel = z.infer<typeof zLogLevel>;
export const isLogLevel = (v: unknown): v is LogLevel => zLogLevel.safeParse(v).success;
/**
* Override logging settings.
* @property logIsEnabled Override the enabled log state. Omit to use the user's settings.
* @property logNamespaces Override the enabled log namespaces. Use `"*"` for all namespaces. Omit to use the user's settings.
* @property logLevel Override the log level. Omit to use the user's settings.
*/
export type LoggingOverrides = {
logIsEnabled?: boolean;
logNamespaces?: LogNamespace[] | '*';
logLevel?: LogLevel;
};
export const $loggingOverrides = atom<LoggingOverrides | undefined>();
// Translate human-readable log levels to numbers, used for log filtering
export const LOG_LEVEL_MAP: Record<LogLevel, number> = {
const LOG_LEVEL_MAP: Record<LogLevel, number> = {
trace: 10,
debug: 20,
info: 30,
@@ -44,3 +57,40 @@ export const LOG_LEVEL_MAP: Record<LogLevel, number> = {
error: 50,
fatal: 60,
};
/**
* Configure logging, pushing settings to local storage.
*
* @param logIsEnabled Whether logging is enabled
* @param logLevel The log level
* @param logNamespaces A list of log namespaces to enable, or '*' to enable all
*/
export const configureLogging = (
logIsEnabled: boolean = true,
logLevel: LogLevel = 'warn',
logNamespaces: LogNamespace[] | '*'
): void => {
if (!logIsEnabled) {
// Disable console log output
localStorage.setItem('ROARR_LOG', 'false');
} else {
// Enable console log output
localStorage.setItem('ROARR_LOG', 'true');
// Use a filter to show only logs of the given level
let filter = `context.logLevel:>=${LOG_LEVEL_MAP[logLevel]}`;
const namespaces = logNamespaces === '*' ? zLogNamespace.options : logNamespaces;
if (namespaces.length > 0) {
filter += ` AND (${namespaces.map((ns) => `context.namespace:${ns}`).join(' OR ')})`;
} else {
// This effectively hides all logs because we use namespaces for all logs
filter += ' AND context.namespace:undefined';
}
localStorage.setItem('ROARR_FILTER', filter);
}
ROARR.write = createLogWriter();
};
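For reference, here is what a call to the new configureLogging writes to localStorage, following LOG_LEVEL_MAP above (a sketch; the ROARR_* keys and filter syntax belong to Roarr's browser log writer):

import { configureLogging } from 'app/logging/logger';

// Show warn-and-above logs from only the canvas and models namespaces.
configureLogging(true, 'warn', ['canvas', 'models']);
// localStorage.ROARR_LOG    === 'true'
// localStorage.ROARR_FILTER === 'context.logLevel:>=40 AND (context.namespace:canvas OR context.namespace:models)'

// Disable console output entirely; note this branch leaves any old filter in place.
configureLogging(false, 'warn', '*');
// localStorage.ROARR_LOG === 'false'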


@@ -1,53 +1,9 @@
import { createLogWriter } from '@roarr/browser-log-writer';
import { useAppSelector } from 'app/store/storeHooks';
import {
selectSystemLogIsEnabled,
selectSystemLogLevel,
selectSystemLogNamespaces,
} from 'features/system/store/systemSlice';
import { useEffect, useMemo } from 'react';
import { ROARR, Roarr } from 'roarr';
import { useMemo } from 'react';
import type { LogNamespace } from './logger';
import { $logger, BASE_CONTEXT, LOG_LEVEL_MAP, logger } from './logger';
import { logger } from './logger';
export const useLogger = (namespace: LogNamespace) => {
const logLevel = useAppSelector(selectSystemLogLevel);
const logNamespaces = useAppSelector(selectSystemLogNamespaces);
const logIsEnabled = useAppSelector(selectSystemLogIsEnabled);
// The provided Roarr browser log writer uses localStorage to config logging to console
useEffect(() => {
if (logIsEnabled) {
// Enable console log output
localStorage.setItem('ROARR_LOG', 'true');
// Use a filter to show only logs of the given level
let filter = `context.logLevel:>=${LOG_LEVEL_MAP[logLevel]}`;
if (logNamespaces.length > 0) {
filter += ` AND (${logNamespaces.map((ns) => `context.namespace:${ns}`).join(' OR ')})`;
} else {
filter += ' AND context.namespace:undefined';
}
localStorage.setItem('ROARR_FILTER', filter);
} else {
// Disable console log output
localStorage.setItem('ROARR_LOG', 'false');
}
ROARR.write = createLogWriter();
}, [logLevel, logIsEnabled, logNamespaces]);
// Update the module-scoped logger context as needed
useEffect(() => {
// TODO: type this properly
//eslint-disable-next-line @typescript-eslint/no-explicit-any
const newContext: Record<string, any> = {
...BASE_CONTEXT,
};
$logger.set(Roarr.child(newContext));
}, []);
const log = useMemo(() => logger(namespace), [namespace]);
return log;
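After this simplification the hook only memoizes a namespaced child logger; all localStorage wiring now lives in configureLogging. Typical component usage (illustrative):

const log = useLogger('canvas');
log.debug('canvas ready'); // Roarr child logger: trace/debug/info/warn/error/fatal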


@@ -0,0 +1,43 @@
import { useStore } from '@nanostores/react';
import { $loggingOverrides, configureLogging } from 'app/logging/logger';
import { useAppSelector } from 'app/store/storeHooks';
import { useAssertSingleton } from 'common/hooks/useAssertSingleton';
import {
selectSystemLogIsEnabled,
selectSystemLogLevel,
selectSystemLogNamespaces,
} from 'features/system/store/systemSlice';
import { useLayoutEffect } from 'react';
/**
* This hook synchronizes the logging configuration stored in Redux with the logging system, which uses localStorage.
*
* The sync is one-way: from Redux to localStorage. This means that changes made in the UI will be reflected in the
* logging system, but changes made directly to localStorage will not be reflected in the UI.
*
* See {@link configureLogging}
*/
export const useSyncLoggingConfig = () => {
useAssertSingleton('useSyncLoggingConfig');
const loggingOverrides = useStore($loggingOverrides);
const logLevel = useAppSelector(selectSystemLogLevel);
const logNamespaces = useAppSelector(selectSystemLogNamespaces);
const logIsEnabled = useAppSelector(selectSystemLogIsEnabled);
useLayoutEffect(() => {
configureLogging(
loggingOverrides?.logIsEnabled ?? logIsEnabled,
loggingOverrides?.logLevel ?? logLevel,
loggingOverrides?.logNamespaces ?? logNamespaces
);
}, [
logIsEnabled,
logLevel,
logNamespaces,
loggingOverrides?.logIsEnabled,
loggingOverrides?.logLevel,
loggingOverrides?.logNamespaces,
]);
};
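The ?? chains above give a simple per-field precedence: a host override wins, otherwise the user's persisted setting applies. For example (values illustrative):

$loggingOverrides.set({ logLevel: 'error' });
// Next layout effect: configureLogging(userLogIsEnabled, 'error', userLogNamespaces)
// The host pins the level; the user still controls enablement and namespaces.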


@@ -2,12 +2,13 @@ import { createAction } from '@reduxjs/toolkit';
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { deepClone } from 'common/util/deepClone';
import { selectDefaultControlAdapter, selectDefaultIPAdapter } from 'features/controlLayers/hooks/addLayerHooks';
import { selectDefaultIPAdapter } from 'features/controlLayers/hooks/addLayerHooks';
import { getPrefixedId } from 'features/controlLayers/konva/util';
import {
controlLayerAdded,
entityRasterized,
entitySelected,
inpaintMaskAdded,
rasterLayerAdded,
referenceImageAdded,
referenceImageIPAdapterImageChanged,
@@ -17,11 +18,12 @@ import {
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import type {
CanvasControlLayerState,
CanvasInpaintMaskState,
CanvasRasterLayerState,
CanvasReferenceImageState,
CanvasRegionalGuidanceState,
} from 'features/controlLayers/store/types';
import { imageDTOToImageObject, imageDTOToImageWithDims } from 'features/controlLayers/store/util';
import { imageDTOToImageObject, imageDTOToImageWithDims, initialControlNet } from 'features/controlLayers/store/util';
import type { TypesafeDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import { isValidDrop } from 'features/dnd/util/isValidDrop';
import { imageToCompareChanged, selectionChanged } from 'features/gallery/store/gallerySlice';
@@ -110,6 +112,46 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
return;
}
/**
* Image dropped on Inpaint Mask
*/
if (
overData.actionType === 'ADD_INPAINT_MASK_FROM_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const imageObject = imageDTOToImageObject(activeData.payload.imageDTO);
const { x, y } = selectCanvasSlice(getState()).bbox.rect;
const overrides: Partial<CanvasInpaintMaskState> = {
objects: [imageObject],
position: { x, y },
};
dispatch(inpaintMaskAdded({ overrides, isSelected: true }));
return;
}
/**
* Image dropped on Regional Guidance
*/
if (
overData.actionType === 'ADD_REGIONAL_GUIDANCE_FROM_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const imageObject = imageDTOToImageObject(activeData.payload.imageDTO);
const { x, y } = selectCanvasSlice(getState()).bbox.rect;
const overrides: Partial<CanvasRegionalGuidanceState> = {
objects: [imageObject],
position: { x, y },
};
dispatch(rgAdded({ overrides, isSelected: true }));
return;
}
/**
* Image dropped on Raster layer
*/
@@ -121,11 +163,10 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
const state = getState();
const imageObject = imageDTOToImageObject(activeData.payload.imageDTO);
const { x, y } = selectCanvasSlice(state).bbox.rect;
const defaultControlAdapter = selectDefaultControlAdapter(state);
const overrides: Partial<CanvasControlLayerState> = {
objects: [imageObject],
position: { x, y },
controlAdapter: defaultControlAdapter,
controlAdapter: deepClone(initialControlNet),
};
dispatch(controlLayerAdded({ overrides, isSelected: true }));
return;


@@ -164,7 +164,7 @@ const handleVAEModels: ModelHandler = (models, state, dispatch, log) => {
// We have a VAE selected, need to check if it is available
// Grab just the VAE models
const vaeModels = models.filter(isNonFluxVAEModelConfig);
const vaeModels = models.filter((m) => isNonFluxVAEModelConfig(m));
// If the current VAE model is available, we don't need to do anything
if (vaeModels.some((m) => m.key === selectedVAEModel.key)) {
@@ -297,7 +297,7 @@ const handleUpscaleModel: ModelHandler = (models, state, dispatch, log) => {
const handleT5EncoderModels: ModelHandler = (models, state, dispatch, log) => {
const selectedT5EncoderModel = state.params.t5EncoderModel;
const t5EncoderModels = models.filter(isT5EncoderModelConfig);
const t5EncoderModels = models.filter((m) => isT5EncoderModelConfig(m));
// If the currently selected model is available, we don't need to do anything
if (selectedT5EncoderModel && t5EncoderModels.some((m) => m.key === selectedT5EncoderModel.key)) {
@@ -325,7 +325,7 @@ const handleT5EncoderModels: ModelHandler = (models, state, dispatch, log) => {
const handleCLIPEmbedModels: ModelHandler = (models, state, dispatch, log) => {
const selectedCLIPEmbedModel = state.params.clipEmbedModel;
const CLIPEmbedModels = models.filter(isCLIPEmbedModelConfig);
const CLIPEmbedModels = models.filter((m) => isCLIPEmbedModelConfig(m));
// If the currently selected model is available, we don't need to do anything
if (selectedCLIPEmbedModel && CLIPEmbedModels.some((m) => m.key === selectedCLIPEmbedModel.key)) {
@@ -353,7 +353,7 @@ const handleCLIPEmbedModels: ModelHandler = (models, state, dispatch, log) => {
const handleFLUXVAEModels: ModelHandler = (models, state, dispatch, log) => {
const selectedFLUXVAEModel = state.params.fluxVAE;
const fluxVAEModels = models.filter(isFluxVAEModelConfig);
const fluxVAEModels = models.filter((m) => isFluxVAEModelConfig(m));
// If the currently selected model is available, we don't need to do anything
if (selectedFLUXVAEModel && fluxVAEModels.some((m) => m.key === selectedFLUXVAEModel.key)) {


@@ -4,6 +4,8 @@ import { atom } from 'nanostores';
/**
* A fallback non-writable atom that always returns `false`, used when a nanostores atom is only conditionally available
* in a hook or component.
*
* @knipignore
*/
export const $false: ReadableAtom<boolean> = atom(false);
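Because React hooks must be called unconditionally, $false lets a hook always call useStore even when the real atom is absent. A minimal sketch (the hook and the optional atom are hypothetical):

import { useStore } from '@nanostores/react';
import type { ReadableAtom } from 'nanostores';
// $false is the export above; its module path is not shown in this diff.

const useIsProcessing = (maybeAtom?: ReadableAtom<boolean>): boolean => {
  // Always call useStore; substitute the constant fallback instead of branching.
  return useStore(maybeAtom ?? $false);
};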
/**


@@ -1,7 +1,7 @@
import type { FilterType } from 'features/controlLayers/store/filters';
import type { ParameterPrecision, ParameterScheduler } from 'features/parameters/types/parameterSchemas';
import type { TabName } from 'features/ui/store/uiTypes';
import type { O } from 'ts-toolbelt';
import type { PartialDeep } from 'type-fest';
/**
* A disable-able application feature
@@ -119,4 +119,4 @@ export type AppConfig = {
};
};
export type PartialAppConfig = O.Partial<AppConfig, 'deep'>;
export type PartialAppConfig = PartialDeep<AppConfig>;
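PartialDeep from type-fest is a drop-in for ts-toolbelt's O.Partial<..., 'deep'>: every property becomes optional recursively, not just at the top level. A quick illustration with a made-up stand-in type:

import type { PartialDeep } from 'type-fest';

type Example = { canvas: { snapToGrid: boolean; grid: { size: number } } };

const a: PartialDeep<Example> = {};                       // ok: everything optional
const b: PartialDeep<Example> = { canvas: { grid: {} } }; // ok: nested keys optional too
// const c: Partial<Example> = { canvas: { grid: {} } };  // error: shallow Partial still
//                                                        // requires canvas.snapToGrid and grid.size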


@@ -26,5 +26,9 @@ export const IconMenuItem = ({ tooltip, icon, ...props }: Props) => {
};
export const IconMenuItemGroup = ({ children }: { children: ReactNode }) => {
return <Flex gap={2}>{children}</Flex>;
return (
<Flex gap={2} justifyContent="space-between">
{children}
</Flex>
);
};


@@ -1,5 +1,6 @@
import type { PopoverProps } from '@invoke-ai/ui-library';
import commercialLicenseBg from 'public/assets/images/commercial-license-bg.png';
import denoisingStrength from 'public/assets/images/denoising-strength.png';
export type Feature =
| 'clipSkip'
@@ -23,8 +24,10 @@ export type Feature =
| 'dynamicPrompts'
| 'dynamicPromptsMaxPrompts'
| 'dynamicPromptsSeedBehaviour'
| 'globalReferenceImage'
| 'imageFit'
| 'infillMethod'
| 'inpainting'
| 'ipAdapterMethod'
| 'lora'
| 'loraWeight'
@@ -46,6 +49,7 @@ export type Feature =
| 'paramVAEPrecision'
| 'paramWidth'
| 'patchmatchDownScaleSize'
| 'rasterLayer'
| 'refinerModel'
| 'refinerNegativeAestheticScore'
| 'refinerPositiveAestheticScore'
@@ -53,6 +57,9 @@ export type Feature =
| 'refinerStart'
| 'refinerSteps'
| 'refinerCfgScale'
| 'regionalGuidance'
| 'regionalGuidanceAndReferenceImage'
| 'regionalReferenceImage'
| 'scaleBeforeProcessing'
| 'seamlessTilingXAxis'
| 'seamlessTilingYAxis'
@@ -76,6 +83,24 @@ export const POPOVER_DATA: { [key in Feature]?: PopoverData } = {
clipSkip: {
href: 'https://support.invoke.ai/support/solutions/articles/151000178161-advanced-settings',
},
inpainting: {
href: 'https://support.invoke.ai/support/solutions/articles/151000096702-inpainting-outpainting-and-bounding-box',
},
rasterLayer: {
href: 'https://support.invoke.ai/support/solutions/articles/151000094998-raster-layers-and-initial-images',
},
regionalGuidance: {
href: 'https://support.invoke.ai/support/solutions/articles/151000165024-regional-guidance-layers',
},
regionalGuidanceAndReferenceImage: {
href: 'https://support.invoke.ai/support/solutions/articles/151000165024-regional-guidance-layers',
},
globalReferenceImage: {
href: 'https://support.invoke.ai/support/solutions/articles/151000159340-global-and-regional-reference-images-ip-adapters-',
},
regionalReferenceImage: {
href: 'https://support.invoke.ai/support/solutions/articles/151000159340-global-and-regional-reference-images-ip-adapters-',
},
controlNet: {
href: 'https://support.invoke.ai/support/solutions/articles/151000105880',
},
@@ -101,7 +126,7 @@ export const POPOVER_DATA: { [key in Feature]?: PopoverData } = {
href: 'https://support.invoke.ai/support/solutions/articles/151000158838-compositing-settings',
},
infillMethod: {
href: 'https://support.invoke.ai/support/solutions/articles/151000158841-infill-and-scaling',
href: 'https://support.invoke.ai/support/solutions/articles/151000158838-compositing-settings',
},
scaleBeforeProcessing: {
href: 'https://support.invoke.ai/support/solutions/articles/151000158841',
@@ -114,6 +139,7 @@ export const POPOVER_DATA: { [key in Feature]?: PopoverData } = {
},
paramDenoisingStrength: {
href: 'https://support.invoke.ai/support/solutions/articles/151000094998-image-to-image',
image: denoisingStrength,
},
paramHrf: {
href: 'https://support.invoke.ai/support/solutions/articles/151000096700-how-can-i-get-larger-images-what-does-upscaling-do-',


@@ -0,0 +1,57 @@
type Props = {
/**
* The amplitude of the wave. 0 is a straight line, higher values create more pronounced waves.
*/
amplitude: number;
/**
* The number of segments in the line. More segments create a smoother wave.
*/
segments?: number;
/**
* The color of the wave.
*/
stroke: string;
/**
* The width of the wave.
*/
strokeWidth: number;
/**
* The width of the SVG.
*/
width: number;
/**
* The height of the SVG.
*/
height: number;
};
const WavyLine = ({ amplitude, stroke, strokeWidth, width, height, segments = 5 }: Props) => {
// Calculate the path dynamically based on waviness
const generatePath = () => {
if (amplitude === 0) {
// If waviness is 0, return a straight line
return `M0,${height / 2} L${width},${height / 2}`;
}
const clampedAmplitude = Math.min(height / 2, amplitude); // Cap amplitude to half the height
const segmentWidth = width / segments;
let path = `M0,${height / 2}`; // Start in the middle of the left edge
// Loop through each segment and alternate the y position to create waves
for (let i = 1; i <= segments; i++) {
const x = i * segmentWidth;
const y = height / 2 + (i % 2 === 0 ? clampedAmplitude : -clampedAmplitude);
path += ` Q${x - segmentWidth / 2},${y} ${x},${height / 2}`;
}
return path;
};
return (
<svg width={width} height={height} viewBox={`0 0 ${width} ${height}`} xmlns="http://www.w3.org/2000/svg">
<path d={generatePath()} fill="none" stroke={stroke} strokeWidth={strokeWidth} />
</svg>
);
};
export default WavyLine;
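A usage sketch for the new component. Per the commit messages it backs the denoising-strength visualization, so amplitude is derived from strength here (the mapping is illustrative):

// Hypothetical: higher denoise -> wavier line. WavyLine itself caps the
// amplitude at height / 2, so large strengths stay in bounds.
const DenoiseWave = ({ strength }: { strength: number }) => (
  <WavyLine amplitude={strength * 10} segments={5} stroke="currentColor" strokeWidth={1} width={120} height={16} />
);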


@@ -127,8 +127,6 @@ export const buildUseDisclosure = (defaultIsOpen: boolean): [() => UseDisclosure
*
* Hook to manage a boolean state. Use this for a local boolean state.
* @param defaultIsOpen Initial state of the disclosure
*
* @knipignore
*/
export const useDisclosure = (defaultIsOpen: boolean): UseDisclosure => {
const [isOpen, set] = useState(defaultIsOpen);


@@ -4,6 +4,7 @@ import { useAppSelector } from 'app/store/storeHooks';
import type { GroupBase } from 'chakra-react-select';
import { selectParamsSlice } from 'features/controlLayers/store/paramsSlice';
import type { ModelIdentifierField } from 'features/nodes/types/common';
import { selectSystemShouldEnableModelDescriptions } from 'features/system/store/systemSlice';
import { groupBy, reduce } from 'lodash-es';
import { useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
@@ -37,6 +38,7 @@ export const useGroupedModelCombobox = <T extends AnyModelConfig>(
): UseGroupedModelComboboxReturn => {
const { t } = useTranslation();
const base = useAppSelector(selectBaseWithSDXLFallback);
const shouldShowModelDescriptions = useAppSelector(selectSystemShouldEnableModelDescriptions);
const { modelConfigs, selectedModel, getIsDisabled, onChange, isLoading, groupByType = false } = arg;
const options = useMemo<GroupBase<ComboboxOption>[]>(() => {
if (!modelConfigs) {
@@ -51,6 +53,7 @@ export const useGroupedModelCombobox = <T extends AnyModelConfig>(
options: val.map((model) => ({
label: model.name,
value: model.key,
description: (shouldShowModelDescriptions && model.description) || undefined,
isDisabled: getIsDisabled ? getIsDisabled(model) : false,
})),
});
@@ -60,7 +63,7 @@ export const useGroupedModelCombobox = <T extends AnyModelConfig>(
);
_options.sort((a) => (a.label?.split('/')[0]?.toLowerCase().includes(base) ? -1 : 1));
return _options;
}, [modelConfigs, groupByType, getIsDisabled, base]);
}, [modelConfigs, groupByType, getIsDisabled, base, shouldShowModelDescriptions]);
const value = useMemo(
() =>
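For clarity, a hedged sketch of the GroupBase<ComboboxOption>[] structure this hook builds; group labels come from the groupBy key (model type or base), and all values here are illustrative:
const grouped: GroupBase<ComboboxOption>[] = [
{
label: 'sdxl',
options: [
{
label: 'Juggernaut XL', // model.name (illustrative)
value: 'model-key-1', // model.key (illustrative)
description: 'Photorealistic SDXL checkpoint', // present only when the setting is enabled
isDisabled: false,
},
],
},
];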

View File

@@ -202,46 +202,6 @@ const createSelector = (
if (controlLayer.controlAdapter.model?.base !== model?.base) {
problems.push(i18n.t('parameters.invoke.layer.controlAdapterIncompatibleBaseModel'));
}
// T2I Adapters require image dimensions to be multiples of 64 (SD1.5) or 32 (SDXL)
if (controlLayer.controlAdapter.type === 't2i_adapter') {
const multiple = model?.base === 'sdxl' ? 32 : 64;
if (bbox.scaleMethod === 'none') {
if (bbox.rect.width % 16 !== 0) {
reasons.push({
content: i18n.t('parameters.invoke.layer.t2iAdapterIncompatibleBboxWidth', {
multiple,
width: bbox.rect.width,
}),
});
}
if (bbox.rect.height % 16 !== 0) {
reasons.push({
content: i18n.t('parameters.invoke.layer.t2iAdapterIncompatibleBboxHeight', {
multiple,
height: bbox.rect.height,
}),
});
}
} else {
if (bbox.scaledSize.width % 16 !== 0) {
reasons.push({
content: i18n.t('parameters.invoke.layer.t2iAdapterIncompatibleScaledBboxWidth', {
multiple,
width: bbox.scaledSize.width,
}),
});
}
if (bbox.scaledSize.height % 16 !== 0) {
reasons.push({
content: i18n.t('parameters.invoke.layer.t2iAdapterIncompatibleScaledBboxHeight', {
multiple,
height: bbox.scaledSize.height,
}),
});
}
}
}
if (problems.length) {
const content = upperFirst(problems.join(', '));
reasons.push({ prefix, content });

View File

@@ -1,5 +1,7 @@
import type { ComboboxOnChange, ComboboxOption } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import type { ModelIdentifierField } from 'features/nodes/types/common';
import { selectSystemShouldEnableModelDescriptions } from 'features/system/store/systemSlice';
import { useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import type { AnyModelConfig } from 'services/api/types';
@@ -24,13 +26,16 @@ type UseModelComboboxReturn = {
export const useModelCombobox = <T extends AnyModelConfig>(arg: UseModelComboboxArg<T>): UseModelComboboxReturn => {
const { t } = useTranslation();
const { modelConfigs, selectedModel, getIsDisabled, onChange, isLoading, optionsFilter = () => true } = arg;
const shouldShowModelDescriptions = useAppSelector(selectSystemShouldEnableModelDescriptions);
const options = useMemo<ComboboxOption[]>(() => {
return modelConfigs.filter(optionsFilter).map((model) => ({
label: model.name,
value: model.key,
description: (shouldShowModelDescriptions && model.description) || undefined,
isDisabled: getIsDisabled ? getIsDisabled(model) : false,
}));
}, [optionsFilter, getIsDisabled, modelConfigs]);
}, [optionsFilter, getIsDisabled, modelConfigs, shouldShowModelDescriptions]);
const value = useMemo(
() => options.find((m) => (selectedModel ? m.value === selectedModel.key : false)),
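Worth noting: `(shouldShowModelDescriptions && model.description) || undefined` resolves to the description string only when the setting is enabled and the description is non-empty; in every other case the option gets undefined rather than false or ''. A hedged sketch of one resulting option (field values illustrative):
const option: ComboboxOption = {
label: 'My Fine-Tune', // model.name
value: 'model-key-123', // model.key
description: undefined, // setting disabled, or the model has no description
isDisabled: false,
};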

View File

@@ -0,0 +1,161 @@
import type { MenuButtonProps, MenuItemProps, MenuListProps, MenuProps } from '@invoke-ai/ui-library';
import { Box, Flex, Icon, Text } from '@invoke-ai/ui-library';
import { useDisclosure } from 'common/hooks/useBoolean';
import type { FocusEventHandler, PointerEvent, RefObject } from 'react';
import { useCallback, useEffect, useRef } from 'react';
import { PiCaretRightBold } from 'react-icons/pi';
import { useDebouncedCallback } from 'use-debounce';
const offset: [number, number] = [0, 8];
type UseSubMenuReturn = {
parentMenuItemProps: Partial<MenuItemProps>;
menuProps: Partial<MenuProps>;
menuButtonProps: Partial<MenuButtonProps>;
menuListProps: Partial<MenuListProps> & { ref: RefObject<HTMLDivElement> };
};
/**
* A hook that provides the necessary props to create a sub-menu within a menu.
*
* The sub-menu should be wrapped inside a parent `MenuItem` component.
*
* Use SubMenuButtonContent to render a button with a label and a right caret icon.
*
* TODO(psyche): Add keyboard handling for sub-menu.
*
* @example
* ```tsx
* const SubMenuExample = () => {
* const subMenu = useSubMenu();
* return (
* <Menu>
* <MenuButton>Open Parent Menu</MenuButton>
* <MenuList>
* <MenuItem>Parent Item 1</MenuItem>
* <MenuItem>Parent Item 2</MenuItem>
* <MenuItem>Parent Item 3</MenuItem>
* <MenuItem {...subMenu.parentMenuItemProps} icon={<PiImageBold />}>
* <Menu {...subMenu.menuProps}>
* <MenuButton {...subMenu.menuButtonProps}>
* <SubMenuButtonContent label="Open Sub Menu" />
* </MenuButton>
* <MenuList {...subMenu.menuListProps}>
* <MenuItem>Sub Item 1</MenuItem>
* <MenuItem>Sub Item 2</MenuItem>
* <MenuItem>Sub Item 3</MenuItem>
* </MenuList>
* </Menu>
* </MenuItem>
* </MenuList>
* </Menu>
* );
* };
* ```
*/
export const useSubMenu = (): UseSubMenuReturn => {
const subMenu = useDisclosure(false);
const menuListRef = useRef<HTMLDivElement>(null);
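// Closing is debounced by 300ms: leaving the sub-menu starts a grace period during which re-entering cancels the close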
const closeDebounced = useDebouncedCallback(subMenu.close, 300);
const openAndCancelPendingClose = useCallback(() => {
closeDebounced.cancel();
subMenu.open();
}, [closeDebounced, subMenu]);
const toggleAndCancelPendingClose = useCallback(() => {
if (subMenu.isOpen) {
subMenu.close();
return;
} else {
closeDebounced.cancel();
subMenu.toggle();
}
}, [closeDebounced, subMenu]);
const onBlurMenuList = useCallback<FocusEventHandler<HTMLDivElement>>(
(e) => {
// Don't close the sub-menu if focus is moving to a child element - e.g. from one sub-menu item to another
if (e.currentTarget.contains(e.relatedTarget)) {
closeDebounced.cancel();
return;
}
subMenu.close();
},
[closeDebounced, subMenu]
);
const onParentMenuItemPointerLeave = useCallback(
(e: PointerEvent<HTMLButtonElement>) => {
/**
* The pointerleave event is triggered when the pen or touch device is lifted, which would close the sub-menu.
* However, we want to keep the sub-menu open until the pen or touch device taps some other element. That case is
* handled in the useEffect below - here we just ignore the pointerleave event for pen and touch devices.
*/
if (e.pointerType === 'pen' || e.pointerType === 'touch') {
return;
}
subMenu.close();
},
[subMenu]
);
/**
* When using a mouse, pointerleave events close the sub-menu. With a pen or touch device, however, the sub-menu
* must close when the user taps outside of the menu list, so we listen for clicks outside the list and close it
* accordingly.
*/
useEffect(() => {
const el = menuListRef.current;
if (!el) {
return;
}
const controller = new AbortController();
window.addEventListener(
'click',
(e) => {
if (menuListRef.current?.contains(e.target as Node)) {
return;
}
subMenu.close();
},
{ signal: controller.signal }
);
return () => {
controller.abort();
};
}, [subMenu]);
return {
parentMenuItemProps: {
onClick: toggleAndCancelPendingClose,
onPointerEnter: openAndCancelPendingClose,
onPointerLeave: onParentMenuItemPointerLeave,
closeOnSelect: false,
},
menuProps: {
isOpen: subMenu.isOpen,
onClose: subMenu.close,
placement: 'right',
offset: offset,
closeOnBlur: false,
},
menuButtonProps: {
as: Box,
width: 'full',
height: 'full',
},
menuListProps: {
ref: menuListRef,
onPointerEnter: openAndCancelPendingClose,
onPointerLeave: closeDebounced,
onBlur: onBlurMenuList,
},
};
};
export const SubMenuButtonContent = ({ label }: { label: string }) => {
return (
<Flex w="full" h="full" flexDir="row" justifyContent="space-between" alignItems="center">
<Text>{label}</Text>
<Icon as={PiCaretRightBold} />
</Flex>
);
};

View File

@@ -1,4 +1,12 @@
type SerializableValue = string | number | boolean | null | undefined | SerializableValue[] | SerializableObject;
type SerializableValue =
| string
| number
| boolean
| null
| undefined
| SerializableValue[]
| readonly SerializableValue[]
| SerializableObject;
export type SerializableObject = {
[k: string | number]: SerializableValue;
};
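A hedged example of what the widened union now admits: `as const` arrays are inferred as readonly tuples, which previously failed to satisfy the mutable SerializableValue[] member (field names are illustrative):
const extensions = ['png', 'jpg'] as const; // readonly ['png', 'jpg']
const payload: SerializableObject = {
name: 'export-job',
count: 2,
extensions, // accepted now that readonly SerializableValue[] is part of the union
};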

View File

@@ -0,0 +1,15 @@
import type { CSSProperties } from 'react';
/**
* Chakra's Tooltip has a flaw in how it finds the nearest scroll parent: it assumes the first ancestor with
* `overflow: hidden` is the scroll parent. In this case, the Collapse component has that style but isn't scrollable
* itself, so the tooltip does not close on scroll - the actual scrolling happens higher up in the DOM.
*
* As a hacky workaround, we can set the overflow to `visible`, which allows the scroll parent search to continue up to
* the actual scroll parent (in this case, the OverlayScrollbarsComponent in BoardsListWrapper).
*
* See: https://github.com/chakra-ui/chakra-ui/issues/7871#issuecomment-2453780958
*/
export const fixTooltipCloseOnScrollStyles: CSSProperties = {
overflow: 'visible',
};
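A hedged usage sketch, assuming Collapse is the Chakra component named in the comment above and that it forwards the style prop; the surrounding JSX is illustrative:
import { Collapse } from '@chakra-ui/react';
import type { PropsWithChildren } from 'react';
const CollapsibleSection = ({ isOpen, children }: PropsWithChildren<{ isOpen: boolean }>) => (
<Collapse in={isOpen} style={fixTooltipCloseOnScrollStyles}>
{/* overflow: visible lets Chakra's scroll-parent search continue past this element */}
{children}
</Collapse>
);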

View File

@@ -1,5 +1,6 @@
import { Button, Flex, Heading } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { InformationalPopover } from 'common/components/InformationalPopover/InformationalPopover';
import {
useAddControlLayer,
useAddGlobalReferenceImage,
@@ -28,70 +29,80 @@ export const CanvasAddEntityButtons = memo(() => {
<Flex position="relative" flexDir="column" gap={4} top="20%">
<Flex flexDir="column" justifyContent="flex-start" gap={2}>
<Heading size="xs">{t('controlLayers.global')}</Heading>
<Button
size="sm"
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addGlobalReferenceImage}
isDisabled={isFLUX}
>
{t('controlLayers.globalReferenceImage')}
</Button>
<InformationalPopover feature="globalReferenceImage">
<Button
size="sm"
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addGlobalReferenceImage}
>
{t('controlLayers.globalReferenceImage')}
</Button>
</InformationalPopover>
</Flex>
<Flex flexDir="column" gap={2}>
<Heading size="xs">{t('controlLayers.regional')}</Heading>
<Button
size="sm"
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addInpaintMask}
>
{t('controlLayers.inpaintMask')}
</Button>
<Button
size="sm"
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addRegionalGuidance}
isDisabled={isFLUX}
>
{t('controlLayers.regionalGuidance')}
</Button>
<Button
size="sm"
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addRegionalReferenceImage}
isDisabled={isFLUX}
>
{t('controlLayers.regionalReferenceImage')}
</Button>
<InformationalPopover feature="inpainting">
<Button
size="sm"
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addInpaintMask}
>
{t('controlLayers.inpaintMask')}
</Button>
</InformationalPopover>
<InformationalPopover feature="regionalGuidance">
<Button
size="sm"
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addRegionalGuidance}
isDisabled={isFLUX}
>
{t('controlLayers.regionalGuidance')}
</Button>
</InformationalPopover>
<InformationalPopover feature="regionalReferenceImage">
<Button
size="sm"
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addRegionalReferenceImage}
isDisabled={isFLUX}
>
{t('controlLayers.regionalReferenceImage')}
</Button>
</InformationalPopover>
</Flex>
<Flex flexDir="column" justifyContent="flex-start" gap={2}>
<Heading size="xs">{t('controlLayers.layer_other')}</Heading>
<Button
size="sm"
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addControlLayer}
>
{t('controlLayers.controlLayer')}
</Button>
<Button
size="sm"
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addRasterLayer}
>
{t('controlLayers.rasterLayer')}
</Button>
<InformationalPopover feature="controlNet">
<Button
size="sm"
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addControlLayer}
>
{t('controlLayers.controlLayer')}
</Button>
</InformationalPopover>
<InformationalPopover feature="rasterLayer">
<Button
size="sm"
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addRasterLayer}
>
{t('controlLayers.rasterLayer')}
</Button>
</InformationalPopover>
</Flex>
</Flex>
</Flex>

View File

@@ -13,7 +13,7 @@ export const CanvasAlertsPreserveMask = memo(() => {
}
return (
<Alert status="warning" borderRadius="base" fontSize="sm" shadow="md" w="fit-content" alignSelf="flex-end">
<Alert status="warning" borderRadius="base" fontSize="sm" shadow="md" w="fit-content">
<AlertIcon />
<AlertTitle>{t('controlLayers.settings.preserveMask.alert')}</AlertTitle>
</Alert>

View File

@@ -98,7 +98,7 @@ const CanvasAlertsSelectedEntityStatusContent = memo(({ entityIdentifier, adapte
}
return (
<Alert status={alert.status} borderRadius="base" fontSize="sm" shadow="md" w="fit-content" alignSelf="flex-end">
<Alert status={alert.status} borderRadius="base" fontSize="sm" shadow="md" w="fit-content">
<AlertIcon />
<AlertTitle>{alert.title}</AlertTitle>
</Alert>

View File

@@ -132,7 +132,6 @@ const AlertWrapper = ({
fontSize="sm"
shadow="md"
w="fit-content"
alignSelf="flex-end"
>
<Flex w="full" alignItems="center">
<AlertIcon />

View File

@@ -0,0 +1,24 @@
import { FormControl, FormLabel, Switch } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { selectAutoProcess, settingsAutoProcessToggled } from 'features/controlLayers/store/canvasSettingsSlice';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
export const CanvasAutoProcessSwitch = memo(() => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const autoProcess = useAppSelector(selectAutoProcess);
const onChange = useCallback(() => {
dispatch(settingsAutoProcessToggled());
}, [dispatch]);
return (
<FormControl w="min-content">
<FormLabel m={0}>{t('controlLayers.filter.autoProcess')}</FormLabel>
<Switch size="sm" isChecked={autoProcess} onChange={onChange} />
</FormControl>
);
});
CanvasAutoProcessSwitch.displayName = 'CanvasAutoProcessSwitch';

View File

@@ -1,4 +1,5 @@
import { MenuGroup, MenuItem } from '@invoke-ai/ui-library';
import { Menu, MenuButton, MenuGroup, MenuItem, MenuList } from '@invoke-ai/ui-library';
import { SubMenuButtonContent, useSubMenu } from 'common/hooks/useSubMenu';
import { CanvasContextMenuItemsCropCanvasToBbox } from 'features/controlLayers/components/CanvasContextMenu/CanvasContextMenuItemsCropCanvasToBbox';
import { NewLayerIcon } from 'features/controlLayers/components/common/icons';
import {
@@ -16,6 +17,8 @@ import { PiFloppyDiskBold } from 'react-icons/pi';
export const CanvasContextMenuGlobalMenuItems = memo(() => {
const { t } = useTranslation();
const saveSubMenu = useSubMenu();
const newSubMenu = useSubMenu();
const isBusy = useCanvasIsBusy();
const saveCanvasToGallery = useSaveCanvasToGallery();
const saveBboxToGallery = useSaveBboxToGallery();
@@ -28,27 +31,41 @@ export const CanvasContextMenuGlobalMenuItems = memo(() => {
<>
<MenuGroup title={t('controlLayers.canvasContextMenu.canvasGroup')}>
<CanvasContextMenuItemsCropCanvasToBbox />
</MenuGroup>
<MenuGroup title={t('controlLayers.canvasContextMenu.saveToGalleryGroup')}>
<MenuItem icon={<PiFloppyDiskBold />} isDisabled={isBusy} onClick={saveCanvasToGallery}>
{t('controlLayers.canvasContextMenu.saveCanvasToGallery')}
<MenuItem {...saveSubMenu.parentMenuItemProps} icon={<PiFloppyDiskBold />}>
<Menu {...saveSubMenu.menuProps}>
<MenuButton {...saveSubMenu.menuButtonProps}>
<SubMenuButtonContent label={t('controlLayers.canvasContextMenu.saveToGalleryGroup')} />
</MenuButton>
<MenuList {...saveSubMenu.menuListProps}>
<MenuItem icon={<PiFloppyDiskBold />} isDisabled={isBusy} onClick={saveCanvasToGallery}>
{t('controlLayers.canvasContextMenu.saveCanvasToGallery')}
</MenuItem>
<MenuItem icon={<PiFloppyDiskBold />} isDisabled={isBusy} onClick={saveBboxToGallery}>
{t('controlLayers.canvasContextMenu.saveBboxToGallery')}
</MenuItem>
</MenuList>
</Menu>
</MenuItem>
<MenuItem icon={<PiFloppyDiskBold />} isDisabled={isBusy} onClick={saveBboxToGallery}>
{t('controlLayers.canvasContextMenu.saveBboxToGallery')}
</MenuItem>
</MenuGroup>
<MenuGroup title={t('controlLayers.canvasContextMenu.bboxGroup')}>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onClick={newGlobalReferenceImageFromBbox}>
{t('controlLayers.canvasContextMenu.newGlobalReferenceImage')}
</MenuItem>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onClick={newRegionalReferenceImageFromBbox}>
{t('controlLayers.canvasContextMenu.newRegionalReferenceImage')}
</MenuItem>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onClick={newControlLayerFromBbox}>
{t('controlLayers.canvasContextMenu.newControlLayer')}
</MenuItem>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onClick={newRasterLayerFromBbox}>
{t('controlLayers.canvasContextMenu.newRasterLayer')}
<MenuItem {...newSubMenu.parentMenuItemProps} icon={<NewLayerIcon />}>
<Menu {...newSubMenu.menuProps}>
<MenuButton {...newSubMenu.menuButtonProps}>
<SubMenuButtonContent label={t('controlLayers.canvasContextMenu.bboxGroup')} />
</MenuButton>
<MenuList {...newSubMenu.menuListProps}>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onClick={newGlobalReferenceImageFromBbox}>
{t('controlLayers.canvasContextMenu.newGlobalReferenceImage')}
</MenuItem>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onClick={newRegionalReferenceImageFromBbox}>
{t('controlLayers.canvasContextMenu.newRegionalReferenceImage')}
</MenuItem>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onClick={newControlLayerFromBbox}>
{t('controlLayers.canvasContextMenu.newControlLayer')}
</MenuItem>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onClick={newRasterLayerFromBbox}>
{t('controlLayers.canvasContextMenu.newRasterLayer')}
</MenuItem>
</MenuList>
</Menu>
</MenuItem>
</MenuGroup>
</>

View File

@@ -1,39 +1,43 @@
import { MenuGroup } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { CanvasEntityMenuItemsCopyToClipboard } from 'features/controlLayers/components/common/CanvasEntityMenuItemsCopyToClipboard';
import { CanvasEntityMenuItemsCropToBbox } from 'features/controlLayers/components/common/CanvasEntityMenuItemsCropToBbox';
import { CanvasEntityMenuItemsDelete } from 'features/controlLayers/components/common/CanvasEntityMenuItemsDelete';
import { CanvasEntityMenuItemsFilter } from 'features/controlLayers/components/common/CanvasEntityMenuItemsFilter';
import { CanvasEntityMenuItemsSave } from 'features/controlLayers/components/common/CanvasEntityMenuItemsSave';
import { CanvasEntityMenuItemsTransform } from 'features/controlLayers/components/common/CanvasEntityMenuItemsTransform';
import { ControlLayerMenuItems } from 'features/controlLayers/components/ControlLayer/ControlLayerMenuItems';
import { InpaintMaskMenuItems } from 'features/controlLayers/components/InpaintMask/InpaintMaskMenuItems';
import { IPAdapterMenuItems } from 'features/controlLayers/components/IPAdapter/IPAdapterMenuItems';
import { RasterLayerMenuItems } from 'features/controlLayers/components/RasterLayer/RasterLayerMenuItems';
import { RegionalGuidanceMenuItems } from 'features/controlLayers/components/RegionalGuidance/RegionalGuidanceMenuItems';
import {
EntityIdentifierContext,
useEntityIdentifierContext,
} from 'features/controlLayers/contexts/EntityIdentifierContext';
import { useEntityTitle } from 'features/controlLayers/hooks/useEntityTitle';
import { useEntityTypeString } from 'features/controlLayers/hooks/useEntityTypeString';
import { selectSelectedEntityIdentifier } from 'features/controlLayers/store/selectors';
import {
isFilterableEntityIdentifier,
isSaveableEntityIdentifier,
isTransformableEntityIdentifier,
} from 'features/controlLayers/store/types';
import type { PropsWithChildren } from 'react';
import { memo } from 'react';
import type { Equals } from 'tsafe';
import { assert } from 'tsafe';
const CanvasContextMenuSelectedEntityMenuItemsContent = memo(() => {
const entityIdentifier = useEntityIdentifierContext();
const title = useEntityTitle(entityIdentifier);
return (
<MenuGroup title={title}>
{isFilterableEntityIdentifier(entityIdentifier) && <CanvasEntityMenuItemsFilter />}
{isTransformableEntityIdentifier(entityIdentifier) && <CanvasEntityMenuItemsTransform />}
{isSaveableEntityIdentifier(entityIdentifier) && <CanvasEntityMenuItemsCopyToClipboard />}
{isSaveableEntityIdentifier(entityIdentifier) && <CanvasEntityMenuItemsSave />}
{isTransformableEntityIdentifier(entityIdentifier) && <CanvasEntityMenuItemsCropToBbox />}
<CanvasEntityMenuItemsDelete />
</MenuGroup>
);
if (entityIdentifier.type === 'raster_layer') {
return <RasterLayerMenuItems />;
}
if (entityIdentifier.type === 'control_layer') {
return <ControlLayerMenuItems />;
}
if (entityIdentifier.type === 'inpaint_mask') {
return <InpaintMaskMenuItems />;
}
if (entityIdentifier.type === 'regional_guidance') {
return <RegionalGuidanceMenuItems />;
}
if (entityIdentifier.type === 'reference_image') {
return <IPAdapterMenuItems />;
}
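// Compile-time exhaustiveness check: adding a new entity type without a branch above makes this line fail to typecheck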
assert<Equals<typeof entityIdentifier.type, never>>(false);
});
CanvasContextMenuSelectedEntityMenuItemsContent.displayName = 'CanvasContextMenuSelectedEntityMenuItemsContent';
export const CanvasContextMenuSelectedEntityMenuItems = memo(() => {
@@ -45,9 +49,20 @@ export const CanvasContextMenuSelectedEntityMenuItems = memo(() => {
return (
<EntityIdentifierContext.Provider value={selectedEntityIdentifier}>
<CanvasContextMenuSelectedEntityMenuItemsContent />
<CanvasContextMenuSelectedEntityMenuGroup>
<CanvasContextMenuSelectedEntityMenuItemsContent />
</CanvasContextMenuSelectedEntityMenuGroup>
</EntityIdentifierContext.Provider>
);
});
CanvasContextMenuSelectedEntityMenuItems.displayName = 'CanvasContextMenuSelectedEntityMenuItems';
const CanvasContextMenuSelectedEntityMenuGroup = memo((props: PropsWithChildren) => {
const entityIdentifier = useEntityIdentifierContext();
const title = useEntityTypeString(entityIdentifier.type);
return <MenuGroup title={title}>{props.children}</MenuGroup>;
});
CanvasContextMenuSelectedEntityMenuGroup.displayName = 'CanvasContextMenuSelectedEntityMenuGroup';

View File

@@ -62,6 +62,7 @@ export const CanvasDropArea = memo(() => {
data={addControlLayerFromImageDropData}
/>
</GridItem>
<GridItem position="relative">
<IAIDroppable
dropLabel={t('controlLayers.canvasContextMenu.newRegionalReferenceImage')}

View File

@@ -29,7 +29,7 @@ export const EntityListGlobalActionBarAddLayerMenu = memo(() => {
<Menu>
<MenuButton
as={IconButton}
size="sm"
minW={8}
variant="link"
alignSelf="stretch"
tooltip={t('controlLayers.addLayer')}
@@ -40,7 +40,7 @@ export const EntityListGlobalActionBarAddLayerMenu = memo(() => {
/>
<MenuList>
<MenuGroup title={t('controlLayers.global')}>
<MenuItem icon={<PiPlusBold />} onClick={addGlobalReferenceImage} isDisabled={isFLUX}>
<MenuItem icon={<PiPlusBold />} onClick={addGlobalReferenceImage}>
{t('controlLayers.globalReferenceImage')}
</MenuItem>
</MenuGroup>

View File

@@ -4,6 +4,7 @@ import { EntityListSelectedEntityActionBarDuplicateButton } from 'features/contr
import { EntityListSelectedEntityActionBarFill } from 'features/controlLayers/components/CanvasEntityList/EntityListSelectedEntityActionBarFill';
import { EntityListSelectedEntityActionBarFilterButton } from 'features/controlLayers/components/CanvasEntityList/EntityListSelectedEntityActionBarFilterButton';
import { EntityListSelectedEntityActionBarOpacity } from 'features/controlLayers/components/CanvasEntityList/EntityListSelectedEntityActionBarOpacity';
import { EntityListSelectedEntityActionBarSelectObjectButton } from 'features/controlLayers/components/CanvasEntityList/EntityListSelectedEntityActionBarSelectObjectButton';
import { EntityListSelectedEntityActionBarTransformButton } from 'features/controlLayers/components/CanvasEntityList/EntityListSelectedEntityActionBarTransformButton';
import { memo } from 'react';
@@ -16,6 +17,7 @@ export const EntityListSelectedEntityActionBar = memo(() => {
<Spacer />
<EntityListSelectedEntityActionBarFill />
<Flex h="full">
<EntityListSelectedEntityActionBarSelectObjectButton />
<EntityListSelectedEntityActionBarFilterButton />
<EntityListSelectedEntityActionBarTransformButton />
<EntityListSelectedEntityActionBarSaveToAssetsButton />

Some files were not shown because too many files have changed in this diff.